Neuron_Model_Cache.txt
Neuron Model Cache

The Neuron Model Cache is a remote cache for compiled Neuron models in the NEFF format. It is integrated into the NeuronTrainer and NeuronModelForCausalLM classes to enable loading pretrained models from the cache instead of compiling them locally.

Note: it is not available for models exported using any other NeuronModelXX classes, which use a different export mechanism.

The Neuron Model Cache is hosted on the Hugging Face Hub and includes compiled files for all popular and supported optimum-neuron pre-trained models.

Before training a Transformers or Diffusion model or loading a NeuronModelForCausalLM on Neuron platforms, it needs to be exported to Neuron format with torch-neuronx. When exporting a model, torch-neuronx will:

- convert it to a set of XLA subgraphs,
- compile each subgraph with the neuronx compiler into a Neuron Executable File Format (NEFF) binary file.

The first step is relatively fast, but compilation takes a lot of time. To avoid recompiling all NEFF files every time a model is loaded on a NeuronX host, torch-neuronx stores NEFF files in a local directory, usually /var/tmp/neuron-compile-cache. However, this local cache is not shared between platforms, which means that every time you train or export a model on a new host, you need to recompile it. We created the Neuron Model Cache to solve this limitation by providing a public repository of precompiled model graphs.

Note: we also support the creation of private, secured, remote model caches.
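To make the export flow above concrete, here is a minimal sketch of exporting a generation model with NeuronModelForCausalLM. The model id and shapes mirror the cached entry shown later on this page, but treat the exact keyword arguments as an assumption to verify against your optimum-neuron version.

# Hedged sketch: export a model to Neuron format. If matching NEFF files are
# already in the Neuron Model Cache, they are fetched from the Hub instead of
# being recompiled locally. Shapes and compiler args are illustrative assumptions.
from optimum.neuron import NeuronModelForCausalLM

model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    export=True,            # convert to XLA subgraphs and compile (or fetch cached NEFFs)
    batch_size=1,
    sequence_length=2048,
    num_cores=24,
    auto_cast_type="fp16",
)
model.save_pretrained("llama-2-7b-chat-neuron")  # keep the compiled artifacts locally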
How to use the Neuron model cache

The public model cache will be used when you use the NeuronTrainer or NeuronModelForCausalLM classes. There are no additional changes needed. When exporting a model to Neuron format, optimum-neuron will simply look for cached NEFF files in the hub repository during the compilation of the model subgraphs. If the NEFF files are cached, they will be fetched from the hub and directly loaded instead of being recompiled.

How caching works

The Optimum Neuron Cache is built on top of the NeuronX compiler cache. It is important to understand that the cache operates on NEFF binaries, and not on the model itself.

As explained previously, each model exported to Neuron using the NeuronTrainer or NeuronModelForCausalLM is composed of XLA subgraphs. Each subgraph is unique and results from the combination of:

- the transformers or transformers_neuronx python modeling code,
- the transformers model config,
- the input_shapes selected during the export,
- the precision of the model: full-precision, fp16 or bf16.

When compiling a subgraph to a NEFF file, other parameters influence the result:

- the version of the Neuron X compiler,
- the number of Neuron cores used,
- the compilation parameters (such as the optimization level).

All these parameters are combined together to create a unique hash that identifies a NEFF file. This has two very important consequences:

- it is only when actually exporting a model that the associated NEFF files can be identified,
- even a small change in the model configuration will lead to a different set of NEFF files.

It is therefore very difficult to know in advance if the NEFFs associated with a specific model configuration are cached.

Neuron model cache lookup (inferentia only)

The Neuron cache lookup is a feature allowing users to look for compatible cached model configurations before exporting a model for inference. It is based on a dedicated registry composed of stored cached configurations.

Cached model configurations are stored as entries under a specific subfolder in the Neuron Model Cache:

neuronxcc-2.12.54.0+f631c2365
├── 0_REGISTRY
│   └── 0.0.18
│       └── llama
│           └── meta-llama
│               └── Llama-2-7b-chat-hf
│                   └── 54c1f6689cd88f246fce.json

Each entry corresponds to the combination of a model configuration and its export parameters: this is as close as we can get to uniquely identifying the exported model.

You can use the optimum-cli to look up compatible cached entries by passing it a hub model_id or the path to a file containing a model config.json:

$ optimum-cli neuron cache lookup meta-llama/Llama-2-7b-chat-hf

*** 1 entrie(s) found in cache for meta-llama/Llama-2-7b-chat-hf ***

task: text-generation
batch_size: 1
num_cores: 24
auto_cast_type: fp16
sequence_length: 2048
compiler_type: neuronx-cc
compiler_version: 2.12.54.0+f631c2365
checkpoint_id: meta-llama/Llama-2-7b-chat-hf
checkpoint_revision: c1b0db933684edbfe29a06fa47eb19cc48025e93

Note that even if compatible cached entries exist, this does not always guarantee that the model will not be recompiled during export if you modified the compilation parameters or updated the neuronx packages.

Advanced usage (trainium only)

How to use a private Neuron model cache (trainium only)

The repository for the public cache is aws-neuron/optimum-neuron-cache. This repository includes precompiled files for commonly used models, and it is publicly available and free to use for everyone.
But there are two limitations:

- you will not be able to push your own compiled files to this repo,
- it is public, and you might want to use a private repo for private models.

To alleviate that, you can create your own private cache repository using the optimum-cli, or set the environment variable CUSTOM_CACHE_REPO.

Using the Optimum CLI

The Optimum CLI offers two subcommands for cache creation and setting:

- create: create a new cache repository that you can use as a private Neuron Model cache.
- set: set the name of the Neuron cache repository locally; the repository needs to exist and will be used by default by optimum-neuron.

Create a new Neuron cache repository:

optimum-cli neuron cache create --help

usage: optimum-cli neuron cache create [-h] [-n NAME] [--public]

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  The name of the repo that will be used as a remote cache for the compilation files.
  --public              If set, the created repo will be public. By default the cache repo is private.

The -n / --name option allows you to specify a name for the Neuron cache repo; if not set, the default name will be used. The --public flag allows you to make your Neuron cache public, as it is created as a private repository by default.

Example:

optimum-cli neuron cache create

Neuron cache created on the Hugging Face Hub: michaelbenayoun/optimum-neuron-cache [private].
Neuron cache name set locally to michaelbenayoun/optimum-neuron-cache in /home/michael/.cache/huggingface/optimum_neuron_custom_cache.

Set a different Trainium cache repository:

usage: optimum-cli neuron cache set [-h] name

positional arguments:
  name        The name of the repo to use as remote cache.

optional arguments:
  -h, --help  show this help message and exit

Example:

optimum-cli neuron cache set michaelbenayoun/optimum-neuron-cache

Neuron cache name set locally to michaelbenayoun/optimum-neuron-cache in /home/michael/.cache/huggingface/optimum_neuron_custom_cache

The optimum-cli neuron cache set command is useful when working on a new instance to use your own cache.

Using the environment variable CUSTOM_CACHE_REPO

Using the CLI is not always feasible, and not very practical for small testing. In this case, you can simply set the environment variable CUSTOM_CACHE_REPO. For example, if your cache repo is called michaelbenayoun/my_custom_cache_repo, you just need to do:

CUSTOM_CACHE_REPO="michaelbenayoun/my_custom_cache_repo" torchrun ...

or:

export CUSTOM_CACHE_REPO="michaelbenayoun/my_custom_cache_repo"
torchrun ...

You have to be logged into the Hugging Face Hub to be able to push and pull files from your private cache repository.

Cache system flow

At the beginning of each training step, the NeuronTrainer computes a NeuronHash and checks the cache repo(s) (official and custom) on the Hugging Face Hub to see if there are compiled files associated with this hash. If that is the case, the files are downloaded directly to the local cache directory and no compilation is needed. Otherwise, compilation is performed.

Just as for downloading compiled files, the NeuronTrainer will keep track of the newly created compilation files at each training step, and upload them to the Hugging Face Hub at save time or when training ends. This assumes that you have write access to the cache repo, otherwise nothing will be pushed.
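As a hedged illustration of this flow, the sketch below shows the kind of NeuronTrainer training loop that triggers the cache lookup and upload described above; the model, the toy dataset and the arguments are placeholders, and only the import swap is taken from the optimum-neuron tutorials.

# Hedged sketch: during training, the NeuronTrainer computes a NeuronHash,
# downloads matching compiled files from the cache repo(s) if they exist,
# and pushes newly compiled files if you have write access.
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.neuron import NeuronTrainer as Trainer
from optimum.neuron import NeuronTrainingArguments as TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tiny toy dataset, padded to a fixed length (Neuron expects static input shapes).
dataset = Dataset.from_dict({"text": ["great movie", "terrible movie"], "labels": [1, 0]})
dataset = dataset.map(lambda x: tokenizer(x["text"], padding="max_length", truncation=True, max_length=128))

training_args = TrainingArguments(output_dir="out", per_device_train_batch_size=2, num_train_epochs=1, bf16=True)
trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()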
Optimum CLI

The Optimum CLI can be used to perform various cache-related tasks, as described by the optimum-cli neuron cache command usage message:

usage: optimum-cli neuron cache [-h] {create,set,add,list,synchronize,lookup} ...

positional arguments:
  {create,set,add,list,synchronize,lookup}
    create              Create a model repo on the Hugging Face Hub to store Neuron X compilation files.
    set                 Set the name of the Neuron cache repo to use locally (trainium only).
    add                 Add a model to the cache of your choice (trainium only).
    list                List models in a cache repo (trainium only).
    synchronize         Synchronize the local compiler cache with the hub cache (inferentia only).
    lookup              Lookup the neuronx compiler hub cache for the specified model id (inferentia only).

optional arguments:
  -h, --help            show this help message and exit

Add a model to the cache (trainium only)

It is possible to add a model's compilation files to a cache repo via the optimum-cli neuron cache add command:

usage: optimum-cli neuron cache add [-h] -m MODEL --task TASK --train_batch_size TRAIN_BATCH_SIZE
                                    [--eval_batch_size EVAL_BATCH_SIZE] [--sequence_length SEQUENCE_LENGTH]
                                    [--encoder_sequence_length ENCODER_SEQUENCE_LENGTH]
                                    [--decoder_sequence_length DECODER_SEQUENCE_LENGTH]
                                    [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
                                    --precision {fp,bf16}
                                    --num_cores {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32}
                                    [--max_steps MAX_STEPS]

When running this command, a small training session will be run and the resulting compilation files will be pushed.

Make sure that the Neuron cache repo to use is set up locally; this can be done by running the `optimum-cli neuron cache set` command. You also need to make sure that you are logged in to the Hugging Face Hub and that you have write access to the specified cache repo; this can be done via the `huggingface-cli login` command. If at least one of those requirements is not met, the command will fail.

Example:

optimum-cli neuron cache add \
  --model prajjwal1/bert-tiny \
  --task text-classification \
  --train_batch_size 16 \
  --eval_batch_size 16 \
  --sequence_length 128 \
  --gradient_accumulation_steps 32 \
  --num_cores 32 \
  --precision bf16

This will push compilation files for the prajjwal1/bert-tiny model to the Neuron cache repo that was set up, for the specified parameters.

List a cache repo

It can also be convenient to query the cache repo to know which compilation files are available. This can be done via the optimum-cli neuron cache list command:

usage: optimum-cli neuron cache list [-h] [-m MODEL] [-v VERSION] [name]

positional arguments:
  name                  The name of the repo to list. Will use the locally saved cache repo if left unspecified.

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        The model name or path of the model to consider. If left unspecified, will list all available models.
  -v VERSION, --version VERSION
                        The version of the Neuron X Compiler to consider. Will list all available versions if left unspecified.

As you can see, it is possible to:

- list all the models available, for all compiler versions,
- list all the models available for a given compiler version by specifying the -v / --version argument,
- list all compilation files for a given model (there can be many, for different input shapes and so on) by specifying the -m / --model argument.
Example:

optimum-cli neuron cache list aws-neuron/optimum-neuron-cache
Fine_tune_BERT_for_Text_Classification_on_AWS_Trai.txt
Fine-tune BERT for Text Classification on AWS Trainium

There is a notebook version of this tutorial here.

This tutorial will help you get started with AWS Trainium and Hugging Face Transformers. It covers how to set up a Trainium instance on AWS and how to load and fine-tune a Transformers model for text classification.

You will learn how to:

1. Set up the AWS environment
2. Load and process the dataset
3. Fine-tune BERT using Hugging Face Transformers and Optimum Neuron

Before we can start, make sure you have a Hugging Face account to save artifacts and experiments.

Quick intro: AWS Trainium

AWS Trainium (Trn1) is a purpose-built EC2 instance family for deep learning (DL) training workloads. Trainium is the successor of AWS Inferentia, focused on high-performance training workloads and claiming up to 50% cost-to-train savings over comparable GPU-based instances. Trainium has been optimized for training natural language processing, computer vision, and recommender models. The accelerator supports a wide range of data types, including FP32, TF32, BF16, FP16, UINT8, and configurable FP8. The biggest Trainium instance, the trn1.32xlarge, comes with over 500GB of memory, making it easy to fine-tune ~10B parameter models on a single instance. Below you will find an overview of the available instance types.
More details can be found here:

instance size                   accelerators   accelerator memory (GB)   vCPU   CPU memory (GB)   price per hour
trn1.2xlarge                    1              32                        8      32                $1.34
trn1.32xlarge                   16             512                       128    512               $21.50
trn1n.32xlarge (2x bandwidth)   16             512                       128    512               $24.78

Now that we know what Trainium offers, let's get started. 🚀

Note: This tutorial was created on a trn1.2xlarge AWS EC2 Instance.

1. Setup AWS environment

In this example, we will use the trn1.2xlarge instance on AWS with 1 Accelerator, including two Neuron Cores, and the Hugging Face Neuron Deep Learning AMI. This blog post doesn't cover how to create the instance in detail. You can check out my previous blog about "Setting up AWS Trainium for Hugging Face Transformers", which includes a step-by-step guide on setting up the environment.

Once the instance is up and running, we can ssh into it. But instead of developing inside a terminal we want to use a Jupyter environment, which we can use for preparing our dataset and launching the training. For this, we need to add a port for forwarding in the ssh command, which will tunnel our localhost traffic to the Trainium instance.

PUBLIC_DNS=""  # IP address, e.g. ec2-3-80-....
KEY_PATH=""    # local path to key, e.g. ssh/trn.pem

ssh -L 8080:localhost:8080 -i ${KEY_PATH} ubuntu@${PUBLIC_DNS}

We can now start our jupyter server.

python -m notebook --allow-root --port=8080

You should see a familiar jupyter output with a URL to the notebook.

http://localhost:8080/?token=8c1739aff1755bd7958c4cfccc8d08cb5da5234f61f129a9

We can click on it, and a jupyter environment opens in our local browser.

We are going to use the Jupyter environment only for preparing the dataset and then torchrun for launching our training script on both neuron cores for distributed training. Let's create a new notebook and get started.

2. Load and process the dataset

We are training a Text Classification model on the emotion dataset to keep the example straightforward. The emotion dataset consists of English Twitter messages labeled with six basic emotions: anger, fear, joy, love, sadness, and surprise. We will use the load_dataset() method from the 🤗 Datasets library to load it.

from datasets import load_dataset

# Dataset id from huggingface.co/dataset
dataset_id = "philschmid/emotion"

# Load raw dataset
raw_dataset = load_dataset(dataset_id)

print(f"Train dataset size: {len(raw_dataset['train'])}")
print(f"Test dataset size: {len(raw_dataset['test'])}")

# Train dataset size: 16000
# Test dataset size: 2000

Let's check out an example of the dataset.

from random import randrange

random_id = randrange(len(raw_dataset['train']))
raw_dataset['train'][random_id]
# {'text': 'i feel isolated and alone in my trade', 'label': 0}

We must convert our "Natural Language" to token IDs to train our model. This is done by a Tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary). If you want to learn more about this, check out chapter 6 of the Hugging Face Course.

Our Neuron Accelerator expects a fixed shape of inputs. We need to truncate or pad all samples to the same length.
from transformers import AutoTokenizer
import os

# Model id to load the tokenizer
model_id = "bert-base-uncased"
save_dataset_path = "lm_dataset"

# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize helper function
def tokenize(batch):
    return tokenizer(batch['text'], padding='max_length', truncation=True, return_tensors="pt")

# Tokenize dataset
raw_dataset = raw_dataset.rename_column("label", "labels")  # to match Trainer
tokenized_dataset = raw_dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized_dataset = tokenized_dataset.with_format("torch")

# save dataset to disk
tokenized_dataset["train"].save_to_disk(os.path.join(save_dataset_path, "train"))
tokenized_dataset["test"].save_to_disk(os.path.join(save_dataset_path, "eval"))

3. Fine-tune BERT using Hugging Face Transformers

Normally you would use the Trainer and TrainingArguments to fine-tune PyTorch-based transformer models. But together with AWS, we have developed a NeuronTrainer to improve performance, robustness, and safety when training on Trainium or Inferentia2 instances. The NeuronTrainer also comes with a model cache, which allows us to use precompiled models and configurations from the Hugging Face Hub to skip the compilation step, which would otherwise be needed at the beginning of training. This can reduce the training time by ~3x.

The NeuronTrainer is part of the optimum-neuron library and can be used as a 1-to-1 replacement for the Trainer. You only have to adjust the import in your training script.

- from transformers import Trainer, TrainingArguments
+ from optimum.neuron import NeuronTrainer as Trainer
+ from optimum.neuron import NeuronTrainingArguments as TrainingArguments

We prepared a simple train.py training script based on the "Getting started with Pytorch 2.0 and Hugging Face Transformers" blog post with the NeuronTrainer. Below is an excerpt:

from transformers import TrainingArguments
from optimum.neuron import NeuronTrainer as Trainer

def parse_args():
    ...

def training_function(args):
    # load dataset from disk and tokenizer
    train_dataset = load_from_disk(os.path.join(args.dataset_path, "train"))
    ...

    # Download the model from huggingface.co/models
    model = AutoModelForSequenceClassification.from_pretrained(
        args.model_id, num_labels=num_labels, label2id=label2id, id2label=id2label
    )

    training_args = TrainingArguments(
        ...
    )

    # Create Trainer instance
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        compute_metrics=compute_metrics,
    )

    # Start training
    trainer.train()

We can load the training script into our environment using the wget command or manually copy it into the notebook from here.

!wget https://raw.githubusercontent.com/huggingface/optimum-neuron/main/notebooks/text-classification/scripts/train.py

We will use torchrun to launch our training script on both neuron cores for distributed training. torchrun is a tool that automatically distributes a PyTorch model across multiple accelerators. We can pass the number of accelerators as the nproc_per_node argument alongside our hyperparameters.

We'll use the following command to launch training:

!torchrun --nproc_per_node=2 train.py \
  --model_id bert-base-uncased \
  --dataset_path lm_dataset \
  --lr 5e-5 \
  --per_device_train_batch_size 16 \
  --bf16 True \
  --epochs 3

Note: If you see bad accuracy, you might want to deactivate bf16 for now.
After 9 minutes the training was completed and achieved an excellent f1 score of 0.914.

***** train metrics *****
  epoch                    = 3.0
  train_runtime            = 0:08:30
  train_samples            = 16000
  train_samples_per_second = 96.337

***** eval metrics *****
  eval_f1      = 0.914
  eval_runtime = 0:00:08

Last but not least, terminate the EC2 instance to avoid unnecessary charges. Looking at the price-performance, our training only cost 20ct ($1.34/h * 0.15h = $0.20).
Run_training_on_Amazon_SageMaker.txt
Run training on Amazon SageMaker

This guide will show you how to train a 🤗 Transformers model with the HuggingFace SageMaker Python SDK. Learn how to:

- Install and set up your training environment
- Prepare a training script
- Create a Hugging Face Estimator
- Run training with the fit method
- Access your trained model
- Perform distributed training
- Create a spot instance
- Load a training script from a GitHub repository
- Collect training metrics

Installation and setup

Before you can train a 🤗 Transformers model with SageMaker, you need to sign up for an AWS account. If you don't have an AWS account yet, learn more here.

Once you have an AWS account, get started using one of the following:

- SageMaker Studio
- SageMaker notebook instance
- Local environment

To start training locally, you need to set up an appropriate IAM role.

Upgrade to the latest sagemaker version:

pip install sagemaker --upgrade

SageMaker environment

Set up your SageMaker environment as shown below:

import sagemaker

sess = sagemaker.Session()
role = sagemaker.get_execution_role()

Note: The execution role is only available when running a notebook within SageMaker. If you run get_execution_role in a notebook not on SageMaker, expect a region error.

Local environment

Set up your local environment as shown below:

import sagemaker
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='role-name-of-your-iam-role-with-right-permissions')['Role']['Arn']
sess = sagemaker.Session()

Prepare a 🤗 Transformers fine-tuning script

Our training script is very similar to a training script you might run outside of SageMaker. However, you can access useful properties about the training environment through various environment variables (see here for a complete list), such as:

- SM_MODEL_DIR: A string representing the path to which the training job writes the model artifacts. After training, artifacts in this directory are uploaded to S3 for model hosting. SM_MODEL_DIR is always set to /opt/ml/model.
- SM_NUM_GPUS: An integer representing the number of GPUs available to the host.
- SM_CHANNEL_XXXX: A string representing the path to the directory that contains the input data for the specified channel. For example, when you specify train and test in the Hugging Face Estimator fit method, the environment variables are set to SM_CHANNEL_TRAIN and SM_CHANNEL_TEST.
The hyperparameters defined in the Hugging Face Estimator are passed as named arguments and processed by ArgumentParser().

import transformers
import datasets
import argparse
import os

if __name__ == "__main__":

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script
    parser.add_argument("--epochs", type=int, default=3)
    parser.add_argument("--per_device_train_batch_size", type=int, default=32)
    parser.add_argument("--model_name_or_path", type=str)

    # data, model, and output directories
    parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])
    parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"])
    parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"])

Note that SageMaker doesn't support argparse actions. For example, if you want to use a boolean hyperparameter, specify type as bool in your script and provide an explicit True or False value.

Look at the train.py file for a complete example of a 🤗 Transformers training script.

Training Output Management

If output_dir in the TrainingArguments is set to '/opt/ml/model', the Trainer saves all training artifacts there, including logs, checkpoints, and models. Amazon SageMaker archives the whole '/opt/ml/model' directory as model.tar.gz and uploads it at the end of the training job to Amazon S3. Depending on your hyperparameters and TrainingArguments this could lead to a large artifact (> 5GB), which can slow down deployment for Amazon SageMaker Inference.

You can control how checkpoints, logs, and artifacts are saved by customizing the TrainingArguments. For example, by providing save_total_limit as a TrainingArgument you can limit the total number of checkpoints kept: older checkpoints in output_dir are deleted when new ones are saved and the maximum limit is reached.

In addition to the options already mentioned above, there is another option to save the training artifacts during the training session. Amazon SageMaker supports Checkpointing, which allows you to continuously save your artifacts during training to Amazon S3 rather than at the end of your training. To enable Checkpointing you need to provide the checkpoint_s3_uri parameter pointing to an Amazon S3 location in the HuggingFace estimator and set output_dir to /opt/ml/checkpoints.

Note: If you set output_dir to /opt/ml/checkpoints, make sure to call trainer.save_model("/opt/ml/model") or model.save_pretrained("/opt/ml/model") / tokenizer.save_pretrained("/opt/ml/model") at the end of your training to be able to deploy your model seamlessly to Amazon SageMaker for Inference.

Create a Hugging Face Estimator

Run 🤗 Transformers training scripts on SageMaker by creating a Hugging Face Estimator. The Estimator handles end-to-end SageMaker training. There are several parameters you should define in the Estimator:

- entry_point specifies which fine-tuning script to use.
- instance_type specifies an Amazon instance to launch. Refer here for a complete list of instance types.
- hyperparameters specifies training hyperparameters. View additional available hyperparameters in the train.py file.
The following code sample shows how to train with a custom script train.py with three hyperparameters (epochs, per_device_train_batch_size, and model_name_or_path):

from sagemaker.huggingface import HuggingFace

# hyperparameters which are passed to the training job
hyperparameters = {
    'epochs': 1,
    'per_device_train_batch_size': 32,
    'model_name_or_path': 'distilbert-base-uncased'
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    hyperparameters=hyperparameters
)

If you are running a TrainingJob locally, define instance_type='local' or instance_type='local_gpu' for GPU usage. Note that this will not work with SageMaker Studio.

Execute training

Start your TrainingJob by calling fit on a Hugging Face Estimator. Specify your input training data in fit. The input training data can be a:

- S3 URI such as s3://my-bucket/my-training-data.
- FileSystemInput for Amazon Elastic File System or FSx for Lustre. See here for more details about using these file systems as input.

Call fit to begin training:

huggingface_estimator.fit(
    {
        'train': 's3://sagemaker-us-east-1-558105141721/samples/datasets/imdb/train',
        'test': 's3://sagemaker-us-east-1-558105141721/samples/datasets/imdb/test'
    }
)

SageMaker starts and manages all the required EC2 instances and initiates the TrainingJob by running:

/opt/conda/bin/python train.py --epochs 1 --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32

Access trained model

Once training is complete, you can access your model through the AWS console or download it directly from S3.

from sagemaker.s3 import S3Downloader

S3Downloader.download(
    s3_uri=huggingface_estimator.model_data,  # S3 URI where the trained model is located
    local_path='.',                           # local path where *.tar.gz is saved
    sagemaker_session=sess                    # SageMaker session used for training the model
)

Distributed training

SageMaker provides two strategies for distributed training: data parallelism and model parallelism. Data parallelism splits a training set across several GPUs, while model parallelism splits a model across several GPUs.

Data parallelism

The Hugging Face Trainer supports SageMaker's data parallelism library. If your training script uses the Trainer API, you only need to define the distribution parameter in the Hugging Face Estimator:

# configuration for running training on smdistributed data parallel
distribution = {'smdistributed': {'dataparallel': {'enabled': True}}}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3dn.24xlarge',
    instance_count=2,
    role=role,
    transformers_version='4.26.0',
    pytorch_version='1.13.1',
    py_version='py39',
    hyperparameters=hyperparameters,
    distribution=distribution
)

📓 Open the sagemaker-notebook.ipynb notebook for an example of how to run the data parallelism library with TensorFlow.

Model parallelism

The Hugging Face Trainer also supports SageMaker's model parallelism library.
If your training script uses the Trainer API, you only need to define the distribution parameter in the Hugging Face Estimator (see here for more detailed information about using model parallelism):

# configuration for running training on smdistributed model parallel
mpi_options = {
    "enabled": True,
    "processes_per_host": 8
}

smp_options = {
    "enabled": True,
    "parameters": {
        "microbatches": 4,
        "placement_strategy": "spread",
        "pipeline": "interleaved",
        "optimize": "speed",
        "partitions": 4,
        "ddp": True,
    }
}

distribution = {
    "smdistributed": {"modelparallel": smp_options},
    "mpi": mpi_options
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3dn.24xlarge',
    instance_count=2,
    role=role,
    transformers_version='4.26.0',
    pytorch_version='1.13.1',
    py_version='py39',
    hyperparameters=hyperparameters,
    distribution=distribution
)

📓 Open the sagemaker-notebook.ipynb notebook for an example of how to run the model parallelism library.

Spot instances

The Hugging Face extension for the SageMaker Python SDK means we can benefit from fully-managed EC2 spot instances. This can help you save up to 90% of training costs!

Note: Unless your training job completes quickly, we recommend you use checkpointing with managed spot training. In this case, you need to define checkpoint_s3_uri.

Set use_spot_instances=True and define your max_wait and max_run time in the Estimator to use spot instances:

# hyperparameters which are passed to the training job
hyperparameters = {
    'epochs': 1,
    'train_batch_size': 32,
    'model_name': 'distilbert-base-uncased',
    'output_dir': '/opt/ml/checkpoints'
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints',
    use_spot_instances=True,
    # max_wait should be equal to or greater than max_run in seconds
    max_wait=3600,
    max_run=1000,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    hyperparameters=hyperparameters
)

# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%

📓 Open the sagemaker-notebook.ipynb notebook for an example of how to use spot instances.

Git repository

The Hugging Face Estimator can load a training script stored in a GitHub repository. Provide the relative path to the training script in entry_point and the relative path to the directory in source_dir.

If you are using git_config to run the 🤗 Transformers example scripts, you need to configure the 'branch' in git_config to match the transformers_version (e.g. if you use transformers_version='4.4.2' you have to use 'branch':'v4.4.2').

Tip: Save your model to S3 by setting output_dir=/opt/ml/model in the hyperparameters of your training script.
# configure git settings
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.4.2'}  # v4.4.2 refers to the transformers_version you use in the estimator

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='run_glue.py',
    source_dir='./examples/pytorch/text-classification',
    git_config=git_config,
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    hyperparameters=hyperparameters
)

SageMaker metrics

SageMaker metrics automatically parses training job logs for metrics and sends them to CloudWatch. If you want SageMaker to parse the logs, you must specify the metric's name and a regular expression for SageMaker to use to find the metric.

# define metrics definitions
metric_definitions = [
    {"Name": "train_runtime", "Regex": "train_runtime.*=\D*(.*?)$"},
    {"Name": "eval_accuracy", "Regex": "eval_accuracy.*=\D*(.*?)$"},
    {"Name": "eval_loss", "Regex": "eval_loss.*=\D*(.*?)$"},
]

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters
)

📓 Open the notebook for an example of how to capture metrics in SageMaker.
Object_Detection.txt
Object detection

Object Detection models allow users to identify objects of certain defined classes. These models receive an image as input and output the image with bounding boxes and labels on the detected objects.

For more details about the object-detection task, check out its dedicated page! You will find examples and related materials.

Recommended models

facebook/detr-resnet-50: Solid object detection model pre-trained on the COCO 2017 dataset.

Explore all available models and find the one that suits you best here.

Using the API

Python example:

import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/detr-resnet-50"
headers = {"Authorization": "Bearer hf_***"}

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()

output = query("cats.jpg")

To use the Python client, see huggingface_hub's package reference.

API specification

Request

Payload:

inputs* (string): The input image data as a base64-encoded string. If no parameters are provided, you can also provide the image data as a raw bytes payload.
parameters (object):
  threshold (number): The probability necessary to make a prediction.

Some options can be configured by passing headers to the Inference API. Here are the available headers:

authorization (string): Authentication header in the form 'Bearer: hf_****' where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
x-use-cache (boolean, default to true): There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here.
x-wait-for-model (boolean, default to false): If the model is not ready, wait for it instead of receiving 503.
It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here.

For more information about Inference API headers, check out the parameters guide.

Response

Body (array): the output is an array of objects, one per detected object.

label (string): The predicted label for the bounding box.
score (number): The associated score / probability.
box (object):
  xmin (integer): The x-coordinate of the top-left corner of the bounding box.
  xmax (integer): The x-coordinate of the bottom-right corner of the bounding box.
  ymin (integer): The y-coordinate of the top-left corner of the bounding box.
  ymax (integer): The y-coordinate of the bottom-right corner of the bounding box.
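As an illustration of the response schema above, here is a hedged sketch that reuses the query helper from the Python example earlier on this page and draws the returned boxes with Pillow; the file names are placeholders.

# Hedged sketch: visualize the detections returned by the API.
# Assumes `query` is defined as in the Python example above and that
# "cats.jpg" exists locally; Pillow is used here purely for illustration.
from PIL import Image, ImageDraw

detections = query("cats.jpg")  # list of {"label", "score", "box": {...}}

image = Image.open("cats.jpg")
draw = ImageDraw.Draw(image)
for det in detections:
    box = det["box"]
    # Box corners follow the schema above: (xmin, ymin) top-left, (xmax, ymax) bottom-right.
    draw.rectangle((box["xmin"], box["ymin"], box["xmax"], box["ymax"]), outline="red", width=3)
    draw.text((box["xmin"], box["ymin"]), f'{det["label"]} ({det["score"]:.2f})', fill="red")

image.save("cats_with_boxes.jpg")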
The_Command_Line.txt
The Command Line

Below is a list of all the available 🤗 Accelerate commands with their parameters.

accelerate config

Command: accelerate config or accelerate-config

Launches a series of prompts to create and save a default_config.yml configuration file for your training system. Should always be run first on your machine.

Usage:

accelerate config [arguments]

Optional Arguments:

--config_file CONFIG_FILE (str) — The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment variable HF_HOME suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory (~/.cache or the content of XDG_CACHE_HOME) suffixed with huggingface.
-h, --help (bool) — Show a help message and exit

accelerate config default

Command: accelerate config default or accelerate-config default

Create a default config file for Accelerate with only a few flags set.
Usage:

accelerate config default [arguments]

Optional Arguments:

--config_file CONFIG_FILE (str) — The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment variable HF_HOME suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory (~/.cache or the content of XDG_CACHE_HOME) suffixed with huggingface.
-h, --help (bool) — Show a help message and exit
--mixed_precision {no,fp16,bf16} (str) — Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.

accelerate config update

Command: accelerate config update or accelerate-config update

Update an existing config file with the latest defaults while maintaining the old configuration.

Usage:

accelerate config update [arguments]

Optional Arguments:

--config_file CONFIG_FILE (str) — The path to the config file to update. Will default to a file named default_config.yaml in the cache location, which is the content of the environment variable HF_HOME suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory (~/.cache or the content of XDG_CACHE_HOME) suffixed with huggingface.
-h, --help (bool) — Show a help message and exit

accelerate env

Command: accelerate env or accelerate-env or python -m accelerate.commands.env

Lists the contents of the passed 🤗 Accelerate configuration file. Should always be used when opening an issue on the GitHub repository.

Usage:

accelerate env [arguments]

Optional Arguments:

--config_file CONFIG_FILE (str) — The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment variable HF_HOME suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory (~/.cache or the content of XDG_CACHE_HOME) suffixed with huggingface.
-h, --help (bool) — Show a help message and exit

accelerate launch

Command: accelerate launch or accelerate-launch or python -m accelerate.commands.launch

Launches a specified script on a distributed system with the right parameters.

Usage:

accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...

Positional Arguments:

{training_script} — The full path to the script to be launched in parallel
--{training_script-argument-1} — Arguments of the training script

Optional Arguments:

-h, --help (bool) — Show a help message and exit
--config_file CONFIG_FILE (str) — The config file to use for the default values in the launching script.
-m, --module (bool) — Change each process to interpret the launch script as a Python module, executing with the same behavior as 'python -m'.
--no_python (bool) — Skip prepending the training script with 'python' - just execute it directly. Useful when the script is not a Python script.
--debug (bool) — Whether to print out the torch.distributed stack trace when something fails.
-q, --quiet (bool) — Silence subprocess errors from the launch stack trace to only show the relevant tracebacks. (Only applicable to DeepSpeed and single-process configurations).

The rest of these arguments are configured through accelerate config and are read in from the specified --config_file (or the default configuration) for their values. They can also be passed in manually.
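For illustration, a typical invocation might look like the sketch below; train.py and its --learning_rate flag are placeholders for your own script and its arguments, while the accelerate flags used are ones documented on this page (see the hardware and resource arguments that follow).

accelerate launch --config_file default_config.yaml --num_processes 2 --mixed_precision bf16 train.py --learning_rate 5e-5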
Hardware Selection Arguments:

--cpu (bool) — Whether or not to force the training on the CPU.
--multi_gpu (bool) — Whether or not this should launch a distributed GPU training.
--tpu (bool) — Whether or not this should launch a TPU training.
--ipex (bool) — Whether or not this should launch an Intel PyTorch Extension (IPEX) training.

Resource Selection Arguments:

The following arguments are useful for fine-tuning how available hardware should be used:

--mixed_precision {no,fp16,bf16,fp8} (str) — Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later.
--num_processes NUM_PROCESSES (int) — The total number of processes to be launched in parallel.
--num_machines NUM_MACHINES (int) — The total number of machines used in this training.
--num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS (int) — The number of CPU threads per process. Can be tuned for optimal performance.
--enable_cpu_affinity (bool) — Whether or not CPU affinity and balancing should be enabled. Currently only supported on NVIDIA hardware.

Training Paradigm Arguments:

The following arguments are useful for selecting which training paradigm to use:

--use_deepspeed (bool) — Whether or not to use DeepSpeed for training.
--use_fsdp (bool) — Whether or not to use FullyShardedDataParallel for training.
--use_megatron_lm (bool) — Whether or not to use Megatron-LM for training.
--use_xpu (bool) — Whether to use the IPEX plugin to speed up training on XPU specifically.

Distributed GPU Arguments:

The following arguments are only useful when multi_gpu is passed or multi-GPU training is configured through accelerate config:

--gpu_ids (str) — What GPUs (by id) should be used for training on this machine as a comma-separated list.
--same_network (bool) — Whether all machines used for multinode training exist on the same local network.
--machine_rank (int) — The rank of the machine on which this script is launched.
--main_process_ip (str) — The IP address of the machine of rank 0.
--main_process_port (int) — The port to use to communicate with the machine of rank 0.
-t, --tee (str) — Tee std streams into a log file and also to console.
--log_dir (str) — Base directory to use for log files when using torchrun/torch.distributed.run as launcher. Use with --tee to redirect std streams into log files.
--role (str) — User-defined role for the workers.
--rdzv_backend (str) — The rendezvous method to use, such as 'static' (the default) or 'c10d'.
--rdzv_conf (str) — Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,...).
--max_restarts (int) — Maximum number of worker group restarts before failing.
--monitor_interval (int) — Interval, in seconds, to monitor the state of workers.

TPU Arguments:

The following arguments are only useful when tpu is passed or TPU training is configured through accelerate config:

--tpu_cluster (bool) — Whether to use a GCP TPU pod for training.
--tpu_use_sudo (bool) — Whether to use sudo when running the TPU training script in each pod.
--vm (str) — List of single Compute VM instance names. If not provided we assume usage of instance groups. For TPU pods.
--env (str) — List of environment variables to set on the Compute VM instances. For TPU pods.
--main_training_function (str) — The name of the main function to be executed in your script (only for TPU training).
--downcast_bf16 (bool) — Whether, when using bf16 precision on TPUs, both float and double tensors are cast to bfloat16, or if double tensors remain as float32.

DeepSpeed Arguments:

The following arguments are only useful when use_deepspeed is passed or DeepSpeed is configured through accelerate config:

--deepspeed_config_file (str) — DeepSpeed config file.
--zero_stage (int) — DeepSpeed's ZeRO optimization stage.
--offload_optimizer_device (str) — Decides where (none|cpu|nvme) to offload optimizer states.
--offload_param_device (str) — Decides where (none|cpu|nvme) to offload parameters.
--offload_optimizer_nvme_path (str) — Decides the NVMe path to offload optimizer states to.
--gradient_accumulation_steps (int) — Number of gradient_accumulation_steps used in your training script.
--gradient_clipping (float) — Gradient clipping value used in your training script.
--zero3_init_flag (str) — Decides whether (true|false) to enable deepspeed.zero.Init for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3.
--zero3_save_16bit_model (str) — Decides whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3.
--deepspeed_hostfile (str) — DeepSpeed hostfile for configuring multi-node compute resources.
--deepspeed_exclusion_filter (str) — DeepSpeed exclusion filter string when using a multi-node setup.
--deepspeed_inclusion_filter (str) — DeepSpeed inclusion filter string when using a multi-node setup.
--deepspeed_multinode_launcher (str) — DeepSpeed multi-node launcher to use.
--deepspeed_moe_layer_cls_names (str) — Comma-separated list of transformer MoE layer class names (case-sensitive) to wrap, e.g. MixtralSparseMoeBlock, Qwen2MoeSparseMoeBlock, JetMoEAttention, JetMoEBlock.

Fully Sharded Data Parallelism Arguments:

The following arguments are only useful when use_fsdp is passed or Fully Sharded Data Parallelism is configured through accelerate config:

--fsdp_offload_params (str) — Decides whether (true|false) to offload parameters and gradients to CPU.
--fsdp_min_num_params (int) — FSDP's minimum number of parameters for Default Auto Wrapping.
--fsdp_sharding_strategy (int) — FSDP's sharding strategy.
--fsdp_auto_wrap_policy (str) — FSDP's auto wrap policy.
--fsdp_transformer_layer_cls_to_wrap (str) — Transformer layer class name (case-sensitive) to wrap, e.g. BertLayer, GPTJBlock, T5Block ...
--fsdp_backward_prefetch_policy (str) — FSDP's backward prefetch policy.
--fsdp_state_dict_type (str) — FSDP's state dict type.
--fsdp_forward_prefetch (str) — FSDP forward prefetch.
--fsdp_use_orig_params (str) — If True, allows non-uniform requires_grad to be mixed in an FSDP unit.
--fsdp_cpu_ram_efficient_loading (str) — If true, only the first process loads the pretrained model checkpoint while all other processes have empty weights. When using this, --fsdp_sync_module_states needs to be True.
--fsdp_sync_module_states (str) — If true, each individually wrapped FSDP unit will broadcast module parameters from rank 0.
--fsdp_activation_checkpointing (bool) — Decides whether intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder.

Megatron-LM Arguments:

The following arguments are only useful when use_megatron_lm is passed or Megatron-LM is configured through accelerate config:

--megatron_lm_tp_degree — Megatron-LM's Tensor Parallelism (TP) degree.
--megatron_lm_pp_degree — Megatron-LM's Pipeline Parallelism (PP) degree.
--megatron_lm_num_micro_batches (“) — Megatron-LM’s number of micro batches when PP degree > 1. --megatron_lm_sequence_parallelism (“) — Decides Whether (true|false) to enable Sequence Parallelism when TP degree > 1. --megatron_lm_recompute_activations (“) — Decides Whether (true|false) to enable Selective Activation Recomputation. --megatron_lm_use_distributed_optimizer (“) — Decides Whether (true|false) to use distributed optimizer which shards optimizer state and gradients across Data Parallel (DP) ranks. --megatron_lm_gradient_clipping (“) — Megatron-LM’s gradient clipping value based on global L2 Norm (0 to disable). FP8 Arguments : --fp8_backend ( str ) — Choose a backend to train with FP8 ( te or msamp ) --fp8_use_autocast_during_eval ( bool ) — Whether to use FP8 autocast during eval mode (useful only when --fp8_backend=te is passed). Generally better metrics are found when this is not passed. --fp8_margin ( int ) — The margin to use for the gradient scaling (useful only when --fp8_backend=te is passed). --fp8_interval ( int ) — The interval to use for how often the scaling factor is recomputed (useful only when --fp8_backend=te is passed). --fp8_format ( str ) — The format to use for the FP8 recipe (useful only when --fp8_backend=te is passed). --fp8_amax_history_len ( int ) — The length of the history to use for the scaling factor computation (useful only when --fp8_backend=te is passed). --fp8_amax_compute_algo ( str ) — The algorithm to use for the scaling factor computation. (useful only when --fp8_backend=te is passed). --fp8_override_linear_precision ( Tuple[bool, bool, bool] ) — Whether or not to execute fprop , dgrad , and wgrad GEMMS in higher precision. --fp8_opt_level ( str ) — What level of 8-bit collective communication should be used with MS-AMP (useful only when --fp8_backend=msamp is passed) AWS SageMaker Arguments : The following arguments are only useful when training in SageMaker --aws_access_key_id AWS_ACCESS_KEY_ID ( str ) — The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job --aws_secret_access_key AWS_SECRET_ACCESS_KEY ( str ) — The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job accelerate estimate-memory Command : accelerate estimate-memory or accelerate-estimate-memory or python -m accelerate.commands.estimate Estimates the total vRAM a particular model hosted on the Hub needs to be loaded in with an estimate for training. Requires that huggingface_hub be installed. When performing inference, typically add ≤20% to the result as overall allocation as referenced here . We will have more extensive estimations in the future that will automatically be included in the calculation. Usage : Copied accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ... Required Arguments : MODEL_NAME ( str )— The model name on the Hugging Face Hub Optional Arguments : --library_name {timm,transformers} ( str ) — The library the model has an integration with, such as transformers , needed only if this information is not stored on the Hub --dtypes {float32,float16,int8,int4} ( [{float32,float16,int8,int4} ...] ) — The dtypes to use for the model, must be one (or many) of float32 , float16 , int8 , and int4 --trust_remote_code ( bool ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
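The back-of-the-envelope arithmetic behind these estimates is simple enough to reproduce by hand. The sketch below is not the estimate-memory implementation; it only applies the rule of thumb from the text (bytes per parameter for each dtype, plus roughly 20% overhead for inference) to an assumed parameter count.

# Rough vRAM estimate following the ~20% inference overhead rule of thumb above.
# The parameter count is an assumption for illustration, not read from the Hub.
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

def estimate_vram_gib(num_params: int, dtype: str, inference_overhead: float = 0.20) -> float:
    """Return an approximate vRAM requirement in GiB for loading and running inference."""
    weight_bytes = num_params * BYTES_PER_PARAM[dtype]
    return weight_bytes * (1 + inference_overhead) / 1024**3

if __name__ == "__main__":
    params = 7_000_000_000  # assume a 7B-parameter model
    for dtype in BYTES_PER_PARAM:
        print(f"{dtype:>8}: ~{estimate_vram_gib(params, dtype):.1f} GiB")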
accelerate tpu-config

accelerate tpu-config

Usage:

accelerate tpu-config [arguments]

Optional Arguments: -h, --help (bool) — Show a help message and exit

Config Arguments: Arguments that can be configured through accelerate config. --config_file (str) — Path to the config file to use for accelerate. --tpu_name (str) — The name of the TPU to use. If not specified, will use the TPU specified in the config file. --tpu_zone (str) — The zone of the TPU to use. If not specified, will use the zone specified in the config file.

TPU Arguments: Arguments for options run inside the TPU. --command_file (str) — The path to the file containing the commands to run on the pod on startup. --command (str) — A command to run on the pod. Can be passed multiple times. --install_accelerate (bool) — Whether to install accelerate on the pod. Defaults to False. --accelerate_version (str) — The version of accelerate to install on the pod. If not specified, will use the latest pypi version. Specify 'dev' to install from GitHub. --debug (bool) — If set, will print the command that would be run instead of running it.

accelerate test

accelerate test or accelerate-test

Runs accelerate/test_utils/test_script.py to verify that 🤗 Accelerate has been properly configured on your system and runs.

Usage:

accelerate test [arguments]

Optional Arguments: --config_file CONFIG_FILE (str) — The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment HF_HOME suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory (~/.cache or the content of XDG_CACHE_HOME) suffixed with huggingface. -h, --help (bool) — Show a help message and exit
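As a concrete illustration of the default config-file lookup described above, the following sketch mirrors that resolution order in plain Python. It is an illustration of the documented behaviour, not Accelerate's internal code, and the exact subfolder nesting is an assumption.

import os
from pathlib import Path

def default_accelerate_config_path() -> Path:
    """Resolve the default default_config.yaml location as described above (assumed layout)."""
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        cache_dir = Path(hf_home) / "accelerate"
    else:
        xdg_cache = os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache"))
        cache_dir = Path(xdg_cache) / "huggingface" / "accelerate"
    return cache_dir / "default_config.yaml"

print(default_accelerate_config_path())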
Perplexity_of_fixed-length_models.txt
Perplexity of fixed-length models
Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note that the metric applies specifically to classical language models (sometimes called autoregressive or causal language models) and is not well defined for masked language models like BERT (see summary of the models).

Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence $X = (x_0, x_1, \dots, x_t)$, then the perplexity of $X$ is

$$\text{PPL}(X) = \exp\left\{ -\frac{1}{t}\sum_i^t \log p_\theta (x_i \mid x_{<i}) \right\}$$

where $\log p_\theta (x_i \mid x_{<i})$ is the log-likelihood of the i-th token conditioned on the preceding tokens $x_{<i}$ according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity, which should always be taken into consideration when comparing different models.

This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this fantastic blog post on The Gradient.

Calculating PPL with fixed-length models

If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below. When working with approximate models, however, we typically have a constraint on the number of tokens the model can process. The largest version of GPT-2, for example, has a fixed length of 1024 tokens, so we cannot calculate $p_\theta(x_t \mid x_{<t})$ directly when $t$ is greater than 1024.
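Before looking at how the context limit is handled, here is a minimal numeric illustration of the definition above. The per-token log-likelihoods are made-up values rather than the output of any particular model; the point is only that perplexity is the exponential of the average negative log-likelihood.

import torch

# Made-up per-token log-likelihoods log p(x_i | x_<i) for a 5-token sequence.
# In practice these would come from a causal language model's output.
token_log_probs = torch.tensor([-2.1, -0.7, -3.4, -1.2, -0.9])

avg_nll = -token_log_probs.mean()   # average negative log-likelihood
ppl = torch.exp(avg_nll)            # perplexity = exp of the average NLL
print(f"average NLL: {avg_nll:.4f}, perplexity: {ppl:.2f}")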
Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max input size is $k$, we then approximate the likelihood of a token $x_t$ by conditioning only on the $k-1$ tokens that precede it rather than the entire context. When evaluating the model's perplexity of a sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed log-likelihoods of each segment independently. This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will have less context at most of the prediction steps.

Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly sliding the context window so that the model has more context when making each prediction. This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by 1 token at a time. This allows computation to proceed much faster while still giving the model a large context to make predictions at each step.

Example: Calculating perplexity with GPT-2 in 🤗 Transformers

Let's demonstrate this process with GPT-2.

from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from accelerate.test_utils.testing import get_backend

device, _, _ = get_backend()  # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
model_id = "openai-community/gpt2-large"
model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)

We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire dataset in memory.

from datasets import load_dataset

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

With 🤗 Transformers, we can simply pass the input_ids as the labels to our model, and the average negative log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating as context to be included in our loss, so we can set these targets to -100 so that they are ignored. The following is an example of how we could do this with a stride of 512. This means that the model will have at least 512 tokens for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens available to condition on).
import torch
from tqdm import tqdm

max_length = model.config.n_positions
stride = 512
seq_len = encodings.input_ids.size(1)

nll_sum = 0.0
n_tokens = 0
prev_end_loc = 0
for begin_loc in tqdm(range(0, seq_len, stride)):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # may be different from stride on last loop
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        # loss is calculated using CrossEntropyLoss which averages over valid labels
        # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
        # to the left by 1.
        neg_log_likelihood = outputs.loss

    # Accumulate the total negative log-likelihood and the total number of tokens
    num_valid_tokens = (target_ids != -100).sum().item()  # number of valid tokens in target_ids
    batch_size = target_ids.size(0)
    num_loss_tokens = num_valid_tokens - batch_size  # subtract batch_size due to internal label shift
    nll_sum += neg_log_likelihood * num_loss_tokens
    n_tokens += num_loss_tokens

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

avg_nll = nll_sum / n_tokens  # average negative log-likelihood per token
ppl = torch.exp(avg_nll)

Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction, and the better the reported perplexity will typically be.

When we run the above with stride = 1024, i.e. no overlap, the resulting PPL is 19.44, which is about the same as the 19.93 reported in the GPT-2 paper. By using stride = 512 and thereby employing our striding window strategy, this jumps down to 16.44. This is not only a more favorable score, but is calculated in a way that is closer to the true autoregressive decomposition of a sequence likelihood.
Get_Croissant_metadata.txt
Get Croissant metadata

The dataset viewer automatically generates the metadata in Croissant format (JSON-LD) for every dataset on the Hugging Face Hub. It lists the dataset's name, description, URL, and the distribution of the dataset as Parquet files, including the columns' metadata. The Croissant metadata is available for all the datasets that can be converted to Parquet format.

What is Croissant?

Croissant is a metadata format built on top of schema.org aimed at describing datasets used for machine learning, to help index, search and load them programmatically.

Get the metadata

This guide shows you how to use the Hugging Face /croissant endpoint to retrieve the Croissant metadata associated with a dataset. The /croissant endpoint takes the dataset name in the URL, for example for the ibm/duorc dataset (in Python):

import requests

headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://huggingface.co/api/datasets/ibm/duorc/croissant"

def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()

data = query()

Under the hood it uses the https://datasets-server.huggingface.co/croissant-crumbs endpoint and enriches it with the Hub metadata. The endpoint response is a JSON-LD document containing the metadata in the Croissant format. For example, the ibm/duorc dataset has two subsets, ParaphraseRC and SelfRC (see the List splits and subsets guide for more details about splits and subsets).
The metadata links to their Parquet files and describes the type of each of the six columns: plot_id , plot , title , question_id , question , and no_answer : Copied { "@context" : { "@language" : "en" , "@vocab" : "https://schema.org/" , "citeAs" : "cr:citeAs" , "column" : "cr:column" , "conformsTo" : "dct:conformsTo" , "cr" : "http://mlcommons.org/croissant/" , "data" : { "@id" : "cr:data" , "@type" : "@json" } , "dataBiases" : "cr:dataBiases" , "dataCollection" : "cr:dataCollection" , "dataType" : { "@id" : "cr:dataType" , "@type" : "@vocab" } , "dct" : "http://purl.org/dc/terms/" , "extract" : "cr:extract" , "field" : "cr:field" , "fileProperty" : "cr:fileProperty" , "fileObject" : "cr:fileObject" , "fileSet" : "cr:fileSet" , "format" : "cr:format" , "includes" : "cr:includes" , "isLiveDataset" : "cr:isLiveDataset" , "jsonPath" : "cr:jsonPath" , "key" : "cr:key" , "md5" : "cr:md5" , "parentField" : "cr:parentField" , "path" : "cr:path" , "personalSensitiveInformation" : "cr:personalSensitiveInformation" , "recordSet" : "cr:recordSet" , "references" : "cr:references" , "regex" : "cr:regex" , "repeated" : "cr:repeated" , "replace" : "cr:replace" , "sc" : "https://schema.org/" , "separator" : "cr:separator" , "source" : "cr:source" , "subField" : "cr:subField" , "transform" : "cr:transform" } , "@type" : "sc:Dataset" , "distribution" : [ { "@type" : "cr:FileObject" , "@id" : "repo" , "name" : "repo" , "description" : "The Hugging Face git repository." , "contentUrl" : "https://huggingface.co/datasets/ibm/duorc/tree/refs%2Fconvert%2Fparquet" , "encodingFormat" : "git+https" , "sha256" : "https://github.com/mlcommons/croissant/issues/80" } , { "@type" : "cr:FileSet" , "@id" : "parquet-files-for-config-ParaphraseRC" , "name" : "parquet-files-for-config-ParaphraseRC" , "description" : "The underlying Parquet files as converted by Hugging Face (see: https://huggingface.co/docs/dataset-viewer/parquet)." , "containedIn" : { "@id" : "repo" } , "encodingFormat" : "application/x-parquet" , "includes" : "ParaphraseRC/*/*.parquet" } , { "@type" : "cr:FileSet" , "@id" : "parquet-files-for-config-SelfRC" , "name" : "parquet-files-for-config-SelfRC" , "description" : "The underlying Parquet files as converted by Hugging Face (see: https://huggingface.co/docs/dataset-viewer/parquet)." , "containedIn" : { "@id" : "repo" } , "encodingFormat" : "application/x-parquet" , "includes" : "SelfRC/*/*.parquet" } ] , "recordSet" : [ { "@type" : "cr:RecordSet" , "@id" : "ParaphraseRC" , "name" : "ParaphraseRC" , "description" : "ibm/duorc - 'ParaphraseRC' subset\n\nAdditional information:\n- 3 splits: train, validation, test\n- 1 skipped column: answers" , "field" : [ { "@type" : "cr:Field" , "@id" : "ParaphraseRC/plot_id" , "name" : "ParaphraseRC/plot_id" , "description" : "Column 'plot_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "plot_id" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/plot" , "name" : "ParaphraseRC/plot" , "description" : "Column 'plot' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "plot" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/title" , "name" : "ParaphraseRC/title" , "description" : "Column 'title' from the Hugging Face parquet file." 
, "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "title" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/question_id" , "name" : "ParaphraseRC/question_id" , "description" : "Column 'question_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "question_id" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/question" , "name" : "ParaphraseRC/question" , "description" : "Column 'question' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "question" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/no_answer" , "name" : "ParaphraseRC/no_answer" , "description" : "Column 'no_answer' from the Hugging Face parquet file." , "dataType" : "sc:Boolean" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "no_answer" } } } ] } , { "@type" : "cr:RecordSet" , "@id" : "SelfRC" , "name" : "SelfRC" , "description" : "ibm/duorc - 'SelfRC' subset\n\nAdditional information:\n- 3 splits: train, validation, test\n- 1 skipped column: answers" , "field" : [ { "@type" : "cr:Field" , "@id" : "SelfRC/plot_id" , "name" : "SelfRC/plot_id" , "description" : "Column 'plot_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "plot_id" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/plot" , "name" : "SelfRC/plot" , "description" : "Column 'plot' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "plot" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/title" , "name" : "SelfRC/title" , "description" : "Column 'title' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "title" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/question_id" , "name" : "SelfRC/question_id" , "description" : "Column 'question_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "question_id" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/question" , "name" : "SelfRC/question" , "description" : "Column 'question' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "question" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/no_answer" , "name" : "SelfRC/no_answer" , "description" : "Column 'no_answer' from the Hugging Face parquet file." , "dataType" : "sc:Boolean" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "no_answer" } } } ] } ] , "name" : "duorc" , "description" : "\n\t\n\t\t\n\t\n\t\n\t\tDataset Card for duorc\n\t\n\n\n\t\n\t\t\n\t\n\t\n\t\tDataset Summary\n\t\n\nThe DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. 
SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the… See the full description on the dataset page: https://huggingface.co/datasets/ibm/duorc." , "alternateName" : [ "ibm/duorc" , "DuoRC" ] , "creator" : { "@type" : "Organization" , "name" : "IBM" , "url" : "https://huggingface.co/ibm" } , "keywords" : [ "question-answering" , "text2text-generation" , "abstractive-qa" , "extractive-qa" , "crowdsourced" , "crowdsourced" , "monolingual" , "100K<n<1M" , "10K<n<100K" , "original" , "English" , "mit" , "Croissant" , "arxiv:1804.07927" , "🇺🇸 Region: US" ] , "license" : "https://choosealicense.com/licenses/mit/" , "sameAs" : "https://duorc.github.io/" , "url" : "https://huggingface.co/datasets/ibm/duorc" }

Load the dataset

To load the dataset, you can use the mlcroissant library. It provides a simple way to load datasets from Croissant metadata.
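A minimal sketch of that loading step is shown below. The Croissant URL and record-set name come from the example above; the exact mlcroissant calls (Dataset, records) are assumed here and may differ between library versions, so treat this as indicative rather than authoritative.

import itertools

import mlcroissant as mlc

# Point mlcroissant at the Croissant metadata served by the Hub (see the endpoint above).
ds = mlc.Dataset(jsonld="https://huggingface.co/api/datasets/ibm/duorc/croissant")

# "ParaphraseRC" is one of the record sets described in the metadata above.
records = ds.records(record_set="ParaphraseRC")

# Iterate lazily over the first few records.
for record in itertools.islice(records, 3):
    print(record)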
RLOO_Trainer.txt
RLOO Trainer

TRL supports training LLMs with REINFORCE Leave-One-Out (RLOO). The idea is that instead of using a value function, RLOO generates K completions for each prompt. For each completion, RLOO uses the mean scores from the other K-1 completions as a baseline to calculate the advantage. RLOO also models the entire completion as a single action, whereas PPO models each token as an action. Note that REINFORCE / A2C is a special case of PPO, when the number of PPO epochs is 1 and the number of mini-batches is 1, which is how we implement RLOO in TRL.

References:
Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs
A2C is a special case of PPO
Fine-Tuning Language Models from Human Preferences
Learning to Summarize from Human Feedback
The N Implementation Details of RLHF with PPO
The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization

Get started

To just run a RLOO script to make sure the trainer can run, you can run the following command to train a RLOO model with a dummy reward model.

python examples/scripts/rloo/rloo.py \
    --dataset_name trl-internal-testing/descriptiveness-sentiment-trl-style \
    --dataset_train_split descriptiveness \
    --learning_rate 3e-6 \
    --output_dir models/minimal/rloo \
    --per_device_train_batch_size 64 \
    --gradient_accumulation_steps 1 \
    --total_episodes 10000 \
    --model_name_or_path EleutherAI/pythia-14m \
    --reward_model_path EleutherAI/pythia-14m \
    --missing_eos_penalty 1.0

Explanation of the logged metrics

The logged metrics are as follows. Here is an example tracked run at Weights and Biases.
eps : Tracks the number of episodes per second.
objective/kl : The mean Kullback-Leibler (KL) divergence between the current policy and reference policy.
objective/entropy : The mean entropy of the policy, indicating the randomness of the actions chosen by the policy.
objective/non_score_reward : The mean reward from non-score-related sources, basically beta * kl.sum(1) , where beta is the KL penalty coefficient and kl is the per-token KL divergence. objective/rlhf_reward : The mean RLHF reward, which is score - non_score_reward . objective/scores : The mean scores returned by the reward model / environment. policy/approxkl_avg : The average approximate KL divergence between consecutive PPO policies. Note that this is not the same as objective/kl . policy/clipfrac_avg : The average fraction of policy updates that are clipped, indicating how often the policy updates are constrained to prevent large changes. loss/policy_avg : The average policy loss, indicating how well the policy is performing. val/clipfrac_avg : The average fraction of value function updates that are clipped, similar to policy/clipfrac_avg but for the value function. policy/entropy_avg : The average entropy of the policy during training, indicating how diverse the policy’s actions are. val/ratio : The mean ratio of the current policy probability to the old policy probability, providing a measure of how much the policy has changed. val/ratio_var : The variance of the val/ratio , indicating the variability in policy changes. val/num_eos_tokens : The number of end-of-sequence (EOS) tokens generated, which can indicate the number of complete responses. lr : lr: The current learning rate used by the optimizer. episode : episode: The current global step or episode count in the training process. Cookbook Debugging TIP: objective/rlhf_reward : this is the ultimate objective of the RLHF training. If training works as intended, this metric should keep going up. Debugging TIP: val/ratio : this number should float around 1.0, and it gets clipped by --cliprange 0.2 with PPO’s surrogate loss. So if this ratio is too high like 2.0 or 1000.0 or too small like 0.1, it means the updates between consecutive policies are too drastic. You should try undertand why this is happening and try to fix it. Memory TIP: If you are running out of memory, you can try to reduce the --per_device_train_batch_size or increase the --gradient_accumulation_steps to reduce the memory footprint. Memory TIP: If you have multiple GPUs, you can also run training with DeepSpeed stage 3 to reduce the memory footprint accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml . Usage TIP: We recommend to use the “EOS trick” via --missing_eos_penalty , which subtracts a static scalar penalty from the score of completions that do not end with an EOS token. This can help the model learn to generate more coherent completions. What is my model doing exactly? To help you understand what your model is doing, we periodically log some sample completions from the model. Here is an example of a completion. In an example tracked run at Weights and Biases , it looks like the following, allowing you to see the model’s response at different stages of training. By default we generate --num_sample_generations 10 during training, but you can customize the number of generations. In the logs the sampled generations look like Copied ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓ ┃ query ┃ model response ┃ score ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩ │ SUBREDDIT: r/AskReddit │ I'm in love with a friend, and │ 3.921875 │ │ │ I don't know how to get rid of │ │ │ TITLE: How do you get someone │ those feelings. I'm │ │ │ out of your head? 
│ desperate.<|endoftext|>[PAD][P… │ │ │ │ │ │ │ POST: Hi, │ │ │ │ I'm 22 , and I have been with my │ │ │ │ girlfriend for 5 years now. We │ │ │ │ recently moved together. We've │ │ │ │ always loved each other │ │ │ │ intensely. │ │ │ │ │ │ │ │ Problem, I recently started to │ │ │ │ have feelings for an other │ │ │ │ person (a friend). This person │ │ │ │ has had a boyfriend for now 3 │ │ │ │ years, and has absolutely no │ │ │ │ ideas. Those feelings were so │ │ │ │ strong, it was hard to hide │ │ │ │ them. After 2 months of me │ │ │ │ being distant and really sad, │ │ │ │ my girlfriend forced me to say │ │ │ │ what was bothering me . I'm not │ │ │ │ a good liar, and now she knows. │ │ │ │ │ │ │ │ We decided to give us a week │ │ │ │ alone, I went to my parents. │ │ │ │ │ │ │ │ Now, I'm completely lost. I │ │ │ │ keep on thinking about this │ │ │ │ person, and I hate that . I │ │ │ │ would like for those feelings │ │ │ │ to go away, to leave me alone. │ │ │ │ But I can't. │ │ │ │ │ │ │ │ What do I do? It's been 3 │ │ │ │ months now, and I'm just │ │ │ │ desperate. │ │ │ │ │ │ │ │ TL;DR: │ │ │ ├─────────────────────────────────┼─────────────────────────────────┼──────────┤ │ SUBREDDIT: r/pettyrevenge │ My mom woke me up with a loud │ 6.84375 │ │ │ TV. I blasted Gangnam Style on │ │ │ TITLE: So, my mom woke me up │ repeat , with the bass cranked │ │ │ with a loud TV. │ up as high as it could │ │ │ │ go.<|endoftext|>[PAD][PAD][PAD… │ │ │ POST: She was in her living │ │ │ │ room, watching TV. This was at │ │ │ │ about 8 : 30 in the morning, and │ │ │ │ she was exercising. She turned │ │ │ │ the TV up extra loud to hear it │ │ │ │ over her excercycle, and woke │ │ │ │ me up. I went in there asking │ │ │ │ for her to turn it down. She │ │ │ │ said she didn't have to ; I │ │ │ │ explained that I always used │ │ │ │ headphones so she didn't have │ │ │ │ to deal with my noise and that │ │ │ │ she should give me a little │ │ │ │ more respect, given that I paid │ │ │ │ rent at the time . │ │ │ │ │ │ │ │ She disagreed. I went back to │ │ │ │ my room, rather pissed off at │ │ │ │ the lack of equality. I had no │ │ │ │ lock on my door; but I had a │ │ │ │ dresser right next to it , so I │ │ │ │ pulled one of the drawers out │ │ │ │ enough so that it caused the │ │ │ │ door to not be openable. Then, │ │ │ │ I turned my speakers up really │ │ │ │ loud and blasted Gangnam Style │ │ │ │ on repeat , with the bass │ │ │ │ cranked up as high as it could │ │ │ │ go. │ │ │ │ │ │ │ │ If you hate Gangnam Style for │ │ │ │ being overplayed, you will see │ │ │ │ why I chose that particular │ │ │ │ song. I personally don't mind │ │ │ │ it . But here's the thing about │ │ │ │ my bass; it vibrates the walls, │ │ │ │ making one hell of a lot of │ │ │ │ noise. Needless to say , my mom │ │ │ │ was not pleased and shut off │ │ │ │ the internet. But it was oh so │ │ │ │ worth it . │ │ │ │ │ │ │ │ TL;DR: │ │ │ └─────────────────────────────────┴─────────────────────────────────┴──────────┘ Implementation details The bulk of RLOOTrainer is based on the PPO implementation, which is based on the The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . Below is a vectorized advantage calculation for RLOO: Copied def test_rloo_reward (): local_batch_size = 3 rloo_k = 4 rlhf_reward = torch.tensor([ 1 , 2 , 3 , # first rlhf reward for three prompts 2 , 3 , 4 , # second rlhf reward for three prompts 5 , 6 , 7 , # third rlhf reward for three prompts 8 , 9 , 10 , # fourth rlhf reward for three prompts ]). 
float () # here we have 3 prompts which have 4 completions each baseline = (rlhf_reward. sum ( 0 ) - rlhf_reward) / (rloo_k - 1 ) advantages = torch.zeros_like(rlhf_reward) for i in range ( 0 , len (advantages), local_batch_size): other_response_rlhf_rewards = [] for j in range ( 0 , len (advantages), local_batch_size): if i != j: other_response_rlhf_rewards.append(rlhf_reward[j : j + local_batch_size]) advantages[i : i + local_batch_size] = rlhf_reward[i : i + local_batch_size] - torch.stack(other_response_rlhf_rewards).mean( 0 ) assert ( 1 - ( 2 + 5 + 8 ) / 3 - advantages[ 0 ].item()) < 1e-6 # First rlhf reward for the first prompt assert ( 6 - ( 3 + 2 + 9 ) / 3 - advantages[ 7 ].item()) < 1e-6 # Third rlhf reward for the second prompt # Vectorized implementation rlhf_reward = rlhf_reward.reshape(rloo_k, local_batch_size) baseline = (rlhf_reward. sum ( 0 ) - rlhf_reward) / (rloo_k - 1 ) vec_advantages = rlhf_reward - baseline torch.testing.assert_close(vec_advantages.flatten(), advantages) Benchmark experiments To validate the RLOO implementation works, we ran experiment on the 1B model. Here are the command we used to run the experiment. We take the SFT / RM models directly from The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . Copied accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \ --output_dir models/minimal/rloo_tldr \ --dataset_name trl-internal-testing/tldr-preference-sft-trl-style \ --dataset_test_split validation \ --num_ppo_epochs 2 \ --num_mini_batches 2 \ --learning_rate 3e-6 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 16 \ --total_episodes 1000000 \ --model_name_or_path EleutherAI/pythia-1b-deduped \ --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \ --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \ --local_rollout_forward_batch_size 16 \ --missing_eos_penalty 1.0 \ --stop_token eos \ --kl_coef 0.03 Checkpoints and experiment tracking are available at: 🤗 Model checkpoint 🐝 Tracked experiment To evaluate, we use vLLM to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR. For more information on how to use judges, see Judges . Copied $ python examples/scripts/evals/judge_tldr.py --model_name_or_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 33.00% $ python examples/scripts/evals/judge_tldr.py --model_name_or_path vwxyzjn/rloo_tldr --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 51.20% The RLOO checkpoint gets a 51.2% preferred rate vs the 33.0% preference rate of the SFT checkpoint. This is a good sign that the RLOO training is working as intended. 
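For quick reuse outside the test above, the vectorized leave-one-out advantage computation from the Implementation details section can be wrapped in a small helper. This is a sketch mirroring the code above, not the trainer's internal API.

import torch

def rloo_advantages(rlhf_reward: torch.Tensor, rloo_k: int) -> torch.Tensor:
    """Leave-one-out advantages for rewards laid out as (rloo_k * batch,), same layout as above."""
    rlhf_reward = rlhf_reward.reshape(rloo_k, -1)                      # (k, batch)
    baseline = (rlhf_reward.sum(0) - rlhf_reward) / (rloo_k - 1)       # mean of the other k-1 completions
    return (rlhf_reward - baseline).flatten()

# Reusing the toy numbers from the test above: 3 prompts, 4 completions each.
rewards = torch.tensor([1., 2., 3., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
print(rloo_advantages(rewards, rloo_k=4))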
Metrics: Copied # pip install openrlbenchmark==0.2.1a5 # see https://github.com/openrlbenchmark/openrlbenchmark#get-started for documentation # to use it, change `?we=huggingface&wpn=trl` to your own project and `?tag=pr-1540` to your own tag python -m openrlbenchmark.rlops_multi_metrics \ --filters '?we=huggingface&wpn=trl&xaxis=train/episode&ceik=output_dir&cen=sft_model_path&metrics=train/objective/rlhf_reward&metrics=train/objective/scores&metrics=train/objective/kl&metrics=train/objective/non_score_reward&metrics=train/objective/entropy&metrics=train/policy/approxkl_avg&metrics=train/policy/clipfrac_avg&metrics=train/loss/policy_avg&metrics=train/policy/entropy_avg&metrics=train/val/ratio&metrics=train/val/ratio_var&metrics=train/val/num_eos_tokens&metrics=train/lr&metrics=train/eps' \ "cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr?tag=pr-1540" \ --env-ids models/minimal/rloo_tldr \ --pc.ncols 4 \ --pc.ncols-legend 1 \ --pc.xlabel "Episode" \ --output-filename benchmark/trl/pr-1540/rloo \ --scan-history RLOOTrainer class trl. RLOOTrainer < source > ( config : RLOOConfig processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] policy : Module ref_policy : Module reward_model : Module train_dataset : Dataset data_collator : typing.Optional[transformers.data.data_collator.DataCollatorWithPadding] = None eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None optimizers : tuple = (None, None) callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None ) create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) — The name of the model. dataset_name ( str , optional , defaults to None ) — The name of the dataset used for training. tags ( str , list[str] or None , optional , defaults to None ) — Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer . RLOOConfig class trl. 
RLOOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-05 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 
'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False dataset_num_proc : typing.Optional[int] = None num_mini_batches : int = 1 total_episodes : typing.Optional[int] = None local_rollout_forward_batch_size : int = 64 num_sample_generations : int = 10 response_length : int = 53 stop_token : typing.Optional[typing.Literal['eos']] = None stop_token_id : typing.Optional[int] = None temperature : float = 0.7 missing_eos_penalty : typing.Optional[float] = None sft_model_path : str = 'EleutherAI/pythia-160m' world_size : typing.Optional[int] = None num_total_batches : typing.Optional[int] = None micro_batch_size : typing.Optional[int] = None local_batch_size : typing.Optional[int] = None batch_size : typing.Optional[int] = None local_mini_batch_size : typing.Optional[int] = None mini_batch_size : typing.Optional[int] = None exp_name : str = 'rloo_config' reward_model_path : str = 'EleutherAI/pythia-160m' num_ppo_epochs : int = 4 whiten_rewards : bool = False kl_coef : float = 0.05 cliprange : float = 0.2 rloo_k : int = 2 ) Parameters exp_name ( str , optional , defaults to os.path.basename(__file__)[ -- -len(".py")] ): Name of this experiment. reward_model_path ( str , optional , defaults to "EleutherAI/pythia-160m" ) — Path to the reward model. num_ppo_epochs ( int , optional , defaults to 4 ) — Number of epochs to train. whiten_rewards ( bool , optional , defaults to False ) — Whether to whiten the rewards. 
kl_coef ( float , optional , defaults to 0.05 ) — KL coefficient. cliprange ( float , optional , defaults to 0.2 ) — Clip range. rloo_k ( int , optional , defaults to 2 ) — REINFORCE Leave-One-Out (RLOO) number of online samples per prompt. Configuration class for the RLOOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
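To make the signatures above a bit more concrete, here is a rough construction sketch. The checkpoints, dataset, and hyperparameters are illustrative placeholders, only a handful of the documented RLOOConfig fields are set, and the dataset preparation is simplified; consult the TRL example scripts for a complete, tested setup.

# Rough sketch of wiring up RLOOConfig / RLOOTrainer from the signatures documented above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer
from trl import RLOOConfig, RLOOTrainer

model_id = "EleutherAI/pythia-14m"  # placeholder policy / reward checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")

policy = AutoModelForCausalLM.from_pretrained(model_id)
ref_policy = AutoModelForCausalLM.from_pretrained(model_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

# NOTE: in the real example script the dataset is first tokenized into prompt input_ids;
# that preparation step is omitted here for brevity.
train_dataset = load_dataset("trl-internal-testing/descriptiveness-sentiment-trl-style", split="descriptiveness")

config = RLOOConfig(
    output_dir="models/minimal/rloo",
    per_device_train_batch_size=64,
    total_episodes=10_000,
    rloo_k=2,            # number of online samples per prompt (see above)
    kl_coef=0.05,
    missing_eos_penalty=1.0,
)

trainer = RLOOTrainer(
    config=config,
    processing_class=tokenizer,
    policy=policy,
    ref_policy=ref_policy,
    reward_model=reward_model,
    train_dataset=train_dataset,
)
trainer.train()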
Training_FAQ.txt
Training FAQ

What Metrics Should I Look at?

When performing classical supervised fine-tuning of language models, the loss (especially the validation loss) serves as a good indicator of the training progress. However, in Reinforcement Learning (RL), the loss becomes less informative about the model's performance, and its value may fluctuate while the actual performance improves.

To address this, we recommend focusing on two key metrics first:

Mean Reward : The primary goal is to maximize the reward achieved by the model during RL training.
Objective KL Divergence : KL divergence (Kullback-Leibler divergence) measures the dissimilarity between two probability distributions. In the context of RL training, we use it to quantify the difference between the current model and a reference model. Ideally, we want to keep the KL divergence between 0 and 10 to ensure the model's generated text remains close to what the reference model produces.

However, there are more metrics that can be useful for debugging; check out the logging section.

Why Do We Use a Reference Model, and What's the Purpose of KL Divergence?

When training RL models, optimizing solely for reward may lead to unexpected behaviors, where the model exploits the environment in ways that don't align with good language generation. In the case of RLHF, we use a reward model trained to predict whether a generated text is highly ranked by humans. However, the RL model being optimized against the reward model may learn patterns that yield high reward but do not represent good language. This can result in extreme cases where the model generates texts with excessive exclamation marks or emojis to maximize the reward. In some worst-case scenarios, the model may generate patterns completely unrelated to natural language yet receive high rewards, similar to adversarial attacks.

Figure: Samples without a KL penalty from https://huggingface.co/papers/1909.08593 .
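Since the Objective KL metric above is just the divergence between the policy and the reference model on the generated tokens, it can be approximated from the two models' per-token log-probabilities of the sampled tokens. The snippet below is a generic illustration with made-up numbers, not TRL's internal logging code.

import torch

# Made-up per-token log-probs of the *sampled* tokens under the active policy and the reference model.
logprobs_policy = torch.tensor([-1.2, -0.4, -2.0, -0.9])
logprobs_ref = torch.tensor([-1.5, -0.5, -1.8, -1.3])

# A common per-token KL estimate used for monitoring and penalties: log p_active(x) - log p_ref(x)
per_token_kl = logprobs_policy - logprobs_ref
print("summed KL estimate for this completion:", per_token_kl.sum().item())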
To address this issue, we add a penalty to the reward function based on the KL divergence between the current model and the reference model. By doing this, we encourage the model to stay close to what the reference model generates.

What Is the Concern with Negative KL Divergence?

If you generate text by purely sampling from the model distribution, things work fine in general. But when you use the generate method there are a few caveats, because it does not always purely sample depending on the settings, which can cause the KL divergence to go negative. Essentially, when the active model achieves log_p_token_active < log_p_token_ref we get negative KL divergence. This can happen in several cases:

top-k sampling : the model can smooth out the probability distribution, causing the top-k tokens to have a smaller probability than those of the reference model while still being selected.
min_length : this ignores the EOS token until min_length is reached. The model can thus assign a very low log prob to the EOS token and very high probs to all others until min_length is reached.

These are just a few examples. Why is negative KL an issue? The total reward R is computed as R = r - beta * KL, so if the model can learn how to drive the KL divergence negative it effectively gets a positive reward. In many cases it can be much easier to exploit such a bug in the generation than to actually learn the reward function. In addition, the KL can become arbitrarily small, so the actual reward can be very small compared to it.

So how should you generate text for PPO training? Let's have a look!

How to generate text for training?

In order to avoid the KL issues described above we recommend using the following settings:

generation_kwargs = {
    "min_length": -1,  # don't ignore the EOS token (see above)
    "top_k": 0.0,  # no top-k sampling
    "top_p": 1.0,  # no nucleus sampling
    "do_sample": True,  # yes, we want to sample
    "pad_token_id": tokenizer.eos_token_id,  # most decoder models don't have a padding token - use EOS token instead
    "max_new_tokens": 32,  # specify how many tokens you want to generate at most
}

With these settings we usually don't encounter any issues. You can also experiment with other settings, but if you encounter issues with negative KL divergence, try to go back to these and see if they persist.

How can you debug your own use-case?

Debugging the RL pipeline can be challenging due to its complexity. Here are some tips and suggestions to make the process easier:

Start from a working example : Begin with a working example from the trl repository and gradually modify it to fit your specific use-case. Changing everything at once can make it difficult to identify the source of potential issues. For example, you can start by replacing the model in the example and, once you figure out the best hyperparameters, try to switch to your dataset and reward model. If you change everything at once you won't know where a potential problem comes from.
Start small, scale later : Training large models can be very slow and take several hours or days until you see any improvement. For debugging this is not a convenient timescale, so try to use small model variants during the development phase and scale up once that works. That being said, you sometimes have to be careful as small models might not have the capacity to solve a complicated task either.
Start simple : Try to start with a minimal example and build complexity from there.
Your use-case might, for example, require a complicated reward function consisting of many different rewards - try using a single signal first, see if you can optimize that, and then add more complexity. Inspect the generations : It’s always a good idea to inspect what the model is generating. Maybe there is a bug in your post-processing or your prompt, or bad settings cause you to cut off generations too soon. These things are very hard to see in the metrics but very obvious if you look at the generations. Inspect the reward model : If your reward is not improving over time, maybe there’s an issue with the reward model. You can look at extreme cases to see if it does what it should: e.g. in the sentiment case you can check whether simple positive and negative examples really get different rewards. You can also look at the distribution of your dataset. Finally, maybe the reward is dominated by the query, which the model can’t affect, so you might need to normalize it (e.g. reward of query+response minus reward of the query). These are just a few tips that we find helpful - if you have more useful tricks, feel free to open a PR to add them as well!
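To make the reward shaping discussed in this FAQ concrete, here is a minimal, self-contained sketch of the R = r - beta * KL computation. It is an illustration, not TRL's internal implementation: the function name, the default beta value and the per-sequence summation are assumptions (TRL applies the penalty per token inside its trainers).

Copied
import torch

def kl_penalized_reward(
    scalar_reward: torch.Tensor,    # reward model score for query + response, shape ()
    logprobs_active: torch.Tensor,  # per-token log-probs under the trained (active) model
    logprobs_ref: torch.Tensor,     # per-token log-probs under the frozen reference model
    beta: float = 0.1,              # KL penalty coefficient (hypothetical value)
) -> torch.Tensor:
    # Per-token KL estimate; negative whenever the active model assigns a token
    # a lower log-probability than the reference model does.
    per_token_kl = logprobs_active - logprobs_ref
    # Total reward R = r - beta * KL (summed over the generated tokens here).
    return scalar_reward - beta * per_token_kl.sum()

# Toy usage: a 4-token response with a raw reward of 1.0.
reward = kl_penalized_reward(
    torch.tensor(1.0),
    torch.tensor([-1.2, -0.8, -2.0, -0.5]),
    torch.tensor([-1.0, -0.9, -1.5, -0.7]),
)
print(reward)

Note how a response whose tokens are systematically less likely under the active model than under the reference model drives the summed KL negative and therefore inflates the total reward, which is exactly the exploit described above.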
LayerNorm_Tuning.txt
LayerNorm Tuning LayerNorm Tuning ( LN Tuning ) is a PEFT method that only fine-tunes the parameters of the LayerNorm layers in a model. The paper tested the performance of this method on large language models and showed that it can achieve strong performance with a significant reduction in the number of trainable parameters and GPU memory usage. However, the method is not limited to language models and can be applied to any model that uses LayerNorm layers. In this implementation, all LayerNorm layers inside the model are fine-tuned by default, but the method can also be used to target other layer types, such as MLP or Attention layers, by specifying target_modules in the LNTuningConfig . The abstract from the paper is: This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full parameter finetuning or LoRA, its benefits on efficiency are substantial. For example, when compared to LoRA on a 13B model scale, performance can be enhanced by an average of over 20% across five multi-modal tasks, and meanwhile, results in a significant reduction of trainable parameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning only with conversational data can improve efficiency further.
Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model. LNTuningConfig class peft. LNTuningConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False target_modules : Optional[Union[list[str], str]] = None exclude_modules : Optional[Union[list[str], str]] = None modules_to_save : Optional[Union[list[str], str]] = None ) Parameters target_modules ( Optional[Union[List[str], str]] ) — List of module names or regex expression of the module names to replace with LNTuning. For example, '.*decoder.*' or '.*encoder.*' . If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised — in this case, you should specify the target modules manually. exclude_modules ( Optional[Union[List[str], str]] ) — The names of the modules that the adapter should not be applied to. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. modules_to_save ( Optional[Union[List[str], str]] ) — List of modules to be set as trainable and saved in the final checkpoint. For example, in Sequence Classification or Token Classification tasks, the final layer classifier/score are randomly initialized and as such need to be trainable and saved. This is the configuration class to store the configuration of a LNTuningModel . LNTuningModel class peft. LNTuningModel < source > ( model config adapter_name low_cpu_mem_usage : bool = False ) → ‘torch.nn.Module’ Parameters model ( torch.nn.Module ) — The model to be adapted. config ( LNTuningConfig ) — The configuration of the LN tuning model. adapter_name ( str ) — The name of the adapter, defaults to "default" . low_cpu_mem_usage ( bool , optional , defaults to False ) — This option has no effect on LN tuning but exists for consistency with other PEFT methods. Returns ‘torch.nn.Module’ The adapted model with LayerNorm tuning applied. Creates LayerNorm tuning from a pretrained transformer model. The method is described in detail in https://arxiv.org/abs/2312.11420 . Example: Copied >>> from transformers import AutoModelForCausalLM >>> from peft import get_peft_model, TaskType, LNTuningConfig >>> peft_config = LNTuningConfig( ... task_type=TaskType.CAUSAL_LM, ... ) >>> model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-hf" ) >>> model = get_peft_model(model, peft_config) >>> model.print_trainable_parameters() Attributes : model ( PreTrainedModel ) — The model to be adapted. peft_config ( LNTuningConfig ): The configuration of the LN tuning model. disable_adapter_layers < source > ( ) Disable all adapters. When disabling all adapters, the model output corresponds to the output of the base model. enable_adapter_layers < source > ( ) Enable all adapters. Call this if you have previously disabled all adapters and want to re-enable them.
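As a complement to the example above, here is a sketch of restricting LN tuning to specific modules via target_modules . This is illustrative rather than canonical: the module names below assume the Llama decoder layers in transformers ( input_layernorm , post_attention_layernorm ); inspect model.named_modules() and adapt them for other architectures.

Copied
from transformers import AutoModelForCausalLM
from peft import LNTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Only tune the per-layer norms; module names assume the Llama architecture.
peft_config = LNTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["input_layernorm", "post_attention_layernorm"],
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the matched norm parameters are trainable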
Interface__TranslationOutputValue.txt
Interface: TranslationOutputValue Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: TranslationOutputValue Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: TranslationOutputValue Properties translation _ text • translation_text : string The string after translation Defined in inference/src/tasks/nlp/translation.ts:16 < > Update on GitHub ← TokenClassificationOutputValue VisualQuestionAnsweringOutput → Interface: Translation Output Value Properties translation _ text Defined in
Distributed_Training_with_optimum-neuron.txt
Distributed Training with optimum-neuron AWS Trainium instances are great for training models. They can contain up to 16 Neuron devices, each containing 2 Neuron cores and 32GB of memory (16GB per core). For example, a trn1.32xlarge instance has 32 x 16 = 512GB of memory. But there is a caveat: by default, each Neuron core is an independent data-parallel worker. This means that the model, the gradient state and the optimizer state, amounting to approximately 4 times the model size, must fit in each Neuron core (16GB) to be able to train, and if that is the case, the activations must also fit in the remaining memory. To alleviate that, optimum-neuron supports parallelism features enabling you to harness the full power of your Trainium instance: ZeRO-1 : An optimization of data parallelism that shards the optimizer state (which usually represents half of the memory needed on the device) over the data-parallel ranks. Tensor Parallelism : A technique that shards each of your model's matrix multiplications along a given axis (row or column) across multiple devices. It is also known as intra-layer model parallelism. The number of devices to shard your parameters on is called the tensor_parallel_size . Sequence parallelism : An optimization over Tensor Parallelism that shards the activations on the sequence axis outside of the tensor-parallel regions. It is useful because it saves memory by sharding the activations.
Pipeline Parallelism : It shards the model's layers (blocks) across multiple devices. It is also known as inter-layer model parallelism. The number of devices to shard your layers on is called the pipeline_parallel_size . The good news is that it is possible to combine those techniques, and optimum-neuron makes it very easy! All the example scripts provided in the optimum-neuron repo have those features implemented via the NeuronTrainer . How to enable ZeRO-1? Whether you use the NeuronTrainer or write your own training script that uses the NeuronAccelerator , it is very easy to enable the ZeRO-1 optimization. Via the NeuronTrainer Copied from optimum.neuron import NeuronTrainingArguments, NeuronTrainer # To enable ZeRO-1, set the `zero_1` argument to `True` in the training arguments. training_args = NeuronTrainingArguments( ..., zero_1= True , ) trainer = NeuronTrainer( model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, ) trainer.train() Since the example scripts use the NeuronTrainer , you can enable ZeRO-1 when using them by adding the --zero_1 flag to your command line. For example: Copied torchrun --nproc_per_node=2 examples/language-modeling/run_clm.py \ --model_name_or_path TinyLlama/TinyLlama-1.1B-Chat-v0.6 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --per_device_train_batch_size 1 \ --block_size 1024 \ --bf16 \ --zero_1 \ --output_dir my_training/ Via the NeuronAccelerator There is a little bit more work to do when not using the NeuronTrainer : (Optional) Wrap the optimizer class to make it lazy. When ZeRO-1 is enabled, the original optimizer is overridden to use a sharded version of it. Hence, it is possible to load the original optimizer lazily so that the optimizer state is not materialized until it is actually sharded. Copied from torch.optim import AdamW from optimum.neuron.distributed import make_optimizer_constructor_lazy lazy_adamw = make_optimizer_constructor_lazy(AdamW) Set the zero_1 argument to True when instantiating the NeuronAccelerator . Copied accelerator = NeuronAccelerator( ... zero_1= True , ) model = ... lazy_optimizer = lazy_adamw(...) # Actually instantiate the optimizer. model, optimizer = accelerator.prepare(model, lazy_optimizer) How to enable Tensor Parallelism? Just as for ZeRO-1, it is possible to apply Tensor Parallelism either with the NeuronTrainer or the NeuronAccelerator . When doing Tensor Parallelism, you have different settings: The tensor_parallel_size . Ideally, it should be the smallest value for which the model fits. Whether or not sequence parallelism should be enabled. Sequence parallelism shards the activations on the sequence axis outside of the tensor-parallel regions. It is useful because it saves memory by sharding the activations. Whether or not parallelization of the embedding layer should be done. By default it is done because it offers multiple benefits: Parallelizing the embedding layer saves memory, which can enable fitting a bigger batch size and/or sequence length. For language models, where the embedding layer weights and the language-modeling head weights are usually tied, the language-modeling head ends up parallel and does not need to all-gather its output, since it is fed to a cross-entropy loss compatible with parallelism, saving expensive communication.
On top of that, it is very important to make sure that the original model is loaded in an efficient manner: the training script is going to be called by torchrun , which will dispatch it to workers, one worker per core. If each worker (there are 32 of them in a trn1.32xlarge instance) loads the full model weights, it can take a lot of time and go out-of-memory really fast. optimum-neuron provides a context-manager distributed.lazy_load_for_parallelism() that loads the model lazily to prevent that: only the parameters of the corresponding model shard will be materialized in each worker. Via the NeuronTrainer Copied from optimum.neuron import NeuronTrainingArguments, NeuronTrainer from optimum.neuron.distributed import lazy_load_for_parallelism # Specify the `tensor_parallel_size` in the training arguments. training_args = NeuronTrainingArguments( ..., tensor_parallel_size= 8 , disable_embedding_parallelization= False , # It is `False` by default. disable_sequence_parallel= False , # It is `False` by default. ) with lazy_load_for_parallelism(tensor_parallel_size=training_args.tensor_parallel_size): model = ... trainer = NeuronTrainer( model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, ) trainer.train() Since the example scripts use the NeuronTrainer , you can enable Tensor Parallelism when using them by specifying the --tensor_parallel_size argument, and optionally the disable_embedding_parallelization and disable_sequence_parallel flags, on your command line. For example: Copied torchrun --nproc_per_node=2 examples/language-modeling/run_clm.py \ --model_name_or_path TinyLlama/TinyLlama-1.1B-Chat-v0.6 \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --do_train \ --per_device_train_batch_size 1 \ --block_size 1024 \ --bf16 \ --tensor_parallel_size 2 \ --output_dir my_training/ Via the NeuronAccelerator Just as for ZeRO-1, it is possible to wrap the optimizer class to make it lazy. Since the model parameters are going to be sharded, there is no need to materialize the optimizer state prior to model parallelization: the wrapper makes sure that it stays unmaterialized. Copied from torch.optim import AdamW from optimum.neuron import NeuronAccelerator from optimum.neuron.accelerate.utils import ModelParallelismPlugin from optimum.neuron.distributed import lazy_load_for_parallelism tensor_parallel_size = 8 mp_plugin = ModelParallelismPlugin( tensor_parallel_size, parallelize_embeddings= True , sequence_parallel_enabled= True , checkpoint_dir= None , # Can be specified when resuming from checkpoint. ) accelerator = NeuronAccelerator( ... mp_plugin=mp_plugin, ) with lazy_load_for_parallelism(tensor_parallel_size=tensor_parallel_size): model = ... lazy_adamw = make_optimizer_constructor_lazy(AdamW) lazy_optimizer = lazy_adamw(...) # Actually instantiate the optimizer. model, optimizer = accelerator.prepare(model, lazy_optimizer) Checkpoint consolidation Since Tensor Parallelism consists in sharding the model weights across different workers, only sharded checkpoints are saved during training. It is necessary to consolidate the sharded checkpoints to be able to share and use them outside of the specific training configuration they were created under.
The Optimum CLI provides a way of doing that very easily via the optimum-cli neuron consolidate command: Copied optimum-cli neuron consolidate --help usage: optimum-cli neuron consolidate [-h] [-f {pytorch,safetensors}] checkpoint_dir output_dir positional arguments: checkpoint_dir The path to the directory containing the checkpoints. output_dir The path to the output directory containing the consolidated checkpoint. optional arguments: -h, --help show this help message and exit -f {pytorch,safetensors}, --format {pytorch,safetensors} The format used to save the consolidated checkpoint. All you need to do is specify the sharded checkpoints directory and the output directory that will contain the consolidated checkpoints; the command takes care of the rest. It is also possible to specify the output format of the consolidated checkpoints: by default they are exported to the safetensors format, which is the recommended format to use. Example: a training run with Tensor Parallelism has just completed and its output directory is called my_training . The directory looks like the following: Copied my_training/ ├── README.md ├── all_results.json ├── checkpoint-10 │   ├── config.json │   ├── scheduler.pt │   ├── special_tokens_map.json │   ├── tensor_parallel_shards │   ├── tokenizer.json │   ├── tokenizer.model │   ├── tokenizer_config.json │   ├── trainer_state.json │   └── training_args.bin ├── config.json ├── special_tokens_map.json ├── tensor_parallel_shards │   ├── tp_rank_00_pp_rank_00 │   ├── tp_rank_01_pp_rank_00 │   ├── tp_rank_02_pp_rank_00 │   ├── tp_rank_03_pp_rank_00 │   ├── tp_rank_04_pp_rank_00 │   ├── tp_rank_05_pp_rank_00 │   ├── tp_rank_06_pp_rank_00 │   └── tp_rank_07_pp_rank_00 ├── tokenizer.json ├── tokenizer.model ├── tokenizer_config.json ├── train_results.json ├── trainer_state.json └── training_args.bin The sharded checkpoints are saved under a directory called tensor_parallel_shards . It is possible to consolidate the sharded checkpoints in my_training/tensor_parallel_shards , which correspond to the sharded checkpoints saved at the end of the training, by running the following command: Copied optimum-cli neuron consolidate my_training my_training_consolidated_checkpoint The optimum-cli neuron consolidate command accepts as input either a directory that contains a tensor_parallel_shards directory or the tensor_parallel_shards directory itself.
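As a quick sanity check after consolidation, you can list the tensors stored in the consolidated checkpoint. This is only an illustrative sketch, not part of the optimum-neuron workflow: it assumes the consolidation produced one or more .safetensors files directly under my_training_consolidated_checkpoint (hence the glob), which may vary with the export format you chose.

Copied
import glob

from safetensors import safe_open

# Hypothetical output directory from the consolidation example above.
consolidated_dir = "my_training_consolidated_checkpoint"

# Print every tensor name and shape found in the consolidated safetensors file(s).
for path in sorted(glob.glob(f"{consolidated_dir}/*.safetensors")):
    with safe_open(path, framework="pt", device="cpu") as f:
        for name in f.keys():
            print(path, name, f.get_slice(name).get_shape())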
Interface__CachedFileInfo.txt
Interface: CachedFileInfo Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: CachedFileInfo Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: CachedFileInfo Properties blob • blob : Object Underlying file - which path is symlinked to Type declaration Name Type lastAccessedAt Date lastModifiedAt Date path string size number Defined in hub/src/lib/cache-management.ts:37 path • path : string Defined in hub/src/lib/cache-management.ts:33 < > Update on GitHub ← AuthInfo CachedRepoInfo → Interface: Cached File Info Properties blob Type declaration Defined in path Defined in
Pipelines.txt
pipelines Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation pipelines Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started pipelines Pipelines provide a high-level, easy to use, API for running machine learning models. Example: Instantiate pipeline using the pipeline function. Copied import { pipeline } from '@huggingface/transformers' ; const classifier = await pipeline ( 'sentiment-analysis' ); const output = await classifier ( 'I love transformers!' ); // [{'label': 'POSITIVE', 'score': 0.999817686}] pipelines static .Pipeline ⇐ Callable new Pipeline(options) .dispose() : DisposeType ._call(...args) .TextClassificationPipeline new TextClassificationPipeline(options) ._call() : TextClassificationPipelineCallback .TokenClassificationPipeline new TokenClassificationPipeline(options) ._call() : TokenClassificationPipelineCallback .QuestionAnsweringPipeline new QuestionAnsweringPipeline(options) ._call() : QuestionAnsweringPipelineCallback .FillMaskPipeline new FillMaskPipeline(options) ._call() : FillMaskPipelineCallback .Text2TextGenerationPipeline new Text2TextGenerationPipeline(options) ._key : ’generated_text’ ._call() : Text2TextGenerationPipelineCallback .SummarizationPipeline new SummarizationPipeline(options) ._key : ’summary_text’ .TranslationPipeline new TranslationPipeline(options) ._key : ’translation_text’ .TextGenerationPipeline new TextGenerationPipeline(options) ._call() : TextGenerationPipelineCallback .ZeroShotClassificationPipeline new ZeroShotClassificationPipeline(options) .model : any ._call() : ZeroShotClassificationPipelineCallback .FeatureExtractionPipeline new FeatureExtractionPipeline(options) ._call() : FeatureExtractionPipelineCallback .ImageFeatureExtractionPipeline new ImageFeatureExtractionPipeline(options) ._call() : ImageFeatureExtractionPipelineCallback .AudioClassificationPipeline new AudioClassificationPipeline(options) ._call() : AudioClassificationPipelineCallback .ZeroShotAudioClassificationPipeline new ZeroShotAudioClassificationPipeline(options) ._call() : ZeroShotAudioClassificationPipelineCallback .AutomaticSpeechRecognitionPipeline new AutomaticSpeechRecognitionPipeline(options) ._call() : AutomaticSpeechRecognitionPipelineCallback .ImageToTextPipeline 
new ImageToTextPipeline(options) ._call() : ImageToTextPipelineCallback .ImageClassificationPipeline new ImageClassificationPipeline(options) ._call() : ImageClassificationPipelineCallback .ImageSegmentationPipeline new ImageSegmentationPipeline(options) ._call() : ImageSegmentationPipelineCallback .ZeroShotImageClassificationPipeline new ZeroShotImageClassificationPipeline(options) ._call() : ZeroShotImageClassificationPipelineCallback .ObjectDetectionPipeline new ObjectDetectionPipeline(options) ._call() : ObjectDetectionPipelineCallback .ZeroShotObjectDetectionPipeline new ZeroShotObjectDetectionPipeline(options) ._call() : ZeroShotObjectDetectionPipelineCallback .DocumentQuestionAnsweringPipeline new DocumentQuestionAnsweringPipeline(options) ._call() : DocumentQuestionAnsweringPipelineCallback .TextToAudioPipeline new TextToAudioPipeline(options) ._call() : TextToAudioPipelineCallback .ImageToImagePipeline new ImageToImagePipeline(options) ._call() : ImageToImagePipelineCallback .DepthEstimationPipeline new DepthEstimationPipeline(options) ._call() : DepthEstimationPipelineCallback .pipeline(task, [model], [options]) ⇒ * inner ~ImagePipelineInputs : string | RawImage | URL ~AudioPipelineInputs : string | URL | Float32Array | Float64Array ~BoundingBox : Object ~Disposable ⇒ Promise.<void> ~TextPipelineConstructorArgs : Object ~ImagePipelineConstructorArgs : Object ~TextImagePipelineConstructorArgs : Object ~TextClassificationPipelineType ⇒ Promise.<(TextClassificationOutput|Array<TextClassificationOutput>)> ~TokenClassificationPipelineType ⇒ Promise.<(TokenClassificationOutput|Array<TokenClassificationOutput>)> ~QuestionAnsweringPipelineType ⇒ Promise.<(QuestionAnsweringOutput|Array<QuestionAnsweringOutput>)> ~FillMaskPipelineType ⇒ Promise.<(FillMaskOutput|Array<FillMaskOutput>)> ~Text2TextGenerationPipelineType ⇒ Promise.<(Text2TextGenerationOutput|Array<Text2TextGenerationOutput>)> ~SummarizationPipelineType ⇒ Promise.<(SummarizationOutput|Array<SummarizationOutput>)> ~TranslationPipelineType ⇒ Promise.<(TranslationOutput|Array<TranslationOutput>)> ~TextGenerationPipelineType ⇒ Promise.<(TextGenerationOutput|Array<TextGenerationOutput>)> ~ZeroShotClassificationPipelineType ⇒ Promise.<(ZeroShotClassificationOutput|Array<ZeroShotClassificationOutput>)> ~FeatureExtractionPipelineType ⇒ Promise.<Tensor> ~ImageFeatureExtractionPipelineType ⇒ Promise.<Tensor> ~AudioClassificationPipelineType ⇒ Promise.<(AudioClassificationOutput|Array<AudioClassificationOutput>)> ~ZeroShotAudioClassificationPipelineType ⇒ Promise.<(Array<ZeroShotAudioClassificationOutput>|Array<Array<ZeroShotAudioClassificationOutput>>)> ~Chunk : Object ~AutomaticSpeechRecognitionPipelineType ⇒ Promise.<(AutomaticSpeechRecognitionOutput|Array<AutomaticSpeechRecognitionOutput>)> ~ImageToTextPipelineType ⇒ Promise.<(ImageToTextOutput|Array<ImageToTextOutput>)> ~ImageClassificationPipelineType ⇒ Promise.<(ImageClassificationOutput|Array<ImageClassificationOutput>)> ~ImageSegmentationPipelineType ⇒ Promise.<Array<ImageSegmentationPipelineOutput>> ~ZeroShotImageClassificationPipelineType ⇒ Promise.<(Array<ZeroShotImageClassificationOutput>|Array<Array<ZeroShotImageClassificationOutput>>)> ~ObjectDetectionPipelineType ⇒ Promise.<(ObjectDetectionPipelineOutput|Array<ObjectDetectionPipelineOutput>)> ~ZeroShotObjectDetectionPipelineType ⇒ Promise.<(Array<ZeroShotObjectDetectionOutput>|Array<Array<ZeroShotObjectDetectionOutput>>)> ~DocumentQuestionAnsweringPipelineType ⇒ 
Promise.<(DocumentQuestionAnsweringOutput|Array<DocumentQuestionAnsweringOutput>)> ~TextToAudioPipelineConstructorArgs : Object ~TextToAudioPipelineType ⇒ Promise.<TextToAudioOutput> ~ImageToImagePipelineType ⇒ Promise.<(RawImage|Array<RawImage>)> ~DepthEstimationPipelineType ⇒ Promise.<(DepthEstimationPipelineOutput|Array<DepthEstimationPipelineOutput>)> ~AllTasks : * pipelines.Pipeline ⇐ <code> Callable </code> The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across different pipelines. Kind : static class of pipelines Extends : Callable .Pipeline ⇐ Callable new Pipeline(options) .dispose() : DisposeType ._call(...args) new Pipeline(options) Create a new Pipeline. Param Type Default Description options Object An object containing the following properties: [options.task] string The task of the pipeline. Useful for specifying subtasks. [options.model] PreTrainedModel The model used by the pipeline. [options.tokenizer] PreTrainedTokenizer The tokenizer used by the pipeline (if any). [options.processor] Processor The processor used by the pipeline (if any). pipeline.dispose() : <code> DisposeType </code> Kind : instance method of Pipeline pipeline._call(...args) This method should be implemented in subclasses to provide the functionality of the callable object. Kind : instance method of Pipeline Overrides : _call Throws : Error If the subclass does not implement the `_call` method. Param Type ...args Array.<any> pipelines.TextClassificationPipeline Text classification pipeline using any ModelForSequenceClassification . Example: Sentiment-analysis w/ Xenova/distilbert-base-uncased-finetuned-sst-2-english . Copied const classifier = await pipeline ( 'sentiment-analysis' , 'Xenova/distilbert-base-uncased-finetuned-sst-2-english' ); const output = await classifier ( 'I love transformers!' ); // [{ label: 'POSITIVE', score: 0.999788761138916 }] Example: Multilingual sentiment-analysis w/ Xenova/bert-base-multilingual-uncased-sentiment (and return top 5 classes). Copied const classifier = await pipeline ( 'sentiment-analysis' , 'Xenova/bert-base-multilingual-uncased-sentiment' ); const output = await classifier ( 'Le meilleur film de tous les temps.' , { top_k : 5 }); // [ // { label: '5 stars', score: 0.9610759615898132 }, // { label: '4 stars', score: 0.03323351591825485 }, // { label: '3 stars', score: 0.0036155181005597115 }, // { label: '1 star', score: 0.0011325967498123646 }, // { label: '2 stars', score: 0.0009423971059732139 } // ] Example: Toxic comment classification w/ Xenova/toxic-bert (and return all classes). Copied const classifier = await pipeline ( 'text-classification' , 'Xenova/toxic-bert' ); const output = await classifier ( 'I hate you!' , { top_k : null }); // [ // { label: 'toxic', score: 0.9593140482902527 }, // { label: 'insult', score: 0.16187334060668945 }, // { label: 'obscene', score: 0.03452680632472038 }, // { label: 'identity_hate', score: 0.0223250575363636 }, // { label: 'threat', score: 0.019197041168808937 }, // { label: 'severe_toxic', score: 0.005651099607348442 } // ] Kind : static class of pipelines .TextClassificationPipeline new TextClassificationPipeline(options) ._call() : TextClassificationPipelineCallback new TextClassificationPipeline(options) Create a new TextClassificationPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. 
textClassificationPipeline._call() : <code> TextClassificationPipelineCallback </code> Kind : instance method of TextClassificationPipeline pipelines.TokenClassificationPipeline Named Entity Recognition pipeline using any ModelForTokenClassification . Example: Perform named entity recognition with Xenova/bert-base-NER . Copied const classifier = await pipeline ( 'token-classification' , 'Xenova/bert-base-NER' ); const output = await classifier ( 'My name is Sarah and I live in London' ); // [ // { entity: 'B-PER', score: 0.9980202913284302, index: 4, word: 'Sarah' }, // { entity: 'B-LOC', score: 0.9994474053382874, index: 9, word: 'London' } // ] Example: Perform named entity recognition with Xenova/bert-base-NER (and return all labels). Copied const classifier = await pipeline ( 'token-classification' , 'Xenova/bert-base-NER' ); const output = await classifier ( 'Sarah lives in the United States of America' , { ignore_labels : [] }); // [ // { entity: 'B-PER', score: 0.9966587424278259, index: 1, word: 'Sarah' }, // { entity: 'O', score: 0.9987385869026184, index: 2, word: 'lives' }, // { entity: 'O', score: 0.9990072846412659, index: 3, word: 'in' }, // { entity: 'O', score: 0.9988298416137695, index: 4, word: 'the' }, // { entity: 'B-LOC', score: 0.9995510578155518, index: 5, word: 'United' }, // { entity: 'I-LOC', score: 0.9990395307540894, index: 6, word: 'States' }, // { entity: 'I-LOC', score: 0.9986724853515625, index: 7, word: 'of' }, // { entity: 'I-LOC', score: 0.9975294470787048, index: 8, word: 'America' } // ] Kind : static class of pipelines .TokenClassificationPipeline new TokenClassificationPipeline(options) ._call() : TokenClassificationPipelineCallback new TokenClassificationPipeline(options) Create a new TokenClassificationPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. tokenClassificationPipeline._call() : <code> TokenClassificationPipelineCallback </code> Kind : instance method of TokenClassificationPipeline pipelines.QuestionAnsweringPipeline Question Answering pipeline using any ModelForQuestionAnswering . Example: Run question answering with Xenova/distilbert-base-uncased-distilled-squad . Copied const answerer = await pipeline ( 'question-answering' , 'Xenova/distilbert-base-uncased-distilled-squad' ); const question = 'Who was Jim Henson?' ; const context = 'Jim Henson was a nice puppet.' ; const output = await answerer (question, context); // { // answer: "a nice puppet", // score: 0.5768911502526741 // } Kind : static class of pipelines .QuestionAnsweringPipeline new QuestionAnsweringPipeline(options) ._call() : QuestionAnsweringPipelineCallback new QuestionAnsweringPipeline(options) Create a new QuestionAnsweringPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. questionAnsweringPipeline._call() : <code> QuestionAnsweringPipelineCallback </code> Kind : instance method of QuestionAnsweringPipeline pipelines.FillMaskPipeline Masked language modeling prediction pipeline using any ModelWithLMHead . Example: Perform masked language modelling (a.k.a. “fill-mask”) with Xenova/bert-base-uncased . Copied const unmasker = await pipeline ( 'fill-mask' , 'Xenova/bert-base-cased' ); const output = await unmasker ( 'The goal of life is [MASK].' ); // [ // { token_str: 'survival', score: 0.06137419492006302, token: 8115, sequence: 'The goal of life is survival.' 
}, // { token_str: 'love', score: 0.03902450203895569, token: 1567, sequence: 'The goal of life is love.' }, // { token_str: 'happiness', score: 0.03253183513879776, token: 9266, sequence: 'The goal of life is happiness.' }, // { token_str: 'freedom', score: 0.018736306577920914, token: 4438, sequence: 'The goal of life is freedom.' }, // { token_str: 'life', score: 0.01859794743359089, token: 1297, sequence: 'The goal of life is life.' } // ] Example: Perform masked language modelling (a.k.a. “fill-mask”) with Xenova/bert-base-cased (and return top result). Copied const unmasker = await pipeline ( 'fill-mask' , 'Xenova/bert-base-cased' ); const output = await unmasker ( 'The Milky Way is a [MASK] galaxy.' , { top_k : 1 }); // [{ token_str: 'spiral', score: 0.6299987435340881, token: 14061, sequence: 'The Milky Way is a spiral galaxy.' }] Kind : static class of pipelines .FillMaskPipeline new FillMaskPipeline(options) ._call() : FillMaskPipelineCallback new FillMaskPipeline(options) Create a new FillMaskPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. fillMaskPipeline._call() : <code> FillMaskPipelineCallback </code> Kind : instance method of FillMaskPipeline pipelines.Text2TextGenerationPipeline Text2TextGenerationPipeline class for generating text using a model that performs text-to-text generation tasks. Example: Text-to-text generation w/ Xenova/LaMini-Flan-T5-783M . Copied const generator = await pipeline ( 'text2text-generation' , 'Xenova/LaMini-Flan-T5-783M' ); const output = await generator ( 'how can I become more healthy?' , { max_new_tokens : 100 , }); // [{ generated_text: "To become more healthy, you can: 1. Eat a balanced diet with plenty of fruits, vegetables, whole grains, lean proteins, and healthy fats. 2. Stay hydrated by drinking plenty of water. 3. Get enough sleep and manage stress levels. 4. Avoid smoking and excessive alcohol consumption. 5. Regularly exercise and maintain a healthy weight. 6. Practice good hygiene and sanitation. 7. Seek medical attention if you experience any health issues." }] Kind : static class of pipelines .Text2TextGenerationPipeline new Text2TextGenerationPipeline(options) ._key : ’generated_text’ ._call() : Text2TextGenerationPipelineCallback new Text2TextGenerationPipeline(options) Create a new Text2TextGenerationPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. text2TextGenerationPipeline._key : <code> ’ generated_text ’ </code> Kind : instance property of Text2TextGenerationPipeline text2TextGenerationPipeline._call() : <code> Text2TextGenerationPipelineCallback </code> Kind : instance method of Text2TextGenerationPipeline pipelines.SummarizationPipeline A pipeline for summarization tasks, inheriting from Text2TextGenerationPipeline. Example: Summarization w/ Xenova/distilbart-cnn-6-6 . Copied const generator = await pipeline ( 'summarization' , 'Xenova/distilbart-cnn-6-6' ); const text = 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, ' + 'and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. ' + 'During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest ' + 'man-made structure in the world, a title it held for 41 years until the Chrysler Building in New ' + 'York City was finished in 1930. It was the first structure to reach a height of 300 metres. 
Due to ' + 'the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the ' + 'Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second ' + 'tallest free-standing structure in France after the Millau Viaduct.' ; const output = await generator (text, { max_new_tokens : 100 , }); // [{ summary_text: ' The Eiffel Tower is about the same height as an 81-storey building and the tallest structure in Paris. It is the second tallest free-standing structure in France after the Millau Viaduct.' }] Kind : static class of pipelines .SummarizationPipeline new SummarizationPipeline(options) ._key : ’summary_text’ new SummarizationPipeline(options) Create a new SummarizationPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. summarizationPipeline._key : <code> ’ summary_text ’ </code> Kind : instance property of SummarizationPipeline pipelines.TranslationPipeline Translates text from one language to another. Example: Multilingual translation w/ Xenova/nllb-200-distilled-600M . See here for the full list of languages and their corresponding codes. Copied const translator = await pipeline ( 'translation' , 'Xenova/nllb-200-distilled-600M' ); const output = await translator ( 'जीवन एक चॉकलेट बॉक्स की तरह है।' , { src_lang : 'hin_Deva' , // Hindi tgt_lang : 'fra_Latn' , // French }); // [{ translation_text: 'La vie est comme une boîte à chocolat.' }] Example: Multilingual translation w/ Xenova/m2m100_418M . See here for the full list of languages and their corresponding codes. Copied const translator = await pipeline ( 'translation' , 'Xenova/m2m100_418M' ); const output = await translator ( '生活就像一盒巧克力。' , { src_lang : 'zh' , // Chinese tgt_lang : 'en' , // English }); // [{ translation_text: 'Life is like a box of chocolate.' }] Example: Multilingual translation w/ Xenova/mbart-large-50-many-to-many-mmt . See here for the full list of languages and their corresponding codes. Copied const translator = await pipeline ( 'translation' , 'Xenova/mbart-large-50-many-to-many-mmt' ); const output = await translator ( 'संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है' , { src_lang : 'hi_IN' , // Hindi tgt_lang : 'fr_XX' , // French }); // [{ translation_text: 'Le chef des Nations affirme qu 'il n 'y a military solution in Syria.' }] Kind : static class of pipelines .TranslationPipeline new TranslationPipeline(options) ._key : ’translation_text’ new TranslationPipeline(options) Create a new TranslationPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. translationPipeline._key : <code> ’ translation_text ’ </code> Kind : instance property of TranslationPipeline pipelines.TextGenerationPipeline Language generation pipeline using any ModelWithLMHead or ModelForCausalLM . This pipeline predicts the words that will follow a specified text prompt. NOTE: For the full list of generation parameters, see GenerationConfig . Example: Text generation with Xenova/distilgpt2 (default settings). Copied const generator = await pipeline ( 'text-generation' , 'Xenova/distilgpt2' ); const text = 'I enjoy walking with my cute dog,' ; const output = await generator (text); // [{ generated_text: "I enjoy walking with my cute dog, and I love to play with the other dogs." }] Example: Text generation with Xenova/distilgpt2 (custom settings). 
Copied const generator = await pipeline ( 'text-generation' , 'Xenova/distilgpt2' ); const text = 'Once upon a time, there was' ; const output = await generator (text, { temperature : 2 , max_new_tokens : 10 , repetition_penalty : 1.5 , no_repeat_ngram_size : 2 , num_beams : 2 , num_return_sequences : 2 , }); // [{ // "generated_text": "Once upon a time, there was an abundance of information about the history and activities that" // }, { // "generated_text": "Once upon a time, there was an abundance of information about the most important and influential" // }] Example: Run code generation with Xenova/codegen-350M-mono . Copied const generator = await pipeline ( 'text-generation' , 'Xenova/codegen-350M-mono' ); const text = 'def fib(n):' ; const output = await generator (text, { max_new_tokens : 44 , }); // [{ // generated_text: 'def fib(n):\n' + // ' if n == 0:\n' + // ' return 0\n' + // ' elif n == 1:\n' + // ' return 1\n' + // ' else:\n' + // ' return fib(n-1) + fib(n-2)\n' // }] Kind : static class of pipelines .TextGenerationPipeline new TextGenerationPipeline(options) ._call() : TextGenerationPipelineCallback new TextGenerationPipeline(options) Create a new TextGenerationPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. textGenerationPipeline._call() : <code> TextGenerationPipelineCallback </code> Kind : instance method of TextGenerationPipeline pipelines.ZeroShotClassificationPipeline NLI-based zero-shot classification pipeline using a ModelForSequenceClassification trained on NLI (natural language inference) tasks. Equivalent of text-classification pipelines, but these models don’t require a hardcoded number of potential classes, they can be chosen at runtime. It usually means it’s slower but it is much more flexible. Example: Zero shot classification with Xenova/mobilebert-uncased-mnli . Copied const classifier = await pipeline ( 'zero-shot-classification' , 'Xenova/mobilebert-uncased-mnli' ); const text = 'Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.' ; const labels = [ 'mobile' , 'billing' , 'website' , 'account access' ]; const output = await classifier (text, labels); // { // sequence: 'Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.', // labels: [ 'mobile', 'website', 'billing', 'account access' ], // scores: [ 0.5562091040482018, 0.1843621307860853, 0.13942646639336376, 0.12000229877234923 ] // } Example: Zero shot classification with Xenova/nli-deberta-v3-xsmall (multi-label). Copied const classifier = await pipeline ( 'zero-shot-classification' , 'Xenova/nli-deberta-v3-xsmall' ); const text = 'I have a problem with my iphone that needs to be resolved asap!' ; const labels = [ 'urgent' , 'not urgent' , 'phone' , 'tablet' , 'computer' ]; const output = await classifier (text, labels, { multi_label : true }); // { // sequence: 'I have a problem with my iphone that needs to be resolved asap!', // labels: [ 'urgent', 'phone', 'computer', 'tablet', 'not urgent' ], // scores: [ 0.9958870956360275, 0.9923963400697035, 0.002333537946160235, 0.0015134138567598765, 0.0010699384208377163 ] // } Kind : static class of pipelines .ZeroShotClassificationPipeline new ZeroShotClassificationPipeline(options) .model : any ._call() : ZeroShotClassificationPipelineCallback new ZeroShotClassificationPipeline(options) Create a new ZeroShotClassificationPipeline. 
Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. zeroShotClassificationPipeline.model : <code> any </code> Kind : instance property of ZeroShotClassificationPipeline zeroShotClassificationPipeline._call() : <code> ZeroShotClassificationPipelineCallback </code> Kind : instance method of ZeroShotClassificationPipeline pipelines.FeatureExtractionPipeline Feature extraction pipeline using no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks. Example: Run feature extraction with bert-base-uncased (without pooling/normalization). Copied const extractor = await pipeline ( 'feature-extraction' , 'Xenova/bert-base-uncased' , { revision : 'default' }); const output = await extractor ( 'This is a simple test.' ); // Tensor { // type: 'float32', // data: Float32Array [0.05939924716949463, 0.021655935794115067, ...], // dims: [1, 8, 768] // } Example: Run feature extraction with bert-base-uncased (with pooling/normalization). Copied const extractor = await pipeline ( 'feature-extraction' , 'Xenova/bert-base-uncased' , { revision : 'default' }); const output = await extractor ( 'This is a simple test.' , { pooling : 'mean' , normalize : true }); // Tensor { // type: 'float32', // data: Float32Array [0.03373778983950615, -0.010106077417731285, ...], // dims: [1, 768] // } Example: Calculating embeddings with sentence-transformers models. Copied const extractor = await pipeline ( 'feature-extraction' , 'Xenova/all-MiniLM-L6-v2' ); const output = await extractor ( 'This is a simple test.' , { pooling : 'mean' , normalize : true }); // Tensor { // type: 'float32', // data: Float32Array [0.09094982594251633, -0.014774246141314507, ...], // dims: [1, 384] // } Example: Calculating binary embeddings with sentence-transformers models. Copied const extractor = await pipeline ( 'feature-extraction' , 'Xenova/all-MiniLM-L6-v2' ); const output = await extractor ( 'This is a simple test.' , { pooling : 'mean' , quantize : true , precision : 'binary' }); // Tensor { // type: 'int8', // data: Int8Array [49, 108, 24, ...], // dims: [1, 48] // } Kind : static class of pipelines .FeatureExtractionPipeline new FeatureExtractionPipeline(options) ._call() : FeatureExtractionPipelineCallback new FeatureExtractionPipeline(options) Create a new FeatureExtractionPipeline. Param Type Description options TextPipelineConstructorArgs An object used to instantiate the pipeline. featureExtractionPipeline._call() : <code> FeatureExtractionPipelineCallback </code> Kind : instance method of FeatureExtractionPipeline pipelines.ImageFeatureExtractionPipeline Image feature extraction pipeline using no model head. This pipeline extracts the hidden states from the base transformer, which can be used as features in downstream tasks. Example: Perform image feature extraction with Xenova/vit-base-patch16-224-in21k . Copied const image_feature_extractor = await pipeline ( 'image-feature-extraction' , 'Xenova/vit-base-patch16-224-in21k' ); const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/v3.0.0/cats.png' ; const features = await image_feature_extractor (url); // Tensor { // dims: [ 1, 197, 768 ], // type: 'float32', // data: Float32Array(151296) [ ... ], // size: 151296 // } Example: Compute image embeddings with Xenova/clip-vit-base-patch32 . 
Copied const image_feature_extractor = await pipeline ( 'image-feature-extraction' , 'Xenova/clip-vit-base-patch32' ); const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/v3.0.0/cats.png' ; const features = await image_feature_extractor (url); // Tensor { // dims: [ 1, 512 ], // type: 'float32', // data: Float32Array(512) [ ... ], // size: 512 // } Kind : static class of pipelines .ImageFeatureExtractionPipeline new ImageFeatureExtractionPipeline(options) ._call() : ImageFeatureExtractionPipelineCallback new ImageFeatureExtractionPipeline(options) Create a new ImageFeatureExtractionPipeline. Param Type Description options ImagePipelineConstructorArgs An object used to instantiate the pipeline. imageFeatureExtractionPipeline._call() : <code> ImageFeatureExtractionPipelineCallback </code> Kind : instance method of ImageFeatureExtractionPipeline pipelines.AudioClassificationPipeline Audio classification pipeline using any AutoModelForAudioClassification . This pipeline predicts the class of a raw waveform or an audio file. Example: Perform audio classification with Xenova/wav2vec2-large-xlsr-53-gender-recognition-librispeech . Copied const classifier = await pipeline ( 'audio-classification' , 'Xenova/wav2vec2-large-xlsr-53-gender-recognition-librispeech' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' ; const output = await classifier (url); // [ // { label: 'male', score: 0.9981542229652405 }, // { label: 'female', score: 0.001845747814513743 } // ] Example: Perform audio classification with Xenova/ast-finetuned-audioset-10-10-0.4593 and return top 4 results. Copied const classifier = await pipeline ( 'audio-classification' , 'Xenova/ast-finetuned-audioset-10-10-0.4593' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cat_meow.wav' ; const output = await classifier (url, { top_k : 4 }); // [ // { label: 'Meow', score: 0.5617874264717102 }, // { label: 'Cat', score: 0.22365376353263855 }, // { label: 'Domestic animals, pets', score: 0.1141069084405899 }, // { label: 'Animal', score: 0.08985692262649536 }, // ] Kind : static class of pipelines .AudioClassificationPipeline new AudioClassificationPipeline(options) ._call() : AudioClassificationPipelineCallback new AudioClassificationPipeline(options) Create a new AudioClassificationPipeline. Param Type Description options AudioPipelineConstructorArgs An object used to instantiate the pipeline. audioClassificationPipeline._call() : <code> AudioClassificationPipelineCallback </code> Kind : instance method of AudioClassificationPipeline pipelines.ZeroShotAudioClassificationPipeline Zero shot audio classification pipeline using ClapModel . This pipeline predicts the class of an audio when you provide an audio and a set of candidate_labels . Example : Perform zero-shot audio classification with Xenova/clap-htsat-unfused . 
Copied const classifier = await pipeline ( 'zero-shot-audio-classification' , 'Xenova/clap-htsat-unfused' ); const audio = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/dog_barking.wav' ; const candidate_labels = [ 'dog' , 'vacuum cleaner' ]; const scores = await classifier (audio, candidate_labels); // [ // { score: 0.9993992447853088, label: 'dog' }, // { score: 0.0006007603369653225, label: 'vacuum cleaner' } // ] Kind : static class of pipelines .ZeroShotAudioClassificationPipeline new ZeroShotAudioClassificationPipeline(options) ._call() : ZeroShotAudioClassificationPipelineCallback new ZeroShotAudioClassificationPipeline(options) Create a new ZeroShotAudioClassificationPipeline. Param Type Description options TextAudioPipelineConstructorArgs An object used to instantiate the pipeline. zeroShotAudioClassificationPipeline._call() : <code> ZeroShotAudioClassificationPipelineCallback </code> Kind : instance method of ZeroShotAudioClassificationPipeline pipelines.AutomaticSpeechRecognitionPipeline Pipeline that aims at extracting spoken text contained within some audio. Example: Transcribe English. Copied const transcriber = await pipeline ( 'automatic-speech-recognition' , 'Xenova/whisper-tiny.en' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' ; const output = await transcriber (url); // { text: " And so my fellow Americans ask not what your country can do for you, ask what you can do for your country." } Example: Transcribe English w/ timestamps. Copied const transcriber = await pipeline ( 'automatic-speech-recognition' , 'Xenova/whisper-tiny.en' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' ; const output = await transcriber (url, { return_timestamps : true }); // { // text: " And so my fellow Americans ask not what your country can do for you, ask what you can do for your country." // chunks: [ // { timestamp: [0, 8], text: " And so my fellow Americans ask not what your country can do for you" } // { timestamp: [8, 11], text: " ask what you can do for your country." } // ] // } Example: Transcribe English w/ word-level timestamps. Copied const transcriber = await pipeline ( 'automatic-speech-recognition' , 'Xenova/whisper-tiny.en' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' ; const output = await transcriber (url, { return_timestamps : 'word' }); // { // "text": " And so my fellow Americans ask not what your country can do for you ask what you can do for your country.", // "chunks": [ // { "text": " And", "timestamp": [0, 0.78] }, // { "text": " so", "timestamp": [0.78, 1.06] }, // { "text": " my", "timestamp": [1.06, 1.46] }, // ... // { "text": " for", "timestamp": [9.72, 9.92] }, // { "text": " your", "timestamp": [9.92, 10.22] }, // { "text": " country.", "timestamp": [10.22, 13.5] } // ] // } Example: Transcribe French. Copied const transcriber = await pipeline ( 'automatic-speech-recognition' , 'Xenova/whisper-small' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/french-audio.mp3' ; const output = await transcriber (url, { language : 'french' , task : 'transcribe' }); // { text: " J'adore, j'aime, je n'aime pas, je déteste." } Example: Translate French to English.
Copied const transcriber = await pipeline ( 'automatic-speech-recognition' , 'Xenova/whisper-small' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/french-audio.mp3' ; const output = await transcriber (url, { language : 'french' , task : 'translate' }); // { text: " I love, I like, I don't like, I hate." } Example: Transcribe/translate audio longer than 30 seconds. Copied const transcriber = await pipeline ( 'automatic-speech-recognition' , 'Xenova/whisper-tiny.en' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/ted_60.wav' ; const output = await transcriber (url, { chunk_length_s : 30 , stride_length_s : 5 }); // { text: " So in college, I was a government major, which means [...] So I'd start off light and I'd bump it up" } Kind : static class of pipelines .AutomaticSpeechRecognitionPipeline new AutomaticSpeechRecognitionPipeline(options) ._call() : AutomaticSpeechRecognitionPipelineCallback new AutomaticSpeechRecognitionPipeline(options) Create a new AutomaticSpeechRecognitionPipeline. Param Type Description options TextAudioPipelineConstructorArgs An object used to instantiate the pipeline. automaticSpeechRecognitionPipeline._call() : <code> AutomaticSpeechRecognitionPipelineCallback </code> Kind : instance method of AutomaticSpeechRecognitionPipeline pipelines.ImageToTextPipeline Image To Text pipeline using a AutoModelForVision2Seq . This pipeline predicts a caption for a given image. Example: Generate a caption for an image w/ Xenova/vit-gpt2-image-captioning . Copied const captioner = await pipeline ( 'image-to-text' , 'Xenova/vit-gpt2-image-captioning' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cats.jpg' ; const output = await captioner (url); // [{ generated_text: 'a cat laying on a couch with another cat' }] Example: Optical Character Recognition (OCR) w/ Xenova/trocr-small-handwritten . Copied const captioner = await pipeline ( 'image-to-text' , 'Xenova/trocr-small-handwritten' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/handwriting.jpg' ; const output = await captioner (url); // [{ generated_text: 'Mr. Brown commented icily.' }] Kind : static class of pipelines .ImageToTextPipeline new ImageToTextPipeline(options) ._call() : ImageToTextPipelineCallback new ImageToTextPipeline(options) Create a new ImageToTextPipeline. Param Type Description options TextImagePipelineConstructorArgs An object used to instantiate the pipeline. imageToTextPipeline._call() : <code> ImageToTextPipelineCallback </code> Kind : instance method of ImageToTextPipeline pipelines.ImageClassificationPipeline Image classification pipeline using any AutoModelForImageClassification . This pipeline predicts the class of an image. Example: Classify an image. Copied const classifier = await pipeline ( 'image-classification' , 'Xenova/vit-base-patch16-224' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/tiger.jpg' ; const output = await classifier (url); // [ // { label: 'tiger, Panthera tigris', score: 0.632695734500885 }, // ] Example: Classify an image and return top n classes. 
Copied const classifier = await pipeline ( 'image-classification' , 'Xenova/vit-base-patch16-224' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/tiger.jpg' ; const output = await classifier (url, { top_k : 3 }); // [ // { label: 'tiger, Panthera tigris', score: 0.632695734500885 }, // { label: 'tiger cat', score: 0.3634825646877289 }, // { label: 'lion, king of beasts, Panthera leo', score: 0.00045060308184474707 }, // ] Example: Classify an image and return all classes. Copied const classifier = await pipeline ( 'image-classification' , 'Xenova/vit-base-patch16-224' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/tiger.jpg' ; const output = await classifier (url, { top_k : 0 }); // [ // { label: 'tiger, Panthera tigris', score: 0.632695734500885 }, // { label: 'tiger cat', score: 0.3634825646877289 }, // { label: 'lion, king of beasts, Panthera leo', score: 0.00045060308184474707 }, // { label: 'jaguar, panther, Panthera onca, Felis onca', score: 0.00035465499968267977 }, // ... // ] Kind : static class of pipelines .ImageClassificationPipeline new ImageClassificationPipeline(options) ._call() : ImageClassificationPipelineCallback new ImageClassificationPipeline(options) Create a new ImageClassificationPipeline. Param Type Description options ImagePipelineConstructorArgs An object used to instantiate the pipeline. imageClassificationPipeline._call() : <code> ImageClassificationPipelineCallback </code> Kind : instance method of ImageClassificationPipeline pipelines.ImageSegmentationPipeline Image segmentation pipeline using any AutoModelForXXXSegmentation . This pipeline predicts masks of objects and their classes. Example: Perform image segmentation with Xenova/detr-resnet-50-panoptic . Copied const segmenter = await pipeline ( 'image-segmentation' , 'Xenova/detr-resnet-50-panoptic' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cats.jpg' ; const output = await segmenter (url); // [ // { label: 'remote', score: 0.9984649419784546, mask: RawImage { ... } }, // { label: 'cat', score: 0.9994316101074219, mask: RawImage { ... } } // ] Kind : static class of pipelines .ImageSegmentationPipeline new ImageSegmentationPipeline(options) ._call() : ImageSegmentationPipelineCallback new ImageSegmentationPipeline(options) Create a new ImageSegmentationPipeline. Param Type Description options ImagePipelineConstructorArgs An object used to instantiate the pipeline. imageSegmentationPipeline._call() : <code> ImageSegmentationPipelineCallback </code> Kind : instance method of ImageSegmentationPipeline pipelines.ZeroShotImageClassificationPipeline Zero shot image classification pipeline. This pipeline predicts the class of an image when you provide an image and a set of candidate_labels . Example: Zero shot image classification w/ Xenova/clip-vit-base-patch32 . 
Copied const classifier = await pipeline ( 'zero-shot-image-classification' , 'Xenova/clip-vit-base-patch32' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/tiger.jpg' ; const output = await classifier (url, [ 'tiger' , 'horse' , 'dog' ]); // [ // { score: 0.9993917942047119, label: 'tiger' }, // { score: 0.0003519294841680676, label: 'horse' }, // { score: 0.0002562698791734874, label: 'dog' } // ] Kind : static class of pipelines .ZeroShotImageClassificationPipeline new ZeroShotImageClassificationPipeline(options) ._call() : ZeroShotImageClassificationPipelineCallback new ZeroShotImageClassificationPipeline(options) Create a new ZeroShotImageClassificationPipeline. Param Type Description options TextImagePipelineConstructorArgs An object used to instantiate the pipeline. zeroShotImageClassificationPipeline._call() : <code> ZeroShotImageClassificationPipelineCallback </code> Kind : instance method of ZeroShotImageClassificationPipeline pipelines.ObjectDetectionPipeline Object detection pipeline using any AutoModelForObjectDetection . This pipeline predicts bounding boxes of objects and their classes. Example: Run object-detection with Xenova/detr-resnet-50 . Copied const detector = await pipeline ( 'object-detection' , 'Xenova/detr-resnet-50' ); const img = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cats.jpg' ; const output = await detector (img, { threshold : 0.9 }); // [{ // score: 0.9976370930671692, // label: "remote", // box: { xmin: 31, ymin: 68, xmax: 190, ymax: 118 } // }, // ... // { // score: 0.9984092116355896, // label: "cat", // box: { xmin: 331, ymin: 19, xmax: 649, ymax: 371 } // }] Kind : static class of pipelines .ObjectDetectionPipeline new ObjectDetectionPipeline(options) ._call() : ObjectDetectionPipelineCallback new ObjectDetectionPipeline(options) Create a new ObjectDetectionPipeline. Param Type Description options ImagePipelineConstructorArgs An object used to instantiate the pipeline. objectDetectionPipeline._call() : <code> ObjectDetectionPipelineCallback </code> Kind : instance method of ObjectDetectionPipeline pipelines.ZeroShotObjectDetectionPipeline Zero-shot object detection pipeline. This pipeline predicts bounding boxes of objects when you provide an image and a set of candidate_labels . Example: Zero-shot object detection w/ Xenova/owlvit-base-patch32 . Copied const detector = await pipeline ( 'zero-shot-object-detection' , 'Xenova/owlvit-base-patch32' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/astronaut.png' ; const candidate_labels = [ 'human face' , 'rocket' , 'helmet' , 'american flag' ]; const output = await detector (url, candidate_labels); // [ // { // score: 0.24392342567443848, // label: 'human face', // box: { xmin: 180, ymin: 67, xmax: 274, ymax: 175 } // }, // { // score: 0.15129457414150238, // label: 'american flag', // box: { xmin: 0, ymin: 4, xmax: 106, ymax: 513 } // }, // { // score: 0.13649864494800568, // label: 'helmet', // box: { xmin: 277, ymin: 337, xmax: 511, ymax: 511 } // }, // { // score: 0.10262022167444229, // label: 'rocket', // box: { xmin: 352, ymin: -1, xmax: 463, ymax: 287 } // } // ] Example: Zero-shot object detection w/ Xenova/owlvit-base-patch32 (returning top 4 matches and setting a threshold). 
Copied const detector = await pipeline ( 'zero-shot-object-detection' , 'Xenova/owlvit-base-patch32' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/beach.png' ; const candidate_labels = [ 'hat' , 'book' , 'sunglasses' , 'camera' ]; const output = await detector (url, candidate_labels, { top_k : 4 , threshold : 0.05 }); // [ // { // score: 0.1606510728597641, // label: 'sunglasses', // box: { xmin: 347, ymin: 229, xmax: 429, ymax: 264 } // }, // { // score: 0.08935828506946564, // label: 'hat', // box: { xmin: 38, ymin: 174, xmax: 258, ymax: 364 } // }, // { // score: 0.08530698716640472, // label: 'camera', // box: { xmin: 187, ymin: 350, xmax: 260, ymax: 411 } // }, // { // score: 0.08349756896495819, // label: 'book', // box: { xmin: 261, ymin: 280, xmax: 494, ymax: 425 } // } // ] Kind : static class of pipelines .ZeroShotObjectDetectionPipeline new ZeroShotObjectDetectionPipeline(options) ._call() : ZeroShotObjectDetectionPipelineCallback new ZeroShotObjectDetectionPipeline(options) Create a new ZeroShotObjectDetectionPipeline. Param Type Description options TextImagePipelineConstructorArgs An object used to instantiate the pipeline. zeroShotObjectDetectionPipeline._call() : <code> ZeroShotObjectDetectionPipelineCallback </code> Kind : instance method of ZeroShotObjectDetectionPipeline pipelines.DocumentQuestionAnsweringPipeline Document Question Answering pipeline using any AutoModelForDocumentQuestionAnswering . The inputs/outputs are similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR’d words/boxes) as input instead of text context. Example: Answer questions about a document with Xenova/donut-base-finetuned-docvqa . Copied const qa_pipeline = await pipeline ( 'document-question-answering' , 'Xenova/donut-base-finetuned-docvqa' ); const image = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/invoice.png' ; const question = 'What is the invoice number?' ; const output = await qa_pipeline (image, question); // [{ answer: 'us-001' }] Kind : static class of pipelines .DocumentQuestionAnsweringPipeline new DocumentQuestionAnsweringPipeline(options) ._call() : DocumentQuestionAnsweringPipelineCallback new DocumentQuestionAnsweringPipeline(options) Create a new DocumentQuestionAnsweringPipeline. Param Type Description options TextImagePipelineConstructorArgs An object used to instantiate the pipeline. documentQuestionAnsweringPipeline._call() : <code> DocumentQuestionAnsweringPipelineCallback </code> Kind : instance method of DocumentQuestionAnsweringPipeline pipelines.TextToAudioPipeline Text-to-audio generation pipeline using any AutoModelForTextToWaveform or AutoModelForTextToSpectrogram . This pipeline generates an audio file from an input text and optional other conditional inputs. Example: Generate audio from text with Xenova/speecht5_tts . Copied const synthesizer = await pipeline ( 'text-to-speech' , 'Xenova/speecht5_tts' , { quantized : false }); const speaker_embeddings = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/speaker_embeddings.bin' ; const out = await synthesizer ( 'Hello, my dog is cute' , { speaker_embeddings }); // { // audio: Float32Array(26112) [-0.00005657337896991521, 0.00020583874720614403, ...], // sampling_rate: 16000 // } You can then save the audio to a .wav file with the wavefile package: Copied import wavefile from 'wavefile' ; import fs from 'fs' ; const wav = new wavefile. WaveFile (); wav. 
fromScratch ( 1 , out. sampling_rate , '32f' , out. audio ); fs. writeFileSync ( 'out.wav' , wav. toBuffer ()); Example: Multilingual speech generation with Xenova/mms-tts-fra . See here for the full list of available languages (1107). Copied const synthesizer = await pipeline ( 'text-to-speech' , 'Xenova/mms-tts-fra' ); const out = await synthesizer ( 'Bonjour' ); // { // audio: Float32Array(23808) [-0.00037693005288019776, 0.0003325853613205254, ...], // sampling_rate: 16000 // } Kind : static class of pipelines .TextToAudioPipeline new TextToAudioPipeline(options) ._call() : TextToAudioPipelineCallback new TextToAudioPipeline(options) Create a new TextToAudioPipeline. Param Type Description options TextToAudioPipelineConstructorArgs An object used to instantiate the pipeline. textToAudioPipeline._call() : <code> TextToAudioPipelineCallback </code> Kind : instance method of TextToAudioPipeline pipelines.ImageToImagePipeline Image to Image pipeline using any AutoModelForImageToImage . This pipeline generates an image based on a previous image input. Example: Super-resolution w/ Xenova/swin2SR-classical-sr-x2-64 Copied const upscaler = await pipeline ( 'image-to-image' , 'Xenova/swin2SR-classical-sr-x2-64' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/butterfly.jpg' ; const output = await upscaler (url); // RawImage { // data: Uint8Array(786432) [ 41, 31, 24, 43, ... ], // width: 512, // height: 512, // channels: 3 // } Kind : static class of pipelines .ImageToImagePipeline new ImageToImagePipeline(options) ._call() : ImageToImagePipelineCallback new ImageToImagePipeline(options) Create a new ImageToImagePipeline. Param Type Description options ImagePipelineConstructorArgs An object used to instantiate the pipeline. imageToImagePipeline._call() : <code> ImageToImagePipelineCallback </code> Kind : instance method of ImageToImagePipeline pipelines.DepthEstimationPipeline Depth estimation pipeline using any AutoModelForDepthEstimation . This pipeline predicts the depth of an image. Example: Depth estimation w/ Xenova/dpt-hybrid-midas Copied const depth_estimator = await pipeline ( 'depth-estimation' , 'Xenova/dpt-hybrid-midas' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cats.jpg' ; const out = await depth_estimator (url); // { // predicted_depth: Tensor { // dims: [ 384, 384 ], // type: 'float32', // data: Float32Array(147456) [ 542.859130859375, 545.2833862304688, 546.1649169921875, ... ], // size: 147456 // }, // depth: RawImage { // data: Uint8Array(307200) [ 86, 86, 86, ... ], // width: 640, // height: 480, // channels: 1 // } // } Kind : static class of pipelines .DepthEstimationPipeline new DepthEstimationPipeline(options) ._call() : DepthEstimationPipelineCallback new DepthEstimationPipeline(options) Create a new DepthEstimationPipeline. Param Type Description options ImagePipelineConstructorArgs An object used to instantiate the pipeline. depthEstimationPipeline._call() : <code> DepthEstimationPipelineCallback </code> Kind : instance method of DepthEstimationPipeline pipelines.pipeline(task, [model], [options]) ⇒ <code> * </code> Utility factory method to build a Pipeline object. Kind : static method of pipelines Returns : * - A Pipeline object for the specified task. Throws : Error If an unsupported pipeline is requested. Param Type Default Description task T The task defining which pipeline will be returned. 
Currently accepted tasks are: "audio-classification" : will return an AudioClassificationPipeline . "automatic-speech-recognition" : will return an AutomaticSpeechRecognitionPipeline . "depth-estimation" : will return a DepthEstimationPipeline . "document-question-answering" : will return a DocumentQuestionAnsweringPipeline . "feature-extraction" : will return a FeatureExtractionPipeline . "fill-mask" : will return a FillMaskPipeline . "image-classification" : will return an ImageClassificationPipeline . "image-segmentation" : will return an ImageSegmentationPipeline . "image-to-text" : will return an ImageToTextPipeline . "object-detection" : will return an ObjectDetectionPipeline . "question-answering" : will return a QuestionAnsweringPipeline . "summarization" : will return a SummarizationPipeline . "text2text-generation" : will return a Text2TextGenerationPipeline . "text-classification" (alias "sentiment-analysis" available): will return a TextClassificationPipeline . "text-generation" : will return a TextGenerationPipeline . "token-classification" (alias "ner" available): will return a TokenClassificationPipeline . "translation" : will return a TranslationPipeline . "translation_xx_to_yy" : will return a TranslationPipeline . "zero-shot-classification" : will return a ZeroShotClassificationPipeline . "zero-shot-audio-classification" : will return a ZeroShotAudioClassificationPipeline . "zero-shot-image-classification" : will return a ZeroShotImageClassificationPipeline . "zero-shot-object-detection" : will return a ZeroShotObjectDetectionPipeline . [model] string null The name of the pre-trained model to use. If not specified, the default model for the task will be used. [options] * Optional parameters for the pipeline (see the usage sketch at the end of this reference). pipelines~ImagePipelineInputs : <code> string </code> | <code> RawImage </code> | <code> URL </code> Kind : inner typedef of pipelines pipelines~AudioPipelineInputs : <code> string </code> | <code> URL </code> | <code> Float32Array </code> | <code> Float64Array </code> Kind : inner typedef of pipelines pipelines~BoundingBox : <code> Object </code> Kind : inner typedef of pipelines Properties Name Type Description xmin number The minimum x coordinate of the bounding box. ymin number The minimum y coordinate of the bounding box. xmax number The maximum x coordinate of the bounding box. ymax number The maximum y coordinate of the bounding box. pipelines~Disposable ⇒ <code> Promise. < void > </code> Kind : inner typedef of pipelines Returns : Promise.<void> - A promise that resolves when the item has been disposed. Properties Name Type Description dispose DisposeType A method that disposes the item and returns a promise which resolves once disposal is complete. pipelines~TextPipelineConstructorArgs : <code> Object </code> An object used to instantiate a text-based pipeline. Kind : inner typedef of pipelines Properties Name Type Description task string The task of the pipeline. Useful for specifying subtasks. model PreTrainedModel The model used by the pipeline. tokenizer PreTrainedTokenizer The tokenizer used by the pipeline. pipelines~ImagePipelineConstructorArgs : <code> Object </code> An object used to instantiate an image-based pipeline. Kind : inner typedef of pipelines Properties Name Type Description task string The task of the pipeline. Useful for specifying subtasks. model PreTrainedModel The model used by the pipeline. processor Processor The processor used by the pipeline. pipelines~TextImagePipelineConstructorArgs : <code> Object </code> An object used to instantiate a text- and image-based pipeline.
Kind : inner typedef of pipelines Properties Name Type Description task string The task of the pipeline. Useful for specifying subtasks. model PreTrainedModel The model used by the pipeline. tokenizer PreTrainedTokenizer The tokenizer used by the pipeline. processor Processor The processor used by the pipeline. pipelines~TextClassificationPipelineType ⇒ <code> Promise. < (TextClassificationOutput|Array < TextClassificationOutput > ) > </code> Parameters specific to text classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(TextClassificationOutput|Array<TextClassificationOutput>)> - An array or object containing the predicted labels and scores. Param Type Description texts string | Array<string> The input text(s) to be classified. [options] TextClassificationPipelineOptions The options to use for text classification. Properties Name Type Default Description label string The label predicted. score number The corresponding probability. [top_k] number 1 The number of top predictions to be returned. pipelines~TokenClassificationPipelineType ⇒ <code> Promise. < (TokenClassificationOutput|Array < TokenClassificationOutput > ) > </code> Parameters specific to token classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(TokenClassificationOutput|Array<TokenClassificationOutput>)> - The result. Param Type Description texts string | Array<string> One or several texts (or one list of texts) for token classification. [options] TokenClassificationPipelineOptions The options to use for token classification. Properties Name Type Description word string The token/word classified. This is obtained by decoding the selected tokens. score number The corresponding probability for entity . entity string The entity predicted for that token/word. index number The index of the corresponding token in the sentence. [start] number The index of the start of the corresponding entity in the sentence. [end] number The index of the end of the corresponding entity in the sentence. [ignore_labels] Array.<string> A list of labels to ignore. pipelines~QuestionAnsweringPipelineType ⇒ <code> Promise. < (QuestionAnsweringOutput|Array < QuestionAnsweringOutput > ) > </code> Parameters specific to question answering pipelines. Kind : inner typedef of pipelines Returns : Promise.<(QuestionAnsweringOutput|Array<QuestionAnsweringOutput>)> - An array or object containing the predicted answers and scores. Param Type Description question string | Array<string> One or several question(s) (must be used in conjunction with the context argument). context string | Array<string> One or several context(s) associated with the question(s) (must be used in conjunction with the question argument). [options] QuestionAnsweringPipelineOptions The options to use for question answering. Properties Name Type Default Description score number The probability associated to the answer. [start] number The character start index of the answer (in the tokenized version of the input). [end] number The character end index of the answer (in the tokenized version of the input). answer string The answer to the question. [top_k] number 1 The number of top answer predictions to be returned. pipelines~FillMaskPipelineType ⇒ <code> Promise. < (FillMaskOutput|Array < FillMaskOutput > ) > </code> Parameters specific to fill mask pipelines. 
Kind : inner typedef of pipelines Returns : Promise.<(FillMaskOutput|Array<FillMaskOutput>)> - An array of objects containing the score, predicted token, predicted token string, and the sequence with the predicted token filled in, or an array of such arrays (one for each input text). If only one input text is given, the output will be an array of objects. Throws : Error When the mask token is not found in the input text. Param Type Description texts string | Array<string> One or several texts (or one list of prompts) with masked tokens. [options] FillMaskPipelineOptions The options to use for masked language modelling. Properties Name Type Default Description sequence string The corresponding input with the mask token prediction. score number The corresponding probability. token number The predicted token id (to replace the masked one). token_str string The predicted token (to replace the masked one). [top_k] number 5 When passed, overrides the number of predictions to return. pipelines~Text2TextGenerationPipelineType ⇒ <code> Promise. < (Text2TextGenerationOutput|Array < Text2TextGenerationOutput > ) > </code> Kind : inner typedef of pipelines Param Type Description texts string | Array<string> Input text for the encoder. [options] * Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Description generated_text string The generated text. pipelines~SummarizationPipelineType ⇒ <code> Promise. < (SummarizationOutput|Array < SummarizationOutput > ) > </code> Kind : inner typedef of pipelines Param Type Description texts string | Array<string> One or several articles (or one list of articles) to summarize. [options] * Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Description summary_text string The summary text. pipelines~TranslationPipelineType ⇒ <code> Promise. < (TranslationOutput|Array < TranslationOutput > ) > </code> Kind : inner typedef of pipelines Param Type Description texts string | Array<string> Texts to be translated. [options] * Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Description translation_text string The translated text. pipelines~TextGenerationPipelineType ⇒ <code> Promise. < (TextGenerationOutput|Array < TextGenerationOutput > ) > </code> Parameters specific to text-generation pipelines. Kind : inner typedef of pipelines Returns : Promise.<(TextGenerationOutput|Array<TextGenerationOutput>)> - An array or object containing the generated texts. Param Type Description texts string | Array<string> | Chat | Array<Chat> One or several prompts (or one list of prompts) to complete. [options] Partial.<TextGenerationConfig> Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Default Description generated_text string | Chat The generated text. [add_special_tokens] boolean Whether or not to add special tokens when tokenizing the sequences. [return_full_text] boolean true If set to false only added text is returned, otherwise the full text is returned. pipelines~ZeroShotClassificationPipelineType ⇒ <code> Promise. < (ZeroShotClassificationOutput|Array < ZeroShotClassificationOutput > ) > </code> Parameters specific to zero-shot classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(ZeroShotClassificationOutput|Array<ZeroShotClassificationOutput>)> - An array or object containing the predicted labels and scores. 
Param Type Description texts string | Array<string> The sequence(s) to classify, will be truncated if the model input is too large. candidate_labels string | Array<string> The set of possible class labels to classify each sequence into. Can be a single label, a string of comma-separated labels, or a list of labels. [options] ZeroShotClassificationPipelineOptions The options to use for zero-shot classification. Properties Name Type Default Description sequence string The sequence for which this is the output. labels Array.<string> The labels sorted by order of likelihood. scores Array.<number> The probabilities for each of the labels. [hypothesis_template] string ""This example is {}."" The template used to turn each candidate label into an NLI-style hypothesis. The candidate label will replace the {} placeholder. [multi_label] boolean false Whether or not multiple candidate labels can be true. If false , the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If true , the labels are considered independent and probabilities are normalized for each candidate by doing a softmax of the entailment score vs. the contradiction score. pipelines~FeatureExtractionPipelineType ⇒ <code> Promise. < Tensor > </code> Parameters specific to feature extraction pipelines. Kind : inner typedef of pipelines Returns : Promise.<Tensor> - The features computed by the model. Param Type Description texts string | Array<string> One or several texts (or one list of texts) to get the features of. [options] FeatureExtractionPipelineOptions The options to use for feature extraction. Properties Name Type Default Description [pooling] 'none' | 'mean' | 'cls' "none" The pooling method to use. [normalize] boolean false Whether or not to normalize the embeddings in the last dimension. [quantize] boolean false Whether or not to quantize the embeddings. [precision] 'binary' | 'ubinary' 'binary' The precision to use for quantization. pipelines~ImageFeatureExtractionPipelineType ⇒ <code> Promise. < Tensor > </code> Parameters specific to image feature extraction pipelines. Kind : inner typedef of pipelines Returns : Promise.<Tensor> - The image features computed by the model. Param Type Description images ImagePipelineInputs One or several images (or one list of images) to get the features of. [options] ImageFeatureExtractionPipelineOptions The options to use for image feature extraction. Properties Name Type Default Description [pool] boolean Whether or not to return the pooled output. If set to false , the model will return the raw hidden states. pipelines~AudioClassificationPipelineType ⇒ <code> Promise. < (AudioClassificationOutput|Array < AudioClassificationOutput > ) > </code> Parameters specific to audio classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(AudioClassificationOutput|Array<AudioClassificationOutput>)> - An array or object containing the predicted labels and scores. Param Type Description audio AudioPipelineInputs The input audio file(s) to be classified. The input is either: string or URL that is the filename/URL of the audio file, the file will be read at the processor's sampling rate to get the waveform using the AudioContext API. If AudioContext is not available, you should pass the raw waveform in as a Float32Array of shape (n, ) . Float32Array or Float64Array of shape (n, ) , representing the raw audio at the correct sampling rate (no further check will be done).
[options] AudioClassificationPipelineOptions The options to use for audio classification. Properties Name Type Default Description label string The label predicted. score number The corresponding probability. [top_k] number 5 The number of top labels that will be returned by the pipeline. If the provided number is null or higher than the number of labels available in the model configuration, it will default to the number of labels. pipelines~ZeroShotAudioClassificationPipelineType ⇒ <code> Promise. < (Array < ZeroShotAudioClassificationOutput > |Array < Array < ZeroShotAudioClassificationOutput > > ) > </code> Parameters specific to zero-shot audio classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(Array<ZeroShotAudioClassificationOutput>|Array<Array<ZeroShotAudioClassificationOutput>>)> - An array of objects containing the predicted labels and scores. Param Type Description audio AudioPipelineInputs The input audio file(s) to be classified. The input is either: string or URL that is the filename/URL of the audio file, the file will be read at the processor's sampling rate to get the waveform using the AudioContext API. If AudioContext is not available, you should pass the raw waveform in as a Float32Array of shape (n, ) . Float32Array or Float64Array of shape (n, ) , representing the raw audio at the correct sampling rate (no further check will be done). candidate_labels Array.<string> The candidate labels for this audio. [options] ZeroShotAudioClassificationPipelineOptions The options to use for zero-shot audio classification. Properties Name Type Default Description label string The label identified by the model. It is one of the suggested candidate_label . score number The score attributed by the model for that label (between 0 and 1). [hypothesis_template] string ""This is a sound of {}."" The sentence used in conjunction with candidate_labels to attempt the audio classification by replacing the placeholder with the candidate_labels. Then likelihood is estimated by using logits_per_audio . pipelines~Chunk : <code> Object </code> Kind : inner typedef of pipelines Properties Name Type Description timestamp * The start and end timestamp of the chunk in seconds. text string The recognized text. pipelines~AutomaticSpeechRecognitionPipelineType ⇒ <code> Promise. < (AutomaticSpeechRecognitionOutput|Array < AutomaticSpeechRecognitionOutput > ) > </code> Parameters specific to automatic-speech-recognition pipelines. Kind : inner typedef of pipelines Returns : Promise.<(AutomaticSpeechRecognitionOutput|Array<AutomaticSpeechRecognitionOutput>)> - An object containing the transcription text and optionally timestamps if return_timestamps is true . Param Type Description audio AudioPipelineInputs The input audio file(s) to be transcribed. The input is either: string or URL that is the filename/URL of the audio file, the file will be read at the processor's sampling rate to get the waveform using the AudioContext API. If AudioContext is not available, you should pass the raw waveform in as a Float32Array of shape (n, ) . Float32Array or Float64Array of shape (n, ) , representing the raw audio at the correct sampling rate (no further check will be done). [options] Partial.<AutomaticSpeechRecognitionConfig> Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Description text string The recognized text. 
[chunks] Array.<Chunk> When using return_timestamps , the chunks will become a list containing all the various text chunks identified by the model. [return_timestamps] boolean | 'word' Whether to return timestamps or not. Default is false . [chunk_length_s] number The length of audio chunks to process in seconds. Default is 0 (no chunking). [stride_length_s] number The length of overlap between consecutive audio chunks in seconds. If not provided, defaults to chunk_length_s / 6 . [force_full_sequences] boolean Whether to force outputting full sequences or not. Default is false . [language] string The source language. Default is null , meaning it should be auto-detected. Use this to potentially improve performance if the source language is known. [task] string The task to perform. Default is null , meaning it should be auto-detected. [num_frames] number The number of frames in the input audio. pipelines~ImageToTextPipelineType ⇒ <code> Promise. < (ImageToTextOutput|Array < ImageToTextOutput > ) > </code> Kind : inner typedef of pipelines Returns : Promise.<(ImageToTextOutput|Array<ImageToTextOutput>)> - An object (or array of objects) containing the generated text(s). Param Type Description texts ImagePipelineInputs The images to be captioned. [options] * Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Description generated_text string The generated text. pipelines~ImageClassificationPipelineType ⇒ <code> Promise. < (ImageClassificationOutput|Array < ImageClassificationOutput > ) > </code> Parameters specific to image classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(ImageClassificationOutput|Array<ImageClassificationOutput>)> - An array or object containing the predicted labels and scores. Param Type Description images ImagePipelineInputs The input image(s) to be classified. [options] ImageClassificationPipelineOptions The options to use for image classification. Properties Name Type Default Description label string The label identified by the model. score number The score attributed by the model for that label. [top_k] number 1 The number of top labels that will be returned by the pipeline. pipelines~ImageSegmentationPipelineType ⇒ <code> Promise. < Array < ImageSegmentationPipelineOutput > > </code> Parameters specific to image segmentation pipelines. Kind : inner typedef of pipelines Returns : Promise.<Array<ImageSegmentationPipelineOutput>> - The annotated segments. Param Type Description images ImagePipelineInputs The input images. [options] ImageSegmentationPipelineOptions The options to use for image segmentation. Properties Name Type Default Description label string The label of the segment. score number | null The score of the segment. mask RawImage The mask of the segment. [threshold] number 0.5 Probability threshold to filter out predicted masks. [mask_threshold] number 0.5 Threshold to use when turning the predicted masks into binary values. [overlap_mask_area_threshold] number 0.8 Mask overlap threshold to eliminate small, disconnected segments. [subtask] null | string Segmentation task to be performed. One of [ panoptic , instance , and semantic ], depending on model capabilities. If not set, the pipeline will attempt to resolve (in that order). [label_ids_to_fuse] Array.<number> List of label ids to fuse. If not set, do not fuse any labels. [target_sizes] Array.<Array<number>> List of target sizes for the input images. If not set, use the original image sizes.
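The segmentation options above can be combined in a single call. Below is a minimal sketch reusing the Xenova/detr-resnet-50-panoptic model and cats image from the earlier example; the option values are illustrative rather than recommended defaults:
const segmenter = await pipeline('image-segmentation', 'Xenova/detr-resnet-50-panoptic');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cats.jpg';
// Keep only high-confidence masks and force panoptic post-processing.
const output = await segmenter(url, { threshold: 0.9, subtask: 'panoptic' });
// [ { label: '...', score: ..., mask: RawImage { ... } }, ... ]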
pipelines~ZeroShotImageClassificationPipelineType ⇒ <code> Promise. < (Array < ZeroShotImageClassificationOutput > |Array < Array < ZeroShotImageClassificationOutput > > ) > </code> Parameters specific to zero-shot image classification pipelines. Kind : inner typedef of pipelines Returns : Promise.<(Array<ZeroShotImageClassificationOutput>|Array<Array<ZeroShotImageClassificationOutput>>)> - An array of objects containing the predicted labels and scores. Param Type Description images ImagePipelineInputs The input images. candidate_labels Array.<string> The candidate labels for this image. [options] ZeroShotImageClassificationPipelineOptions The options to use for zero-shot image classification. Properties Name Type Default Description label string The label identified by the model. It is one of the suggested candidate_label . score number The score attributed by the model for that label (between 0 and 1). [hypothesis_template] string ""This is a photo of {}"" The sentence used in conjunction with candidate_labels to attempt the image classification by replacing the placeholder with the candidate_labels. Then likelihood is estimated by using logits_per_image . pipelines~ObjectDetectionPipelineType ⇒ <code> Promise. < (ObjectDetectionPipelineOutput|Array < ObjectDetectionPipelineOutput > ) > </code> Parameters specific to object detection pipelines. Kind : inner typedef of pipelines Returns : Promise.<(ObjectDetectionPipelineOutput|Array<ObjectDetectionPipelineOutput>)> - A list of objects or a list of list of objects. Param Type Description images ImagePipelineInputs The input images. [options] ObjectDetectionPipelineOptions The options to use for object detection. Properties Name Type Default Description label string The class label identified by the model. score number The score attributed by the model for that label. box BoundingBox The bounding box of detected object in image's original size, or as a percentage if percentage is set to true. [threshold] number 0.9 The threshold used to filter boxes by score. [percentage] boolean false Whether to return the boxes coordinates in percentage (true) or in pixels (false). pipelines~ZeroShotObjectDetectionPipelineType ⇒ <code> Promise. < (Array < ZeroShotObjectDetectionOutput > |Array < Array < ZeroShotObjectDetectionOutput > > ) > </code> Parameters specific to zero-shot object detection pipelines. Kind : inner typedef of pipelines Returns : Promise.<(Array<ZeroShotObjectDetectionOutput>|Array<Array<ZeroShotObjectDetectionOutput>>)> - An array of objects containing the predicted labels, scores, and bounding boxes. Param Type Description images ImagePipelineInputs The input images. candidate_labels Array.<string> What the model should recognize in the image. [options] ZeroShotObjectDetectionPipelineOptions The options to use for zero-shot object detection. Properties Name Type Default Description label string Text query corresponding to the found object. score number Score corresponding to the object (between 0 and 1). box BoundingBox Bounding box of the detected object in image's original size, or as a percentage if percentage is set to true. [threshold] number 0.1 The probability necessary to make a prediction. [top_k] number The number of top predictions that will be returned by the pipeline. If the provided number is null or higher than the number of predictions available, it will default to the number of predictions. [percentage] boolean false Whether to return the boxes coordinates in percentage (true) or in pixels (false). 
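To make the percentage option above concrete, here is a small sketch reusing the Xenova/owlvit-base-patch32 model and astronaut image from the earlier zero-shot object detection example (labels and option values are illustrative):
const detector = await pipeline('zero-shot-object-detection', 'Xenova/owlvit-base-patch32');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/astronaut.png';
// With percentage: true, box coordinates are returned relative to the image size rather than in pixels.
const output = await detector(url, ['human face', 'rocket'], { percentage: true, threshold: 0.05 });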
pipelines~DocumentQuestionAnsweringPipelineType ⇒ <code> Promise. < (DocumentQuestionAnsweringOutput|Array < DocumentQuestionAnsweringOutput > ) > </code> Kind : inner typedef of pipelines Returns : Promise.<(DocumentQuestionAnsweringOutput|Array<DocumentQuestionAnsweringOutput>)> - An object (or array of objects) containing the answer(s). Param Type Description image ImageInput The image of the document to use. question string A question to ask of the document. [options] * Additional keyword arguments to pass along to the generate method of the model. Properties Name Type Description answer string The generated text. pipelines~TextToAudioPipelineConstructorArgs : <code> Object </code> Kind : inner typedef of pipelines Properties Name Type Description [vocoder] PreTrainedModel The vocoder used by the pipeline (if the model uses one). If not provided, use the default HifiGan vocoder. pipelines~TextToAudioPipelineType ⇒ <code> Promise. < TextToAudioOutput > </code> Parameters specific to text-to-audio pipelines. Kind : inner typedef of pipelines Returns : Promise.<TextToAudioOutput> - An object containing the generated audio and sampling rate. Param Type Description texts string | Array<string> The text(s) to generate. options TextToAudioPipelineOptions Parameters passed to the model generation/forward method. Properties Name Type Default Description audio Float32Array The generated audio waveform. sampling_rate number The sampling rate of the generated audio waveform. [speaker_embeddings] Tensor | Float32Array | string | URL The speaker embeddings (if the model requires it). pipelines~ImageToImagePipelineType ⇒ <code> Promise. < (RawImage|Array < RawImage > ) > </code> Kind : inner typedef of pipelines Returns : Promise.<(RawImage|Array<RawImage>)> - The transformed image or list of images. Param Type Description images ImagePipelineInputs The images to transform. pipelines~DepthEstimationPipelineType ⇒ <code> Promise. < (DepthEstimationPipelineOutput|Array < DepthEstimationPipelineOutput > ) > </code> Kind : inner typedef of pipelines Returns : Promise.<(DepthEstimationPipelineOutput|Array<DepthEstimationPipelineOutput>)> - An image or a list of images containing result(s). Param Type Description images ImagePipelineInputs The images to compute depth for. Properties Name Type Description predicted_depth Tensor The raw depth map predicted by the model. depth RawImage The processed depth map as an image (with the same size as the input image). pipelines~AllTasks : <code> * </code> All possible pipeline types. Kind : inner typedef of pipelines
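To tie the reference together, here is the usage sketch of the pipeline() factory referenced from the [options] parameter above. It assumes the library is imported as @huggingface/transformers (adjust to the package name of the version you use); the model id and option values mirror the class examples earlier on this page and are illustrative:
import { pipeline } from '@huggingface/transformers';
// Use the default model for a task...
const classifier = await pipeline('sentiment-analysis');
// ...or pin a specific model and pass loading options.
const captioner = await pipeline('image-to-text', 'Xenova/vit-gpt2-image-captioning', { quantized: false });
const output = await classifier('I love Transformers.js!');
// e.g. [{ label: 'POSITIVE', score: ... }]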
Numpy_API.txt
Numpy API

safetensors.numpy.load_file < source > ( filename : typing.Union[str, os.PathLike] ) → Dict[str, np.ndarray] Parameters filename ( str , or os.PathLike ) — The name of the file which contains the tensors Returns Dict[str, np.ndarray] dictionary that contains name as key, value as np.ndarray Loads a safetensors file into numpy format. Example: Copied from safetensors.numpy import load_file file_path = "./my_folder/bert.safetensors" loaded = load_file(file_path) safetensors.numpy.load < source > ( data : bytes ) → Dict[str, np.ndarray] Parameters data ( bytes ) — The content of a safetensors file Returns Dict[str, np.ndarray] dictionary that contains name as key, value as np.ndarray on cpu Loads a safetensors file into numpy format from pure bytes. Example: Copied from safetensors.numpy import load file_path = "./my_folder/bert.safetensors" with open (file_path, "rb" ) as f: data = f.read() loaded = load(data) safetensors.numpy.save_file < source > ( tensor_dict : typing.Dict[str, numpy.ndarray] filename : typing.Union[str, os.PathLike] metadata : typing.Optional[typing.Dict[str, str]] = None ) → None Parameters tensor_dict ( Dict[str, np.ndarray] ) — The incoming tensors. Tensors need to be contiguous and dense. filename ( str , or os.PathLike ) — The filename we’re saving into. metadata ( Dict[str, str] , optional , defaults to None ) — Optional text only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns None Saves a dictionary of tensors into a file in safetensors format. Example: Copied from safetensors.numpy import save_file import numpy as np tensors = { "embedding" : np.zeros(( 512 , 1024 )), "attention" : np.zeros(( 256 , 256 ))} save_file(tensors, "model.safetensors" ) safetensors.numpy.save < source > ( tensor_dict : typing.Dict[str, numpy.ndarray] metadata : typing.Optional[typing.Dict[str, str]] = None ) → bytes Parameters tensor_dict ( Dict[str, np.ndarray] ) — The incoming tensors. Tensors need to be contiguous and dense.
metadata ( Dict[str, str] , optional , defaults to None ) — Optional text-only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns bytes The raw bytes of the serialized safetensors content. Saves a dictionary of tensors into raw bytes in safetensors format. Example: Copied from safetensors.numpy import save import numpy as np tensors = { "embedding" : np.zeros(( 512 , 1024 )), "attention" : np.zeros(( 256 , 256 ))} byte_data = save(tensors)
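The two halves of this API compose naturally. Below is a small, hedged round-trip sketch combining save_file and load_file as documented above; the metadata key shown ("framework") is just an illustrative label, not something the format requires.

import numpy as np
from safetensors.numpy import save_file, load_file

# Save a dict of numpy arrays together with optional free-form metadata.
tensors = {"embedding": np.zeros((512, 1024)), "attention": np.zeros((256, 256))}
save_file(tensors, "model.safetensors", metadata={"framework": "numpy"})

# Load it back: the result is a plain dict of np.ndarray keyed by tensor name.
loaded = load_file("model.safetensors")
print({name: arr.shape for name, arr in loaded.items()})
# {'embedding': (512, 1024), 'attention': (256, 256)}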
Training_customization.txt
Training customization TRL is designed with modularity in mind so that users are able to efficiently customize the training loop for their needs. Below are some examples of how you can apply and test different techniques. Note: Although these examples use the DPOTrainer, the customization applies to most (if not all) trainers. Train on multiple GPUs / nodes The trainers in TRL use 🤗 Accelerate to enable distributed training across multiple GPUs or nodes. To do so, first create an 🤗 Accelerate config file by running Copied accelerate config and answering the questions according to your multi-gpu / multi-node setup. You can then launch distributed training by running: Copied accelerate launch your_script.py We also provide config files in the examples folder that can be used as templates. To use these templates, simply pass the path to the config file when launching a job, e.g.: Copied accelerate launch --config_file=examples/accelerate_configs/multi_gpu.yaml --num_processes {NUM_GPUS} path_to_script.py --all_arguments_of_the_script Refer to the examples page for more details. Distributed training with DeepSpeed All of the trainers in TRL can be run on multiple GPUs together with DeepSpeed ZeRO-{1,2,3} for efficient sharding of the optimizer states, gradients, and model weights. To do so, run: Copied accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero{1,2,3}.yaml --num_processes {NUM_GPUS} path_to_your_script.py --all_arguments_of_the_script Note that for ZeRO-3, a small tweak is needed to initialize your reward model on the correct device via the zero3_init_context_manager() context manager. In particular, this is needed to avoid DeepSpeed hanging after a fixed number of training steps.
Here is a snippet of what is involved from the sentiment_tuning example: Copied ds_plugin = ppo_trainer.accelerator.state.deepspeed_plugin if ds_plugin is not None and ds_plugin.is_zero3_init_enabled(): with ds_plugin.zero3_init_context_manager(enable= False ): sentiment_pipe = pipeline( "sentiment-analysis" , model= "lvwerra/distilbert-imdb" , device=device) else : sentiment_pipe = pipeline( "sentiment-analysis" , model= "lvwerra/distilbert-imdb" , device=device) Consult the 🤗 Accelerate documentation for more information about the DeepSpeed plugin. Use different optimizers and schedulers By default, the DPOTrainer creates a torch.optim.AdamW optimizer. You can create and define a different optimizer and pass it to DPOTrainer as follows: Copied from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer from torch import optim from trl import DPOConfig, DPOTrainer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) dataset = load_dataset( "trl-lib/ultrafeedback_binarized" , split= "train" ) training_args = DPOConfig(output_dir= "Qwen2.5-0.5B-DPO" ) optimizer = optim.SGD(model.parameters(), lr=training_args.learning_rate) trainer = DPOTrainer( model=model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, optimizers=(optimizer, None ), ) trainer.train() Add a learning rate scheduler You can also play with your training by adding learning rate schedulers. Copied from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer from torch import optim from trl import DPOConfig, DPOTrainer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) dataset = load_dataset( "trl-lib/ultrafeedback_binarized" , split= "train" ) training_args = DPOConfig(output_dir= "Qwen2.5-0.5B-DPO" ) optimizer = optim.AdamW(model.parameters(), lr=training_args.learning_rate) lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size= 30 , gamma= 0.1 ) trainer = DPOTrainer( model=model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, optimizers=(optimizer, lr_scheduler), ) trainer.train() Memory efficient fine-tuning by sharing layers Another tool you can use for more memory efficient fine-tuning is to share layers between the reference model and the model you want to train. Copied from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer from trl import create_reference_model, DPOConfig, DPOTrainer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) ref_model = create_reference_model(model, num_shared_layers= 6 ) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) dataset = load_dataset( "trl-lib/ultrafeedback_binarized" , split= "train[:1%]" ) training_args = DPOConfig(output_dir= "Qwen2.5-0.5B-DPO" ) trainer = DPOTrainer( model=model, ref_model=ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, ) trainer.train() Pass 8-bit reference models Since trl supports all keyword arguments when loading a model from transformers using from_pretrained , you can also leverage load_in_8bit from transformers for more memory efficient fine-tuning. Read more about 8-bit model loading in transformers here . 
Copied from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig from trl import DPOConfig, DPOTrainer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) quantization_config = BitsAndBytesConfig(load_in_8bit= True ) ref_model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" , quantization_config= quantization_config) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2.5-0.5B-Instruct" ) dataset = load_dataset( "trl-lib/ultrafeedback_binarized" , split= "train" ) training_args = DPOConfig(output_dir= "Qwen2.5-0.5B-DPO" ) trainer = DPOTrainer( model=model, ref_model=ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, ) trainer.train() Use the CUDA cache optimizer When training large models, it is often worth managing the CUDA cache by clearing it periodically during training. To do so, simply pass optimize_cuda_cache=True to DPOConfig : Copied training_args = DPOConfig(..., optimize_cuda_cache= True )
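Putting the pieces together: the sketch below is a hedged combination of the techniques shown above (a custom optimizer, a learning-rate scheduler, and the CUDA cache optimizer) in a single DPOTrainer run. The CosineAnnealingLR schedule and its T_max value are illustrative choices, not something these docs prescribe.

from datasets import load_dataset
from torch import optim
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# Enable periodic CUDA cache clearing alongside the usual config.
training_args = DPOConfig(output_dir="Qwen2.5-0.5B-DPO", optimize_cuda_cache=True)

# Custom optimizer and scheduler, passed through the optimizers tuple.
optimizer = optim.AdamW(model.parameters(), lr=training_args.learning_rate)
lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    optimizers=(optimizer, lr_scheduler),
)
trainer.train()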
Image_Segmentation.txt
Image Segmentation Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Image Segmentation api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Image Segmentation Image Segmentation divides an image into segments where each pixel in the image is mapped to an object. For more details about the image-segmentation task, check out its dedicated page ! You will find examples and related materials. Recommended models openmmlab/upernet-convnext-small : Solid semantic segmentation model trained on ADE20k. facebook/mask2former-swin-large-coco-panoptic : Panoptic segmentation model trained on the COCO (common objects) dataset. Explore all available models and find the one that suits you best here . Using the API Python JavaScript cURL Copied import requests API_URL = "https://api-inference.huggingface.co/models/openmmlab/upernet-convnext-small" headers = { "Authorization" : "Bearer hf_***" } def query ( filename ): with open (filename, "rb" ) as f: data = f.read() response = requests.post(API_URL, headers=headers, data=data) return response.json() output = query( "cats.jpg" ) To use the Python client, see huggingface_hub ’s package reference . API specification Request Payload inputs* string The input image data as a base64-encoded string. If no parameters are provided, you can also provide the image data as a raw bytes payload. parameters object mask_threshold number Threshold to use when turning the predicted masks into binary values. overlap_mask_area_threshold number Mask overlap threshold to eliminate small, disconnected segments. subtask enum Possible values: instance, panoptic, semantic. threshold number Probability threshold to filter out predicted masks. Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). 
However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, so that a genuinely new query is run each time. Read more about caching here . x-wait-for-model boolean, defaults to false If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here . For more information about Inference API headers, check out the parameters guide . Response Body (array) object[] A predicted mask / segment label string The label of the predicted segment. mask string The corresponding mask as a black-and-white image (base64-encoded). score number The confidence score the model assigns to the predicted segment.
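As a complement to the raw-bytes example above, here is a hedged sketch of a parameterized request: per the request spec, when parameters are supplied the image is sent as a base64-encoded string inside a JSON payload, and each returned mask is assumed to be a base64-encoded black-and-white image that Pillow can open. The subtask and threshold values are illustrative.

import base64
import io

import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/facebook/mask2former-swin-large-coco-panoptic"
headers = {"Authorization": "Bearer hf_***"}

# Base64-encode the image so it can travel inside a JSON payload with parameters.
with open("cats.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"inputs": image_b64, "parameters": {"subtask": "panoptic", "threshold": 0.9}}
response = requests.post(API_URL, headers=headers, json=payload)

# Each element carries a label, a confidence score, and a base64-encoded mask image.
for segment in response.json():
    mask = Image.open(io.BytesIO(base64.b64decode(segment["mask"])))
    print(segment["label"], round(segment["score"], 3), mask.size)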
Using_mlx-image_at_Hugging_Face.txt
Using mlx-image at Hugging Face Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using mlx-image at Hugging Face Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Adapters AllenNLP BERTopic Asteroid Diffusers ESPnet fastai Flair Keras TF-Keras (legacy) ML-Agents mlx-image MLX OpenCLIP PaddleNLP peft RL-Baselines3-Zoo Sample Factory Sentence Transformers SetFit spaCy SpanMarker SpeechBrain Stable-Baselines3 Stanza TensorBoard timm Transformers Transformers.js Unity Sentis Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using mlx-image at Hugging Face mlx-image is an image models library developed by Riccardo Musmeci built on Apple MLX . It tries to replicate the great timm , but for MLX models. Exploring mlx-image on the Hub You can find mlx-image models by filtering using the mlx-image library name, like in this query . There’s also an open mlx-vision community for contributors converting and publishing weights for MLX format. Installation Copied pip install mlx-image Models Model weights are available on the mlx-vision community on HuggingFace. To load a model with pre-trained weights: Copied from mlxim.model import create_model # loading weights from HuggingFace (https://huggingface.co/mlx-vision/resnet18-mlxim) model = create_model( "resnet18" ) # pretrained weights loaded from HF # loading weights from local file model = create_model( "resnet18" , weights= "path/to/resnet18/model.safetensors" ) To list all available models: Copied from mlxim.model import list_models list_models() ImageNet-1K Results Go to results-imagenet-1k.csv to check every model converted to mlx-image and its performance on ImageNet-1K with different settings. TL;DR performance is comparable to the original models from PyTorch implementations. 
Similarity to PyTorch and other familiar tools mlx-image tries to be as close as possible to PyTorch: DataLoader -> you can define your own collate_fn and also use num_workers to speed up data loading Dataset -> mlx-image already supports LabelFolderDataset (the good old PyTorch ImageFolder ) and FolderDataset (a generic folder with images in it) ModelCheckpoint -> keeps track of the best model and saves it to disk (similar to PyTorchLightning). It also suggests early stopping Training Training is similar to PyTorch. Here's an example of how to train a model: Copied import mlx.core as mx import mlx.nn as nn import mlx.optimizers as optim from mlxim.model import create_model from mlxim.data import LabelFolderDataset, DataLoader train_dataset = LabelFolderDataset( root_dir="path/to/train", class_map={0: "class_0", 1: "class_1", 2: ["class_2", "class_3"]} ) train_loader = DataLoader( dataset=train_dataset, batch_size=32, shuffle=True, num_workers=4 ) model = create_model("resnet18") # pretrained weights loaded from HF optimizer = optim.Adam(learning_rate=1e-3) def train_step(model, inputs, targets): logits = model(inputs) loss = mx.mean(nn.losses.cross_entropy(logits, targets)) return loss # build the value-and-grad transform once, outside the training loop train_step_fn = nn.value_and_grad(model, train_step) model.train() for epoch in range(10): for batch in train_loader: x, target = batch loss, grads = train_step_fn(model, x, target) optimizer.update(model, grads) mx.eval(model.state, optimizer.state) Additional Resources mlx-image repository mlx-vision community Contact If you have any questions, please email [email protected] .
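For completeness, here is a hedged evaluation sketch built only from the pieces shown above (create_model, LabelFolderDataset, DataLoader) plus core MLX ops. The "path/to/val" directory, the class_map, and the accuracy bookkeeping are illustrative and not part of the mlx-image API.

import mlx.core as mx
from mlxim.model import create_model
from mlxim.data import LabelFolderDataset, DataLoader

# Illustrative validation set, mirroring the training example above.
val_dataset = LabelFolderDataset(
    root_dir="path/to/val",
    class_map={0: "class_0", 1: "class_1"},
)
val_loader = DataLoader(dataset=val_dataset, batch_size=32, shuffle=False, num_workers=4)

model = create_model("resnet18")  # pretrained weights loaded from HF
model.eval()

correct, total = 0, 0
for batch in val_loader:
    x, target = batch
    logits = model(x)
    preds = mx.argmax(logits, axis=-1)
    # Count correct predictions in this batch.
    correct += int(mx.sum(preds == target).item())
    total += target.shape[0]

print(f"top-1 accuracy: {correct / total:.3f}")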
Interface__TextGenerationStreamOutput.txt
Interface: TextGenerationStreamOutput Properties details • details : null | TextGenerationStreamDetails Generation details. Only available when the generation is finished. Defined in inference/src/tasks/nlp/textGenerationStream.ts:82 generated_text • generated_text : null | string Complete generated text. Only available when the generation is finished. Defined in inference/src/tasks/nlp/textGenerationStream.ts:77 index • Optional index : number Defined in inference/src/tasks/nlp/textGenerationStream.ts:70 token • token : TextGenerationStreamToken Generated token, one at a time. Defined in inference/src/tasks/nlp/textGenerationStream.ts:72
Controlled_generation.txt
Controlled generation Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Controlled generation Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. 
Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue . We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally be fine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned on a custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. InstructPix2Pix has been explicitly trained to work well with InstructGPT -like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP . Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. 
Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model can edit an image in less than a minute on a consumer GPU as shown here . As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall pipeline might require more memory than a standard StableDiffusionPipeline . An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former involves fine-tuning the pre-trained weights while the latter does not. This means that you can apply Pix2Pix Zero to any of the available Stable Diffusion models. Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline . Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). 
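To make the MultiDiffusion Panorama description above concrete, here is a hedged diffusers sketch. The stabilityai/stable-diffusion-2-base checkpoint, the DDIM scheduler, and the 512x2048 output size follow the public diffusers example for this pipeline and are assumptions rather than requirements of this guide.

import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# A wide, non-square output is where this pipeline differs from a standard run.
image = pipe("a photo of the dolomites", height=512, width=2048).images[0]
image.save("panorama.png")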
Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free approach applicable to a wide range of popular diffusion models, which exploits the self-attention layer present in the most widely used architectures to condition the diffusion process on a set of feedback images. < > Update on GitHub ← Philosophy How to contribute? → Controlled generation Instruct Pix2 Pix Pix2 Pix Zero Attend and Excite Semantic Guidance (SEG A) Self-attention Guidance (SA G) Depth2 Image Multi Diffusion Panorama Fine-tuning your own models Dream Booth Textual Inversion Control Net Prompt Weighting Custom Diffusion Model Editing Diff Edit T2 I- Adapter Fabric
Interface__AudioToAudioOutputValue.txt
Interface: AudioToAudioOutputValue Properties blob • blob : string Base64 encoded audio output. Defined in inference/src/tasks/audio/audioToAudio.ts:21 content-type • content-type : string Content-type for blob, e.g. audio/flac Defined in inference/src/tasks/audio/audioToAudio.ts:26 label • label : string The label for the audio output (model specific) Defined in inference/src/tasks/audio/audioToAudio.ts:16
Contribute_new_quantization_method.txt
Contribute new quantization method Transformers supports and integrates many quantization methods such as QLoRA, GPTQ, LLM.int8, and AWQ. However, there are other quantization approaches that are not yet integrated. To make adding and using these quantization methods with Transformers models easier, you should use the HfQuantizer class. The HfQuantizer is designed as an internal helper class for adding a quantization method instead of something you apply to every PyTorch module. This guide will show you how to integrate a new quantization method with the HfQuantizer class. Requirements Before integrating a new quantization method into Transformers, ensure the method you are trying to add meets the following prerequisites. Only quantization methods that can be run with PyTorch modules are currently supported. The quantization method is available through a Python package that is pip-installable by anyone (it is also fine if you can only install the package from source). Ideally, pre-compiled kernels are included in the pip package. The method can run on commonly-used hardware (CPU, GPU, …). The method is wrapped in a nn.Module (e.g., Linear8bitLt , Linear4bit ), and the quantized linear layer should have the following definition: Copied class Linear4bit (nn.Module): def __init__ ( self, ... ): ... def forward ( self, x ): return my_4bit_kernel(x, self.weight, self.bias) This way, Transformers models can be easily quantized by replacing some instances of nn.Linear with a target class (a fuller module-swap sketch follows at the end of this section). The quantization method should be serializable. You can save the quantized weights locally or push them to the Hub. Make sure the package that contains the quantization kernels/primitives is stable (no frequent breaking changes). Some quantization methods may require "pre-quantizing" the models through data calibration (e.g., AWQ). In this case, we prefer to only support inference in Transformers and let the third-party library maintained by the ML community deal with the model quantization itself.
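The nn.Module-wrapping prerequisite above is easiest to see with a concrete, if naive, sketch. Everything below (Int8Linear, its from_float helper, and the dequantize-then-matmul forward) is an illustrative placeholder for whatever kernels your method ships, not an API from Transformers or any real quantization package; it only shows the "replace nn.Linear with a target class" pattern.

import torch
import torch.nn as nn

class Int8Linear(nn.Module):
    """Placeholder int8 linear layer: stores an int8 weight plus a per-row scale."""

    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        self.register_buffer("weight_int8", torch.zeros(out_features, in_features, dtype=torch.int8))
        self.register_buffer("scale", torch.ones(out_features, 1))
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None

    @classmethod
    def from_float(cls, linear: nn.Linear) -> "Int8Linear":
        # Simple symmetric per-row quantization of an existing float linear layer.
        q = cls(linear.in_features, linear.out_features, bias=linear.bias is not None)
        w = linear.weight.detach()
        scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
        q.scale.copy_(scale)
        q.weight_int8.copy_((w / scale).round().to(torch.int8))
        if linear.bias is not None:
            q.bias.data.copy_(linear.bias.detach())
        return q

    def forward(self, x):
        # A naive reference "kernel": dequantize and fall back to a float matmul.
        w = self.weight_int8.float() * self.scale
        return nn.functional.linear(x, w, self.bias)

def replace_linear_layers(module: nn.Module) -> None:
    """Recursively swap every nn.Linear for the quantized placeholder class."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, Int8Linear.from_float(child))
        else:
            replace_linear_layers(child)

Calling replace_linear_layers(model) would quantize a model in place; in a real integration this swap happens inside the HfQuantizer hooks described in the next section.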
Build a new HFQuantizer class Create a new quantization config class inside src/transformers/utils/quantization_config.py and make sure to expose the new quantization config inside Transformers main init by adding it to the _import_structure object of src/transformers/__init__.py . Create a new file inside src/transformers/quantizers/ named quantizer_your_method.py , and make it inherit from src/transformers/quantizers/base.py::HfQuantizer . Make sure to add the new quantizer and quantization config in the quantization auto-mapping in src/transformers/quantizers/auto.py . Define the following class attributes/property methods for your quantization method: requires_calibration : Whether the quantization method requires a data calibration process. If set to True , you can only support inference (with pre-quantized weights), not quantization followed by inference. required_packages : A list of strings of the required packages to use the quantized weights. You might need to define some new utility methods such as is_auto_awq_available in transformers/src/utils/import_utils.py . requires_parameters_quantization : Only required if your quantization method requires extra attention to the underlying nn.Parameter object. For example, bitsandbytes uses Params4bit and Int8Param , which require some extra attention when quantizing the model. Most recent quantization methods pack int2/int4 weights inside torch.uint8 weights, so this flag should rarely be needed (it is set to False by default). is_serializable : A property method to determine whether the method is serializable or not. is_trainable : A property method to determine whether you can fine-tune models on top of the quantization method (with or without PEFT approaches). Write the validate_environment and update_torch_dtype methods. These methods are called before creating the quantized model to ensure users use the right configuration. You can have a look at how this is done on other quantizers. Write the _process_model_before_weight_loading method. In Transformers, the quantized models are initialized first on the "meta" device before loading the weights. This means the _process_model_before_weight_loading method takes care of manipulating the model skeleton to replace some modules (e.g., nn.Linear ) with the target modules (quantization modules). You can define a module replacement logic or any other utility method by creating a new file in transformers/src/integrations/ and exposing the relevant methods in that folder's __init__.py file. The best starting point would be to have a look at other quantization methods such as quantizer_awq.py . Write the _process_model_after_weight_loading method. This method enables implementing additional features that require manipulating the model after loading the weights. Document everything! Make sure your quantization method is documented by adding a new file under docs/source/en/quantization and adding a new row in the table in docs/source/en/quantization/overview.md . Add tests! You should add tests by first adding the package in our nightly Dockerfile inside docker/transformers-quantization-latest-gpu and then adding a new test file in tests/quantization/xxx . Feel free to check out how it is implemented for other quantization methods.
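A minimal skeleton tying the steps above together is sketched below. The class name, the my_quant_lib package, and the method bodies are placeholders; only the attribute and hook names come from the list above, and exact base-class signatures can vary between Transformers versions, so treat this as an orientation aid rather than a drop-in implementation.

import torch
from transformers.quantizers.base import HfQuantizer
from transformers.utils import is_torch_available

class MyMethodHfQuantizer(HfQuantizer):
    # Inference-only support is possible when no data calibration is required.
    requires_calibration = False
    required_packages = ["my_quant_lib"]  # placeholder pip package name
    requires_parameters_quantization = False

    def validate_environment(self, *args, **kwargs):
        # Fail early if the runtime is missing what the method needs.
        if not is_torch_available():
            raise ImportError("This quantization method requires PyTorch to be installed.")

    def update_torch_dtype(self, torch_dtype):
        # Pick a sensible default dtype when the user did not request one.
        return torch_dtype or torch.float16

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The model is still on the "meta" device here: swap nn.Linear modules
        # for the method's quantized layers (see the module-swap sketch above).
        ...

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Optional post-processing once the real weights have been loaded.
        return model

    @property
    def is_serializable(self):
        return True

    @property
    def is_trainable(self):
        return False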
Keras_callbacks.txt
Keras callbacks When training a Transformers model with Keras, there are some library-specific callbacks available to automate common tasks: KerasMetricCallback class transformers.KerasMetricCallback < source > ( metric_fn : typing.Callable eval_dataset : typing.Union[tensorflow.python.data.ops.dataset_ops.DatasetV2, numpy.ndarray, tensorflow.python.framework.tensor.Tensor, tuple, dict] output_cols : typing.Optional[typing.List[str]] = None label_cols : typing.Optional[typing.List[str]] = None batch_size : typing.Optional[int] = None predict_with_generate : bool = False use_xla_generation : bool = False generate_kwargs : typing.Optional[dict] = None ) Parameters metric_fn ( Callable ) — Metric function provided by the user. It will be called with two arguments - predictions and labels . These contain the model's outputs and matching labels from the dataset. It should return a dict mapping metric names to numerical values. eval_dataset ( tf.data.Dataset or dict or tuple or np.ndarray or tf.Tensor ) — Validation data to be used to generate predictions for the metric_fn . output_cols ( List[str] , optional ) — A list of columns to be retained from the model output as the predictions. Defaults to all. label_cols ( List[str] , optional ) — A list of columns to be retained from the input dataset as the labels. Will be autodetected if this is not supplied. batch_size ( int , optional ) — Batch size. Only used when the data is not a pre-batched tf.data.Dataset . predict_with_generate ( bool , optional , defaults to False ) — Whether we should use model.generate() to get outputs for the model. use_xla_generation ( bool , optional , defaults to False ) — If we're generating, whether to compile model generation with XLA. This can massively increase the speed of generation (up to 100X speedup) but will require a new XLA compilation for each input shape. When using XLA generation, it's a good idea to pad your inputs to the same size, or to use the pad_to_multiple_of argument in your tokenizer or DataCollator , which will reduce the number of unique input shapes and save a lot of compilation time. This option has no effect if predict_with_generate is False . generate_kwargs ( dict , optional ) — Keyword arguments to pass to model.generate() when generating.
Has no effect if predict_with_generate is False . Callback to compute metrics at the end of every epoch. Unlike normal Keras metrics, these do not need to be compilable by TF. It is particularly useful for common NLP metrics like BLEU and ROUGE that require string operations or generation loops that cannot be compiled. Predictions (or generations) will be computed on the eval_dataset before being passed to the metric_fn in np.ndarray format. The metric_fn should compute metrics and return a dict mapping metric names to metric values. We provide an example of a suitable metric_fn that computes ROUGE scores for a summarization model below. Note that this example skips some post-processing for readability and simplicity, and should probably not be used as-is! Copied from datasets import load_metric rouge_metric = load_metric( "rouge" ) def rouge_fn ( predictions, labels ): decoded_predictions = tokenizer.batch_decode(predictions, skip_special_tokens= True ) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens= True ) result = rouge_metric.compute(predictions=decoded_predictions, references=decoded_labels) return {key: value.mid.fmeasure * 100 for key, value in result.items()} The above function will return a dict containing values which will be logged like any other Keras metric: Copied {'rouge1': 37.4199 , 'rouge2': 13.9768 , 'rougeL': 34.361 , 'rougeLsum': 35.0781 PushToHubCallback class transformers. PushToHubCallback < source > ( output_dir : typing.Union[str, pathlib.Path] save_strategy : typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'epoch' save_steps : typing.Optional[int] = None tokenizer : typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None hub_model_id : typing.Optional[str] = None hub_token : typing.Optional[str] = None checkpoint : bool = False **model_card_args ) Parameters output_dir ( str ) — The output directory where the model predictions and checkpoints will be written and synced with the repository on the Hub. save_strategy ( str or IntervalStrategy , optional , defaults to "epoch" ) — The checkpoint save strategy to adopt during training. Possible values are: "no" : Save is done at the end of training. "epoch" : Save is done at the end of each epoch. "steps" : Save is done every save_steps save_steps ( int , optional ) — The number of steps between saves when using the “steps” save_strategy . tokenizer ( PreTrainedTokenizerBase , optional ) — The tokenizer used by the model. If supplied, will be uploaded to the repo alongside the weights. hub_model_id ( str , optional ) — The name of the repository to keep in sync with the local output_dir . It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model" , which allows you to push to an organization you are a member of with "organization_name/model" . Will default to the name of output_dir . hub_token ( str , optional ) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login . checkpoint ( bool , optional , defaults to False ) — Whether to save full training checkpoints (including epoch and optimizer state) to allow training to be resumed. Only usable when save_strategy is "epoch" . Callback that will save and push the model to the Hub regularly. By default, it pushes once per epoch, but this can be changed with the save_strategy argument. 
Pushed models can be accessed like any other model on the hub, such as with the from_pretrained method.

from transformers.keras_callbacks import PushToHubCallback

push_to_hub_callback = PushToHubCallback(
    output_dir="./model_save",
    tokenizer=tokenizer,
    hub_model_id="gpt5-7xlarge",
)

model.fit(train_dataset, callbacks=[push_to_hub_callback])
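The two callbacks above can be combined in a single model.fit() call. The sketch below is only illustrative, not part of the reference above: it assumes a compiled Keras seq2seq model, a matching tokenizer, tf.data datasets named train_dataset and validation_dataset, and the rouge_fn defined earlier on this page; the Hub repository name is a hypothetical placeholder.

from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

# Compute ROUGE on the validation set at the end of each epoch,
# generating predictions with model.generate().
metric_callback = KerasMetricCallback(
    metric_fn=rouge_fn,
    eval_dataset=validation_dataset,
    predict_with_generate=True,
)

# Push a checkpoint (and an auto-generated model card) to the Hub once per epoch.
push_callback = PushToHubCallback(
    output_dir="./model_save",                            # local directory kept in sync with the Hub repo
    tokenizer=tokenizer,
    hub_model_id="my-username/my-summarization-model",    # hypothetical repository name
)

model.fit(train_dataset, callbacks=[metric_callback, push_callback], epochs=3)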
How_to_Add_a_Space_to_ArXiv.txt
How to Add a Space to ArXiv

Demos on Hugging Face Spaces allow a wide audience to try out state-of-the-art machine learning research without writing any code. Hugging Face and ArXiv have collaborated to embed these demos directly alongside papers on ArXiv! Thanks to this integration, users can now find the most popular demos for a paper on its arXiv abstract page. For example, if you want to try out demos of the LayoutLM document classification model, you can go to the LayoutLM paper's arXiv page and navigate to the demo tab. You will see open-source demos built by the machine learning community for this model, which you can try out immediately in your browser.

We'll cover two different ways to add your Space to ArXiv and have it show up in the Demos tab.

Prerequisites

There's an existing paper on ArXiv that you'd like to create a demo for.
You have built (or can build) a demo for the model on Spaces.

Method 1 (Recommended): Linking from the Space README

The simplest way to add a Space to an ArXiv paper is to include the link to the paper in the Space README file (README.md). It's good practice to include a full citation as well. You can see an example of a link and a citation on this Echocardiogram Segmentation Space README. And that's it!
Your Space should appear in the Demo tab next to the paper on ArXiv in a few minutes 🤗

Method 2: Linking a Related Model

An alternative approach is to link a Space to a paper through an intermediate model. This requires that the paper is associated with a model that is on the Hugging Face Hub (or can be uploaded there).

First, upload the model associated with the ArXiv paper onto the Hugging Face Hub if it is not already there (detailed instructions are here). When writing the model card (README.md) for the model, include a link to the ArXiv paper. It's good practice to include a full citation as well. You can see an example of a link and a citation on the LayoutLM model card.

Note: you can verify this step has been carried out successfully by checking whether an ArXiv button appears above the model card. In the case of LayoutLM, the button says "arxiv:1912.13318" and links to the LayoutLM paper on ArXiv.

Then, create a demo on Spaces that loads this model. Somewhere within the code, the model name must be included in order for Hugging Face to detect that the Space is associated with it. For example, the docformer_for_document_classification Space loads LayoutLM like this and includes the string "microsoft/layoutlm-base-uncased":

from transformers import LayoutLMForTokenClassification

layoutlm_dummy = LayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased", num_labels=1)

Note: here's an overview on building demos on Hugging Face Spaces, and here are more specific instructions for Gradio and Streamlit.

As soon as your Space is built, Hugging Face will detect that it is associated with the model. A "Linked Models" button should appear in the top right corner of the Space, as shown here.

Note: you can also add linked models manually by explicitly updating them in the README metadata for the Space, as described here.

Your Space should appear in the Demo tab next to the paper on ArXiv in a few minutes 🤗
Fine-tune_Transformers_with_AWS_Trainium.txt
Fine-tune Transformers with AWS Trainium

Training on AWS Trainium is as simple as training with Transformers: you only need to replace the Transformers Trainer class with the NeuronTrainer class. You can find several examples in the official repository for the following tasks: language modeling, question answering, summarization, text classification, translation, image classification, audio classification, speech recognition, and contrastive image-text training.

If you want to go through a step-by-step example, check out the getting started with AWS Trainium and Hugging Face Transformers guide.
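As a rough illustration of the swap described above (not taken from the official examples), here is a minimal sketch. It assumes you already have a loaded model and tokenized train/eval datasets, and that NeuronTrainingArguments is available in optimum.neuron as the Trainium counterpart of TrainingArguments; adjust names to your setup.

from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

training_args = NeuronTrainingArguments(
    output_dir="./trainium-output",   # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

# Drop-in replacement for transformers.Trainer on Trainium instances
trainer = NeuronTrainer(
    model=model,                      # a transformers model you have already loaded
    args=training_args,
    train_dataset=train_dataset,      # your tokenized training split
    eval_dataset=eval_dataset,        # your tokenized evaluation split
)
trainer.train()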
Agents_and_tools.txt
Agents and tools

What is an agent?

Large Language Models (LLMs) trained to perform causal language modeling can tackle a wide range of tasks, but they often struggle with basic tasks like logic, calculation, and search. When prompted in domains in which they do not perform well, they often fail to generate the answer we expect them to.

One approach to overcome this weakness is to create an agent. An agent is a system that uses an LLM as its engine, and it has access to functions called tools. These tools are functions for performing a task, and they contain all the descriptions necessary for the agent to properly use them. The agent can be programmed to:

devise a series of actions/tools and run them all at once, like the CodeAgent
plan and execute actions/tools one by one and wait for the outcome of each action before launching the next one, like the ReactJsonAgent

Types of agents

Code agent

This agent has a planning step, then generates Python code to execute all its actions at once. It natively handles different input and output types for its tools, so it is the recommended choice for multimodal tasks.

React agents

This is the go-to agent for solving reasoning tasks, since the ReAct framework (Yao et al., 2022) makes it really efficient to think on the basis of its previous observations. We implement two versions:

ReactJsonAgent generates its tool calls as JSON in its output.
ReactCodeAgent generates its tool calls as blobs of code, which works really well for LLMs that have strong coding performance.

Read the Open-source LLMs as LangChain Agents blog post to learn more about ReAct agents.

For example, here is how a ReAct Code agent would work its way through the following question.

>>> agent.run(
...     "How many more blocks (also denoted as layers) in BERT base encoder than the encoder from the architecture proposed in Attention is All You Need?",
... )
=====New task=====
How many more blocks (also denoted as layers) in BERT base encoder than the encoder from the architecture proposed in Attention is All You Need?
====Agent is executing the code below:
bert_blocks = search(query="number of blocks in BERT base encoder")
print("BERT blocks:", bert_blocks)
====
Print outputs:
BERT blocks: twelve encoder blocks

====Agent is executing the code below:
attention_layer = search(query="number of layers in Attention is All You Need")
print("Attention layers:", attention_layer)
====
Print outputs:
Attention layers: Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position- 2 Page 3 Figure 1: The Transformer - model architecture.

====Agent is executing the code below:
bert_blocks = 12
attention_layers = 6
diff = bert_blocks - attention_layers
print("Difference in blocks:", diff)
final_answer(diff)
====
Print outputs:
Difference in blocks: 6

Final answer: 6

How can I build an agent?

To initialize an agent, you need these arguments:

an LLM to power your agent - the agent is not exactly the LLM; rather, the agent is a program that uses an LLM as its engine.
a system prompt: what the LLM engine will be prompted with to generate its output.
a toolbox from which the agent picks tools to execute.
a parser to extract which tools to call, and with which arguments, from the LLM output.

Upon initialization of the agent system, the tool attributes are used to generate a tool description, which is then baked into the agent's system_prompt to let it know which tools it can use and why.

To start, install the agents extras in order to install all default dependencies.

pip install transformers[agents]

Build your LLM engine by defining a llm_engine method which accepts a list of messages and returns text. This callable also needs to accept a stop argument that indicates when to stop generating.

from huggingface_hub import login, InferenceClient

login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")

client = InferenceClient(model="meta-llama/Meta-Llama-3-70B-Instruct")

def llm_engine(messages, stop_sequences=["Task"]) -> str:
    response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)
    answer = response.choices[0].message.content
    return answer

You could use any llm_engine method as long as:

it follows the messages format (List[Dict[str, str]]) for its input messages, and it returns a str.
it stops generating outputs at the sequences passed in the stop_sequences argument.

Additionally, llm_engine can also take a grammar argument. If you specify a grammar upon agent initialization, this argument will be passed to the calls to llm_engine, with the grammar that you defined upon initialization, to allow constrained generation in order to force properly-formatted agent outputs.

You will also need a tools argument which accepts a list of Tools - it can be an empty list. You can also add the default toolbox on top of your tools list by setting the optional argument add_base_tools=True.

Now you can create an agent, like CodeAgent, and run it. You can also create a TransformersEngine with a pre-initialized pipeline to run inference on your local machine using transformers. For convenience, since agentic behaviours generally require stronger models such as Llama-3.1-70B-Instruct that are harder to run locally for now, we also provide the HfApiEngine class that initializes a huggingface_hub.InferenceClient under the hood.
from transformers import CodeAgent, HfApiEngine

llm_engine = HfApiEngine(model="meta-llama/Meta-Llama-3-70B-Instruct")
agent = CodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True)

agent.run(
    "Could you translate this sentence from French, say it out loud and return the audio.",
    sentence="Où est la boulangerie la plus proche?",
)

This will be handy in case of emergency baguette need! You can even leave the argument llm_engine undefined, and an HfApiEngine will be created by default.

from transformers import CodeAgent

agent = CodeAgent(tools=[], add_base_tools=True)

agent.run(
    "Could you translate this sentence from French, say it out loud and give me the audio.",
    sentence="Où est la boulangerie la plus proche?",
)

Note that we used an additional sentence argument: you can pass text as additional arguments to the model. You can also use this to indicate the path to local or remote files for the model to use:

from transformers import ReactCodeAgent

agent = ReactCodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True)

agent.run(
    "Why does Mike not know many people in New York?",
    audio="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3",
)

The prompt and output parser were automatically defined, but you can easily inspect them by printing your agent's system_prompt_template.

print(agent.system_prompt_template)

It's important to explain as clearly as possible the task you want to perform. Every run() operation is independent, and since an agent is powered by an LLM, minor variations in your prompt might yield completely different results. You can also run an agent consecutively for different tasks: each time, the attributes agent.task and agent.logs will be re-initialized.

Code execution

A Python interpreter executes the code on a set of inputs passed along with your tools. This should be safe because the only functions that can be called are the tools you provided (especially if they are only tools by Hugging Face) and the print function, so you're already limited in what can be executed. The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue. You can still authorize additional imports by passing the authorized modules as a list of strings in the additional_authorized_imports argument upon initialization of your ReactCodeAgent or CodeAgent:

>>> from transformers import ReactCodeAgent

>>> agent = ReactCodeAgent(tools=[], additional_authorized_imports=['requests', 'bs4'])
>>> agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")

(...)
'Hugging Face – Blog'

The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent. The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!

The system prompt

An agent, or rather the LLM that drives the agent, generates an output based on the system prompt. The system prompt can be customized and tailored to the intended task. For example, check the system prompt for the ReactCodeAgent (the version below is slightly simplified).

You will be given a task to solve as best you can. You have access to the following tools:
<<tool_descriptions>>

To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.
At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.
Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with the '/End code' sequence.
During each intermediate step, you can use 'print()' to save whatever important information you will then need.
These print outputs will then be available in the 'Observation:' field, for using this information as input for the next step.
In the end you have to return a final answer using the `final_answer` tool.

Here are a few examples using notional tools:
---
{examples}

The above examples were using notional tools that might not exist for you. You only have access to these tools:
<<tool_names>>

You also can perform computations in the python code you generate.
Always provide a 'Thought:' and a 'Code:\n```py' sequence ending with the '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward.
Remember to not perform too many operations in a single code block! You should split the task into intermediate code blocks.
Print results at the end of each step to save the intermediate results. Then use final_answer() to return the final result.
Remember to make sure that variables you use are all defined.

Now Begin!

The system prompt includes:

An introduction that explains how the agent should behave and what tools are.
A description of all the tools, defined by a <<tool_descriptions>> token that is dynamically replaced at runtime with the tools defined/chosen by the user. The tool description comes from the tool attributes name, description, inputs and output_type, and a simple jinja2 template that you can refine.
The expected output format.

You could improve the system prompt, for example, by adding an explanation of the output format.

For maximum flexibility, you can overwrite the whole system prompt template by passing your custom prompt as an argument to the system_prompt parameter.

from transformers import ReactJsonAgent
from transformers.agents import PythonInterpreterTool

agent = ReactJsonAgent(tools=[PythonInterpreterTool()], system_prompt="{your_custom_prompt}")

Please make sure to define the <<tool_descriptions>> string somewhere in the template so the agent is aware of the available tools.

Inspecting an agent run

Here are a few useful attributes to inspect what happened after a run:

agent.logs stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that is then appended to agent.logs.
Running agent.write_inner_memory_from_logs() creates an inner memory of the agent's logs for the LLM to view, as a list of chat messages. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method.

Tools

A tool is an atomic function to be used by an agent. You can for instance check the PythonInterpreterTool: it has a name, a description, input descriptions, an output type, and a __call__ method to perform the action.

When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt.
This lets the agent know which tools it can use and why.

Default toolbox

Transformers comes with a default toolbox for empowering agents, which you can add to your agent upon initialization with the argument add_base_tools=True:

Document question answering: given a document (such as a PDF) in image format, answer a question on this document (Donut)
Image question answering: given an image, answer a question on this image (VILT)
Speech to text: given an audio recording of a person talking, transcribe the speech into text (Whisper)
Text to speech: convert text to speech (SpeechT5)
Translation: translates a given sentence from source language to target language.
DuckDuckGo search*: performs a web search using DuckDuckGo.
Python code interpreter: runs the LLM-generated Python code in a secure environment. This tool will only be added to ReactJsonAgent if you initialize it with add_base_tools=True, since a code-based agent can already natively execute Python code.

You can manually use a tool by calling the load_tool() function with a task to perform.

from transformers import load_tool

tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")

Create a new tool

You can create your own tool for use cases not covered by the default tools from Hugging Face. For example, let's create a tool that returns the most downloaded model for a given task from the Hub. You'll start with the code below.

from huggingface_hub import list_models

task = "text-classification"
model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(model.id)

This code can quickly be converted into a tool, just by wrapping it in a function and adding the tool decorator:

from transformers import tool

@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task for which to get the most downloaded model.
    """
    model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return model.id

The function needs:

A clear name. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's call it model_download_tool.
Type hints on both inputs and output.
A description that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint). All of these will be automatically baked into the agent's system prompt upon initialization, so strive to make them as clear as possible!

This definition format is the same as the tool schemas used in apply_chat_template; the only difference is the added tool decorator: read more on our tool use API here.

Then you can directly initialize your agent:

from transformers import CodeAgent

agent = CodeAgent(tools=[model_download_tool], llm_engine=llm_engine)
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)

You get the following:

======== New task ========
Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?
==== Agent is executing the code below:
most_downloaded_model = model_download_tool(task="text-to-video")
print(f"The most downloaded model for the 'text-to-video' task is {most_downloaded_model}.")
====

And the output:
"The most downloaded model for the 'text-to-video' task is ByteDance/AnimateDiff-Lightning."

Manage your agent's toolbox

If you have already initialized an agent, it is inconvenient to reinitialize it from scratch just to add a tool you want to use. With Transformers, you can manage an agent's toolbox by adding or replacing a tool.

Let's add the model_download_tool to an existing agent initialized with only the default toolbox.

from transformers import CodeAgent

agent = CodeAgent(tools=[], llm_engine=llm_engine, add_base_tools=True)
agent.toolbox.add_tool(model_download_tool)

Now we can leverage both the new tool and the previous text-to-speech tool:

agent.run(
    "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub and return the audio?"
)

Beware when adding tools to an agent that already works well, because it can bias selection towards your tool or lead the agent to select a tool other than the one already defined.

Use the agent.toolbox.update_tool() method to replace an existing tool in the agent's toolbox. This is useful if your new tool is a one-to-one replacement of the existing tool, because the agent already knows how to perform that specific task. Just make sure the new tool follows the same API as the replaced tool, or adapt the system prompt template to ensure all examples using the replaced tool are updated.

Use a collection of tools

You can leverage tool collections by using the ToolCollection object, with the slug of the collection you want to use. Then pass them as a list to initialize your agent, and start using them!

from transformers import ToolCollection, ReactCodeAgent

image_tool_collection = ToolCollection(collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f")
agent = ReactCodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True)

agent.run("Please draw me a picture of rivers and lakes.")

To speed up the start, tools are loaded only if called by the agent. This gets you an image of rivers and lakes.
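The "Inspecting an agent run" section earlier on this page describes agent.logs and write_inner_memory_from_logs() without showing code. Here is a minimal, hedged sketch of what such an inspection might look like, assuming an agent that has already completed a run as in the examples above; the exact structure of each log entry depends on the agent type.

# Assumes `agent` is a CodeAgent / ReactCodeAgent that has already executed agent.run(...)
# as shown earlier on this page.

# Fine-grained logs: one dictionary per step of the run.
for step_log in agent.logs:
    print(step_log)

# Higher-level view: rebuild the agent's memory as a list of chat messages
# (system prompt, task, then LLM outputs and tool observations per step).
messages = agent.write_inner_memory_from_logs()
for message in messages:
    print(message["role"], ":", str(message["content"])[:200])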
🤗_Evaluate.txt
🤗 Evaluate

A library for easily evaluating machine learning models and datasets. With a single line of code, you get access to dozens of evaluation methods for different domains (NLP, Computer Vision, Reinforcement Learning, and more!). Be it on your local machine or in a distributed training setup, you can evaluate your models in a consistent and reproducible way!

Visit the 🤗 Evaluate organization for a full list of available metrics. Each metric has a dedicated Space with an interactive demo showing how to use the metric, and a documentation card detailing the metric's limitations and usage.

Tutorials: learn the basics and become familiar with loading, computing, and saving with 🤗 Evaluate. Start here if you are using 🤗 Evaluate for the first time!

How-to guides: practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Evaluate to solve real-world problems.

Conceptual guides: high-level explanations for building a better understanding of important topics, such as the considerations that go into evaluating a model or dataset and the difference between metrics, measurements, and comparisons.

Reference: technical descriptions of how 🤗 Evaluate classes and methods work.
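As a quick illustration of the "single line of code" claim above (the metric name and the toy labels below are just examples, not taken from this page):

import evaluate

# Load a metric by name from the Hub and compute it on toy data.
accuracy = evaluate.load("accuracy")
result = accuracy.compute(references=[0, 1, 1, 0], predictions=[0, 1, 0, 0])
print(result)  # e.g. {'accuracy': 0.75}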
Using_OpenCV_in_Spaces.txt
Using OpenCV in Spaces

In order to use OpenCV in your Gradio or Streamlit Spaces, you'll need to make the Space install both the Python and the Debian dependencies. This means adding python3-opencv to the packages.txt file, and adding opencv-python to the requirements.txt file. If those files don't exist, you'll need to create them. For an example, see this Gradio project.
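Concretely, each of the two files described above can contain a single line (version pins are omitted here; add them if your Space needs a specific release):

packages.txt:

    python3-opencv

requirements.txt:

    opencv-python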
NeuronTrainer.txt
NeuronTrainer

The NeuronTrainer class provides an extended API for the feature-complete Transformers Trainer. It is used in all the example scripts.

The NeuronTrainer class is optimized for 🤗 Transformers models running on AWS Trainium.

Here is an example of how to customize NeuronTrainer to use a weighted loss (useful when you have an unbalanced training set):

import torch
from torch import nn
from optimum.neuron import NeuronTrainer


class CustomNeuronTrainer(NeuronTrainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.get("labels")
        # forward pass
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # compute custom loss (suppose one has 3 labels with different weights)
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0]))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss

Another way to customize the training loop behavior for the PyTorch NeuronTrainer is to use callbacks that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms…) and take decisions (like early stopping).

NeuronTrainer

class optimum.neuron.NeuronTrainer(*args, **kwargs)

Trainer that is suited for performing training on AWS Trainium instances.

class optimum.neuron.Seq2SeqNeuronTrainer(*args, **kwargs)

Seq2SeqTrainer that is suited for performing training on AWS Trainium instances.
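As a hedged sketch of the callback approach mentioned above: it assumes NeuronTrainer accepts the same callbacks argument as the Transformers Trainer it extends, and that your training arguments enable evaluation, load_best_model_at_end, and a metric_for_best_model; the model and dataset names are placeholders.

from transformers import EarlyStoppingCallback

# Stop training if the evaluation metric has not improved for 3 evaluations.
trainer = CustomNeuronTrainer(
    model=model,                     # your model, as in the weighted-loss example above
    args=training_args,              # your training arguments with evaluation enabled
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()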
More_ways_to_create_Spaces.txt
More ways to create Spaces

Duplicating a Space

You can duplicate a Space by clicking the three dots at the top right and selecting Duplicate this Space. Learn more about it here.

Creating a Space from a model

New! You can now create a Gradio demo directly from most model pages, using the "Deploy -> Spaces" button. As another example of how to create a Space from a set of models, the Model Comparator Space Builder from @farukozderim can be used to create a Space directly from any model hosted on the Hub.
Interface__BaseArgs.txt
Interface: BaseArgs

Properties

accessToken

• Optional accessToken: string

The access token to use. Without it, you'll get rate-limited quickly. It can be created for free at hf.co/settings/token. You can also pass an external Inference provider's key if you intend to call a compatible provider like Sambanova, Together, Replicate…

Defined in inference/src/types.ts:59

endpointUrl

• Optional endpointUrl: string

The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, this URL will be used instead of the default one.

Defined in inference/src/types.ts:76

model

• Optional model: string

The HF model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
Defined in inference/src/types.ts:69

provider

• Optional provider: "fal-ai" | "replicate" | "sambanova" | "together" | "hf-inference"

Set an Inference provider to run this model on. Defaults to the first provider in your user settings that is compatible with this model.

Defined in inference/src/types.ts:83
Using_SpanMarker_at_Hugging_Face.txt
Using SpanMarker at Hugging Face

SpanMarker is a framework for training powerful Named Entity Recognition models using familiar encoders such as BERT, RoBERTa and DeBERTa. Because it is tightly implemented on top of the 🤗 Transformers library, SpanMarker can take good advantage of it. As a result, SpanMarker will be intuitive to use for anyone familiar with Transformers.

Exploring SpanMarker in the Hub

You can find span_marker models by filtering at the left of the models page.

All models on the Hub come with these useful features:

An automatically generated model card with a brief description.
An interactive widget you can use to play with the model directly in the browser.
An Inference API that allows you to make inference requests.

Installation

To get started, you can follow the SpanMarker installation guide. You can also use the following one-line install through pip:

pip install -U span_marker

Using existing models

All span_marker models can easily be loaded from the Hub.

from span_marker import SpanMarkerModel

model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-fewnerd-fine-super")

Once loaded, you can use SpanMarkerModel.predict to perform inference.
model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")

[
    {"span": "Amelia Earhart", "label": "person-other", "score": 0.7629689574241638, "char_start_index": 0, "char_end_index": 14},
    {"span": "Lockheed Vega 5B", "label": "product-airplane", "score": 0.9833564758300781, "char_start_index": 38, "char_end_index": 54},
    {"span": "Atlantic", "label": "location-bodiesofwater", "score": 0.7621214389801025, "char_start_index": 66, "char_end_index": 74},
    {"span": "Paris", "label": "location-GPE", "score": 0.9807717204093933, "char_start_index": 78, "char_end_index": 83}
]

If you want to load a specific SpanMarker model, you can click Use in SpanMarker and you will be given a working snippet!

Additional resources

SpanMarker repository
SpanMarker docs
HQQ.txt
HQQ

Half-Quadratic Quantization (HQQ) implements on-the-fly quantization via fast robust optimization. It doesn't require calibration data and can be used to quantize any model. Please refer to the official package for more details.

For installation, we recommend you use the following approach to get the latest version and build its corresponding CUDA kernels:

pip install hqq

To quantize a model, you need to create an HqqConfig. There are two ways of doing it:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

# Method 1: all linear layers will use the same quantization config
quant_config = HqqConfig(nbits=8, group_size=64)

# Method 2: each linear layer with the same tag will use a dedicated quantization config
q4_config = {'nbits': 4, 'group_size': 64}
q3_config = {'nbits': 3, 'group_size': 32}
quant_config = HqqConfig(dynamic_config={
    'self_attn.q_proj': q4_config,
    'self_attn.k_proj': q4_config,
    'self_attn.v_proj': q4_config,
    'self_attn.o_proj': q4_config,
    'mlp.gate_proj': q3_config,
    'mlp.up_proj': q3_config,
    'mlp.down_proj': q3_config,
})

The second approach is especially interesting for quantizing Mixture-of-Experts (MoEs), because the experts are less affected by lower quantization settings.

Then you simply quantize the model as follows:

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config,
)

Optimized Runtime

HQQ supports various backends, including pure PyTorch and custom dequantization CUDA kernels. These backends are suitable for older GPUs and PEFT/QLoRA training. For faster inference, HQQ supports 4-bit fused kernels (TorchAO and Marlin), reaching up to 200 tokens/sec on a single 4090. For more details on how to use the backends, please refer to https://github.com/mobiusml/hqq/?tab=readme-ov-file#backend
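To make the loading snippet above end-to-end, here is a minimal hedged sketch that also loads a tokenizer and generates with the HQQ-quantized model; the checkpoint name is only an illustrative placeholder, not a recommendation from this page.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

model_id = "meta-llama/Meta-Llama-3-8B"   # placeholder checkpoint for illustration
quant_config = HqqConfig(nbits=4, group_size=64)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config,   # weights are quantized on the fly at load time
)

inputs = tokenizer("Half-Quadratic Quantization is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))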
Using_Adapters_at_Hugging_Face.txt
Using Adapters at Hugging Face

Note: Adapters has replaced the adapter-transformers library and is fully compatible in terms of model weights. See here for more.

Adapters is an add-on library to 🤗 transformers for efficiently fine-tuning pre-trained language models using adapters and other parameter-efficient methods. Adapters also provides various methods for composition of adapter modules during training and inference. You can learn more about this in the Adapters paper.

Exploring Adapters on the Hub

You can find Adapters models by filtering at the left of the models page. Some adapter models can be found in the Adapter Hub repository. Models from both sources are aggregated on the AdapterHub website.

Installation

To get started, you can refer to the AdapterHub installation guide. You can also use the following one-line install through pip:

Copied
pip install adapters

Using existing models

For a full guide on loading pre-trained adapters, we recommend checking out the official guide. As a brief summary, a full setup consists of three steps:

1. Load a base transformers model with the AutoAdapterModel class provided by Adapters.
2. Use the load_adapter() method to load and add an adapter.
3. Activate the adapter via active_adapters (for inference) or activate and set it as trainable via train_adapter() (for training). Make sure to also check out composition of adapters.

Copied
from adapters import AutoAdapterModel

# 1.
model = AutoAdapterModel.from_pretrained("FacebookAI/roberta-base")
# 2.
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb")
# 3.
model.active_adapters = adapter_name
# or model.train_adapter(adapter_name)

You can also use list_adapters to find all adapter models programmatically:

Copied
from adapters import list_adapters

# source can be "ah" (AdapterHub), "hf" (hf.co) or None (for both, default)
adapter_infos = list_adapters(source="hf", model_name="FacebookAI/roberta-base")

If you want to see how to load a specific model, you can click Use in Adapters and you will be given a working snippet that you can use to load it!

Sharing your models

For a full guide on sharing models with Adapters, we recommend checking out the official guide. You can share your adapter by using the push_adapter_to_hub method from a model that already contains an adapter.

Copied
model.push_adapter_to_hub(
    "my-awesome-adapter",
    "awesome_adapter",
    adapterhub_tag="sentiment/imdb",
    datasets_tag="imdb"
)

This command creates a repository with an automatically generated model card and all necessary metadata.

Additional resources

Adapters repository
Adapters docs
Adapters paper
Integration with Hub docs
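As a quick end-to-end illustration of steps 1-3 above, the snippet below runs inference with the loaded adapter. It is a hedged sketch rather than part of the official guide: it assumes the AdapterHub/roberta-base-pf-imdb adapter ships with its sentiment classification head (which the -pf- adapters on the Hub do), and the example sentence is arbitrary.

Copied
from transformers import AutoTokenizer

# Continuing the snippet above, where `model` has the imdb adapter activated
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")

# The prediction head loaded together with the adapter returns classification logits
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)  # index into the head's label space (e.g. negative/positive)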
Performing_gradient_accumulation_with_Accelerate_e.txt
Performing gradient accumulation with Accelerate

Gradient accumulation is a technique where you can train on bigger batch sizes than your machine would normally be able to fit into memory. This is done by accumulating gradients over several batches, and only stepping the optimizer after a certain number of batches have been performed.

While technically standard gradient accumulation code would work fine in a distributed setup, it is not the most efficient method for doing so and you may experience considerable slowdowns!

In this tutorial you will see how to quickly set up gradient accumulation and perform it with the utilities provided in Accelerate, which can amount to adding just one new line of code!
This example will use a very simplistic PyTorch training loop that performs gradient accumulation every two batches:

Copied
device = "cuda"
model.to(device)

gradient_accumulation_steps = 2

for index, batch in enumerate(training_dataloader):
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss = loss / gradient_accumulation_steps
    loss.backward()
    if (index + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()

Converting it to Accelerate

First the code shown earlier will be converted to utilize Accelerate without the special gradient accumulation helper:

Copied
+ from accelerate import Accelerator
+ accelerator = Accelerator()
+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+     model, optimizer, training_dataloader, scheduler
+ )

  for index, batch in enumerate(training_dataloader):
      inputs, targets = batch
-     inputs = inputs.to(device)
-     targets = targets.to(device)
      outputs = model(inputs)
      loss = loss_function(outputs, targets)
      loss = loss / gradient_accumulation_steps
+     accelerator.backward(loss)
      if (index+1) % gradient_accumulation_steps == 0:
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()

In its current state, this code is not going to perform gradient accumulation efficiently due to a process called gradient synchronization. Read more about that in the Concepts tutorial!

Letting Accelerate handle gradient accumulation

All that is left now is to let Accelerate handle the gradient accumulation for us. To do so you should pass in a gradient_accumulation_steps parameter to Accelerator, dictating the number of steps to perform before each call to step() and how to automatically adjust the loss during the call to backward():

Copied
  from accelerate import Accelerator
- accelerator = Accelerator()
+ accelerator = Accelerator(gradient_accumulation_steps=2)

Alternatively, you can pass in a gradient_accumulation_plugin parameter to the Accelerator object's __init__, which will allow you to further customize the gradient accumulation behavior. Read more about that in the GradientAccumulationPlugin docs.

From here you can use the accumulate() context manager from inside your training loop to automatically perform the gradient accumulation for you! You just wrap it around the entire training part of our code:

Copied
- for index, batch in enumerate(training_dataloader):
+ for batch in training_dataloader:
+     with accelerator.accumulate(model):
          inputs, targets = batch
          outputs = model(inputs)

You can remove all the special checks for the step number and the loss adjustment:

Copied
- loss = loss / gradient_accumulation_steps
  accelerator.backward(loss)
- if (index+1) % gradient_accumulation_steps == 0:
  optimizer.step()
  scheduler.step()
  optimizer.zero_grad()

As you can see the Accelerator is able to keep track of the batch number you are on and it will automatically know whether to step through the prepared optimizer and how to adjust the loss.

Typically with gradient accumulation, you would need to adjust the number of steps to reflect the change in total batches you are training on. Accelerate automagically does this for you by default. Behind the scenes we instantiate a GradientAccumulationPlugin configured to do this.

The state.GradientState is sync'd with the active dataloader being iterated upon. As such it assumes naively that when we have reached the end of the dataloader everything will sync and a step will be performed.
To disable this, set sync_with_dataloader to be False in the GradientAccumulationPlugin:

Copied
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

plugin = GradientAccumulationPlugin(sync_with_dataloader=False)
accelerator = Accelerator(..., gradient_accumulation_plugin=plugin)

The finished code

Below is the finished implementation for performing gradient accumulation with Accelerate:

Copied
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)
for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()

It's important that only one forward/backward pass is done inside the with accelerator.accumulate(model) context manager. To learn more about what magic this wraps around, read the Gradient Synchronization concept guide.

Self-contained example

Here is a self-contained example that you can run to see gradient accumulation in action with Accelerate:

Copied
import torch
import copy
from accelerate import Accelerator
from accelerate.utils import set_seed
from torch.utils.data import TensorDataset, DataLoader

# seed
set_seed(0)

# define toy inputs and labels
x = torch.tensor([1., 2., 3., 4., 5., 6., 7., 8.])
y = torch.tensor([2., 4., 6., 8., 10., 12., 14., 16.])
gradient_accumulation_steps = 4
per_device_batch_size = len(x) // gradient_accumulation_steps

# define dataset and dataloader
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=per_device_batch_size)

# define model, optimizer and loss function
class SimpleLinearModel(torch.nn.Module):
    def __init__(self):
        super(SimpleLinearModel, self).__init__()
        self.weight = torch.nn.Parameter(torch.zeros((1, 1)))

    def forward(self, inputs):
        return inputs @ self.weight

model = SimpleLinearModel()
model_clone = copy.deepcopy(model)
criterion = torch.nn.MSELoss()
model_optimizer = torch.optim.SGD(model.parameters(), lr=0.02)
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
model, model_optimizer, dataloader = accelerator.prepare(model, model_optimizer, dataloader)
model_clone_optimizer = torch.optim.SGD(model_clone.parameters(), lr=0.02)
print(f"initial model weight is {model.weight.mean().item():.5f}")
print(f"initial model weight is {model_clone.weight.mean().item():.5f}")
for i, (inputs, labels) in enumerate(dataloader):
    with accelerator.accumulate(model):
        inputs = inputs.view(-1, 1)
        print(i, inputs.flatten())
        labels = labels.view(-1, 1)
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        accelerator.backward(loss)
        model_optimizer.step()
        model_optimizer.zero_grad()
loss = criterion(x.view(-1, 1) @ model_clone.weight, y.view(-1, 1))
model_clone_optimizer.zero_grad()
loss.backward()
model_clone_optimizer.step()
print(f"w/ accumulation, the final model weight is {model.weight.mean().item():.5f}")
print(f"w/o accumulation, the final model weight is {model_clone.weight.mean().item():.5f}")

Copied
initial model weight is 0.00000
initial model weight is 0.00000
0 tensor([1., 2.])
1 tensor([3., 4.])
2 tensor([5., 6.])
3 tensor([7., 8.])
w/ accumulation, the final model weight is 2.04000
w/o accumulation, the final model weight is 2.04000
Gradient accumulation on training samples of variable size

As was pointed out in this blog-post, a common error occurs when performing gradient accumulation on training samples of variable size:

[…] for gradient accumulation across token-level tasks like causal LM training, the correct loss should be computed by the total loss across all batches in a gradient accumulation step divided by the total number of all non padding tokens in those batches. This is not the same as the average of the per-batch loss values.

In other words, some adjustments must be made on losses that operate on a token-level basis.
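To make the quoted point concrete, here is a small, self-contained numeric illustration (the per-token loss values are made up purely for the arithmetic, and no Accelerate code is involved) of how averaging per-batch mean losses diverges from dividing the total loss by the total number of non-padding tokens. The skeleton that follows applies the same correction inside the accumulation loop.

Copied
# Two accumulated micro-batches with different numbers of non-padding tokens.
batch_token_losses = [
    [0.5, 0.5],                      # micro-batch 1: 2 non-padding tokens
    [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],  # micro-batch 2: 6 non-padding tokens
]

# Naive approach: average the per-batch mean losses
naive = sum(sum(losses) / len(losses) for losses in batch_token_losses) / len(batch_token_losses)

# Correct approach: total loss divided by the total number of non-padding tokens
correct = sum(sum(losses) for losses in batch_token_losses) / sum(len(losses) for losses in batch_token_losses)

print(naive)    # 0.75
print(correct)  # 0.875 -> the two disagree whenever micro-batches have unequal token counts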
Skeleton code

Copied
from accelerate import Accelerator
import math
import contextlib

gradient_accumulation_steps = 2
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)
model, optimizer, training_dataloader, scheduler = accelerator.prepare(
    model, optimizer, training_dataloader, scheduler
)

training_iterator = iter(training_dataloader)
num_samples_in_epoch = len(training_dataloader)
remainder = num_samples_in_epoch % gradient_accumulation_steps
remainder = remainder if remainder != 0 else gradient_accumulation_steps
total_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)

total_batched_samples = 0
for update_step in range(total_updates):
    # In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss
    # we need to pre-load the full local batch - i.e the next per_device_batch_size * accumulation_steps samples
    batch_samples = []
    num_batches_in_step = gradient_accumulation_steps if update_step != (total_updates - 1) else remainder
    for _ in range(num_batches_in_step):
        batch_samples += [next(training_iterator)]

    # get local num items in batch
    num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
    # to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
    num_items_in_batch = accelerator.gather(num_items_in_batch).sum().item()

    for i, batch in enumerate(batch_samples):
        # if we perform gradient accumulation in a multi-device set-up, we want to avoid unnecessary communications when accumulating
        # cf: https://muellerzr.github.io/blog/gradient_accumulation.html
        if (i < len(batch_samples) - 1 and accelerator.num_processes > 1):
            ctx = model.no_sync
        else:
            ctx = contextlib.nullcontext
        total_batched_samples += 1
        with ctx():
            inputs, targets = batch
            outputs = model(inputs)
            loss = loss_function(outputs, targets)  # the loss function should sum over samples rather than averaging

            # We multiply by num_processes because DDP calculates the average gradient across all devices, whereas dividing by num_items_in_batch already takes all devices into account
            # Same reason for gradient_accumulation_steps, but this time it's Accelerate that calculates the average gradient across the accumulated steps
            loss = (loss * gradient_accumulation_steps * accelerator.num_processes) / num_items_in_batch

            accelerator.backward(loss)

    # Sync gradients and perform optimization steps once every gradient_accumulation_steps
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()

Self-contained causal LM example

Copied
import torch
import copy
from accelerate import Accelerator
from accelerate.utils import set_seed
from accelerate.logging import get_logger
from torch.utils.data import Dataset, DataLoader
import math
import contextlib

# seed
set_seed(0)
logger = get_logger(__name__)

class MyDataset(Dataset):
    def __init__(self, num_samples):
        super().__init__()
        self.len = num_samples

    def __getitem__(self, index):
        input_ids = torch.arange(1, index + 2, dtype=torch.float32)
        labels = torch.remainder(input_ids, 2)
        return {"input_ids": input_ids, "labels": labels}

    def __len__(self):
        return self.len
def collate_fn(features):
    input_ids = torch.nn.utils.rnn.pad_sequence([f["input_ids"] for f in features], batch_first=True, padding_value=-100)
    labels = torch.nn.utils.rnn.pad_sequence([f["labels"] for f in features], batch_first=True, padding_value=-100)
    return {"input_ids": input_ids[..., None], "labels": labels[..., None]}

# define toy inputs and labels
gradient_accumulation_steps = 2
per_device_batch_size = 4

# define accelerator
accelerator = Accelerator(gradient_accumulation_steps=gradient_accumulation_steps)

# define dataset and dataloader
# for this toy example, we'll compute gradient descent over one single global batch
dataset = MyDataset(per_device_batch_size * gradient_accumulation_steps * accelerator.num_processes)
dataloader = DataLoader(dataset, batch_size=per_device_batch_size, collate_fn=collate_fn)

# define model, model_optimizer and loss function
model = torch.nn.Linear(1, 2, bias=False)
model_clone = copy.deepcopy(model)
criterion = torch.nn.CrossEntropyLoss(reduction="sum")  # must sum over samples rather than averaging
model_optimizer = torch.optim.SGD(model.parameters(), lr=0.08)

logger.warning(f"initial model weight is {model.weight.detach().cpu().squeeze()}")
logger.warning(f"initial model clone weight is {model_clone.weight.detach().cpu().squeeze()}")

# prepare artifacts - accelerator handles device placement and dataloader splitting
model, model_optimizer = accelerator.prepare(model, model_optimizer)
dataloader = accelerator.prepare_data_loader(dataloader, device_placement=True)
training_iterator = iter(dataloader)

num_samples_in_epoch = len(dataloader)
remainder = num_samples_in_epoch % gradient_accumulation_steps
remainder = remainder if remainder != 0 else gradient_accumulation_steps
total_gradient_updates = math.ceil(num_samples_in_epoch / gradient_accumulation_steps)

total_batched_samples = 0
for update_step in range(total_gradient_updates):
    # In order to correctly compute the total number of non-padded tokens on which we'll compute the cross-entropy loss
    # we need to pre-load the full local batch - i.e the next per_device_batch_size * accumulation_steps samples
    batch_samples = []
    num_batches_in_step = gradient_accumulation_steps if update_step != (total_gradient_updates - 1) else remainder
    for _ in range(num_batches_in_step):
        batch_samples += [next(training_iterator)]

    # get local num items in batch
    local_num_items_in_batch = sum([(batch["labels"].ne(-100)).sum() for batch in batch_samples])
    logger.warning(f"Step {update_step} - Device {accelerator.process_index} - num items in the local batch {local_num_items_in_batch}", main_process_only=False)

    # to compute it correctly in a multi-device DDP training, we need to gather the total number of items in the full batch.
    num_items_in_batch = accelerator.gather(local_num_items_in_batch).sum().item()
    logger.warning(f"Total num items {num_items_in_batch}")

    for i, batch in enumerate(batch_samples):
        inputs, labels = batch["input_ids"], batch["labels"]
        total_batched_samples += 1
        # if we perform gradient accumulation in a multi-device set-up, we want to avoid unnecessary communications when accumulating
        # cf: https://muellerzr.github.io/blog/gradient_accumulation.html
        if (i < len(batch_samples) - 1 and accelerator.num_processes > 1):
            ctx = model.no_sync
        else:
            ctx = contextlib.nullcontext
        with ctx():
            outputs = model(inputs)
            loss = criterion(outputs.view(-1, 2), labels.view(-1).to(torch.int64))
            # We multiply by num_processes because DDP calculates the average gradient across all devices, whereas dividing by num_items_in_batch already takes all devices into account
            # Same reason for gradient_accumulation_steps, but this time it's Accelerate that calculates the average gradient across the accumulated steps
            loss = (loss * gradient_accumulation_steps * accelerator.num_processes) / num_items_in_batch
            accelerator.backward(loss)

    model_optimizer.step()
    model_optimizer.zero_grad()

logger.warning(f"Device {accelerator.process_index} - w/ accumulation, the final model weight is {accelerator.unwrap_model(model).weight.detach().cpu().squeeze()}", main_process_only=False)

# We now do the same operation but on a single device and without gradient accumulation
if accelerator.is_main_process:
    # prepare one single entire batch
    dataloader = DataLoader(dataset, batch_size=len(dataset), collate_fn=collate_fn)
    full_batch_without_accum = next(iter(dataloader))
    total_inputs, total_labels = full_batch_without_accum["input_ids"], full_batch_without_accum["labels"]
    model_clone_optimizer = torch.optim.SGD(model_clone.parameters(), lr=0.08)

    # train the cloned model
    loss = torch.nn.CrossEntropyLoss(reduction="mean")(model_clone(total_inputs).view(-1, 2), total_labels.view(-1).to(torch.int64))
    model_clone_optimizer.zero_grad()
    loss.backward()
    model_clone_optimizer.step()

    # We should have the same final weights.
    logger.warning(f"w/o accumulation, the final model weight is {model_clone.weight.detach().cpu().squeeze()}")

Results on a single device - gradient accumulation steps set to 1 and batch_size set to 8:

Copied
initial model weight is tensor([-0.0075,  0.5364])
initial model clone weight is tensor([-0.0075,  0.5364])
Step 0 - Device 0 - num items in the local batch 36
Total num items 36
Device 0 - w/ accumulation, the final model weight is tensor([0.0953, 0.4337])
w/o accumulation, the final model weight is tensor([0.0953, 0.4337])

Results on a two-device set-up - gradient accumulation steps set to 2 and batch_size set to 4:

Copied
initial model weight is tensor([-0.0075,  0.5364])
initial model clone weight is tensor([-0.0075,  0.5364])
Step 0 - Device 0 - num items in the local batch 52
Step 0 - Device 1 - num items in the local batch 84
Total num items 136
Device 1 - w/ accumulation, the final model weight is tensor([0.2117, 0.3172])
Device 0 - w/ accumulation, the final model weight is tensor([0.2117, 0.3172])
w/o accumulation, the final model weight is tensor([0.2117, 0.3172])

To go further:
Please find a complete example script on a real world training run in the examples folder at the path accelerate/examples/by_feature/gradient_accumulation_for_autoregressive_models.py.
Running it on several training configurations with constant global batch size equal to 32 gives the following graph:

Note that the training losses are exactly the same up to training step 20. The small deviation after this training step occurs at the very end of the first epoch, because, by default, the dataloader duplicates the samples at the beginning of the dataset when the total batch size doesn't exactly divide the dataset.
Using_quantized_models_(dtypes).txt
Using quantized models (dtypes)

Before Transformers.js v3, we used the quantized option to specify whether to use a quantized (q8) or full-precision (fp32) variant of the model by setting quantized to true or false, respectively. Now, we've added the ability to select from a much larger list with the dtype parameter.

The list of available quantizations depends on the model, but some common ones are: full-precision ("fp32"), half-precision ("fp16"), 8-bit ("q8", "int8", "uint8"), and 4-bit ("q4", "bnb4", "q4f16") (e.g., mixedbread-ai/mxbai-embed-xsmall-v1).

Basic usage

Example: Run Qwen2.5-0.5B-Instruct in 4-bit quantization (demo)

Copied
import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen2.5-0.5B-Instruct",
  { dtype: "q4", device: "webgpu" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a funny joke." },
];

// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);

Per-module dtypes

Some encoder-decoder models, like Whisper or Florence-2, are extremely sensitive to quantization settings: especially of the encoder. For this reason, we added the ability to select per-module dtypes, which can be done by providing a mapping from module name to dtype.

Example: Run Florence-2 on WebGPU (demo)
from_pretrained ( "onnx-community/Florence-2-base-ft" , { dtype : { embed_tokens : "fp16" , vision_encoder : "fp16" , encoder_model : "q4" , decoder_model_merged : "q4" , }, device : "webgpu" , }, ); See full code example Copied import { Florence2ForConditionalGeneration , AutoProcessor , AutoTokenizer , RawImage , } from "@huggingface/transformers" ; // Load model, processor, and tokenizer const model_id = "onnx-community/Florence-2-base-ft" ; const model = await Florence2ForConditionalGeneration . from_pretrained ( model_id, { dtype : { embed_tokens : "fp16" , vision_encoder : "fp16" , encoder_model : "q4" , decoder_model_merged : "q4" , }, device : "webgpu" , }, ); const processor = await AutoProcessor . from_pretrained (model_id); const tokenizer = await AutoTokenizer . from_pretrained (model_id); // Load image and prepare vision inputs const url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" ; const image = await RawImage . fromURL (url); const vision_inputs = await processor (image); // Specify task and prepare text inputs const task = "<MORE_DETAILED_CAPTION>" ; const prompts = processor. construct_prompts (task); const text_inputs = tokenizer (prompts); // Generate text const generated_ids = await model. generate ({ ...text_inputs, ...vision_inputs, max_new_tokens : 100 , }); // Decode generated text const generated_text = tokenizer. batch_decode (generated_ids, { skip_special_tokens : false , })[ 0 ]; // Post-process the generated text const result = processor. post_process_generation ( generated_text, task, image. size , ); console . log (result); // { '<MORE_DETAILED_CAPTION>': 'A green car is parked in front of a tan building. The building has a brown door and two brown windows. The car is a two door and the door is closed. The green car has black tires.' } < > Update on GitHub ← Running models on WebGPU Accessing Private/Gated Models → Using quantized models (dtypes) Basic usage Per-module dtypes
Moderation.txt
Moderation

Check out the Code of Conduct and the Content Guidelines.

Reporting a repository

To report a repository, you can click the three dots at the top right of a repository. Afterwards, you can click "Report the repository". This will allow you to select the reason behind the report (ethical issue, legal issue, not working, or other) and add a description for the report. Once you do this, a public discussion will be opened.

Reporting a comment

To report a comment, you can click the three dots at the top right of a comment. That will submit a request for the Hugging Face team to review.
Spaces_Configuration_Reference.txt
Spaces Configuration Reference

Spaces are configured through the YAML block at the top of the README.md file at the root of the repository. All the accepted parameters are listed below.

title: string
Display title for the Space.

emoji: string
Space emoji (emoji-only character allowed).

colorFrom: string
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray).

colorTo: string
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray).

sdk: string
Can be either gradio, streamlit, docker, or static.

python_version: string
Any valid Python 3.x or 3.x.x version. Defaults to 3.10.

sdk_version: string
Specify the version of the selected SDK (Streamlit or Gradio). All versions of Gradio are supported. All versions of Streamlit from 0.79.0 are supported.

suggested_hardware: string
Specify the suggested hardware on which this Space must be run. Useful for Spaces that are meant to be duplicated by other users. Setting this value will not automatically assign hardware to this Space. The value must be a valid hardware flavor. Current valid hardware flavors:
CPU: "cpu-basic", "cpu-upgrade"
GPU: "t4-small", "t4-medium", "l4x1", "l4x4", "a10g-small", "a10g-large", "a10g-largex2", "a10g-largex4", "a100-large"
TPU: "v5e-1x1", "v5e-2x2", "v5e-2x4"

suggested_storage: string
Specify the suggested permanent storage on which this Space must be run. Useful for Spaces that are meant to be duplicated by other users. Setting this value will not automatically assign a permanent storage to this Space.
Value must be one of "small" , "medium" or "large" . app_file : string Path to your main application file (which contains either gradio or streamlit Python code, or static html code). Path is relative to the root of the repository. app_port : int Port on which your application is running. Used only if sdk is docker . Default port is 7860 . base_path : string For non-static Spaces, initial url to render. Needs to start with / . For static Spaces, use app_file instead. fullWidth : boolean Whether your Space is rendered inside a full-width (when true ) or fixed-width column (ie. “container” CSS) inside the iframe. Defaults to true . header : string Can be either mini or default . If header is set to mini the space will be displayed full-screen with a mini floating header . short_description : string A short description of the Space. This will be displayed in the Space’s thumbnail. models : List[string] HF model IDs (like openai-community/gpt2 or deepset/roberta-base-squad2 ) used in the Space. Will be parsed automatically from your code if not specified here. datasets : List[string] HF dataset IDs (like mozilla-foundation/common_voice_13_0 or oscar-corpus/OSCAR-2109 ) used in the Space. Will be parsed automatically from your code if not specified here. tags : List[string] List of terms that describe your Space task or scope. thumbnail : string URL for defining a custom thumbnail for social sharing. pinned : boolean Whether the Space stays on top of your profile. Can be useful if you have a lot of Spaces so you and others can quickly see your best Space. hf_oauth : boolean Whether a connected OAuth app is associated to this Space. See Adding a Sign-In with HF button to your Space for more details. hf_oauth_scopes : List[string] Authorized scopes of the connected OAuth app. openid and profile are authorized by default and do not need this parameter. See Adding a Sign-In with HF button to your space for more details. hf_oauth_expiration_minutes : int Duration of the OAuth token in minutes. Defaults to 480 minutes (8 hours). Maximum duration is 43200 minutes (30 days). See Adding a Sign-In with HF button to your space for more details. hf_oauth_authorized_org : string or List[string] Restrict OAuth access to members of specific organizations. See Adding a Sign-In with HF button to your space for more details. disable_embedding : boolean Whether the Space iframe can be embedded in other websites. Defaults to false, i.e. Spaces can be embedded. startup_duration_timeout : string Set a custom startup duration timeout for your Space. This is the maximum time your Space is allowed to start before it times out and is flagged as unhealthy. Defaults to 30 minutes, but any valid duration (like 1h , 30m ) is acceptable. custom_headers : Dict[string, string] Set custom HTTP headers that will be added to all HTTP responses when serving your Space. For now, only the cross-origin-embedder-policy (COEP), cross-origin-opener-policy (COOP), and cross-origin-resource-policy (CORP) headers are allowed. These headers can be used to set up a cross-origin isolated environment and enable powerful features like SharedArrayBuffer , for example: Copied custom_headers: cross-origin-embedder-policy: require-corp cross-origin-opener-policy: same-origin cross-origin-resource-policy: cross-origin Note: all headers and values must be lowercase. preload_from_hub : List[string] Specify a list of Hugging Face Hub models or other large files to be preloaded during the build time of your Space. 
This optimizes the startup time by having the files ready when your application starts. This is particularly useful for Spaces that rely on large models or datasets that would otherwise need to be downloaded at runtime.

The format for each item is "repository_name" to download all files from a repository, or "repository_name file1,file2" for downloading specific files within that repository. You can also specify a specific commit to download using the format "repository_name file1,file2 commit_sha256".

Example usage:

Copied
preload_from_hub:
  - warp-ai/wuerstchen-prior text_encoder/model.safetensors,prior/diffusion_pytorch_model.safetensors
  - coqui/XTTS-v1
  - openai-community/gpt2 config.json 11c5a3d5811f50298f278a704980280950aedb10

In this example, the Space will preload specific .safetensors files from warp-ai/wuerstchen-prior, the complete coqui/XTTS-v1 repository, and a specific revision of the config.json file in the openai-community/gpt2 repository from the Hugging Face Hub during build time.

Files are saved in the default `huggingface_hub` disk cache `~/.cache/huggingface/hub`. If your application expects them elsewhere or you changed your `HF_HOME` variable, this pre-loading does not follow that at this time.
Safetensors.txt
Safetensors

Safetensors is a model serialization format for deep learning models. It is faster and safer compared to other serialization formats like pickle (which is used under the hood in many deep learning libraries).

TGI depends on safetensors format mainly to enable tensor parallelism sharding. For a given model repository during serving, TGI looks for safetensors weights. If there are no safetensors weights, TGI converts the PyTorch weights to safetensors format.

You can learn more about safetensors by reading the safetensors documentation.
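For readers unfamiliar with the format, here is a minimal sketch of the safetensors Python API itself (this is not TGI code; the tensor names and the file name "model.safetensors" are illustrative only):

Copied
import torch
from safetensors.torch import save_file, load_file

tensors = {
    "embedding.weight": torch.zeros((1024, 768)),
    "lm_head.weight": torch.zeros((768, 1024)),
}

# Safe serialization: no pickle involved, just tensor metadata plus raw buffers
save_file(tensors, "model.safetensors")

# Loading returns a plain dict of tensors; individual tensors can also be read
# lazily via safetensors.safe_open, which is what makes sharded loading cheap
loaded = load_file("model.safetensors")
print(loaded["embedding.weight"].shape)  # torch.Size([1024, 768])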
AQLM.txt
AQLM

Try AQLM on Google Colab!

Additive Quantization of Language Models (AQLM) is a Large Language Models compression method. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes.

Inference support for AQLM is realised in the aqlm library. Make sure to install it to run the models (note aqlm works only with python>=3.10):

Copied
pip install aqlm[gpu,cpu]

The library provides efficient kernels for both GPU and CPU inference and training.

The instructions on how to quantize models yourself, as well as all the relevant code, can be found in the corresponding GitHub repository. To run AQLM models, simply load a model that has been quantized with AQLM:

Copied
from transformers import AutoTokenizer, AutoModelForCausalLM

quantized_model = AutoModelForCausalLM.from_pretrained(
    "ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf")

PEFT

Starting with version aqlm 1.0.2, AQLM supports Parameter-Efficient Fine-Tuning in the form of LoRA integrated into the PEFT library.

AQLM configurations

AQLM quantization setups vary mainly in the number of codebooks used, as well as in codebook sizes in bits. The most popular setups, as well as the inference kernels they support, are:

| Kernel | Number of codebooks | Codebook size, bits | Notation | Accuracy | Speedup | Fast GPU inference | Fast CPU inference |
|--------|---------------------|---------------------|----------|----------|--------------|--------------------|--------------------|
| Triton | K | N | KxN | - | Up to ~0.7x | ✅ | ❌ |
| CUDA | 1 | 16 | 1x16 | Best | Up to ~1.3x | ✅ | ❌ |
| CUDA | 2 | 8 | 2x8 | OK | Up to ~3.0x | ✅ | ❌ |
| Numba | K | 8 | Kx8 | Good | Up to ~4.0x | ❌ | ✅ |
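The PEFT integration mentioned above is not demonstrated on this page. As a hedged sketch only (the LoRA rank, alpha, dropout and target_modules below are illustrative assumptions, not values from the AQLM documentation), attaching LoRA adapters to the quantized model loaded earlier could look like this:

Copied
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections; adjust per model
    task_type="CAUSAL_LM",
)

# Only the LoRA parameters are trainable; the AQLM-quantized base weights stay frozen
peft_model = get_peft_model(quantized_model, lora_config)
peft_model.print_trainable_parameters()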
T2I-Adapter.txt
T2I-Adapter

T2I-Adapter is a lightweight adapter for controlling and providing more accurate structure guidance for text-to-image models. It works by learning an alignment between the internal knowledge of the text-to-image model and an external control signal, such as edge detection or depth estimation.

The T2I-Adapter design is simple: the condition is passed to four feature extraction blocks and three downsample blocks. This makes it fast and easy to train different adapters for different conditions which can be plugged into the text-to-image model.
T2I-Adapter is similar to ControlNet except it is smaller (~77M parameters) and faster because it only runs once during the diffusion process. The downside is that performance may be slightly worse than ControlNet.

This guide will show you how to use T2I-Adapter with different Stable Diffusion models and how you can compose multiple T2I-Adapters to impose more than one condition.

There are several T2I-Adapters available for different conditions, such as color palette, depth, sketch, pose, and segmentation. Check out the TencentARC repository to try them out!

Before you begin, make sure you have the following libraries installed.

Copied
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers accelerate controlnet-aux==0.0.7

Text-to-image

Text-to-image models rely on a prompt to generate an image, but sometimes, text alone may not be enough to provide more accurate structural guidance. T2I-Adapter allows you to provide an additional control image to guide the generation process. For example, you can provide a canny image (a white outline of an image on a black background) to guide the model to generate an image with a similar structure.

Stable Diffusion 1.5

Create a canny image with the opencv library.

Copied
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")
image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = Image.fromarray(image)

Now load a T2I-Adapter conditioned on canny images and pass it to the StableDiffusionAdapterPipeline.

Copied
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16)
pipeline = StableDiffusionAdapterPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    adapter=adapter,
    torch_dtype=torch.float16,
)
pipeline.to("cuda")

Finally, pass your prompt and control image to the pipeline.

Copied
generator = torch.Generator("cuda").manual_seed(0)

image = pipeline(
    prompt="cinematic photo of a plush and soft midcentury style rug on a wooden floor, 35mm photograph, film, professional, 4k, highly detailed",
    image=image,
    generator=generator,
).images[0]
image

MultiAdapter

T2I-Adapters are also composable, allowing you to use more than one adapter to impose multiple control conditions on an image. For example, you can use a pose map to provide structural control and a depth map for depth control. This is enabled by the MultiAdapter class.

Let's condition a text-to-image model with a pose and depth adapter. Create your depth and pose images and place them in a list.

Copied
from diffusers.utils import load_image

pose_image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png")
depth_image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png")
cond = [pose_image, depth_image]
prompt = ["Santa Claus walking into an office room with a beautiful city view"]

Load the corresponding pose and depth adapters as a list in the MultiAdapter class.
Copied
import torch
from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter

adapters = MultiAdapter(
    [
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"),
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"),
    ]
)
adapters = adapters.to(torch.float16)

Finally, load a StableDiffusionAdapterPipeline with the adapters, and pass your prompt and conditioning images to it. Use the adapter_conditioning_scale to adjust the weight of each adapter on the image.

Copied
pipeline = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    adapter=adapters,
).to("cuda")

image = pipeline(prompt, cond, adapter_conditioning_scale=[0.7, 0.7]).images[0]
image
Mistral_Small_Instruct_performance_on_AWS_Inferent.txt
Mistral-Small-Instruct performance on AWS Inferentia2 (Latency & Throughput)

How fast is Mistral on Inferentia2? Let's find out!

For this benchmark we will use the following configurations:

Model type        | batch_size | sequence_length
Mistral-Small BS1 | 1          | 4096
Mistral-Small BS4 | 4          | 4096

Note: all models are compiled to use 6 devices, corresponding to 12 cores on the inf2.48xlarge instance.

Note: please refer to the inferentia2 product page for details on the available instances.

Time to first token

The time to first token is the time required to process the input tokens and generate the first output token. It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens. We test the time to first token for increasing context sizes, from a typical Q/A usage to heavy Retrieval Augmented Generation (RAG) use-cases. Time to first token is expressed in seconds.

Inter-token Latency

The inter-token latency corresponds to the average time elapsed between two generated tokens. It is expressed in milliseconds.

Throughput

Unlike some other benchmarks, we evaluate the throughput using generated tokens only, by dividing their number by the end-to-end latency. Throughput is expressed in tokens/second.
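To make these definitions concrete, here is a minimal sketch (not part of the original benchmark harness) showing how the three metrics can be derived from per-token timestamps collected by your own streaming client; the request start time and token arrival times are assumed inputs.

Copied
# Minimal sketch: deriving the three benchmark metrics from streaming timestamps.
# `request_start` is when the request was sent; `token_times` holds the wall-clock
# time at which each generated token was received (collected by your client code).

def compute_metrics(request_start: float, token_times: list[float]) -> dict:
    num_generated = len(token_times)
    end_to_end = token_times[-1] - request_start

    time_to_first_token = token_times[0] - request_start                        # seconds
    inter_token_latency = (token_times[-1] - token_times[0]) / (num_generated - 1) * 1000.0  # milliseconds
    throughput = num_generated / end_to_end                                     # generated tokens / second

    return {
        "time_to_first_token_s": time_to_first_token,
        "inter_token_latency_ms": inter_token_latency,
        "throughput_tok_per_s": throughput,
    }

# Example with synthetic timestamps:
print(compute_metrics(0.0, [0.5, 0.52, 0.54, 0.56]))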
P-tuning.txt
P-tuning

P-tuning adds trainable prompt embeddings to the input that are optimized by a prompt encoder to find a better prompt, eliminating the need to manually design prompts. The prompt tokens can be added anywhere in the input sequence, and p-tuning also introduces anchor tokens for improving performance.

The abstract from the paper is:

While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning — which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64% (P@1) of world knowledge without any additional text provided during test time, which substantially improves the previous best by 20+ percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance to similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.

PromptEncoderConfig
class peft.PromptEncoderConfig

( task_type: typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None, peft_type: typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: typing.Optional[str] = None, revision: typing.Optional[str] = None, inference_mode: bool = False, num_virtual_tokens: int = None, token_dim: int = None, num_transformer_submodules: typing.Optional[int] = None, num_attention_heads: typing.Optional[int] = None, num_layers: typing.Optional[int] = None, encoder_reparameterization_type: typing.Union[str, peft.tuners.p_tuning.config.PromptEncoderReparameterizationType] = <PromptEncoderReparameterizationType.MLP: 'MLP'>, encoder_hidden_size: int = None, encoder_num_layers: int = 2, encoder_dropout: float = 0.0 )

Parameters:
encoder_reparameterization_type (Union[PromptEncoderReparameterizationType, str]) — The type of reparameterization to use.
encoder_hidden_size (int) — The hidden size of the prompt encoder.
encoder_num_layers (int) — The number of layers of the prompt encoder.
encoder_dropout (float) — The dropout probability of the prompt encoder.

This is the configuration class to store the configuration of a PromptEncoder.

PromptEncoder

class peft.PromptEncoder

( config )

Parameters:
config (PromptEncoderConfig) — The configuration of the prompt encoder.

The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.

Example:

Copied
>>> from peft import PromptEncoder, PromptEncoderConfig

>>> config = PromptEncoderConfig(
...     peft_type="P_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_reparameterization_type="MLP",
...     encoder_hidden_size=768,
... )

>>> prompt_encoder = PromptEncoder(config)

Attributes:
embedding (torch.nn.Embedding) — The embedding layer of the prompt encoder.
mlp_head (torch.nn.Sequential) — The MLP head of the prompt encoder if inference_mode=False.
lstm_head (torch.nn.LSTM) — The LSTM head of the prompt encoder if inference_mode=False and encoder_reparameterization_type="LSTM".
token_dim (int) — The hidden embedding dimension of the base transformer model.
input_size (int) — The input size of the prompt encoder.
output_size (int) — The output size of the prompt encoder.
hidden_size (int) — The hidden size of the prompt encoder.
total_virtual_tokens (int) — The total number of virtual tokens of the prompt encoder.
encoder_type (Union[PromptEncoderReparameterizationType, str]) — The encoder type of the prompt encoder.

Input shape: ( batch_size, total_virtual_tokens )

Output shape: ( batch_size, total_virtual_tokens, token_dim )
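As a complement to the reference above, here is a minimal sketch of how a PromptEncoderConfig is typically wired into a model with get_peft_model. The base checkpoint, number of labels, and hyperparameter values are illustrative assumptions, not values prescribed by this reference.

Copied
# Minimal sketch: applying p-tuning to a sequence classification model with PEFT.
# The checkpoint and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2
)

peft_config = PromptEncoderConfig(
    task_type="SEQ_CLS",        # classification task
    num_virtual_tokens=20,      # number of trainable prompt tokens
    encoder_hidden_size=128,    # hidden size of the prompt encoder
)

model = get_peft_model(base_model, peft_config)
# Prints the share of trainable parameters, which should be a small
# fraction of the base model.
model.print_trainable_parameters()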
Run_with_Docker.txt
Run with Docker

You can use Docker to run most Spaces locally. To view instructions to download and run a Space's Docker image, click on the "Run with Docker" button in the top-right corner of your Space page:

Login to the Docker registry

Some Spaces will require you to login to Hugging Face's Docker registry. To do so, you'll need to provide:

Your Hugging Face username as username
A User Access Token as password. Generate one here.
Security_&_Compliance.txt
Security & Compliance

🤗 Inference Endpoints is built with security and secure inference at its core. Below you can find an overview of the security measures we have in place.

Data Security/Privacy

Hugging Face does not store any customer data in terms of payloads or tokens that are passed to the Inference Endpoint. We store logs for 30 days. Every Inference Endpoint uses TLS/SSL to encrypt the data in transit.

We also recommend using AWS or Azure Private Link for organizations. This allows you to access your Inference Endpoint through a private connection, without exposing it to the internet.

Hugging Face also offers a Business Associate Addendum or GDPR data processing agreement through the Inference Endpoints enterprise plan.

Model Security/Privacy

You can set a model repository as private if you do not want to publicly expose it. Hugging Face does not own any model or data you upload to the Hugging Face Hub. Hugging Face does provide malware and pickle scans over the contents of the model repository, as with all items in the Hub.

Inference Endpoints and Hub Security

The Hugging Face Hub, of which Inference Endpoints is a part, is also SOC2 Type 2 certified. The Hugging Face Hub offers Role Based Access Control. For more on Hub security: https://huggingface.co/docs/hub/security

Inference Endpoint Security level

We currently offer three types of endpoints, in order of increasing security level:

Public: A Public Endpoint is available from the internet, secured with TLS/SSL, and requires no authentication.
Protected: A Protected Endpoint is available from the internet, secured with TLS/SSL, and requires a valid Hugging Face token for authentication.
Private: A Private Endpoint is only available through an intra-region secured AWS or Azure PrivateLink connection. Private Endpoints are not accessible from the internet.
Public and Protected Endpoints do not require any additional configuration. For Private Endpoints, you need to provide the AWS account ID of the account which should also have access to 🤗 Inference Endpoints.

Hugging Face Privacy Policy: https://huggingface.co/privacy
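Since a Protected Endpoint only needs a valid Hugging Face token, calling it from a client is straightforward. The sketch below uses Python's requests library; the endpoint URL and payload shape are placeholders for whatever your deployed model expects, not values taken from this documentation.

Copied
# Sketch: querying a Protected Inference Endpoint with a bearer token.
# ENDPOINT_URL is a placeholder; copy the real URL from your endpoint's overview page.
import os
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = os.environ["HF_TOKEN"]  # a valid Hugging Face User Access Token

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",  # required for Protected Endpoints
        "Content-Type": "application/json",
    },
    json={"inputs": "The movie was surprisingly good!"},  # payload depends on the task
    timeout=30,
)
response.raise_for_status()
print(response.json())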
Debugging.txt
Debugging

Training on multiple GPUs can be a tricky endeavor, whether you're running into installation issues or communication problems between your GPUs. This debugging guide covers some issues you may run into and how to resolve them.

DeepSpeed CUDA installation

If you're using DeepSpeed, you've probably already installed it with the following command.

Copied
pip install deepspeed

DeepSpeed compiles CUDA C++ code, and it can be a potential source of errors when building PyTorch extensions that require CUDA. These errors depend on how CUDA is installed on your system. This section focuses on PyTorch built with CUDA 10.2. For any other installation issues, please open an issue with the DeepSpeed team.

Non-identical CUDA toolkits

PyTorch comes with its own CUDA toolkit, but to use DeepSpeed with PyTorch, you need to have an identical version of CUDA installed system-wide. For example, if you installed PyTorch with cudatoolkit==10.2 in your Python environment, then you'll also need to have CUDA 10.2 installed system-wide. If you don't have CUDA installed system-wide, you should install it first.

The exact location may vary from system to system, but /usr/local/cuda-10.2 is the most common location on many Unix systems. When CUDA is correctly set up and added to your PATH environment variable, you can find the installation location with the following command:

Copied
which nvcc

Multiple CUDA toolkits

You may also have more than one CUDA toolkit installed system-wide.

Copied
/usr/local/cuda-10.2
/usr/local/cuda-11.0

Typically, package installers set the paths to whatever the last version was installed. If the package build fails because it can't find the right CUDA version (despite it being installed system-wide already), then you need to configure the PATH and LD_LIBRARY_PATH environment variables to point to the correct path.

Take a look at the contents of these environment variables first:

Copied
echo $PATH
echo $LD_LIBRARY_PATH

PATH lists the locations of the executables and LD_LIBRARY_PATH lists where to look for shared libraries. Earlier entries are prioritized over later ones, and : is used to separate multiple entries. To tell the build program where to find the specific CUDA toolkit you want, insert the correct path at the start of the list.
This command prepends rather than overwrites the existing values. Copied # adjust the version and full path if needed export PATH=/usr/local/cuda-10.2/bin: $PATH export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64: $LD_LIBRARY_PATH In addition, you should also check the directories you assign actually exist. The lib64 sub-directory contains various CUDA .so objects (like libcudart.so ) and while it is unlikely your system names them differently, you should check the actual names and change them accordingly. Older CUDA versions Sometimes, older CUDA versions may refuse to build with newer compilers. For example, if you have gcc-9 but CUDA wants gcc-7 . Usually, installing the latest CUDA toolkit enables support for the newer compiler. You could also install an older version of the compiler in addition to the one you’re currently using (or it may already be installed but it’s not used by default and the build system can’t see it). To resolve this, you can create a symlink to give the build system visibility to the older compiler. Copied # adapt the path to your system sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++ Prebuild If you’re still having issues with installing DeepSpeed or if you’re building DeepSpeed at run time, you can try to prebuild the DeepSpeed modules before installing them. To make a local build for DeepSpeed: Copied git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST= "8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \ --global-option= "build_ext" --global-option= "-j8" --no-cache -v \ --disable-pip-version-check 2>&1 | tee build.log To use NVMe offload, add the DS_BUILD_AIO=1 parameter to the build command and make sure you install the libaio-dev package system-wide. Next, you’ll have to specify your GPU’s architecture by editing the TORCH_CUDA_ARCH_LIST variable (find a complete list of NVIDIA GPUs and their corresponding architectures on this page ). To check the PyTorch version that corresponds to your architecture, run the following command: Copied python -c "import torch; print(torch.cuda.get_arch_list())" Find the architecture for a GPU with the following command: same GPUs specific GPU Copied CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())" If you get 8, 6 , then you can set TORCH_CUDA_ARCH_LIST="8.6" . For multiple GPUs with different architectures, list them like TORCH_CUDA_ARCH_LIST="6.1;8.6" . It is also possible to not specify TORCH_CUDA_ARCH_LIST and the build program automatically queries the GPU architecture of the build. However, it may or may not match the actual GPU on the target machine which is why it is better to explicitly specify the correct architecture. For training on multiple machines with the same setup, you’ll need to make a binary wheel: Copied git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST= "8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \ python setup.py build_ext -j8 bdist_wheel This command generates a binary wheel that’ll look something like dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl . Now you can install this wheel locally or on another machine. 
Copied pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl Multi-GPU Network Issues Debug When training or inferencing with DistributedDataParallel and multiple GPU, if you run into issue of inter-communication between processes and/or nodes, you can use the following script to diagnose network issues. Copied wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py For example to test how 2 GPUs interact do: Copied python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py If both processes can talk to each and allocate GPU memory each will print an OK status. For more GPUs or nodes adjust the arguments in the script. You will find a lot more details inside the diagnostics script and even a recipe to how you could run it in a SLURM environment. An additional level of debug is to add NCCL_DEBUG=INFO environment variable as follows: Copied NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py This will dump a lot of NCCL-related debug information, which you can then search online if you find that some problems are reported. Or if you’re not sure how to interpret the output you can share the log file in an Issue. Underflow and Overflow Detection This feature is currently available for PyTorch-only. For multi-GPU training it requires DDP ( torch.distributed.launch ). This feature can be used with any nn.Module -based model. If you start getting loss=NaN or the model exhibits some other abnormal behavior due to inf or nan in activations or weights one needs to discover where the first underflow or overflow happens and what led to it. Luckily you can accomplish that easily by activating a special module that will do the detection automatically. If you’re using Trainer , you just need to add: Copied --debug underflow_overflow to the normal command line arguments, or pass debug="underflow_overflow" when creating the TrainingArguments object. If you’re using your own training loop or another Trainer you can accomplish the same with: Copied from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model) DebugUnderflowOverflow inserts hooks into the model that immediately after each forward call will test input and output variables and also the corresponding module’s weights. As soon as inf or nan is detected in at least one element of the activations or weights, the program will assert and print a report like this (this was caught with google/mt5-small under fp16 mixed precision): Copied Detected inf/nan during batch_number= 0 Last 21 forward frames: abs min abs max metadata encoder .block. 1 .layer. 1 .DenseReluDense.dropout Dropout 0 . 00 e+ 00 2 . 57 e+ 02 input[ 0 ] 0 . 00 e+ 00 2 . 85 e+ 02 output [...] encoder .block. 2 .layer. 0 T5LayerSelfAttention 6 . 78 e- 04 3 . 15 e+ 03 input[ 0 ] 2 . 65 e- 04 3 . 42 e+ 03 output[ 0 ] None output[ 1 ] 2 . 25 e- 01 1 . 00 e+ 04 output[ 2 ] encoder .block. 2 .layer. 1 .layer_norm T5LayerNorm 8 . 69 e- 02 4 . 18 e- 01 weight 2 . 65 e- 04 3 . 42 e+ 03 input[ 0 ] 1 . 79 e- 06 4 . 65 e+ 00 output encoder .block. 2 .layer. 1 .DenseReluDense.wi_0 Linear 2 . 17 e- 07 4 . 50 e+ 00 weight 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 2 . 68 e- 06 3 . 70 e+ 01 output encoder .block. 2 .layer. 1 .DenseReluDense.wi_1 Linear 8 . 08 e- 07 2 . 66 e+ 01 weight 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 1 . 27 e- 04 2 . 37 e+ 02 output encoder .block. 2 .layer. 
1 .DenseReluDense.dropout Dropout 0 . 00 e+ 00 8 . 76 e+ 03 input[ 0 ] 0 . 00 e+ 00 9 . 74 e+ 03 output encoder .block. 2 .layer. 1 .DenseReluDense.wo Linear 1 . 01 e- 06 6 . 44 e+ 00 weight 0 . 00 e+ 00 9 . 74 e+ 03 input[ 0 ] 3 . 18 e- 04 6 . 27 e+ 04 output encoder .block. 2 .layer. 1 .DenseReluDense T5DenseGatedGeluDense 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 3 . 18 e- 04 6 . 27 e+ 04 output encoder .block. 2 .layer. 1 .dropout Dropout 3 . 18 e- 04 6 . 27 e+ 04 input[ 0 ] 0 . 00 e+ 00 inf output The example output has been trimmed in the middle for brevity. The second column shows the value of the absolute largest element, so if you have a closer look at the last few frames, the inputs and outputs were in the range of 1e4 . So when this training was done under fp16 mixed precision the very last step overflowed (since under fp16 the largest number before inf is 64e3 ). To avoid overflows under fp16 the activations must remain way below 1e4 , because 1e4 * 1e4 = 1e8 so any matrix multiplication with large activations is going to lead to a numerical overflow condition. At the very start of the trace you can discover at which batch number the problem occurred (here Detected inf/nan during batch_number=0 means the problem occurred on the first batch). Each reported frame starts by declaring the fully qualified entry for the corresponding module this frame is reporting for. If we look just at this frame: Copied encoder .block. 2 .layer. 1 .layer_norm T5LayerNorm 8 . 69 e- 02 4 . 18 e- 01 weight 2 . 65 e- 04 3 . 42 e+ 03 input[ 0 ] 1 . 79 e- 06 4 . 65 e+ 00 output Here, encoder.block.2.layer.1.layer_norm indicates that it was a layer norm for the first layer, of the second block of the encoder. And the specific calls of the forward is T5LayerNorm . Let’s look at the last few frames of that report: Copied Detected inf/nan during batch_number= 0 Last 21 forward frames: abs min abs max metadata [...] encoder .block. 2 .layer. 1 .DenseReluDense.wi_0 Linear 2 . 17 e- 07 4 . 50 e+ 00 weight 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 2 . 68 e- 06 3 . 70 e+ 01 output encoder .block. 2 .layer. 1 .DenseReluDense.wi_1 Linear 8 . 08 e- 07 2 . 66 e+ 01 weight 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 1 . 27 e- 04 2 . 37 e+ 02 output encoder .block. 2 .layer. 1 .DenseReluDense.wo Linear 1 . 01 e- 06 6 . 44 e+ 00 weight 0 . 00 e+ 00 9 . 74 e+ 03 input[ 0 ] 3 . 18 e- 04 6 . 27 e+ 04 output encoder .block. 2 .layer. 1 .DenseReluDense T5DenseGatedGeluDense 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 3 . 18 e- 04 6 . 27 e+ 04 output encoder .block. 2 .layer. 1 .dropout Dropout 3 . 18 e- 04 6 . 27 e+ 04 input[ 0 ] 0 . 00 e+ 00 inf output The last frame reports for Dropout.forward function with the first entry for the only input and the second for the only output. You can see that it was called from an attribute dropout inside DenseReluDense class. We can see that it happened during the first layer, of the 2nd block, during the very first batch. Finally, the absolute largest input elements was 6.27e+04 and same for the output was inf . You can see here, that T5DenseGatedGeluDense.forward resulted in output activations, whose absolute max value was around 62.7K, which is very close to fp16’s top limit of 64K. In the next frame we have Dropout which renormalizes the weights, after it zeroed some of the elements, which pushes the absolute max value to more than 64K, and we get an overflow ( inf ). As you can see it’s the previous frames that we need to look into when the numbers start going into very large for fp16 numbers. 
Let’s match the report to the code from models/t5/modeling_t5.py : Copied class T5DenseGatedGeluDense (nn.Module): def __init__ ( self, config ): super ().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias= False ) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias= False ) self.wo = nn.Linear(config.d_ff, config.d_model, bias= False ) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN[ "gelu_new" ] def forward ( self, hidden_states ): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states Now it’s easy to see the dropout call, and all the previous calls as well. Since the detection is happening in a forward hook, these reports are printed immediately after each forward returns. Going back to the full report, to act on it and to fix the problem, we need to go a few frames up where the numbers started to go up and most likely switch to the fp32 mode here, so that the numbers don’t overflow when multiplied or summed up. Of course, there might be other solutions. For example, we could turn off amp temporarily if it’s enabled, after moving the original forward into a helper wrapper, like so: Copied def _forward ( self, hidden_states ): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states import torch def forward ( self, hidden_states ): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled= False ): return self._forward(hidden_states) else : return self._forward(hidden_states) Since the automatic detector only reports on inputs and outputs of full frames, once you know where to look, you may want to analyse the intermediary stages of any specific forward function as well. In such a case you can use the detect_overflow helper function to inject the detector where you want it, for example: Copied from debug_utils import detect_overflow class T5LayerFF (nn.Module): [...] def forward ( self, hidden_states ): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm" ) forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense" ) return hidden_states + self.dropout(forwarded_states) You can see that we added 2 of these and now we track if inf or nan for forwarded_states was detected somewhere in between. Actually, the detector already reports these because each of the calls in the example above is a nn.Module , but let’s say if you had some local direct calculations this is how you’d do that. Additionally, if you’re instantiating the debugger in your own code, you can adjust the number of frames printed from its default, e.g.: Copied from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save= 100 ) Specific batch absolute min and max value tracing The same debugging class can be used for per-batch tracing with the underflow/overflow detection feature turned off. Let’s say you want to watch the absolute min and max values for all the ingredients of each forward call of a given batch, and only do that for batches 1 and 3. 
Then you instantiate this class as:

Copied
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])

And now full batches 1 and 3 will be traced using the same format as the underflow/overflow detector does. Batches are 0-indexed.

This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. Here is a sample truncated output for such a configuration:

Copied
*** Starting batch number=1 ***
abs min  abs max  metadata
                  shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.47e+04 input[0]
5.36e-05 7.92e+02 output
[...]
                  decoder.dropout Dropout
1.60e-07 2.27e+01 input[0]
0.00e+00 2.52e+01 output
                  decoder T5Stack
     not a tensor output
                  lm_head Linear
1.01e-06 7.92e+02 weight
0.00e+00 1.11e+00 input[0]
6.06e-02 8.39e+01 output
                  T5ForConditionalGeneration
     not a tensor output

*** Starting batch number=3 ***
abs min  abs max  metadata
                  shared Embedding
1.01e-06 7.92e+02 weight
0.00e+00 2.78e+04 input[0]
5.36e-05 7.92e+02 output
[...]

Here you will get a huge number of frames dumped - as many as there were forward calls in your model - so it may or may not be what you want, but sometimes it can be easier to use for debugging purposes than a normal debugger. For example, if a problem starts happening at batch number 150, you can dump traces for batches 149 and 150 and compare where the numbers started to diverge.

You can also specify the batch number after which to stop the training, with:

Copied
debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
Use_with_JAX.txt
Use with JAX

This document is a quick introduction to using datasets with JAX, with a particular focus on how to get jax.Array objects out of our datasets, and how to use them to train JAX models.

jax and jaxlib are required to reproduce the code in this document, so please make sure you install them with pip install datasets[jax].

Dataset format

By default, datasets return regular Python objects: integers, floats, strings, lists, etc. String and binary objects are unchanged, since JAX only supports numbers.

To get JAX arrays (numpy-like) instead, you can set the format of the dataset to jax:

Copied
>>> from datasets import Dataset
>>> data = [[1, 2], [3, 4]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("jax")
>>> ds[0]
{'data': DeviceArray([1, 2], dtype=int32)}
>>> ds[:2]
{'data': DeviceArray([
    [1, 2],
    [3, 4]], dtype=int32)}

A Dataset object is a wrapper of an Arrow table, which allows fast reads from arrays in the dataset to JAX arrays.
Note that the exact same procedure applies to DatasetDict objects, so that when setting the format of a DatasetDict to jax, all the Datasets there will be formatted as jax:

Copied
>>> from datasets import DatasetDict
>>> data = {"train": {"data": [[1, 2], [3, 4]]}, "test": {"data": [[5, 6], [7, 8]]}}
>>> dds = DatasetDict.from_dict(data)
>>> dds = dds.with_format("jax")
>>> dds["train"][:2]
{'data': DeviceArray([
    [1, 2],
    [3, 4]], dtype=int32)}

Another thing you'll need to take into consideration is that the formatting is not applied until you actually access the data. So if you want to get a JAX array out of a dataset, you'll need to access the data first, otherwise the format will remain the same.

Finally, to load the data onto the device of your choice, you can specify the device argument. Note that jaxlib.xla_extension.Device is not supported, as it is not serializable with either pickle or dill, so you'll need to use its string identifier instead:

Copied
>>> import jax
>>> from datasets import Dataset
>>> data = [[1, 2], [3, 4]]
>>> ds = Dataset.from_dict({"data": data})
>>> device = str(jax.devices()[0])  # Not casting to `str` before passing it to `with_format` will raise a `ValueError`
>>> ds = ds.with_format("jax", device=device)
>>> ds[0]
{'data': DeviceArray([1, 2], dtype=int32)}
>>> ds[0]["data"].device()
TFRT_CPU_0
>>> assert ds[0]["data"].device() == jax.devices()[0]
True

Note that if the device argument is not provided to with_format, it will use the default device, which is jax.devices()[0].

N-dimensional arrays

If your dataset consists of N-dimensional arrays, you will see that by default they are considered as the same tensor if the shape is fixed:

Copied
>>> from datasets import Dataset
>>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]  # fixed shape
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("jax")
>>> ds[0]
{'data': Array([[1, 2],
                [3, 4]], dtype=int32)}

Copied
>>> from datasets import Dataset
>>> data = [[[1, 2], [3]], [[4, 5, 6], [7, 8]]]  # varying shape
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("jax")
>>> ds[0]
{'data': [Array([1, 2], dtype=int32), Array([3], dtype=int32)]}

However, this logic often requires slow shape comparisons and data copies. To avoid this, you must explicitly use the Array feature type and specify the shape of your tensors:

Copied
>>> from datasets import Dataset, Features, Array2D
>>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> features = Features({"data": Array2D(shape=(2, 2), dtype='int32')})
>>> ds = Dataset.from_dict({"data": data}, features=features)
>>> ds = ds.with_format("jax")
>>> ds[0]
{'data': Array([[1, 2],
                [3, 4]], dtype=int32)}
>>> ds[:2]
{'data': Array([[[1, 2],
                 [3, 4]],
                [[5, 6],
                 [7, 8]]], dtype=int32)}

Other feature types

ClassLabel data is properly converted to arrays:

Copied
>>> from datasets import Dataset, Features, ClassLabel
>>> labels = [0, 0, 1]
>>> features = Features({"label": ClassLabel(names=["negative", "positive"])})
>>> ds = Dataset.from_dict({"label": labels}, features=features)
>>> ds = ds.with_format("jax")
>>> ds[:3]
{'label': DeviceArray([0, 0, 1], dtype=int32)}

String and binary objects are unchanged, since JAX only supports numbers. The Image and Audio feature types are also supported.
To use the Image feature type, you'll need to install the vision extra with pip install datasets[vision].

Copied
>>> from datasets import Dataset, Features, Image
>>> images = ["path/to/image.png"] * 10
>>> features = Features({"image": Image()})
>>> ds = Dataset.from_dict({"image": images}, features=features)
>>> ds = ds.with_format("jax")
>>> ds[0]["image"].shape
(512, 512, 3)
>>> ds[0]
{'image': DeviceArray([[[255, 255, 255],
                        [255, 255, 255],
                        ...,
                        [255, 255, 255],
                        [255, 255, 255]]], dtype=uint8)}
>>> ds[:2]["image"].shape
(2, 512, 512, 3)
>>> ds[:2]
{'image': DeviceArray([[[[255, 255, 255],
                         [255, 255, 255],
                         ...,
                         [255, 255, 255],
                         [255, 255, 255]]]], dtype=uint8)}

To use the Audio feature type, you'll need to install the audio extra with pip install datasets[audio].

Copied
>>> from datasets import Dataset, Features, Audio
>>> audio = ["path/to/audio.wav"] * 10
>>> features = Features({"audio": Audio()})
>>> ds = Dataset.from_dict({"audio": audio}, features=features)
>>> ds = ds.with_format("jax")
>>> ds[0]["audio"]["array"]
DeviceArray([-0.059021  , -0.03894043, -0.00735474, ...,  0.0133667 ,
              0.01809692,  0.00268555], dtype=float32)
>>> ds[0]["audio"]["sampling_rate"]
DeviceArray(44100, dtype=int32, weak_type=True)

Data loading

JAX doesn't have any built-in data loading capabilities, so you'll need to use a library such as PyTorch to load your data using a DataLoader, or TensorFlow using a tf.data.Dataset. Citing the JAX documentation on this topic: "JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll grab PyTorch's data loader, and make a tiny shim to make it work with NumPy arrays.".

This is why the JAX formatting in datasets is so useful: it lets you use any model from the Hugging Face Hub with JAX, without having to worry about the data loading part.

Using with_format('jax')

The easiest way to get JAX arrays out of a dataset is to use the with_format('jax') method. Let's assume that we want to train a neural network on the MNIST dataset available on the Hugging Face Hub at https://huggingface.co/datasets/mnist.

Copied
>>> from datasets import load_dataset
>>> ds = load_dataset("mnist")
>>> ds = ds.with_format("jax")
>>> ds["train"][0]
{'image': DeviceArray([[  0,   0,   0, ...],
                       [  0,   0,   0, ...],
                       ...,
                       [  0,   0,   0, ...],
                       [  0,   0,   0, ...]], dtype=uint8),
 'label': DeviceArray(5, dtype=int32)}

Once the format is set, we can feed the dataset to the JAX model in batches using the Dataset.iter() method:

Copied
>>> for epoch in range(epochs):
...     for batch in ds["train"].iter(batch_size=32):
...         x, y = batch["image"], batch["label"]
...         ...
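Building on the MNIST example above, here is a minimal sketch of what the body of that training loop could look like with a toy linear classifier written directly in JAX. The model, learning rate, and preprocessing are illustrative choices and not part of the original guide.

Copied
import jax
import jax.numpy as jnp

# Toy linear classifier on flattened 28x28 MNIST images (illustrative only).
params = {"w": jnp.zeros((784, 10)), "b": jnp.zeros((10,))}

def loss_fn(params, x, y):
    logits = x @ params["w"] + params["b"]
    labels = jax.nn.one_hot(y, 10)
    return -jnp.mean(jnp.sum(jax.nn.log_softmax(logits) * labels, axis=-1))

@jax.jit
def train_step(params, x, y, lr=0.1):
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    new_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return new_params, loss

for epoch in range(1):
    for batch in ds["train"].iter(batch_size=32):
        # Batches are already JAX arrays thanks to with_format("jax").
        x = batch["image"].reshape(batch["image"].shape[0], -1).astype(jnp.float32) / 255.0
        y = batch["label"]
        params, loss = train_step(params, x, y)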
AutoTrain_API.txt
AutoTrain API

With the AutoTrain API, you can run your own instance of AutoTrain and use it to train models on Hugging Face Spaces infrastructure (local training coming soon). This API is designed to be used with autotrain compatible models and datasets, and it provides a simple interface to train models with minimal configuration.

Getting Started

To get started with the AutoTrain API, all you need to do is install autotrain-advanced as discussed in the running locally section and run the autotrain app command:

Copied
$ autotrain app --port 8000 --host 127.0.0.1

You can then access the API reference at http://127.0.0.1:8000/docs.

Example Usage

Copied
curl -X POST "http://127.0.0.1:8000/api/create_project" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer hf_XXXXX" \
  -d '{
    "username": "abhishek",
    "project_name": "my-autotrain-api-model",
    "task": "llm:orpo",
    "base_model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "hub_dataset": "argilla/distilabel-capybara-dpo-7k-binarized",
    "train_split": "train",
    "hardware": "spaces-a10g-large",
    "column_mapping": {
      "text_column": "chosen",
      "rejected_text_column": "rejected",
      "prompt_text_column": "prompt"
    },
    "params": {
      "block_size": 1024,
      "model_max_length": 4096,
      "max_prompt_length": 512,
      "epochs": 1,
      "batch_size": 2,
      "lr": 0.00003,
      "peft": true,
      "quantization": "int4",
      "target_modules": "all-linear",
      "padding": "right",
      "optimizer": "adamw_torch",
      "scheduler": "linear",
      "gradient_accumulation": 4,
      "mixed_precision": "fp16",
      "chat_template": "chatml"
    }
  }'
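If you prefer calling the API from Python, here is a sketch of the same create_project request using the requests library. It simply mirrors a trimmed version of the curl payload above; the token, username, and project values remain placeholders you need to replace.

Copied
# Sketch: the /api/create_project call above, issued with Python's requests library.
import requests

payload = {
    "username": "abhishek",  # replace with your username
    "project_name": "my-autotrain-api-model",
    "task": "llm:orpo",
    "base_model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "hub_dataset": "argilla/distilabel-capybara-dpo-7k-binarized",
    "train_split": "train",
    "hardware": "spaces-a10g-large",
    "column_mapping": {
        "text_column": "chosen",
        "rejected_text_column": "rejected",
        "prompt_text_column": "prompt",
    },
    "params": {
        "epochs": 1,
        "batch_size": 2,
        "lr": 3e-5,
        "peft": True,
        "quantization": "int4",
        "mixed_precision": "fp16",
        "chat_template": "chatml",
    },
}

response = requests.post(
    "http://127.0.0.1:8000/api/create_project",
    headers={"Authorization": "Bearer hf_XXXXX"},  # your Hugging Face token
    json=payload,
    timeout=60,
)
print(response.status_code, response.json())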
Using_TGI_CLI.txt
Using TGI CLI

You can use the TGI command-line interface (CLI) to download weights, serve and quantize models, or get information on serving parameters. To install the CLI, please refer to the installation section.

text-generation-server lets you download the model with the download-weights command like below 👇

Copied
text-generation-server download-weights MODEL_HUB_ID

You can also use it to quantize models like below 👇

Copied
text-generation-server quantize MODEL_HUB_ID OUTPUT_DIR

You can use text-generation-launcher to serve models.

Copied
text-generation-launcher --model-id MODEL_HUB_ID --port 8080

There are many options and parameters you can pass to text-generation-launcher. The documentation for the CLI is kept minimal and intended to rely on self-generating documentation, which can be found by running

Copied
text-generation-launcher --help

You can also find it hosted in this Swagger UI.

The same documentation can be found for text-generation-server.

Copied
text-generation-server --help
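Once text-generation-launcher is serving a model on port 8080, you can sanity-check it from Python. The sketch below assumes TGI's standard /generate route and a locally reachable server; adjust the host, port, and generation parameters to your setup.

Copied
# Sketch: sending a test request to a locally served TGI model.
import requests

response = requests.post(
    "http://127.0.0.1:8080/generate",
    headers={"Content-Type": "application/json"},
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 20},
    },
    timeout=60,
)
print(response.json())  # e.g. {"generated_text": "..."}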
Load_LoRAs_for_inference.txt
Load LoRAs for inference

There are many adapter types (with LoRAs being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images.

In this tutorial, you'll learn how to easily load and manage adapters for inference with the 🤗 PEFT integration in 🤗 Diffusers. You'll use LoRA as the main adapter technique, so you'll see the terms LoRA and adapter used interchangeably.

Let's first install all the required libraries.
Copied
!pip install -q transformers accelerate peft diffusers

Now, load a pipeline with a Stable Diffusion XL (SDXL) checkpoint:

Copied
from diffusers import DiffusionPipeline
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")

Next, load a CiroN2022/toy-face adapter with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let's call this adapter "toy".

Copied
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

Make sure to include the token toy_face in the prompt and then you can perform inference:

Copied
prompt = "toy_face of a hacker with a hoodie"

lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image

With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images and call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter, but you can activate the "pixel" adapter with the ~PeftAdapterMixin.set_adapters method:

Copied
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipe.set_adapters("pixel")

Make sure you include the token pixel art in your prompt to generate a pixel art image:

Copied
prompt = "a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image

By default, if the most up-to-date versions of PEFT and Transformers are detected, low_cpu_mem_usage is set to True to speed up the loading time of LoRA checkpoints.

Merge adapters

You can also merge different adapter checkpoints for inference to blend their styles together. Once again, use the ~PeftAdapterMixin.set_adapters method to activate the pixel and toy adapters and specify the weights for how they should be merged.

Copied
pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])

LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on "trigger" words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it's important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts.

Remember to use the trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl (these are found in their repositories) in the prompt to generate an image.

Copied
prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0)
).images[0]
image

Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters. Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the Merge LoRAs guide!
To return to only using one adapter, use the ~PeftAdapterMixin.set_adapters method to activate the "toy" adapter:

Copied
pipe.set_adapters("toy")

prompt = "toy_face of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image

Or to disable all adapters entirely, use the ~PeftAdapterMixin.disable_lora method to return the base model.

Copied
pipe.disable_lora()

prompt = "toy_face of a hacker with a hoodie"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

Customize adapters strength

For even more customization, you can control how strongly the adapter affects each part of the pipeline. For this, pass a dictionary with the control strengths (called "scales") to ~PeftAdapterMixin.set_adapters. For example, here's how you can turn on the adapter for the down parts, but turn it off for the mid and up parts:

Copied
pipe.enable_lora()  # enable lora again, after we disabled it above
prompt = "toy_face of a hacker with a hoodie, pixel art"

adapter_weight_scales = {"unet": {"down": 1, "mid": 0, "up": 0}}
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

Let's see how turning off the down part and turning on the mid and up part respectively changes the image.

Copied
adapter_weight_scales = {"unet": {"down": 0, "mid": 1, "up": 0}}
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

Copied
adapter_weight_scales = {"unet": {"down": 0, "mid": 0, "up": 1}}
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

Looks cool! This is a really powerful feature. You can use it to control the adapter strengths down to per-transformer level. And you can even use it for multiple adapters.

Copied
adapter_weight_scales_toy = 0.5
adapter_weight_scales_pixel = {
    "unet": {
        "down": 0.9,  # all transformers in the down-part will use scale 0.9
        # because, in this example, "mid" is not given, all transformers in the mid part will use the default scale 1.0
        "up": {
            "block_0": 0.6,  # all 3 transformers in the 0th block in the up-part will use scale 0.6
            "block_1": [0.4, 0.8, 1.0],  # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively
        }
    }
}
pipe.set_adapters(["toy", "pixel"], [adapter_weight_scales_toy, adapter_weight_scales_pixel])
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

Manage adapters

You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, use the get_active_adapters() method to check the list of active adapters:

Copied
active_adapters = pipe.get_active_adapters()
active_adapters
["toy", "pixel"]

You can also get the active adapters of each pipeline component with get_list_adapters():

Copied
list_adapters_component_wise = pipe.get_list_adapters()
list_adapters_component_wise
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}

The ~PeftAdapterMixin.delete_adapters function completely removes an adapter and its LoRA layers from a model.
Copied
pipe.delete_adapters("toy")
pipe.get_active_adapters()
["pixel"]
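If you want to strip every LoRA layer from the pipeline (rather than deleting adapters one by one), diffusers also exposes an unload_lora_weights() method. This is a minimal sketch reusing the pipeline from this tutorial, assuming no adapters need to be kept:

Copied
# remove all LoRA layers and restore the original base model weights
pipe.unload_lora_weights()

prompt = "toy_face of a hacker with a hoodie"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image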
Paper_Pages.txt
Paper Pages

Paper pages allow people to find artifacts related to a paper such as models, datasets and apps/demos (Spaces). Paper pages also enable the community to discuss the paper.

Linking a Paper to a model, dataset or Space

If the repository card (README.md) includes a link to a paper on arXiv, the Hugging Face Hub will extract the arXiv ID and include it in the repository's tags. Clicking on the arxiv tag will let you:

Visit the Paper page.
Filter for other models or datasets on the Hub that cite the same paper.

Claiming authorship to a Paper

The Hub will attempt to automatically match papers to users based on their email. If your paper is not linked to your account, you can click on your name in the corresponding Paper page and click "claim authorship". This will automatically redirect you to your paper settings, where you can confirm the request. The admin team will validate your request soon. Once confirmed, the Paper page will show as verified.

Frequently Asked Questions

Can I control which Paper pages show in my profile?

Yes! You can visit your Papers in settings, where you will see a list of verified papers. There, you can click the "Show on profile" checkbox to hide/show it in your profile.

Do you support ACL anthology?

We're starting with arXiv as it accounts for 95% of the paper URLs Hugging Face users have linked in their repos organically. We'll see how this evolves and potentially extend to other paper hosts in the future.

Can I have a Paper page even if I have no model/dataset/Space?

Yes.
You can go to the main Papers page, click search and write the name of the paper or the full arXiv ID. If the paper does not exist, you will get an option to index it. You can also just visit the page hf.co/papers/xxxx.yyyyy, replacing xxxx.yyyyy with the arXiv ID of the paper you wish to index.
Use_tokenizers_from_🤗_Tokenizers.txt
Use tokenizers from 🤗 Tokenizers

The PreTrainedTokenizerFast depends on the 🤗 Tokenizers library. The tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers.

Before getting into the specifics, let's first start by creating a dummy tokenizer in a few lines:

Copied
>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace

>>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

>>> tokenizer.pre_tokenizer = Whitespace()
>>> files = [...]
>>> tokenizer.train(files, trainer)

We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to a JSON file for future re-use.

Loading directly from the tokenizer object

Let's see how to leverage this tokenizer object in the 🤗 Transformers library. The PreTrainedTokenizerFast class allows for easy instantiation, by accepting the instantiated tokenizer object as an argument:

Copied
>>> from transformers import PreTrainedTokenizerFast

>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)

This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the tokenizer page for more information.

Loading from a JSON file

In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:

Copied
>>> tokenizer.save("tokenizer.json")

The path to which we saved this file can be passed to the PreTrainedTokenizerFast initialization method using the tokenizer_file parameter:

Copied
>>> from transformers import PreTrainedTokenizerFast

>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")

This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to the tokenizer page for more information.
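As a quick sanity check, the wrapped tokenizer can be called like any other 🤗 Transformers tokenizer. This is a minimal sketch; the example sentences and the pad-token mapping are illustrative choices, not part of the original guide:

Copied
>>> # assumes the fast_tokenizer created above
>>> fast_tokenizer.pad_token = "[PAD]"  # reuse a special token the dummy tokenizer was trained with
>>> encoding = fast_tokenizer(["Hello, how are you?", "Fine, thanks!"], padding=True)
>>> encoding["input_ids"]
>>> fast_tokenizer.decode(encoding["input_ids"][0])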
Pickle_Scanning.txt
Pickle Scanning

Pickle is a widely used serialization format in ML. Most notably, it is the default format for PyTorch model weights. There are dangerous arbitrary code execution attacks that can be perpetrated when you load a pickle file. We suggest loading models from users and organizations you trust, relying on signed commits, and/or loading models from TF or Jax formats with the from_tf=True auto-conversion mechanism. We also alleviate this issue by displaying/"vetting" the list of imports in any pickled file, directly on the Hub. Finally, we are experimenting with a new, simple serialization format for weights called safetensors.

What is a pickle?

From the official docs: The pickle module implements binary protocols for serializing and de-serializing a Python object structure.

What this means is that pickle is a serializing protocol, something you use to efficiently share data amongst parties. We call a pickle the binary file that was generated while pickling.

At its core, the pickle is basically a stack of instructions or opcodes. As you probably have guessed, it's not human readable. The opcodes are generated when pickling and read sequentially at unpickling. Based on the opcode, a given action is executed.
Here’s a small example: Copied import pickle import pickletools var = "data I want to share with a friend" # store the pickle data in a file named 'payload.pkl' with open ( 'payload.pkl' , 'wb' ) as f: pickle.dump(var, f) # disassemble the pickle # and print the instructions to the command line with open ( 'payload.pkl' , 'rb' ) as f: pickletools.dis(f) When you run this, it will create a pickle file and print the following instructions in your terminal: Copied 0 : \x80 PROTO 4 2 : \x95 FRAME 48 11 : \x8c SHORT_BINUNICODE 'data I want to share with a friend' 57 : \x94 MEMOIZE ( as 0 ) 58 : . STOP highest protocol among opcodes = 4 Don’t worry too much about the instructions for now, just know that the pickletools module is very useful for analyzing pickles. It allows you to read the instructions in the file without executing any code. Pickle is not simply a serialization protocol, it allows more flexibility by giving the ability to users to run python code at de-serialization time. Doesn’t sound good, does it? Why is it dangerous? As we’ve stated above, de-serializing pickle means that code can be executed. But this comes with certain limitations: you can only reference functions and classes from the top level module; you cannot embed them in the pickle file itself. Back to the drawing board: Copied import pickle import pickletools class Data : def __init__ ( self, important_stuff: str ): self.important_stuff = important_stuff d = Data( "42" ) with open ( 'payload.pkl' , 'wb' ) as f: pickle.dump(d, f) When we run this script we get the payload.pkl again. When we check the file’s contents: Copied # cat payload.pkl __main__Data)}important_stuff42sb.% # hexyl payload.pkl ┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐ │00000000│ 80 04 95 33 00 00 00 00 ┊ 00 00 00 8c 08 5f 5f 6d │ו×30000┊000ו__m│ │00000010│ 61 69 6e 5f 5f 94 8c 04 ┊ 44 61 74 61 94 93 94 29 │ain__×ו┊Data×××)│ │00000020│ 81 94 7d 94 8c 0f 69 6d ┊ 70 6f 72 74 61 6e 74 5f │××}×וim┊portant_│ │00000030│ 73 74 75 66 66 94 8c 02 ┊ 34 32 94 73 62 2e │stuff×ו┊42×sb. │ └────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘ We can see that there isn’t much in there, a few opcodes and the associated data. You might be thinking, so what’s the problem with pickle? Let’s try something else: Copied from fickling.pickle import Pickled import pickle # Create a malicious pickle data = "my friend needs to know this" pickle_bin = pickle.dumps(data) p = Pickled.load(pickle_bin) p.insert_python_exec( 'print("you\'ve been pwned !")' ) with open ( 'payload.pkl' , 'wb' ) as f: p.dump(f) # innocently unpickle and get your friend's data with open ( 'payload.pkl' , 'rb' ) as f: data = pickle.load(f) print (data) Here we’re using the fickling library for simplicity. It allows us to add pickle instructions to execute code contained in a string via the exec function. This is how you circumvent the fact that you cannot define functions or classes in your pickles: you run exec on python code saved as a string. When you run this, it creates a payload.pkl and prints the following: Copied you 've been pwned ! my friend needs to know this If we check the contents of the pickle file, we get: Copied # cat payload.pkl c__builtin__ exec (Vprint( "you've been pwned !" 
) tR my friend needs to know this.% # hexyl payload.pkl ┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐ │00000000│ 63 5f 5f 62 75 69 6c 74 ┊ 69 6e 5f 5f 0a 65 78 65 │c__built┊in___exe│ │00000010│ 63 0a 28 56 70 72 69 6e ┊ 74 28 22 79 6f 75 27 76 │c_(Vprin┊t( "you'v│ │00000020│ 65 20 62 65 65 6e 20 70 ┊ 77 6e 65 64 20 21 22 29 │e been p┊wned !" )│ │00000030│ 0a 74 52 80 04 95 20 00 ┊ 00 00 00 00 00 00 8c 1c │_tR×•× 0┊000000ו│ │00000040│ 6d 79 20 66 72 69 65 6e ┊ 64 20 6e 65 65 64 73 20 │my frien┊d needs │ │00000050│ 74 6f 20 6b 6e 6f 77 20 ┊ 74 68 69 73 94 2e │to know ┊this×. │ └────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘ Basically, this is what’s happening when you unpickle: Copied # ... opcodes_stack = [exec_func, "malicious argument" , "REDUCE" ] opcode = stack.pop() if opcode == "REDUCE" : arg = opcodes_stack.pop() callable = opcodes_stack.pop() opcodes_stack.append( callable (arg)) # ... The instructions that pose a threat are STACK_GLOBAL , GLOBAL and REDUCE . REDUCE is what tells the unpickler to execute the function with the provided arguments and *GLOBAL instructions are telling the unpickler to import stuff. To sum up, pickle is dangerous because: when importing a python module, arbitrary code can be executed you can import builtin functions like eval or exec , which can be used to execute arbitrary code when instantiating an object, the constructor may be called This is why it is stated in most docs using pickle, do not unpickle data from untrusted sources. Mitigation Strategies Don’t use pickle Sound advice Luc, but pickle is used profusely and isn’t going anywhere soon: finding a new format everyone is happy with and initiating the change will take some time. So what can we do for now? Load files from users and organizations you trust On the Hub, you have the ability to sign your commits with a GPG key . This does not guarantee that your file is safe, but it does guarantee the origin of the file. If you know and trust user A and the commit that includes the file on the Hub is signed by user A’s GPG key, it’s pretty safe to assume that you can trust the file. Load model weights from TF or Flax TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the from_tf and from_flax kwargs for the from_pretrained method to circumvent this issue. E.g.: Copied from transformers import AutoModel model = AutoModel.from_pretrained( "google-bert/bert-base-cased" , from_flax= True ) Use your own serialization format MsgPack Protobuf Cap’n’proto Avro safetensors This last format, safetensors , is a simple serialization format that we are working on and experimenting with currently! Please help or contribute if you can 🔥. Improve torch.load/save There’s an open discussion in progress at PyTorch on having a Safe way of loading only weights from *.pt file by default – please chime in there! Hub’s Security Scanner What we have now We have created a security scanner that scans every file pushed to the Hub and runs security checks. At the time of writing, it runs two types of scans: ClamAV scans Pickle Import scans For ClamAV scans, files are run through the open-source antivirus ClamAV . While this covers a good amount of dangerous files, it doesn’t cover pickle exploits. We have implemented a Pickle Import scan, which extracts the list of imports referenced in a pickle file. Every time you upload a pytorch_model.bin or any other pickled file, this scan is run. 
On the Hub, the list of imports will be displayed next to each file containing imports. If any import looks suspicious, it will be highlighted.

We get this data thanks to pickletools.genops, which allows us to read the file without executing potentially dangerous code. Note that this is what allows us to know whether, when unpickling a file, it will REDUCE on a potentially dangerous function that was imported by *GLOBAL.

Disclaimer: this is not 100% foolproof. It is your responsibility as a user to check if something is safe or not. We are not actively auditing python packages for safety; the safe/unsafe imports lists we have are maintained in a best-effort manner. Please contact us if you think something is not safe by sending an email to website at huggingface.co, and we will flag it as such.

Potential solutions

One could think of creating a custom Unpickler in the likes of this one. But as we can see in this sophisticated exploit, this won't work. Thankfully, there is always a trace of the eval import, so reading the opcodes directly should allow us to catch malicious usage.

The current solution I propose is creating a file resembling a .gitignore but for imports. This file would be a whitelist of imports that would cause a pytorch_model.bin file to be flagged as dangerous if it contains imports not included in the whitelist. One could imagine having a regex-ish format where you could allow all numpy submodules, for instance, via a simple line like: numpy.* . A minimal sketch of this idea is shown after the reading list below.

Further Reading

pickle - Python object serialization - Python 3.10.6 documentation
Dangerous Pickles - Malicious Python Serialization
GitHub - trailofbits/fickling: A Python pickling decompiler and static analyzer
Exploiting Python pickles
cpython/pickletools.py at 3.10 · python/cpython
cpython/pickle.py at 3.10 · python/cpython
CrypTen/serial.py at main · facebookresearch/CrypTen
CTFtime.org / Balsn CTF 2019 / pyshv1 / Writeup
Rehabilitating Python's pickle module
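Below is a minimal sketch of that whitelist idea, built on the same pickletools.genops trick described above. It is not the Hub's actual scanner: it only resolves GLOBAL opcodes (a real scanner would also have to track the strings pushed on the stack for STACK_GLOBAL), and the allowlist is a hypothetical example.

Copied
import pickletools

# hypothetical allowlist; anything outside these module prefixes gets flagged
ALLOWED_MODULE_PREFIXES = ("collections", "numpy", "torch")

def suspicious_imports(path):
    flagged = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name == "GLOBAL":
                # pickletools decodes the GLOBAL argument as "module name"
                module, name = arg.split(" ", 1)
                if not module.startswith(ALLOWED_MODULE_PREFIXES):
                    flagged.append((module, name))
    return flagged

# for the exec payload built above, this should report [('__builtin__', 'exec')]
print(suspicious_imports("payload.pkl"))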
Libraries.txt
Libraries

The Datasets Hub has support for several libraries in the Open Source ecosystem. Thanks to the huggingface_hub Python library, it's easy to enable sharing your datasets on the Hub. We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward.

The table below summarizes the supported libraries and their level of integration.

Library | Description | Download from Hub | Push to Hub
Argilla | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ✅
Dask | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ | ✅
Datasets | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ | ✅
Distilabel | The framework for synthetic data generation and AI feedback. | ✅ | ✅
DuckDB | In-process SQL OLAP database management system. | ✅ | ✅
FiftyOne | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ | ✅
Pandas | Python data analysis toolkit. | ✅ | ✅
Polars | A DataFrame library on top of an OLAP query engine. | ✅ | ✅
Spark | Real-time, large-scale data processing tool in a distributed environment. | ✅ | ✅
WebDataset | Library to write I/O pipelines for large datasets. | ✅ | ❌

Integrating data libraries and tools with the Hub

This guide is designed for developers and maintainers of data libraries and tools who want to integrate with the Hugging Face Hub.
Whether you're building a data processing library, analysis tool, or any software that needs to interact with datasets, this documentation will help you implement a Hub integration.

The guide covers:

Possible approaches to loading data from the Hub into your library/tool
Possible approaches to uploading data from your library/tool to the Hub

Loading data from the Hub

If you have a library for working with data, it can be helpful for your users to load data from the Hub. In general, we suggest relying on an existing library like datasets, pandas or polars to do this unless you have a specific reason to implement your own. If you require more control over the loading process, you can use the huggingface_hub library, which will allow you, for example, to download a specific subset of files from a repository. You can find more information about loading data from the Hub here.

Integrating via the Dataset Viewer and Parquet Files

The Hub's dataset viewer and Parquet conversion system provide a standardized way to integrate with datasets, regardless of their original format. This infrastructure is a reliable integration layer between the Hub and external libraries.

If the dataset is not already in Parquet, the Hub automatically converts the first 5GB of every dataset to Parquet format to power the dataset viewer and provide consistent access patterns. This standardization offers several benefits for library integrations:

Consistent data access patterns regardless of original format
Built-in dataset preview and exploration through the Hub's dataset viewer. The dataset viewer can also be embedded as an iframe in your applications, making it easy to provide rich dataset previews. For more information about embedding the viewer, see the dataset viewer embedding documentation.
Efficient columnar storage optimized for querying. For example, you could use a tool like DuckDB to query or filter for a specific subset of data.
Parquet is well supported across the machine learning and data science ecosystem.

For more details on working with the Dataset Viewer API, see the Dataset Viewer API documentation.

Uploading data to the Hub

This section covers possible approaches for adding the ability to upload data to the Hub in your library, i.e. how to implement a push_to_hub method. This guide will cover several ways to upload data to the Hub:

using the datasets library and the push_to_hub method
using pandas to write to the Hub
using the huggingface_hub library and its upload methods
directly using the API or Git LFS

Use the datasets library

The most straightforward approach to pushing data to the Hub is to rely on the existing push_to_hub method from the datasets library. The push_to_hub method will automatically handle:

the creation of the repository
the conversion of the dataset to Parquet
chunking the dataset into suitable parts
uploading the data

For example, if you have a synthetic data generation library that returns a list of dictionaries, you could simply do the following:

Copied
from datasets import Dataset

data = [{"prompt": "Write a cake recipe", "response": "Measure 1 cup ..."}]
ds = Dataset.from_list(data)
ds.push_to_hub("USERNAME_OR_ORG/repo_ID")

Examples of this kind of integration: Distilabel

Rely on an existing library's integration with the Hub

Polars, Pandas, Dask, Spark and DuckDB can all write to a Hugging Face Hub repository. See datasets libraries for more details.
If you are already using one of these libraries in your code, adding the ability to push to the Hub is straightforward. For example, if you have a synthetic data generation library that can return a Pandas DataFrame, here is the code you would need to write to the Hub:

Copied
import os

from huggingface_hub import HfApi

# Initialize the Hub API
hf_api = HfApi(token=os.getenv("HF_TOKEN"))

# Create a repository (if it doesn't exist)
hf_api.create_repo(repo_id="username/my-dataset", repo_type="dataset")

# Convert your data to a DataFrame and save directly to the Hub
df.to_parquet("hf://datasets/username/my-dataset/data.parquet")

Using the huggingface_hub Python library

The huggingface_hub Python library offers a more flexible approach to uploading data to the Hub. The library allows you to upload specific files or subsets of files to a repository. This is useful if you have a large dataset that you don't want to convert to Parquet, want to upload a specific subset of files, or want more control over the repo structure. Depending on your use case, you can upload a file or folder at a specific point in your code, i.e., export annotations from a tool to the Hub when a user clicks "push to Hub". For example,

Copied
from huggingface_hub import HfApi

api = HfApi(token=HF_TOKEN)

api.upload_folder(
    folder_path="/my-cool-library/data-folder",
    repo_id="username/my-cool-space",
    repo_type="dataset",
    commit_message="Push annotations to Hub",
    allow_patterns="*.jsonl",
)

You can find more information about ways to upload data to the Hub here.

Alternatively, there are situations where you may want to upload data in the background, for example, synthetic data being generated every 10 minutes. In this case you can use the scheduled_uploads feature of the huggingface_hub library (a minimal sketch is shown at the end of this page). For more details, see the scheduled uploads documentation.

You can see examples of using this approach to upload data to the Hub in:

The fastdata library
This magpie Demo Space

More support

For technical questions about integration, feel free to contact the datasets team at [email protected].
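As a concrete illustration of the scheduled-upload pattern mentioned above, here is a minimal sketch using huggingface_hub's CommitScheduler. The repository ID, folder path and interval are placeholders, and the exact constructor arguments may vary slightly between huggingface_hub versions:

Copied
from huggingface_hub import CommitScheduler

scheduler = CommitScheduler(
    repo_id="username/my-synthetic-dataset",   # placeholder repo
    repo_type="dataset",
    folder_path="generated-data",              # local folder your generator writes to
    path_in_repo="data",                       # destination folder inside the repo
    every=10,                                  # push new or changed files every 10 minutes
)

# ... keep writing files into "generated-data" from your generation loop ...
# the scheduler commits them in a background thread; stop it when you are done
scheduler.stop()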
Create_an_Endpoint.txt
Create an Endpoint

After your first login, you will be directed to the Endpoint creation page. As an example, this guide will go through the steps to deploy distilbert/distilbert-base-uncased-finetuned-sst-2-english for text classification.

1. Enter the Hugging Face Repository ID and your desired Endpoint name

2. Select your Instance Configuration
Choose cloud provider, region, and instance type. If you're looking for a specific cloud provider, region, or instance that you don't yet see available, please let us know.

3. Apply Automatic Scale-to-Zero
Or leave your Endpoint as is.

4. Define the Security Level for the Endpoint

5. Customize your Endpoint
Feel free to customize your Endpoint further in Advanced Configuration: replica autoscaling, task, revision, framework, and container type are accessible in this section.

6. Create your Endpoint
Click Create Endpoint. The cost estimate displayed is per hour and does not take autoscaling into account.

7. Wait for the Endpoint to build, initialize and run
Note that the initialization time depends on the model size and typically takes between 1 and 5 minutes.

8. Test your Endpoint 🎉
You can do this from your Endpoint Overview using the Playground 🏁!
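Once the Endpoint is running, you can also call it programmatically instead of using the Playground. The sketch below assumes a text classification Endpoint like the one deployed in this guide; the Endpoint URL and token are placeholders you can copy from your Endpoint Overview page:

Copied
import requests

ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # a token that can access the Endpoint

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"inputs": "I love this movie!"},
)
print(response.json())  # for text classification, a list of label/score pairs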
Types_of_Evaluations_in_🤗_Evaluate.txt
Types of Evaluations in 🤗 Evaluate

The goal of the 🤗 Evaluate library is to support different types of evaluation, depending on different goals, datasets and models. Here are the types of evaluations that are currently supported with a few examples for each:

Metrics

A metric measures the performance of a model on a given dataset. This is often based on an existing ground truth (i.e. a set of references), but there are also referenceless metrics which allow evaluating generated text by leveraging a pretrained model such as GPT-2.

Examples of metrics include:

Accuracy: the proportion of correct predictions among the total number of cases processed.
Exact Match: the rate at which the input predicted strings exactly match their references.
Mean Intersection over Union (IoU): the area of overlap between the predicted segmentation of an image and the ground truth divided by the area of union between the predicted segmentation and the ground truth.

Metrics are often used to track model performance on benchmark datasets, and to report progress on tasks such as machine translation and image classification.

Comparisons

Comparisons can be useful to compare the performance of two or more models on a single test dataset. For instance, the McNemar Test is a paired nonparametric statistical hypothesis test that takes the predictions of two models and compares them, aiming to measure whether the models' predictions diverge or not. The p value it outputs, which ranges from 0.0 to 1.0, indicates the difference between the two models' predictions, with a lower p value indicating a more significant difference.

Comparisons have yet to be systematically used when comparing and reporting model performance; however, they are useful tools to go beyond simply comparing leaderboard scores and for getting more information on the ways model predictions differ.

Measurements

In the 🤗 Evaluate library, measurements are tools for gaining more insights on datasets and model predictions.
For instance, in the case of datasets, it can be useful to calculate the average word length of a dataset's entries, and how it is distributed; this can help when choosing the maximum input length for a tokenizer. In the case of model predictions, it can help to calculate the average perplexity of model predictions using different models such as GPT-2 and BERT, which can indicate the quality of generated text when no reference is available.

All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary, and help our community carry out more mindful and responsible evaluation. We will continue adding more types of metrics, measurements and comparisons in the coming months, and are counting on community involvement (via PRs and issues) to make the library as extensive and inclusive as possible!
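To make the three types concrete, here is a minimal sketch of loading one module of each kind with the evaluate library. The module names are examples of what is available on the Hub, and the exact compute() arguments can differ from module to module, so treat this as a pattern rather than a reference:

Copied
import evaluate

# metric: compare predictions against ground-truth references
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[0, 1, 1], references=[0, 1, 0]))

# comparison: contrast the predictions of two models on the same references
mcnemar = evaluate.load("mcnemar", module_type="comparison")
print(mcnemar.compute(predictions1=[0, 1, 1], predictions2=[1, 1, 1], references=[0, 1, 0]))

# measurement: inspect properties of the data itself
word_length = evaluate.load("word_length", module_type="measurement")
print(word_length.compute(data=["hello world", "this is a longer sentence"]))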
Utilities_for_Generation.txt
Utilities for Generation

This page lists all the utility functions used by generate().

Generate Outputs

The output of generate() is an instance of a subclass of ModelOutput. This output is a data structure containing all the information returned by generate(), but that can also be used as a tuple or dictionary.

Here's an example:

Copied
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)

The generation_output object is a GenerateDecoderOnlyOutput. As we can see in the documentation of that class below, this means it has the following attributes:

sequences: the generated sequences of tokens
scores (optional): the prediction scores of the language modelling head, for each generation step
hidden_states (optional): the hidden states of the model, for each generation step
attentions (optional): the attention weights of the model, for each generation step

Here we have the scores since we passed along output_scores=True, but we don't have hidden_states and attentions because we didn't pass output_hidden_states=True or output_attentions=True.

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None. Here, for instance, generation_output.scores are all the generated prediction scores of the language modeling head, and generation_output.attentions is None.

When using our generation_output object as a tuple, it only keeps the attributes that don't have None values. Here, for instance, it has two elements, sequences then scores, so

Copied
generation_output[:2]

will return the tuple (generation_output.sequences, generation_output.scores) for instance.

When using our generation_output object as a dictionary, it only keeps the attributes that don't have None values. Here, for instance, it has two keys that are sequences and scores.

We document here all output types.

PyTorch

class transformers.generation.
GenerateDecoderOnlyOutput < source > ( sequences : LongTensor = None scores : typing.Optional[typing.Tuple[torch.FloatTensor]] = None logits : typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values : typing.Optional[typing.Tuple[typing.Tuple[typing.Tuple[torch.FloatTensor]]]] = None ) Parameters sequences ( torch.LongTensor of shape (batch_size, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(torch.FloatTensor) optional , returned when output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . logits ( tuple(torch.FloatTensor) optional , returned when output_logits=True ) — Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . attentions ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size) . past_key_values ( tuple(tuple(torch.FloatTensor))) , optional , returned when use_cache=True ) — Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model’s documentation. Usually, a Cache instance. Outputs of decoder-only generation models, when using non-beam methods. class transformers.generation. GenerateEncoderDecoderOutput < source > ( sequences : LongTensor = None scores : typing.Optional[typing.Tuple[torch.FloatTensor]] = None logits : typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions : typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values : typing.Optional[typing.Tuple[typing.Tuple[typing.Tuple[torch.FloatTensor]]]] = None ) Parameters sequences ( torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . 
scores ( tuple(torch.FloatTensor) optional , returned when output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . logits ( tuple(torch.FloatTensor) optional , returned when output_logits=True ) — Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . encoder_attentions ( tuple(torch.FloatTensor) , optional , returned when output_attentions=True ) — Tuple of torch.FloatTensor (one for each layer of the decoder) of shape (batch_size, num_heads, sequence_length, sequence_length) . encoder_hidden_states ( tuple(torch.FloatTensor) , optional , returned when output_hidden_states=True ) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length) . decoder_hidden_states ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size) . past_key_values ( tuple(tuple(torch.FloatTensor))) , optional , returned when use_cache=True is passed or when config.use_cache=True ) — Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model’s documentation. Usually, a Cache instance. Outputs of encoder-decoder generation models, when using non-beam methods. class transformers.generation. GenerateBeamDecoderOnlyOutput < source > ( sequences : LongTensor = None sequences_scores : typing.Optional[torch.FloatTensor] = None scores : typing.Optional[typing.Tuple[torch.FloatTensor]] = None logits : typing.Optional[typing.Tuple[torch.FloatTensor]] = None beam_indices : typing.Optional[torch.LongTensor] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values : typing.Optional[typing.Tuple[typing.Tuple[typing.Tuple[torch.FloatTensor]]]] = None ) Parameters sequences ( torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . 
sequences_scores ( torch.FloatTensor of shape (batch_size*num_return_sequences) , optional , returned when output_scores=True ) — Final beam scores of the generated sequences . scores ( tuple(torch.FloatTensor) optional , returned when output_scores=True ) — Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size) . logits ( tuple(torch.FloatTensor) optional , returned when output_logits=True ) — Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . beam_indices ( torch.LongTensor , optional , returned when output_scores=True ) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length) . attentions ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size) . past_key_values ( tuple(tuple(torch.FloatTensor))) , optional , returned when use_cache=True ) — Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model’s documentation. Usually, a Cache instance. Outputs of decoder-only generation models, when using beam methods. class transformers.generation. GenerateBeamEncoderDecoderOutput < source > ( sequences : LongTensor = None sequences_scores : typing.Optional[torch.FloatTensor] = None scores : typing.Optional[typing.Tuple[torch.FloatTensor]] = None logits : typing.Optional[typing.Tuple[torch.FloatTensor]] = None beam_indices : typing.Optional[torch.LongTensor] = None encoder_attentions : typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values : typing.Optional[typing.Tuple[typing.Tuple[typing.Tuple[torch.FloatTensor]]]] = None ) Parameters sequences ( torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . sequences_scores ( torch.FloatTensor of shape (batch_size*num_return_sequences) , optional , returned when output_scores=True ) — Final beam scores of the generated sequences . 
scores ( tuple(torch.FloatTensor) optional , returned when output_scores=True ) — Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size) . logits ( tuple(torch.FloatTensor) optional , returned when output_logits=True ) — Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . beam_indices ( torch.LongTensor , optional , returned when output_scores=True ) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length) . encoder_attentions ( tuple(torch.FloatTensor) , optional , returned when output_attentions=True ) — Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length) . encoder_hidden_states ( tuple(torch.FloatTensor) , optional , returned when output_hidden_states=True ) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams*num_return_sequences, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length) . decoder_hidden_states ( tuple(tuple(torch.FloatTensor)) , optional , returned when output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size) . past_key_values ( tuple(tuple(torch.FloatTensor)) , optional , returned when use_cache=True ) — Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a Cache instance. Outputs of encoder-decoder generation models, when using beam methods.
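Before moving on to the TensorFlow classes, here is a hedged sketch of how the beam-search variants documented above are produced in practice (the model choice and the shapes in the comments are illustrative, not normative):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")

# Beam search on a decoder-only model returns a GenerateBeamDecoderOnlyOutput
beam_output = model.generate(
    **inputs,
    num_beams=4,
    num_return_sequences=2,
    max_new_tokens=10,
    return_dict_in_generate=True,
    output_scores=True,
)

print(beam_output.sequences.shape)     # (batch_size*num_return_sequences, sequence_length)
print(beam_output.sequences_scores)    # final beam score of each returned sequence
print(beam_output.beam_indices.shape)  # beam index of each generated token
print(beam_output.scores[0].shape)     # (batch_size*num_beams, vocab_size) at the first step

An encoder-decoder model (for example a T5 checkpoint loaded with AutoModelForSeq2SeqLM) called with the same arguments would return a GenerateBeamEncoderDecoderOutput instead.

TensorFlow class transformers.generation.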
TFGreedySearchEncoderDecoderOutput < source > ( sequences : Tensor = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_attentions : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . encoder_attentions ( tuple(tf.Tensor) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple of tf.Tensor (one for each layer of the decoder) of shape (batch_size, num_heads, sequence_length, sequence_length) . encoder_hidden_states ( tuple(tf.Tensor) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . decoder_hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size) . Base class for outputs of encoder-decoder generation models using greedy search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes) class transformers.generation. 
TFGreedySearchDecoderOnlyOutput < source > ( sequences : Tensor = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size) . Base class for outputs of decoder-only generation models using greedy search. class transformers.generation. TFSampleEncoderDecoderOutput < source > ( sequences : Tensor = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_attentions : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size) . encoder_attentions ( tuple(tf.Tensor) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple of tf.Tensor (one for each layer of the decoder) of shape (batch_size*num_return_sequences, num_heads, sequence_length, sequence_length) . 
encoder_hidden_states ( tuple(tf.Tensor) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_return_sequences, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_return_sequences, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . decoder_hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_return_sequences, generated_length, hidden_size) . Base class for outputs of encoder-decoder generation models using sampling. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes) class transformers.generation. TFSampleDecoderOnlyOutput < source > ( sequences : Tensor = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_return_sequences, config.vocab_size) . attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (num_return_sequences*batch_size, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (num_return_sequences*batch_size, generated_length, hidden_size) . Base class for outputs of decoder-only generation models using sampling. class transformers.generation. 
TFBeamSearchEncoderDecoderOutput < source > ( sequences : Tensor = None sequences_scores : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None beam_indices : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None encoder_attentions : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . sequences_scores ( tf.Tensor of shape (batch_size*num_return_sequences) , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Final beam scores of the generated sequences . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size)`. beam_indices ( tf.Tensor , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) . encoder_attentions ( tuple(tf.Tensor) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple of tf.Tensor (one for each layer of the decoder) of shape (batch_size, num_heads, sequence_length, sequence_length) . encoder_hidden_states ( tuple(tf.Tensor) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams*num_return_sequences, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . 
decoder_hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size) . Base class for outputs of encoder-decoder generation models using beam search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes) class transformers.generation. TFBeamSearchDecoderOnlyOutput < source > ( sequences : Tensor = None sequences_scores : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None beam_indices : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . sequences_scores ( tf.Tensor of shape (batch_size*num_return_sequences) , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Final beam scores of the generated sequences . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size) . beam_indices ( tf.Tensor , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) . attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size) . Base class for outputs of decoder-only generation models using beam search. class transformers.generation. 
TFBeamSampleEncoderDecoderOutput < source > ( sequences : Tensor = None sequences_scores : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None beam_indices : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None encoder_attentions : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size*num_beams, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . sequences_scores ( tf.Tensor of shape (batch_size * num_return_sequence) , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Final beam scores of the generated sequences . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size) . beam_indices ( tf.Tensor , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) . encoder_attentions ( tuple(tf.Tensor) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple of tf.Tensor (one for each layer of the decoder) of shape (batch_size, num_heads, sequence_length, sequence_length) . encoder_hidden_states ( tuple(tf.Tensor) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . 
decoder_hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, generated_length, hidden_size) . Base class for outputs of encoder-decoder generation models using beam sampling. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes) class transformers.generation. TFBeamSampleDecoderOnlyOutput < source > ( sequences : Tensor = None sequences_scores : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None beam_indices : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . sequences_scores ( tf.Tensor of shape (batch_size * num_return_sequence) , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Final beam scores of the generated sequences . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size) . beam_indices ( tf.Tensor , optional , returned when output_scores=True is passed or when config.output_scores=True ) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) . attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size*num_beams, generated_length, hidden_size) . Base class for outputs of decoder-only generation models using beam sample. class transformers.generation. 
TFContrastiveSearchEncoderDecoderOutput < source > ( sequences : Tensor = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_attentions : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None encoder_hidden_states : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None decoder_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None cross_attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None decoder_hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . encoder_attentions ( tuple(tf.Tensor) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple of tf.Tensor (one for each layer of the decoder) of shape (batch_size, num_heads, sequence_length, sequence_length) . encoder_hidden_states ( tuple(tf.Tensor) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) . decoder_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . cross_attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . decoder_hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size) . Base class for outputs of encoder-decoder generation models using contrastive search. Hidden states and attention weights of the decoder (respectively the encoder) can be accessed via the encoder_attentions and the encoder_hidden_states attributes (respectively the decoder_attentions and the decoder_hidden_states attributes) class transformers.generation. 
TFContrastiveSearchDecoderOnlyOutput < source > ( sequences : Tensor = None scores : typing.Optional[typing.Tuple[tensorflow.python.framework.tensor.Tensor]] = None attentions : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None hidden_states : typing.Optional[typing.Tuple[typing.Tuple[tensorflow.python.framework.tensor.Tensor]]] = None ) Parameters sequences ( tf.Tensor of shape (batch_size, sequence_length) ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) optional , returned when output_scores=True is passed or when config.output_scores=True ) — Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size, config.vocab_size) . attentions ( tuple(tuple(tf.Tensor)) , optional , returned when output_attentions=True is passed or config.output_attentions=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, num_heads, generated_length, sequence_length) . hidden_states ( tuple(tuple(tf.Tensor)) , optional , returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) — Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of tf.Tensor of shape (batch_size, generated_length, hidden_size) . Base class for outputs of decoder-only generation models using contrastive search. FLAX class transformers.generation. FlaxSampleOutput < source > ( sequences : Array = None ) Parameters sequences ( jnp.ndarray of shape (batch_size, max_length) ) — The generated sequences. Flax Base class for outputs of decoder-only generation models using sampling. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. class transformers.generation. FlaxGreedySearchOutput < source > ( sequences : Array = None ) Parameters sequences ( jnp.ndarray of shape (batch_size, max_length) ) — The generated sequences. Flax Base class for outputs of decoder-only generation models using greedy search. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. class transformers.generation. FlaxBeamSearchOutput < source > ( sequences : Array = None scores : Array = None ) Parameters sequences ( jnp.ndarray of shape (batch_size, max_length) ) — The generated sequences. scores ( jnp.ndarray of shape (batch_size,) ) — The scores (log probabilities) of the generated sequences. Flax Base class for outputs of decoder-only generation models using greedy search. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. LogitsProcessor A LogitsProcessor can be used to modify the prediction scores of a language model head for generation. PyTorch class transformers. AlternatingCodebooksLogitsProcessor < source > ( input_start_len : int semantic_vocab_size : int codebook_size : int ) Parameters input_start_len ( int ) — The length of the initial input sequence. semantic_vocab_size ( int ) — Vocabulary size of the semantic part, i.e number of tokens associated to the semantic vocabulary. codebook_size ( int ) — Number of tokens associated to the codebook. 
LogitsProcessor enforcing alternated generation between the two codebooks of Bark. This logits processor is exclusively compatible with Bark ’s fine submodel. See the model documentation for examples. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) class transformers. ClassifierFreeGuidanceLogitsProcessor < source > ( guidance_scale ) Parameters guidance_scale (float) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale > 1 . Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. LogitsProcessor for classifier free guidance (CFG). The scores are split over the batch dimension, where the first half correspond to the conditional logits (predicted from the input prompt) and the second half correspond to the unconditional logits (predicted from an empty or ‘null’ prompt). The processor computes a weighted average across the conditional and unconditional logits, parameterised by the guidance_scale . See the paper for more information. This logits processor is exclusively compatible with MusicGen Examples: Copied >>> from transformers import AutoProcessor, MusicgenForConditionalGeneration >>> processor = AutoProcessor.from_pretrained( "facebook/musicgen-small" ) >>> model = MusicgenForConditionalGeneration.from_pretrained( "facebook/musicgen-small" ) >>> inputs = processor( ... text=[ "80s pop track with bassy drums and synth" , "90s rock song with loud guitars and heavy drums" ], ... padding= True , ... return_tensors= "pt" , ... ) >>> audio_values = model.generate(**inputs, do_sample= True , guidance_scale= 3 , max_new_tokens= 256 ) __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. EncoderNoRepeatNGramLogitsProcessor < source > ( encoder_ngram_size : int encoder_input_ids : LongTensor ) Parameters encoder_ngram_size ( int ) — All ngrams of size ngram_size can only occur within the encoder input ids. encoder_input_ids ( int ) — The encoder_input_ids that should not be repeated within the decoder ids. LogitsProcessor that works similarly to NoRepeatNGramLogitsProcessor , but applied exclusively to prevent the repetition of n-grams present in the prompt. It was designed to promote chattiness in a language model, by preventing the generation of n-grams present in previous conversation rounds. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "bigscience/bloomz-560m" ) >>> tokenizer = AutoTokenizer.from_pretrained( "bigscience/bloomz-560m" ) >>> inputs = tokenizer( "Alice: I love cats. What do you love?\nBob:" , return_tensors= "pt" ) >>> # With greedy decoding, we see Bob repeating Alice's opinion. If Bob was a chatbot, it would be a poor one. 
>>> outputs = model.generate(**inputs) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) Alice: I love cats. What do you love? Bob: I love cats. What do you >>> # With this logits processor, we can prevent Bob from repeating Alice's opinion. >>> outputs = model.generate(**inputs, encoder_no_repeat_ngram_size= 2 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) Alice: I love cats. What do you love? Bob: My cats are very cute. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary token when not using beam search or log softmax for each vocabulary token when using beam search. Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. EncoderRepetitionPenaltyLogitsProcessor < source > ( penalty : float encoder_input_ids : LongTensor ) Parameters penalty ( float ) — The parameter for repetition penalty. 1.0 means no penalty. Above 1.0 rewards prompt tokens. Between 0.0 and 1.0 penalizes prompt tokens. encoder_input_ids ( torch.LongTensor ) — The encoder_input_ids that should be repeated within the decoder ids. LogitsProcessor that works similarly to RepetitionPenaltyLogitsProcessor , but with an inverse penalty that is applied to the tokens present in the prompt. In other words, a penalty above 1.0 increases the odds of selecting tokens that were present in the prompt. It was designed to avoid hallucination in input-grounded tasks, like summarization. Although originally intended for encoder-decoder models, it can also be used with decoder-only models like LLMs. Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "bigscience/bloomz-560m" ) >>> model = AutoModelForCausalLM.from_pretrained( "bigscience/bloomz-560m" ) >>> inputs = tokenizer([ "Alice and Bob. The third member's name was" ], return_tensors= "pt" ) >>> gen_out = model.generate(**inputs) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) Alice and Bob. The third member's name was not mentioned. >>> # With the `encoder_repetition_penalty` argument we can trigger this logits processor in `generate`, which can >>> # promote the use of prompt tokens ("Bob" in this example) >>> gen_out = model.generate(**inputs, encoder_repetition_penalty= 1.2 ) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) Alice and Bob. The third member's name was Bob. The third member's name was Bob. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary token when not using beam search or log softmax for each vocabulary token when using beam search. Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores.
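Every processor in this section implements the same __call__ ( input_ids , scores ) interface shown above, and generate() accepts additional processors through its logits_processor argument. As a minimal, illustrative sketch (the BanTokenLogitsProcessor class, the prompt and the banned token below are invented for this example; only LogitsProcessor , LogitsProcessorList and the logits_processor argument of generate() come from the library):

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class BanTokenLogitsProcessor(LogitsProcessor):
    """Toy processor: forbids a single token id at every generation step."""

    def __init__(self, token_id: int):
        self.token_id = token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Set the banned token's score to -inf so it can never be selected
        scores = scores.clone()
        scores[:, self.token_id] = float("-inf")
        return scores

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("I enjoy walking with my cute dog", return_tensors="pt")

banned_id = tokenizer(" dog", add_special_tokens=False).input_ids[0]
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    logits_processor=LogitsProcessorList([BanTokenLogitsProcessor(banned_id)]),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Custom processors supplied this way are applied on top of the processors that generate() builds from the generation config.

class transformers.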
EpsilonLogitsWarper < source > ( epsilon : float filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters epsilon ( float ) — If set to > 0, only the most tokens with probabilities epsilon or higher are kept for generation. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. LogitsProcessor that performs epsilon-sampling, i.e. restricting to tokens with prob >= epsilon . Takes the largest min_tokens_to_keep tokens if no tokens satisfy this constraint. See Truncation Sampling as Language Model Desmoothing for more information. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed( 1 ) >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: 1, 2" , return_tensors= "pt" ) >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample= True ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With epsilon sampling, the output gets restricted to high-probability tokens. Note that this is similar to >>> # Top P sampling, which restricts tokens based on their cumulative probability. >>> # Pro tip: The paper recomends using `epsilon_cutoff` values between 3e-4 and 9e-4 >>> outputs = model.generate(**inputs, do_sample= True , epsilon_cutoff= 0.1 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. EtaLogitsWarper < source > ( epsilon : float filter_value : float = -inf min_tokens_to_keep : int = 1 device : str = 'cpu' ) Parameters epsilon ( float ) — A float value in the range (0, 1). Hyperparameter used to calculate the dynamic cutoff value, eta . The suggested values from the paper ranges from 3e-4 to 4e-3 depending on the size of the model. filter_value ( float , optional , defaults to -inf) — All values that are found to be below the dynamic cutoff value, eta , are set to this float value. This parameter is useful when logits need to be modified for very low probability tokens that should be excluded from generation entirely. min_tokens_to_keep ( int , optional , defaults to 1) — Specifies the minimum number of tokens that must be kept for generation, regardless of their probabilities. For example, if min_tokens_to_keep is set to 1, at least one token will always be kept for generation, even if all tokens have probabilities below the cutoff eta . device ( str , optional , defaults to "cpu" ) — The device to allocate the tensors. 
LogitsProcessor that performs eta-sampling, a technique to filter out tokens with probabilities below a dynamic cutoff value, eta , which is calculated based on a combination of the hyperparameter epsilon and the entropy of the token probabilities, i.e. eta := min(epsilon, sqrt(epsilon * e^-entropy(probabilities))) . Takes the largest min_tokens_to_keep tokens if no tokens satisfy this constraint. It addresses the issue of poor quality in long samples of text generated by neural language models leading to more coherent and fluent text. See Truncation Sampling as Language Model Desmoothing for more information. Note: do_sample must be set to True for this LogitsProcessor to work. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed( 1 ) >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: 1, 2" , return_tensors= "pt" ) >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample= True ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With eta sampling, the output gets restricted to high-probability tokens. You can see it as a dynamic form of >>> # epsilon sampling that adapts its cutoff probability based on the entropy (high entropy = lower cutoff). >>> # Pro tip: The paper recomends using `eta_cutoff` values between 3e-4 to 4e-3 >>> outputs = model.generate(**inputs, do_sample= True , eta_cutoff= 0.1 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. ExponentialDecayLengthPenalty < source > ( exponential_decay_length_penalty : typing.Tuple[int, float] eos_token_id : typing.Union[int, typing.List[int], torch.Tensor] input_ids_seq_length : int ) Parameters exponential_decay_length_penalty ( tuple(int, float) ) — This tuple shall consist of: (start_index, decay_factor) where start_index indicates where penalty starts and decay_factor represents the factor of exponential decay eos_token_id ( Union[int, List[int], torch.Tensor] ) — The id(s) of the end-of-sequence token. input_ids_seq_length ( int ) — The length of the input sequence. LogitsProcessor that exponentially increases the score of the eos_token_id after start_index has been reached. This allows generating shorter sequences without having a hard cutoff, allowing the eos_token to be predicted in a meaningful position. 
Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> text = "Just wanted to let you know, I" >>> inputs = tokenizer(text, return_tensors= "pt" ) >>> # Let's consider that we want short sentences, so we limit `max_length=30`. However, we observe that the answer >>> # tends to end abruptly. >>> set_seed( 1 ) >>> outputs = model.generate(**inputs, do_sample= True , temperature= 0.9 , max_length= 30 , pad_token_id= 50256 ) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which was published in 2010. Although >>> # To promote the appearance of the EOS token at the right time, we add the `exponential_decay_length_penalty = >>> # (start_index, decay_factor)`. Instead of cutting at max_tokens, the output comes to an end before and usually >>> # with more meaning. What happens is that starting from `start_index` the EOS token score will be increased >>> # by `decay_factor` exponentially. However, if you set a high decay factor, you may also end up with abruptly >>> # ending sequences. >>> set_seed( 1 ) >>> outputs = model.generate( ... **inputs, ... do_sample= True , ... temperature= 0.9 , ... max_length= 30 , ... pad_token_id= 50256 , ... exponential_decay_length_penalty=( 15 , 1.6 ), ... ) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which<|endoftext|> >>> # With a small decay factor, you will have a higher chance of getting a meaningful sequence. >>> set_seed( 1 ) >>> outputs = model.generate( ... **inputs, ... do_sample= True , ... temperature= 0.9 , ... max_length= 30 , ... pad_token_id= 50256 , ... exponential_decay_length_penalty=( 15 , 1.01 ), ... ) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which was published in 2010. <|endoftext|> __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. ForcedBOSTokenLogitsProcessor < source > ( bos_token_id : int ) Parameters bos_token_id ( int ) — The id of the token to force as the first generated token. LogitsProcessor that enforces the specified token as the first generated token. Used with encoder-decoder models. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained( "google/flan-t5-small" ) >>> tokenizer = AutoTokenizer.from_pretrained( "google/flan-t5-small" ) >>> inputs = tokenizer( "Translate from English to German: I love cats." 
, return_tensors= "pt" ) >>> # By default, it continues generating according to the model's logits >>> outputs = model.generate(**inputs, max_new_tokens= 10 ) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) <pad> Ich liebe Kitty.</s> >>> # We can use `forced_bos_token_id` to force the start of generation with an encoder-decoder model >>> # (including forcing it to end straight away with an EOS token) >>> outputs = model.generate(**inputs, max_new_tokens= 10 , forced_bos_token_id=tokenizer.eos_token_id) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) <pad></s> __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. ForcedEOSTokenLogitsProcessor < source > ( max_length : int eos_token_id : typing.Union[int, typing.List[int], torch.Tensor] device : str = 'cpu' ) Parameters max_length ( int ) — The maximum length of the sequence to be generated. eos_token_id ( Union[int, List[int], torch.Tensor] ) — The id(s) of the end-of-sequence token. device ( str , optional , defaults to "cpu" ) — The device to allocate the tensors. LogitsProcessor that enforces the specified token as the last generated token when max_length is reached. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: 1, 2, 3" , return_tensors= "pt" ) >>> # By default, it continues generating according to the model's logits >>> outputs = model.generate(**inputs, max_new_tokens= 10 ) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) A sequence: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 >>> # `forced_eos_token_id` ensures the generation ends with an EOS token >>> outputs = model.generate(**inputs, max_new_tokens= 10 , forced_eos_token_id=tokenizer.eos_token_id) >>> print (tokenizer.batch_decode(outputs)[ 0 ]) A sequence: 1 , 2 , 3 , 4 , 5 , 6 , 7 ,<|endoftext|> __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. HammingDiversityLogitsProcessor < source > ( diversity_penalty : float num_beams : int num_beam_groups : int ) Parameters diversity_penalty ( float ) — This value is subtracted from a beam’s score if it generates a token that is the same as a token chosen by any beam from another group at a particular time step. A higher diversity_penalty will enforce greater diversity among the beams.
Adjusting this value can help strike a balance between diversity and natural likelihood. num_beams ( int ) — Number of beams for beam search. 1 means no beam search. num_beam_groups ( int ) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details. LogitsProcessor that enforces diverse beam search. Note that this logits processor is only effective for PreTrainedModel.group_beam_search . See Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models for more details. Traditional beam search often generates very similar sequences across different beams. HammingDiversityLogitsProcessor addresses this by penalizing beams that generate tokens already chosen by other beams in the same time step. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> import torch >>> # Initialize the model and tokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "google-t5/t5-base" ) >>> model = AutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" ) >>> # A long text about the solar system >>> text = ( ... "The Solar System is a gravitationally bound system comprising the Sun and the objects that orbit it, " ... "either directly or indirectly. Of the objects that orbit the Sun directly, the largest are the eight " ... "planets, with the remainder being smaller objects, such as the five dwarf planets and small Solar System " ... "bodies. The Solar System formed 4.6 billion years ago from the gravitational collapse of a giant " ... "interstellar molecular cloud." ... ) >>> inputs = tokenizer( "summarize: " + text, return_tensors= "pt" ) >>> # Generate diverse summary >>> outputs_diverse = model.generate( ... **inputs, ... num_beam_groups= 2 , ... diversity_penalty= 10.0 , ... max_length= 100 , ... num_beams= 4 , ... num_return_sequences= 2 , ... ) >>> summaries_diverse = tokenizer.batch_decode(outputs_diverse, skip_special_tokens= True ) >>> # Generate non-diverse summary >>> outputs_non_diverse = model.generate( ... **inputs, ... max_length= 100 , ... num_beams= 4 , ... num_return_sequences= 2 , ... ) >>> summary_non_diverse = tokenizer.batch_decode(outputs_non_diverse, skip_special_tokens= True ) >>> # With `diversity_penalty`, the resulting beams are much more diverse >>> print (summary_non_diverse) [ 'the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.' , 'the Solar System formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.' ] >>> print (summaries_diverse) [ 'the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets.' , 'the solar system formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. of the objects that orbit the Sun directly, the largest are the eight planets. the rest of the objects are smaller objects, such as the five dwarf planets and small solar system bodies.'
] __call__ < source > ( input_ids : LongTensor scores : FloatTensor current_tokens : LongTensor beam_group_idx : int ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search current_tokens ( torch.LongTensor of shape (batch_size) ) — Indices of input sequence tokens in the vocabulary, corresponding to the tokens selected by the other beam groups in the current generation step. beam_group_idx ( int ) — The index of the beam group currently being processed. Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. InfNanRemoveLogitsProcessor < source > ( ) LogitsProcessor that removes all nan and inf values to prevent the generation method from failing. Note that this logits processor should only be used if necessary, since it can slow down the generation method. This logits processor has no generate example, as there shouldn’t be a correct combination of flags that warrants its use. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. LogitNormalization < source > ( ) LogitsProcessor for normalizing the scores using log-softmax. It’s important to normalize the scores during beam search, after applying the logits processors or warpers, since the search algorithm used in this library doesn’t do it (it only normalizes the scores before these processors are applied, after which they may need re-normalization), yet it still assumes that the scores are normalized when comparing the hypotheses. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> import torch >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: 1, 2, 3" , return_tensors= "pt" ) >>> # By default, the scores are not normalized -- the sum of their exponentials is NOT a normalized probability >>> # distribution, summing to 1 >>> outputs = model.generate(**inputs, return_dict_in_generate= True , output_scores= True ) >>> print (torch.allclose(torch. sum (torch.exp(outputs.scores[- 1 ])), torch.Tensor(( 1.000 ,)), rtol= 1e-4 )) False >>> # Normalizing them may have a positive impact on beam methods, or when using the scores on your application >>> outputs = model.generate(**inputs, renormalize_logits= True , return_dict_in_generate= True , output_scores= True ) >>> print (torch.allclose(torch.
sum (torch.exp(outputs.scores[- 1 ])), torch.Tensor(( 1.000 ,)), rtol= 1e-4 )) True __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. LogitsProcessor < source > ( ) Abstract base class for all logit processors that can be applied during generation. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. LogitsProcessorList < source > ( iterable = () ) This class can be used to create a list of LogitsProcessor to subsequently process a scores input tensor. This class inherits from list and adds a specific call method to apply each LogitsProcessor to the inputs. __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search kwargs ( Dict[str, Any] , optional ) — Additional kwargs that are specific to a logits processor. Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. MinLengthLogitsProcessor < source > ( min_length : int eos_token_id : typing.Union[int, typing.List[int], torch.Tensor] device : str = 'cpu' ) Parameters min_length ( int ) — The minimum length below which the score of eos_token_id is set to -float("Inf") . eos_token_id ( Union[int, List[int], torch.Tensor] ) — The id(s) of the end-of-sequence token. device ( str , optional , defaults to "cpu" ) — The device to allocate the tensors. LogitsProcessor enforcing a min-length by setting EOS probability to 0. Note that, for decoder-only models like most LLMs, the length includes the prompt. 
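Before the generate examples below, the core behaviour can be illustrated by applying the processor directly to a scores tensor. This is a minimal sketch with a toy vocabulary of 10 tokens and a hypothetical EOS id, purely for illustration:

>>> import torch
>>> from transformers import MinLengthLogitsProcessor
>>> eos_token_id = 2  # hypothetical EOS id for the toy vocabulary
>>> processor = MinLengthLogitsProcessor(min_length=5, eos_token_id=eos_token_id)
>>> input_ids = torch.tensor([[7, 8, 9]])  # only 3 tokens so far, below `min_length`
>>> scores = torch.zeros(1, 10)
>>> # While the running length (prompt included) is below `min_length`, the EOS score is forced to -inf
>>> print(processor(input_ids, scores)[0, eos_token_id])
tensor(-inf)

Once the sequence reaches min_length , the scores are passed through unchanged and the model is free to emit EOS.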
Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "bigscience/bloomz-560m" ) >>> model = AutoModelForCausalLM.from_pretrained( "bigscience/bloomz-560m" ) >>> inputs = tokenizer( "A number:" , return_tensors= "pt" ) >>> gen_out = model.generate(**inputs) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) A number: one >>> # setting `min_length` to a value smaller than the uncontrolled output length has no impact >>> gen_out = model.generate(**inputs, min_length= 3 ) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) A number: one >>> # setting a larger `min_length` will force the model to generate beyond its natural ending point, which is not >>> # necessarily incorrect >>> gen_out = model.generate(**inputs, min_length= 10 ) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) A number: one thousand, nine hundred and ninety-four __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. MinNewTokensLengthLogitsProcessor < source > ( prompt_length_to_skip : int min_new_tokens : int eos_token_id : typing.Union[int, typing.List[int], torch.Tensor] device : str = 'cpu' ) Parameters prompt_length_to_skip ( int ) — The input tokens length. Not a valid argument when used with generate as it will automatically assign the input length. min_new_tokens ( int ) — The minimum new tokens length below which the score of eos_token_id is set to -float("Inf") . eos_token_id ( Union[int, List[int], torch.Tensor] ) — The id(s) of the end-of-sequence token. device ( str , optional , defaults to "cpu" ) — The device to allocate the tensors. LogitsProcessor enforcing a min-length of new tokens by setting EOS (End-Of-Sequence) token probability to 0. Contrarily to MinLengthLogitsProcessor , this processor ignores the prompt. Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "bigscience/bloomz-560m" ) >>> model = AutoModelForCausalLM.from_pretrained( "bigscience/bloomz-560m" ) >>> inputs = tokenizer([ "A number:" ], return_tensors= "pt" ) >>> gen_out = model.generate(**inputs) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) A number: one >>> # setting `min_new_tokens` will force the model to generate beyond its natural ending point, which is not >>> # necessarily incorrect >>> gen_out = model.generate(**inputs, min_new_tokens= 2 ) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) A number: one thousand __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? 
scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. MinPLogitsWarper < source > ( min_p : float filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters min_p ( float ) — Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0 and 1. Typical values are in the 0.01-0.2 range, comparably selective to setting top_p in the 0.99-0.8 range (use the opposite of normal top_p values). filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. LogitsProcessor that performs min-p, i.e. keeps all tokens that are above a minimum probability, scaled by the probability of the most likely token. As a result, the filter becomes more aggressive in the presence of high-probability tokens, which is a sign of a confident output that we shouldn’t deviate from. Often used together with TemperatureLogitsWarper . Used as an alternative to TopPLogitsWarper and TopKLogitsWarper . Created by @menhguin and @kalomaze (github handles). Code adapted from this external PR . Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed( 1 ) >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: 1, 2" , return_tensors= "pt" ) >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample= True ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With `min_p` sampling, the output gets restricted to high-probability tokens. >>> # Pro tip: In practice, LLMs use `min_p` in the 0.01-0.2 range. >>> outputs = model.generate(**inputs, do_sample= True , min_p= 0.1 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) class transformers. NoBadWordsLogitsProcessor < source > ( bad_words_ids : typing.List[typing.List[int]] eos_token_id : typing.Union[int, typing.List[int], torch.Tensor, NoneType] = None ) Parameters bad_words_ids ( List[List[int]] ) — List of lists of token ids that are not allowed to be generated. eos_token_id ( Union[int, List[int], torch.Tensor] , optional ) — The id(s) of the end-of-sequence token. LogitsProcessor that enforces that specified sequences will never be selected. In order to get the token ids of the words that should not appear in the generated text, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids . The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers . Read more here .
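As a complement to the generate example below, a minimal direct-call sketch (with made-up token ids, purely for illustration) shows the effect of the processor on a scores tensor: a banned single-token word simply has its score forced to -inf at every step.

>>> import torch
>>> from transformers import NoBadWordsLogitsProcessor
>>> bad_words_ids = [[42]]  # hypothetical id of a banned token
>>> processor = NoBadWordsLogitsProcessor(bad_words_ids=bad_words_ids, eos_token_id=0)
>>> input_ids = torch.tensor([[1, 2, 3]])
>>> scores = torch.zeros(1, 100)
>>> # The banned token can no longer be selected, regardless of its original score
>>> print(processor(input_ids, scores)[0, 42])
tensor(-inf)

Bad words with more than one token are only banned once all of their preceding tokens have already been generated.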
Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> inputs = tokenizer([ "In a word, the cake is a" ], return_tensors= "pt" ) >>> output_ids = model.generate(inputs[ "input_ids" ], max_new_tokens= 5 , pad_token_id=tokenizer.eos_token_id) >>> print (tokenizer.batch_decode(output_ids, skip_special_tokens= True )[ 0 ]) In a word, the cake is a bit of a mess. >>> # Now let's take the bad words out. Please note that the tokenizer is initialized differently >>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained( "openai-community/gpt2" , add_prefix_space= True ) >>> def get_tokens_as_list ( word_list ): ... "Converts a sequence of words into a list of tokens" ... tokens_list = [] ... for word in word_list: ... tokenized_word = tokenizer_with_prefix_space([word], add_special_tokens= False ).input_ids[ 0 ] ... tokens_list.append(tokenized_word) ... return tokens_list >>> bad_words_ids = get_tokens_as_list(word_list=[ "mess" ]) >>> output_ids = model.generate( ... inputs[ "input_ids" ], max_new_tokens= 5 , bad_words_ids=bad_words_ids, pad_token_id=tokenizer.eos_token_id ... ) >>> print (tokenizer.batch_decode(output_ids, skip_special_tokens= True )[ 0 ]) In a word, the cake is a bit of a surprise. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. NoRepeatNGramLogitsProcessor < source > ( ngram_size : int ) Parameters ngram_size ( int ) — All ngrams of size ngram_size can only occur once. N-grams are groups of "n" consecutive words, characters, or tokens taken from a sequence of text. Given the sentence: "She runs fast", the bi-grams (n=2) would be ("she", "runs") and ("runs", "fast"). In text generation, avoiding repetitions of word sequences provides a more diverse output. This LogitsProcessor enforces no repetition of n-grams by setting the scores of banned tokens to negative infinity, which eliminates those tokens from consideration when further processing the scores. Note that, for decoder-only models like most LLMs, the prompt is also considered to obtain the n-grams (see the Fairseq implementation for reference). Use n-gram penalties with care. For instance, penalizing 2-grams (bigrams) in an article about the city of New York might lead to undesirable outcomes where the city’s name appears only once in the entire text. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer([ "Today I" ], return_tensors= "pt" ) >>> output = model.generate(**inputs) >>> print (tokenizer.decode(output[ 0 ], skip_special_tokens= True )) Today I’m not sure if I’m going to be able to do it. >>> # Now let's add ngram size using `no_repeat_ngram_size`.
This stops the repetitions ("I’m") in the output. >>> output = model.generate(**inputs, no_repeat_ngram_size= 2 ) >>> print (tokenizer.decode(output[ 0 ], skip_special_tokens= True )) Today I’m not sure if I can get a better understanding of the nature of this issue __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. PrefixConstrainedLogitsProcessor < source > ( prefix_allowed_tokens_fn : typing.Callable[[int, torch.Tensor], typing.List[int]] num_beams : int ) Parameters prefix_allowed_tokens_fn ( Callable[[int, torch.Tensor], List[int]] ) — This function constrains the beam search to allowed tokens only at each step. This function takes 2 arguments: the input ids inputs_ids and the batch ID batch_id . It has to return a list with the allowed tokens for the next generation step conditioned on the previously generated tokens inputs_ids and the batch ID batch_id . LogitsProcessor that enforces constrained generation and is useful for prefix-conditioned constrained generation. See Autoregressive Entity Retrieval for more information. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "bigscience/bloomz-560m" ) >>> tokenizer = AutoTokenizer.from_pretrained( "bigscience/bloomz-560m" ) >>> inputs = tokenizer( "Alice and Bob" , return_tensors= "pt" ) >>> # By default, it continues generating according to the model's logits >>> outputs = model.generate(**inputs, max_new_tokens= 5 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) Alice and Bob are friends >>> # We can constrain it with `prefix_allowed_tokens_fn` to force a certain behavior based on a prefix. >>> # For instance, we can force an entire entity to be generated when its beginning is detected. >>> entity = tokenizer( " Bob Marley" , return_tensors= "pt" ).input_ids[ 0 ] # 3 tokens >>> def prefix_allowed_tokens_fn ( batch_id, input_ids ): ... ''' ... Attempts to generate 'Bob Marley' when 'Bob' is detected. ... In this case, `batch_id` is not used, but you can set rules for each batch member. ... ''' ... if input_ids[- 1 ] == entity[ 0 ]: ... return [entity[ 1 ].item()] ... elif input_ids[- 2 ] == entity[ 0 ] and input_ids[- 1 ] == entity[ 1 ]: ... return [entity[ 2 ].item()] ... return list ( range (tokenizer.vocab_size)) # If no match, allow all tokens >>> outputs = model.generate(**inputs, max_new_tokens= 5 , prefix_allowed_tokens_fn=prefix_allowed_tokens_fn) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) Alice and Bob Marley __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head.
These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. RepetitionPenaltyLogitsProcessor < source > ( penalty : float ) Parameters penalty ( float ) — The parameter for repetition penalty. 1.0 means no penalty. Above 1.0 penalizes previously generated tokens. Between 0.0 and 1.0 rewards previously generated tokens. LogitsProcessor that prevents the repetition of previous tokens through a penalty. This penalty is applied at most once per token. Note that, for decoder-only models like most LLMs, the considered tokens include the prompt. In the original paper , the authors suggest the use of a penalty of around 1.2 to achieve a good balance between truthful generation and lack of repetition. To penalize and reduce repetition, use penalty values above 1.0, where a higher value penalizes more strongly. To reward and encourage repetition, use penalty values between 0.0 and 1.0, where a lower value rewards more strongly. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> # Initializing the model and tokenizer for it >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer([ "I'm not going to" ], return_tensors= "pt" ) >>> # This shows a normal generate without any specific parameters >>> summary_ids = model.generate(**inputs) >>> print (tokenizer.batch_decode(summary_ids, skip_special_tokens= True )[ 0 ]) I 'm not going to be able to do that. I' m going to be able to do that >>> # This generates a penalty for repeated tokens >>> penalized_ids = model.generate(**inputs, repetition_penalty= 1.1 ) >>> print (tokenizer.batch_decode(penalized_ids, skip_special_tokens= True )[ 0 ]) I 'm not going to be able to do that. I' ll just have to go out and play __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. SequenceBiasLogitsProcessor < source > ( sequence_bias : typing.List[typing.List[typing.Union[typing.List[int], float]]] ) Parameters sequence_bias ( List[List[Union[List[int], float]]] ) — List of lists that maps a sequence of tokens to its bias term (e.g. [[[10, 45], -2.0], [[64], -7.5]] ). Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. If a sequence has a length of 1, its bias will always be applied. Otherwise, the bias will only be applied if the sequence in question is about to be completed (in the token selection step after this processor is applied). LogitsProcessor that applies an additive bias on sequences. The bias is applied to the last token of a sequence when the next generated token can complete it. 
Consequently, to get the most out of biasing sequences with more than one token, consider using beam methods (to gracefully work around partially completed sequences that have a negative bias) and applying the bias to their prefixes (to ensure the bias is applied earlier). In order to get the token ids of the sequences that you want to bias, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids . The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers . Read more here . Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> inputs = tokenizer([ "The full name of Donald is Donald" ], return_tensors= "pt" ) >>> summary_ids = model.generate(inputs[ "input_ids" ], max_new_tokens= 4 ) >>> print (tokenizer.batch_decode(summary_ids, skip_special_tokens= True )[ 0 ]) The full name of Donald is Donald J. Trump Jr >>> # Now let's control generation through a bias. Please note that the tokenizer is initialized differently! >>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained( "openai-community/gpt2" , add_prefix_space= True ) >>> def get_tokens ( word ): ... return tokenizer_with_prefix_space([word], add_special_tokens= False ).input_ids[ 0 ] >>> # If we add a negative bias without beam search, it may become "stuck" in a prefix without good continuations >>> sequence_bias = [[get_tokens( "Trump" ), - 10.0 ]] >>> biased_ids = model.generate(inputs[ "input_ids" ], max_new_tokens= 4 , sequence_bias=sequence_bias) >>> print (tokenizer.batch_decode(biased_ids, skip_special_tokens= True )[ 0 ]) The full name of Donald is Donald J. Donald, >>> biased_ids = model.generate(inputs[ "input_ids" ], max_new_tokens= 4 , num_beams= 4 , sequence_bias=sequence_bias) >>> print (tokenizer.batch_decode(biased_ids, skip_special_tokens= True )[ 0 ]) The full name of Donald is Donald Rumsfeld, >>> # We can also add a positive bias to nudge the model towards specific tokens or continuations >>> sequence_bias = [[get_tokens( "Donald Duck" ), 10.0 ]] >>> biased_ids = model.generate(inputs[ "input_ids" ], max_new_tokens= 4 , num_beams= 4 , sequence_bias=sequence_bias) >>> print (tokenizer.batch_decode(biased_ids, skip_special_tokens= True )[ 0 ]) The full name of Donald is Donald Duck. __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. SuppressTokensAtBeginLogitsProcessor < source > ( begin_suppress_tokens begin_index device : str = 'cpu' ) SuppressTokensAtBeginLogitsProcessor suppresses a list of tokens as soon as the generate function starts generating using begin_index tokens. This should ensure that the tokens defined by begin_suppress_tokens are not generated at the beginning.
Originally created for Whisper . Examples: Copied >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained( "openai/whisper-tiny.en" ) >>> model = WhisperForConditionalGeneration.from_pretrained( "openai/whisper-tiny.en" ) >>> ds = load_dataset( "hf-internal-testing/librispeech_asr_dummy" , "clean" , split= "validation" ) >>> inputs = processor(ds[ 0 ][ "audio" ][ "array" ], return_tensors= "pt" ) >>> # Whisper has `begin_suppress_tokens` set by default (= `[220, 50256]`). 50256 is the EOS token, so this means >>> # it can't generate an EOS token in the first iteration, but it can in the others. >>> outputs = model.generate(**inputs, return_dict_in_generate= True , output_scores= True ) >>> print (outputs.scores[ 0 ][ 0 , 50256 ]) tensor(-inf) >>> print (outputs.scores[- 1 ][ 0 , 50256 ]) # in other places we can see some probability mass for EOS tensor( 29.9010 ) >>> # If we disable `begin_suppress_tokens`, we can generate EOS in the first iteration. >>> outputs = model.generate( ... **inputs, return_dict_in_generate= True , output_scores= True , begin_suppress_tokens= None ... ) >>> print (outputs.scores[ 0 ][ 0 , 50256 ]) tensor( 11.2027 ) __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. SuppressTokensLogitsProcessor < source > ( suppress_tokens device : str = 'cpu' ) This processor can be used to suppress a list of tokens. The processor will set their log probs to -inf so that they are not generated. Originally created for Whisper . Examples: Copied >>> from transformers import AutoProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained( "openai/whisper-tiny.en" ) >>> model = WhisperForConditionalGeneration.from_pretrained( "openai/whisper-tiny.en" ) >>> ds = load_dataset( "hf-internal-testing/librispeech_asr_dummy" , "clean" , split= "validation" ) >>> inputs = processor(ds[ 0 ][ "audio" ][ "array" ], return_tensors= "pt" ) >>> # Whisper has a long list of suppressed tokens. For instance, in this case, the token 1 is suppressed by default. >>> outputs = model.generate(**inputs, return_dict_in_generate= True , output_scores= True ) >>> print (outputs.scores[ 1 ][ 0 , 1 ]) # 1 (and not 0) is the first freely generated token tensor(-inf) >>> # If we disable `suppress_tokens`, we can generate it. >>> outputs = model.generate(**inputs, return_dict_in_generate= True , output_scores= True , suppress_tokens= None ) >>> print (outputs.scores[ 1 ][ 0 , 1 ]) tensor( 6.0678 ) __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs?
scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. SynthIDTextWatermarkLogitsProcessor < source > ( ngram_len : int keys : typing.List[int] sampling_table_size : int sampling_table_seed : int context_history_size : int device : device skip_first_ngram_calls : bool = False debug_mode : bool = False ) Parameters ngram_len ( int ) — Ngram length. keys ( List[int] ) — A sequence of watermarking keys, one for each depth. sampling_table_size ( int ) — Size of the sampling table. sampling_table_seed ( int ) — Random seed to generate the sampling table. context_history_size ( int ) — Size of the tensor to keep track of seen contexts. device ( torch.device ) — Device to use. skip_first_ngram_calls ( bool , optional , defaults to False ) — Whether to skip the first ngram calls. debug_mode ( bool , optional , defaults to False ) — When enabled, the logits are replaced with a uniform distribution before the watermarking modification is applied. This is to test the implementation. Logits processor that implements watermarking techniques for text generation models. This class facilitates the application of SynthID text watermarking, a method for embedding imperceptible signals into generated text to aid in detecting synthetic content. It operates by subtly manipulating the probabilities of token selection during text generation in a manner that can be reliably recovered later for verification. Key Features: State Management: Maintains internal state to track token sequences and generate watermarking keys dynamically. Key Generation: Computes hashes based on token sequences and watermarking parameters to create unique keys for each position. G-Value Sampling: Employs a pre-computed sampling table to sample watermarking values (g-values) based on the generated keys. Score Adjustment: Applies calculated g-values to modify token probabilities during generation, embedding the watermark. Context Repetition Handling: Incorporates logic to avoid watermarking tokens in repeated contexts, preserving naturalness. EOS Token Masking: Supports masking end-of-sentence tokens to prevent their inclusion in watermarking calculations. Utility Functions: Provides functions to compute g-values directly, check for context repetition, create EOS token masks, and estimate expected mean g-values. Refer to the paper ( https://www.nature.com/articles/s41586-024-08025-4 ) for more details. Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig >>> tokenizer = AutoTokenizer.from_pretrained( 'google/gemma-2-2b' , padding_side= "left" ) >>> model = AutoModelForCausalLM.from_pretrained( 'google/gemma-2-2b' ) >>> # SynthID Text configuration >>> watermarking_config = SynthIDTextWatermarkingConfig( ... keys=[ 654 , 400 , 836 , 123 , 340 , 443 , 597 , 160 , 57 ], ... ngram_len= 5 , ... ) >>> # Generation with watermarking >>> tokenized_prompts = tokenizer([ "Once upon a time, " ], return_tensors= "pt" , padding= True ) >>> output_sequences = model.generate( ... **tokenized_prompts, watermarking_config=watermarking_config, do_sample= True , max_new_tokens= 10 ...
) >>> watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens= True ) __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. TemperatureLogitsWarper < source > ( temperature : float ) Parameters temperature ( float ) — Strictly positive float value used to modulate the logits distribution. A value smaller than 1 decreases randomness (and vice versa), with 0 being equivalent to shifting all probability mass to the most likely token. LogitsProcessor for temperature (exponential scaling output probability distribution), which effectively means that it can control the randomness of the predicted tokens. Often used together with TopPLogitsWarper and TopKLogitsWarper . Make sure that do_sample=True is included in the generate arguments otherwise the temperature value won’t have any effect. Examples: Copied >>> import torch >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed( 0 ) # for reproducibility >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> model.config.pad_token_id = model.config.eos_token_id >>> inputs = tokenizer([ "Hugging Face Company is" ], return_tensors= "pt" ) >>> # With temperature=1.0, the default, we consistently get random outputs due to random sampling. >>> generate_kwargs = { "max_new_tokens" : 10 , "do_sample" : True , "temperature" : 1.0 , "num_return_sequences" : 2 } >>> outputs = model.generate(**inputs, **generate_kwargs) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )) [ 'Hugging Face Company is one of these companies that is going to take a' , "Hugging Face Company is a brand created by Brian A. O'Neil" ] >>> # However, with temperature close to 0, it approximates greedy decoding strategies (invariant) >>> generate_kwargs[ "temperature" ] = 0.0001 >>> outputs = model.generate(**inputs, **generate_kwargs) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )) [ 'Hugging Face Company is a company that has been around for over 20 years' , 'Hugging Face Company is a company that has been around for over 20 years' ] __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. 
TopKLogitsWarper < source > ( top_k : int filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters top_k ( int ) — The number of highest probability vocabulary tokens to keep for top-k-filtering. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. LogitsProcessor that performs top-k, i.e. restricting to the k highest probability elements. Often used together with TemperatureLogitsWarper and TopPLogitsWarper . Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed( 1 ) >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: A, B, C, D" , return_tensors= "pt" ) >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample= True ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: A, B, C, D, E — S — O, P — R >>> # With `top_k` sampling, the output gets restricted to the k most likely tokens. >>> # Pro tip: In practice, LLMs use `top_k` in the 5-50 range. >>> outputs = model.generate(**inputs, do_sample= True , top_k= 2 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: A, B, C, D, E, F, G, H, I __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. TopPLogitsWarper < source > ( top_p : float filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters top_p ( float ) — If set to < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. LogitsProcessor that performs top-p, i.e. restricting to the smallest set of most probable tokens whose cumulative probability adds up to top_p or higher. Often used together with TemperatureLogitsWarper and TopKLogitsWarper . Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed( 1 ) >>> model = AutoModelForCausalLM.from_pretrained( "distilbert/distilgpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "distilbert/distilgpt2" ) >>> inputs = tokenizer( "A sequence: 1, 2" , return_tensors= "pt" ) >>> # With sampling, the output is unexpected -- sometimes too unexpected. >>> outputs = model.generate(**inputs, do_sample= True ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 | < 4 (left-hand pointer) ; <BLANKLINE> <BLANKLINE> >>> # With `top_p` sampling, the output gets restricted to high-probability tokens.
>>> # Pro tip: In practice, LLMs use `top_p` in the 0.9-0.95 range. >>> outputs = model.generate(**inputs, do_sample= True , top_p= 0.1 ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) A sequence: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. TypicalLogitsWarper < source > ( mass : float = 0.9 filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters mass ( float , optional , defaults to 0.9) — Value of typical_p between 0 and 1 inclusive. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. LogitsProcessor that performs typical decoding. Inspired by how humans use language, it prioritizes tokens whose log probability is close to the entropy of the token probability distribution. This means that the most likely tokens may be discarded in the process. See Typical Decoding for Natural Language Generation for more information. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> model = AutoModelForCausalLM.from_pretrained( "bigscience/bloomz-560m" ) >>> tokenizer = AutoTokenizer.from_pretrained( "bigscience/bloomz-560m" ) >>> inputs = tokenizer( "1, 2, 3" , return_tensors= "pt" ) >>> # We can see that greedy decoding produces a sequence of numbers >>> outputs = model.generate(**inputs) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , >>> # For this particular seed, we can see that sampling produces nearly the same low-information (= low entropy) >>> # sequence >>> set_seed( 18 ) >>> outputs = model.generate(**inputs, do_sample= True ) >>> print (tokenizer.batch_decode(outputs, skip_special_tokens= True )[ 0 ]) 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 and 10 >>> # With `typical_p` set, the most obvious sequence is no longer produced, which may be good for your problem >>> set_seed( 18 ) >>> outputs = model.generate( ... **inputs, do_sample= True , typical_p= 0.1 , return_dict_in_generate= True , output_scores= True ... ) >>> print (tokenizer.batch_decode(outputs.sequences, skip_special_tokens= True )[ 0 ]) 1 , 2 , 3 and 5 >>> # We can see that the token corresponding to "4" (token 934) in the second position, the most likely token >>> # as seen with greedy decoding, was entirely blocked out >>> print (outputs.scores[ 1 ][ 0 , 934 ]) tensor(-inf) __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head.
These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. UnbatchedClassifierFreeGuidanceLogitsProcessor < source > ( guidance_scale : float model unconditional_ids : typing.Optional[torch.LongTensor] = None unconditional_attention_mask : typing.Optional[torch.LongTensor] = None use_cache : typing.Optional[bool] = True ) Parameters guidance_scale ( float ) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale != 1 . Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. A value smaller than 1 has the opposite effect, while making the negative prompt provided with negative_prompt_ids (if any) act as a positive prompt. model ( PreTrainedModel ) — The model computing the unconditional scores. It should be the same model as the one computing the conditional scores. Both models must use the same tokenizer. unconditional_ids ( torch.LongTensor of shape (batch_size, sequence_length) , optional ) — Indices of input sequence tokens in the vocabulary for the unconditional branch. If unset, will default to the last token of the prompt. unconditional_attention_mask ( torch.LongTensor of shape (batch_size, sequence_length) , optional ) — Attention mask for unconditional_ids. use_cache ( bool , optional , defaults to True ) — Whether to cache key/values during the negative prompt forward pass. Logits processor for Classifier-Free Guidance (CFG). The processor computes a weighted average across scores from the prompt conditional and prompt unconditional (or negative) logits, parameterized by the guidance_scale . The unconditional scores are computed internally by prompting the model with the unconditional_ids branch. See the paper for more information. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> inputs = tokenizer([ "Today, a dragon flew over Paris, France," ], return_tensors= "pt" ) >>> out = model.generate(inputs[ "input_ids" ], guidance_scale= 1.5 ) >>> tokenizer.batch_decode(out, skip_special_tokens= True )[ 0 ] 'Today, a dragon flew over Paris, France, killing at least 50 people and injuring more than 100' >>> # with a negative prompt >>> neg_inputs = tokenizer([ "A very happy event happened," ], return_tensors= "pt" ) >>> out = model.generate(inputs[ "input_ids" ], guidance_scale= 2 , negative_prompt_ids=neg_inputs[ "input_ids" ]) >>> tokenizer.batch_decode(out, skip_special_tokens= True )[ 0 ] 'Today, a dragon flew over Paris, France, killing at least 130 people. French media reported that' >>> # with a positive prompt >>> neg_inputs = tokenizer([ "A very happy event happened," ], return_tensors= "pt" ) >>> out = model.generate(inputs[ "input_ids" ], guidance_scale= 0 , negative_prompt_ids=neg_inputs[ "input_ids" ]) >>> tokenizer.batch_decode(out, skip_special_tokens= True )[ 0 ] "Today, a dragon flew over Paris, France, and I'm very happy to be here. I" __call__ < source > ( input_ids scores ) class transformers.
WhisperTimeStampLogitsProcessor < source > ( generate_config begin_index : typing.Optional[int] = None _detect_timestamp_from_logprob : typing.Optional[bool] = None ) Parameters generate_config ( GenerateConfig ) — The generate config used to generate the output. The following parameters are required: eos_token_id ( int , optional , defaults to 50257): The id of the end-of-sequence token. no_timestamps_token_id ( int , optional , defaults to 50363): The id of the "<|notimestamps|>" token. max_initial_timestamp_index ( int , optional , defaults to 1): Used to set the maximum value of the initial timestamp. This is used to prevent the model from predicting timestamps that are too far in the future. begin_index ( Optional , optional ) — Token index of the first token that is generated by the model. _detect_timestamp_from_logprob ( bool , optional ) — Whether timestamps can be predicted from logprobs over all timestamps. LogitsProcessor that modifies the logits for the generation of timestamps in the transcription. When the input tokens are at a specific threshold, the processor sets the scores to negative infinity. The processor makes sure that timestamp tokens appear in pairs, by masking out the logits that would break this pairing pattern. This is done to maintain the consistency and structure of generated timestamps. It also ensures that when the predicted probability of sampling any of the timestamp token is greater than any individual non-timestamp token, those non-timestamp logits are set to negative infinity. This is done to ensure the generation of timestamps over other potential tokens. See the paper for more information. Examples: Copied >>> import torch >>> from transformers import AutoProcessor, WhisperForConditionalGeneration, GenerationConfig >>> from datasets import load_dataset >>> processor = AutoProcessor.from_pretrained( "openai/whisper-tiny.en" ) >>> model = WhisperForConditionalGeneration.from_pretrained( "openai/whisper-tiny.en" ) >>> ds = load_dataset( "hf-internal-testing/librispeech_asr_dummy" , "clean" , split= "validation" ) >>> inputs = processor(ds[ 3 ][ "audio" ][ "array" ], return_tensors= "pt" ) >>> input_features = inputs.input_features >>> #Displaying timestamps >>> generated_ids = model.generate(inputs=input_features, return_timestamps= True ) >>> transcription = processor.batch_decode(generated_ids, decode_with_timestamps= True )[ 0 ] >>> print ( "Transcription:" , transcription) Transcription: <|startoftranscript|><| 0.00 |> He has grave doubts whether Sir Frederick Layton 's work is really Greek after all, and can<|6.44|><|6.44|> discover in it but little of rocky Ithaca.<|9.44|><|endoftext|> >>> #No timestamps & change EOS: >>> #This allows the user to select a specific token to terminate the sequence on, in this case it' s the word "can" ( 460 ) >>> model.generation_config.eos_token_id = 460 >>> generated_ids = model.generate(inputs=input_features,return_timestamps= False ) >>> transcription = processor.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] >>> print ( "Transcription:" , transcription) Transcription: He has grave doubts whether Sir Frederick Layton 's work is really Greek after all and can __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? 
scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. WatermarkLogitsProcessor < source > ( vocab_size device greenlist_ratio : float = 0.25 bias : float = 2.0 hashing_key : int = 15485863 seeding_scheme : str = 'lefthash' context_width : int = 1 ) Parameters vocab_size ( int ) — The model tokenizer’s vocab_size. Used to calculate “green” tokens ratio. device ( str ) — The device where model is allocated. greenlist_ratio ( float , optional, optional , defaults to 0.25) — The ratio of “green” tokens used to the vocabulary size. Defaults to 0.25. bias ( float , optional, optional , defaults to 2.0) — The bias added to the selected “green” tokens’ logits. Consider lowering the bias if the text generation quality degrades. Recommended values are in the range of [0.5, 2.0]. Defaults to 2.0. hashing_key ( int , optional, optional , defaults to 15485863) — Key used for hashing. If you deploy this watermark, we advise using another private key. Defaults to 15485863 (the millionth prime). seeding_scheme ( str , optional, optional , defaults to "lefthash" ) — The seeding scheme used for selecting “green” tokens. Accepts values: “lefthash” (default): “green” tokens selection depend on the last token (Algorithm 2 from paper) “selfhash”: “green” tokens selection depends on the current token itself (Algorithm 3 from paper) The downside of this scheme is that it considers all possible next tokens and can be slower than “lefthash”. The context length of previous tokens to use in seeding. Higher context length makes watermarking more robust. context_width ( int , optional , defaults to 1) — The number of previous tokens to use when setting the seed. Logits processor for watermarking generated text. The processor modifies model output scores by adding a small bias to randomized set of “green” tokens before generating the next token. “Green” tokens selection process depends on the seeding_scheme used. The code was based on the original repo . The text generated by this LogitsProcessor can be detected using WatermarkDetector . See call () for details, See the paper for more information. 
Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkingConfig >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> inputs = tokenizer([ "Alice and Bob are" ], return_tensors= "pt" ) >>> # normal generation >>> out = model.generate(inputs[ "input_ids" ], max_length= 20 , do_sample= False ) >>> tokenizer.batch_decode(out, skip_special_tokens= True )[ 0 ] 'Alice and Bob are both in the same room.\n\n"I\'m not sure if you\'re' >>> # watermarked generation >>> watermarking_config = WatermarkingConfig(bias= 2.5 , context_width= 2 , seeding_scheme= "selfhash" ) >>> out = model.generate(inputs[ "input_ids" ], watermarking_config=watermarking_config, max_length= 20 , do_sample= False ) >>> tokenizer.batch_decode(out, skip_special_tokens= True )[ 0 ] 'Alice and Bob are both still alive and well and the story is pretty much a one-hour adventure' >>> # to detect watermarked text use the WatermarkDetector class >>> from transformers import WatermarkDetector >>> detector = WatermarkDetector(model_config=model.config, device= "cpu" , watermarking_config= watermarking_config) >>> detection_preds = detector(out) >>> detection_preds array([ True ]) __call__ < source > ( input_ids : LongTensor scores : FloatTensor ) → torch.FloatTensor of shape (batch_size, config.vocab_size) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search Returns torch.FloatTensor of shape (batch_size, config.vocab_size) The processed prediction scores. TensorFlow class transformers. TFForcedBOSTokenLogitsProcessor < source > ( bos_token_id : int ) Parameters bos_token_id ( int ) — The id of the token to force as the first generated token. TFLogitsProcessor that enforces the specified token as the first generated token. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFForcedEOSTokenLogitsProcessor < source > ( max_length : int eos_token_id : int ) Parameters max_length ( int ) — The maximum length of the sequence to be generated. eos_token_id ( int ) — The id of the token to force as the last generated token when max_length is reached. TFLogitsProcessor that enforces the specified token as the last generated token when max_length is reached. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFForceTokensLogitsProcessor < source > ( force_token_map : typing.List[typing.List[int]] ) This processor takes a list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. The processor will set their log probs to 0 and all other tokens to -inf so that they are sampled at their corresponding index. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFLogitsProcessor < source > ( ) Abstract base class for all logit processors that can be applied during generation. 
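As an illustrative sketch only (not taken from the official examples), TF processors built on this interface can be composed in a TFLogitsProcessorList and applied directly to a batch of scores through the __call__ signature documented just below. The tensors here are toy stand-ins for real generation state, the vocabulary size is arbitrary, and TFTemperatureLogitsWarper / TFTopKLogitsWarper are described further down this page; TensorFlow must be installed.
>>> import tensorflow as tf
>>> from transformers import TFLogitsProcessorList, TFTemperatureLogitsWarper, TFTopKLogitsWarper
>>> # Toy generation state: a batch of 1 sequence and a (batch_size, vocab_size) scores tensor
>>> input_ids = tf.constant([[1, 2, 3]])
>>> scores = tf.random.normal((1, 50257))
>>> # Compose warpers in a list and apply them to the scores, passing the current valid length
>>> processors = TFLogitsProcessorList([TFTemperatureLogitsWarper(0.7), TFTopKLogitsWarper(top_k=50)])
>>> processed_scores = processors(input_ids, scores, cur_len=3)
>>> processed_scores.shape
TensorShape([1, 50257])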
__call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) → tf.Tensor of shape (batch_size, config.vocab_size) Parameters input_ids ( tf.Tensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( tf.Tensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search. cur_len ( int ) — The current length of valid input sequence tokens. In the TF implementation, the input_ids’ sequence length is the maximum length generate can produce, and we need to know which of its tokens are valid. kwargs ( Dict[str, Any] , optional ) — Additional logits processor specific kwargs. Returns tf.Tensor of shape (batch_size, config.vocab_size) The processed prediction scores. TF method for processing logits. class transformers. TFLogitsProcessorList < source > ( iterable = () ) This class can be used to create a list of TFLogitsProcessor to subsequently process a scores input tensor. This class inherits from list and adds a specific call method to apply each TFLogitsProcessor to the inputs. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int **kwargs ) → tf.Tensor of shape (batch_size, config.vocab_size) Parameters input_ids ( tf.Tensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( tf.Tensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search. cur_len ( int ) — The current length of valid input sequence tokens. In the TF implementation, the input_ids’ sequence length is the maximum length generate can produce, and we need to know which of its tokens are valid. kwargs ( Dict[str, Any] , optional ) — Additional logits processor specific kwargs. Returns tf.Tensor of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. TFLogitsWarper < source > ( ) Abstract base class for all logit warpers that can be applied during generation with multinomial sampling. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) → tf.Tensor of shape (batch_size, config.vocab_size) Parameters input_ids ( tf.Tensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( tf.Tensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search. cur_len ( int ) — The current length of valid input sequence tokens. In the TF implementation, the input_ids’ sequence length is the maximum length generate can produce, and we need to know which of its tokens are valid. kwargs ( Dict[str, Any] , optional ) — Additional logits processor specific kwargs. 
Returns tf.Tensor of shape (batch_size, config.vocab_size) The processed prediction scores. TF method for warping logits. class transformers. TFMinLengthLogitsProcessor < source > ( min_length : int eos_token_id : int ) Parameters min_length ( int ) — The minimum length below which the score of eos_token_id is set to -float("Inf") . eos_token_id ( int ) — The id of the end-of-sequence token. TFLogitsProcessor enforcing a min-length by setting EOS probability to 0. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFNoBadWordsLogitsProcessor < source > ( bad_words_ids : typing.List[typing.List[int]] eos_token_id : int ) Parameters bad_words_ids ( List[List[int]] ) — List of list of token ids that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, make sure to set add_prefix_space=True when initializing the tokenizer, and use tokenizer(bad_words, add_special_tokens=False).input_ids . The add_prefix_space argument is only supported for some slow tokenizers, as fast tokenizers’ prefixing behaviours come from pre tokenizers . Read more here . eos_token_id ( int ) — The id of the end-of-sequence token. TFLogitsProcessor that enforces that specified sequences will never be sampled. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFNoRepeatNGramLogitsProcessor < source > ( ngram_size : int ) Parameters ngram_size ( int ) — All ngrams of size ngram_size can only occur once. TFLogitsProcessor that enforces no repetition of n-grams. See Fairseq . __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFRepetitionPenaltyLogitsProcessor < source > ( penalty : float ) Parameters repetition_penalty ( float ) — The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details. TFLogitsProcessor enforcing an exponential penalty on repeated sequences. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFSuppressTokensAtBeginLogitsProcessor < source > ( begin_suppress_tokens begin_index ) TFSuppressTokensAtBeginLogitsProcessor suppresses a list of tokens as soon as the generate function starts generating using begin_index tokens. This should ensure that the tokens defined by begin_suppress_tokens at not sampled at the beginning of the generation. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFSuppressTokensLogitsProcessor < source > ( suppress_tokens ) This processor can be used to suppress a list of tokens. The processor will set their log probs to -inf so that they are not sampled. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFTemperatureLogitsWarper < source > ( temperature : float ) Parameters temperature ( float ) — The value used to module the logits distribution. TFLogitsWarper for temperature (exponential scaling output probability distribution). __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFTopKLogitsWarper < source > ( top_k : int filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters top_k ( int ) — The number of highest probability vocabulary tokens to keep for top-k-filtering. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. 
min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. TFLogitsWarper that performs top-k, i.e. restricting to the k highest probability elements. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) class transformers. TFTopPLogitsWarper < source > ( top_p : float filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters top_p ( float ) — If set to < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. TFLogitsWarper that performs top-p, i.e. restricting to top tokens summing to <= prob_cut_off. __call__ < source > ( input_ids : Tensor scores : Tensor cur_len : int ) FLAX class transformers. FlaxForcedBOSTokenLogitsProcessor < source > ( bos_token_id : int ) Parameters bos_token_id ( int ) — The id of the token to force as the first generated token. FlaxLogitsProcessor that enforces the specified token as the first generated token. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxForcedEOSTokenLogitsProcessor < source > ( max_length : int eos_token_id : int ) Parameters max_length ( int ) — The maximum length of the sequence to be generated. eos_token_id ( int ) — The id of the token to force as the last generated token when max_length is reached. FlaxLogitsProcessor that enforces the specified token as the last generated token when max_length is reached. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxForceTokensLogitsProcessor < source > ( force_token_map ) Parameters force_token_map ( list ) — Map giving token ids and indices where they will be forced to be sampled. FlaxLogitsProcessor that takes a list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. The processor will set their log probs to 0 and all other tokens to -inf so that they are sampled at their corresponding index. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxLogitsProcessor < source > ( ) Abstract base class for all logit processors that can be applied during generation. __call__ < source > ( input_ids : Array scores : Array ) → jnp.ndarray of shape (batch_size, config.vocab_size) Parameters input_ids ( jnp.ndarray of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( jnp.ndarray of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search kwargs ( Dict[str, Any] , optional ) — Additional logits processor specific kwargs. Returns jnp.ndarray of shape (batch_size, config.vocab_size) The processed prediction scores. Flax method for processing logits. class transformers. FlaxLogitsProcessorList < source > ( iterable = () ) This class can be used to create a list of FlaxLogitsProcessor or FlaxLogitsWarper to subsequently process a scores input tensor. 
This class inherits from list and adds a specific call method to apply each FlaxLogitsProcessor or FlaxLogitsWarper to the inputs. __call__ < source > ( input_ids : Array scores : Array cur_len : int **kwargs ) → jnp.ndarray of shape (batch_size, config.vocab_size) Parameters input_ids ( jnp.ndarray of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( jnp.ndarray of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search kwargs ( Dict[str, Any] , optional ) — Additional logits processor specific kwargs. Returns jnp.ndarray of shape (batch_size, config.vocab_size) The processed prediction scores. class transformers. FlaxLogitsWarper < source > ( ) Abstract base class for all logit warpers that can be applied during generation with multinomial sampling. __call__ < source > ( input_ids : Array scores : Array ) → jnp.ndarray of shape (batch_size, config.vocab_size) Parameters input_ids ( jnp.ndarray of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( jnp.ndarray of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search kwargs ( Dict[str, Any] , optional ) — Additional logits processor specific kwargs. Returns jnp.ndarray of shape (batch_size, config.vocab_size) The processed prediction scores. Flax method for warping logits. class transformers. FlaxMinLengthLogitsProcessor < source > ( min_length : int eos_token_id : int ) Parameters min_length ( int ) — The minimum length below which the score of eos_token_id is set to -float("Inf") . eos_token_id ( int ) — The id of the end-of-sequence token. FlaxLogitsProcessor enforcing a min-length by setting EOS probability to 0. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxSuppressTokensAtBeginLogitsProcessor < source > ( begin_suppress_tokens begin_index ) Parameters begin_suppress_tokens ( List[int] ) — Tokens to not sample. begin_index ( int ) — Index where the tokens are suppressed. FlaxLogitsProcessor supressing a list of tokens as soon as the generate function starts generating using begin_index tokens. This should ensure that the tokens defined by begin_suppress_tokens are not sampled at the beginning of the generation. __call__ < source > ( input_ids scores cur_len : int ) class transformers. FlaxSuppressTokensLogitsProcessor < source > ( suppress_tokens : list ) Parameters suppress_tokens ( list ) — Tokens to not sample. FlaxLogitsProcessor suppressing a list of tokens at each decoding step. The processor will set their log probs to be -inf so they are not sampled. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxTemperatureLogitsWarper < source > ( temperature : float ) Parameters temperature ( float ) — The value used to module the logits distribution. 
FlaxLogitsWarper for temperature (exponential scaling output probability distribution). __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxTopKLogitsWarper < source > ( top_k : int filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters top_k ( int ) — The number of highest probability vocabulary tokens to keep for top-k-filtering. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. FlaxLogitsWarper that performs top-k, i.e. restricting to the k highest probability elements. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxTopPLogitsWarper < source > ( top_p : float filter_value : float = -inf min_tokens_to_keep : int = 1 ) Parameters top_p ( float ) — If set to < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. filter_value ( float , optional , defaults to -inf) — All filtered values will be set to this float value. min_tokens_to_keep ( int , optional , defaults to 1) — Minimum number of tokens that cannot be filtered. FlaxLogitsWarper that performs top-p, i.e. restricting to top tokens summing to prob_cut_off <= prob_cut_off. __call__ < source > ( input_ids : Array scores : Array cur_len : int ) class transformers. FlaxWhisperTimeStampLogitsProcessor < source > ( generate_config model_config decoder_input_length ) Parameters generate_config ( GenerateConfig ) — The generate config used to generate the output. The following parameters are required: eos_token_id ( int , optional , defaults to 50257): The id of the end-of-sequence token. no_timestamps_token_id ( int , optional , defaults to 50363): The id of the "<|notimestamps|>" token. max_initial_timestamp_index ( int , optional , defaults to 1): Used to set the maximum value of the initial timestamp. This is used to prevent the model from predicting timestamps that are too far in the future. Whisper specific Processor. This processor can be used to force a list of tokens. The processor will set their log probs to inf so that they are sampled at their corresponding index. __call__ < source > ( input_ids scores cur_len ) StoppingCriteria A StoppingCriteria can be used to change when to stop generation (other than EOS token). Please note that this is exclusively available to our PyTorch implementations. class transformers. StoppingCriteria < source > ( ) Abstract base class for all stopping criteria that can be applied during generation. If your stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . 
kwargs ( Dict[str, Any] , optional ) — Additional stopping criteria specific kwargs. class transformers. StoppingCriteriaList < source > ( iterable = () ) __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . kwargs ( Dict[str, Any] , optional ) — Additional stopping criteria specific kwargs. class transformers. MaxLengthCriteria < source > ( max_length : int max_position_embeddings : typing.Optional[int] = None ) Parameters max_length ( int ) — The maximum length that the output sequence can have in number of tokens. max_position_embeddings ( int , optional ) — The maximum model length, as defined by the model’s config.max_position_embeddings attribute. This class can be used to stop generation whenever the full generated number of tokens exceeds max_length . Keep in mind for decoder-only type of transformers, this will include the initial prompted tokens. __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . kwargs ( Dict[str, Any] , optional ) — Additional stopping criteria specific kwargs. class transformers. MaxTimeCriteria < source > ( max_time : float initial_timestamp : typing.Optional[float] = None ) Parameters max_time ( float ) — The maximum allowed time in seconds for the generation. initial_time ( float , optional , defaults to time.time() ) — The start of the generation allowed time. This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the time will start being counted when you initialize this function. You can override this by passing an initial_time . __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. 
If this stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . kwargs ( Dict[str, Any] , optional ) — Additional stopping criteria specific kwargs. class transformers. StopStringCriteria < source > ( tokenizer : PreTrainedTokenizerBase stop_strings : typing.Union[str, typing.List[str]] ) Parameters tokenizer ( PreTrainedTokenizer ) — The model’s associated tokenizer (necessary to extract vocab and tokenize the termination sequences) stop_strings ( Union[str, List[str]] ) — A list of strings that should end generation. If a string is passed, it will be treated like a list with a single element. This class can be used to stop generation whenever specific string sequences are generated. It preprocesses the strings together with the tokenizer vocab to find positions where tokens can validly complete the stop strings. Generation is stopped as soon as a token is generated that completes any of the stop strings. We want to catch any instance in which the stop string would be present in the decoded output, which means we must also catch cases with “overhangs” off one or both ends. To make this more concrete, for the stop string “stop”, any of the following token sequences would trigger the match: [“st”, “op”] [“stop”] [“st”, “opera”] [“sto”, “pper”] [“las”, “topper”] [“s”, “to”, “pped”] Note that a match will only be triggered if the stop string is at the end of the generated sequence. In other words, these sequences will not trigger a match: [“stop”, “at”] [“st”, “op”, “at”] [“st”, “opera”, “tion”] The reason these are not a match is that the stop string does not overlap with the final token. If you can remove one or more tokens from the end of the sequence without destroying the stop string, then this criterion will not match that stop string. This is by design; because this check is run after each token is generated, we can’t miss a valid stop string if one is generated, but we don’t want to halt generation just because the stop string exists somewhere in the past input_ids. How is the match actually performed, though? We do it in quite a confusing way, because we want the entire match process to be compilable with Torch or XLA, which means we cannot use standard string methods. However, it is possible, with some work, to do string matching with pure tensor operations. We’ll begin by describing the algorithm we use with standard string operations, and then at the end we’ll explain how this is converted to pure tensor operations. The key to the algorithm is an observation: Because the stop string must overlap with the end of the token sequence, we can start at the end of the sequence and work backwards. Specifically, we check that there is an overlap between the start of the final token and the end of the stop_string, or to put it another way, stop_string[-i:] == token[:i] for some i > 0. If you look at the positive examples above, you’ll see the last token in all of them fulfills this property: [“st”, “op”] (overlap is “op”, overlap length == 2) [“stop”] (overlap is “stop”, overlap length == 4) [“st”, “opera”] (overlap is “op”, overlap length == 2) [“sto”, “pper”] (overlap is “p”, overlap length == 1) [“las”, “topper”] (overlap is “top”, overlap length == 3) [“s”, “to”, “pped”] (overlap is “p”, overlap length == 1) It’s impossible to construct a matching sequence that does not have this property (feel free to verify this yourself). 
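To make this end-overlap property concrete, here is a small plain-Python sketch of the check stop_string[-i:] == token[:i] applied to the examples just listed. It is purely illustrative: the real StopStringCriteria implements this with precomputed tensor operations rather than string comparisons.
>>> def end_overlap_length(token, stop_string):
...     # Largest i > 0 such that stop_string[-i:] == token[:i]; 0 if there is none.
...     for i in range(min(len(token), len(stop_string)), 0, -1):
...         if stop_string[-i:] == token[:i]:
...             return i
...     return 0
>>> [end_overlap_length(t, "stop") for t in ["op", "stop", "opera", "pper", "topper", "pped"]]
[2, 4, 2, 1, 3, 1]
>>> end_overlap_length("at", "stop")  # a final token with no end-overlap can never complete "stop"
0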
However, although this overlap between the start of the final token and the end of the stop string is necessary for a match, it is not sufficient. We also need to check that the rest of the token sequence is consistent with the stop string. How do we do that? Let’s use [“s”, “to”, “pped”] as an example. We know that the final token, “pped”, has an overlap of 1 with the stop string, “stop”. We then go back to the previous token, “to”. Since we have already matched 1 character from the stop string, the remainder to check is “sto”. We check that the next token “to” matches the end of the remainder, which it does. We have now matched 3 characters from the stop string, and the remainder to match is “s”. We go back to the previous token again, which is also “s”. This is a match, and so we have matched the entire stop string. How does it work when the tokens run off the start of the stop string, though? Let’s consider the example of [“las”, “topper”]. The final token, “topper”, has an overlap of 3 with the stop string, “stop”. Therefore, the remaining stop string to match is “s”. We go back to the previous token, “las”. Because the remainder to match is just “s”, with length 1, we consider only the final 1 character from the token, which is “s”. This matches the stop string, and so the entire string is matched. How do we compute these matches with tensor operations, though? Simply: we efficiently precompute the necessary information for all tokens! For every token, we compute: Its overlap with the end of the stop string, if any The positions inside the stop string where the token matches, including matches that run off the start. The total length of the token For example, for the token “pped”, we would compute an end overlap of 1, no internal matching positions, and a length of 4. For the token “to”, we would compute no end overlap, a single internal matching position of 1 (counting from the end), and a length of 2. For the token “s”, we would compute no end overlap, a single internal matching position of 3 (again counting from the end) and a length of 1. As long as we have this information, we can execute the algorithm above without any string comparison operations. We simply perform the following steps: Check if the final token has an end-overlap with the start string Continue backwards, keeping track of how much of the stop string we’ve matched so far At each point, check if the next token has the current position as one of its valid positions Continue until either a match fails, or we completely match the whole stop string Again, consider [“s”, “to”, “pped”] as an example. “pped” has an end overlap of 1, so we can begin a match. We have matched 1 character so far, so we check that the next token “to”, has 1 as a valid position (again, counting from the end). It does, so we add the length of “to” to our position tracker. We have now matched 3 characters, so we check that the next token “s” has 3 as a valid position. It does, so we add its length to the position tracker. The position tracker is now 4, which is the length of the stop string. We have matched the entire stop string. In the second case, [“las”, “topper”], “topper” has an end overlap of 3, so we can begin a match. We have matched 3 characters so far, so we check that the next token “las” has 3 as a valid position. It does, because we allow tokens to match positions that run off the start of the stop string. We add its length to the position tracker. The position tracker is now 6, which is greater than the length of the stop string! 
Don’t panic, though - this also counts as a match of the stop string. We have matched the entire stop string. Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "microsoft/phi-2" ) >>> model = AutoModelForCausalLM.from_pretrained( "microsoft/phi-2" ) >>> inputs = tokenizer( "The biggest states in the USA by land area:" , return_tensors= "pt" ) >>> gen_out = model.generate(**inputs) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) The biggest states in the USA by land area: - Alaska - Texas - California >>> # Passing one or more stop strings will halt generation after those strings are emitted >>> # Note that generating with stop strings requires you to pass the tokenizer too >>> gen_out = model.generate(**inputs, stop_strings=[ "Texas" ], tokenizer=tokenizer) >>> print (tokenizer.batch_decode(gen_out, skip_special_tokens= True )[ 0 ]) The biggest states in the USA by land area: - Alaska - Texas __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . kwargs ( Dict[str, Any] , optional ) — Additional stopping criteria specific kwargs. class transformers. EosTokenCriteria < source > ( eos_token_id : typing.Union[int, typing.List[int], torch.Tensor] ) Parameters eos_token_id ( Union[int, List[int], torch.Tensor] ) — The id(s) of the end-of-sequence token. This class can be used to stop generation whenever the “end-of-sequence” token is generated. By default, it uses the model.generation_config.eos_token_id . __call__ < source > ( input_ids : LongTensor scores : FloatTensor **kwargs ) Parameters input_ids ( torch.LongTensor of shape (batch_size, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? scores ( torch.FloatTensor of shape (batch_size, config.vocab_size) ) — Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . kwargs ( Dict[str, Any] , optional ) — Additional stopping criteria specific kwargs. Constraints A Constraint can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations. class transformers. Constraint < source > ( ) Abstract base class for all constraints that can be applied during generation. It must define how the constraint can be satisfied. 
All classes that inherit Constraint must follow the requirement that Copied completed = False while not completed: _, completed = constraint.update(constraint.advance()) will always terminate (halt). advance < source > ( ) → token_ids (Union[int, List[int], None]) Returns token_ids (Union[int, List[int], None]) A single token ID (int) that advances the constraint, or A list of token IDs that could advance the constraint None if the constraint is completed or cannot be advanced When called, returns the token(s) that would take this constraint one step closer to being fulfilled. copy < source > ( stateful = False ) → constraint( Constraint ) Parameters stateful( bool ) — Whether to not only copy the constraint for the new instance, but also its state. Returns constraint( Constraint ) The same constraint as the one being called from. Creates a new instance of this constraint. does_advance < source > ( token_id : int ) Reads in a token and returns whether it creates progress. remaining < source > ( ) Returns the number of remaining steps of advance() in order to complete this constraint. reset < source > ( ) Resets the state of this constraint to its initialization. We would call this in cases where the fulfillment of a constraint is interrupted by an unwanted token. test < source > ( ) Tests whether this constraint has been properly defined. update < source > ( token_id : int ) → stepped( bool ) Parameters token_id( int ) — The id of a newly generated token in the beam search. Returns stepped( bool ) Whether this constraint has become one step closer to being fulfilled. completed( bool ): Whether this constraint has been completely fulfilled by this token being generated. reset ( bool ): Whether this constraint has reset its progress by this token being generated. Reads in a token and returns booleans that indicate the progress made by it. This function will update the state of this object, unlike does_advance(self, token_id: int) . This isn't to test whether a certain token will advance the progress; it's to update the constraint's state as if the token has been generated. This becomes important if token_id != desired token (refer to the else statement in PhrasalConstraint). class transformers. PhrasalConstraint < source > ( token_ids : typing.List[int] ) Parameters token_ids ( List[int] ) — The ids of the tokens that must be generated by the output. Constraint enforcing that an ordered sequence of tokens is included in the output. class transformers. DisjunctiveConstraint < source > ( nested_token_ids : typing.List[typing.List[int]] ) Parameters nested_token_ids ( List[List[int]] ) — A list of words, where each word is a list of ids. This constraint is fulfilled by generating just one word from the list. A special Constraint that is fulfilled by fulfilling just one of several constraints. class transformers. ConstraintListState < source > ( constraints : typing.List[transformers.generation.beam_constraints.Constraint] ) Parameters constraints ( List[Constraint] ) — A list of Constraint objects that must be fulfilled by the beam scorer. A class for beam scorers to track their progress through a list of constraints. advance < source > ( ) The list of tokens to generate such that we can make progress. By "list" we don't mean the list of tokens that will fully fulfill a constraint.
Given constraints c_i = {t_ij | j == # of tokens} , If we’re not in the middle of progressing through a specific constraint c_i , we return: [t_k1 for k in indices of unfulfilled constraints] If we are in the middle of a constraint, then we return: [t_ij] , where i is the index of the inprogress constraint, j is the next step for the constraint. Though we don’t care which constraint is fulfilled first, if we are in the progress of fulfilling a constraint, that’s the only one we’ll return. reset < source > ( token_ids : typing.Optional[typing.List[int]] ) token_ids: the tokens generated thus far to reset the state of the progress through constraints. BeamSearch class transformers. BeamScorer < source > ( ) Abstract base class for all beam scorers that are used for ~PreTrainedModel.beam_search and ~PreTrainedModel.beam_sample . process < source > ( input_ids : LongTensor next_scores : FloatTensor next_tokens : LongTensor next_indices : LongTensor **kwargs ) → UserDict Parameters input_ids ( torch.LongTensor of shape (batch_size * num_beams, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? next_scores ( torch.FloatTensor of shape (batch_size, 2 * num_beams) ) — Current scores of the top 2 * num_beams non-finished beam hypotheses. next_tokens ( torch.LongTensor of shape (batch_size, 2 * num_beams) ) — input_ids of the tokens corresponding to the top 2 * num_beams non-finished beam hypotheses. next_indices ( torch.LongTensor of shape (batch_size, 2 * num_beams) ) — Beam indices indicating to which beam hypothesis the next_tokens correspond. pad_token_id ( int , optional ) — The id of the padding token. eos_token_id ( Union[int, List[int]] , optional ) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. beam_indices ( torch.LongTensor , optional ) — Beam indices indicating to which beam hypothesis each token correspond. group_index ( int , optional ) — The index of the group of beams. Used with ~PreTrainedModel.group_beam_search . Returns UserDict A dictionary composed of the fields as defined above: next_beam_scores ( torch.FloatTensor of shape (batch_size * num_beams) ) — Updated scores of all non-finished beams. next_beam_tokens ( torch.FloatTensor of shape (batch_size * num_beams) ) — Next tokens to be added to the non-finished beam_hypotheses. next_beam_indices ( torch.FloatTensor of shape (batch_size * num_beams) ) — Beam indices indicating to which beam the next tokens shall be added. finalize < source > ( input_ids : LongTensor next_scores : FloatTensor next_tokens : LongTensor next_indices : LongTensor max_length : int **kwargs ) → torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length) Parameters input_ids ( torch.LongTensor of shape (batch_size * num_beams, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? final_beam_scores ( torch.FloatTensor of shape (batch_size * num_beams) ) — The final scores of all non-finished beams. final_beam_tokens ( torch.FloatTensor of shape (batch_size * num_beams) ) — The last tokens to be added to the non-finished beam_hypotheses. 
final_beam_indices ( torch.FloatTensor of shape (batch_size * num_beams) ) — The beam indices indicating to which beam the final_beam_tokens shall be added. pad_token_id ( int , optional ) — The id of the padding token. eos_token_id ( Union[int, List[int]] , optional ) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. Returns torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length) The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . class transformers. BeamSearchScorer < source > ( batch_size : int num_beams : int device : device length_penalty : typing.Optional[float] = 1.0 do_early_stopping : typing.Union[bool, str, NoneType] = False num_beam_hyps_to_keep : typing.Optional[int] = 1 num_beam_groups : typing.Optional[int] = 1 max_length : typing.Optional[int] = None ) Parameters batch_size ( int ) — Batch Size of input_ids for which standard beam search decoding is run in parallel. num_beams ( int ) — Number of beams for beam search. device ( torch.device ) — Defines the device type ( e.g. , "cpu" or "cuda" ) on which this instance of BeamSearchScorer will be allocated. length_penalty ( float , optional , defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences. do_early_stopping ( bool or str , optional , defaults to False ) — Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: True , where the generation stops as soon as there are num_beams complete candidates; False , where an heuristic is applied and the generation stops when is it very unlikely to find better candidates; "never" , where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). num_beam_hyps_to_keep ( int , optional , defaults to 1) — The number of beam hypotheses that shall be returned upon calling finalize() . num_beam_groups ( int , optional , defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details. max_length ( int , optional ) — The maximum length of the sequence to be generated. BeamScorer implementing standard beam search decoding. Adapted in part from Facebook’s XLM beam search code . 
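In everyday use, BeamSearchScorer is constructed internally by generate() rather than instantiated by hand. The hedged sketch below shows that user-facing path; the num_beams, length_penalty, early_stopping and num_return_sequences arguments correspond roughly to the parameters described above, and the model and prompt are arbitrary examples.
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["The capital of France is"], return_tensors="pt")
>>> # These arguments are forwarded to the beam search machinery generate() builds on top of BeamSearchScorer
>>> out = model.generate(**inputs, num_beams=4, length_penalty=1.0, early_stopping=True, num_return_sequences=2, max_new_tokens=20)
>>> print(len(out))
2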
Reference for the diverse beam search algorithm and implementation Ashwin Kalyan’s DBS implementation process < source > ( input_ids : LongTensor next_scores : FloatTensor next_tokens : LongTensor next_indices : LongTensor pad_token_id : typing.Union[int, torch.Tensor, NoneType] = None eos_token_id : typing.Union[int, typing.List[int], torch.Tensor, NoneType] = None beam_indices : typing.Optional[torch.LongTensor] = None group_index : typing.Optional[int] = 0 decoder_prompt_len : typing.Optional[int] = 0 ) finalize < source > ( input_ids : LongTensor final_beam_scores : FloatTensor final_beam_tokens : LongTensor final_beam_indices : LongTensor max_length : int pad_token_id : typing.Union[int, torch.Tensor, NoneType] = None eos_token_id : typing.Union[int, typing.List[int], torch.Tensor, NoneType] = None beam_indices : typing.Optional[torch.LongTensor] = None decoder_prompt_len : typing.Optional[int] = 0 ) class transformers. ConstrainedBeamSearchScorer < source > ( batch_size : int num_beams : int constraints : typing.List[transformers.generation.beam_constraints.Constraint] device : device length_penalty : typing.Optional[float] = 1.0 do_early_stopping : typing.Union[bool, str, NoneType] = False num_beam_hyps_to_keep : typing.Optional[int] = 1 num_beam_groups : typing.Optional[int] = 1 max_length : typing.Optional[int] = None ) Parameters batch_size ( int ) — Batch Size of input_ids for which standard beam search decoding is run in parallel. num_beams ( int ) — Number of beams for beam search. constraints ( List[Constraint] ) — A list of positive constraints represented as Constraint objects that must be fulfilled in the generation output. For more information, the documentation of Constraint should be read. device ( torch.device ) — Defines the device type ( e.g. , "cpu" or "cuda" ) on which this instance of BeamSearchScorer will be allocated. length_penalty ( float , optional , defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences. do_early_stopping ( bool or str , optional , defaults to False ) — Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: True , where the generation stops as soon as there are num_beams complete candidates; False , where an heuristic is applied and the generation stops when is it very unlikely to find better candidates; "never" , where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). num_beam_hyps_to_keep ( int , optional , defaults to 1) — The number of beam hypotheses that shall be returned upon calling finalize() . num_beam_groups ( int , optional , defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details. max_length ( int , optional ) — The maximum length of the sequence to be generated. BeamScorer implementing constrained beam search decoding. 
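As with standard beam search, ConstrainedBeamSearchScorer is normally driven through generate() by passing Constraint objects (or force_words_ids) rather than used directly. The following is an illustrative sketch of that path under those assumptions; the prompt and forced phrase are arbitrary, and constrained decoding requires num_beams > 1.
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, PhrasalConstraint
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["The weather today is"], return_tensors="pt")
>>> # Force the ordered token sequence for " very cold" to appear somewhere in the output
>>> constraint = PhrasalConstraint(tokenizer(" very cold", add_special_tokens=False).input_ids)
>>> out = model.generate(**inputs, constraints=[constraint], num_beams=4, max_new_tokens=20)
>>> "very cold" in tokenizer.batch_decode(out, skip_special_tokens=True)[0]
True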
process < source > ( input_ids : LongTensor next_scores : FloatTensor next_tokens : LongTensor next_indices : LongTensor scores_for_all_vocab : FloatTensor pad_token_id : typing.Union[int, torch.Tensor, NoneType] = None eos_token_id : typing.Union[int, typing.List[int], torch.Tensor, NoneType] = None beam_indices : typing.Optional[torch.LongTensor] = None decoder_prompt_len : typing.Optional[int] = 0 ) → UserDict Parameters input_ids ( torch.LongTensor of shape (batch_size * num_beams, sequence_length) ) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer . See PreTrainedTokenizer.encode() and PreTrainedTokenizer. call () for details. What are input IDs? next_scores ( torch.FloatTensor of shape (batch_size, 2 * num_beams) ) — Current scores of the top 2 * num_beams non-finished beam hypotheses. next_tokens ( torch.LongTensor of shape (batch_size, 2 * num_beams) ) — input_ids of the tokens corresponding to the top 2 * num_beams non-finished beam hypotheses. next_indices ( torch.LongTensor of shape (batch_size, 2 * num_beams) ) — Beam indices indicating to which beam hypothesis the next_tokens correspond. scores_for_all_vocab ( torch.FloatTensor of shape (batch_size * num_beams, sequence_length) ) — The scores of all tokens in the vocabulary for each of the beam hypotheses. pad_token_id ( int , optional ) — The id of the padding token. eos_token_id ( Union[int, List[int]] , optional ) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. beam_indices ( torch.LongTensor , optional ) — Beam indices indicating to which beam hypothesis each token correspond. decoder_prompt_len ( int , optional ) — The length of prompt that is included in the input to decoder. Returns UserDict A dictionary composed of the fields as defined above: next_beam_scores ( torch.FloatTensor of shape (batch_size * num_beams) ) — Updated scores of all non-finished beams. next_beam_tokens ( torch.FloatTensor of shape (batch_size * num_beams) ) — Next tokens to be added to the non-finished beam_hypotheses. next_beam_indices ( torch.FloatTensor of shape (batch_size * num_beams) ) — Beam indices indicating to which beam the next tokens shall be added. finalize < source > ( input_ids : LongTensor final_beam_scores : FloatTensor final_beam_tokens : LongTensor final_beam_indices : LongTensor max_length : int pad_token_id : typing.Union[int, torch.Tensor, NoneType] = None eos_token_id : typing.Union[int, typing.List[int], torch.Tensor, NoneType] = None beam_indices : typing.Optional[torch.LongTensor] = None decoder_prompt_len : typing.Optional[int] = 0 ) Streamers class transformers. TextStreamer < source > ( tokenizer : AutoTokenizer skip_prompt : bool = False **decode_kwargs ) Parameters tokenizer ( AutoTokenizer ) — The tokenized used to decode the tokens. skip_prompt ( bool , optional , defaults to False ) — Whether to skip the prompt to .generate() or not. Useful e.g. for chatbots. decode_kwargs ( dict , optional ) — Additional keyword arguments to pass to the tokenizer’s decode method. Simple text streamer that prints the token(s) to stdout as soon as entire words are formed. The API for the streamer classes is still under development and may change in the future. 
Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> inputs = tok([ "An increasing sequence: one," ], return_tensors= "pt" ) >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens= 20 ) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, end < source > ( ) Flushes any remaining cache and prints a newline to stdout. on_finalized_text < source > ( text : str stream_end : bool = False ) Prints the new text to stdout. If the stream is ending, also prints a newline. put < source > ( value ) Receives tokens, decodes them, and prints them to stdout as soon as they form entire words. class transformers. TextIteratorStreamer < source > ( tokenizer : AutoTokenizer skip_prompt : bool = False timeout : typing.Optional[float] = None **decode_kwargs ) Parameters tokenizer ( AutoTokenizer ) — The tokenized used to decode the tokens. skip_prompt ( bool , optional , defaults to False ) — Whether to skip the prompt to .generate() or not. Useful e.g. for chatbots. timeout ( float , optional ) — The timeout for the text queue. If None , the queue will block indefinitely. Useful to handle exceptions in .generate() , when it is called in a separate thread. decode_kwargs ( dict , optional ) — Additional keyword arguments to pass to the tokenizer’s decode method. Streamer that stores print-ready text in a queue, to be used by a downstream application as an iterator. This is useful for applications that benefit from acessing the generated text in a non-blocking way (e.g. in an interactive Gradio demo). The API for the streamer classes is still under development and may change in the future. Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer >>> from threading import Thread >>> tok = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> inputs = tok([ "An increasing sequence: one," ], return_tensors= "pt" ) >>> streamer = TextIteratorStreamer(tok) >>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. >>> generation_kwargs = dict (inputs, streamer=streamer, max_new_tokens= 20 ) >>> thread = Thread(target=model.generate, kwargs=generation_kwargs) >>> thread.start() >>> generated_text = "" >>> for new_text in streamer: ... generated_text += new_text >>> generated_text 'An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,' on_finalized_text < source > ( text : str stream_end : bool = False ) Put the new text in the queue. If the stream is ending, also put a stop signal in the queue. class transformers. AsyncTextIteratorStreamer < source > ( tokenizer : AutoTokenizer skip_prompt : bool = False timeout : float | None = None **decode_kwargs ) Parameters tokenizer ( AutoTokenizer ) — The tokenized used to decode the tokens. skip_prompt ( bool , optional , defaults to False ) — Whether to skip the prompt to .generate() or not. Useful e.g. for chatbots. timeout ( float , optional ) — The timeout for the text queue. If None , the queue will block indefinitely. 
Useful to handle exceptions in .generate() , when it is called in a separate thread. decode_kwargs ( dict , optional ) — Additional keyword arguments to pass to the tokenizer’s decode method. Raises TimeoutError TimeoutError — If token generation time exceeds timeout value. Streamer that stores print-ready text in a queue, to be used by a downstream application as an async iterator. This is useful for applications that benefit from acessing the generated text asynchronously (e.g. in an interactive Gradio demo). The API for the streamer classes is still under development and may change in the future. Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, AsyncTextIteratorStreamer >>> from threading import Thread >>> import asyncio >>> tok = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> inputs = tok([ "An increasing sequence: one," ], return_tensors= "pt" ) >>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. >>> async def main (): ... # Important: AsyncTextIteratorStreamer must be initialized inside a coroutine! ... streamer = AsyncTextIteratorStreamer(tok) ... generation_kwargs = dict (inputs, streamer=streamer, max_new_tokens= 20 ) ... thread = Thread(target=model.generate, kwargs=generation_kwargs) ... thread.start() ... generated_text = "" ... async for new_text in streamer: ... generated_text += new_text >>> print (generated_text) >>> asyncio.run(main()) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, on_finalized_text < source > ( text : str stream_end : bool = False ) Put the new text in the queue. If the stream is ending, also put a stop signal in the queue. Caches class transformers. Cache < source > ( ) Base, abstract class for all caches. The actual data structure is specific to each subclass. update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters key_states ( torch.Tensor ) — The new key states to cache. value_states ( torch.Tensor ) — The new value states to cache. layer_idx ( int ) — The index of the layer to cache the states for. cache_kwargs ( Dict[str, Any] , optional ) — Additional arguments for the cache subclass. These are specific to each subclass and allow new types of cache to be created. Updates the cache with the new key_states and value_states for the layer layer_idx . class transformers. CacheConfig < source > ( cache_implementation : None ) Base class for cache configs update < source > ( **kwargs ) → Dict[str, Any] Parameters kwargs ( Dict[str, Any] ) — Dictionary of attributes to tentatively update this class. Returns Dict[str, Any] Dictionary containing all the key-value pairs that were not used to update the instance. Updates attributes of this class instance with attributes from kwargs if they match existing attributes, returning all the unused kwargs. class transformers. 
QuantizedCacheConfig < source > ( backend : str = 'quanto' nbits : typing.Optional[int] = 4 axis_key : typing.Optional[int] = 0 axis_value : typing.Optional[int] = 0 q_group_size : typing.Optional[int] = 64 residual_length : typing.Optional[int] = 128 compute_dtype : typing.Optional[torch.dtype] = torch.float16 device : typing.Optional[str] = 'cpu' ) Parameters backend ( str , optional , defaults to "quanto" ) — Backend to use when performing quantization. Can be one of [ quanto , HQQ ]. nbits ( Optional[int] , optional , defaults to 4) — Number of bits, can be 2 or 4 for the quanto backend and one of [1, 2, 3, 4, 8] for the HQQ backend. axis_key ( int , optional , defaults to 0) — Axis over which to perform grouping for the key tensors. Can be [0, -1] for quanto backend and [0, 1] for HQQ backend. axis_value ( int , optional , defaults to 0) — Axis over which to perform grouping for the value tensors. Can be [0, -1] for quanto backend and [0, 1] for HQQ backend. q_group_size ( Optional[int] , optional , defaults to 64) — Size of the quantization group, should be a divisor of the model’s hidden dimension. Defaults to 64. residual_length ( Optional[int] , optional , defaults to 128) — Length of the residual cache which will always be stored in the original precision. Defaults to 128. compute_dtype ( torch.dtype , optional , defaults to torch.float16 ) — The default dtype used for computations in the model. Keys and Values will be cast to this dtype after dequantization. device ( str , optional , defaults to "cpu" ) — Device on which to perform computations, should be the same as the model’s device. Configuration class for quantized cache settings. validate < source > ( ) Validates if the arguments passed are correct class transformers. DynamicCache < source > ( num_hidden_layers : typing.Optional[int] = None ) A cache that grows dynamically as more tokens are generated. This is the default for generative models. It stores the Key and Value states as a list of tensors, one for each layer. The expected shape for each tensor is [batch_size, num_heads, seq_len, head_dim] . Example: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache >>> model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> inputs = tokenizer(text= "My name is Qwen2" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = DynamicCache() >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation DynamicCache() update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters key_states ( torch.Tensor ) — The new key states to cache. value_states ( torch.Tensor ) — The new value states to cache. layer_idx ( int ) — The index of the layer to cache the states for. cache_kwargs ( Dict[str, Any] , optional ) — Additional arguments for the cache subclass. No additional arguments are used in DynamicCache . Updates the cache with the new key_states and value_states for the layer layer_idx . get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) Returns the sequence length of the cached states. A layer index can be optionally passed.
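The following minimal sketch (added here for illustration, not part of the original reference) shows the relationship between get_seq_length() and the tokens the cache has seen; it reuses the same Qwen2 example checkpoint as above and assumes a single forward pass over the prompt.

Copied
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, DynamicCache
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
>>> inputs = tokenizer(text="My name is Qwen2", return_tensors="pt")
>>> past_key_values = DynamicCache()
>>> # One forward pass caches one key/value entry per prompt token in every layer
>>> outputs = model(**inputs, past_key_values=past_key_values, use_cache=True)
>>> # The cached sequence length therefore matches the prompt length
>>> past_key_values.get_seq_length() == inputs.input_ids.shape[-1]
True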
reorder_cache < source > ( beam_idx : LongTensor ) Reorders the cache for beam search, given the selected beam indices. to_legacy_cache < source > ( ) Converts the DynamicCache instance into the its equivalent in the legacy cache format. Used for backward compatibility. from_legacy_cache < source > ( past_key_values : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None num_hidden_layers : int = None ) Converts a cache in the legacy cache format into an equivalent DynamicCache . Used for backward compatibility. class transformers. QuantizedCache < source > ( cache_config : QuantizedCacheConfig ) A quantizer cache similar to what is described in the KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache paper . It allows the model to generate longer sequence length without allocating too much memory for Key and Value cache by applying quantization. The cache has two types of storage, one for original precision and one for the quantized cache. A residual length is set as a maximum capacity for the original precision cache. When the length goes beyond maximum capacity, the original precision cache is discarded and moved into the quantized cache. The quantization is done per-channel with a set q_group_size for both Keys and Values, in contrast to what was described in the paper. It stores Keys and Values a list of quantized tensors (tuples in case we need to store metadata), one for each layer. Additionally, it stores the Key and Value in original precision states as a list of tensors, one for each layer. The size of each tensor is [batch_size, num_heads, seq_len - residual_length, head_dim] update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) Returns the sequence length of the cached states. A layer index can be optionally passed. class transformers. QuantoQuantizedCache < source > ( cache_config : CacheConfig ) Parameters cache_config ( QuantizedCacheConfig ) — A configuration containing all the arguments to be used by the quantizer, including axis, qtype and group size. Quantized Cache class that uses quanto as a backend to perform quantization. Current implementation supports int2 and int4 dtypes only. Example: Copied >>> # Run pip install quanto first if you don't have it yet >>> from transformers import AutoTokenizer, AutoModelForCausalLM, QuantoQuantizedCache, QuantizedCacheConfig >>> model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> inputs = tokenizer(text= "My name is Qwen2" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> cache_config = QuantizedCacheConfig(nbits= 4 ) >>> past_key_values = QuantoQuantizedCache(cache_config=cache_config) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation QuantoQuantizedCache() class transformers. HQQQuantizedCache < source > ( cache_config : CacheConfig ) Parameters cache_config ( QuantizedCacheConfig ) — A configuration containing all the arguments to be used by the quantizer, including axis, qtype and group size. Quantized Cache class that uses HQQ as a backend to perform quantization. Current implementation supports int2 , int4 , int8 dtypes. 
Example: Copied >>> # Run pip install hqq first if you don't have it yet >>> from transformers import AutoTokenizer, AutoModelForCausalLM, HQQQuantizedCache, QuantizedCacheConfig >>> model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> inputs = tokenizer(text= "My name is Qwen2" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> cache_config = QuantizedCacheConfig(nbits= 4 , axis_key= 1 , axis_value= 1 ) >>> past_key_values = HQQQuantizedCache(cache_config=cache_config) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation HQQQuantizedCache() class transformers. SinkCache < source > ( window_length : int num_sink_tokens : int ) Parameters window_length ( int ) — The length of the context window. num_sink_tokens ( int ) — The number of sink tokens. See the original paper for more information. A cache that as described in the Attention Sinks paper . It allows the model to generate beyond the length of its context window, without losing fluency in the conversation. As it discards past tokens, the model will lose the ability to generate tokens that depend on the context that was discarded. It stores the Key and Value states as a list of tensors, one for each layer. The expected shape for each tensor is [batch_size, num_heads, seq_len, head_dim] . Example: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache >>> model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) >>> inputs = tokenizer(text= "My name is Qwen2" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = SinkCache(window_length= 256 , num_sink_tokens= 4 ) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation SinkCache() update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters key_states ( torch.Tensor ) — The new key states to cache. value_states ( torch.Tensor ) — The new value states to cache. layer_idx ( int ) — The index of the layer to cache the states for. cache_kwargs ( Dict[str, Any] , optional ) — Additional arguments for the cache subclass. The following arguments can be used in SinkCache : sin , cos and partial_rotation_size . These arguments are used with models using RoPE, to recompute the rotation as the tokens are shifted. Updates the cache with the new key_states and value_states for the layer layer_idx . get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) Returns the sequence length of the cached states. A layer index can be optionally passed. reorder_cache < source > ( beam_idx : LongTensor ) Reorders the cache for beam search, given the selected beam indices. class transformers. OffloadedCache < source > ( ) A drop-in replacement for DynamicCache that conserves GPU memory at the expense of more CPU memory. Useful for generating from models with very long context. In addition to the default CUDA stream, where all forward() computations happen, this class uses another stream, the prefetch stream, which it creates itself. 
Since scheduling of operations on separate streams happens independently, this class uses the prefetch stream to asynchronously prefetch the KV cache of layer k+1 when layer k is executing. The movement of the layer k-1 cache to the CPU is handled by the default stream as a simple way to ensure the eviction is scheduled after all computations on that cache are finished. update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters key_states ( torch.Tensor ) — The new key states to cache. value_states ( torch.Tensor ) — The new value states to cache. layer_idx ( int ) — The index of the layer to cache the states for. cache_kwargs ( Dict[str, Any] , optional ) — Additional arguments for the cache subclass. No additional arguments are used in OffloadedCache . Updates the cache with the new key_states and value_states for the layer layer_idx . prefetch_layer < source > ( layer_idx : int ) Starts prefetching the next layer cache evict_previous_layer < source > ( layer_idx : int ) Moves the previous layer cache to the CPU class transformers. StaticCache < source > ( config : PretrainedConfig batch_size : int = None max_cache_len : int = None device : device = None dtype : dtype = torch.float32 max_batch_size : typing.Optional[int] = None layer_device_map : typing.Optional[typing.Dict[int, typing.Union[str, torch.device, int]]] = None ) Parameters config ( PretrainedConfig ) — The configuration file defining the shape-related attributes required to initialize the static cache. batch_size ( int ) — The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. If you are manually setting the batch size, make sure to take into account the number of beams if you are running beam search max_cache_len ( int ) — The maximum sequence length with which the model will be used. device ( torch.device or str ) — The device on which the cache should be initialized. Should be the same as the layer. dtype ( torch.dtype , optional , defaults to torch.float32 ) — The default dtype to use when initializing the layer. layer_device_map(`Dict[int, Union[str, torch.device, int]]] , optional ) -- Mapping between the layers and its device. This is required when you are manually initializing the cache and the model is splitted between differents gpus. You can know which layers mapped to which device by checking the associated device_map: model.hf_device_map`. Static Cache class to be used with torch.compile(model) and torch.export() . 
Example: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache >>> model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-chat-hf" ) >>> tokenizer = AutoTokenizer.from_pretrained( "meta-llama/Llama-2-7b-chat-hf" ) >>> inputs = tokenizer(text= "My name is Llama" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[ 1 ] + 10 >>> past_key_values = StaticCache(config=model.config, batch_size= 1 , max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation StaticCache() update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters key_states ( torch.Tensor ) — The new key states to cache. value_states ( torch.Tensor ) — The new value states to cache. layer_idx ( int ) — The index of the layer to cache the states for. cache_kwargs ( Dict[str, Any] , optional ) — Additional arguments for the cache subclass. The StaticCache needs the cache_position input to know how where to write in the cache. Updates the cache with the new key_states and value_states for the layer layer_idx . It is VERY important to index using a tensor, otherwise you introduce a copy to the device. get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) Returns the sequence length of the cached states that were seen by the model. reset < source > ( ) Resets the cache values while preserving the objects class transformers. OffloadedStaticCache < source > ( config : PretrainedConfig max_batch_size : int max_cache_len : typing.Optional[int] device : typing.Union[str, torch.device] dtype : typing.Optional[torch.dtype] = None offload_device : typing.Union[str, torch.device] = device(type='cpu') layer_device_map : typing.Optional[typing.Dict[int, typing.Union[str, torch.device, int]]] = None ) Parameters config (`PretrainedConfig) — The configuration file defining the shape-related attributes required to initialize the static cache. max_batch_size ( int ) — The maximum batch size with which the model will be used. max_cache_len ( int ) — The maximum sequence length with which the model will be used. device ( Union[str, torch.device] ) — The device on which the cache should be initialized. Should be the same as the layer device. dtype ( torch.dtype , optional ) — The default dtype to use when initializing the cache. offload_device ( Union[str, torch.device] , optional , defaults to cpu ) — The device to offload to. Defaults to CPU. layer_device_map ( Dict[int, Union[str, torch.device, int]] , optional ) — Mapping between the layers and its device. This is required when you are manually initializing the cache and the model is splitted between differents gpus. You can know which layers mapped to which device by checking the associated device_map: model.hf_device_map . key_cache ( List[torch.Tensor] ) — Off-loaded key cache tensors. First one will be on device, where-as the others are off-loaded. value_cache ( List[torch.Tensor] ) — Off-loaded value cache tensors. First one will be on device, where-as the others are off-loaded. max_batch_size ( int ) — The maximum batch size with which this cache can be used. 
max_cache_len ( int ) — The maximum sequence length with which this cache can be used. device ( torch.device ) — The device on which the cache is used. offload_device ( torch.device ) — The device used to offload to. dtype ( torch.dtype ) — The dtype used to initializing the cache. Static cache class to be used with torch.compile(model) that offloads to the CPU or another device. Example: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, OffloadedStaticCache >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer = AutoTokenizer.from_pretrained( "openai-community/gpt2" ) >>> inputs = tokenizer(text= "My name is GPT2" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[ 1 ] + 10 >>> past_key_values = OffloadedStaticCache(config=model.config, max_batch_size= 1 , max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> past_kv_length = outputs.past_key_values # access cache filled with key/values from generation update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters key_states ( torch.Tensor ) — The new key states to cache. value_states ( torch.Tensor ) — The new value states to cache. layer_idx ( int ) — The index of the layer to cache the states for. cache_kwargs ( Dict[str, Any] , optional ) — Additional arguments for the cache subclass. The OffloadedStaticCache needs the cache_position input to know how where to write in the cache. Updates the cache with the new key_states and value_states for the layer layer_idx . It is VERY important to index using a tensor, otherwise you introduce a copy to the device. get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) Returns the sequence length of the cached states that were seen by the model. reset < source > ( ) Resets the cache values while preserving the objects. class transformers. HybridCache < source > ( config : PretrainedConfig batch_size : int = None max_cache_len : int = None device : typing.Union[torch.device, str] = 'cpu' dtype : dtype = torch.float32 max_batch_size : typing.Optional[int] = None layer_device_map : typing.Optional[typing.Dict[int, typing.Union[str, torch.device, int]]] = None ) Parameters config (`PretrainedConfig) — The configuration file defining the shape-related attributes required to initialize the static cache. batch_size ( int ) — The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. max_cache_len ( int ) — The maximum sequence length with which the model will be used. device ( torch.device or str , optional , defaults to "cpu" ) — The device on which the cache should be initialized. Should be the same as the layer. dtype (torch.dtype, optional , defaults to torch.float32 ) — The default dtype to use when initializing the layer. layer_device_map(`Dict[int, Union[str, torch.device, int]]] , optional ) -- Mapping between the layers and its device. This is required when you are manually initializing the cache and the model is splitted between differents gpus. You can know which layers mapped to which device by checking the associated device_map: model.hf_device_map`. 
Hybrid Cache class to be used with torch.compile for Gemma2 models that alternate between a local sliding window attention and global attention in every other layer. Under the hood, Hybrid Cache leverages SlidingWindowCache for sliding window attention and StaticCache for global attention. For more information, see the documentation of each subcomponent cache class. Example: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, HybridCache >>> model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b" ) >>> tokenizer = AutoTokenizer.from_pretrained( "google/gemma-2-2b" ) >>> inputs = tokenizer(text= "My name is Gemma" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[ 1 ] + 10 >>> past_key_values = HybridCache(config=model.config, batch_size= 1 , max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation HybridCache() update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) reset < source > ( ) Resets the cache values while preserving the objects class transformers. SlidingWindowCache < source > ( config : PretrainedConfig batch_size : int = None max_cache_len : int = None device : device = None dtype : dtype = torch.float32 max_batch_size : typing.Optional[int] = None layer_device_map : typing.Optional[typing.Dict[int, typing.Union[str, torch.device, int]]] = None ) Parameters config ( PretrainedConfig ) — The configuration file defining the shape-related attributes required to initialize the static cache. batch_size ( int ) — The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. max_cache_len ( int ) — The maximum sequence length with which the model will be used. device ( torch.device or str ) — The device on which the cache should be initialized. Should be the same as the layer. dtype ( torch.dtype , optional , defaults to torch.float32 ) — The default dtype to use when initializing the layer. layer_device_map ( Dict[int, Union[str, torch.device, int]] , optional ) — Mapping between the layers and their device. This is required when you are manually initializing the cache and the model is split between different GPUs. You can check which layer is mapped to which device by looking at the associated device_map: model.hf_device_map . Sliding Window Cache class to be used with torch.compile for models like Mistral that support sliding window attention. Every time we try to update the cache, we compute the indices based on cache_position >= self.config.sliding_window - 1 . If this is true (meaning the cache cannot hold all the old key/value states and the new states together because of the sliding window constraint), we need to do a cyclic shift based on these indices to replace the oldest states with the new key/value states passed in. The to_shift is only true once we are above sliding_window.
Thus with sliding_window==64 : indices = (slicing + to_shift[-1].int()-1) % self.config.sliding_window tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 0]) We overwrite the cache using these, then we always write at cache_position (clamped to sliding_window ) Example: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, SlidingWindowCache >>> model = AutoModelForCausalLM.from_pretrained( "mistralai/Mistral-7B-Instruct-v0.3" ) >>> tokenizer = AutoTokenizer.from_pretrained( "mistralai/Mistral-7B-Instruct-v0.3" ) >>> inputs = tokenizer(text= "My name is Mistral" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> # Leave empty space for 10 new tokens, which can be used when calling forward iteratively 10 times to generate >>> max_generated_length = inputs.input_ids.shape[ 1 ] + 10 >>> past_key_values = SlidingWindowCache(config=model.config, batch_size= 1 , max_cache_len=max_generated_length, device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation SlidingWindowCache() update < source > ( key_states : Tensor value_states : Tensor layer_idx : int cache_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) reset < source > ( ) class transformers. EncoderDecoderCache < source > ( self_attention_cache : Cache cross_attention_cache : Cache ) Base, abstract class for all encoder-decoder caches. Can be used to hold combinations of self-attention and cross-attention caches. Example: Copied >>> from transformers import AutoProcessor, AutoModelForCausalLM, DynamicCache, EncoderDecoderCache >>> model = AutoModelForCausalLM.from_pretrained( "openai/whisper-small" ) >>> processor = AutoProcessor.from_pretrained( "openai/whisper-small" ) >>> inputs = processor(audio=YOUR-AUDIO, return_tensors= "pt" ) >>> # Prepare cache classes for encoder and decoder and pass it to model's forward >>> self_attention_cache = DynamicCache() >>> cross_attention_cache = DynamicCache() >>> past_key_values = EncoderDecoderCache(self_attention_cache, cross_attention_cache) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values # access cache filled with key/values from generation EncoderDecoderCache() get_seq_length < source > ( layer_idx : typing.Optional[int] = 0 ) Returns the sequence length of the cached states. A layer index can be optionally passed. to_legacy_cache < source > ( ) Converts the EncoderDecoderCache instance into its equivalent in the legacy cache format. from_legacy_cache < source > ( past_key_values : typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None ) Converts a cache in the legacy cache format into an equivalent EncoderDecoderCache . reset < source > ( ) reorder_cache < source > ( beam_idx : LongTensor ) Reorders the cache for beam search, given the selected beam indices. class transformers. 
MambaCache < source > ( config : PretrainedConfig batch_size : int = None dtype : dtype = torch.float16 device : typing.Union[torch.device, str, NoneType] = None max_batch_size : typing.Optional[int] = None ) Parameters config (`PretrainedConfig) — The configuration file defining the shape-related attributes required to initialize the static cache. batch_size ( int ) — The batch size with which the model will be used. Note that a new instance must be instantiated if a smaller batch size is used. dtype ( torch.dtype , optional , defaults to torch.float16 ) — The default dtype to use when initializing the layer. device ( torch.device or str , optional ) — The device on which the cache should be initialized. Should be the same as the layer. dtype — ( torch.dtype ): The default dtype used to initializing the cache. intermediate_size — ( int ): Model’s intermediate_size taken from config. ssm_state_size — ( int ): Model’s state_size taken from config. conv_kernel_size — ( int ): Model’s convolution kernel size taken from config conv_states — ( torch.Tensor ): A tensor of shape [layer_idx, batch_size, intermediate_size, conv_kernel_size] that holds convolutional states. ssm_states — ( torch.Tensor ): A tensor of shape [layer_idx, batch_size, intermediate_size, ssm_state_size] that holds ssm states Cache for mamba model which does not have attention mechanism and key value states. Example: Copied >>> from transformers import AutoTokenizer, MambaForCausalLM, MambaCache >>> model = MambaForCausalLM.from_pretrained( "state-spaces/mamba-130m-hf" ) >>> tokenizer = AutoTokenizer.from_pretrained( "state-spaces/mamba-130m-hf" ) >>> inputs = tokenizer(text= "My name is Mamba" , return_tensors= "pt" ) >>> # Prepare a cache class and pass it to model's forward >>> past_key_values = MambaCache(config=model.config, batch_size= 1 , device=model.device, dtype=model.dtype) >>> outputs = model(**inputs, past_key_values=past_key_values, use_cache= True ) >>> outputs.past_key_values MambaCache() update_conv_state < source > ( layer_idx : int new_conv_state : Tensor cache_position : LongTensor ) update_ssm_state < source > ( layer_idx : int new_ssm_state : Tensor ) reset < source > ( ) Watermark Utils class transformers. WatermarkingConfig < source > ( greenlist_ratio : typing.Optional[float] = 0.25 bias : typing.Optional[float] = 2.0 hashing_key : typing.Optional[int] = 15485863 seeding_scheme : typing.Optional[str] = 'lefthash' context_width : typing.Optional[int] = 1 ) Class that holds arguments for watermark generation and should be passed into GenerationConfig during generate . See this paper for more details on the arguments. Accepts the following keys: greenlist_ratio ( float ): Used for watermarking. The ratio of “green” tokens used to the vocabulary size. Defaults to 0.25. bias ( float ): Used with watermarking. The bias added to the selected “green” tokens’ logits. Defaults to 2.0. hashing_key ( int ): Hashing key used for watermarking. Defaults to 15485863 (the millionth prime). seeding_scheme ( str ): Algorithm to use for watermarking. Accepts values: “lefthash” (default): “green” tokens selection depend on the last token (Algorithm 2 from the paper) “selfhash”: “green” tokens selection depends on the current token itself (Algorithm 3 from the paper) The downside of this scheme is that it considers all possible next tokens and can be slower than “lefthash”. context_width( int ): The context length of previous tokens to use in seeding. Higher context length makes watermarking more robust. 
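As a quick illustration of how this configuration is consumed, the following sketch (added here; not part of the original reference) passes a WatermarkingConfig directly to generate . The gpt2 checkpoint, the prompt, and the bias / seeding_scheme values are arbitrary placeholders; the WatermarkDetector example below shows the full generate-then-detect round trip.

Copied
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, WatermarkingConfig
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> inputs = tok(["Alice and Bob are"], return_tensors="pt")
>>> # Any of the keys listed above can be set on the config
>>> watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash")
>>> out = model.generate(**inputs, watermarking_config=watermarking_config, do_sample=False, max_length=20)
>>> watermarked_text = tok.batch_decode(out, skip_special_tokens=True)[0]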
__call__ ( *args **kwargs ) Call self as a function. class transformers. WatermarkDetector < source > ( model_config : PretrainedConfig device : str watermarking_config : typing.Union[transformers.generation.configuration_utils.WatermarkingConfig, typing.Dict] ignore_repeated_ngrams : bool = False max_cache_size : int = 128 ) Parameters model_config ( PretrainedConfig ) — The model config that will be used to get model specific arguments used when generating. device ( str ) — The device which was used during watermarked text generation. watermarking_config (Union[ WatermarkingConfig , Dict ]) — The exact same watermarking config and arguments used when generating text. ignore_repeated_ngrams ( bool , optional , defaults to False ) — Whether to count every unique ngram only once or not. max_cache_size ( int , optional , defaults to 128) — The max size to be used for LRU caching of seeding/sampling algorithms called for every token. Detector for detection of watermark generated text. The detector needs to be given the exact same settings that were given during text generation to replicate the watermark greenlist generation and so detect the watermark. This includes the correct device that was used during text generation, the correct watermarking arguments and the correct tokenizer vocab size. The code was based on the original repo . See the paper for more information. Examples: Copied >>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkDetector, WatermarkingConfig >>> model_id = "openai-community/gpt2" >>> model = AutoModelForCausalLM.from_pretrained(model_id) >>> tok = AutoTokenizer.from_pretrained(model_id) >>> tok.pad_token_id = tok.eos_token_id >>> tok.padding_side = "left" >>> inputs = tok([ "This is the beginning of a long story" , "Alice and Bob are" ], padding= True , return_tensors= "pt" ) >>> input_len = inputs[ "input_ids" ].shape[- 1 ] >>> # first generate text with watermark and without >>> watermarking_config = WatermarkingConfig(bias= 2.5 , seeding_scheme= "selfhash" ) >>> out_watermarked = model.generate(**inputs, watermarking_config=watermarking_config, do_sample= False , max_length= 20 ) >>> out = model.generate(**inputs, do_sample= False , max_length= 20 ) >>> # now we can instantiate the detector and check the generated text >>> detector = WatermarkDetector(model_config=model.config, device= "cpu" , watermarking_config=watermarking_config) >>> detection_out_watermarked = detector(out_watermarked, return_dict= True ) >>> detection_out = detector(out, return_dict= True ) >>> detection_out_watermarked.prediction array([ True , True ]) >>> detection_out.prediction array([ False , False ]) __call__ < source > ( input_ids : LongTensor z_threshold : float = 3.0 return_dict : bool = False ) → WatermarkDetectorOutput or np.array Parameters input_ids ( torch.LongTensor ) — The watermark generated text. It is advised to remove the prompt, which can affect the detection. z_threshold ( Dict , optional , defaults to 3.0 ) — Changing this threshold will change the sensitivity of the detector. Higher z threshold gives less sensitivity and vice versa for lower z threshold. return_dict ( bool , optional , defaults to False ) — Whether to return ~generation.WatermarkDetectorOutput or not. If not it will return boolean predictions, Returns WatermarkDetectorOutput or np.array A WatermarkDetectorOutput if return_dict=True otherwise a np.array . ma class transformers. 
BayesianDetectorConfig < source > ( watermarking_depth : int = None base_rate : float = 0.5 **kwargs ) Parameters watermarking_depth ( int , optional ) — The number of tournament layers. base_rate ( float , optional , defaults to 0.5) — Prior probability P(w) that a text is watermarked. This is the configuration class to store the configuration of a BayesianDetectorModel . It is used to instantiate a Bayesian Detector model according to the specified arguments. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. class transformers. BayesianDetectorModel < source > ( config ) Parameters config ( BayesianDetectorConfig ) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Bayesian classifier for watermark detection. This detector uses Bayes’ rule to compute a watermarking score, which is the sigmoid of the log of the ratio of the posterior probabilities P(watermarked|g_values) and P(unwatermarked|g_values). Please see the section on BayesianScore in the paper for further details. Paper URL: https://www.nature.com/articles/s41586-024-08025-4 Note that this detector only works with non-distortionary Tournament-based watermarking using the Bernoulli(0.5) g-value distribution. This model inherits from PreTrainedModel . Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( g_values : Tensor mask : Tensor labels : typing.Optional[torch.Tensor] = None loss_batch_weight = 1 return_dict = False ) Parameters g_values ( torch.Tensor of shape (batch_size, seq_len, watermarking_depth, ...) ) — g-values (with values 0 or 1). mask — A binary array of shape [batch_size, seq_len] indicating which g-values should be used. g-values with mask value 0 are discarded. Computes the watermarked posterior P(watermarked|g_values). class transformers. SynthIDTextWatermarkingConfig < source > ( ngram_len : int keys : typing.List[int] context_history_size : int = 1024 sampling_table_seed : int = 0 sampling_table_size : int = 65536 skip_first_ngram_calls : bool = False debug_mode : bool = False ) Parameters ngram_len ( int ) — Ngram length. keys ( List[int] ) — A sequence of watermarking keys, one for each depth. context_history_size ( int , optional , defaults to 1024) — Size of the tensor to keep track of seen contexts. sampling_table_seed ( int , optional , defaults to 0) — Random seed to generate the sampling table. sampling_table_size ( int , optional , defaults to 65536) — Size of the sampling table. skip_first_ngram_calls ( bool , optional , defaults to False ) — Whether to skip the first ngram calls. debug_mode ( bool , optional , defaults to False ) — If enabled, logits are modified to a uniform distribution before the watermarking modification is applied; this is only meant for testing the implementation. Class that holds arguments for watermark generation and should be passed into GenerationConfig during generate . See this paper for more details on the arguments.
Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig >>> tokenizer = AutoTokenizer.from_pretrained( 'google/gemma-2-2b' , padding_side= "left" ) >>> model = AutoModelForCausalLM.from_pretrained( 'google/gemma-2-2b' ) >>> # SynthID Text configuration >>> watermarking_config = SynthIDTextWatermarkingConfig( ... keys=[ 654 , 400 , 836 , 123 , 340 , 443 , 597 , 160 , 57 ], ... ngram_len= 5 , ... ) >>> # Generation with watermarking >>> tokenized_prompts = tokenizer([ "Once upon a time, " ], return_tensors= "pt" , padding= True ) >>> output_sequences = model.generate( ... **tokenized_prompts, watermarking_config=watermarking_config, do_sample= True , max_new_tokens= 10 ... ) >>> watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens= True ) class transformers. SynthIDTextWatermarkDetector < source > ( detector_module : BayesianDetectorModel logits_processor : SynthIDTextWatermarkLogitsProcessor tokenizer : typing.Any ) Parameters detector_module ( BayesianDetectorModel ) — Bayesian detector module object initialized with parameters. Check examples/research_projects/synthid_text/detector_training.py for usage. logits_processor ( SynthIDTextWatermarkLogitsProcessor ) — The logits processor used for watermarking. tokenizer ( Any ) — The tokenizer used for the model. SynthID text watermark detector class. This class has to be initialized with the trained bayesian detector module check script in examples/synthid_text/detector_training.py for example in training/saving/loading this detector module. The folder also showcases example use case of this detector. Examples: Copied >>> from transformers import ( ... AutoTokenizer, BayesianDetectorModel, SynthIDTextWatermarkLogitsProcessor, SynthIDTextWatermarkDetector ... ) >>> # Load the detector. See examples/research_projects/synthid_text for training a detector. >>> detector_model = BayesianDetectorModel.from_pretrained( "joaogante/dummy_synthid_detector" ) >>> logits_processor = SynthIDTextWatermarkLogitsProcessor( ... **detector_model.config.watermarking_config, device= "cpu" ... ) >>> tokenizer = AutoTokenizer.from_pretrained(detector_model.config.model_name) >>> detector = SynthIDTextWatermarkDetector(detector_model, logits_processor, tokenizer) >>> # Test whether a certain string is watermarked >>> test_input = tokenizer([ "This is a test input" ], return_tensors= "pt" ) >>> is_watermarked = detector(test_input.input_ids) __call__ < source > ( tokenized_outputs : Tensor ) Compile Utils class transformers. CompileConfig < source > ( fullgraph : bool = True dynamic : typing.Optional[bool] = None backend : typing.Union[str, typing.Callable] = 'inductor' mode : str = 'reduce-overhead' options : typing.Optional[dict] = None ) Parameters fullgraph ( bool , optional , defaults to True ) — If True , requires that the whole forward be capturable in a single graph. dynamic ( bool or None , optional ) — Whether to try to use dynamic shape graphs. backend ( str or Callable , optional , defaults to "inductor" ) — Backend to be used. mode ( str , optional , defaults to "reduce-overhead" ) — Controls balance between performance and overhead. options ( dict , optional ) — A dictionary of options to pass to the backend. Class that holds arguments relative to torch.compile behavior, when using automatic compilation in generate . See torch.compile for more details on the arguments. 
Examples: Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, CompileConfig >>> tokenizer = AutoTokenizer.from_pretrained( 'google/gemma-2-2b' ) >>> model = AutoModelForCausalLM.from_pretrained( 'google/gemma-2-2b' ).cuda() >>> # Automatic compile configuration, used with static cache >>> compile_config = CompileConfig(dynamic= True ) >>> # Generation with static cache and compile config >>> input = tokenizer.encode( "Hello there, how" , return_tensors= "pt" ).cuda() >>> output = model.generate( ... input , do_sample= False , max_new_tokens= 300 , cache_implementation= "static" , compile_config=compile_config ... ) >>> output_text = tokenizer.batch_decode(output, skip_special_tokens= True )[ 0 ] __call__ ( *args **kwargs ) Call self as a function.
Licenses.txt
Licenses
You are able to add a license to any repo that you create on the Hugging Face Hub to let other users know about the permissions that you want to attribute to your code or data. The license can be specified in your repository’s README.md file, known as a card on the Hub, in the card’s metadata section. Remember to seek out and respect a project’s license if you’re considering using their code or data.
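For example, adding license: apache-2.0 to the YAML metadata block at the top of a repository’s README.md is enough for the Hub to display the license. As an optional illustration that is not part of the original page, the snippet below sets the same metadata programmatically with the huggingface_hub library; the repository id is a placeholder and the call assumes you are logged in and have write access to that repo.

Copied
>>> from huggingface_hub import metadata_update
>>> # "my-username/my-model" is a hypothetical repo id; replace it with a repository you own
>>> metadata_update("my-username/my-model", {"license": "apache-2.0"}, overwrite=True)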
A full list of the available licenses, each followed by the license identifier to use in the repo card, is available here:
Apache license 2.0: apache-2.0
MIT: mit
OpenRAIL license family: openrail
BigScience OpenRAIL-M: bigscience-openrail-m
CreativeML OpenRAIL-M: creativeml-openrail-m
BigScience BLOOM RAIL 1.0: bigscience-bloom-rail-1.0
BigCode Open RAIL-M v1: bigcode-openrail-m
Academic Free License v3.0: afl-3.0
Artistic license 2.0: artistic-2.0
Boost Software License 1.0: bsl-1.0
BSD license family: bsd
BSD 2-clause “Simplified” license: bsd-2-clause
BSD 3-clause “New” or “Revised” license: bsd-3-clause
BSD 3-clause Clear license: bsd-3-clause-clear
Computational Use of Data Agreement: c-uda
Creative Commons license family: cc
Creative Commons Zero v1.0 Universal: cc0-1.0
Creative Commons Attribution 2.0: cc-by-2.0
Creative Commons Attribution 2.5: cc-by-2.5
Creative Commons Attribution 3.0: cc-by-3.0
Creative Commons Attribution 4.0: cc-by-4.0
Creative Commons Attribution Share Alike 3.0: cc-by-sa-3.0
Creative Commons Attribution Share Alike 4.0: cc-by-sa-4.0
Creative Commons Attribution Non Commercial 2.0: cc-by-nc-2.0
Creative Commons Attribution Non Commercial 3.0: cc-by-nc-3.0
Creative Commons Attribution Non Commercial 4.0: cc-by-nc-4.0
Creative Commons Attribution No Derivatives 4.0: cc-by-nd-4.0
Creative Commons Attribution Non Commercial No Derivatives 3.0: cc-by-nc-nd-3.0
Creative Commons Attribution Non Commercial No Derivatives 4.0: cc-by-nc-nd-4.0
Creative Commons Attribution Non Commercial Share Alike 2.0: cc-by-nc-sa-2.0
Creative Commons Attribution Non Commercial Share Alike 3.0: cc-by-nc-sa-3.0
Creative Commons Attribution Non Commercial Share Alike 4.0: cc-by-nc-sa-4.0
Community Data License Agreement – Sharing, Version 1.0: cdla-sharing-1.0
Community Data License Agreement – Permissive, Version 1.0: cdla-permissive-1.0
Community Data License Agreement – Permissive, Version 2.0: cdla-permissive-2.0
Do What The F*ck You Want To Public License: wtfpl
Educational Community License v2.0: ecl-2.0
Eclipse Public License 1.0: epl-1.0
Eclipse Public License 2.0: epl-2.0
Etalab Open License 2.0: etalab-2.0
European Union Public License 1.1: eupl-1.1
GNU Affero General Public License v3.0: agpl-3.0
GNU Free Documentation License family: gfdl
GNU General Public License family: gpl
GNU General Public License v2.0: gpl-2.0
GNU General Public License v3.0: gpl-3.0
GNU Lesser General Public License family: lgpl
GNU Lesser General Public License v2.1: lgpl-2.1
GNU Lesser General Public License v3.0: lgpl-3.0
ISC: isc
Intel Research Use License Agreement: intel-research
LaTeX Project Public License v1.3c: lppl-1.3c
Microsoft Public License: ms-pl
Apple Sample Code license: apple-ascl
Mozilla Public License 2.0: mpl-2.0
Open Data Commons License Attribution family: odc-by
Open Database License family: odbl
Open Rail++-M License: openrail++
Open Software License 3.0: osl-3.0
PostgreSQL License: postgresql
SIL Open Font License 1.1: ofl-1.1
University of Illinois/NCSA Open Source License: ncsa
The Unlicense: unlicense
zLib License: zlib
Open Data Commons Public Domain Dedication and License: pddl
Lesser General Public License For Linguistic Resources: lgpl-lr
DeepFloyd IF Research License Agreement: deepfloyd-if-license
Llama 2 Community License Agreement: llama2
Llama 3 Community License Agreement: llama3
Llama 3.1 Community License Agreement: llama3.1
Llama 3.2 Community License Agreement: llama3.2
Gemma Terms of Use: gemma
Unknown: unknown
Other: other
In case of license: other please add the license’s text to a LICENSE file inside your repo (or contact us to add the
license you use to this list), and set a name for it in license_name .
Train_and_deploy_Hugging_Face_on_Amazon_SageMaker_.txt
Train and deploy Hugging Face on Amazon SageMaker

The get started guide will show you how to quickly use Hugging Face on Amazon SageMaker. Learn how to fine-tune and deploy a pretrained 🤗 Transformers model on SageMaker for a binary text classification task.

💡 If you are new to Hugging Face, we recommend first reading the 🤗 Transformers quick tour .

📓 Open the sagemaker-notebook.ipynb file to follow along!

Installation and setup

Get started by installing the necessary Hugging Face libraries and SageMaker. You will also need to install PyTorch and TensorFlow if you don’t already have them installed.

Copied
pip install "sagemaker>=2.140.0" "transformers==4.26.1" "datasets[s3]==2.10.1" --upgrade

If you want to run this example in SageMaker Studio , upgrade ipywidgets for the 🤗 Datasets library and restart the kernel:

Copied
%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
IPython.Application.instance().kernel.do_shutdown(True)

Next, you should set up your environment: a SageMaker session and an S3 bucket. The S3 bucket will store data, models, and logs. You will need access to an IAM execution role with the required permissions.

If you are planning on using SageMaker in a local environment, you need to provide the role yourself. Learn more about how to set this up here .

⚠️ The execution role is only available when you run a notebook within SageMaker. If you try to run get_execution_role in a notebook not on SageMaker, you will get a region error.

Copied
import sagemaker

sess = sagemaker.Session()
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    sagemaker_session_bucket = sess.default_bucket()

role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

Preprocess

The 🤗 Datasets library makes it easy to download and preprocess a dataset for training. Download and tokenize the IMDb dataset:

Copied
from datasets import load_dataset
from transformers import AutoTokenizer

# load dataset
train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# create tokenization function
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

# tokenize train and test datasets
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)

# set dataset format for PyTorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

Upload dataset to S3 bucket

Next, upload the preprocessed dataset to your S3 session bucket with the 🤗 Datasets S3 filesystem implementation:

Copied
# S3 key prefix of your choice for organizing the data
s3_prefix = "samples/datasets/imdb"

# save train_dataset to s3
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
train_dataset.save_to_disk(training_input_path)

# save test_dataset to s3
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"
test_dataset.save_to_disk(test_input_path)

Start a training job

Create a Hugging Face Estimator to handle end-to-end SageMaker training and deployment. The most important parameters to pay attention to are:

entry_point refers to the fine-tuning script which you can find in the train.py file .
instance_type refers to the SageMaker instance that will be launched. Take a look here for a complete list of instance types.
hyperparameters refers to the training hyperparameters the model will be fine-tuned with.

Copied
from sagemaker.huggingface import HuggingFace

hyperparameters = {
    "epochs": 1,                                        # number of training epochs
    "train_batch_size": 32,                             # training batch size
    "model_name": "distilbert/distilbert-base-uncased"  # name of pretrained model
}

huggingface_estimator = HuggingFace(
    entry_point="train.py",           # fine-tuning script to use in training job
    source_dir="./scripts",           # directory where fine-tuning script is stored
    instance_type="ml.p3.2xlarge",    # instance type
    instance_count=1,                 # number of instances
    role=role,                        # IAM role used in training job to access AWS resources (S3)
    transformers_version="4.26",      # Transformers version
    pytorch_version="1.13",           # PyTorch version
    py_version="py39",                # Python version
    hyperparameters=hyperparameters   # hyperparameters to use in training job
)

Begin training with one line of code:

Copied
huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})

Deploy model

Once the training job is complete, deploy your fine-tuned model by calling deploy() with the number of instances and instance type:

Copied
predictor = huggingface_estimator.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")

Call predict() on your data:

Copied
sentiment_input = {"inputs": "It feels like a curtain closing...there was an elegance in the way they moved toward conclusion. No fan is going to watch and feel short-changed."}

predictor.predict(sentiment_input)

After running your request, delete the endpoint:

Copied
predictor.delete_endpoint()

What’s next?

Congratulations, you’ve just fine-tuned and deployed a pretrained 🤗 Transformers model on SageMaker! 🎉 For your next steps, keep reading our documentation for more details about training and deployment. There are many interesting features such as distributed training and Spot instances .
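One common follow-up, shown below as an optional, illustrative sketch rather than part of the original guide, is re-deploying the trained model later from its S3 artifact with a HuggingFaceModel instead of keeping the estimator around. The model_data URI is a placeholder for your own training job output, and the container versions simply mirror the estimator configuration used above.

Copied
from sagemaker.huggingface import HuggingFaceModel

# placeholder S3 URI; point this at the model.tar.gz produced by your training job
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/path/to/model.tar.gz",
    role=role,                      # same IAM execution role as above
    transformers_version="4.26",    # Transformers version used for training
    pytorch_version="1.13",         # PyTorch version used for training
    py_version="py39",              # Python version
)

predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
predictor.predict({"inputs": "I love using Hugging Face on SageMaker!"})
predictor.delete_endpoint()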
Data_Collator.txt
Data Collator
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of the same type as the elements of train_dataset or eval_dataset.

To be able to build batches, data collators may apply some processing (like padding). Some of them (like DataCollatorForLanguageModeling) also apply some random data augmentation (like random masking) on the formed batch.

Examples of use can be found in the example scripts or example notebooks.

Default data collator
transformers.default_data_collator < source > ( features : typing.List[transformers.data.data_collator.InputDataClass] return_tensors = 'pt' )

Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:

label : handles a single value (int or float) per object
label_ids : handles a list of values per object

Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for examples of how it's useful.

DefaultDataCollator
class transformers.DefaultDataCollator < source > ( return_tensors : str = 'pt' )

Parameters
return_tensors ( str , optional , defaults to "pt" ) — The type of Tensor to return. Allowable values are "np", "pt" and "tf".

Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named:

label : handles a single value (int or float) per object
label_ids : handles a list of values per object

Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for examples of how it's useful.

This is an object (like other data collators) rather than a pure function like default_data_collator. This can be helpful if you need to set a return_tensors value at initialization.

DataCollatorWithPadding
class transformers.
DataCollatorWithPadding < source > ( tokenizer : PreTrainedTokenizerBase padding : typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length : typing.Optional[int] = None pad_to_multiple_of : typing.Optional[int] = None return_tensors : str = 'pt' ) Parameters tokenizer ( PreTrainedTokenizer or PreTrainedTokenizerFast ) — The tokenizer used for encoding the data. padding ( bool , str or PaddingStrategy , optional , defaults to True ) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). 'max_length' : Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad' : No padding (i.e., can output a batch with sequences of different lengths). max_length ( int , optional ) — Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of ( int , optional ) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). return_tensors ( str , optional , defaults to "pt" ) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator that will dynamically pad the inputs received. DataCollatorForTokenClassification class transformers. DataCollatorForTokenClassification < source > ( tokenizer : PreTrainedTokenizerBase padding : typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length : typing.Optional[int] = None pad_to_multiple_of : typing.Optional[int] = None label_pad_token_id : int = -100 return_tensors : str = 'pt' ) Parameters tokenizer ( PreTrainedTokenizer or PreTrainedTokenizerFast ) — The tokenizer used for encoding the data. padding ( bool , str or PaddingStrategy , optional , defaults to True ) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). 'max_length' : Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad' : No padding (i.e., can output a batch with sequences of different lengths). max_length ( int , optional ) — Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of ( int , optional ) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). label_pad_token_id ( int , optional , defaults to -100) — The id to use when padding the labels (-100 will be automatically ignore by PyTorch loss functions). return_tensors ( str , optional , defaults to "pt" ) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator that will dynamically pad the inputs received, as well as the labels. DataCollatorForSeq2Seq class transformers. 
DataCollatorForSeq2Seq < source > ( tokenizer : PreTrainedTokenizerBase model : typing.Optional[typing.Any] = None padding : typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length : typing.Optional[int] = None pad_to_multiple_of : typing.Optional[int] = None label_pad_token_id : int = -100 return_tensors : str = 'pt' ) Parameters tokenizer ( PreTrainedTokenizer or PreTrainedTokenizerFast ) — The tokenizer used for encoding the data. model ( PreTrainedModel , optional ) — The model that is being trained. If set and has the prepare_decoder_input_ids_from_labels , use it to prepare the decoder_input_ids This is useful when using label_smoothing to avoid calculating loss twice. padding ( bool , str or PaddingStrategy , optional , defaults to True ) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). 'max_length' : Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad' : No padding (i.e., can output a batch with sequences of different lengths). max_length ( int , optional ) — Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of ( int , optional ) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). label_pad_token_id ( int , optional , defaults to -100) — The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions). return_tensors ( str , optional , defaults to "pt" ) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator that will dynamically pad the inputs received, as well as the labels. DataCollatorForLanguageModeling class transformers. DataCollatorForLanguageModeling < source > ( tokenizer : PreTrainedTokenizerBase mlm : bool = True mlm_probability : float = 0.15 pad_to_multiple_of : typing.Optional[int] = None tf_experimental_compile : bool = False return_tensors : str = 'pt' ) Parameters tokenizer ( PreTrainedTokenizer or PreTrainedTokenizerFast ) — The tokenizer used for encoding the data. mlm ( bool , optional , defaults to True ) — Whether or not to use masked language modeling. If set to False , the labels are the same as the inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for non-masked tokens and the value to predict for the masked token. mlm_probability ( float , optional , defaults to 0.15) — The probability with which to (randomly) mask tokens in the input, when mlm is set to True . pad_to_multiple_of ( int , optional ) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.0 (Volta). return_tensors ( str ) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length. 
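As a quick, hedged illustration of the masking behaviour described above (and of the special_tokens_mask tip that follows), here is a minimal sketch; the checkpoint name and sentences are placeholders, not part of the API reference.

```python
# Minimal sketch of DataCollatorForLanguageModeling used for masked language modeling.
# The checkpoint name is only an example; any fast tokenizer with a mask token works.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

# Tokenize a couple of raw sentences; return_special_tokens_mask=True lets the
# collator avoid masking [CLS]/[SEP]/[PAD] tokens.
features = [
    tokenizer(text, return_special_tokens_mask=True)
    for text in ["Data collators build batches.", "They can also mask tokens at random."]
]

batch = collator(features)
print(batch["input_ids"].shape)  # padded to the longest sequence in the batch
print(batch["labels"])           # -100 everywhere except at the randomly masked positions
```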
For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the "special_tokens_mask" key, as returned by a PreTrainedTokenizer or a PreTrainedTokenizerFast with the argument return_special_tokens_mask=True . numpy_mask_tokens < source > ( inputs : typing.Any special_tokens_mask : typing.Optional[typing.Any] = None ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. tf_mask_tokens < source > ( inputs : typing.Any vocab_size mask_token_id special_tokens_mask : typing.Optional[typing.Any] = None ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. torch_mask_tokens < source > ( inputs : typing.Any special_tokens_mask : typing.Optional[typing.Any] = None ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. DataCollatorForWholeWordMask class transformers. DataCollatorForWholeWordMask < source > ( tokenizer : PreTrainedTokenizerBase mlm : bool = True mlm_probability : float = 0.15 pad_to_multiple_of : typing.Optional[int] = None tf_experimental_compile : bool = False return_tensors : str = 'pt' ) Data collator used for language modeling that masks entire words. collates batches of tensors, honoring their tokenizer’s pad_token preprocesses batches for masked language modeling This collator relies on details of the implementation of subword tokenization by BertTokenizer , specifically that subword tokens are prefixed with ## . For tokenizers that do not adhere to this scheme, this collator will produce an output that is roughly equivalent to .DataCollatorForLanguageModeling . numpy_mask_tokens < source > ( inputs : typing.Any mask_labels : typing.Any ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Set ‘mask_labels’ means we use whole word mask (wwm), we directly mask idxs according to it’s ref. tf_mask_tokens < source > ( inputs : typing.Any mask_labels : typing.Any ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Set ‘mask_labels’ means we use whole word mask (wwm), we directly mask idxs according to it’s ref. torch_mask_tokens < source > ( inputs : typing.Any mask_labels : typing.Any ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Set ‘mask_labels’ means we use whole word mask (wwm), we directly mask idxs according to it’s ref. DataCollatorForPermutationLanguageModeling class transformers. DataCollatorForPermutationLanguageModeling < source > ( tokenizer : PreTrainedTokenizerBase plm_probability : float = 0.16666666666666666 max_span_length : int = 5 return_tensors : str = 'pt' ) Data collator used for permutation language modeling. collates batches of tensors, honoring their tokenizer’s pad_token preprocesses batches for permutation language modeling with procedures specific to XLNet numpy_mask_tokens < source > ( inputs : typing.Any ) The masked tokens to be predicted for a particular sequence are determined by the following algorithm: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far). 
Sample a span_length from the interval [1, max_span_length] (length of span of tokens to be masked)
Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked
Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length
Set cur_len = cur_len + context_length . If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.

tf_mask_tokens < source > ( inputs : typing.Any )
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far).
Sample a span_length from the interval [1, max_span_length] (length of span of tokens to be masked)
Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked
Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length
Set cur_len = cur_len + context_length . If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.

torch_mask_tokens < source > ( inputs : typing.Any )
The masked tokens to be predicted for a particular sequence are determined by the following algorithm:
Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far).
Sample a span_length from the interval [1, max_span_length] (length of span of tokens to be masked)
Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked
Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length
Set cur_len = cur_len + context_length . If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.

DataCollatorWithFlattening
class transformers.DataCollatorWithFlattening < source > ( *args return_position_ids = True separator_id = -100 **kwargs )

Data collator used for the padding-free approach. It does the following:
concatenates the entire mini-batch into a single long sequence of shape [1, total_tokens]
uses separator_id to separate sequences within the concatenated labels (default value is -100)
adds no padding, and returns input_ids , labels and position_ids
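To close the page, here is a small, self-contained sketch of the most common pattern: dynamic per-batch padding with DataCollatorWithPadding inside a PyTorch DataLoader. The checkpoint name, sentences and labels are placeholders; this is an illustration, not part of the API reference.

```python
# Minimal sketch: dynamic padding with DataCollatorWithPadding in a PyTorch DataLoader.
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer, pad_to_multiple_of=8)

texts = ["short example", "a noticeably longer example that forces padding", "mid-sized one"]
labels = [0, 1, 0]

# Each element mirrors what an item of a tokenized train_dataset would look like.
features = [
    {**tokenizer(text), "labels": label}
    for text, label in zip(texts, labels)
]

loader = DataLoader(features, batch_size=3, collate_fn=collator)
batch = next(iter(loader))
print(batch["input_ids"].shape)  # (3, padded_length): padded per batch, not per dataset
```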
BCO_Trainer.txt
BCO Trainer

TRL supports the Binary Classifier Optimization (BCO). The BCO authors train a binary classifier whose logit serves as a reward, so that the classifier maps {prompt, chosen completion} pairs to 1 and {prompt, rejected completion} pairs to 0. For a full example, have a look at examples/scripts/bco.py.

Expected dataset type
The BCOTrainer requires an unpaired preference dataset. The BCOTrainer supports both conversational and standard dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.

Expected model format
The BCO trainer expects a model of type AutoModelForCausalLM, whereas PPO expects AutoModelForCausalLMWithValueHead for the value function.

Using the BCOTrainer
For a detailed example, have a look at the examples/scripts/bco.py script. At a high level, we need to initialize the BCOTrainer with a model we wish to train and a reference ref_model which we will use to calculate the implicit rewards of the preferred and rejected responses. The beta refers to the hyperparameter of the implicit reward, and the dataset contains the 3 entries of an unpaired preference dataset (prompt, completion, and label). Note that the model and ref_model need to have the same architecture (i.e., decoder-only or encoder-decoder).

Copied
training_args = BCOConfig(
    beta=0.1,
)
bco_trainer = BCOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)

After this, one can then call:

Copied
bco_trainer.train()

Underlying Distribution matching (UDM)
In practical scenarios, the thumbs-up and thumbs-down datasets are likely to have divergent underlying distributions of prompts. Consider an LLM deployed for user feedback: if the model excels in writing tasks but underperforms in coding, the thumbs-up dataset will be dominated by writing-related prompts, while the thumbs-down dataset will contain mostly coding-related prompts.
If the prompts in your desired and undesired datasets differ a lot, it is useful to enable UDM.

Choose an embedding model and tokenizer:

Copied
from functools import partial

from accelerate import Accelerator
from transformers import AutoModel, AutoTokenizer

embedding_model = AutoModel.from_pretrained(your_model_id)
embedding_tokenizer = AutoTokenizer.from_pretrained(your_model_id)

# customize this function depending on your embedding model
def embed_prompt(input_ids, attention_mask, model):
    outputs = model(input_ids=input_ids, attention_mask=attention_mask)
    return outputs.last_hidden_state.mean(dim=1)

embedding_model = Accelerator().prepare_model(embedding_model)
embedding_func = partial(embed_prompt, model=embedding_model)

Set prompt_sample_size to define how many prompts are selected to train the UDM classifier, and start the training with the provided embedding function:

Copied
training_args = BCOConfig(
    beta=0.1,
    prompt_sample_size=512,
)

bco_trainer = BCOTrainer(
    model,
    model_ref,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    embedding_func=embedding_func,
    embedding_tokenizer=embedding_tokenizer,
)

bco_trainer.train()

For Mixture of Experts Models: Enabling the auxiliary loss
MoEs are most efficient when the load is roughly evenly distributed between experts. To ensure that we train MoEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss. This option is enabled by setting output_router_logits=True in the model config (e.g. MixtralConfig). To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter router_aux_loss_coef=... (default: 0.001).

BCOTrainer
class trl.BCOTrainer < source > (
    model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str] = None
    ref_model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str, NoneType] = None
    args : BCOConfig = None
    train_dataset : typing.Optional[datasets.arrow_dataset.Dataset] = None
    eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None
    processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None
    data_collator : typing.Optional[transformers.data.data_collator.DataCollator] = None
    model_init : typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None
    callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None
    optimizers : tuple = (None, None)
    preprocess_logits_for_metrics : typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None
    peft_config : typing.Optional[dict] = None
    compute_metrics : typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], dict]] = None
    model_adapter_name : typing.Optional[str] = None
    ref_adapter_name : typing.Optional[str] = None
    embedding_func : typing.Optional[typing.Callable] = None
    embedding_tokenizer : typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None
)

Parameters
model ( transformers.PreTrainedModel ) — The model to train, preferably an AutoModelForCausalLM.
ref_model ( PreTrainedModelWrapper ) — Hugging Face transformer model with a causal language modeling head. Used for implicit reward computation and loss.
If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized. args ( BCOConfig ) — The arguments to use for training. train_dataset ( datasets.Dataset ) — The dataset to use for training. eval_dataset ( datasets.Dataset ) — The dataset to use for evaluation. processing_class ( PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin , optional ) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model. data_collator ( transformers.DataCollator , optional , defaults to None ) — The data collator to use for training. If None is specified, the default data collator ( DPODataCollatorWithPadding ) will be used which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences. model_init ( Callable[[], transformers.PreTrainedModel] ) — The model initializer to use for training. If None is specified, the default model initializer will be used. callbacks ( list[transformers.TrainerCallback] ) — The callbacks to use for training. optimizers ( tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] ) — The optimizer and scheduler to use for training. preprocess_logits_for_metrics ( Callable[[torch.Tensor, torch.Tensor], torch.Tensor] ) — The function to use to preprocess the logits before computing the metrics. peft_config ( dict , defaults to None ) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model. disable_dropout ( bool , defaults to True ) — Whether or not to disable dropouts in model and ref_model . compute_metrics ( Callable[[EvalPrediction], dict] , optional ) — The function to use to compute the metrics. Must take a EvalPrediction and return a dictionary string to metric values. model_adapter_name ( str , defaults to None ) — Name of the train target PEFT adapter, when using LoRA with multiple adapters. ref_adapter_name ( str , defaults to None ) — Name of the reference PEFT adapter, when using LoRA with multiple adapters. Initialize BCOTrainer from BCO paper. bco_loss < source > ( policy_chosen_logps : FloatTensor policy_rejected_logps : FloatTensor reference_chosen_logps : FloatTensor reference_rejected_logps : FloatTensor chosen_embeddings : typing.Optional[torch.FloatTensor] rejected_embeddings : typing.Optional[torch.FloatTensor] ) → A tuple of four tensors Parameters policy_chosen_logps — Log probabilities of the policy model for the chosen responses. Shape: (num(chosen) in batch_size,) policy_rejected_logps — Log probabilities of the policy model for the rejected responses. Shape: (num(rejected) in batch_size,) reference_chosen_logps — Log probabilities of the reference model for the chosen responses. Shape: (num(chosen) in batch_size,) reference_rejected_logps — Log probabilities of the reference model for the rejected responses. Shape: (num(rejected) in batch_size,) chosen_embeddings — embeddings of desirable prompts rejected_embeddings — embeddings of undesirable prompts Returns A tuple of four tensors (losses, chosen_rewards, rejected_rewards, delta). The losses tensor contains the BCO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively. 
The delta value contains the moving average of all implicit rewards. Compute the BCO loss for a batch of policy and reference model log probabilities. compute_reference_log_probs < source > ( padded_batch : dict ) Computes log probabilities of the reference model for a single padded batch of a BCO specific dataset. create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) — The name of the model. dataset_name ( str , optional , defaults to None ) — The name of the dataset used for training. tags ( str , list[str] or None, optional , defaults to None ) — Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer . evaluation_loop < source > ( dataloader : DataLoader description : str prediction_loss_only : typing.Optional[bool] = None ignore_keys : typing.Optional[list[str]] = None metric_key_prefix : str = 'eval' ) Overriding built-in evaluation loop to store metrics for each batch. Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict() . Works both with or without labels. generate_from_model_and_ref < source > ( model batch : dict ) Generate samples from the model and reference model for the given batch of inputs. get_batch_logps < source > ( logits : FloatTensor labels : LongTensor average_log_prob : bool = False label_pad_token_id : int = -100 is_encoder_decoder : bool = False ) Parameters logits — Logits of the model (unnormalized). Shape: (batch_size, sequence_length, vocab_size) labels — Labels for which to compute the log probabilities. Label tokens with a value of label_pad_token_id are ignored. Shape: (batch_size, sequence_length) average_log_prob — If True, return the average log probability per (non-masked) token. Otherwise, return the sum of the log probabilities of the (non-masked) tokens. Compute the log probabilities of the given labels under the given logits. get_batch_loss_metrics < source > ( model batch : dict ) Compute the BCO loss and other metrics for the given batch of inputs for train or test. get_eval_dataloader < source > ( eval_dataset : typing.Optional[datasets.arrow_dataset.Dataset] = None ) Parameters eval_dataset ( torch.utils.data.Dataset , optional ) — If provided, will override self.eval_dataset . If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. It must implement __len__ . Returns the evaluation ~torch.utils.data.DataLoader . Subclass of transformers.src.transformers.trainer.get_eval_dataloader to precompute ref_log_probs . get_train_dataloader < source > ( ) Returns the training ~torch.utils.data.DataLoader . Subclass of transformers.src.transformers.trainer.get_train_dataloader to precompute ref_log_probs . log < source > ( logs : dict start_time : typing.Optional[float] = None ) Parameters logs ( dict[str, float] ) — The values to log. start_time ( float or None , optional , defaults to None ) — Start time of the training. Log logs on the various objects watching training, including stored metrics. null_ref_context < source > ( ) Context manager for handling null reference model (that is, peft adapter manipulation). BCOConfig class trl. 
BCOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-05 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 
'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False max_length : typing.Optional[int] = None max_prompt_length : typing.Optional[int] = None max_completion_length : typing.Optional[int] = None beta : float = 0.1 label_pad_token_id : int = -100 padding_value : typing.Optional[int] = None truncation_mode : str = 'keep_end' generate_during_eval : bool = False is_encoder_decoder : typing.Optional[bool] = None precompute_ref_log_probs : bool = False model_init_kwargs : typing.Optional[dict[str, typing.Any]] = None ref_model_init_kwargs : typing.Optional[dict[str, typing.Any]] = None dataset_num_proc : typing.Optional[int] = None prompt_sample_size : int = 1024 min_density_ratio : float = 0.5 max_density_ratio : float = 10.0 ) Parameters max_length ( Optional[int] , optional , defaults to None ) — Maximum length of the sequences (prompt + completion) in the batch. This argument is required if you want to use the default data collator. max_prompt_length ( Optional[int] , optional , defaults to None ) — Maximum length of the prompt. This argument is required if you want to use the default data collator. max_completion_length ( Optional[int] , optional , defaults to None ) — Maximum length of the completion. This argument is required if you want to use the default data collator and your model is an encoder-decoder. beta ( float , optional , defaults to 0.1 ) — Parameter controlling the deviation from the reference model. 
Higher β means less deviation from the reference model. label_pad_token_id ( int , optional , defaults to -100 ) — Label pad token id. This argument is required if you want to use the default data collator. padding_value ( Optional[int] , optional , defaults to None ) — Padding value to use. If None , the padding value of the tokenizer is used. truncation_mode ( str , optional , defaults to "keep_end" ) — Truncation mode to use when the prompt is too long. Possible values are "keep_end" or "keep_start" . This argument is required if you want to use the default data collator. generate_during_eval ( bool , optional , defaults to False ) — If True , generates and logs completions from both the model and the reference model to W&B during evaluation. is_encoder_decoder ( Optional[bool] , optional , defaults to None ) — When using the model_init argument (callable) to instantiate the model instead of the model argument, you need to specify if the model returned by the callable is an encoder-decoder model. precompute_ref_log_probs ( bool , optional , defaults to False ) — Whether to precompute reference model log probabilities for training and evaluation datasets. This is useful when training without the reference model to reduce the total GPU memory needed. model_init_kwargs ( Optional[dict[str, Any]] , optional , defaults to None ) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string. ref_model_init_kwargs ( Optional[dict[str, Any]] , optional , defaults to None ) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the reference model from a string. dataset_num_proc ( Optional[int] , optional , defaults to None ) — Number of processes to use for processing the dataset. prompt_sample_size ( int , optional , defaults to 1024 ) — Number of prompts that are fed to density ratio classifier. min_density_ratio ( float , optional , defaults to 0.5 ) — Minimum value of the density ratio. The estimated density ratio is clamped to this value. max_density_ratio ( float , optional , defaults to 10.0 ) — Maximum value of the density ratio. The estimated density ratio is clamped to this value. Configuration class for the BCOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. < > Update on GitHub ← AlignProp CPO → BC O Trainer Expected dataset type Expected model format Using the BCO Trainer Underlying Distribution matching (UD M) For Mixture of Experts Models: Enabling the auxiliary loss BCO Trainer BCO Config
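As a concrete illustration of the unpaired preference dataset that the BCOTrainer expects, here is a minimal, hedged end-to-end sketch. The column names follow TRL's standard unpaired preference format (prompt, completion and a boolean label); the checkpoint name and example rows are placeholders, not a recommendation.

```python
# Hedged sketch: a tiny unpaired preference dataset and a BCOTrainer run.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import BCOConfig, BCOTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Unpaired preference format: one completion per prompt, labelled thumbs-up/down.
train_dataset = Dataset.from_dict({
    "prompt": ["What color is the sky?", "Write a haiku about rain."],
    "completion": ["The sky is green.", "Soft rain on the roof / quiet rooftops listening / puddles hold the sky"],
    "label": [False, True],  # True = desirable, False = undesirable
})

training_args = BCOConfig(output_dir="bco-sketch", beta=0.1)
trainer = BCOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```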
Pause_and_Resume_your_Endpoint.txt
Pause and Resume your Endpoint

You can pause & resume endpoints to save costs while keeping their configuration. Please note that if your endpoint is in a failed state, you will need to create a new endpoint.

To pause / resume your endpoint, navigate to the "overview" tab and click the button at the top right corner, which will show "Pause endpoint" to pause, or "Resume endpoint" to reactivate the paused endpoint.

When pausing an endpoint, the min & max number of replicas will be set to 0. When resuming an endpoint, the min & max number of replicas will be set to 1. This allows you to programmatically pause and resume your endpoint by updating the "min_replicas" and "max_replicas" fields in the API.

Paused inference endpoints will have the following status: PAUSED. Paused endpoints will NOT be billed until resumed.

Pausing & resuming an endpoint is a great way to save costs when you don't need your endpoint to be running. For example, you can easily pause your endpoint during the night or on weekends. You should pause your endpoint when you don't need it for the time being.

The URL of your endpoint will remain the same, even if you pause and resume it. This means that you can pause your endpoint and resume it later without having to update your code.

Pause an Inference Endpoint
To pause an endpoint, navigate to the "overview" tab and click the button at the top right corner, which says "Pause endpoint". After clicking the button, you will be asked to confirm the action. Click "Pause {ENDPOINT-NAME}" to confirm.

After that, your replicas will be set to 0 and your endpoint will be paused. You can see the status of your endpoint change to PAUSED in the "overview" tab. If you do not see the PAUSED status, make sure you've followed these instructions or contact us for help.
Resume an Inference Endpoint
To resume an endpoint, navigate to the "overview" tab and click the button at the top right corner showing "Resume endpoint". Your endpoint will be resumed and its status will change to Initializing and then to Running. Once your endpoint is running, you can start using it again, and billing will resume.
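Since the page notes that pausing and resuming can also be done programmatically, here is a hedged sketch using the huggingface_hub client. It assumes a reasonably recent huggingface_hub release and uses placeholder endpoint and namespace names; double-check the method names against the version you have installed.

```python
# Hedged sketch: pause and resume an Inference Endpoint programmatically.
# "my-endpoint" and "my-org" are placeholders.
from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # a token with access to the endpoint's namespace

# Pause the endpoint (replicas go to 0, billing stops)
endpoint = api.pause_inference_endpoint("my-endpoint", namespace="my-org")
print(endpoint.status)  # expected to report "paused"

# Later: resume it and wait until it is ready to serve requests again
endpoint = api.resume_inference_endpoint("my-endpoint", namespace="my-org")
endpoint.wait()          # blocks until the endpoint reports "running"
print(endpoint.url)      # the URL stays the same across pause/resume
```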
Tabular_Classification___Regression.txt
Tabular Classification / Regression

Using AutoTrain, you can train a model to classify or regress tabular data easily. All you need to do is select from a list of models and upload your dataset. Parameter tuning is done automatically.

Models
The following models are available for tabular classification / regression.

xgboost
random_forest
ridge
logistic_regression
svm
extra_trees
gradient_boosting
adaboost
decision_tree
knn

Data Format
Copied
id,category1,category2,feature1,target
1,A,X,0.3373961604172684,1
2,B,Z,0.6481718720511972,0
3,A,Y,0.36824153984054797,1
4,B,Z,0.9571551589530464,1
5,B,Z,0.14035078041264515,1
6,C,X,0.8700872583584364,1
7,A,Y,0.4736080452737105,0
8,C,Y,0.8009107519796442,1
9,A,Y,0.5204774795512048,0
10,A,Y,0.6788795301189603,0
...

Columns
Your CSV dataset must have two special columns: id and target. The remaining columns are used as features.

Parameters
class autotrain.trainers.tabular.params.TabularParams < source > (
    data_path : str = None
    model : str = 'xgboost'
    username : typing.Optional[str] = None
    seed : int = 42
    train_split : str = 'train'
    valid_split : typing.Optional[str] = None
    project_name : str = 'project-name'
    token : typing.Optional[str] = None
    push_to_hub : bool = False
    id_column : str = 'id'
    target_columns : typing.Union[typing.List[str], str] = ['target']
    categorical_columns : typing.Optional[typing.List[str]] = None
    numerical_columns : typing.Optional[typing.List[str]] = None
    task : str = 'classification'
    num_trials : int = 10
    time_limit : int = 600
    categorical_imputer : typing.Optional[str] = None
    numerical_imputer : typing.Optional[str] = None
    numeric_scaler : typing.Optional[str] = None
)

Parameters
data_path (str) — Path to the dataset.
model (str) — Name of the model to use. Default is "xgboost".
username (Optional[str]) — Hugging Face Username.
seed (int) — Random seed for reproducibility. Default is 42.
train_split (str) — Name of the training data split. Default is "train".
valid_split (Optional[str]) — Name of the validation data split.
project_name (str) — Name of the output directory. Default is "project-name".
token (Optional[str]) — Hub Token for authentication.
push_to_hub (bool) — Whether to push the model to the hub. Default is False.
id_column (str) — Name of the ID column. Default is "id".
target_columns (Union[List[str], str]) — Target column(s) in the dataset. Default is ["target"].
categorical_columns (Optional[List[str]]) — List of categorical columns.
numerical_columns (Optional[List[str]]) — List of numerical columns.
task (str) — Type of task (e.g., "classification"). Default is "classification".
num_trials (int) — Number of trials for hyperparameter optimization. Default is 10.
time_limit (int) — Time limit for training in seconds. Default is 600.
categorical_imputer (Optional[str]) — Imputer strategy for categorical columns.
numerical_imputer (Optional[str]) — Imputer strategy for numerical columns.
numeric_scaler (Optional[str]) — Scaler strategy for numerical columns.

TabularParams is a configuration class for tabular data training parameters.
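To tie the parameters above back to the example CSV, here is a hedged sketch that only constructs a TabularParams configuration; the paths, username and token are placeholders, and how you then launch the run (UI, CLI or API) depends on your AutoTrain setup.

```python
# Hedged sketch: building a TabularParams config for the example CSV above.
from autotrain.trainers.tabular.params import TabularParams

params = TabularParams(
    data_path="data/",                  # folder containing the training CSV
    model="xgboost",
    task="classification",
    id_column="id",
    target_columns=["target"],
    categorical_columns=["category1", "category2"],
    numerical_columns=["feature1"],
    num_trials=10,                      # hyperparameter search budget
    time_limit=600,                     # seconds
    project_name="tabular-demo",
    username="your-hf-username",        # placeholder
    token="hf_...",                     # placeholder
    push_to_hub=False,
)
print(params)
```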
Hugging_Face_JS_libraries.txt
Hugging Face JS libraries Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Hugging Face JS libraries Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Copied // Programatically interact with the Hub await createRepo ({ repo : { type : "model" , name : "my-user/nlp-model" }, accessToken : HF_TOKEN }); await uploadFile ({ repo : "my-user/nlp-model" , accessToken : HF_TOKEN , // Can work with native File in browsers file : { path : "pytorch_model.bin" , content : new Blob (...) } }); // Use HF Inference API, or external Inference Providers! await inference. chatCompletion ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [ { role : "user" , content : "Hello, nice to meet you!" , }, ], max_tokens : 512 , temperature : 0.5 , provider : "sambanova" , // or together, fal-ai, replicate, … }); await inference. textToImage ({ model : "black-forest-labs/FLUX.1-dev" , inputs : "a picture of a green bird" , }); // and much more… Hugging Face JS libraries This is a collection of JS libraries to interact with the Hugging Face API, with TS types included. 
@huggingface/inference : Use Inference API (serverless), Inference Endpoints (dedicated) and third-party Inference providers to make calls to 100,000+ Machine Learning models @huggingface/hub : Interact with huggingface.co to create or delete repos and commit / download files @huggingface/agents : Interact with HF models through a natural language interface @huggingface/gguf : A GGUF parser that works on remotely hosted files. @huggingface/dduf : Similar package for DDUF (DDUF Diffusers Unified Format) @huggingface/tasks : The definition files and source-of-truth for the Hub’s main primitives like pipeline tasks, model libraries, etc. @huggingface/jinja : A minimalistic JS implementation of the Jinja templating engine, to be used for ML chat templates. @huggingface/space-header : Use the Space mini_header outside Hugging Face We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno. The libraries are still very young, please help us by opening issues! Installation From NPM To install via NPM, you can download the libraries as needed: Copied npm install @huggingface/inference npm install @huggingface/hub npm install @huggingface/agents Then import the libraries in your code: Copied import { HfInference } from "@huggingface/inference" ; import { HfAgent } from "@huggingface/agents" ; import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub" ; import type { RepoId } from "@huggingface/hub" ; From CDN or Static hosting You can run our packages with vanilla JS, without any bundler, by using a CDN or static hosting. Using ES modules , i.e. <script type="module"> , you can import the libraries in your code: Copied < script type = "module" > import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/+esm' ; import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/+esm" ; </ script > Deno Copied // esm.sh import { HfInference } from "https://esm.sh/@huggingface/inference" import { HfAgent } from "https://esm.sh/@huggingface/agents" ; import { createRepo, commit, deleteRepo, listFiles } from "https://esm.sh/@huggingface/hub" // or npm: import { HfInference } from "npm:@huggingface/inference" import { HfAgent } from "npm:@huggingface/agents" ; import { createRepo, commit, deleteRepo, listFiles } from "npm:@huggingface/hub" Usage examples Get your HF access token in your account settings . @huggingface/inference examples Copied import { HfInference } from "@huggingface/inference" ; const HF_TOKEN = "hf_..." ; const inference = new HfInference ( HF_TOKEN ); // Chat completion API const out = await inference. chatCompletion ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [{ role : "user" , content : "Hello, nice to meet you!" }], max_tokens : 512 }); console . log (out. choices [ 0 ]. message ); // Streaming chat completion API for await ( const chunk of inference. chatCompletionStream ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [{ role : "user" , content : "Hello, nice to meet you!" }], max_tokens : 512 })) { console . log (chunk. choices [ 0 ]. delta . content ); } /// Using a third-party provider: await inference. chatCompletion ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [{ role : "user" , content : "Hello, nice to meet you!" }], max_tokens : 512 , provider : "sambanova" , // or together, fal-ai, replicate, … }) await inference. 
textToImage ({ model : "black-forest-labs/FLUX.1-dev" , inputs : "a picture of a green bird" , provider : "fal-ai" , }) // You can also omit "model" to use the recommended model for the task await inference. translation ({ inputs : "My name is Wolfgang and I live in Amsterdam" , parameters : { src_lang : "en" , tgt_lang : "fr" , }, }); // pass multimodal files or URLs as inputs await inference. imageToText ({ model : 'nlpconnect/vit-gpt2-image-captioning' , data : await ( await fetch ( 'https://picsum.photos/300/300' )). blob (), }) // Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/ const gpt2 = inference. endpoint ( 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2' ); const { generated_text } = await gpt2. textGeneration ({ inputs : 'The answer to the universe is' }); // Chat Completion const llamaEndpoint = inference. endpoint ( "https://api-inference.huggingface.co/models/meta-llama/Llama-3.1-8B-Instruct" ); const out = await llamaEndpoint. chatCompletion ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [{ role : "user" , content : "Hello, nice to meet you!" }], max_tokens : 512 , }); console . log (out. choices [ 0 ]. message ); @huggingface/hub examples Copied import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub" ; const HF_TOKEN = "hf_..." ; await createRepo ({ repo : "my-user/nlp-model" , // or { type: "model", name: "my-user/nlp-test" }, accessToken : HF_TOKEN }); await uploadFile ({ repo : "my-user/nlp-model" , accessToken : HF_TOKEN , // Can work with native File in browsers file : { path : "pytorch_model.bin" , content : new Blob (...) } }); await deleteFiles ({ repo : { type : "space" , name : "my-user/my-space" }, // or "spaces/my-user/my-space" accessToken : HF_TOKEN , paths : [ "README.md" , ".gitattributes" ] }); @huggingface/agents example Copied import { HfAgent , LLMFromHub , defaultTools } from '@huggingface/agents' ; const HF_TOKEN = "hf_..." ; const agent = new HfAgent ( HF_TOKEN , LLMFromHub ( HF_TOKEN ), [...defaultTools] ); // you can generate the code, inspect it and then run it const code = await agent. generateCode ( "Draw a picture of a cat wearing a top hat. Then caption the picture and read it out loud." ); console . log (code); const messages = await agent.evaluateCode(code) console . log (messages); // contains the data // or you can run the code directly, however you can't check that the code is safe to execute this way, use at your own risk. const messages = await agent. run ( "Draw a picture of a cat wearing a top hat. Then caption the picture and read it out loud." ) console . log (messages); There are more features of course, check each library’s README! Formatting & testing Copied sudo corepack enable pnpm install pnpm -r format:check pnpm -r lint:check pnpm -r test Building Copied pnpm -r build This will generate ESM and CJS javascript files in packages/*/dist , eg packages/inference/dist/index.mjs . < > Update on GitHub Use Inference Endpoints → Hugging Face J S libraries Installation From NPM From CD N or Static hosting Deno Usage examples @huggingface/inference examples @huggingface/hub examples @huggingface/agents example Formatting & testing Building
Using_AllenNLP_at_Hugging_Face.txt
Using AllenNLP at Hugging Face Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using AllenNLP at Hugging Face Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Adapters AllenNLP BERTopic Asteroid Diffusers ESPnet fastai Flair Keras TF-Keras (legacy) ML-Agents mlx-image MLX OpenCLIP PaddleNLP peft RL-Baselines3-Zoo Sample Factory Sentence Transformers SetFit spaCy SpanMarker SpeechBrain Stable-Baselines3 Stanza TensorBoard timm Transformers Transformers.js Unity Sentis Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using AllenNLP at Hugging Face allennlp is a NLP library for developing state-of-the-art models on different linguistic tasks. It provides high-level abstractions and APIs for common components and models in modern NLP. It also provides an extensible framework that makes it easy to run and manage NLP experiments. Exploring allennlp in the Hub You can find allennlp models on the Hub by filtering at the left of the models page . All models on the Hub come up with useful features A training metrics tab with automatically hosted TensorBoard traces. Metadata tags that help for discoverability. An interactive widget you can use to play out with the model directly in the browser. An Inference API that allows to make inference requests. Using existing models You can use the Predictor class to load existing models on the Hub. To achieve this, use the from_path method and use the "hf://" prefix with the repository id. Here is an end-to-end example. Copied import allennlp_models from allennlp.predictors.predictor import Predictor predictor = Predictor.from_path( "hf://allenai/bidaf-elmo" ) predictor_input = { "passage" : "My name is Wolfgang and I live in Berlin" , "question" : "Where do I live?" 
} predictions = predictor.predict_json(predictor_input) To get a snippet such as this, you can click Use in AllenNLP at the top right, Sharing your models The first step is to save the model locally. For example, you can use the archive_model method to save the model as a model.tar.gz file. You can then push the zipped model to the Hub. When you train a model with allennlp , the model is automatically serialized so you can use that as a preferred option. Using the AllenNLP CLI To push with the CLI, you can use the allennlp push_to_hf command as seen below. Copied allennlp push_to_hf --repo_name test_allennlp --archive_path model Argument Type Description --repo_name , -n str / Path Name of the repository on the Hub. --organization , -o str Optional name of organization to which the pipeline should be uploaded. --serialization-dir , -s str / Path Path to directory with the serialized model. --archive-path , -a str / Path If instead of a serialization path you’re using a zipped model (e.g. model/model.tar.gz), you can use this flag. --local-repo-path , -l str / Path Local path to the model repository (will be created if it doesn’t exist). Defaults to hub in the current working directory. --commit-message , -c str Commit message to use for update. Defaults to "update repository" . From a Python script The push_to_hf function has the same parameters as the bash script. Copied from allennlp.common.push_to_hf import push_to_hf serialization_dir = "path/to/serialization/directory" push_to_hf( repo_name= "my_repo_name" , serialization_dir=serialization_dir, local_repo_path=self.local_repo_path ) In just a minute, you can get your model in the Hub, try it out directly in the browser, and share it with the rest of the community. All the required metadata will be uploaded for you! Additional resources AllenNLP website . AllenNLP repository . < > Update on GitHub ← Adapters BERTopic → Using AllenNL P at Hugging Face Exploring allennlp in the Hub Using existing models Sharing your models Using the AllenNL P CLI From a Python script Additional resources
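If you prefer a fully self-contained version of the Python upload shown above, the call can be written with explicit placeholder values. This is only a sketch: the repository name, serialization directory, and local repository path are hypothetical and should be replaced with your own.

Copied
from allennlp.common.push_to_hf import push_to_hf

# All values below are placeholders - adjust them to your own model and paths.
push_to_hf(
    repo_name="my_repo_name",
    serialization_dir="path/to/serialization/directory",
    local_repo_path="hub",  # local clone of the Hub repo, created if it doesn't exist
)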
Compressed_Tensors.txt
Compressed Tensors
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Compressed Tensors The compressed-tensors library provides a versatile and efficient way to store and manage compressed model checkpoints. This library supports various quantization and sparsity schemes, making it a unified format for handling different model optimizations like GPTQ, AWQ, SmoothQuant, INT8, FP8, SparseGPT, and more. Some of the supported formats include: dense int-quantized ( sample ): INT8 quantized models float-quantized ( sample ): FP8 quantized models; currently support E4M3 pack-quantized ( sample ): INT4 or INT8 weight-quantized models, packed into INT32. For INT4, the weights have an INT4 range but are stored as INT8 and then packed into INT32. Compressed models can be easily created using llm-compressor . Alternatively models can be created independently and serialized with a compressed tensors config. To find existing models on the Hugging Face Model Hub, search for the compressed-tensors tag . Features: Weight and activation precisions: FP8, INT4, INT8 (for Q/DQ arbitrary precision is allowed for INT) Quantization scales and zero-points strategies: tensor, channel, group, block, token Dynamic per-token activation quantization (or any static strategy) Sparsity in weights (unstructured or semi-structured like 2:4) can be composed with quantization for extreme compression Supports quantization of arbitrary modules, not just Linear modules Targeted support or ignoring of modules by name or class Installation It is recommended to install stable releases of compressed-tensors from PyPI : Copied pip install compressed-tensors Developers who want to experiment with the latest features can also install the package from source: Copied git clone https://github.com/neuralmagic/compressed-tensors cd compressed-tensors pip install -e . Quickstart Model Load Quantized models can be easily loaded for inference as shown below. Only models that have already been quantized can be loaded at the moment. To quantize a model into the compressed-tensors format see llm-compressor . 
Copied
from transformers import AutoModelForCausalLM

# Load the model in compressed-tensors format
ct_model = AutoModelForCausalLM.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")

# Measure memory usage
mem_params = sum([param.nelement() * param.element_size() for param in ct_model.parameters()])
print(f"{mem_params / 2**30:.4f} GB")
# 8.4575 GB

We can see just above that the compressed-tensors FP8 checkpoint of Llama 3.1 8B can be loaded for inference using half of the memory of the unquantized reference checkpoint.

Sample Use Cases - Load and run an FP8 model

Copied
from transformers import AutoModelForCausalLM, AutoTokenizer

prompt = [
    "Hello, my name is",
    "The capital of France is",
    "The future of AI is",
]
model_name = "nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat"

quantized_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Batched prompts of different lengths must be padded to be stacked into one tensor
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
inputs = tokenizer(prompt, return_tensors="pt", padding=True)

generated_ids = quantized_model.generate(**inputs, max_length=50, do_sample=False)
outputs = tokenizer.batch_decode(generated_ids)
print(outputs)
"""
['<|begin_of_text|>Hello, my name is [Name]. I am a [Your Profession/Student] and I am here to learn about the [Course/Program] at [University/Institution]. I am excited to be here and I am looking forward to', '<|begin_of_text|>The capital of France is Paris, which is located in the north-central part of the country. Paris is the most populous city in France and is known for its stunning architecture, art museums, fashion, and romantic atmosphere. The city is home to', "<|begin_of_text|>The future of AI is here, and it's already changing the way we live and work. From virtual assistants to self-driving cars, AI is transforming industries and revolutionizing the way we interact with technology. But what does the future of AI hold"]
"""

The above shows a quick example of running generation with a compressed-tensors model. Currently, once loaded, the model cannot be saved.

Deep dive into a compressed-tensors model checkpoint

In this example we will examine how the compressed-tensors model nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf is defined through its configuration entry and see how this translates to the loaded model representation. First, let us look at the quantization_config of the model. At a glance the number of entries looks overwhelming, but this is because compressed-tensors is a format that allows for flexible expression both during and after model compression. In practice, for checkpoint loading and inference, the configuration can be simplified to exclude the default or empty entries, so we will do that here to focus on the compression that is actually represented.

Copied
"quantization_config": {
  "config_groups": {
    "group_0": {
      "input_activations": { "num_bits": 8, "strategy": "tensor", "type": "float" },
      "targets": ["Linear"],
      "weights": { "num_bits": 8, "strategy": "tensor", "type": "float" }
    }
  },
  "format": "naive-quantized",
  "ignore": ["lm_head"],
  "quant_method": "compressed-tensors",
  "quantization_status": "frozen"
},

We can see from the above configuration that it specifies one config group that includes weight and activation quantization to FP8 with a static per-tensor strategy. It is also worth noting that the ignore list contains an entry to skip quantization of the lm_head module, so that module should be untouched in the checkpoint.
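If you prefer to inspect this metadata programmatically rather than reading config.json by hand, a minimal sketch along these lines should work. It assumes that the quantization_config entry from config.json is exposed as a plain dictionary attribute on the loaded config, as is the case for extra configuration keys in Transformers.

Copied
from transformers import AutoConfig

# Load only the configuration (no weights) and inspect the compression metadata.
config = AutoConfig.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")

qconfig = config.quantization_config
print(qconfig["format"])   # expected: "naive-quantized"
print(qconfig["ignore"])   # expected: ["lm_head"]
print(qconfig["config_groups"]["group_0"]["weights"])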
To see the result of the configuration in practice, we can simply use the safetensors viewer on the model card to see the quantized weights, input_scale, and weight_scale for all of the Linear modules in the first model layer (and so on for the rest of the layers). Tensors Shape Precision model.layers.0.input_layernorm.weight [4 096] BF16 model.layers.0.mlp.down_proj.input_scale [1] BF16 model.layers.0.mlp.down_proj.weight [4 096, 14 336] F8_E4M3 model.layers.0.mlp.down_proj.weight_scale [1] BF16 model.layers.0.mlp.gate_proj.input_scale [1] BF16 model.layers.0.mlp.gate_proj.weight [14 336, 4 096] F8_E4M3 model.layers.0.mlp.gate_proj.weight_scale [1] BF16 model.layers.0.mlp.up_proj.input_scale [1] BF16 model.layers.0.mlp.up_proj.weight [14 336, 4 096] F8_E4M3 model.layers.0.mlp.up_proj.weight_scale [1] BF16 model.layers.0.post_attention_layernorm.weight [4 096] BF16 model.layers.0.self_attn.k_proj.input_scale [1] BF16 model.layers.0.self_attn.k_proj.weight [1 024, 4 096] F8_E4M3 model.layers.0.self_attn.k_proj.weight_scale [1] BF16 model.layers.0.self_attn.o_proj.input_scale [1] BF16 model.layers.0.self_attn.o_proj.weight [4 096, 4 096] F8_E4M3 model.layers.0.self_attn.o_proj.weight_scale [1] BF16 model.layers.0.self_attn.q_proj.input_scale [1] BF16 model.layers.0.self_attn.q_proj.weight [4 096, 4 096] F8_E4M3 model.layers.0.self_attn.q_proj.weight_scale [1] BF16 model.layers.0.self_attn.v_proj.input_scale [1] BF16 model.layers.0.self_attn.v_proj.weight [1 024, 4 096] F8_E4M3 model.layers.0.self_attn.v_proj.weight_scale [1] BF16 When we load the model with the compressed-tensors HFQuantizer integration, we can see that all of the Linear modules that are specified within the quantization configuration have been replaced by CompressedLinear modules that manage the compressed weights and forward pass for inference. Note that the lm_head mentioned before in the ignore list is still kept as an unquantized Linear module. 
Copied from transformers import AutoModelForCausalLM ct_model = AutoModelForCausalLM.from_pretrained( "nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf" ) print (ct_model) """ LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(128256, 4096) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): CompressedLinear( in_features=4096, out_features=4096, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (k_proj): CompressedLinear( in_features=4096, out_features=1024, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (v_proj): CompressedLinear( in_features=4096, out_features=1024, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (o_proj): CompressedLinear( in_features=4096, out_features=4096, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): CompressedLinear( in_features=4096, out_features=14336, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (up_proj): CompressedLinear( in_features=4096, out_features=14336, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (down_proj): CompressedLinear( in_features=14336, out_features=4096, bias=False (input_observer): MovingAverageMinMaxObserver() (weight_observer): MovingAverageMinMaxObserver() ) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05) (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05) ) ) (norm): LlamaRMSNorm((4096,), eps=1e-05) (rotary_emb): LlamaRotaryEmbedding() ) (lm_head): Linear(in_features=4096, out_features=128256, bias=False) ) """ < > Update on GitHub ← BitNet Contribute new quantization method → Compressed Tensors Features: Installation Quickstart Model Load Sample Use Cases - Load and run an F P8 model Deep dive into a compressed-tensors model checkpoint
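As a complement to the printout above, a short script can confirm which modules were swapped out and that lm_head stayed a regular Linear layer. This is only a sketch: it matches modules by class name, so it does not need to import compressed-tensors directly, and the class names are taken from the printed model representation above.

Copied
import torch.nn as nn
from transformers import AutoModelForCausalLM

ct_model = AutoModelForCausalLM.from_pretrained("nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf")

# Modules replaced by the compressed-tensors integration
compressed = [name for name, module in ct_model.named_modules() if type(module).__name__ == "CompressedLinear"]
# Plain Linear modules that were left untouched (exact type check excludes subclasses)
plain = [name for name, module in ct_model.named_modules() if type(module) is nn.Linear]

print(len(compressed))  # expected: 7 projections per decoder layer x 32 layers
print(plain)            # expected to contain only 'lm_head'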
Text-to-image.txt
Text-to-image Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Text-to-image Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt . From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. 
Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) Pass a prompt to the pipeline to generate an image: Copied image = pipeline( "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" ).images[ 0 ] image Popular models The most common text-to-image models are Stable Diffusion v1.5 , Stable Diffusion XL (SDXL) , and Kandinsky 2.2 . There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4 , and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) generator = torch.Generator( "cuda" ).manual_seed( 31 ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , generator=generator).images[ 0 ] image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) generator = torch.Generator( "cuda" ).manual_seed( 31 ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , generator=generator).images[ 0 ] image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder" , torch_dtype=torch.float16 ).to( "cuda" ) generator = torch.Generator( "cuda" ).manual_seed( 31 ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , generator=generator).images[ 0 ] image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5 . 
Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image from diffusers.utils import load_image import torch controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11p_sd15_openpose" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) pose_image = load_image( "https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png" ) Pass the controlnet to the AutoPipelineForText2Image , and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , controlnet=controlnet, torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) generator = torch.Generator( "cuda" ).manual_seed( 31 ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , image=pose_image, generator=generator).images[ 0 ] image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , height= 768 , width= 512 ).images[ 0 ] image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. 
Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16 ).to( "cuda" ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , guidance_scale= 3.5 ).images[ 0 ] image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16 ).to( "cuda" ) image = pipeline( prompt= "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , negative_prompt= "ugly, deformed, disfigured, poor details, bad anatomy" , ).images[ 0 ] image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16 ).to( "cuda" ) generator = torch.Generator(device= "cuda" ).manual_seed( 30 ) image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , generator=generator, ).images[ 0 ] image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16 ).to( "cuda" ) image = pipeline( prompt_embeds=prompt_embeds, # generated from Compel negative_prompt_embeds=negative_prompt_embeds, # generated from Compel ).images[ 0 ] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. 
Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it'll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet ! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models.

Optimize

Diffusion models are large, and the iterative nature of denoising an image is computationally expensive. But this doesn't mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load the model weights in half-precision to save GPU memory and increase speed, or offload parts of the model to the CPU and only move them to the GPU when they are needed, to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you're using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more:

Copied
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides.
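As an example of the memory techniques mentioned above, CPU offloading is a one-line change on most pipelines. The snippet below is a minimal sketch of that pattern using the same checkpoint as above; enable_model_cpu_offload() requires the accelerate package to be installed, and you should not also call .to("cuda") when using it.

Copied
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)
# Keep submodels on the CPU and move each one to the GPU only while it is running,
# trading some speed for a much smaller peak GPU memory footprint.
pipeline.enable_model_cpu_offload()

image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]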
Community.txt
Community
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Community This page regroups resources around 🤗 Transformers developed by the community. Community resources: Resource Description Author Hugging Face Transformers Glossary Flashcards A set of flashcards based on the Transformers Docs Glossary that has been put into a form which can be easily learned/revised using Anki an open source, cross platform app specifically designed for long term knowledge retention. See this Introductory video on how to use the flashcards . Darigov Research Community notebooks: Notebook Description Author Fine-tune a pre-trained Transformer to generate lyrics How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model Aleksey Korshuk Train T5 in Tensorflow 2 How to train T5 for any task using Tensorflow 2. 
This notebook demonstrates a Question & Answer task implemented in Tensorflow 2 using SQUAD Muhammad Harris Train T5 on TPU How to train T5 on SQUAD with Transformers and Nlp Suraj Patil Fine-tune T5 for Classification and Multiple Choice How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning Suraj Patil Fine-tune DialoGPT on New Datasets and Languages How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots Nathan Cooper Long Sequence Modeling with Reformer How to train on sequences as long as 500,000 tokens with Reformer Patrick von Platen Fine-tune BART for Summarization How to fine-tune BART for summarization with fastai using blurr Wayde Gilliam Fine-tune a pre-trained Transformer on anyone’s tweets How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model Boris Dayma Optimize 🤗 Hugging Face models with Weights & Biases A complete tutorial showcasing W&B integration with Hugging Face Boris Dayma Pretrain Longformer How to build a “long” version of existing pretrained models Iz Beltagy Fine-tune Longformer for QA How to fine-tune longformer model for QA task Suraj Patil Evaluate Model with 🤗nlp How to evaluate longformer on TriviaQA with nlp Patrick von Platen Fine-tune T5 for Sentiment Span Extraction How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning Lorenzo Ampil Fine-tune DistilBert for Multiclass Classification How to fine-tune DistilBert for multiclass classification with PyTorch Abhishek Kumar Mishra Fine-tune BERT for Multi-label Classification How to fine-tune BERT for multi-label classification using PyTorch Abhishek Kumar Mishra Fine-tune T5 for Summarization How to fine-tune T5 for summarization in PyTorch and track experiments with WandB Abhishek Kumar Mishra Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing Michael Benesty Pretrain Reformer for Masked Language Modeling How to train a Reformer model with bi-directional self-attention layers Patrick von Platen Expand and Fine Tune Sci-BERT How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it. Tanmay Thakur Fine Tune BlenderBotSmall for Summarization using the Trainer API How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API. Tanmay Thakur Fine-tune Electra and interpret with Integrated Gradients How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients Eliza Szczechla fine-tune a non-English GPT-2 Model with Trainer class How to fine-tune a non-English GPT-2 Model with Trainer class Philipp Schmid Fine-tune a DistilBERT Model for Multi Label Classification task How to fine-tune a DistilBERT Model for Multi Label Classification task Dhaval Taunk Fine-tune ALBERT for sentence-pair classification How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task Nadir El Manouzi Fine-tune Roberta for sentiment analysis How to fine-tune a Roberta model for sentiment analysis Dhaval Taunk Evaluating Question Generation Models How accurate are the answers to questions generated by your seq2seq transformer model? 
Pascal Zoleko Classify text with DistilBERT and Tensorflow How to fine-tune DistilBERT for text classification in TensorFlow Peter Bayerle Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail How to warm-start a EncoderDecoderModel with a google-bert/bert-base-uncased checkpoint for summarization on CNN/Dailymail Patrick von Platen Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum How to warm-start a shared EncoderDecoderModel with a FacebookAI/roberta-base checkpoint for summarization on BBC/XSum Patrick von Platen Fine-tune TAPAS on Sequential Question Answering (SQA) How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset Niels Rogge Evaluate TAPAS on Table Fact Checking (TabFact) How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries Niels Rogge Fine-tuning mBART for translation How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation Vasudev Gupta Fine-tune LayoutLM on FUNSD (a form understanding dataset) How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents Niels Rogge Fine-Tune DistilGPT2 and Generate Text How to fine-tune DistilGPT2 and generate text Aakash Tripathi Fine-Tune LED on up to 8K tokens How to fine-tune LED on pubmed for long-range summarization Patrick von Platen Evaluate LED on Arxiv How to effectively evaluate LED on long-range summarization Patrick von Platen Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset) How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification Niels Rogge Wav2Vec2 CTC decoding with GPT2 adjustment How to decode CTC sequence with language model adjustment Eric Lam Fine-tune BART for summarization in two languages with Trainer class How to fine-tune BART for summarization in two languages with Trainer class Eliza Szczechla Evaluate Big Bird on Trivia QA How to evaluate BigBird on long document question answering on Trivia QA Patrick von Platen Create video captions using Wav2Vec2 How to create YouTube captions from any video by transcribing the audio with Wav2Vec Niklas Muennighoff Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning Niels Rogge Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer Niels Rogge Evaluate LUKE on Open Entity, an entity typing dataset How to evaluate LukeForEntityClassification on the Open Entity dataset Ikuya Yamada Evaluate LUKE on TACRED, a relation extraction dataset How to evaluate LukeForEntityPairClassification on the TACRED dataset Ikuya Yamada Evaluate LUKE on CoNLL-2003, an important NER benchmark How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset Ikuya Yamada Evaluate BigBird-Pegasus on PubMed dataset How to evaluate BigBirdPegasusForConditionalGeneration on PubMed dataset Vasudev Gupta Speech Emotion Classification with Wav2Vec2 How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset Mehrdad Farahani Detect objects in an image with DETR How to use a trained DetrForObjectDetection model to detect objects in an image and 
visualize attention Niels Rogge Fine-tune DETR on a custom object detection dataset How to fine-tune DetrForObjectDetection on a custom object detection dataset Niels Rogge Finetune T5 for Named Entity Recognition How to fine-tune T5 on a Named Entity Recognition Task Ogundepo Odunayo Fine-Tuning Open-Source LLM using QLoRA with MLflow and PEFT How to use QLoRA and PEFT to fine-tune an LLM in a memory-efficient way, while using MLflow to manage experiment tracking Yuki Watanabe < > Update on GitHub ← Notebooks with examples Troubleshoot → Community Community resources: Community notebooks:
Interface__TableQuestionAnsweringOutput.txt
Interface: TableQuestionAnsweringOutput

Properties

aggregator
• aggregator : string
The aggregator used to get the answer
Defined in inference/src/tasks/nlp/tableQuestionAnswering.ts:22

answer
• answer : string
The plaintext answer
Defined in inference/src/tasks/nlp/tableQuestionAnswering.ts:26

cells
• cells : string []
A list of coordinates of the cells contents
Defined in inference/src/tasks/nlp/tableQuestionAnswering.ts:30

coordinates
• coordinates : number [][]
A list of coordinates of the cells referenced in the answer
Defined in inference/src/tasks/nlp/tableQuestionAnswering.ts:34
Command_Line_Interface_(CLI).txt
Command Line Interface (CLI) Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Command Line Interface (CLI) Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Command Line Interface (CLI) 🤗 Datasets provides a command line interface (CLI) with useful shell commands to interact with your dataset. You can check the available commands: Copied >>> datasets-cli -- help usage: datasets-cli < command > [<args>] positional arguments: {convert, env , test ,convert_to_parquet} datasets-cli command helpers convert Convert a TensorFlow Datasets dataset to a HuggingFace Datasets dataset. env Print relevant system environment info. test Test dataset implementation. convert_to_parquet Convert dataset to Parquet delete_from_hub Delete dataset config from the Hub optional arguments: -h, -- help show this help message and exit Convert to Parquet Easily convert your Hub script-based dataset to Parquet data-only dataset , so that the dataset viewer will be supported. Copied >>> datasets-cli convert_to_parquet -- help usage: datasets-cli < command > [<args>] convert_to_parquet [-h] [--token TOKEN] [--revision REVISION] [--trust_remote_code] dataset_id positional arguments: dataset_id source dataset ID, e.g. 
USERNAME/DATASET_NAME or ORGANIZATION/DATASET_NAME optional arguments: -h, -- help show this help message and exit --token TOKEN access token to the Hugging Face Hub (defaults to logged-in user 's one) --revision REVISION source revision --trust_remote_code whether to trust the code execution of the load script This command: makes a copy of the script on the “main” branch into a dedicated branch called “script” (if it does not already exist) creates a pull request to the Hub dataset to convert it to Parquet files (and deletes the script from the main branch) If in the future you need to recreate the Parquet files from the “script” branch, pass the --revision script argument. Note that you should pass the --trust_remote_code argument only if you trust the remote code to be executed locally on your machine. For example: Copied >>> datasets-cli convert_to_parquet USERNAME/DATASET_NAME Do not forget that you need to log in first to your Hugging Face account: Copied >>> huggingface-cli login Delete from Hub Delete a dataset configuration from a data-only dataset on the Hub. Copied >>> datasets-cli delete_from_hub -- help usage: datasets-cli < command > [<args>] delete_from_hub [-h] [--token TOKEN] [--revision REVISION] dataset_id config_name positional arguments: dataset_id source dataset ID, e.g. USERNAME/DATASET_NAME or ORGANIZATION/DATASET_NAME config_name config name to delete optional arguments: -h, -- help show this help message and exit --token TOKEN access token to the Hugging Face Hub --revision REVISION source revision For example: Copied >>> datasets-cli delete_from_hub USERNAME/DATASET_NAME CONFIG_NAME Do not forget that you need to log in first to your Hugging Face account: Copied >>> huggingface-cli login < > Update on GitHub ← Search index Troubleshooting → Command Line Interface (CL I) Convert to Parquet Delete from Hub
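If you are scripting these commands, the login step can also be done from Python via the huggingface_hub library instead of the shell. This is a minimal sketch; the token value is a placeholder for your own access token.

Copied
from huggingface_hub import login

# Programmatic equivalent of `huggingface-cli login`; replace the placeholder token.
login(token="hf_xxx")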
Network_Security.txt
Network Security

This feature is part of the Enterprise Plus plan.

Define your organization IP Ranges

You can list the IP addresses of your organization's outbound traffic to apply for higher rate limits and/or to enforce authenticated access to Hugging Face from your corporate network. The outbound IP address ranges are defined in CIDR format, for example 52.219.168.0/24 or 2600:1f69:7400::/40. You can set multiple ranges, one per line.

Higher Rate Limits

Apply for higher rate limits for your organization. Most actions on the Hub have limits; for example, users are limited to creating a certain number of repositories per day. This option allows your organization to apply for higher limits for its members. It also enables higher HTTP rate limits on the Hub API, to unlock large volumes of model or dataset downloads. To activate this option: 1. Toggle on the “Higher Hub rate-limits” option (you need a valid Enterprise Plus subscription for it to take effect). 2. Ensure the Organization IP Ranges are defined. Once defined, higher rate limits will apply to members of your organization whose IPs match the defined ranges.
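Before submitting the ranges, it can be useful to check that your outbound IPs really fall inside them. A minimal sketch using only Python's standard ipaddress module; the two CIDR blocks are the examples quoted above and the test IPs are placeholders:

# Sanity-check that an outbound IP is covered by the CIDR ranges you plan to declare.
import ipaddress

ranges = [ipaddress.ip_network(cidr) for cidr in ("52.219.168.0/24", "2600:1f69:7400::/40")]

def is_covered(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr.version == net.version and addr in net for net in ranges)

print(is_covered("52.219.168.17"))  # True: inside the first range
print(is_covered("8.8.8.8"))        # False: not in any declared range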
Enforce authenticated access to the Hugging Face Hub

This option ensures that, when browsing from your corporate network, only authenticated users belonging to your organization are able to access the Hugging Face Hub. All public pages display a notice when accessed unauthenticated. To activate this option: 1. Toggle on the “Enforce authenticated access to the Hub” option (you need a valid Enterprise Plus subscription for it to take effect). 2. Ensure the Organization IP Ranges are defined.

Content Access Policy

You can also define a fine-grained Content Access Policy by blocking some sections of the Hugging Face Hub. For example, you can prevent your organization's members from accessing Spaces by adding /spaces/* to the blocked URLs. When users of your organization navigate to a page that matches a blocked URL pattern, they are presented with a page explaining that access is restricted. To define Blocked URLs, enter URL patterns, without the domain name, one per line. The Allowed URLs field lets you define exceptions to the blocking rules, for example by allowing a specific URL within a Blocked URLs pattern, e.g. /spaces/meta-llama/*.
Load_adapters_with_🤗_PEFT.txt
Load adapters with 🤗 PEFT Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Load adapters with 🤗 PEFT Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Load adapters with 🤗 PEFT Parameter-Efficient Fine Tuning (PEFT) methods freeze the pretrained model parameters during fine-tuning and add a small number of trainable parameters (the adapters) on top of it. The adapters are trained to learn task-specific information. This approach has been shown to be very memory-efficient with lower compute usage while producing results comparable to a fully fine-tuned model. Adapters trained with PEFT are also usually an order of magnitude smaller than the full model, making it convenient to share, store, and load them. The adapter weights for a OPTForCausalLM model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB. If you’re interested in learning more about the 🤗 PEFT library, check out the documentation . Setup Get started by installing 🤗 PEFT: Copied pip install peft If you want to try out the brand new features, you might be interested in installing the library from source: Copied pip install git+https://github.com/huggingface/peft.git Supported PEFT models 🤗 Transformers natively supports some PEFT methods, meaning you can load adapter weights stored locally or on the Hub and easily run or train them with a few lines of code. The following methods are supported: Low Rank Adapters IA3 AdaLoRA If you want to use other PEFT methods, such as prompt learning or prompt tuning, or learn about the 🤗 PEFT library in general, please refer to the documentation . Load a PEFT adapter To load and use a PEFT adapter model from 🤗 Transformers, make sure the Hub repository or local directory contains an adapter_config.json file and the adapter weights, as shown in the example image above. Then you can load the PEFT adapter model using the AutoModelFor class. For example, to load a PEFT adapter model for causal language modeling: specify the PEFT model id pass it to the AutoModelForCausalLM class Copied from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id) You can load a PEFT adapter with either an AutoModelFor class or the base model class like OPTForCausalLM or LlamaForCausalLM . 
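As a quick illustration of the second option, here is a minimal sketch that loads the same adapter through the base OPTForCausalLM class and runs a short generation; the tokenizer is loaded from the base facebook/opt-350m checkpoint on the assumption that the adapter repository does not ship its own tokenizer files:

from transformers import OPTForCausalLM, AutoTokenizer

peft_model_id = "ybelkada/opt-350m-lora"  # adapter repository used throughout this guide
model = OPTForCausalLM.from_pretrained(peft_model_id)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")  # base model tokenizer
inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))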
You can also load a PEFT adapter by calling the load_adapter method: Copied from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "facebook/opt-350m" peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) Check out the API documentation section below for more details. Load in 8bit or 4bit The bitsandbytes integration supports 8bit and 4bit precision data types, which are useful for loading large models because it saves memory (see the bitsandbytes integration guide to learn more). Add the load_in_8bit or load_in_4bit parameters to from_pretrained() and set device_map="auto" to effectively distribute the model to your hardware: Copied from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id, quantization_config=BitsAndBytesConfig(load_in_8bit= True )) Add a new adapter You can use ~peft.PeftModel.add_adapter to add a new adapter to a model with an existing adapter as long as the new adapter is the same type as the current one. For example, if you have an existing LoRA adapter attached to a model: Copied from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import LoraConfig model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=[ "q_proj" , "k_proj" ], init_lora_weights= False ) model.add_adapter(lora_config, adapter_name= "adapter_1" ) To add a new adapter: Copied # attach new adapter with same config model.add_adapter(lora_config, adapter_name= "adapter_2" ) Now you can use ~peft.PeftModel.set_adapter to set which adapter to use: Copied # use adapter_1 model.set_adapter( "adapter_1" ) output_disabled = model.generate(**inputs) print (tokenizer.decode(output_disabled[ 0 ], skip_special_tokens= True )) # use adapter_2 model.set_adapter( "adapter_2" ) output_enabled = model.generate(**inputs) print (tokenizer.decode(output_enabled[ 0 ], skip_special_tokens= True )) Enable and disable adapters Once you’ve added an adapter to a model, you can enable or disable the adapter module. To enable the adapter module: Copied from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import PeftConfig model_id = "facebook/opt-350m" adapter_model_id = "ybelkada/opt-350m-lora" tokenizer = AutoTokenizer.from_pretrained(model_id) text = "Hello" inputs = tokenizer(text, return_tensors= "pt" ) model = AutoModelForCausalLM.from_pretrained(model_id) peft_config = PeftConfig.from_pretrained(adapter_model_id) # to initiate with random weights peft_config.init_lora_weights = False model.add_adapter(peft_config) model.enable_adapters() output = model.generate(**inputs) To disable the adapter module: Copied model.disable_adapters() output = model.generate(**inputs) Train a PEFT adapter PEFT adapters are supported by the Trainer class so that you can train an adapter for your specific use case. It only requires adding a few more lines of code. For example, to train a LoRA adapter: If you aren’t familiar with fine-tuning a model with Trainer , take a look at the Fine-tune a pretrained model tutorial. Define your adapter configuration with the task type and hyperparameters (see ~peft.LoraConfig for more details about what the hyperparameters do). 
Copied from peft import LoraConfig peft_config = LoraConfig( lora_alpha= 16 , lora_dropout= 0.1 , r= 64 , bias= "none" , task_type= "CAUSAL_LM" , ) Add adapter to the model. Copied model.add_adapter(peft_config) Now you can pass the model to Trainer ! Copied trainer = Trainer(model=model, ...) trainer.train() To save your trained adapter and load it back: Copied model.save_pretrained(save_dir) model = AutoModelForCausalLM.from_pretrained(save_dir) Add additional trainable layers to a PEFT adapter You can also fine-tune additional trainable adapters on top of a model that has adapters attached by passing modules_to_save in your PEFT config. For example, if you want to also fine-tune the lm_head on top of a model with a LoRA adapter: Copied from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import LoraConfig model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=[ "q_proj" , "k_proj" ], modules_to_save=[ "lm_head" ], ) model.add_adapter(lora_config) API docs class transformers.integrations. PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For more details about adapters and injecting them on a transformer-based model, check out the documentation of PEFT library: https://huggingface.co/docs/peft/index Currently supported PEFT methods are all non-prefix tuning methods. Below is the list of supported PEFT methods that anyone can load, train and run with this mixin class: Low Rank Adapters (LoRA): https://huggingface.co/docs/peft/conceptual_guides/lora IA3: https://huggingface.co/docs/peft/conceptual_guides/ia3 AdaLora: https://arxiv.org/abs/2303.10512 Other PEFT models such as prompt tuning, prompt learning are out of scope as these adapters are not “injectable” into a torch module. For using these methods, please refer to the usage guide of PEFT library. With this mixin, if the correct PEFT version is installed, it is possible to: Load an adapter stored on a local path or in a remote Hub repository, and inject it in the model Attach new adapters in the model and train them with Trainer or by your own. Attach multiple adapters and iteratively activate / deactivate them Activate / deactivate all adapters from the model. Get the state_dict of the active adapter. load_adapter < source > ( peft_model_id : typing.Optional[str] = None adapter_name : typing.Optional[str] = None revision : typing.Optional[str] = None token : typing.Optional[str] = None device_map : typing.Optional[str] = 'auto' max_memory : typing.Optional[str] = None offload_folder : typing.Optional[str] = None offload_index : typing.Optional[int] = None peft_config : typing.Dict[str, typing.Any] = None adapter_state_dict : typing.Optional[typing.Dict[str, ForwardRef('torch.Tensor')]] = None low_cpu_mem_usage : bool = False is_trainable : bool = False adapter_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters peft_model_id ( str , optional ) — The identifier of the model to look for on the Hub, or a local path to the saved adapter config file and adapter weights. adapter_name ( str , optional ) — The adapter name to use. If not set, will use the default adapter. revision ( str , optional , defaults to "main" ) — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>" . token ( str , optional ) — Whether to use authentication token to load the remote folder. Useful to load private repositories that are on HuggingFace Hub. You might need to call huggingface-cli login and paste your tokens to cache it. device_map ( str or Dict[str, Union[int, str, torch.device]] or int or torch.device , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. If we only pass the device ( e.g. , "cpu" , "cuda:1" , "mps" , or a GPU ordinal rank like 1 ) on which the model will be allocated, the device map will map the entire model to this device. Passing device_map = 0 means put the whole model on GPU 0. To have Accelerate compute the most optimized device_map automatically, set device_map="auto" . For more information about each option see designing a device map . max_memory ( Dict , optional ) — A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder ( str or os.PathLike , optional ) — If the device_map contains any value "disk" , the folder where we will offload weights. offload_index ( int , optional ) — offload_index argument to be passed to accelerate.dispatch_model method. peft_config ( Dict[str, Any] , optional ) — The configuration of the adapter to add, supported adapters are non-prefix tuning and adaption prompts methods. This argument is used in case users directly pass PEFT state dicts adapter_state_dict ( Dict[str, torch.Tensor] , optional ) — The state dict of the adapter to load. This argument is used in case users directly pass PEFT state dicts low_cpu_mem_usage ( bool , optional , defaults to False ) — Reduce memory usage while loading the PEFT adapter. This should also speed up the loading process. Requires PEFT version 0.13.0 or higher. is_trainable ( bool , optional , defaults to False ) — Whether the adapter should be trainable or not. If False , the adapter will be frozen and can only be used for inference. adapter_kwargs ( Dict[str, Any] , optional ) — Additional keyword arguments passed along to the from_pretrained method of the adapter config and find_adapter_config_file method. Load adapter weights from file or remote Hub folder. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on PEFT official documentation: https://huggingface.co/docs/peft Requires peft as a backend to load the adapter weights. add_adapter < source > ( adapter_config adapter_name : typing.Optional[str] = None ) Parameters adapter_config ( ~peft.PeftConfig ) — The configuration of the adapter to add, supported adapters are non-prefix tuning and adaption prompts methods adapter_name ( str , optional , defaults to "default" ) — The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT official documentation: https://huggingface.co/docs/peft Adds a fresh new adapter to the current model for training purpose. 
If no adapter name is passed, a default name is assigned to the adapter to follow the convention of the PEFT library (in PEFT we use “default” as the default adapter name).

set_adapter < source > ( adapter_name : typing.Union[typing.List[str], str] )
Parameters: adapter_name ( Union[List[str], str] ) — The name of the adapter to set. Can also be a list of strings to set multiple adapters.
If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the official PEFT documentation: https://huggingface.co/docs/peft
Sets a specific adapter by forcing the model to use that adapter and disabling the other adapters.

disable_adapters < source > ( )
If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the official PEFT documentation: https://huggingface.co/docs/peft
Disables all adapters that are attached to the model. This leads to inference with the base model only.

enable_adapters < source > ( )
If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the official PEFT documentation: https://huggingface.co/docs/peft
Enables adapters that are attached to the model.

active_adapters < source > ( )
If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the official PEFT documentation: https://huggingface.co/docs/peft
Gets the current active adapters of the model. In the case of multi-adapter inference (combining multiple adapters for inference), it returns the list of all active adapters so that users can deal with them accordingly. For previous PEFT versions (which do not support multi-adapter inference), module.active_adapter returns a single string.

get_adapter_state_dict < source > ( adapter_name : typing.Optional[str] = None )
Parameters: adapter_name ( str , optional ) — The name of the adapter to get the state dict from. If no name is passed, the active adapter is used.
If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the official PEFT documentation: https://huggingface.co/docs/peft
Gets the adapter state dict, which should only contain the weight tensors of the specified adapter_name adapter. If no adapter_name is passed, the active adapter is used.
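To tie the methods above together, here is a minimal sketch of a multi-adapter workflow; the first adapter repository is the one used throughout this guide, while the second adapter id is a hypothetical placeholder:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Load adapters under explicit names.
model.load_adapter("ybelkada/opt-350m-lora", adapter_name="lora_a")
# model.load_adapter("your-username/another-opt-350m-adapter", adapter_name="lora_b")  # hypothetical id

# Activate one adapter and check which adapters are active.
model.set_adapter("lora_a")
print(model.active_adapters())

# Export only the active adapter's weight tensors.
adapter_weights = model.get_adapter_state_dict()
print(len(adapter_weights), "adapter tensors")

# Fall back to the plain base model when needed.
model.disable_adapters()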
Sentence_Transformers.txt
Sentence Transformers

This task lets you easily train or fine-tune a Sentence Transformers model on your own dataset. AutoTrain supports the following types of sentence transformer fine-tuning:

pair : dataset with two sentences: anchor and positive
pair_class : dataset with two sentences: premise and hypothesis, and a target label
pair_score : dataset with two sentences: sentence1 and sentence2, and a target score
triplet : dataset with three sentences: anchor, positive and negative
qa : dataset with two sentences: query and answer

Data Format

Sentence Transformers fine-tuning accepts data in CSV/JSONL format. You can also use a dataset from the Hugging Face Hub.

pair — For pair training, the data should be in the following format:

anchor,positive
hello,hi
how are you,I am fine
What is your name?,My name is Abhishek
Which is the best programming language?,Python

pair_class — For pair_class training, the data should be in the following format:

premise,hypothesis,label
hello,hi,1
how are you,I am fine,0
What is your name?,My name is Abhishek,1
Which is the best programming language?,Python,1

pair_score — For pair_score training, the data should be in the following format:

sentence1,sentence2,score
hello,hi,0.8
how are you,I am fine,0.2
What is your name?,My name is Abhishek,0.9
Which is the best programming language?,Python,0.7

triplet — For triplet training, the data should be in the following format:

anchor,positive,negative
hello,hi,bye
how are you,I am fine,I am not fine
What is your name?,My name is Abhishek,Whats it to you?
Which is the best programming language?,Python,Javascript

qa — For qa training, the data should be in the following format:

query,answer
hello,hi
how are you,I am fine
What is your name?,My name is Abhishek
Which is the best programming language?,Python

Parameters

class autotrain.trainers.sent_transformers.params.
SentenceTransformersParams < source > ( data_path : str = None model : str = 'microsoft/mpnet-base' lr : float = 3e-05 epochs : int = 3 max_seq_length : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 trainer : str = 'pair_score' sentence1_column : str = 'sentence1' sentence2_column : str = 'sentence2' sentence3_column : typing.Optional[str] = None target_column : typing.Optional[str] = None ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the pre-trained model to use. Default is “microsoft/mpnet-base”. lr (float) — Learning rate for training. Default is 3e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length for the input. Default is 128. batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Proportion of training to perform learning rate warmup. Default is 0.1. gradient_accumulation (int) — Number of steps to accumulate gradients before updating. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler to use. Default is “linear”. weight_decay (float) — Weight decay to apply. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. Default is None. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find the optimal batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). Default is None. save_total_limit (int) — Maximum number of checkpoints to save. Default is 1. token (Optional[str]) — Token for accessing Hugging Face Hub. Default is None. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy to use. Default is “epoch”. username (Optional[str]) — Hugging Face username. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement after which training will be stopped. Default is 5. early_stopping_threshold (float) — Threshold for measuring the new optimum, to qualify as an improvement. Default is 0.01. trainer (str) — Name of the trainer to use. Default is “pair_score”. sentence1_column (str) — Name of the column containing the first sentence. Default is “sentence1”. sentence2_column (str) — Name of the column containing the second sentence. Default is “sentence2”. sentence3_column (Optional[str]) — Name of the column containing the third sentence (if applicable). Default is None. 
target_column (Optional[str]) — Name of the column containing the target variable. Default is None. SentenceTransformersParams is a configuration class for setting up parameters for training sentence transformers.
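As an illustration, here is a minimal sketch that builds a SentenceTransformersParams object for a local pair_score dataset; the data path, score column name and project name are placeholders chosen for the example:

from autotrain.trainers.sent_transformers.params import SentenceTransformersParams

params = SentenceTransformersParams(
    data_path="data/",                       # placeholder: folder containing your CSV/JSONL files
    model="microsoft/mpnet-base",
    trainer="pair_score",
    sentence1_column="sentence1",
    sentence2_column="sentence2",
    target_column="score",                   # placeholder: name of your score column
    lr=3e-5,
    epochs=3,
    batch_size=8,
    project_name="my-sentence-transformer",  # placeholder output directory
)
print(params)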
Advanced_Setup__Instance_Types__Auto_Scaling__Vers.txt
Advanced Setup (Instance Types, Auto Scaling, Versioning) Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Inference Endpoints (dedicated) documentation Advanced Setup (Instance Types, Auto Scaling, Versioning) Inference Endpoints (dedicated) 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Overview 🤗 Inference Endpoints Security & Compliance Supported Tasks API Reference (Swagger) Autoscaling Pricing Help & Support FAQ Guides Access the solution (UI) Create your first Endpoint Send Requests to Endpoints Update your Endpoint Advanced Setup (Instance Types, Auto Scaling, Versioning) Create a Private Endpoint with AWS PrivateLink Add custom Dependencies Create custom Inference Handler Use a custom Container Image Access and read Logs Access and view Metrics Change Organization or Account Pause and Resume your Endpoint Deploying a llama.cpp Container Others Inference Endpoints Version Serialization & Deserialization for Requests Inference Endpoints Container Types Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Advanced Setup (Instance Types, Auto Scaling, Versioning) We have seen how fast and easy it is to deploy an Endpoint in Create your first Endpoint , but that’s not all you can manage. During the creation process and after selecting your Cloud Provider and Region, click on the [Advanced configuration] button to reveal further configuration options for your Endpoint. Instance type 🤗 Inference Endpoints offers a selection of curated CPU and GPU instances. Note: Your Hugging Face account comes with a capacity quota for CPU and GPU instances. To increase your quota or request new instance types, please check with us. Default: CPU-medium Replica autoscaling Set the range (minimum (>=1) and maximum ) of replicas you want your Endpoint to automatically scale within based on utilization. Default: min 1; max 2 Task Select a supported Machine Learning Task , or set to Custom . Custom can/should be used when you are not using a Transformers-based model or when you want to customize the inference pipeline, see Create your own Inference handler . Default: derived from the model repository. Framework For Transformers models, if both PyTorch and TensorFlow weights are available, you can select which model weights to use. This will help reduce the image artifact size and accelerate startups/scaling of your endpoints. Default: PyTorch if available. Revision Create your Endpoint targeting a specific revision commit for its source Hugging Face Model Repository. This allows you to version your endpoint and make sure you are always using the same weights even if you are updating the Model Repository. Default: The most recent commit. Image Allows you to provide a custom container image you want to deploy into an Endpoint. 
These can be public images, e.g. tensorflow/serving:2.7.3, or private images hosted on Docker Hub, AWS ECR, Azure ACR, or Google GCR. See the “Use a custom Container Image” guide for more details.
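The same options can be set programmatically. A hedged sketch using the create_inference_endpoint helper from the huggingface_hub library; the endpoint name, model repository, region and instance size/type are placeholders, and the values actually available depend on your account quota:

from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-endpoint",                 # placeholder endpoint name
    repository="gpt2",             # placeholder model repository
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",            # placeholder region
    instance_size="x2",            # placeholder instance size
    instance_type="intel-icl",     # placeholder instance type
    min_replica=1,                 # replica autoscaling range
    max_replica=2,
    revision="main",               # pin a revision to version the endpoint
)
endpoint.wait()                    # block until the endpoint is running
print(endpoint.url)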
Interface__AudioClassificationOutputValue.txt
Interface: AudioClassificationOutputValue Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: AudioClassificationOutputValue Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: AudioClassificationOutputValue Properties label • label : string The label for the class (model specific) Defined in inference/src/tasks/audio/audioClassification.ts:16 score • score : number A float that represents how likely it is that the audio file belongs to this class. Defined in inference/src/tasks/audio/audioClassification.ts:21 < > Update on GitHub ← InferenceOutputError AudioToAudioOutputValue → Interface: Audio Classification Output Value Properties label Defined in score Defined in
Token_Classification.txt
Token Classification Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AutoTrain documentation Token Classification AutoTrain 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.8.24 v0.7.129 v0.6.48 v0.5.2 EN Getting Started 🤗 AutoTrain How much does it cost? Get help and support Frequently Asked Questions Quickstart Train on Spaces Python SDK Train Locally Config File Tasks LLM Finetuning Text Classification/Regression Extractive QA Sentence Transformer Image Classification / Regression Object Detection Seq2Seq Token Classification Tabular Miscellaneous Understanding Column Mapping AutoTrain API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.8.24 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Token Classification Token classification is the task of classifying each token in a sequence. This can be used for Named Entity Recognition (NER), Part-of-Speech (POS) tagging, and more. Get your data ready in proper format and then with just a few clicks, your state-of-the-art model will be ready to be used in production. Data Format The data should be in the following CSV format: Copied tokens,tags " [ 'I' , 'love' , 'Paris' ] "," [ 'O' , 'O' , 'B-LOC' ] " " [ 'I' , 'live' , 'in' , 'New' , 'York' ] "," [ 'O' , 'O' , 'O' , 'B-LOC' , 'I-LOC' ] " . . . or you can also use JSONL format: Copied { "tokens" : [ "I" , "love" , "Paris" ] , "tags" : [ "O" , "O" , "B-LOC" ] } { "tokens" : [ "I" , "live" , "in" , "New" , "York" ] , "tags" : [ "O" , "O" , "O" , "B-LOC" , "I-LOC" ] } . . . As you can see, we have two columns in the CSV file. One column is the tokens and the other is the tags. Both the columns are stringified lists! The tokens column contains the tokens of the sentence and the tags column contains the tags for each token. If your CSV is huge, you can divide it into multiple CSV files and upload them separately. Please make sure that the column names are the same in all CSV files. One way to divide the CSV file using pandas is as follows: Copied import pandas as pd # Set the chunk size chunk_size = 1000 i = 1 # Open the CSV file and read it in chunks for chunk in pd.read_csv( 'example.csv' , chunksize=chunk_size): # Save each chunk to a new file chunk.to_csv( f'chunk_ {i} .csv' , index= False ) i += 1 Sample dataset from HuggingFace Hub: conll2003 Columns Your CSV/JSONL dataset must have two columns: tokens and tags . Parameters class autotrain.trainers.token_classification.params. 
TokenClassificationParams < source > ( data_path : str = None model : str = 'bert-base-uncased' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None tokens_column : str = 'tokens' tags_column : str = 'tags' logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to use. Default is “bert-base-uncased”. lr (float) — Learning rate. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length. Default is 128. batch_size (int) — Training batch size. Default is 8. warmup_ratio (float) — Warmup proportion. Default is 0.1. gradient_accumulation (int) — Gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Scheduler to use. Default is “linear”. weight_decay (float) — Weight decay. Default is 0.0. max_grad_norm (float) — Maximum gradient norm. Default is 1.0. seed (int) — Random seed. Default is 42. train_split (str) — Name of the training split. Default is “train”. valid_split (Optional[str]) — Name of the validation split. Default is None. tokens_column (str) — Name of the tokens column. Default is “tokens”. tags_column (str) — Name of the tags column. Default is “tags”. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find the batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision setting (fp16, bf16, or None). Default is None. save_total_limit (int) — Total number of checkpoints to save. Default is 1. token (Optional[str]) — Hub token for authentication. Default is None. push_to_hub (bool) — Whether to push the model to the Hugging Face hub. Default is False. eval_strategy (str) — Evaluation strategy. Default is “epoch”. username (Optional[str]) — Hugging Face username. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Patience for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping. Default is 0.01. TokenClassificationParams is a configuration class for token classification training parameters. < > Update on GitHub ← Seq2Seq Tabular → Token Classification Data Format Columns Parameters
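As an illustration, here is a minimal sketch that builds a TokenClassificationParams object for the CSV/JSONL layout shown above; the data path and project name are placeholders:

from autotrain.trainers.token_classification.params import TokenClassificationParams

params = TokenClassificationParams(
    data_path="data/",            # placeholder: folder containing your tokens/tags CSV or JSONL files
    model="bert-base-uncased",
    tokens_column="tokens",
    tags_column="tags",
    lr=5e-5,
    epochs=3,
    max_seq_length=128,
    batch_size=8,
    project_name="my-ner-model",  # placeholder output directory
    push_to_hub=False,
)
print(params)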
Agents__supercharged___Multi_agents__External_tool.txt
Agents, supercharged - Multi-agents, External tools, and more Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Agents, supercharged - Multi-agents, External tools, and more Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Agents, supercharged - Multi-agents, External tools, and more What is an agent? If you’re new to transformers.agents , make sure to first read the main agents documentation . In this page we’re going to highlight several advanced uses of transformers.agents . Multi-agents Multi-agent has been introduced in Microsoft’s framework Autogen . It simply means having several agents working together to solve your task instead of only one. It empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows to achieve efficient specialization. You can easily build hierarchical multi-agent systems with transformers.agents . To do so, encapsulate the agent in a ManagedAgent object. This object needs arguments agent , name , and a description , which will then be embedded in the manager agent’s system prompt to let it know how to call this managed agent, as we also do for tools. Here’s an example of making an agent that managed a specific web search agent using our DuckDuckGoSearchTool : Copied from transformers.agents import ReactCodeAgent, HfApiEngine, DuckDuckGoSearchTool, ManagedAgent llm_engine = HfApiEngine() web_agent = ReactCodeAgent(tools=[DuckDuckGoSearchTool()], llm_engine=llm_engine) managed_web_agent = ManagedAgent( agent=web_agent, name= "web_search" , description= "Runs web searches for you. Give it your query as an argument." ) manager_agent = ReactCodeAgent( tools=[], llm_engine=llm_engine, managed_agents=[managed_web_agent] ) manager_agent.run( "Who is the CEO of Hugging Face?" ) For an in-depth example of an efficient multi-agent implementation, see how we pushed our multi-agent system to the top of the GAIA leaderboard . Advanced tool usage Directly define a tool by subclassing Tool, and share it to the Hub Let’s take again the tool example from main documentation, for which we had implemented a tool decorator. If you need to add variation, like custom attributes for your tool, you can build your tool following the fine-grained method: building a class that inherits from the Tool superclass. 
The custom tool needs: An attribute name , which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let’s name it model_download_counter . An attribute description is used to populate the agent’s system prompt. An inputs attribute, which is a dictionary with keys "type" and "description" . It contains information that helps the Python interpreter make educated choices about the input. An output_type attribute, which specifies the output type. A forward method which contains the inference code to be executed. The types for both inputs and output_type should be amongst Pydantic formats . Copied from transformers import Tool from huggingface_hub import list_models class HFModelDownloadsTool ( Tool ): name = "model_download_counter" description = """ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint.""" inputs = { "task" : { "type" : "string" , "description" : "the task category (such as text-classification, depth-estimation, etc)" , } } output_type = "string" def forward ( self, task: str ): model = next ( iter (list_models( filter =task, sort= "downloads" , direction=- 1 ))) return model. id Now that the custom HfModelDownloadsTool class is ready, you can save it to a file named model_downloads.py and import it for use. Copied from model_downloads import HFModelDownloadsTool tool = HFModelDownloadsTool() You can also share your custom tool to the Hub by calling push_to_hub() on the tool. Make sure you’ve created a repository for it on the Hub and are using a token with read access. Copied tool.push_to_hub( "{your_username}/hf-model-downloads" ) Load the tool with the ~Tool.load_tool function and pass it to the tools parameter in your agent. Copied from transformers import load_tool, CodeAgent model_download_tool = load_tool( "m-ric/hf-model-downloads" ) Import a Space as a tool 🚀 You can directly import a Space from the Hub as a tool using the Tool.from_space() method! You only need to provide the id of the Space on the Hub, its name, and a description that will help you agent understand what the tool does. Under the hood, this will use gradio-client library to call the Space. For instance, let’s import the FLUX.1-dev Space from the Hub and use it to generate an image. Copied from transformers import Tool image_generation_tool = Tool.from_space( "black-forest-labs/FLUX.1-dev" , name = "image_generator" , description = "Generate an image from a prompt" ) image_generation_tool( "A sunny beach" ) And voilà, here’s your image! 🏖️ Then you can use this tool just like any other tool. For example, let’s improve the prompt a rabbit wearing a space suit and generate an image of it. Copied from transformers import ReactCodeAgent agent = ReactCodeAgent(tools=[image_generation_tool]) agent.run( "Improve this prompt, then generate an image of it." , prompt= 'A rabbit wearing a space suit' ) Copied === Agent thoughts: improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background" Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt. 
>>> Agent is executing the code below: image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background") final_answer(image) How cool is this? 🤩 Use gradio-tools gradio-tools is a powerful library that allows using Hugging Face Spaces as tools. It supports many existing Spaces as well as custom Spaces. Transformers supports gradio_tools with the Tool.from_gradio() method. For example, let’s use the StableDiffusionPromptGeneratorTool from gradio-tools toolkit for improving prompts to generate better images. Import and instantiate the tool, then pass it to the Tool.from_gradio method: Copied from gradio_tools import StableDiffusionPromptGeneratorTool from transformers import Tool, load_tool, CodeAgent gradio_prompt_generator_tool = StableDiffusionPromptGeneratorTool() prompt_generator_tool = Tool.from_gradio(gradio_prompt_generator_tool) gradio-tools require textual inputs and outputs even when working with different modalities like image and audio objects. Image and audio inputs and outputs are currently incompatible. Use LangChain tools We love Langchain and think it has a very compelling suite of tools. To import a tool from LangChain, use the from_langchain() method. Here is how you can use it to recreate the intro’s search result using a LangChain web search tool. This tool will need pip install google-search-results to work properly. Copied from langchain.agents import load_tools from transformers import Tool, ReactCodeAgent search_tool = Tool.from_langchain(load_tools([ "serpapi" ])[ 0 ]) agent = ReactCodeAgent(tools=[search_tool]) agent.run( "How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?" ) Display your agent run in a cool Gradio interface You can leverage gradio.Chatbot to display your agent’s thoughts using stream_to_gradio , here is an example: Copied import gradio as gr from transformers import ( load_tool, ReactCodeAgent, HfApiEngine, stream_to_gradio, ) # Import tool from Hub image_generation_tool = load_tool( "m-ric/text-to-image" ) llm_engine = HfApiEngine( "meta-llama/Meta-Llama-3-70B-Instruct" ) # Initialize the agent with the image generation tool agent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine) def interact_with_agent ( task ): messages = [] messages.append(gr.ChatMessage(role= "user" , content=task)) yield messages for msg in stream_to_gradio(agent, task): messages.append(msg) yield messages + [ gr.ChatMessage(role= "assistant" , content= "⏳ Task not finished yet!" ) ] yield messages with gr.Blocks() as demo: text_input = gr.Textbox(lines= 1 , label= "Chat Message" , value= "Make me a picture of the Statue of Liberty." ) submit = gr.Button( "Run illustrator agent!" ) chatbot = gr.Chatbot( label= "Agent" , type = "messages" , avatar_images=( None , "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png" , ), ) submit.click(interact_with_agent, [text_input], [chatbot]) if __name__ == "__main__" : demo.launch() < > Update on GitHub ← Agents 101 Generation with LLMs → Agents, supercharged - Multi-agents, External tools, and more What is an agent? Multi-agents Advanced tool usage Directly define a tool by subclassing Tool, and share it to the Hub Import a Space as a tool 🚀 Use gradio-tools Use Lang Chain tools Display your agent run in a cool Gradio interface
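To close the loop on the custom tool defined earlier, here is a minimal sketch that hands it to a code agent, mirroring the other examples on this page; the question passed to run() is just an illustration:

from transformers import ReactCodeAgent, load_tool

model_download_tool = load_tool("m-ric/hf-model-downloads")

agent = ReactCodeAgent(tools=[model_download_tool])  # llm_engine left to its default, as in the Space example above
agent.run("Which model has the most downloads for the text-classification task on the Hugging Face Hub?")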
Controlling_image_quality.txt
Controlling image quality The components of a diffusion model, like the UNet and scheduler, can be optimized to improve the quality of generated images leading to better details. These techniques are especially useful if you don’t have the resources to simply use a larger model for inference. You can enable these techniques during inference without any additional training. This guide will show you how to turn these techniques on in your pipeline and how to configure them to improve the quality of your generated images.
Details FreeU improves image details by rebalancing the UNet’s backbone and skip connection weights. The skip connections can cause the model to overlook some of the backbone semantics, which may lead to unnatural image details in the generated image. This technique does not require any additional training and can be applied on the fly during inference for tasks like image-to-image and text-to-video. Use the enable_freeu() method on your pipeline and configure the scaling factors for the backbone ( b1 and b2 ) and skip connections ( s1 and s2 ). The number after each scaling factor corresponds to the stage in the UNet where the factor is applied. Take a look at the FreeU repository for reference hyperparameters for different models. Stable Diffusion v1-5 Stable Diffusion v2-1 Stable Diffusion XL Zeroscope Copied import torch from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, safety_checker= None ).to( "cuda" ) pipeline.enable_freeu(s1= 0.9 , s2= 0.2 , b1= 1.5 , b2= 1.6 ) generator = torch.Generator(device= "cpu" ).manual_seed( 33 ) prompt = "" image = pipeline(prompt, generator=generator).images[ 0 ] image FreeU disabled FreeU enabled Call the disable_freeu() method to disable FreeU. Copied pipeline.disable_freeu()
Tokenizers.txt
tokenizers Tokenizers are used to prepare textual inputs for a model. Example: Create an AutoTokenizer and use it to tokenize a sentence. This will automatically detect the tokenizer type based on the tokenizer class defined in tokenizer.json . Copied import { AutoTokenizer } from '@huggingface/transformers' ; const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/bert-base-uncased' ); const { input_ids } = await tokenizer ( 'I love transformers!'
); // Tensor { // data: BigInt64Array(6) [101n, 1045n, 2293n, 19081n, 999n, 102n], // dims: [1, 6], // type: 'int64', // size: 6, // } tokenizers static .TokenizerModel ⇐ Callable new TokenizerModel(config) instance .vocab : Array.<string> .tokens_to_ids : Map.<string, number> .fuse_unk : boolean ._call(tokens) ⇒ Array.<string> .encode(tokens) ⇒ Array.<string> .convert_tokens_to_ids(tokens) ⇒ Array.<number> .convert_ids_to_tokens(ids) ⇒ Array.<string> static .fromConfig(config, ...args) ⇒ TokenizerModel .PreTrainedTokenizer new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig) instance .added_tokens : Array.<AddedToken> .remove_space : boolean ._call(text, options) ⇒ BatchEncoding ._encode_text(text) ⇒ Array<string> | null ._tokenize_helper(text, options) ⇒ * .tokenize(text, options) ⇒ Array.<string> .encode(text, options) ⇒ Array.<number> .batch_decode(batch, decode_args) ⇒ Array.<string> .decode(token_ids, [decode_args]) ⇒ string .decode_single(token_ids, decode_args) ⇒ string .get_chat_template(options) ⇒ string .apply_chat_template(conversation, options) ⇒ string | Tensor | Array<number> | Array<Array<number>> | BatchEncoding static .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedTokenizer> .BertTokenizer ⇐ PreTrainedTokenizer .AlbertTokenizer ⇐ PreTrainedTokenizer .NllbTokenizer ._build_translation_inputs(raw_inputs, tokenizer_options, generate_kwargs) ⇒ Object .M2M100Tokenizer ._build_translation_inputs(raw_inputs, tokenizer_options, generate_kwargs) ⇒ Object .WhisperTokenizer ⇐ PreTrainedTokenizer ._decode_asr(sequences, options) ⇒ * .decode() : * .MarianTokenizer new MarianTokenizer(tokenizerJSON, tokenizerConfig) ._encode_text(text) ⇒ Array .AutoTokenizer .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedTokenizer> .is_chinese_char(cp) ⇒ boolean inner ~AddedToken new AddedToken(config) ~WordPieceTokenizer ⇐ TokenizerModel new WordPieceTokenizer(config) .tokens_to_ids : Map.<string, number> .unk_token_id : number .unk_token : string .max_input_chars_per_word : number .vocab : Array.<string> .encode(tokens) ⇒ Array.<string> ~Unigram ⇐ TokenizerModel new Unigram(config, moreConfig) .populateNodes(lattice) .tokenize(normalized) ⇒ Array.<string> .encode(tokens) ⇒ Array.<string> ~BPE ⇐ TokenizerModel new BPE(config) .tokens_to_ids : Map.<string, number> .merges : * .config.merges : * .cache : Map.<string, Array<string>> .bpe(token) ⇒ Array.<string> .encode(tokens) ⇒ Array.<string> ~LegacyTokenizerModel new LegacyTokenizerModel(config, moreConfig) .tokens_to_ids : Map.<string, number> ~Normalizer new Normalizer(config) instance .normalize(text) ⇒ string ._call(text) ⇒ string static .fromConfig(config) ⇒ Normalizer ~Replace ⇐ Normalizer .normalize(text) ⇒ string ~NFC ⇐ Normalizer .normalize(text) ⇒ string ~NFKC ⇐ Normalizer .normalize(text) ⇒ string ~NFKD ⇐ Normalizer .normalize(text) ⇒ string ~StripNormalizer .normalize(text) ⇒ string ~StripAccents ⇐ Normalizer .normalize(text) ⇒ string ~Lowercase ⇐ Normalizer .normalize(text) ⇒ string ~Prepend ⇐ Normalizer .normalize(text) ⇒ string ~NormalizerSequence ⇐ Normalizer new NormalizerSequence(config) .normalize(text) ⇒ string ~BertNormalizer ⇐ Normalizer ._tokenize_chinese_chars(text) ⇒ string .stripAccents(text) ⇒ string .normalize(text) ⇒ string ~PreTokenizer ⇐ Callable instance .pre_tokenize_text(text, [options]) ⇒ Array.<string> .pre_tokenize(text, [options]) ⇒ Array.<string> ._call(text, [options]) ⇒ Array.<string> static .fromConfig(config) ⇒ PreTokenizer ~BertPreTokenizer ⇐ 
PreTokenizer new BertPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~ByteLevelPreTokenizer ⇐ PreTokenizer new ByteLevelPreTokenizer(config) .add_prefix_space : boolean .trim_offsets : boolean .use_regex : boolean .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~SplitPreTokenizer ⇐ PreTokenizer new SplitPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~PunctuationPreTokenizer ⇐ PreTokenizer new PunctuationPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~DigitsPreTokenizer ⇐ PreTokenizer new DigitsPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~PostProcessor ⇐ Callable new PostProcessor(config) instance .post_process(tokens, ...args) ⇒ PostProcessedOutput ._call(tokens, ...args) ⇒ PostProcessedOutput static .fromConfig(config) ⇒ PostProcessor ~BertProcessing new BertProcessing(config) .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput ~TemplateProcessing ⇐ PostProcessor new TemplateProcessing(config) .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput ~ByteLevelPostProcessor ⇐ PostProcessor .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput ~PostProcessorSequence new PostProcessorSequence(config) .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput ~Decoder ⇐ Callable new Decoder(config) instance .added_tokens : Array.<AddedToken> ._call(tokens) ⇒ string .decode(tokens) ⇒ string .decode_chain(tokens) ⇒ Array.<string> static .fromConfig(config) ⇒ Decoder ~FuseDecoder .decode_chain() : * ~WordPieceDecoder ⇐ Decoder new WordPieceDecoder(config) .decode_chain() : * ~ByteLevelDecoder ⇐ Decoder new ByteLevelDecoder(config) .convert_tokens_to_string(tokens) ⇒ string .decode_chain() : * ~CTCDecoder .convert_tokens_to_string(tokens) ⇒ string .decode_chain() : * ~DecoderSequence ⇐ Decoder new DecoderSequence(config) .decode_chain() : * ~MetaspacePreTokenizer ⇐ PreTokenizer new MetaspacePreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~MetaspaceDecoder ⇐ Decoder new MetaspaceDecoder(config) .decode_chain() : * ~Precompiled ⇐ Normalizer new Precompiled(config) .normalize(text) ⇒ string ~PreTokenizerSequence ⇐ PreTokenizer new PreTokenizerSequence(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~WhitespacePreTokenizer new WhitespacePreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~WhitespaceSplit ⇐ PreTokenizer new WhitespaceSplit(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~ReplacePreTokenizer new ReplacePreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> ~BYTES_TO_UNICODE ⇒ Object ~loadTokenizer(pretrained_model_name_or_path, options) ⇒ Promise.<Array<any>> ~regexSplit(text, regex) ⇒ Array.<string> ~createPattern(pattern, invert) ⇒ RegExp | null ~objectToMap(obj) ⇒ Map.<string, any> ~prepareTensorForDecode(tensor) ⇒ Array.<number> ~clean_up_tokenization(text) ⇒ string ~remove_accents(text) ⇒ string ~lowercase_and_remove_accent(text) ⇒ string ~whitespace_split(text) ⇒ Array.<string> ~PretrainedTokenizerOptions : Object ~BPENode : Object ~SplitDelimiterBehavior : ’removed’ | ’isolated’ | ’mergedWithPrevious’ | ’mergedWithNext’ | ’contiguous’ ~PostProcessedOutput : Object ~EncodingSingle : Object ~Message : Object ~BatchEncoding : Array<number> | Array<Array<number>> | Tensor tokenizers.TokenizerModel ⇐ <code> Callable </code> Abstract base class for tokenizer models. 
Kind : static class of tokenizers Extends : Callable .TokenizerModel ⇐ Callable new TokenizerModel(config) instance .vocab : Array.<string> .tokens_to_ids : Map.<string, number> .fuse_unk : boolean ._call(tokens) ⇒ Array.<string> .encode(tokens) ⇒ Array.<string> .convert_tokens_to_ids(tokens) ⇒ Array.<number> .convert_ids_to_tokens(ids) ⇒ Array.<string> static .fromConfig(config, ...args) ⇒ TokenizerModel new TokenizerModel(config) Creates a new instance of TokenizerModel. Param Type Description config Object The configuration object for the TokenizerModel. tokenizerModel.vocab : <code> Array. < string > </code> Kind : instance property of TokenizerModel tokenizerModel.tokens_to_ids : <code> Map. < string, number > </code> A mapping of tokens to ids. Kind : instance property of TokenizerModel tokenizerModel.fuse_unk : <code> boolean </code> Whether to fuse unknown tokens when encoding. Defaults to false. Kind : instance property of TokenizerModel tokenizerModel._call(tokens) ⇒ <code> Array. < string > </code> Internal function to call the TokenizerModel instance. Kind : instance method of TokenizerModel Overrides : _call Returns : Array.<string> - The encoded tokens. Param Type Description tokens Array.<string> The tokens to encode. tokenizerModel.encode(tokens) ⇒ <code> Array. < string > </code> Encodes a list of tokens into a list of token IDs. Kind : instance method of TokenizerModel Returns : Array.<string> - The encoded tokens. Throws : Will throw an error if not implemented in a subclass. Param Type Description tokens Array.<string> The tokens to encode. tokenizerModel.convert_tokens_to_ids(tokens) ⇒ <code> Array. < number > </code> Converts a list of tokens into a list of token IDs. Kind : instance method of TokenizerModel Returns : Array.<number> - The converted token IDs. Param Type Description tokens Array.<string> The tokens to convert. tokenizerModel.convert_ids_to_tokens(ids) ⇒ <code> Array. < string > </code> Converts a list of token IDs into a list of tokens. Kind : instance method of TokenizerModel Returns : Array.<string> - The converted tokens. Param Type Description ids Array<number> | Array<bigint> The token IDs to convert. TokenizerModel.fromConfig(config, ...args) ⇒ <code> TokenizerModel </code> Instantiates a new TokenizerModel instance based on the configuration object provided. Kind : static method of TokenizerModel Returns : TokenizerModel - A new instance of a TokenizerModel. Throws : Will throw an error if the TokenizerModel type in the config is not recognized. Param Type Description config Object The configuration object for the TokenizerModel. ...args * Optional arguments to pass to the specific TokenizerModel constructor. 
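Example: Seeing the TokenizerModel at work through a loaded tokenizer. This is an illustrative sketch rather than part of the generated reference: it loads the same Xenova/bert-base-uncased checkpoint used at the top of this page, and it assumes that the subword segmentation and token/id mapping shown in the output come from the tokenizer’s underlying TokenizerModel. The tokenize() , encode() and decode() methods used below are documented in the PreTrainedTokenizer section that follows.

Copied import { AutoTokenizer } from '@huggingface/transformers';

// Load a tokenizer; its underlying TokenizerModel performs the token <-> id mapping.
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');

// Subword segmentation, likely something like [ 'token', '##ization', 'works' ] for a WordPiece vocabulary.
const tokens = tokenizer.tokenize('tokenization works');

// encode() maps text to ids and decode() maps ids back to text using the same vocabulary.
const ids = tokenizer.encode('tokenization works');
const text = tokenizer.decode(ids, { skip_special_tokens: true });
console.log(tokens, ids, text);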
tokenizers.PreTrainedTokenizer Kind : static class of tokenizers .PreTrainedTokenizer new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig) instance .added_tokens : Array.<AddedToken> .remove_space : boolean ._call(text, options) ⇒ BatchEncoding ._encode_text(text) ⇒ Array<string> | null ._tokenize_helper(text, options) ⇒ * .tokenize(text, options) ⇒ Array.<string> .encode(text, options) ⇒ Array.<number> .batch_decode(batch, decode_args) ⇒ Array.<string> .decode(token_ids, [decode_args]) ⇒ string .decode_single(token_ids, decode_args) ⇒ string .get_chat_template(options) ⇒ string .apply_chat_template(conversation, options) ⇒ string | Tensor | Array<number> | Array<Array<number>> | BatchEncoding static .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedTokenizer> new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig) Create a new PreTrainedTokenizer instance. Param Type Description tokenizerJSON Object The JSON of the tokenizer. tokenizerConfig Object The config of the tokenizer. preTrainedTokenizer.added_tokens : <code> Array. < AddedToken > </code> Kind : instance property of PreTrainedTokenizer preTrainedTokenizer.remove_space : <code> boolean </code> Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). Kind : instance property of PreTrainedTokenizer preTrainedTokenizer._call(text, options) ⇒ <code> BatchEncoding </code> Encode/tokenize the given text(s). Kind : instance method of PreTrainedTokenizer Returns : BatchEncoding - Object to be passed to the model. Param Type Default Description text string | Array<string> The text to tokenize. options Object An optional object containing the following properties: [options.text_pair] string | Array<string> null Optional second sequence to be encoded. If set, must be the same type as text. [options.padding] boolean | 'max_length' false Whether to pad the input sequences. [options.add_special_tokens] boolean true Whether or not to add the special tokens associated with the corresponding model. [options.truncation] boolean Whether to truncate the input sequences. [options.max_length] number Maximum length of the returned list and optionally padding length. [options.return_tensor] boolean true Whether to return the results as Tensors or arrays. [options.return_token_type_ids] boolean Whether to return the token type ids. preTrainedTokenizer._encode_text(text) ⇒ <code> Array < string > </code> | <code> null </code> Encodes a single text using the preprocessor pipeline of the tokenizer. Kind : instance method of PreTrainedTokenizer Returns : Array<string> | null - The encoded tokens. Param Type Description text string | null The text to encode. preTrainedTokenizer._tokenize_helper(text, options) ⇒ <code> * </code> Internal helper function to tokenize a text, and optionally a pair of texts. Kind : instance method of PreTrainedTokenizer Returns : * - An object containing the tokens and optionally the token type IDs. Param Type Default Description text string The text to tokenize. options Object An optional object containing the following properties: [options.pair] string null The optional second text to tokenize. [options.add_special_tokens] boolean false Whether or not to add the special tokens associated with the corresponding model. preTrainedTokenizer.tokenize(text, options) ⇒ <code> Array. < string > </code> Converts a string into a sequence of tokens. Kind : instance method of PreTrainedTokenizer Returns : Array.<string> - The list of tokens. 
Param Type Default Description text string The sequence to be encoded. options Object An optional object containing the following properties: [options.pair] string A second sequence to be encoded with the first. [options.add_special_tokens] boolean false Whether or not to add the special tokens associated with the corresponding model. preTrainedTokenizer.encode(text, options) ⇒ <code> Array. < number > </code> Encodes a single text or a pair of texts using the model’s tokenizer. Kind : instance method of PreTrainedTokenizer Returns : Array.<number> - An array of token IDs representing the encoded text(s). Param Type Default Description text string The text to encode. options Object An optional object containing the following properties: [options.text_pair] string null The optional second text to encode. [options.add_special_tokens] boolean true Whether or not to add the special tokens associated with the corresponding model. [options.return_token_type_ids] boolean Whether to return token_type_ids. preTrainedTokenizer.batch_decode(batch, decode_args) ⇒ <code> Array. < string > </code> Decode a batch of tokenized sequences. Kind : instance method of PreTrainedTokenizer Returns : Array.<string> - List of decoded sequences. Param Type Description batch Array<Array<number>> | Tensor List/Tensor of tokenized input sequences. decode_args Object (Optional) Object with decoding arguments. preTrainedTokenizer.decode(token_ids, [decode_args]) ⇒ <code> string </code> Decodes a sequence of token IDs back to a string. Kind : instance method of PreTrainedTokenizer Returns : string - The decoded string. Throws : Error If `token_ids` is not a non-empty array of integers. Param Type Default Description token_ids Array<number> | Array<bigint> | Tensor List/Tensor of token IDs to decode. [decode_args] Object {} [decode_args.skip_special_tokens] boolean false If true, special tokens are removed from the output string. [decode_args.clean_up_tokenization_spaces] boolean true If true, spaces before punctuations and abbreviated forms are removed. preTrainedTokenizer.decode_single(token_ids, decode_args) ⇒ <code> string </code> Decode a single list of token ids to a string. Kind : instance method of PreTrainedTokenizer Returns : string - The decoded string Param Type Default Description token_ids Array<number> | Array<bigint> List of token ids to decode decode_args Object Optional arguments for decoding [decode_args.skip_special_tokens] boolean false Whether to skip special tokens during decoding [decode_args.clean_up_tokenization_spaces] boolean Whether to clean up tokenization spaces during decoding. If null, the value is set to this.decoder.cleanup if it exists, falling back to this.clean_up_tokenization_spaces if it exists, falling back to true . preTrainedTokenizer.get_chat_template(options) ⇒ <code> string </code> Retrieve the chat template string used for tokenizing chat messages. This template is used internally by the apply_chat_template method and can also be used externally to retrieve the model’s chat template for better generation tracking. Kind : instance method of PreTrainedTokenizer Returns : string - The chat template string. Param Type Default Description options Object An optional object containing the following properties: [options.chat_template] string null A Jinja template or the name of a template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model's template will be used by default. 
[options.tools] Array.<Object> A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information. preTrainedTokenizer.apply_chat_template(conversation, options) ⇒ <code> string </code> | <code> Tensor </code> | <code> Array < number > </code> | <code> Array < Array < number > > </code> | <code> BatchEncoding </code> Converts a list of message objects with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting. See here for more information. Example: Applying a chat template to a conversation. Copied import { AutoTokenizer } from "@huggingface/transformers" ; const tokenizer = await AutoTokenizer . from_pretrained ( "Xenova/mistral-tokenizer-v1" ); const chat = [ { "role" : "user" , "content" : "Hello, how are you?" }, { "role" : "assistant" , "content" : "I'm doing great. How can I help you today?" }, { "role" : "user" , "content" : "I'd like to show off how chat templating works!" }, ] const text = tokenizer. apply_chat_template (chat, { tokenize : false }); // "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]" const input_ids = tokenizer. apply_chat_template (chat, { tokenize : true , return_tensor : false }); // [1, 733, 16289, 28793, 22557, 28725, 910, 460, 368, 28804, 733, 28748, 16289, 28793, 28737, 28742, 28719, 2548, 1598, 28723, 1602, 541, 315, 1316, 368, 3154, 28804, 2, 28705, 733, 16289, 28793, 315, 28742, 28715, 737, 298, 1347, 805, 910, 10706, 5752, 1077, 3791, 28808, 733, 28748, 16289, 28793] Kind : instance method of PreTrainedTokenizer Returns : string | Tensor | Array<number> | Array<Array<number>> | BatchEncoding - The tokenized output. Param Type Default Description conversation Array.<Message> A list of message objects with "role" and "content" keys, representing the chat history so far. options Object An optional object containing the following properties: [options.chat_template] string null A Jinja template to use for this conversion. If this is not passed, the model's chat template will be used instead. [options.tools] Array.<Object> A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information. [options.documents] * A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing "title" and "text" keys. Please see the RAG section of the chat templating guide for examples of passing documents with chat templates. [options.add_generation_prompt] boolean false Whether to end the prompt with the token(s) that indicate the start of an assistant message. This is useful when you want to generate a response from the model. 
Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect. [options.tokenize] boolean true Whether to tokenize the output. If false, the output will be a string. [options.padding] boolean false Whether to pad sequences to the maximum length. Has no effect if tokenize is false. [options.truncation] boolean false Whether to truncate sequences to the maximum length. Has no effect if tokenize is false. [options.max_length] number Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is false. If not specified, the tokenizer's max_length attribute will be used as a default. [options.return_tensor] boolean true Whether to return the output as a Tensor or an Array. Has no effect if tokenize is false. [options.return_dict] boolean true Whether to return a dictionary with named outputs. Has no effect if tokenize is false. [options.tokenizer_kwargs] Object {} Additional options to pass to the tokenizer. PreTrainedTokenizer.from_pretrained(pretrained_model_name_or_path, options) ⇒ <code> Promise. < PreTrainedTokenizer > </code> Loads a pre-trained tokenizer from the given pretrained_model_name_or_path . Kind : static method of PreTrainedTokenizer Returns : Promise.<PreTrainedTokenizer> - A new instance of the PreTrainedTokenizer class. Throws : Error Throws an error if the tokenizer.json or tokenizer_config.json files are not found in the `pretrained_model_name_or_path`. Param Type Description pretrained_model_name_or_path string The path to the pre-trained tokenizer. options PretrainedTokenizerOptions Additional options for loading the tokenizer. tokenizers.BertTokenizer ⇐ <code> PreTrainedTokenizer </code> BertTokenizer is a class used to tokenize text for BERT models. Kind : static class of tokenizers Extends : PreTrainedTokenizer tokenizers.AlbertTokenizer ⇐ <code> PreTrainedTokenizer </code> Albert tokenizer Kind : static class of tokenizers Extends : PreTrainedTokenizer tokenizers.NllbTokenizer The NllbTokenizer class is used to tokenize text for NLLB (“No Language Left Behind”) models. No Language Left Behind (NLLB) is a first-of-its-kind, AI breakthrough project that open-sources models capable of delivering high-quality translations directly between any pair of 200+ languages — including low-resource languages like Asturian, Luganda, Urdu and more. It aims to help people communicate with anyone, anywhere, regardless of their language preferences. For more information, check out their paper . For a list of supported languages (along with their language codes), Kind : static class of tokenizers See : https://github.com/facebookresearch/flores/blob/v3.0.0/flores200/README.md#languages-in-flores-200 nllbTokenizer._build_translation_inputs(raw_inputs, tokenizer_options, generate_kwargs) ⇒ <code> Object </code> Helper function to build translation inputs for an NllbTokenizer . Kind : instance method of NllbTokenizer Returns : Object - Object to be passed to the model. Param Type Description raw_inputs string | Array<string> The text to tokenize. tokenizer_options Object Options to be sent to the tokenizer generate_kwargs Object Generation options. tokenizers.M2M100Tokenizer The M2M100Tokenizer class is used to tokenize text for M2M100 (“Many-to-Many”) models. M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this paper and first released in this repository. 
For a list of supported languages (along with their language codes), Kind : static class of tokenizers See : https://huggingface.co/facebook/m2m100_418M#languages-covered m2M100Tokenizer._build_translation_inputs(raw_inputs, tokenizer_options, generate_kwargs) ⇒ <code> Object </code> Helper function to build translation inputs for an M2M100Tokenizer . Kind : instance method of M2M100Tokenizer Returns : Object - Object to be passed to the model. Param Type Description raw_inputs string | Array<string> The text to tokenize. tokenizer_options Object Options to be sent to the tokenizer generate_kwargs Object Generation options. tokenizers.WhisperTokenizer ⇐ <code> PreTrainedTokenizer </code> WhisperTokenizer tokenizer Kind : static class of tokenizers Extends : PreTrainedTokenizer .WhisperTokenizer ⇐ PreTrainedTokenizer ._decode_asr(sequences, options) ⇒ * .decode() : * whisperTokenizer._decode_asr(sequences, options) ⇒ <code> * </code> Decodes automatic speech recognition (ASR) sequences. Kind : instance method of WhisperTokenizer Returns : * - The decoded sequences. Param Type Description sequences * The sequences to decode. options Object The options to use for decoding. whisperTokenizer.decode() : <code> * </code> Kind : instance method of WhisperTokenizer tokenizers.MarianTokenizer Kind : static class of tokenizers Todo This model is not yet supported by Hugging Face’s “fast” tokenizers library ( https://github.com/huggingface/tokenizers ). Therefore, this implementation (which is based on fast tokenizers) may produce slightly inaccurate results. .MarianTokenizer new MarianTokenizer(tokenizerJSON, tokenizerConfig) ._encode_text(text) ⇒ Array new MarianTokenizer(tokenizerJSON, tokenizerConfig) Create a new MarianTokenizer instance. Param Type Description tokenizerJSON Object The JSON of the tokenizer. tokenizerConfig Object The config of the tokenizer. marianTokenizer._encode_text(text) ⇒ <code> Array </code> Encodes a single text. Overriding this method is necessary since the language codes must be removed before encoding with sentencepiece model. Kind : instance method of MarianTokenizer Returns : Array - The encoded tokens. See : https://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/models/marian/tokenization_marian.py#L204-L213 Param Type Description text string | null The text to encode. tokenizers.AutoTokenizer Helper class which is used to instantiate pretrained tokenizers with the from_pretrained function. The chosen tokenizer class is determined by the type specified in the tokenizer config. Kind : static class of tokenizers AutoTokenizer.from_pretrained(pretrained_model_name_or_path, options) ⇒ <code> Promise. < PreTrainedTokenizer > </code> Instantiate one of the tokenizer classes of the library from a pretrained model. The tokenizer class to instantiate is selected based on the tokenizer_class property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible) Kind : static method of AutoTokenizer Returns : Promise.<PreTrainedTokenizer> - A new instance of the PreTrainedTokenizer class. Param Type Description pretrained_model_name_or_path string The name or path of the pretrained model. Can be either: A string, the model id of a pretrained tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased , or namespaced under a user or organization name, like dbmdz/bert-base-german-cased . 
A path to a directory containing tokenizer files, e.g., ./my_model_directory/ . options PretrainedTokenizerOptions Additional options for loading the tokenizer. tokenizers.is_chinese_char(cp) ⇒ <code> boolean </code> Checks whether the given Unicode codepoint represents a CJK (Chinese, Japanese, or Korean) character. A “chinese character” is defined as anything in the CJK Unicode block: https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) Note that the CJK Unicode block is NOT all Japanese and Korean characters, despite its name. The modern Korean Hangul alphabet is a different block, as is Japanese Hiragana and Katakana. Those alphabets are used to write space-separated words, so they are not treated specially and are handled like all other languages. Kind : static method of tokenizers Returns : boolean - True if the codepoint represents a CJK character, false otherwise. Param Type Description cp number | bigint The Unicode codepoint to check. tokenizers~AddedToken Represent a token added by the user on top of the existing Model vocabulary. AddedToken can be configured to specify the behavior they should have in various situations like: Whether they should only match single words Whether to include any whitespace on its left or right Kind : inner class of tokenizers new AddedToken(config) Creates a new instance of AddedToken. Param Type Default Description config Object Added token configuration object. config.content string The content of the added token. config.id number The id of the added token. [config.single_word] boolean false Whether this token must be a single word or can break words. [config.lstrip] boolean false Whether this token should strip whitespaces on its left. [config.rstrip] boolean false Whether this token should strip whitespaces on its right. [config.normalized] boolean false Whether this token should be normalized. [config.special] boolean false Whether this token is special. tokenizers~WordPieceTokenizer ⇐ <code> TokenizerModel </code> A subclass of TokenizerModel that uses WordPiece encoding to encode tokens. Kind : inner class of tokenizers Extends : TokenizerModel ~WordPieceTokenizer ⇐ TokenizerModel new WordPieceTokenizer(config) .tokens_to_ids : Map.<string, number> .unk_token_id : number .unk_token : string .max_input_chars_per_word : number .vocab : Array.<string> .encode(tokens) ⇒ Array.<string> new WordPieceTokenizer(config) Param Type Default Description config Object The configuration object. config.vocab Object A mapping of tokens to ids. config.unk_token string The unknown token string. config.continuing_subword_prefix string The prefix to use for continuing subwords. [config.max_input_chars_per_word] number 100 The maximum number of characters per word. wordPieceTokenizer.tokens_to_ids : <code> Map. < string, number > </code> A mapping of tokens to ids. Kind : instance property of WordPieceTokenizer wordPieceTokenizer.unk_token_id : <code> number </code> The id of the unknown token. Kind : instance property of WordPieceTokenizer wordPieceTokenizer.unk_token : <code> string </code> The unknown token string. Kind : instance property of WordPieceTokenizer wordPieceTokenizer.max_input_chars_per_word : <code> number </code> The maximum number of characters allowed per word. Kind : instance property of WordPieceTokenizer wordPieceTokenizer.vocab : <code> Array. < string > </code> An array of tokens. Kind : instance property of WordPieceTokenizer wordPieceTokenizer.encode(tokens) ⇒ <code> Array. 
< string > </code> Encodes an array of tokens using WordPiece encoding. Kind : instance method of WordPieceTokenizer Returns : Array.<string> - An array of encoded tokens. Param Type Description tokens Array.<string> The tokens to encode. tokenizers~Unigram ⇐ <code> TokenizerModel </code> Class representing a Unigram tokenizer model. Kind : inner class of tokenizers Extends : TokenizerModel ~Unigram ⇐ TokenizerModel new Unigram(config, moreConfig) .populateNodes(lattice) .tokenize(normalized) ⇒ Array.<string> .encode(tokens) ⇒ Array.<string> new Unigram(config, moreConfig) Create a new Unigram tokenizer model. Param Type Description config Object The configuration object for the Unigram model. config.unk_id number The ID of the unknown token config.vocab Array.<Array<any>> A 2D array representing a mapping of tokens to scores. moreConfig Object Additional configuration object for the Unigram model. unigram.populateNodes(lattice) Populates lattice nodes. Kind : instance method of Unigram Param Type Description lattice TokenLattice The token lattice to populate with nodes. unigram.tokenize(normalized) ⇒ <code> Array. < string > </code> Encodes an array of tokens into an array of subtokens using the unigram model. Kind : instance method of Unigram Returns : Array.<string> - An array of subtokens obtained by encoding the input tokens using the unigram model. Param Type Description normalized string The normalized string. unigram.encode(tokens) ⇒ <code> Array. < string > </code> Encodes an array of tokens using Unigram encoding. Kind : instance method of Unigram Returns : Array.<string> - An array of encoded tokens. Param Type Description tokens Array.<string> The tokens to encode. tokenizers~BPE ⇐ <code> TokenizerModel </code> BPE class for encoding text into Byte-Pair-Encoding (BPE) tokens. Kind : inner class of tokenizers Extends : TokenizerModel ~BPE ⇐ TokenizerModel new BPE(config) .tokens_to_ids : Map.<string, number> .merges : * .config.merges : * .cache : Map.<string, Array<string>> .bpe(token) ⇒ Array.<string> .encode(tokens) ⇒ Array.<string> new BPE(config) Create a BPE instance. Param Type Default Description config Object The configuration object for BPE. config.vocab Object A mapping of tokens to ids. config.merges * An array of BPE merges as strings. config.unk_token string The unknown token used for out of vocabulary words. config.end_of_word_suffix string The suffix to place at the end of each word. [config.continuing_subword_suffix] string The suffix to insert between words. [config.byte_fallback] boolean false Whether to use spm byte-fallback trick (defaults to False) [config.ignore_merges] boolean false Whether or not to match tokens with the vocab before using merges. bpE.tokens_to_ids : <code> Map. < string, number > </code> Kind : instance property of BPE bpE.merges : <code> * </code> Kind : instance property of BPE merges.config.merges : <code> * </code> Kind : static property of merges bpE.cache : <code> Map. < string, Array < string > > </code> Kind : instance property of BPE bpE.bpe(token) ⇒ <code> Array. < string > </code> Apply Byte-Pair-Encoding (BPE) to a given token. Efficient heap-based priority queue implementation adapted from https://github.com/belladoreai/llama-tokenizer-js . Kind : instance method of BPE Returns : Array.<string> - The BPE encoded tokens. Param Type Description token string The token to encode. bpE.encode(tokens) ⇒ <code> Array. 
< string > </code> Encodes the input sequence of tokens using the BPE algorithm and returns the resulting subword tokens. Kind : instance method of BPE Returns : Array.<string> - The resulting subword tokens after applying the BPE algorithm to the input sequence of tokens. Param Type Description tokens Array.<string> The input sequence of tokens to encode. tokenizers~LegacyTokenizerModel Legacy tokenizer class for tokenizers with only a vocabulary. Kind : inner class of tokenizers ~LegacyTokenizerModel new LegacyTokenizerModel(config, moreConfig) .tokens_to_ids : Map.<string, number> new LegacyTokenizerModel(config, moreConfig) Create a LegacyTokenizerModel instance. Param Type Description config Object The configuration object for LegacyTokenizerModel. config.vocab Object A (possibly nested) mapping of tokens to ids. moreConfig Object Additional configuration object for the LegacyTokenizerModel model. legacyTokenizerModel.tokens_to_ids : <code> Map. < string, number > </code> Kind : instance property of LegacyTokenizerModel tokenizers~Normalizer A base class for text normalization. Kind : inner abstract class of tokenizers ~Normalizer new Normalizer(config) instance .normalize(text) ⇒ string ._call(text) ⇒ string static .fromConfig(config) ⇒ Normalizer new Normalizer(config) Param Type Description config Object The configuration object for the normalizer. normalizer.normalize(text) ⇒ <code> string </code> Normalize the input text. Kind : instance abstract method of Normalizer Returns : string - The normalized text. Throws : Error If this method is not implemented in a subclass. Param Type Description text string The text to normalize. normalizer._call(text) ⇒ <code> string </code> Alias for Normalizer#normalize . Kind : instance method of Normalizer Returns : string - The normalized text. Param Type Description text string The text to normalize. Normalizer.fromConfig(config) ⇒ <code> Normalizer </code> Factory method for creating normalizers from config objects. Kind : static method of Normalizer Returns : Normalizer - A Normalizer object. Throws : Error If an unknown Normalizer type is specified in the config. Param Type Description config Object The configuration object for the normalizer. tokenizers~Replace ⇐ <code> Normalizer </code> Replace normalizer that replaces occurrences of a pattern with a given string or regular expression. Kind : inner class of tokenizers Extends : Normalizer replace.normalize(text) ⇒ <code> string </code> Normalize the input text by replacing the pattern with the content. Kind : instance method of Replace Returns : string - The normalized text after replacing the pattern with the content. Param Type Description text string The input text to be normalized. tokenizers~NFC ⇐ <code> Normalizer </code> A normalizer that applies Unicode normalization form C (NFC) to the input text. Kind : inner class of tokenizers Extends : Normalizer nfC.normalize(text) ⇒ <code> string </code> Normalize the input text by applying Unicode normalization form C (NFC). Kind : instance method of NFC Returns : string - The normalized text. Param Type Description text string The input text to be normalized. tokenizers~NFKC ⇐ <code> Normalizer </code> NFKC Normalizer. Kind : inner class of tokenizers Extends : Normalizer nfkC.normalize(text) ⇒ <code> string </code> Normalize text using NFKC normalization. Kind : instance method of NFKC Returns : string - The normalized text. Param Type Description text string The text to be normalized. 
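Example: What the Unicode normalization forms do. The NFC and NFKC normalizers above (and the NFKD normalizer below) follow the standard Unicode normalization forms; as a rough, library-independent illustration, JavaScript’s built-in String.prototype.normalize applies the same forms:

Copied // Conceptual sketch using the built-in String.prototype.normalize, not the library's Normalizer classes.
const decomposed = 'e\u0301';              // 'e' followed by a combining acute accent
console.log(decomposed.normalize('NFC'));  // 'é' (canonical composition into a single code point)
console.log('\uFB01'.normalize('NFKC'));   // 'fi' (compatibility composition expands the ligature)
console.log('é'.normalize('NFKD').length); // 2 (compatibility decomposition splits base letter and accent)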
tokenizers~NFKD ⇐ <code> Normalizer </code> NFKD Normalizer. Kind : inner class of tokenizers Extends : Normalizer nfkD.normalize(text) ⇒ <code> string </code> Normalize text using NFKD normalization. Kind : instance method of NFKD Returns : string - The normalized text. Param Type Description text string The text to be normalized. tokenizers~StripNormalizer A normalizer that strips leading and/or trailing whitespace from the input text. Kind : inner class of tokenizers stripNormalizer.normalize(text) ⇒ <code> string </code> Strip leading and/or trailing whitespace from the input text. Kind : instance method of StripNormalizer Returns : string - The normalized text. Param Type Description text string The input text. tokenizers~StripAccents ⇐ <code> Normalizer </code> StripAccents normalizer removes all accents from the text. Kind : inner class of tokenizers Extends : Normalizer stripAccents.normalize(text) ⇒ <code> string </code> Remove all accents from the text. Kind : instance method of StripAccents Returns : string - The normalized text without accents. Param Type Description text string The input text. tokenizers~Lowercase ⇐ <code> Normalizer </code> A Normalizer that lowercases the input string. Kind : inner class of tokenizers Extends : Normalizer lowercase.normalize(text) ⇒ <code> string </code> Lowercases the input string. Kind : instance method of Lowercase Returns : string - The normalized text. Param Type Description text string The text to normalize. tokenizers~Prepend ⇐ <code> Normalizer </code> A Normalizer that prepends a string to the input string. Kind : inner class of tokenizers Extends : Normalizer prepend.normalize(text) ⇒ <code> string </code> Prepends the input string. Kind : instance method of Prepend Returns : string - The normalized text. Param Type Description text string The text to normalize. tokenizers~NormalizerSequence ⇐ <code> Normalizer </code> A Normalizer that applies a sequence of Normalizers. Kind : inner class of tokenizers Extends : Normalizer ~NormalizerSequence ⇐ Normalizer new NormalizerSequence(config) .normalize(text) ⇒ string new NormalizerSequence(config) Create a new instance of NormalizerSequence. Param Type Description config Object The configuration object. config.normalizers Array.<Object> An array of Normalizer configuration objects. normalizerSequence.normalize(text) ⇒ <code> string </code> Apply a sequence of Normalizers to the input text. Kind : instance method of NormalizerSequence Returns : string - The normalized text. Param Type Description text string The text to normalize. tokenizers~BertNormalizer ⇐ <code> Normalizer </code> A class representing a normalizer used in BERT tokenization. Kind : inner class of tokenizers Extends : Normalizer ~BertNormalizer ⇐ Normalizer ._tokenize_chinese_chars(text) ⇒ string .stripAccents(text) ⇒ string .normalize(text) ⇒ string bertNormalizer._tokenize_chinese_chars(text) ⇒ <code> string </code> Adds whitespace around any CJK (Chinese, Japanese, or Korean) character in the input text. Kind : instance method of BertNormalizer Returns : string - The tokenized text with whitespace added around CJK characters. Param Type Description text string The input text to tokenize. bertNormalizer.stripAccents(text) ⇒ <code> string </code> Strips accents from the given text. Kind : instance method of BertNormalizer Returns : string - The text with accents removed. Param Type Description text string The text to strip accents from. 
bertNormalizer.normalize(text) ⇒ <code> string </code> Normalizes the given text based on the configuration. Kind : instance method of BertNormalizer Returns : string - The normalized text. Param Type Description text string The text to normalize. tokenizers~PreTokenizer ⇐ <code> Callable </code> A callable class representing a pre-tokenizer used in tokenization. Subclasses should implement the pre_tokenize_text method to define the specific pre-tokenization logic. Kind : inner class of tokenizers Extends : Callable ~PreTokenizer ⇐ Callable instance .pre_tokenize_text(text, [options]) ⇒ Array.<string> .pre_tokenize(text, [options]) ⇒ Array.<string> ._call(text, [options]) ⇒ Array.<string> static .fromConfig(config) ⇒ PreTokenizer preTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Method that should be implemented by subclasses to define the specific pre-tokenization logic. Kind : instance abstract method of PreTokenizer Returns : Array.<string> - The pre-tokenized text. Throws : Error If the method is not implemented in the subclass. Param Type Description text string The text to pre-tokenize. [options] Object Additional options for the pre-tokenization logic. preTokenizer.pre_tokenize(text, [options]) ⇒ <code> Array. < string > </code> Tokenizes the given text into pre-tokens. Kind : instance method of PreTokenizer Returns : Array.<string> - An array of pre-tokens. Param Type Description text string | Array<string> The text or array of texts to pre-tokenize. [options] Object Additional options for the pre-tokenization logic. preTokenizer._call(text, [options]) ⇒ <code> Array. < string > </code> Alias for PreTokenizer#pre_tokenize . Kind : instance method of PreTokenizer Overrides : _call Returns : Array.<string> - An array of pre-tokens. Param Type Description text string | Array<string> The text or array of texts to pre-tokenize. [options] Object Additional options for the pre-tokenization logic. PreTokenizer.fromConfig(config) ⇒ <code> PreTokenizer </code> Factory method that returns an instance of a subclass of PreTokenizer based on the provided configuration. Kind : static method of PreTokenizer Returns : PreTokenizer - An instance of a subclass of PreTokenizer . Throws : Error If the provided configuration object does not correspond to any known pre-tokenizer. Param Type Description config Object A configuration object for the pre-tokenizer. tokenizers~BertPreTokenizer ⇐ <code> PreTokenizer </code> Kind : inner class of tokenizers Extends : PreTokenizer ~BertPreTokenizer ⇐ PreTokenizer new BertPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new BertPreTokenizer(config) A PreTokenizer that splits text into wordpieces using a basic tokenization scheme similar to that used in the original implementation of BERT. Param Type Description config Object The configuration object. bertPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Tokenizes a single text using the BERT pre-tokenization scheme. Kind : instance method of BertPreTokenizer Returns : Array.<string> - An array of tokens. Param Type Description text string The text to tokenize. [options] Object Additional options for the pre-tokenization logic. tokenizers~ByteLevelPreTokenizer ⇐ <code> PreTokenizer </code> A pre-tokenizer that splits text into Byte-Pair-Encoding (BPE) subwords. 
Kind : inner class of tokenizers Extends : PreTokenizer ~ByteLevelPreTokenizer ⇐ PreTokenizer new ByteLevelPreTokenizer(config) .add_prefix_space : boolean .trim_offsets : boolean .use_regex : boolean .pre_tokenize_text(text, [options]) ⇒ Array.<string> new ByteLevelPreTokenizer(config) Creates a new instance of the ByteLevelPreTokenizer class. Param Type Description config Object The configuration object. byteLevelPreTokenizer.add_prefix_space : <code> boolean </code> Whether to add a leading space to the first word. This allows the leading word to be treated just like any other word. Kind : instance property of ByteLevelPreTokenizer byteLevelPreTokenizer.trim_offsets : <code> boolean </code> Whether the post-processing step should trim offsets to avoid including whitespaces. Kind : instance property of ByteLevelPreTokenizer Todo Use this in the pretokenization step. byteLevelPreTokenizer.use_regex : <code> boolean </code> Whether to use the standard GPT2 regex for whitespace splitting. Set it to False if you want to use your own splitting. Defaults to true. Kind : instance property of ByteLevelPreTokenizer byteLevelPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Tokenizes a single piece of text using byte-level tokenization. Kind : instance method of ByteLevelPreTokenizer Returns : Array.<string> - An array of tokens. Param Type Description text string The text to tokenize. [options] Object Additional options for the pre-tokenization logic. tokenizers~SplitPreTokenizer ⇐ <code> PreTokenizer </code> Splits text using a given pattern. Kind : inner class of tokenizers Extends : PreTokenizer ~SplitPreTokenizer ⇐ PreTokenizer new SplitPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new SplitPreTokenizer(config) Param Type Description config Object The configuration options for the pre-tokenizer. config.pattern Object The pattern used to split the text. Can be a string or a regex object. config.pattern.String string | undefined The string to use for splitting. Only defined if the pattern is a string. config.pattern.Regex string | undefined The regex to use for splitting. Only defined if the pattern is a regex. config.behavior SplitDelimiterBehavior The behavior to use when splitting. config.invert boolean Whether to split (invert=false) or match (invert=true) the pattern. splitPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Tokenizes text by splitting it using the given pattern. Kind : instance method of SplitPreTokenizer Returns : Array.<string> - An array of tokens. Param Type Description text string The text to tokenize. [options] Object Additional options for the pre-tokenization logic. tokenizers~PunctuationPreTokenizer ⇐ <code> PreTokenizer </code> Splits text based on punctuation. Kind : inner class of tokenizers Extends : PreTokenizer ~PunctuationPreTokenizer ⇐ PreTokenizer new PunctuationPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new PunctuationPreTokenizer(config) Param Type Description config Object The configuration options for the pre-tokenizer. config.behavior SplitDelimiterBehavior The behavior to use when splitting. punctuationPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Tokenizes text by splitting it using the given pattern. Kind : instance method of PunctuationPreTokenizer Returns : Array.<string> - An array of tokens. Param Type Description text string The text to tokenize.
[options] Object Additional options for the pre-tokenization logic. tokenizers~DigitsPreTokenizer ⇐ <code> PreTokenizer </code> Splits text based on digits. Kind : inner class of tokenizers Extends : PreTokenizer ~DigitsPreTokenizer ⇐ PreTokenizer new DigitsPreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new DigitsPreTokenizer(config) Param Type Description config Object The configuration options for the pre-tokenizer. config.individual_digits boolean Whether to split on individual digits. digitsPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Tokenizes text by splitting it using the given pattern. Kind : instance method of DigitsPreTokenizer Returns : Array.<string> - An array of tokens. Param Type Description text string The text to tokenize. [options] Object Additional options for the pre-tokenization logic. tokenizers~PostProcessor ⇐ <code> Callable </code> Kind : inner class of tokenizers Extends : Callable ~PostProcessor ⇐ Callable new PostProcessor(config) instance .post_process(tokens, ...args) ⇒ PostProcessedOutput ._call(tokens, ...args) ⇒ PostProcessedOutput static .fromConfig(config) ⇒ PostProcessor new PostProcessor(config) Param Type Description config Object The configuration for the post-processor. postProcessor.post_process(tokens, ...args) ⇒ <code> PostProcessedOutput </code> Method to be implemented in subclass to apply post-processing on the given tokens. Kind : instance method of PostProcessor Returns : PostProcessedOutput - The post-processed tokens. Throws : Error If the method is not implemented in subclass. Param Type Description tokens Array The input tokens to be post-processed. ...args * Additional arguments required by the post-processing logic. postProcessor._call(tokens, ...args) ⇒ <code> PostProcessedOutput </code> Alias for PostProcessor#post_process . Kind : instance method of PostProcessor Overrides : _call Returns : PostProcessedOutput - The post-processed tokens. Param Type Description tokens Array The text or array of texts to post-process. ...args * Additional arguments required by the post-processing logic. PostProcessor.fromConfig(config) ⇒ <code> PostProcessor </code> Factory method to create a PostProcessor object from a configuration object. Kind : static method of PostProcessor Returns : PostProcessor - A PostProcessor object created from the given configuration. Throws : Error If an unknown PostProcessor type is encountered. Param Type Description config Object Configuration object representing a PostProcessor. tokenizers~BertProcessing A post-processor that adds special tokens to the beginning and end of the input. Kind : inner class of tokenizers ~BertProcessing new BertProcessing(config) .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput new BertProcessing(config) Param Type Description config Object The configuration for the post-processor. config.cls Array.<string> The special tokens to add to the beginning of the input. config.sep Array.<string> The special tokens to add to the end of the input. bertProcessing.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code> Adds the special tokens to the beginning and end of the input. Kind : instance method of BertProcessing Returns : PostProcessedOutput - The post-processed tokens with the special tokens added to the beginning and end. Param Type Default Description tokens Array.<string> The input tokens. [tokens_pair] Array.<string> An optional second set of input tokens. 
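Example: Observing a post-processor through the public tokenizer API. A quick way to see a BertProcessing-style post-processor in action, without constructing it directly, is to compare encodings with and without special tokens using the PreTrainedTokenizer methods documented earlier. This is a hedged sketch: the exact special tokens depend on the checkpoint, and a BERT-style tokenizer is assumed here.

Copied import { AutoTokenizer } from '@huggingface/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');

// add_special_tokens defaults to true, so the post-processor wraps the sequence.
const withSpecial = tokenizer.encode('hello world');
const withoutSpecial = tokenizer.encode('hello world', { add_special_tokens: false });

console.log(tokenizer.decode(withSpecial));    // expected to read "[CLS] hello world [SEP]" for a BERT-style checkpoint
console.log(tokenizer.decode(withoutSpecial)); // "hello world"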
tokenizers~TemplateProcessing ⇐ <code> PostProcessor </code> Post processor that replaces special tokens in a template with actual tokens. Kind : inner class of tokenizers Extends : PostProcessor ~TemplateProcessing ⇐ PostProcessor new TemplateProcessing(config) .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput new TemplateProcessing(config) Creates a new instance of TemplateProcessing . Param Type Description config Object The configuration options for the post processor. config.single Array The template for a single sequence of tokens. config.pair Array The template for a pair of sequences of tokens. templateProcessing.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code> Replaces special tokens in the template with actual tokens. Kind : instance method of TemplateProcessing Returns : PostProcessedOutput - An object containing the list of tokens with the special tokens replaced with actual tokens. Param Type Default Description tokens Array.<string> The list of tokens for the first sequence. [tokens_pair] Array.<string> The list of tokens for the second sequence (optional). tokenizers~ByteLevelPostProcessor ⇐ <code> PostProcessor </code> A PostProcessor that returns the given tokens as is. Kind : inner class of tokenizers Extends : PostProcessor byteLevelPostProcessor.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code> Post process the given tokens. Kind : instance method of ByteLevelPostProcessor Returns : PostProcessedOutput - An object containing the post-processed tokens. Param Type Default Description tokens Array.<string> The list of tokens for the first sequence. [tokens_pair] Array.<string> The list of tokens for the second sequence (optional). tokenizers~PostProcessorSequence A post-processor that applies multiple post-processors in sequence. Kind : inner class of tokenizers ~PostProcessorSequence new PostProcessorSequence(config) .post_process(tokens, [tokens_pair]) ⇒ PostProcessedOutput new PostProcessorSequence(config) Creates a new instance of PostProcessorSequence. Param Type Description config Object The configuration object. config.processors Array.<Object> The list of post-processors to apply. postProcessorSequence.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code> Post process the given tokens. Kind : instance method of PostProcessorSequence Returns : PostProcessedOutput - An object containing the post-processed tokens. Param Type Default Description tokens Array.<string> The list of tokens for the first sequence. [tokens_pair] Array.<string> The list of tokens for the second sequence (optional). tokenizers~Decoder ⇐ <code> Callable </code> The base class for token decoders. Kind : inner class of tokenizers Extends : Callable ~Decoder ⇐ Callable new Decoder(config) instance .added_tokens : Array.<AddedToken> ._call(tokens) ⇒ string .decode(tokens) ⇒ string .decode_chain(tokens) ⇒ Array.<string> static .fromConfig(config) ⇒ Decoder new Decoder(config) Creates an instance of Decoder . Param Type Description config Object The configuration object. decoder.added_tokens : <code> Array. < AddedToken > </code> Kind : instance property of Decoder decoder._call(tokens) ⇒ <code> string </code> Calls the decode method. Kind : instance method of Decoder Overrides : _call Returns : string - The decoded string. Param Type Description tokens Array.<string> The list of tokens. decoder.decode(tokens) ⇒ <code> string </code> Decodes a list of tokens. 
Kind : instance method of Decoder Returns : string - The decoded string. Param Type Description tokens Array.<string> The list of tokens. decoder.decode_chain(tokens) ⇒ <code> Array. < string > </code> Apply the decoder to a list of tokens. Kind : instance method of Decoder Returns : Array.<string> - The decoded list of tokens. Throws : Error If the `decode_chain` method is not implemented in the subclass. Param Type Description tokens Array.<string> The list of tokens. Decoder.fromConfig(config) ⇒ <code> Decoder </code> Creates a decoder instance based on the provided configuration. Kind : static method of Decoder Returns : Decoder - A decoder instance. Throws : Error If an unknown decoder type is provided. Param Type Description config Object The configuration object. tokenizers~FuseDecoder Fuse simply fuses all tokens into one big string. It’s usually the last decoding step anyway, but this decoder exists in case some decoders need to happen after that step. Kind : inner class of tokenizers fuseDecoder.decode_chain() : <code> * </code> Kind : instance method of FuseDecoder tokenizers~WordPieceDecoder ⇐ <code> Decoder </code> A decoder that decodes a list of WordPiece tokens into a single string. Kind : inner class of tokenizers Extends : Decoder ~WordPieceDecoder ⇐ Decoder new WordPieceDecoder(config) .decode_chain() : * new WordPieceDecoder(config) Creates a new instance of WordPieceDecoder. Param Type Description config Object The configuration object. config.prefix string The prefix used for WordPiece encoding. config.cleanup boolean Whether to cleanup the decoded string. wordPieceDecoder.decode_chain() : <code> * </code> Kind : instance method of WordPieceDecoder tokenizers~ByteLevelDecoder ⇐ <code> Decoder </code> Byte-level decoder for tokenization output. Inherits from the Decoder class. Kind : inner class of tokenizers Extends : Decoder ~ByteLevelDecoder ⇐ Decoder new ByteLevelDecoder(config) .convert_tokens_to_string(tokens) ⇒ string .decode_chain() : * new ByteLevelDecoder(config) Create a ByteLevelDecoder object. Param Type Description config Object Configuration object. byteLevelDecoder.convert_tokens_to_string(tokens) ⇒ <code> string </code> Convert an array of tokens to string by decoding each byte. Kind : instance method of ByteLevelDecoder Returns : string - The decoded string. Param Type Description tokens Array.<string> Array of tokens to be decoded. byteLevelDecoder.decode_chain() : <code> * </code> Kind : instance method of ByteLevelDecoder tokenizers~CTCDecoder The CTC (Connectionist Temporal Classification) decoder. See https://github.com/huggingface/tokenizers/blob/bb38f390a61883fc2f29d659af696f428d1cda6b/tokenizers/src/decoders/ctc.rs Kind : inner class of tokenizers ~CTCDecoder .convert_tokens_to_string(tokens) ⇒ string .decode_chain() : * ctcDecoder.convert_tokens_to_string(tokens) ⇒ <code> string </code> Converts connectionist-temporal-classification (CTC) output tokens into a single string. Kind : instance method of CTCDecoder Returns : string - The decoded string. Param Type Description tokens Array.<string> Array of tokens to be decoded. ctcDecoder.decode_chain() : <code> * </code> Kind : instance method of CTCDecoder tokenizers~DecoderSequence ⇐ <code> Decoder </code> Apply a sequence of decoders. Kind : inner class of tokenizers Extends : Decoder ~DecoderSequence ⇐ Decoder new DecoderSequence(config) .decode_chain() : * new DecoderSequence(config) Creates a new instance of DecoderSequence. Param Type Description config Object The configuration object.
config.decoders Array.<Object> The list of decoders to apply. decoderSequence.decode_chain() : <code> * </code> Kind : instance method of DecoderSequence tokenizers~MetaspacePreTokenizer ⇐ <code> PreTokenizer </code> This PreTokenizer replaces spaces with the given replacement character, adds a prefix space if requested, and returns a list of tokens. Kind : inner class of tokenizers Extends : PreTokenizer ~MetaspacePreTokenizer ⇐ PreTokenizer new MetaspacePreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new MetaspacePreTokenizer(config) Param Type Default Description config Object The configuration object for the MetaspacePreTokenizer. config.add_prefix_space boolean Whether to add a prefix space to the first token. config.replacement string The character to replace spaces with. [config.str_rep] string "config.replacement" An optional string representation of the replacement character. [config.prepend_scheme] 'first' | 'never' | 'always' 'always' The metaspace prepending scheme. metaspacePreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> This method takes a string, replaces spaces with the replacement character, adds a prefix space if requested, and returns a new list of tokens. Kind : instance method of MetaspacePreTokenizer Returns : Array.<string> - A new list of pre-tokenized tokens. Param Type Description text string The text to pre-tokenize. [options] Object The options for the pre-tokenization. [options.section_index] number The index of the section to pre-tokenize. tokenizers~MetaspaceDecoder ⇐ <code> Decoder </code> MetaspaceDecoder class extends the Decoder class and decodes Metaspace tokenization. Kind : inner class of tokenizers Extends : Decoder ~MetaspaceDecoder ⇐ Decoder new MetaspaceDecoder(config) .decode_chain() : * new MetaspaceDecoder(config) Constructs a new MetaspaceDecoder object. Param Type Description config Object The configuration object for the MetaspaceDecoder. config.add_prefix_space boolean Whether to add a prefix space to the decoded string. config.replacement string The string to replace spaces with. metaspaceDecoder.decode_chain() : <code> * </code> Kind : instance method of MetaspaceDecoder tokenizers~Precompiled ⇐ <code> Normalizer </code> A normalizer that applies a precompiled charsmap. This is useful for applying complex normalizations in C++ and exposing them to JavaScript. Kind : inner class of tokenizers Extends : Normalizer ~Precompiled ⇐ Normalizer new Precompiled(config) .normalize(text) ⇒ string new Precompiled(config) Create a new instance of Precompiled normalizer. Param Type Description config Object The configuration object for the Precompiled normalizer. config.precompiled_charsmap Object The precompiled charsmap object. precompiled.normalize(text) ⇒ <code> string </code> Normalizes the given text by applying the precompiled charsmap. Kind : instance method of Precompiled Returns : string - The normalized text. Param Type Description text string The text to normalize. tokenizers~PreTokenizerSequence ⇐ <code> PreTokenizer </code> A pre-tokenizer that applies a sequence of pre-tokenizers to the input text. Kind : inner class of tokenizers Extends : PreTokenizer ~PreTokenizerSequence ⇐ PreTokenizer new PreTokenizerSequence(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new PreTokenizerSequence(config) Creates an instance of PreTokenizerSequence. Param Type Description config Object The configuration object for the pre-tokenizer sequence. 
config.pretokenizers Array.<Object> An array of pre-tokenizer configurations. preTokenizerSequence.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Applies each pre-tokenizer in the sequence to the input text in turn. Kind : instance method of PreTokenizerSequence Returns : Array.<string> - The pre-tokenized text. Param Type Description text string The text to pre-tokenize. [options] Object Additional options for the pre-tokenization logic. tokenizers~WhitespacePreTokenizer Splits on word boundaries (using the following regular expression: \w+|[^\w\s]+ ). Kind : inner class of tokenizers ~WhitespacePreTokenizer new WhitespacePreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new WhitespacePreTokenizer(config) Creates an instance of WhitespacePreTokenizer. Param Type Description config Object The configuration object for the pre-tokenizer. whitespacePreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Pre-tokenizes the input text by splitting it on word boundaries. Kind : instance method of WhitespacePreTokenizer Returns : Array.<string> - An array of tokens produced by splitting the input text on whitespace. Param Type Description text string The text to be pre-tokenized. [options] Object Additional options for the pre-tokenization logic. tokenizers~WhitespaceSplit ⇐ <code> PreTokenizer </code> Splits a string of text by whitespace characters into individual tokens. Kind : inner class of tokenizers Extends : PreTokenizer ~WhitespaceSplit ⇐ PreTokenizer new WhitespaceSplit(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new WhitespaceSplit(config) Creates an instance of WhitespaceSplit. Param Type Description config Object The configuration object for the pre-tokenizer. whitespaceSplit.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Pre-tokenizes the input text by splitting it on whitespace characters. Kind : instance method of WhitespaceSplit Returns : Array.<string> - An array of tokens produced by splitting the input text on whitespace. Param Type Description text string The text to be pre-tokenized. [options] Object Additional options for the pre-tokenization logic. tokenizers~ReplacePreTokenizer Kind : inner class of tokenizers ~ReplacePreTokenizer new ReplacePreTokenizer(config) .pre_tokenize_text(text, [options]) ⇒ Array.<string> new ReplacePreTokenizer(config) Param Type Description config Object The configuration options for the pre-tokenizer. config.pattern Object The pattern used to split the text. Can be a string or a regex object. config.content string What to replace the pattern with. replacePreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code> Pre-tokenizes the input text by replacing certain characters. Kind : instance method of ReplacePreTokenizer Returns : Array.<string> - An array of tokens produced by replacing certain characters. Param Type Description text string The text to be pre-tokenized. [options] Object Additional options for the pre-tokenization logic. tokenizers~BYTES_TO_UNICODE ⇒ <code> Object </code> Returns list of utf-8 byte and a mapping to unicode strings. Specifically avoids mapping to whitespace/control characters the BPE code barfs on. Kind : inner constant of tokenizers Returns : Object - Object with utf-8 byte keys and unicode string values. tokenizers~loadTokenizer(pretrained_model_name_or_path, options) ⇒ <code> Promise. < Array < any > > </code> Loads a tokenizer from the specified path. 
Kind : inner method of tokenizers Returns : Promise.<Array<any>> - A promise that resolves with information about the loaded tokenizer. Param Type Description pretrained_model_name_or_path string The path to the tokenizer directory. options PretrainedTokenizerOptions Additional options for loading the tokenizer. tokenizers~regexSplit(text, regex) ⇒ <code> Array. < string > </code> Helper function to split a string on a regex, but keep the delimiters. This is required, because the JavaScript .split() method does not keep the delimiters, and wrapping in a capturing group causes issues with existing capturing groups (due to nesting). Kind : inner method of tokenizers Returns : Array.<string> - The split string. Param Type Description text string The text to split. regex RegExp The regex to split on. tokenizers~createPattern(pattern, invert) ⇒ <code> RegExp </code> | <code> null </code> Helper method to construct a pattern from a config object. Kind : inner method of tokenizers Returns : RegExp | null - The compiled pattern. Param Type Default Description pattern Object The pattern object. invert boolean true Whether to invert the pattern. tokenizers~objectToMap(obj) ⇒ <code> Map. < string, any > </code> Helper function to convert an Object to a Map Kind : inner method of tokenizers Returns : Map.<string, any> - The map. Param Type Description obj Object The object to convert. tokenizers~prepareTensorForDecode(tensor) ⇒ <code> Array. < number > </code> Helper function to convert a tensor to a list before decoding. Kind : inner method of tokenizers Returns : Array.<number> - The tensor as a list. Param Type Description tensor Tensor The tensor to convert. tokenizers~clean_up_tokenization(text) ⇒ <code> string </code> Clean up a list of simple English tokenization artifacts like spaces before punctuations and abbreviated forms Kind : inner method of tokenizers Returns : string - The cleaned up text. Param Type Description text string The text to clean up. tokenizers~remove_accents(text) ⇒ <code> string </code> Helper function to remove accents from a string. Kind : inner method of tokenizers Returns : string - The text with accents removed. Param Type Description text string The text to remove accents from. tokenizers~lowercase_and_remove_accent(text) ⇒ <code> string </code> Helper function to lowercase a string and remove accents. Kind : inner method of tokenizers Returns : string - The lowercased text with accents removed. Param Type Description text string The text to lowercase and remove accents from. tokenizers~whitespace_split(text) ⇒ <code> Array. < string > </code> Split a string on whitespace. Kind : inner method of tokenizers Returns : Array.<string> - The split string. Param Type Description text string The text to split. tokenizers~PretrainedTokenizerOptions : <code> Object </code> Additional tokenizer-specific properties. Kind : inner typedef of tokenizers Properties Name Type Default Description [legacy] boolean false Whether or not the legacy behavior of the tokenizer should be used. tokenizers~BPENode : <code> Object </code> Kind : inner typedef of tokenizers Properties Name Type Description token string The token associated with the node bias number A positional bias for the node. [score] number The score of the node. [prev] BPENode The previous node in the linked list. [next] BPENode The next node in the linked list. 
tokenizers~SplitDelimiterBehavior : <code> 'removed' </code> | <code> 'isolated' </code> | <code> 'mergedWithPrevious' </code> | <code> 'mergedWithNext' </code> | <code> 'contiguous' </code> Kind : inner typedef of tokenizers tokenizers~PostProcessedOutput : <code> Object </code> Kind : inner typedef of tokenizers Properties Name Type Description tokens Array.<string> List of tokens produced by the post-processor. [token_type_ids] Array.<number> List of token type ids produced by the post-processor. tokenizers~EncodingSingle : <code> Object </code> Kind : inner typedef of tokenizers Properties Name Type Description input_ids Array.<number> List of token ids to be fed to a model. attention_mask Array.<number> List of indices specifying which tokens should be attended to by the model. [token_type_ids] Array.<number> List of token type ids to be fed to a model. tokenizers~Message : <code> Object </code> Kind : inner typedef of tokenizers Properties Name Type Description role string The role of the message (e.g., "user" or "assistant" or "system"). content string The content of the message. tokenizers~BatchEncoding : <code> Array < number > </code> | <code> Array < Array < number > > </code> | <code> Tensor </code> Holds the output of the tokenizer’s call function. Kind : inner typedef of tokenizers Properties Name Type Description input_ids BatchEncodingItem List of token ids to be fed to a model. attention_mask BatchEncodingItem List of indices specifying which tokens should be attended to by the model. [token_type_ids] BatchEncodingItem List of token type ids to be fed to a model.
NeuronX_Text_generation_inference_for_AWS_inferent.txt
NeuronX Text-generation-inference for AWS inferentia2 Text Generation Inference ( TGI ) is a toolkit for deploying and serving Large Language Models (LLMs). It is available for Inferentia2. Features The basic TGI features are supported: continuous batching, token streaming, greedy search and multinomial sampling using transformers . License NeuronX TGI is released under an Apache2 License . Deploy the service from the Hugging Face hub The simplest way to deploy the NeuronX TGI service for a specific model is to follow the deployment instructions in the model card: click on the “Deploy” button on the right, select your deployment service (“Inference Endpoints” and “SageMaker” are supported), select “AWS Inferentia”, follow the instructions. Deploy the service on a dedicated host The service is launched simply by running the neuronx-tgi container with two sets of parameters: Copied docker run <system_parameters> ghcr.io/huggingface/neuronx-tgi:latest <service_parameters> system parameters are used to map ports, volumes and devices between the host and the service, service parameters are forwarded to the text-generation-launcher . When deploying a service, you will need a pre-compiled Neuron model. The NeuronX TGI service supports two main modes of operation: you can either deploy the service on a model that has already been exported to Neuron, or alternatively you can take advantage of the Neuron Model Cache to export your own model.
Common system parameters Whenever you launch a TGI service, we highly recommend mounting a shared volume as /data in the container: this is where the models will be cached to speed up further instantiations of the service. Note also that enough neuron devices should be visible to the container. The simplest way to achieve that is to launch the service in privileged mode to get access to all neuron devices. Alternatively, each device can be explicitly exposed using the --device option. Finally, you might want to export the HF_TOKEN if you want to access gated repositories. Here is an example of a service instantiation: Copied docker run -p 8080:80 \ -v $(pwd)/data:/data \ --privileged \ -e HF_TOKEN=${HF_TOKEN} \ ghcr.io/huggingface/neuronx-tgi:latest \ <service_parameters> If you only want to map the first device, the launch command becomes: Copied docker run -p 8080:80 \ -v $(pwd)/data:/data \ --device=/dev/neuron0 \ -e HF_TOKEN=${HF_TOKEN} \ ghcr.io/huggingface/neuronx-tgi:latest \ <service_parameters> Using a standard model from the 🤗 HuggingFace Hub (recommended) We maintain a Neuron Model Cache of the most popular architectures and deployment parameters under aws-neuron/optimum-neuron-cache . If you just want to try the service quickly using a model that has not been exported yet, it is still possible to export it dynamically, provided that: you specify the export parameters when launching the service (or use the default parameters), the model configuration is cached. The snippet below shows how you can deploy a service from a standard hub model: Copied export HF_TOKEN=<YOUR_TOKEN> docker run -p 8080:80 \ -v $(pwd)/data:/data \ --privileged \ -e HF_TOKEN=${HF_TOKEN} \ -e HF_AUTO_CAST_TYPE="fp16" \ -e HF_NUM_CORES=2 \ ghcr.io/huggingface/neuronx-tgi:latest \ --model-id meta-llama/Meta-Llama-3-8B \ --max-batch-size 1 \ --max-input-length 3164 \ --max-total-tokens 4096 Using a model exported to a local path Alternatively, you can first export the model to neuron format locally. You can then deploy the service using the model stored in the shared volume: Copied docker run -p 8080:80 \ -v $(pwd)/data:/data \ --privileged \ ghcr.io/huggingface/neuronx-tgi:latest \ --model-id /data/<neuron_model_path> Note: You don’t need to specify any service parameters, as they will all be deduced from the model export configuration. Using a neuron model from the 🤗 HuggingFace Hub The easiest way to share a neuron model inside your organization is to push it to the Hugging Face hub, so that it can be deployed directly without requiring an export. The snippet below shows how you can deploy a service from a hub neuron model: Copied docker run -p 8080:80 \ -v $(pwd)/data:/data \ --privileged \ -e HF_TOKEN=${HF_TOKEN} \ ghcr.io/huggingface/neuronx-tgi:latest \ --model-id <organization>/<neuron-model> Choosing service parameters Use the following command to list the available service parameters: Copied docker run ghcr.io/huggingface/neuronx-tgi --help The configuration of an inference endpoint is always a compromise between throughput and latency: serving more requests in parallel will allow a higher throughput, but it will increase the latency. The neuron models have static input dimensions [batch_size, max_length] . This adds several restrictions on the following parameters: --max-batch-size must be set to batch_size , --max-input-length must be lower than max_length , --max-total-tokens must be set to max_length (it is per-request).
Although not strictly necessary, it is important for efficient prefilling that --max-batch-prefill-tokens be set to batch_size * max-input-length . Choosing the correct batch size As seen in the previous paragraph, the neuron model’s static batch size has a direct influence on the endpoint latency and throughput. Please refer to text-generation-inference for optimization hints. Note that the main constraint is to be able to fit the model for the specified batch_size within the total device memory available on your instance (16GB per neuron core, with 2 cores per device). Query the service You can query the model using either the /generate or /generate_stream routes: Copied curl 127.0.0.1:8080/generate \ -X POST \ -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \ -H 'Content-Type: application/json' Copied curl 127.0.0.1:8080/generate_stream \ -X POST \ -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \ -H 'Content-Type: application/json' Note: replace 127.0.0.1:8080 with your actual IP address and port.
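The /generate route can also be called from any HTTP client. Below is a minimal Python sketch (not part of the original examples) that sends the same request as the curl command above; it assumes the requests library is installed and that the container from the examples above is listening on 127.0.0.1:8080.

import requests

# Assumes the NeuronX TGI container from the examples above is listening on 127.0.0.1:8080.
response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 20},
    },
    timeout=60,
)
response.raise_for_status()
# The /generate route returns a JSON object with a "generated_text" field.
print(response.json()["generated_text"])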
Learning_Rate_Schedulers.txt
Learning Rate Schedulers This page contains the API reference documentation for learning rate schedulers included in timm . Schedulers Factory functions timm.scheduler.create_scheduler < source > ( args optimizer : Optimizer updates_per_epoch : int = 0 ) timm.scheduler.create_scheduler_v2 < source > ( optimizer : Optimizer sched : str = 'cosine' num_epochs : int = 300 decay_epochs : int = 90 decay_milestones : typing.List[int] = (90, 180, 270) cooldown_epochs : int = 0 patience_epochs : int = 10 decay_rate : float = 0.1 min_lr : float = 0 warmup_lr : float = 1e-05 warmup_epochs : int = 0 warmup_prefix : bool = False noise : typing.Union[float, typing.List[float]] = None noise_pct : float = 0.67 noise_std : float = 1.0 noise_seed : int = 42 cycle_mul : float = 1.0 cycle_decay : float = 0.1 cycle_limit : int = 1 k_decay : float = 1.0 plateau_mode : str = 'max' step_on_epochs : bool = True updates_per_epoch : int = 0 ) Scheduler Classes class timm.scheduler. CosineLRScheduler < source > ( optimizer : Optimizer t_initial : int lr_min : float = 0.0 cycle_mul : float = 1.0 cycle_decay : float = 1.0 cycle_limit : int = 1 warmup_t = 0 warmup_lr_init = 0 warmup_prefix = False t_in_epochs = True noise_range_t = None noise_pct = 0.67 noise_std = 1.0 noise_seed = 42 k_decay = 1.0 initialize = True ) Cosine decay with restarts. This is described in the paper https://arxiv.org/abs/1608.03983 . Inspiration from https://github.com/allenai/allennlp/blob/master/allennlp/training/learning_rate_schedulers/cosine.py k-decay option based on k-decay: A New Method For Learning Rate Schedule - https://arxiv.org/abs/2004.05909 class timm.scheduler. MultiStepLRScheduler < source > ( optimizer : Optimizer decay_t : typing.List[int] decay_rate : float = 1.0 warmup_t = 0 warmup_lr_init = 0 warmup_prefix = True t_in_epochs = True noise_range_t = None noise_pct = 0.67 noise_std = 1.0 noise_seed = 42 initialize = True ) class timm.scheduler. PlateauLRScheduler < source > ( optimizer decay_rate = 0.1 patience_t = 10 verbose = True threshold = 0.0001 cooldown_t = 0 warmup_t = 0 warmup_lr_init = 0 lr_min = 0 mode = 'max' noise_range_t = None noise_type = 'normal' noise_pct = 0.67 noise_std = 1.0 noise_seed = None initialize = True ) Decay the LR by a factor every time the validation loss plateaus.
class timm.scheduler. PolyLRScheduler < source > ( optimizer : Optimizer t_initial : int power : float = 0.5 lr_min : float = 0.0 cycle_mul : float = 1.0 cycle_decay : float = 1.0 cycle_limit : int = 1 warmup_t = 0 warmup_lr_init = 0 warmup_prefix = False t_in_epochs = True noise_range_t = None noise_pct = 0.67 noise_std = 1.0 noise_seed = 42 k_decay = 1.0 initialize = True ) Polynomial LR Scheduler w/ warmup, noise, and k-decay. k-decay option based on k-decay: A New Method For Learning Rate Schedule - https://arxiv.org/abs/2004.05909 class timm.scheduler. StepLRScheduler < source > ( optimizer : Optimizer decay_t : float decay_rate : float = 1.0 warmup_t = 0 warmup_lr_init = 0 warmup_prefix = True t_in_epochs = True noise_range_t = None noise_pct = 0.67 noise_std = 1.0 noise_seed = 42 initialize = True ) class timm.scheduler. TanhLRScheduler < source > ( optimizer : Optimizer t_initial : int lb : float = -7.0 ub : float = 3.0 lr_min : float = 0.0 cycle_mul : float = 1.0 cycle_decay : float = 1.0 cycle_limit : int = 1 warmup_t = 0 warmup_lr_init = 0 warmup_prefix = False t_in_epochs = True noise_range_t = None noise_pct = 0.67 noise_std = 1.0 noise_seed = 42 initialize = True ) Hyperbolic-Tangent decay with restarts. This is described in the paper https://arxiv.org/abs/1806.01593
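As a usage illustration that is not part of the reference above, here is a minimal sketch of driving one of these schedulers by hand; the model, optimizer, learning rates and epoch count are hypothetical placeholders, and the per-epoch step(epoch) call assumes the default t_in_epochs=True behaviour.

import torch
from timm.scheduler import CosineLRScheduler

# Hypothetical model and optimizer, only to show the scheduler API.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

scheduler = CosineLRScheduler(
    optimizer,
    t_initial=300,        # length of the (first) cosine cycle, in epochs
    lr_min=1e-5,
    warmup_t=5,           # 5 warmup epochs
    warmup_lr_init=1e-6,
)

for epoch in range(300):
    # ... run one training epoch here ...
    scheduler.step(epoch + 1)  # advance the schedule at the epoch boundary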
Running_models_on_WebGPU.txt
Running models on WebGPU WebGPU is a new web standard for accelerated graphics and compute. The API enables web developers to use the underlying system’s GPU to carry out high-performance computations directly in the browser. WebGPU is the successor to WebGL and provides significantly better performance, because it allows for more direct interaction with modern GPUs. Lastly, it supports general-purpose GPU computations, which makes it just perfect for machine learning! As of October 2024, global WebGPU support is around 70% (according to caniuse.com ), meaning some users may not be able to use the API. If the following demos do not work in your browser, you may need to enable it using a feature flag: Firefox: with the dom.webgpu.enabled flag (see here ). Safari: with the WebGPU feature flag (see here ). Older Chromium browsers (on Windows, macOS, Linux): with the enable-unsafe-webgpu flag (see here ). Usage in Transformers.js v3 Thanks to our collaboration with ONNX Runtime Web , enabling WebGPU acceleration is as simple as setting device: 'webgpu' when loading a model. Let’s see some examples! Example: Compute text embeddings on WebGPU ( demo ) Copied import { pipeline } from "@huggingface/transformers" ; // Create a feature-extraction pipeline const extractor = await pipeline ( "feature-extraction" , "mixedbread-ai/mxbai-embed-xsmall-v1" , { device : "webgpu" }, ); // Compute embeddings const texts = [ "Hello world!" , "This is an example sentence." ]; const embeddings = await extractor (texts, { pooling : "mean" , normalize : true }); console . log (embeddings. tolist ()); // [ // [-0.016986183822155, 0.03228696808218956, -0.0013630966423079371, ... ], // [0.09050482511520386, 0.07207386940717697, 0.05762749910354614, ...
], // ] Example: Perform automatic speech recognition with OpenAI whisper on WebGPU ( demo ) Copied import { pipeline } from "@huggingface/transformers" ; // Create automatic speech recognition pipeline const transcriber = await pipeline ( "automatic-speech-recognition" , "onnx-community/whisper-tiny.en" , { device : "webgpu" }, ); // Transcribe audio from a URL const url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav" ; const output = await transcriber (url); console . log (output); // { text: ' And so my fellow Americans ask not what your country can do for you, ask what you can do for your country.' } Example: Perform image classification with MobileNetV4 on WebGPU ( demo ) Copied import { pipeline } from "@huggingface/transformers" ; // Create image classification pipeline const classifier = await pipeline ( "image-classification" , "onnx-community/mobilenetv4_conv_small.e2400_r224_in1k" , { device : "webgpu" }, ); // Classify an image from a URL const url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg" ; const output = await classifier (url); console . log (output); // [ // { label: 'tiger, Panthera tigris', score: 0.6149784922599792 }, // { label: 'tiger cat', score: 0.30281734466552734 }, // { label: 'tabby, tabby cat', score: 0.0019135422771796584 }, // { label: 'lynx, catamount', score: 0.0012161266058683395 }, // { label: 'Egyptian cat', score: 0.0011465961579233408 } // ] Reporting bugs and providing feedback Due to the experimental nature of WebGPU, especially in non-Chromium browsers, you may experience issues when trying to run a model (even if it can run in WASM). If you do, please open an issue on GitHub and we’ll do our best to address it. Thanks!
DataLoaders,_Optimizers,_and_Schedulers.txt
DataLoaders, Optimizers, and Schedulers The internal classes Accelerate uses to prepare objects for distributed training when calling prepare() . DataLoader utilities accelerate.data_loader.prepare_data_loader < source > ( dataloader : DataLoader device : typing.Optional[torch.device] = None num_processes : typing.Optional[int] = None process_index : typing.Optional[int] = None split_batches : bool = False put_on_device : bool = False rng_types : typing.Optional[typing.List[typing.Union[str, accelerate.utils.dataclasses.RNGType]]] = None dispatch_batches : typing.Optional[bool] = None even_batches : bool = True slice_fn_for_dispatch : typing.Optional[typing.Callable] = None use_seedable_sampler : bool = False data_seed : typing.Optional[int] = None non_blocking : bool = False use_stateful_dataloader : bool = False ) → torch.utils.data.dataloader.DataLoader Parameters dataloader ( torch.utils.data.dataloader.DataLoader ) — The data loader to split across several devices.
device ( torch.device ) — The target device for the returned DataLoader . num_processes ( int , optional ) — The number of processes running concurrently. Will default to the value given by PartialState . process_index ( int , optional ) — The index of the current process. Will default to the value given by PartialState . split_batches ( bool , optional , defaults to False ) — Whether the resulting DataLoader should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the process_index -th and advancing of num_processes batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial dataloader if this option is set to True , the batch size of the initial dataloader multiplied by num_processes otherwise. Setting this option to True requires that the batch size of the dataloader is a round multiple of batch_size . put_on_device ( bool , optional , defaults to False ) — Whether or not to put the batches on device (only works if the batches are nested list, tuples or dictionaries of tensors). rng_types (list of str or RNGType ) — The list of random number generators to synchronize at the beginning of each iteration. Should be one or several of: "torch" : the base torch random number generator "cuda" : the CUDA random number generator (GPU only) "xla" : the XLA random number generator (TPU only) "generator" : the torch.Generator of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type. dispatch_batches ( bool , optional ) — If set to True , the dataloader prepared is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to True when the underlying dataset is an IterableDataset , False otherwise. even_batches ( bool , optional , defaults to True ) — If set to True , in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. slice_fn_for_dispatch ( Callable , optional ) — If passed, this function will be used to slice tensors across num_processes . Will default to slice_tensors() . This argument is used only when dispatch_batches is set to True and will be ignored otherwise. use_seedable_sampler ( bool , optional , defaults to False ) — Whether to use the SeedableRandomSampler instead of a RandomSampler for better reproducibility. Comes at a cost of potentially different performances due to different shuffling algorithms but ensures results will be the exact same. Should be paired with set_seed() at every self.set_epoch . data_seed ( int , optional , defaults to None ) — The seed to use for the underlying generator when using use_seedable_sampler . If None , the generator will use the current default seed from torch. non_blocking ( bool , optional , defaults to False ) — If set to True , dataloader will utilize non-blocking host-to-device transfers. If the dataloader has pin_memory set to True , this will help to increase overlap between data transfer and computations. use_stateful_dataloader ( bool , optional , defaults to False ) — If set to True , the dataloader prepared by the Accelerator will be backed by torchdata.StatefulDataLoader .
This requires torchdata version 0.8.0 or higher, which supports StatefulDataLoader , to be installed. Returns torch.utils.data.dataloader.DataLoader A new data loader that will yield the portion of the batches. Wraps a PyTorch DataLoader to generate batches for one of the processes only. Depending on the value of the drop_last attribute of the dataloader passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. BatchSampler s with varying batch sizes are not enabled by default. To enable this behaviour, set even_batches equal to False . accelerate.skip_first_batches < source > ( dataloader num_batches = 0 ) Creates a torch.utils.data.DataLoader that will efficiently skip the first num_batches . Should not be used if the original dataloader is a StatefulDataLoader . BatchSamplerShard class accelerate.data_loader. BatchSamplerShard < source > ( batch_sampler : BatchSampler num_processes : int = 1 process_index : int = 0 split_batches : bool = False even_batches : bool = True ) Parameters batch_sampler ( torch.utils.data.sampler.BatchSampler ) — The batch sampler to split in several shards. num_processes ( int , optional , defaults to 1) — The number of processes running concurrently. process_index ( int , optional , defaults to 0) — The index of the current process. split_batches ( bool , optional , defaults to False ) — Whether the shards should be created by splitting a batch to give a piece of it on each process, or by yielding different full batches on each process. On two processes with a sampler of [[0, 1, 2, 3], [4, 5, 6, 7]] , this will result in: the sampler on process 0 to yield [0, 1, 2, 3] and the sampler on process 1 to yield [4, 5, 6, 7] if this argument is set to False . the sampler on process 0 to yield [0, 1] then [4, 5] and the sampler on process 1 to yield [2, 3] then [6, 7] if this argument is set to True . even_batches ( bool , optional , defaults to True ) — Whether or not to loop back at the beginning of the sampler when the number of samples is not a round multiple of (original batch size / number of processes). Wraps a PyTorch BatchSampler to generate batches for one of the processes only. Instances of this class will always yield a number of batches that is a round multiple of num_processes and that all have the same size. Depending on the value of the drop_last attribute of the batch sampler passed, it will either stop the iteration at the first batch that would be too small / not present on all processes or loop with indices from the beginning. BatchSampler s with varying batch sizes are not enabled by default. To enable this behaviour, set even_batches equal to False . IterableDatasetShard class accelerate.data_loader. IterableDatasetShard < source > ( dataset : IterableDataset batch_size : int = 1 drop_last : bool = False num_processes : int = 1 process_index : int = 0 split_batches : bool = False ) Parameters dataset ( torch.utils.data.dataset.IterableDataset ) — The dataset to split in several shards. batch_size ( int , optional , defaults to 1) — The size of the batches per shard (if split_batches=False ) or the size of the batches (if split_batches=True ). drop_last ( bool , optional , defaults to False ) — Whether or not to drop the last incomplete batch or complete the last batches by using the samples from the beginning. num_processes ( int , optional , defaults to 1) — The number of processes running concurrently.
IterableDatasetShard class accelerate.data_loader. IterableDatasetShard < source > ( dataset : IterableDataset batch_size : int = 1 drop_last : bool = False num_processes : int = 1 process_index : int = 0 split_batches : bool = False ) Parameters dataset ( torch.utils.data.dataset.IterableDataset ) — The dataset to split in several shards. batch_size ( int , optional , defaults to 1) — The size of the batches per shard (if split_batches=False ) or the size of the batches (if split_batches=True ). drop_last ( bool , optional , defaults to False ) — Whether or not to drop the last incomplete batch or complete the last batches by using the samples from the beginning. num_processes ( int , optional , defaults to 1) — The number of processes running concurrently. process_index ( int , optional , defaults to 0) — The index of the current process. split_batches ( bool , optional , defaults to False ) — Whether the shards should be created by splitting a batch to give a piece of it on each process, or by yielding different full batches on each process. On two processes with an iterable dataset yielding [0, 1, 2, 3, 4, 5, 6, 7] , this will result in: the shard on process 0 yielding [0, 1, 2, 3] and the shard on process 1 yielding [4, 5, 6, 7] if this argument is set to False ; the shard on process 0 yielding [0, 1, 4, 5] and the shard on process 1 yielding [2, 3, 6, 7] if this argument is set to True (see the sketch after this section). Wraps a PyTorch IterableDataset to generate samples for one of the processes only. Instances of this class will always yield a number of samples that is a round multiple of the actual batch size (depending on the value of split_batches , this is either batch_size or batch_size x num_processes ). Depending on the value of the drop_last attribute of the batch sampler passed, it will either stop the iteration at the first batch that would be too small or loop with indices from the beginning.
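The documented behaviour above can be reproduced with a short sketch; again, this is my own illustration under the assumption that the shard can be iterated directly, not an official snippet.

from torch.utils.data import IterableDataset
from accelerate.data_loader import IterableDatasetShard

class RangeDataset(IterableDataset):
    """A toy iterable dataset yielding 0..7."""
    def __iter__(self):
        yield from range(8)

for process_index in (0, 1):
    shard = IterableDatasetShard(
        RangeDataset(), batch_size=4, num_processes=2, process_index=process_index
    )
    print(process_index, list(shard))
# expected (split_batches=False): 0 -> [0, 1, 2, 3] and 1 -> [4, 5, 6, 7]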
DataLoaderShard class accelerate.data_loader. DataLoaderShard < source > ( dataset device = None rng_types = None synchronized_generator = None skip_batches = 0 use_stateful_dataloader = False _drop_last : bool = False _non_blocking : bool = False **kwargs ) Parameters dataset ( torch.utils.data.dataset.Dataset ) — The dataset to use to build this dataloader. device ( torch.device , optional ) — If passed, the device to put all batches on. rng_types (list of str or RNGType ) — The list of random number generators to synchronize at the beginning of each iteration. Should be one or several of: "torch" : the base torch random number generator "cuda" : the CUDA random number generator (GPU only) "xla" : the XLA random number generator (TPU only) "generator" : an optional torch.Generator synchronized_generator ( torch.Generator , optional ) — A random number generator to keep synchronized across processes. skip_batches ( int , optional , defaults to 0) — The number of batches to skip at the beginning. use_stateful_dataloader ( bool , optional , defaults to False ) — Whether to have this class adapt StatefulDataLoader from torchdata instead of the regular DataLoader . **kwargs (additional keyword arguments, optional ) — All other keyword arguments to pass to the regular DataLoader initialization. Subclass of DataLoaderAdapter that will deal with device placement and current distributed setup. Available attributes: total_batch_size ( int ) — Total batch size of the dataloader across all processes. Equal to the original batch size when split_batches=True ; otherwise the original batch size * the total number of processes. total_dataset_length ( int ) — Total length of the inner dataset across all processes. DataLoaderDispatcher class accelerate.data_loader. DataLoaderDispatcher < source > ( dataset split_batches : bool = False skip_batches = 0 use_stateful_dataloader = False _drop_last : bool = False _non_blocking : bool = False slice_fn = None **kwargs ) Parameters split_batches ( bool , optional , defaults to False ) — Whether the resulting DataLoader should split the batches of the original data loader across devices or yield full batches (in which case it will yield batches starting at the process_index -th and advancing by num_processes batches at each iteration). Another way to see this is that the observed batch size will be the same as the initial dataloader if this option is set to True , the batch size of the initial dataloader multiplied by num_processes otherwise. Setting this option to True requires that the batch size of the dataloader is a round multiple of batch_size . skip_batches ( int , optional , defaults to 0) — The number of batches to skip at the beginning of an iteration. use_stateful_dataloader ( bool , optional , defaults to False ) — Whether to have this class adapt StatefulDataLoader from torchdata instead of the regular DataLoader . Subclass of DataLoaderAdapter that will iterate and preprocess on process 0 only, then dispatch to each process its part of the batch. Available attributes: total_batch_size ( int ) — Total batch size of the dataloader across all processes. Equal to the original batch size when split_batches=True ; otherwise the original batch size * the total number of processes. total_dataset_length ( int ) — Total length of the inner dataset across all processes. AcceleratedOptimizer class accelerate.optimizer. AcceleratedOptimizer < source > ( optimizer device_placement = True scaler = None ) Parameters optimizer ( torch.optim.optimizer.Optimizer ) — The optimizer to wrap. device_placement ( bool , optional , defaults to True ) — Whether or not the optimizer should handle device placement. If so, it will place the state dictionary of optimizer on the right device. scaler ( torch.cuda.amp.grad_scaler.GradScaler , optional ) — The scaler to use in the step function if training with mixed precision. Internal wrapper around a torch optimizer. Will conditionally perform step and zero_grad if gradients should be synchronized when performing gradient accumulation. eval < source > ( ) Sets the optimizer to “eval” mode. Useful for optimizers like schedule_free . train < source > ( ) Sets the optimizer to “train” mode. Useful for optimizers like schedule_free . AcceleratedScheduler class accelerate.scheduler. AcceleratedScheduler < source > ( scheduler optimizers step_with_optimizer : bool = True split_batches : bool = False ) Parameters scheduler ( torch.optim.lr_scheduler._LRScheduler ) — The scheduler to wrap. optimizers (one or a list of torch.optim.Optimizer ) — The optimizers used. step_with_optimizer ( bool , optional , defaults to True ) — Whether or not the scheduler should be stepped at each optimizer step. split_batches ( bool , optional , defaults to False ) — Whether or not the dataloaders split one batch across the different processes (so batch size is the same regardless of the number of processes) or create batches on each process (so batch size is the original batch size multiplied by the number of processes). A wrapper around a learning rate scheduler that will only step when the optimizer(s) have a training step. Useful to avoid making a scheduler step too fast when gradients overflowed and there was no training step (in mixed precision training). When performing gradient accumulation, scheduler lengths should not be changed accordingly; Accelerate will always step the scheduler to account for it.
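A minimal sketch (assuming a single-process CPU run with a dummy model) showing that Accelerator.prepare() returns the wrapper classes documented on this page and how they interact in a training loop.

import torch
from accelerate import Accelerator
from accelerate.data_loader import DataLoaderShard
from accelerate.optimizer import AcceleratedOptimizer
from accelerate.scheduler import AcceleratedScheduler

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
dataloader = torch.utils.data.DataLoader(torch.randn(8, 4), batch_size=2)

model, optimizer, dataloader, scheduler = accelerator.prepare(
    model, optimizer, dataloader, scheduler
)
print(isinstance(optimizer, AcceleratedOptimizer))  # True
print(isinstance(scheduler, AcceleratedScheduler))  # True
print(isinstance(dataloader, DataLoaderShard))      # True when batches are not dispatched

for batch in dataloader:
    loss = model(batch).sum()
    accelerator.backward(loss)
    optimizer.step()   # skipped internally while gradients are being accumulated
    scheduler.step()   # only steps when the optimizer actually stepped
    optimizer.zero_grad()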
Using_🧨_diffusers_at_Hugging_Face.txt
Using 🧨 diffusers at Hugging Face Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using 🧨 diffusers at Hugging Face Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Adapters AllenNLP BERTopic Asteroid Diffusers ESPnet fastai Flair Keras TF-Keras (legacy) ML-Agents mlx-image MLX OpenCLIP PaddleNLP peft RL-Baselines3-Zoo Sample Factory Sentence Transformers SetFit spaCy SpanMarker SpeechBrain Stable-Baselines3 Stanza TensorBoard timm Transformers Transformers.js Unity Sentis Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using 🧨 diffusers at Hugging Face Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, Diffusers is a modular toolbox that supports both. The library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. Exploring Diffusers in the Hub There are over 10,000 diffusers compatible pipelines on the Hub which you can find by filtering at the left of the models page . Diffusion systems are typically composed of multiple components such as text encoder, UNet, VAE, and scheduler. Even though they are not standalone models, the pipeline abstraction makes it easy to use them for inference or training. You can find diffusion pipelines for many different tasks: Generating images from natural language text prompts ( text-to-image ). Transforming images using natural language text prompts ( image-to-image ). Generating videos from natural language descriptions ( text-to-video ). 
You can try out the models directly in the browser if you want to test them out without downloading them, thanks to the in-browser widgets! Using existing pipelines All diffusers pipelines are a line away from being used! To run generation, we recommend always starting from the DiffusionPipeline : Copied from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" ) If you want to load a specific pipeline component such as the UNet, you can do so by: Copied from diffusers import UNet2DConditionModel unet = UNet2DConditionModel.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , subfolder= "unet" ) Sharing your pipelines and models All the pipeline classes , model classes , and scheduler classes are fully compatible with the Hub. More specifically, they can be easily loaded from the Hub using the from_pretrained() method and can be shared with others using the push_to_hub() method. For more details, please check out the documentation . Additional resources Diffusers library . Diffusers docs .
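As a quick end-to-end recap of the snippets above, the sketch below runs generation with the loaded pipeline and shares it back to the Hub; the repository name is a placeholder, and a CUDA GPU plus a prior huggingface-cli login are assumed.

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipeline.to("cuda")  # assumes a CUDA GPU is available

# Generate an image from a text prompt.
image = pipeline("An astronaut riding a green horse").images[0]
image.save("astronaut.png")

# Share the pipeline (for example after fine-tuning) under a placeholder repo name.
pipeline.push_to_hub("my-username/my-sdxl-pipeline")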
@huggingface_hub.txt
@huggingface/hub Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation @huggingface/hub Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started @huggingface/hub Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser Type Aliases AccessToken Ƭ AccessToken : string Actually hf_${string} , but for convenience, using the string type Defined in hub/src/types/public.ts:15 AccessTokenRole Ƭ AccessTokenRole : "admin" | "write" | "contributor" | "read" Defined in hub/src/types/public.ts:70 AuthType Ƭ AuthType : "access_token" | "app_token" | "app_token_as_user" Defined in hub/src/types/public.ts:72 CommitOperation Ƭ CommitOperation : CommitDeletedEntry | CommitFile Defined in hub/src/lib/commit.ts:54 
CommitParams Ƭ CommitParams : { abortSignal? : AbortSignal ; branch? : string ; description? : string ; fetch? : typeof fetch ; hubUrl? : string ; isPullRequest? : boolean ; operations : CommitOperation [] ; parentCommit? : string ; repo : RepoDesignation ; title : string ; useWebWorkers? : boolean | { minSize? : number ; poolSize? : number } } & Partial \< CredentialsParams > Defined in hub/src/lib/commit.ts:57 CommitProgressEvent Ƭ CommitProgressEvent : { event : "phase" ; phase : "preuploading" | "uploadingLargeFiles" | "committing" } | { event : "fileProgress" ; path : string ; progress : number ; state : "hashing" | "uploading" } Defined in hub/src/lib/commit.ts:106 ContentSource Ƭ ContentSource : Blob | URL Defined in hub/src/lib/commit.ts:35 Dtype Ƭ Dtype : "F64" | "F32" | "F16" | "BF16" | "I64" | "I32" | "I16" | "I8" | "U8" | "BOOL" Defined in hub/src/lib/parse-safetensors-metadata.ts:45 PipelineType Ƭ PipelineType : keyof typeof PIPELINE_DATA Defined in tasks/dist/commonjs/pipelines.d.ts:426 RepoDesignation Ƭ RepoDesignation : RepoId | RepoFullName Defined in hub/src/types/public.ts:12 RepoFullName Ƭ RepoFullName : string | `spaces/${string}` | `datasets/${string}` Defined in hub/src/types/public.ts:10 RepoType Ƭ RepoType : "space" | "dataset" | "model" Defined in hub/src/types/public.ts:3 SafetensorsFileHeader Ƭ SafetensorsFileHeader : Record \< TensorName , TensorInfo > & { __metadata__ : Record \< string , string > } Defined in hub/src/lib/parse-safetensors-metadata.ts:53 SafetensorsParseFromRepo Ƭ SafetensorsParseFromRepo : { header : SafetensorsFileHeader ; parameterCount? : Partial \< Record \< Dtype , number >> ; sharded : false } | { headers : SafetensorsShardedHeaders ; index : SafetensorsIndexJson ; parameterCount? : Partial \< Record \< Dtype , number >> ; sharded : true } Defined in hub/src/lib/parse-safetensors-metadata.ts:67 SafetensorsShardedHeaders Ƭ SafetensorsShardedHeaders : Record \< FileName , SafetensorsFileHeader > Defined in hub/src/lib/parse-safetensors-metadata.ts:65 SpaceHardwareFlavor Ƭ SpaceHardwareFlavor : "cpu-basic" | "cpu-upgrade" | "t4-small" | "t4-medium" | "l4x1" | "l4x4" | "a10g-small" | "a10g-large" | "a10g-largex2" | "a10g-largex4" | "a100-large" | "v5e-1x1" | "v5e-2x2" | "v5e-2x4" Defined in hub/src/types/public.ts:40 SpaceSdk Ƭ SpaceSdk : "streamlit" | "gradio" | "docker" | "static" Defined in hub/src/types/public.ts:56 SpaceStage Ƭ SpaceStage : "NO_APP_FILE" | "CONFIG_ERROR" | "BUILDING" | "BUILD_ERROR" | "RUNNING" | "RUNNING_BUILDING" | "RUNTIME_ERROR" | "DELETING" | "PAUSED" | "SLEEPING" Defined in hub/src/types/public.ts:58 TensorName Ƭ TensorName : string Defined in hub/src/lib/parse-safetensors-metadata.ts:44 WhoAmI Ƭ WhoAmI : WhoAmIApp | WhoAmIOrg | WhoAmIUser Defined in hub/src/lib/who-am-i.ts:50 Variables DATASET _ EXPANDABLE _ KEYS • Const DATASET_EXPANDABLE_KEYS : readonly [ "author" , "cardData" , "citation" , "createdAt" , "disabled" , "description" , "downloads" , "downloadsAllTime" , "gated" , "gitalyUid" , "lastModified" , "likes" , "paperswithcode_id" , "private" , "sha" , "tags" ] Defined in hub/src/lib/list-datasets.ts:17 DATASET _ EXPAND _ KEYS • Const DATASET_EXPAND_KEYS : readonly [ "private" , "downloads" , "gated" , "likes" , "lastModified" ] Defined in hub/src/lib/list-datasets.ts:9 DEFAULT _ REVISION • Const DEFAULT_REVISION : "main" Defined in hub/src/lib/snapshot-download.ts:12 MODEL _ EXPANDABLE _ KEYS • Const MODEL_EXPANDABLE_KEYS : readonly [ "author" , "cardData" , "config" , "createdAt" , "disabled" , 
"downloads" , "downloadsAllTime" , "gated" , "gitalyUid" , "lastModified" , "library_name" , "likes" , "model-index" , "pipeline_tag" , "private" , "safetensors" , "sha" , "spaces" , "tags" , "transformersInfo" ] Defined in hub/src/lib/list-models.ts:18 MODEL _ EXPAND _ KEYS • Const MODEL_EXPAND_KEYS : readonly [ "pipeline_tag" , "private" , "gated" , "downloads" , "likes" , "lastModified" ] Defined in hub/src/lib/list-models.ts:9 REGEX _ COMMIT _ HASH • Const REGEX_COMMIT_HASH : RegExp Defined in hub/src/lib/download-file-to-cache-dir.ts:10 REPO _ ID _ SEPARATOR • Const REPO_ID_SEPARATOR : string = "--" Defined in hub/src/lib/cache-management.ts:25 RE _ SAFETENSORS _ FILE • Const RE_SAFETENSORS_FILE : RegExp Defined in hub/src/lib/parse-safetensors-metadata.ts:14 RE _ SAFETENSORS _ INDEX _ FILE • Const RE_SAFETENSORS_INDEX_FILE : RegExp Defined in hub/src/lib/parse-safetensors-metadata.ts:15 RE _ SAFETENSORS _ SHARD _ FILE • Const RE_SAFETENSORS_SHARD_FILE : RegExp Defined in hub/src/lib/parse-safetensors-metadata.ts:16 SAFETENSORS _ FILE • Const SAFETENSORS_FILE : "model.safetensors" Defined in hub/src/lib/parse-safetensors-metadata.ts:10 SAFETENSORS _ INDEX _ FILE • Const SAFETENSORS_INDEX_FILE : "model.safetensors.index.json" Defined in hub/src/lib/parse-safetensors-metadata.ts:11 SPACE _ EXPANDABLE _ KEYS • Const SPACE_EXPANDABLE_KEYS : readonly [ "author" , "cardData" , "datasets" , "disabled" , "gitalyUid" , "lastModified" , "createdAt" , "likes" , "private" , "runtime" , "sdk" , "sha" , "subdomain" , "tags" , "models" ] Defined in hub/src/lib/list-spaces.ts:15 SPACE _ EXPAND _ KEYS • Const SPACE_EXPAND_KEYS : readonly [ "sdk" , "likes" , "private" , "lastModified" ] Defined in hub/src/lib/list-spaces.ts:9 Functions _ _ internal _ sha256 ▸ __internal_sha256 ( buffer , opts? ): AsyncGenerator \< number , string > Parameters Name Type buffer Blob opts? Object opts.abortSignal? AbortSignal opts.useWebWorker? boolean | { minSize? : number ; poolSize? : number } Returns AsyncGenerator \< number , string > hex-encoded sha Yields progress (0-1) Defined in hub/src/utils/sha256.ts:72 checkRepoAccess ▸ checkRepoAccess ( params ): Promise \< void > Check if we have read access to a repository. Throw a HubApiError error if we don’t have access. HubApiError.statusCode will be 401, 403 or 404. Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; repo : RepoDesignation } & Partial \< CredentialsParams > Returns Promise \< void > Defined in hub/src/lib/check-repo-access.ts:13 commit ▸ commit ( params ): Promise \< CommitOutput > Parameters Name Type params CommitParams Returns Promise \< CommitOutput > Defined in hub/src/lib/commit.ts:553 commitIter ▸ commitIter ( params ): AsyncGenerator \< CommitProgressEvent , CommitOutput > Internal function for now, used by commit. Can be exposed later to offer fine-tuned progress info Parameters Name Type params CommitParams Returns AsyncGenerator \< CommitProgressEvent , CommitOutput > Defined in hub/src/lib/commit.ts:123 countCommits ▸ countCommits ( params ): Promise \< number > Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; repo : RepoDesignation ; revision? 
: string } & Partial \< CredentialsParams > Returns Promise \< number > Defined in hub/src/lib/count-commits.ts:7 createRepo ▸ createRepo ( params ): Promise \<{ repoUrl : string }> Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; files? : { content : ArrayBuffer | Blob ; path : string }[] ; hubUrl? : string ; license? : string ; private? : boolean ; repo : RepoDesignation ; sdk? : SpaceSdk | undefined } & CredentialsParams Returns Promise \<{ repoUrl : string }> Defined in hub/src/lib/create-repo.ts:9 datasetInfo ▸ datasetInfo \< T >( params ): Promise \< DatasetEntry & Pick \< ApiDatasetInfo , T >> Type parameters Name Type T extends "author" | "cardData" | "disabled" | "gitalyUid" | "createdAt" | "tags" | "paperswithcode_id" | "sha" | "citation" | "description" | "downloadsAllTime" = never Parameters Name Type params { additionalFields? : T [] ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; name : string ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< DatasetEntry & Pick \< ApiDatasetInfo , T >> Defined in hub/src/lib/dataset-info.ts:9 deleteFile ▸ deleteFile ( params ): Promise \< CommitOutput > Parameters Name Type params { branch? : string ; commitDescription? : string ; commitTitle? : string ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; isPullRequest? : boolean ; parentCommit? : string ; path : string ; repo : RepoDesignation } & CredentialsParams Returns Promise \< CommitOutput > Defined in hub/src/lib/delete-file.ts:5 deleteFiles ▸ deleteFiles ( params ): Promise \< CommitOutput > Parameters Name Type params { branch? : string ; commitDescription? : string ; commitTitle? : string ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; isPullRequest? : boolean ; parentCommit? : string ; paths : string [] ; repo : RepoDesignation } & CredentialsParams Returns Promise \< CommitOutput > Defined in hub/src/lib/delete-files.ts:5 deleteRepo ▸ deleteRepo ( params ): Promise \< void > Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; repo : RepoDesignation } & CredentialsParams Returns Promise \< void > Defined in hub/src/lib/delete-repo.ts:7 downloadFile ▸ downloadFile ( params ): Promise \< Response | null > Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; path : string ; range? : [ number , number ] ; raw? : boolean ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< Response | null > null when the file doesn’t exist Defined in hub/src/lib/download-file.ts:10 downloadFileToCacheDir ▸ downloadFileToCacheDir ( params ): Promise \< string > Download a given file if it’s not already present in the local cache. 
Parameters Name Type params { cacheDir? : string ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; path : string ; raw? : boolean ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< string > the symlink to the blob object Defined in hub/src/lib/download-file-to-cache-dir.ts:40 fileDownloadInfo ▸ fileDownloadInfo ( params ): Promise \< FileDownloadInfoOutput | null > Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; noContentDisposition? : boolean ; path : string ; raw? : boolean ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< FileDownloadInfoOutput | null > null when the file doesn’t exist Defined in hub/src/lib/file-download-info.ts:18 fileExists ▸ fileExists ( params ): Promise \< boolean > Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; path : string ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< boolean > Defined in hub/src/lib/file-exists.ts:7 getBlobStat ▸ getBlobStat ( blobPath , blobStats ): Promise \< Stats > Parameters Name Type blobPath string blobStats Map \< string , Stats > Returns Promise \< Stats > Defined in hub/src/lib/cache-management.ts:244 getHFHubCachePath ▸ getHFHubCachePath (): string Returns string Defined in hub/src/lib/cache-management.ts:19 getRepoFolderName ▸ getRepoFolderName ( «destructured» ): string Parameters Name Type «destructured» RepoId Returns string Defined in hub/src/lib/cache-management.ts:27 listCommits ▸ listCommits ( params ): AsyncGenerator \< CommitData > Parameters Name Type params { batchSize? : number ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns AsyncGenerator \< CommitData > Defined in hub/src/lib/list-commits.ts:17 listDatasets ▸ listDatasets \< T >( params? ): AsyncGenerator \< DatasetEntry & Pick \< ApiDatasetInfo , T >> Type parameters Name Type T extends "author" | "cardData" | "disabled" | "gitalyUid" | "createdAt" | "tags" | "paperswithcode_id" | "sha" | "citation" | "description" | "downloadsAllTime" = never Parameters Name Type params? { search?: { query?: string | undefined; owner?: string | undefined; tags?: string[] | undefined; } | undefined; hubUrl?: string | undefined; additionalFields?: T[] | undefined; limit?: number | undefined; fetch?: { …; } | undefined; } & Partial\<…> Returns AsyncGenerator \< DatasetEntry & Pick \< ApiDatasetInfo , T >> Defined in hub/src/lib/list-datasets.ts:47 listFiles ▸ listFiles ( params ): AsyncGenerator \< ListFileEntry > List files in a folder. To list ALL files in the directory, call it with params.recursive set to true . Parameters Name Type params { expand? : boolean ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? 
: string ; path? : string ; recursive? : boolean ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns AsyncGenerator \< ListFileEntry > Defined in hub/src/lib/list-files.ts:38 listModels ▸ listModels \< T >( params? ): AsyncGenerator \< ModelEntry & Pick \< ApiModelInfo , T >> Type parameters Name Type T extends "spaces" | "author" | "cardData" | "disabled" | "gitalyUid" | "createdAt" | "tags" | "sha" | "downloadsAllTime" | "config" | "library_name" | "model-index" | "safetensors" | "transformersInfo" = never Parameters Name Type params? { search?: { query?: string | undefined; owner?: string | undefined; task?: “other” | “text-classification” | “token-classification” | “table-question-answering” | “question-answering” | … 48 more … | undefined; tags?: string[] | undefined; } | undefined; hubUrl?: string | undefined; additionalFields?: T[] | und… Returns AsyncGenerator \< ModelEntry & Pick \< ApiModelInfo , T >> Defined in hub/src/lib/list-models.ts:53 listSpaces ▸ listSpaces \< T >( params? ): AsyncGenerator \< SpaceEntry & Pick \< ApiSpaceInfo , T >> Type parameters Name Type T extends "models" | "datasets" | "author" | "cardData" | "disabled" | "gitalyUid" | "createdAt" | "tags" | "sha" | "subdomain" | "runtime" = never Parameters Name Type params? { search?: { query?: string | undefined; owner?: string | undefined; tags?: string[] | undefined; } | undefined; hubUrl?: string | undefined; fetch?: { (input: URL | RequestInfo, init?: RequestInit | undefined): Promise\<…>; (input: string | … 1 more … | Request, init?: RequestInit | undefined): Promise\<…>; }… Returns AsyncGenerator \< SpaceEntry & Pick \< ApiSpaceInfo , T >> Defined in hub/src/lib/list-spaces.ts:44 modelInfo ▸ modelInfo \< T >( params ): Promise \< ModelEntry & Pick \< ApiModelInfo , T >> Type parameters Name Type T extends "spaces" | "author" | "cardData" | "disabled" | "gitalyUid" | "createdAt" | "tags" | "sha" | "downloadsAllTime" | "config" | "library_name" | "model-index" | "safetensors" | "transformersInfo" = never Parameters Name Type params { additionalFields? : T [] ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; name : string ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< ModelEntry & Pick \< ApiModelInfo , T >> Defined in hub/src/lib/model-info.ts:9 oauthHandleRedirect ▸ oauthHandleRedirect ( opts? ): Promise \< OAuthResult > To call after the OAuth provider redirects back to the app. There is also a helper function oauthHandleRedirectIfPresent , which will call oauthHandleRedirect if the URL contains an oauth code in the query parameters and return false otherwise. Parameters Name Type Description opts? Object - opts.codeVerifier? string codeVerifier generated by oauthLoginUrl Default ts localStorage.getItem("huggingface.co:oauth:code_verifier") opts.hubUrl? string The URL of the hub. Defaults to HUB_URL. opts.nonce? string nonce generated by oauthLoginUrl Default ts localStorage.getItem("huggingface.co:oauth:nonce") opts.redirectedUrl? string The URL to analyze. Default ts window.location.href Returns Promise \< OAuthResult > Defined in hub/src/lib/oauth-handle-redirect.ts:114 oauthHandleRedirectIfPresent ▸ oauthHandleRedirectIfPresent ( opts? ): Promise \< OAuthResult | false > To call after the OAuth provider redirects back to the app. 
It returns false if the URL does not contain an oauth code in the query parameters, otherwise it calls oauthHandleRedirect . Depending on your app, you may want to call oauthHandleRedirect directly instead. Parameters Name Type Description opts? Object - opts.codeVerifier? string codeVerifier generated by oauthLoginUrl Default ts localStorage.getItem("huggingface.co:oauth:code_verifier") opts.hubUrl? string The URL of the hub. Defaults to HUB_URL. opts.nonce? string nonce generated by oauthLoginUrl Default ts localStorage.getItem("huggingface.co:oauth:nonce") opts.redirectedUrl? string The URL to analyze. Default ts window.location.href Returns Promise \< OAuthResult | false > Defined in hub/src/lib/oauth-handle-redirect.ts:284 oauthLoginUrl ▸ oauthLoginUrl ( opts? ): Promise \< string > Use “Sign in with Hub” to authenticate a user, and get oauth user info / access token. Returns an url to redirect to. After the user is redirected back to your app, call oauthHandleRedirect to get the oauth user info / access token. When called from inside a static Space with OAuth enabled, it will load the config from the space, otherwise you need to at least specify the client ID of your OAuth App. Parameters Name Type Description opts? Object - opts.clientId? string OAuth client ID. For static Spaces, you can omit this and it will be loaded from the Space config, as long as hf_oauth: true is present in the README.md’s metadata. For other Spaces, it is available to the backend in the OAUTH_CLIENT_ID environment variable, as long as hf_oauth: true is present in the README.md’s metadata. You can also create a Developer Application at https://huggingface.co/settings/connected-applications and use its client ID. opts.hubUrl? string - opts.localStorage? Object If provided, will be filled with the code verifier and nonce used for the OAuth flow, instead of using localStorage. When calling oauthHandleRedirectIfPresent or oauthHandleRedirect you will need to provide the same values. opts.localStorage.codeVerifier? string - opts.localStorage.nonce? string - opts.redirectUrl? string Redirect URI, defaults to the current URL. For Spaces, any URL within the Space is allowed. For Developer Applications, you can add any URL you want to the list of allowed redirect URIs at https://huggingface.co/settings/connected-applications . opts.scopes? string OAuth scope, a list of space-separated scopes. For static Spaces, you can omit this and it will be loaded from the Space config, as long as hf_oauth: true is present in the README.md’s metadata. For other Spaces, it is available to the backend in the OAUTH_SCOPES environment variable, as long as hf_oauth: true is present in the README.md’s metadata. Defaults to “openid profile”. You can also create a Developer Application at https://huggingface.co/settings/connected-applications and use its scopes. See https://huggingface.co/docs/hub/oauth for a list of available scopes. opts.state? string State to pass to the OAuth provider, which will be returned in the call to oauthLogin after the redirect. Returns Promise \< string > Example Copied import { oauthLoginUrl, oauthHandleRedirectIfPresent } from "@huggingface/hub" ; const oauthResult = await oauthHandleRedirectIfPresent (); if (!oauthResult) { // If the user is not logged in, redirect to the login page window . location . href = await oauthLoginUrl (); } // You can use oauthResult.accessToken, oauthResult.accessTokenExpiresAt and oauthResult.userInfo console . 
log (oauthResult); (Theoretically, this function could be used to authenticate a user for any OAuth provider supporting PKCE and OpenID Connect by changing hubUrl , but it is currently only tested with the Hugging Face Hub.) Defined in hub/src/lib/oauth-login-url.ts:31 parseRepoType ▸ parseRepoType ( type ): RepoType Parameters Name Type type string Returns RepoType Defined in hub/src/lib/cache-management.ts:254 parseSafetensorsMetadata ▸ parseSafetensorsMetadata ( params ): Promise \< SetRequired \< SafetensorsParseFromRepo , "parameterCount" >> Analyze model.safetensors.index.json or model.safetensors from a model hosted on Hugging Face using smart range requests to extract its metadata. Parameters Name Type params { computeParametersCount : true ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; path? : string ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< SetRequired \< SafetensorsParseFromRepo , "parameterCount" >> Defined in hub/src/lib/parse-safetensors-metadata.ts:177 ▸ parseSafetensorsMetadata ( params ): Promise \< SafetensorsParseFromRepo > Parameters Name Type params { computeParametersCount? : boolean ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; path? : string ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< SafetensorsParseFromRepo > Defined in hub/src/lib/parse-safetensors-metadata.ts:199 parseSafetensorsShardFilename ▸ parseSafetensorsShardFilename ( filename ): SafetensorsShardFileInfo | null Parameters Name Type filename string Returns SafetensorsShardFileInfo | null Defined in hub/src/lib/parse-safetensors-metadata.ts:24 pathsInfo ▸ pathsInfo ( params ): Promise \< PathInfo & { lastCommit : CommitInfo ; securityFileStatus : SecurityFileStatus }[]> Parameters Name Type params { expand : true ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; paths : string [] ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< PathInfo & { lastCommit : CommitInfo ; securityFileStatus : SecurityFileStatus }[]> Defined in hub/src/lib/paths-info.ts:37 ▸ pathsInfo ( params ): Promise \< PathInfo []> Parameters Name Type params { expand? : boolean ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; paths : string [] ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< PathInfo []> Defined in hub/src/lib/paths-info.ts:50 scanCacheDir ▸ scanCacheDir ( cacheDir? 
): Promise \< HFCacheInfo > Parameters Name Type Default value cacheDir undefined | string undefined Returns Promise \< HFCacheInfo > Defined in hub/src/lib/cache-management.ts:72 scanCachedRepo ▸ scanCachedRepo ( repoPath ): Promise \< CachedRepoInfo > Parameters Name Type repoPath string Returns Promise \< CachedRepoInfo > Defined in hub/src/lib/cache-management.ts:114 scanRefsDir ▸ scanRefsDir ( refsPath , refsByHash ): Promise \< void > Parameters Name Type refsPath string refsByHash Map \< string , string []> Returns Promise \< void > Defined in hub/src/lib/cache-management.ts:204 scanSnapshotDir ▸ scanSnapshotDir ( revisionPath , cachedFiles , blobStats ): Promise \< void > Parameters Name Type revisionPath string cachedFiles CachedFileInfo [] blobStats Map \< string , Stats > Returns Promise \< void > Defined in hub/src/lib/cache-management.ts:219 snapshotDownload ▸ snapshotDownload ( params ): Promise \< string > Downloads an entire repository at a given revision in the cache directory getHFHubCachePath . You can list all cached repositories using scanCachedRepo Parameters Name Type params { cacheDir? : string ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; repo : RepoDesignation ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< string > Remarks It uses internally downloadFileToCacheDir . Defined in hub/src/lib/snapshot-download.ts:19 spaceInfo ▸ spaceInfo \< T >( params ): Promise \< SpaceEntry & Pick \< ApiSpaceInfo , T >> Type parameters Name Type T extends "models" | "datasets" | "author" | "cardData" | "disabled" | "gitalyUid" | "createdAt" | "tags" | "sha" | "subdomain" | "runtime" = never Parameters Name Type params { additionalFields? : T [] ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string ; name : string ; revision? : string } & Partial \< CredentialsParams > Returns Promise \< SpaceEntry & Pick \< ApiSpaceInfo , T >> Defined in hub/src/lib/space-info.ts:10 uploadFile ▸ uploadFile ( params ): Promise \< CommitOutput > Parameters Name Type params { abortSignal? : AbortSignal ; branch? : string ; commitDescription? : string ; commitTitle? : string ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; file : URL | File | { content : ContentSource ; path : string } ; hubUrl? : string ; isPullRequest? : boolean ; parentCommit? : string ; repo : RepoDesignation ; useWebWorkers? : boolean | { minSize? : number ; poolSize? : number } } & Partial \< CredentialsParams > Returns Promise \< CommitOutput > Defined in hub/src/lib/upload-file.ts:5 uploadFiles ▸ uploadFiles ( params ): Promise \< CommitOutput > Parameters Name Type params { abortSignal? : AbortSignal ; branch? : string ; commitDescription? : string ; commitTitle? : string ; fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; files : ( URL | File | { content : ContentSource ; path : string })[] ; hubUrl? : string ; isPullRequest? : boolean ; parentCommit? : string ; repo : RepoDesignation ; useWebWorkers? : boolean | { minSize? : number ; poolSize? 
: number } } & Partial \< CredentialsParams > Returns Promise \< CommitOutput > Defined in hub/src/lib/upload-files.ts:5 uploadFilesWithProgress ▸ uploadFilesWithProgress ( params ): AsyncGenerator \< CommitProgressEvent , CommitOutput > Uploads with progress Needs XMLHttpRequest to be available for progress events for uploads Set useWebWorkers to true in order to have progress events for hashing Parameters Name Type params { abortSignal? : AbortSignal ; branch? : string ; commitDescription? : string ; commitTitle? : string ; files : ( URL | File | { content : ContentSource ; path : string })[] ; hubUrl? : string ; isPullRequest? : boolean ; parentCommit? : string ; repo : RepoDesignation ; useWebWorkers? : boolean | { minSize? : number ; poolSize? : number } } & Partial \< CredentialsParams > Returns AsyncGenerator \< CommitProgressEvent , CommitOutput > Defined in hub/src/lib/upload-files-with-progress.ts:20 whoAmI ▸ whoAmI ( params ): Promise \< WhoAmI & { auth : AuthInfo }> Parameters Name Type params { fetch? : ( input : URL | RequestInfo, init? : RequestInit ) => Promise \< Response >( input : string | URL | Request , init? : RequestInit ) => Promise \< Response > ; hubUrl? : string } & CredentialsParams Returns Promise \< WhoAmI & { auth : AuthInfo }> Defined in hub/src/lib/who-am-i.ts:61
🧨_Diffusers’_Ethical_Guidelines.txt
🧨 Diffusers’ Ethical Guidelines Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation 🧨 Diffusers’ Ethical Guidelines Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started 🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. 
The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency : we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency : we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity : with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility : the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility : we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility : as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab : it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation : the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion : It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models . Safety Checker : It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub : in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. 
Licensing : OpenRAILs , a new type of licensing, allows us to ensure free access while having a set of restrictions that ensure more responsible use.
🟧_Label_Studio_on_Spaces.txt
🟧 Label Studio on Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation 🟧 Label Studio on Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Your first Docker Spaces Example Docker Spaces JupyterLab on Spaces Argilla on Spaces Livebook on Spaces Label Studio on Spaces Aim on Spaces Shiny on Spaces ZenML on Spaces ChatUI on Spaces Panel on Spaces Tabby on Spaces Giskard on Spaces Evidence on Spaces marimo on Spaces Langfuse on Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started 🟧 Label Studio on Spaces Label Studio is an open-source data labeling platform for labeling, annotating, and exploring many different data types. Additionally, Label Studio includes a powerful machine learning interface that can be used for new model training, active learning, supervised learning, and many other training techniques. This guide will teach you how to deploy Label Studio for data labeling and annotation within the Hugging Face Hub. You can use the default configuration of Label Studio as a self-contained application hosted completely on the Hub using Docker for demonstration and evaluation purposes, or you can attach your own database and cloud storage to host a fully-featured production-ready application hosted on Spaces. ⚡️ Deploy Label Studio on Spaces You can deploy Label Studio on Spaces with just a few clicks: Spaces requires you to define: An Owner : either your personal account or an organization you’re a part of. A Space name : the name of the Space within the account you’re creating the Space. The Visibility : private if you want the Space to be visible only to you or your organization, or public if you want it to be visible to other users or applications using the Label Studio API (suggested). 
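If you prefer scripting over the deploy button, the following is a hedged sketch using the huggingface_hub Python client; the template Space id and the target name are assumptions, so verify them before running.

from huggingface_hub import duplicate_space

duplicate_space(
    from_id="LabelStudio/LabelStudio",  # assumed id of the Label Studio template Space
    to_id="my-username/label-studio",   # placeholder owner/name for your new Space
    private=False,                      # public is suggested for use with the Label Studio API
)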
🚀 Using the Default Configuration By default, Label Studio is installed in Spaces with a configuration that uses local storage for the application database to store configuration, account credentials, and project information. Labeling tasks and data items are also held in local storage. Storage in Hugging Face Spaces is ephemeral, and the data you store in the default configuration can be lost in a reboot or reset of the Space. Because of this, we strongly encourage you to use the default configuration only for testing and demonstration purposes. After launching Label Studio, you will be presented with the standard login screen. You can start by creating a new account using your email address and logging in with your new credentials. Periodically after logging in, Label Studio will warn you that the storage is ephemeral and data could be lost if your Space is restarted. You will also be presented with a prompt from Heidi, the helpful Label Studio mascot, to create a new project to start labeling your data. To get started, check out the Label Studio “Zero to One” tutorial with a guide on how to build an annotation interface for sentiment analysis. 🛠️ Configuring a Production-Ready Instance of Label Studio To make your Space production-ready, you will need to make three configuration changes: Disable the unrestricted creation of new accounts. Enable persistence by attaching an external database. Attach cloud storage for labeling tasks. Disable Unrestricted Creation of New Accounts The default configuration on Label Studio allows for the unrestricted creation of new accounts for anyone who has the URL for your application. You can restrict signups by adding the following configuration secrets to your Space Settings . LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK : Setting this value to true will disable unrestricted account creation. LABEL_STUDIO_USERNAME : This is the username of the account that you will use as the first user in your Label Studio Space. It should be a valid email address. LABEL_STUDIO_PASSWORD : The password that will be associated with the first user account. Restart the Space to apply these settings. The ability to create new accounts from the login screen will be disabled. To create new accounts, you will need to invite new users in the Organization settings in the Label Studio application. Enable Configuration Persistence By default, this Space stores all project configuration and data annotations in local storage with SQLite. If the Space is reset, all configuration and annotation data in the Space will be lost. You can enable configuration persistence by connecting an external Postgres database to your space , guaranteeing that all project and annotation settings are preserved. Set the following secret variables to match your own hosted instance of Postgres. We strongly recommend setting these as secrets to prevent leaking information about your database service to the public in your Space definition. DJANGO_DB : Set this to default . POSTGRE_NAME : Set this to the name of the Postgres database. POSTGRE_USER : Set this to the Postgres username. POSTGRE_PASSWORD : Set this to the password for your Postgres user. POSTGRE_HOST : Set this to the host that your Postgres database is running on. POSTGRE_PORT : Set this to the port that your Postgres database is running on. STORAGE_PERSISTENCE : Set this to 1 to remove the warning about ephemeral storage. Restart the Space to apply these settings.
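If you manage several Spaces, these secrets can also be set programmatically with the huggingface_hub client; the sketch below is illustrative and uses placeholder values that you would replace with your own credentials:
Copied
from huggingface_hub import HfApi

api = HfApi(token="hf_...")             # token with write access to the Space
repo_id = "your-username/label-studio"  # hypothetical Space id

secrets = {
    "LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK": "true",
    "LABEL_STUDIO_USERNAME": "admin@example.com",
    "LABEL_STUDIO_PASSWORD": "change-me",
    "DJANGO_DB": "default",
    "POSTGRE_NAME": "labelstudio",
    "POSTGRE_USER": "labelstudio",
    "POSTGRE_PASSWORD": "***",
    "POSTGRE_HOST": "db.example.com",
    "POSTGRE_PORT": "5432",
    "STORAGE_PERSISTENCE": "1",
}
for key, value in secrets.items():
    api.add_space_secret(repo_id=repo_id, key=key, value=value)

# Secrets only take effect after the Space restarts.
api.restart_space(repo_id)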
Information about users, projects, and annotations will be stored in the database, and will be reloaded by Label Studio if the space is restarted or reset. Enable Cloud Storage By default, the only data storage enabled for this Space is local. In the case of a Space reset, all data will be lost. To enable permanent storage, you must enable a cloud storage connector . Choose the appropriate cloud connector and configure the secrets for it. Amazon S3 STORAGE_TYPE : Set this to s3 . STORAGE_AWS_ACCESS_KEY_ID : <YOUR_ACCESS_KEY_ID> STORAGE_AWS_SECRET_ACCESS_KEY : <YOUR_SECRET_ACCESS_KEY> STORAGE_AWS_BUCKET_NAME : <YOUR_BUCKET_NAME> STORAGE_AWS_REGION_NAME : <YOUR_BUCKET_REGION> STORAGE_AWS_FOLDER : Set this to an empty string. Google Cloud Storage STORAGE_TYPE : Set this to gcs . STORAGE_GCS_BUCKET_NAME : <YOUR_BUCKET_NAME> STORAGE_GCS_PROJECT_ID : <YOUR_PROJECT_ID> STORAGE_GCS_FOLDER : Set this to an empty string. GOOGLE_APPLICATION_CREDENTIALS : Set this to /opt/heartex/secrets/key.json . Azure Blob Storage STORAGE_TYPE : Set this to azure . STORAGE_AZURE_ACCOUNT_NAME : <YOUR_STORAGE_ACCOUNT> STORAGE_AZURE_ACCOUNT_KEY : <YOUR_STORAGE_KEY> STORAGE_AZURE_CONTAINER_NAME : <YOUR_CONTAINER_NAME> STORAGE_AZURE_FOLDER : Set this to an empty string. 🤗 Next Steps, Feedback, and Support To get started with Label Studio, check out the Label Studio “Zero to One” tutorial , which walks you through an example sentiment analysis annotation project. You can find a full set of resources about Label Studio and the Label Studio community at the Label Studio Home Page . This includes full documentation , an interactive playground for trying out different annotation interfaces, and links to join the Label Studio Slack Community .
Testing.txt
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Testing Let’s take a look at how 🤗 Transformers models are tested and how you can write new tests and improve the existing ones. There are 2 test suites in the repository: tests — tests for the general API examples — tests primarily for various applications that aren’t part of the API How transformers are tested Once a PR is submitted it gets tested with 9 CircleCi jobs. Every new commit to that PR gets retested. These jobs are defined in this config file , so that if needed you can reproduce the same environment on your machine. These CI jobs don’t run @slow tests. There are 3 jobs run by github actions : torch hub integration : checks whether torch hub integration works. self-hosted (push) : runs fast tests on GPU only on commits on main . It only runs if a commit on main has updated the code in one of the following folders: src , tests , .github (to prevent running on added model cards, notebooks, etc.) self-hosted runner : runs normal and slow tests on GPU in tests and examples : Copied RUN_SLOW=1 pytest tests/ RUN_SLOW=1 pytest examples/ The results can be observed here . Running tests Choosing which tests to run This document goes into many details of how tests can be run. If after reading everything, you need even more details you will find them here . Here are some most useful ways of running tests. Run all: Copied pytest or: Copied make test Note that the latter is defined as: Copied python -m pytest -n auto --dist=loadfile -s -v ./tests/ which tells pytest to: run as many test processes as they are CPU cores (which could be too many if you don’t have a ton of RAM!) ensure that all tests from the same file will be run by the same test process do not capture output run in verbose mode Getting the list of all tests All tests of the test suite: Copied pytest --collect-only -q All tests of a given test file: Copied pytest tests/test_optimization.py --collect-only -q Run a specific test module To run an individual test module: Copied pytest tests/utils/test_logging.py Run specific tests Since unittest is used inside most of the tests, to run specific subtests you need to know the name of the unittest class containing those tests. 
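For orientation, here is a minimal, toy test module of the shape those commands expect (an illustrative sketch only, not an actual file from the repository; the real tests/test_optimization.py is much richer):
Copied
# tests/test_optimization_toy.py - a made-up stand-in, not a real file in the repository
import unittest


class OptimizationTest(unittest.TestCase):
    def test_adam_w(self):
        # a real test would exercise the optimizer here
        self.assertEqual(2 + 2, 4)

    def test_adafactor(self):
        self.assertEqual(3 * 3, 9)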
For example, it could be: Copied pytest tests/test_optimization.py::OptimizationTest::test_adam_w Here: tests/test_optimization.py - the file with tests OptimizationTest - the name of the class test_adam_w - the name of the specific test function If the file contains multiple classes, you can choose to run only tests of a given class. For example: Copied pytest tests/test_optimization.py::OptimizationTest will run all the tests inside that class. As mentioned earlier you can see what tests are contained inside the OptimizationTest class by running: Copied pytest tests/test_optimization.py::OptimizationTest --collect-only -q You can run tests by keyword expressions. To run only tests whose name contains adam : Copied pytest -k adam tests/test_optimization.py Logical and and or can be used to indicate whether all keywords should match or either. not can be used to negate. To run all tests except those whose name contains adam : Copied pytest -k "not adam" tests/test_optimization.py And you can combine the two patterns in one: Copied pytest -k "ada and not adam" tests/test_optimization.py For example to run both test_adafactor and test_adam_w you can use: Copied pytest -k "test_adafactor or test_adam_w" tests/test_optimization.py Note that we use or here, since we want either of the keywords to match to include both. If you want to include only tests that include both patterns, and is to be used: Copied pytest -k "test and ada" tests/test_optimization.py Run accelerate tests Sometimes you need to run accelerate tests on your models. For that you can just add -m accelerate_tests to your command, if let’s say you want to run these tests on OPT run: Copied RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py Run documentation tests In order to test whether the documentation examples are correct, you should check that the doctests are passing. As an example, let’s use WhisperModel.forward ’s docstring Copied r""" Returns: Example: ```python >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" Just run the following line to automatically test every docstring example in the desired file: Copied pytest --doctest-modules <path_to_file_or_dir> If the file has a markdown extention, you should add the --doctest-glob="*.md" argument. Run only modified tests You can run the tests related to the unstaged files or the current branch (according to Git) by using pytest-picked . This is a great way of quickly testing your changes didn’t break anything, since it won’t run the tests related to files you didn’t touch. Copied pip install pytest-picked Copied pytest --picked All tests will be run from files and folders which are modified, but not yet committed. 
Automatically rerun failed tests on source modification pytest-xdist provides a very useful feature of detecting all failed tests, and then waiting for you to modify files and continuously re-rerun those failing tests until they pass while you fix them. So that you don’t need to re start pytest after you made the fix. This is repeated until all tests pass after which again a full run is performed. Copied pip install pytest-xdist To enter the mode: pytest -f or pytest --looponfail File changes are detected by looking at looponfailroots root directories and all of their contents (recursively). If the default for this value does not work for you, you can change it in your project by setting a configuration option in setup.cfg : Copied [tool:pytest] looponfailroots = transformers tests or pytest.ini / tox.ini files: Copied [pytest] looponfailroots = transformers tests This would lead to only looking for file changes in the respective directories, specified relatively to the ini-file’s directory. pytest-watch is an alternative implementation of this functionality. Skip a test module If you want to run all test modules, except a few you can exclude them by giving an explicit list of tests to run. For example, to run all except test_modeling_*.py tests: Copied pytest * ls -1 tests/*py | grep -v test_modeling* Clearing state CI builds and when isolation is important (against speed), cache should be cleared: Copied pytest --cache-clear tests Running tests in parallel As mentioned earlier make test runs tests in parallel via pytest-xdist plugin ( -n X argument, e.g. -n 2 to run 2 parallel jobs). pytest-xdist ’s --dist= option allows one to control how the tests are grouped. --dist=loadfile puts the tests located in one file onto the same process. Since the order of executed tests is different and unpredictable, if running the test suite with pytest-xdist produces failures (meaning we have some undetected coupled tests), use pytest-replay to replay the tests in the same order, which should help with then somehow reducing that failing sequence to a minimum. Test order and repetition It’s good to repeat the tests several times, in sequence, randomly, or in sets, to detect any potential inter-dependency and state-related bugs (tear down). And the straightforward multiple repetition is just good to detect some problems that get uncovered by randomness of DL. Repeat tests pytest-flakefinder : Copied pip install pytest-flakefinder And then run every test multiple times (50 by default): Copied pytest --flake-finder --flake-runs=5 tests/test_failing_test.py This plugin doesn’t work with -n flag from pytest-xdist . There is another plugin pytest-repeat , but it doesn’t work with unittest . Run tests in a random order Copied pip install pytest-random-order Important: the presence of pytest-random-order will automatically randomize tests, no configuration change or command line options is required. As explained earlier this allows detection of coupled tests - where one test’s state affects the state of another. When pytest-random-order is installed it will print the random seed it used for that session, e.g: Copied pytest tests [...] Using --random-order-bucket=module Using --random-order-seed=573663 So that if the given particular sequence fails, you can reproduce it by adding that exact seed, e.g.: Copied pytest --random-order-seed=573663 [...] Using --random-order-bucket=module Using --random-order-seed=573663 It will only reproduce the exact order if you use the exact same list of tests (or no list at all). 
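To make the idea of coupled tests concrete, here is a contrived sketch (not taken from the test suite) where the second test silently depends on the first having run before it - exactly the kind of hidden state a randomized order will surface:
Copied
# contrived_coupled_tests.py - illustrates a hidden inter-test dependency
_registry = []


def test_writer():
    _registry.append("item")
    assert len(_registry) == 1


def test_reader():
    # implicitly relies on test_writer having run first;
    # fails when a randomized order schedules it earlier
    assert _registry == ["item"]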
Once you start to manually narrowing down the list you can no longer rely on the seed, but have to list them manually in the exact order they failed and tell pytest to not randomize them instead using --random-order-bucket=none , e.g.: Copied pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py To disable the shuffling for all tests: Copied pytest --random-order-bucket=none By default --random-order-bucket=module is implied, which will shuffle the files on the module levels. It can also shuffle on class , package , global and none levels. For the complete details please see its documentation . Another randomization alternative is: pytest-randomly . This module has a very similar functionality/interface, but it doesn’t have the bucket modes available in pytest-random-order . It has the same problem of imposing itself once installed. Look and feel variations pytest-sugar pytest-sugar is a plugin that improves the look-n-feel, adds a progressbar, and show tests that fail and the assert instantly. It gets activated automatically upon installation. Copied pip install pytest-sugar To run tests without it, run: Copied pytest -p no:sugar or uninstall it. Report each sub-test name and its progress For a single or a group of tests via pytest (after pip install pytest-pspec ): Copied pytest --pspec tests/test_optimization.py Instantly shows failed tests pytest-instafail shows failures and errors instantly instead of waiting until the end of test session. Copied pip install pytest-instafail Copied pytest --instafail To GPU or not to GPU On a GPU-enabled setup, to test in CPU-only mode add CUDA_VISIBLE_DEVICES="" for CUDA GPUs: Copied CUDA_VISIBLE_DEVICES= "" pytest tests/utils/test_logging.py or if you have multiple gpus, you can specify which one is to be used by pytest . For example, to use only the second gpu if you have gpus 0 and 1 , you can run: Copied CUDA_VISIBLE_DEVICES= "1" pytest tests/utils/test_logging.py For Intel GPUs, use ZE_AFFINITY_MASK instead of CUDA_VISIBLE_DEVICES in the above example. This is handy when you want to run different tasks on different GPUs. Some tests must be run on CPU-only, others on either CPU or GPU or TPU, yet others on multiple-GPUs. The following skip decorators are used to set the requirements of tests CPU/GPU/XPU/TPU-wise: require_torch - this test will run only under torch require_torch_gpu - as require_torch plus requires at least 1 GPU require_torch_multi_gpu - as require_torch plus requires at least 2 GPUs require_torch_non_multi_gpu - as require_torch plus requires 0 or 1 GPUs require_torch_up_to_2_gpus - as require_torch plus requires 0 or 1 or 2 GPUs require_torch_xla - as require_torch plus requires at least 1 TPU Let’s depict the GPU requirements in the following table: n gpus decorator >= 0 @require_torch >= 1 @require_torch_gpu >= 2 @require_torch_multi_gpu < 2 @require_torch_non_multi_gpu < 3 @require_torch_up_to_2_gpus For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed: Copied @require_torch_multi_gpu def test_example_with_multi_gpu (): If a test requires tensorflow use the require_tf decorator. For example: Copied @require_tf def test_tf_thing_with_tensorflow (): These decorators can be stacked. 
For example, if a test is slow and requires at least one GPU under pytorch, here is how to set it up: Copied @require_torch_gpu @slow def test_example_slow_on_gpu (): Some decorators like @parametrized rewrite test names, therefore @require_* skip decorators have to be listed last for them to work correctly. Here is an example of the correct usage: Copied @parameterized.expand( ... ) @require_torch_multi_gpu def test_integration_foo (): This order problem doesn’t exist with @pytest.mark.parametrize , you can put it first or last and it will still work. But it only works with non-unittests. Inside tests: How many GPUs are available: Copied from transformers.testing_utils import get_gpu_count n_gpu = get_gpu_count() # works with torch and tf Testing with a specific PyTorch backend or device To run the test suite on a specific torch device add TRANSFORMERS_TEST_DEVICE="$device" where $device is the target backend. For example, to test on CPU only: Copied TRANSFORMERS_TEST_DEVICE= "cpu" pytest tests/utils/test_logging.py This variable is useful for testing custom or less common PyTorch backends such as mps , xpu or npu . It can also be used to achieve the same effect as CUDA_VISIBLE_DEVICES by targeting specific GPUs or testing in CPU-only mode. Certain devices will require an additional import after importing torch for the first time. This can be specified using the environment variable TRANSFORMERS_TEST_BACKEND : Copied TRANSFORMERS_TEST_BACKEND= "torch_npu" pytest tests/utils/test_logging.py Alternative backends may also require the replacement of device-specific functions. For example torch.cuda.manual_seed may need to be replaced with a device-specific seed setter like torch.npu.manual_seed or torch.xpu.manual_seed to correctly set a random seed on the device. To specify a new backend with backend-specific device functions when running the test suite, create a Python device specification file spec.py in the format: Copied import torch import torch_npu # for xpu, replace it with `import intel_extension_for_pytorch` # !! Further additional imports can be added here !! # Specify the device name (eg. 'cuda', 'cpu', 'npu', 'xpu', 'mps') DEVICE_NAME = 'npu' # Specify device-specific backends to dispatch to. # If not specified, will fallback to 'default' in 'testing_utils.py` MANUAL_SEED_FN = torch.npu.manual_seed EMPTY_CACHE_FN = torch.npu.empty_cache DEVICE_COUNT_FN = torch.npu.device_count This format also allows for specification of any additional imports required. To use this file to replace equivalent methods in the test suite, set the environment variable TRANSFORMERS_TEST_DEVICE_SPEC to the path of the spec file, e.g. TRANSFORMERS_TEST_DEVICE_SPEC=spec.py . Currently, only MANUAL_SEED_FN , EMPTY_CACHE_FN and DEVICE_COUNT_FN are supported for device-specific dispatch. Distributed training pytest can’t deal with distributed training directly. If this is attempted - the sub-processes don’t do the right thing and end up thinking they are pytest and start running the test suite in loops. It works, however, if one spawns a normal process that then spawns off multiple workers and manages the IO pipes. Here are some tests that use it: test_trainer_distributed.py test_deepspeed.py To jump right into the execution point, search for the execute_subprocess_async call in those tests. 
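In outline, that pattern amounts to spawning the distributed launcher from an ordinary test process and asserting on its exit code; the following simplified sketch shows the shape of it (the script path is a placeholder, and the real execute_subprocess_async helper does more work around streaming output and timeouts):
Copied
# simplified sketch of launching a distributed job from a normal test process
import subprocess
import sys


def test_distributed_sketch():
    cmd = [
        sys.executable,
        "-m",
        "torch.distributed.run",            # the torchrun entry point
        "--nproc_per_node=2",
        "path/to/some_training_script.py",  # placeholder script path
        "--output_dir",
        "/tmp/test_dist",
    ]
    # the pytest process stays ordinary; only the child process becomes distributed
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode == 0, result.stderr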
You will need at least 2 GPUs to see these tests in action: Copied CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py Output capture During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, its according captured output will usually be shown along with the failure traceback. To disable output capturing and to get the stdout and stderr normally, use -s or --capture=no : Copied pytest -s tests/utils/test_logging.py To send test results to JUnit format output: Copied pytest tests --junitxml=result.xml Color control To have no color (e.g., yellow on white background is not readable): Copied pytest --color=no tests/utils/test_logging.py Sending test report to online pastebin service Creating a URL for each test failure: Copied pytest --pastebin=failed tests/utils/test_logging.py This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure. Creating a URL for a whole test session log: Copied pytest --pastebin=all tests/utils/test_logging.py Writing tests 🤗 transformers tests are based on unittest , but run by pytest , so most of the time features from both systems can be used. You can read here which features are supported, but the important thing to remember is that most pytest fixtures don’t work. Neither parametrization, but we use the module parameterized that works in a similar way. Parametrization Often, there is a need to run the same test multiple times, but with different arguments. It could be done from within the test, but then there is no way of running that test for just one set of arguments. Copied # test_this1.py import unittest from parameterized import parameterized class TestMathUnitTest (unittest.TestCase): @parameterized.expand( [ ( "negative" , - 1.5 , - 2.0 ), ( "integer" , 1 , 1.0 ), ( "large fraction" , 1.6 , 1 ), ] ) def test_floor ( self, name, input , expected ): assert_equal(math.floor( input ), expected) Now, by default this test will be run 3 times, each time with the last 3 arguments of test_floor being assigned the corresponding arguments in the parameter list. and you could run just the negative and integer sets of params with: Copied pytest -k "negative and integer" tests/test_mytest.py or all but negative sub-tests, with: Copied pytest -k "not negative" tests/test_mytest.py Besides using the -k filter that was just mentioned, you can find out the exact name of each sub-test and run any or all of them using their exact names. Copied pytest test_this1.py --collect-only -q and it will list: Copied test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer test_this1.py::TestMathUnitTest::test_floor_2_large_fraction So now you can run just 2 specific sub-tests: Copied pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer The module parameterized which is already in the developer dependencies of transformers works for both: unittests and pytest tests. If, however, the test is not a unittest , you may use pytest.mark.parametrize (or you may see it being used in some existing tests, mostly under examples ). 
Here is the same example, this time using pytest ’s parametrize marker: Copied # test_this2.py import pytest @pytest.mark.parametrize( "name, input, expected" , [ ( "negative" , - 1.5 , - 2.0 ), ( "integer" , 1 , 1.0 ), ( "large fraction" , 1.6 , 1 ), ], ) def test_floor ( name, input , expected ): assert_equal(math.floor( input ), expected) Same as with parameterized , with pytest.mark.parametrize you can have a fine control over which sub-tests are run, if the -k filter doesn’t do the job. Except, this parametrization function creates a slightly different set of names for the sub-tests. Here is what they look like: Copied pytest test_this2.py --collect-only -q and it will list: Copied test_this2.py::test_floor[integer-1-1.0] test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[large fraction-1.6-1] So now you can run just the specific test: Copied pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0] as in the previous example. Files and directories In tests often we need to know where things are relative to the current test file, and it’s not trivial since the test could be invoked from more than one directory or could reside in sub-directories with different depths. A helper class transformers.test_utils.TestCasePlus solves this problem by sorting out all the basic paths and provides easy accessors to them: pathlib objects (all fully resolved): test_file_path - the current test file path, i.e. __file__ test_file_dir - the directory containing the current test file tests_dir - the directory of the tests test suite examples_dir - the directory of the examples test suite repo_root_dir - the directory of the repository src_dir - the directory of src (i.e. where the transformers sub-dir resides) stringified paths---same as above but these return paths as strings, rather than pathlib objects: test_file_path_str test_file_dir_str tests_dir_str examples_dir_str repo_root_dir_str src_dir_str To start using those all you need is to make sure that the test resides in a subclass of transformers.test_utils.TestCasePlus . For example: Copied from transformers.testing_utils import TestCasePlus class PathExampleTest ( TestCasePlus ): def test_something_involving_local_locations ( self ): data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro" If you don’t need to manipulate paths via pathlib or you just need a path as a string, you can always invoked str() on the pathlib object or use the accessors ending with _str . For example: Copied from transformers.testing_utils import TestCasePlus class PathExampleTest ( TestCasePlus ): def test_something_involving_stringified_locations ( self ): examples_dir = self.examples_dir_str Temporary files and directories Using unique temporary files and directories are essential for parallel test running, so that the tests won’t overwrite each other’s data. Also we want to get the temporary files and directories removed at the end of each test that created them. Therefore, using packages like tempfile , which address these needs is essential. However, when debugging tests, you need to be able to see what goes into the temporary file or directory and you want to know it’s exact path and not having it randomized on every test re-run. A helper class transformers.test_utils.TestCasePlus is best used for such purposes. It’s a sub-class of unittest.TestCase , so we can easily inherit from it in the test modules. 
Here is an example of its usage: Copied from transformers.testing_utils import TestCasePlus class ExamplesTests ( TestCasePlus ): def test_whatever ( self ): tmp_dir = self.get_auto_remove_tmp_dir() This code creates a unique temporary directory, and sets tmp_dir to its location. Create a unique temporary dir: Copied def test_whatever ( self ): tmp_dir = self.get_auto_remove_tmp_dir() tmp_dir will contain the path to the created temporary dir. It will be automatically removed at the end of the test. Create a temporary dir of my choice, ensure it’s empty before the test starts and don’t empty it after the test. Copied def test_whatever ( self ): tmp_dir = self.get_auto_remove_tmp_dir( "./xxx" ) This is useful for debug when you want to monitor a specific directory and want to make sure the previous tests didn’t leave any data in there. You can override the default behavior by directly overriding the before and after args, leading to one of the following behaviors: before=True : the temporary dir will always be cleared at the beginning of the test. before=False : if the temporary dir already existed, any existing files will remain there. after=True : the temporary dir will always be deleted at the end of the test. after=False : the temporary dir will always be left intact at the end of the test. In order to run the equivalent of rm -r safely, only subdirs of the project repository checkout are allowed if an explicit tmp_dir is used, so that by mistake no /tmp or similar important part of the filesystem will get nuked. i.e. please always pass paths that start with ./ . Each test can register multiple temporary directories and they all will get auto-removed, unless requested otherwise. Temporary sys.path override If you need to temporary override sys.path to import from another test for example, you can use the ExtendSysPath context manager. Example: Copied import os from transformers.testing_utils import ExtendSysPath bindir = os.path.abspath(os.path.dirname(__file__)) with ExtendSysPath( f" {bindir} /.." ): from test_trainer import TrainerIntegrationCommon # noqa Skipping tests This is useful when a bug is found and a new test is written, yet the bug is not fixed yet. In order to be able to commit it to the main repository we need make sure it’s skipped during make test . Methods: A skip means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests that depend on an external resource which is not available at the moment (for example a database). A xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary. One of the important differences between the two is that skip doesn’t run the test, and xfail does. So if the code that’s buggy causes some bad state that will affect other tests, do not use xfail . 
Implementation Here is how to skip whole test unconditionally: Copied @unittest.skip( reason= "this bug needs to be fixed" ) def test_feature_x (): or via pytest: Copied @pytest.mark.skip( reason= "this bug needs to be fixed" ) or the xfail way: Copied @pytest.mark.xfail def test_feature_x (): Here’s how to skip a test based on internal checks within the test: Copied def test_feature_x (): if not has_something(): pytest.skip( "unsupported configuration" ) or the whole module: Copied import pytest if not pytest.config.getoption( "--custom-flag" ): pytest.skip( "--custom-flag is missing, skipping tests" , allow_module_level= True ) or the xfail way: Copied def test_feature_x (): pytest.xfail( "expected to fail until bug XYZ is fixed" ) Here is how to skip all tests in a module if some import is missing: Copied docutils = pytest.importorskip( "docutils" , minversion= "0.3" ) Skip a test based on a condition: Copied @pytest.mark.skipif( sys.version_info < ( 3 , 6 ), reason= "requires python3.6 or higher" ) def test_feature_x (): or: Copied @unittest.skipIf( torch_device == "cpu" , "Can't do half precision" ) def test_feature_x (): or skip the whole module: Copied @pytest.mark.skipif( sys.platform == 'win32' , reason= "does not run on windows" ) class TestClass (): def test_feature_x ( self ): More details, example and ways are here . Slow tests The library of tests is ever-growing, and some of the tests take minutes to run, therefore we can’t afford waiting for an hour for the test suite to complete on CI. Therefore, with some exceptions for essential tests, slow tests should be marked as in the example below: Copied from transformers.testing_utils import slow @slow def test_integration_foo (): Once a test is marked as @slow , to run such tests set RUN_SLOW=1 env var, e.g.: Copied RUN_SLOW=1 pytest tests Some decorators like @parameterized rewrite test names, therefore @slow and the rest of the skip decorators @require_* have to be listed last for them to work correctly. Here is an example of the correct usage: Copied @parameterized.expand( ... ) @slow def test_integration_foo (): As explained at the beginning of this document, slow tests get to run on a scheduled basis, rather than in PRs CI checks. So it’s possible that some problems will be missed during a PR submission and get merged. Such problems will get caught during the next scheduled CI job. But it also means that it’s important to run the slow tests on your machine before submitting the PR. Here is a rough decision making mechanism for choosing which tests should be marked as slow: If the test is focused on one of the library’s internal components (e.g., modeling files, tokenization files, pipelines), then we should run that test in the non-slow test suite. If it’s focused on an other aspect of the library, such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine this approach we should have exceptions: All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or tokenizer integration tests, pipeline integration tests) should be set to slow. If you’re adding a new model, you should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is discussed in the following paragraphs. All tests that need to do a training not specifically optimized to be fast should be set to slow. 
We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to @slow . Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked as @slow . If a test completes under 1 second on CI (including downloads if any) then it should be a normal test regardless. Collectively, all the non-slow tests need to cover entirely the different internals, while remaining fast. For example, a significant coverage can be achieved by testing with specially created tiny models with random weights. Such models have the very minimal number of layers (e.g., 2), vocab size (e.g., 1000), etc. Then the @slow tests can use large slow models to do qualitative testing. To see the use of these simply look for tiny models with: Copied grep tiny tests examples Here is an example of a script that created the tiny model stas/tiny-wmt19-en-de . You can easily adjust it to your specific model’s architecture. It’s easy to measure the run-time incorrectly if for example there is an overheard of downloading a huge model, but if you test it locally the downloaded files would be cached and thus the download time not measured. Hence check the execution speed report in CI logs instead (the output of pytest --durations=0 tests ). That report is also useful to find slow outliers that aren’t marked as such, or which need to be re-written to be fast. If you notice that the test suite starts getting slow on CI, the top listing of this report will show the slowest tests. Testing the stdout/stderr output In order to test functions that write to stdout and/or stderr , the test can access those streams using the pytest ’s capsys system . Here is how this is accomplished: Copied import sys def print_to_stdout ( s ): print (s) def print_to_stderr ( s ): sys.stderr.write(s) def test_result_and_stdout ( capsys ): msg = "Hello" print_to_stdout(msg) print_to_stderr(msg) out, err = capsys.readouterr() # consume the captured output streams # optional: if you want to replay the consumed streams: sys.stdout.write(out) sys.stderr.write(err) # test: assert msg in out assert msg in err And, of course, most of the time, stderr will come as a part of an exception, so try/except has to be used in such a case: Copied def raise_exception ( msg ): raise ValueError(msg) def test_something_exception (): msg = "Not a good value" error = "" try : raise_exception(msg) except Exception as e: error = str (e) assert msg in error, f" {msg} is in the exception:\n {error} " Another approach to capturing stdout is via contextlib.redirect_stdout : Copied from io import StringIO from contextlib import redirect_stdout def print_to_stdout ( s ): print (s) def test_result_and_stdout (): msg = "Hello" buffer = StringIO() with redirect_stdout(buffer): print_to_stdout(msg) out = buffer.getvalue() # optional: if you want to replay the consumed streams: sys.stdout.write(out) # test: assert msg in out An important potential issue with capturing stdout is that it may contain \r characters that in normal print reset everything that has been printed so far. There is no problem with pytest , but with pytest -s these characters get included in the buffer, so to be able to have the test run with and without -s , you have to make an extra cleanup to the captured output, using re.sub(r'~.*\r', '', buf, 0, re.M) . 
But, then we have a helper context manager wrapper to automatically take care of it all, regardless of whether it has some \r ’s in it or not, so it’s a simple: Copied from transformers.testing_utils import CaptureStdout with CaptureStdout() as cs: function_that_writes_to_stdout() print (cs.out) Here is a full test example: Copied from transformers.testing_utils import CaptureStdout msg = "Secret message\r" final = "Hello World" with CaptureStdout() as cs: print (msg + final) assert cs.out == final + "\n" , f"captured: {cs.out} , expecting {final} " If you’d like to capture stderr use the CaptureStderr class instead: Copied from transformers.testing_utils import CaptureStderr with CaptureStderr() as cs: function_that_writes_to_stderr() print (cs.err) If you need to capture both streams at once, use the parent CaptureStd class: Copied from transformers.testing_utils import CaptureStd with CaptureStd() as cs: function_that_writes_to_stdout_and_stderr() print (cs.err, cs.out) Also, to aid debugging test issues, by default these context managers automatically replay the captured streams on exit from the context. Capturing logger stream If you need to validate the output of a logger, you can use CaptureLogger : Copied from transformers import logging from transformers.testing_utils import CaptureLogger msg = "Testing 1, 2, 3" logging.set_verbosity_info() logger = logging.get_logger( "transformers.models.bart.tokenization_bart" ) with CaptureLogger(logger) as cl: logger.info(msg) assert cl.out, msg + "\n" Testing with environment variables If you want to test the impact of environment variables for a specific test you can use a helper decorator transformers.testing_utils.mockenv Copied from transformers.testing_utils import mockenv class HfArgumentParserTest (unittest.TestCase): @mockenv( TRANSFORMERS_VERBOSITY= "error" ) def test_env_override ( self ): env_level_str = os.getenv( "TRANSFORMERS_VERBOSITY" , None ) At times an external program needs to be called, which requires setting PYTHONPATH in os.environ to include multiple local paths. A helper class transformers.test_utils.TestCasePlus comes to help: Copied from transformers.testing_utils import TestCasePlus class EnvExampleTest ( TestCasePlus ): def test_external_prog ( self ): env = self.get_env() # now call the external program, passing `env` to it Depending on whether the test file was under the tests test suite or examples it’ll correctly set up env[PYTHONPATH] to include one of these two directories, and also the src directory to ensure the testing is done against the current repo, and finally with whatever env[PYTHONPATH] was already set to before the test was called if anything. This helper method creates a copy of the os.environ object, so the original remains intact. Getting reproducible results In some situations you may want to remove randomness for your tests. 
To get identical reproducible results set, you will need to fix the seed: Copied seed = 42 # python RNG import random random.seed(seed) # pytorch RNGs import torch torch.manual_seed(seed) torch.backends.cudnn.deterministic = True if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) # numpy RNG import numpy as np np.random.seed(seed) # tf RNG import tensorflow as tf tf.random.set_seed(seed) Debugging tests To start a debugger at the point of the warning, do this: Copied pytest tests/utils/test_logging.py -W error::UserWarning --pdb Working with github actions workflows To trigger a self-push workflow CI job, you must: Create a new branch on transformers origin (not a fork!). The branch name has to start with either ci_ or ci- ( main triggers it too, but we can’t do PRs on main ). It also gets triggered only for specific paths - you can find the up-to-date definition in case it changed since this document has been written here under push: Create a PR from this branch. Then you can see the job appear here . It may not run right away if there is a backlog. Testing Experimental CI Features Testing CI features can be potentially problematic as it can interfere with the normal CI functioning. Therefore if a new CI feature is to be added, it should be done as following. Create a new dedicated job that tests what needs to be tested The new job must always succeed so that it gives us a green ✓ (details below). Let it run for some days to see that a variety of different PR types get to run on it (user fork branches, non-forked branches, branches originating from github.com UI direct file edit, various forced pushes, etc. - there are so many) while monitoring the experimental job’s logs (not the overall job green as it’s purposefully always green) When it’s clear that everything is solid, then merge the new changes into existing jobs. That way experiments on CI functionality itself won’t interfere with the normal workflow. Now how can we make the job always succeed while the new CI feature is being developed? Some CIs, like TravisCI support ignore-step-failure and will report the overall job as successful, but CircleCI and Github Actions as of this writing don’t support that. So the following workaround can be used: set +euo pipefail at the beginning of the run command to suppress most potential failures in the bash script. the last command must be a success: echo "done" or just true will do Here is an example: Copied - run: name: run CI experiment command: | set +euo pipefail echo "setting run-all-despite-any-errors-mode" this_command_will_fail echo "but bash continues to run" # emulate another failure false # but the last command must be a success echo "during experiment do not remove: reporting success to CI, even if there were failures" For simple commands you could also do: Copied cmd_that_may_fail || true Of course, once satisfied with the results, integrate the experimental step or job with the rest of the normal jobs, while removing set +euo pipefail or any other things you may have added to ensure that the experimental job doesn’t interfere with the normal CI functioning. This whole process would have been much easier if we only could set something like allow-failure for the experimental step, and let it fail without impacting the overall status of PRs. But as mentioned earlier CircleCI and Github Actions don’t support it at the moment. 
You can vote for this feature and see where it is at these CI-specific threads: Github Actions: CircleCI: DeepSpeed integration For a PR that involves the DeepSpeed integration, keep in mind our CircleCI PR CI setup doesn’t have GPUs. Tests requiring GPUs are run on a different CI nightly. This means if you get a passing CI report in your PR, it doesn’t mean the DeepSpeed tests pass. To run DeepSpeed tests: Copied RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py Any changes to the modeling or PyTorch examples code require running the model zoo tests as well. Copied RUN_SLOW=1 pytest tests/deepspeed
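For orientation, a DeepSpeed test usually just combines the skip decorators described earlier; a minimal illustrative sketch (assuming the require_deepspeed decorator from transformers.testing_utils, and not taken from an actual test file) looks like this:
Copied
from transformers.testing_utils import TestCasePlus, require_deepspeed, require_torch_gpu, slow


@require_deepspeed
@require_torch_gpu
class DeepSpeedSketchTest(TestCasePlus):
    @slow
    def test_zero2_training_runs(self):
        # a real test would build a Trainer with a ZeRO config here and run a short training loop
        ...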
PEFT_configurations_and_models.txt
PEFT configurations and models The sheer size of today’s large pretrained models - which commonly have billions of parameters - presents a significant training challenge because they require more storage space and more computational power to crunch all those calculations. You’ll need access to powerful GPUs or TPUs to train these large pretrained models which is expensive, not widely accessible to everyone, not environmentally friendly, and not very practical. PEFT methods address many of these challenges. There are several types of PEFT methods (soft prompting, matrix decomposition, adapters), but they all focus on the same thing: reducing the number of trainable parameters. This makes it more accessible to train and store large models on consumer hardware. The PEFT library is designed to help you quickly train large models on free or low-cost GPUs, and in this tutorial, you’ll learn how to set up a configuration to apply a PEFT method to a pretrained base model for training. Once the PEFT configuration is set up, you can use any training framework you like (Transformers’ Trainer class, Accelerate , a custom PyTorch training loop). PEFT configurations Learn more about the parameters you can configure for each PEFT method in their respective API reference page. A configuration stores important parameters that specify how a particular PEFT method should be applied. For example, take a look at the following LoraConfig for applying LoRA and PromptEncoderConfig for applying p-tuning (these configuration files are already JSON-serialized). Whenever you load a PEFT adapter, it is a good idea to check whether it has an associated adapter_config.json file which is required.
LoraConfig PromptEncoderConfig Copied { "base_model_name_or_path" : "facebook/opt-350m" , #base model to apply LoRA to "bias" : "none" , "fan_in_fan_out" : false , "inference_mode" : true , "init_lora_weights" : true , "layers_pattern" : null , "layers_to_transform" : null , "lora_alpha" : 32 , "lora_dropout" : 0.05 , "modules_to_save" : null , "peft_type" : "LORA" , #PEFT method type "r" : 16 , "revision" : null , "target_modules" : [ "q_proj" , #model modules to apply LoRA to (query and value projection layers) "v_proj" ] , "task_type" : "CAUSAL_LM" #type of task to train model on } You can create your own configuration for training by initializing a LoraConfig . Copied from peft import LoraConfig, TaskType lora_config = LoraConfig( r= 16 , target_modules=[ "q_proj" , "v_proj" ], task_type=TaskType.CAUSAL_LM, lora_alpha= 32 , lora_dropout= 0.05 ) PEFT models With a PEFT configuration in hand, you can now apply it to any pretrained model to create a PeftModel . Choose from any of the state-of-the-art models from the Transformers library, a custom model, and even new and unsupported transformer architectures. For this tutorial, load a base facebook/opt-350m model to finetune. Copied from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m" ) Use the get_peft_model() function to create a PeftModel from the base facebook/opt-350m model and the lora_config you created earlier. Copied from peft import get_peft_model lora_model = get_peft_model(model, lora_config) lora_model.print_trainable_parameters() "trainable params: 1,572,864 || all params: 332,769,280 || trainable%: 0.472659014678278" Now you can train the PeftModel with your preferred training framework! After training, you can save your model locally with save_pretrained() or upload it to the Hub with the push_to_hub method. Copied # save locally lora_model.save_pretrained( "your-name/opt-350m-lora" ) # push to Hub lora_model.push_to_hub( "your-name/opt-350m-lora" ) To load a PeftModel for inference, you’ll need to provide the PeftConfig used to create it and the base model it was trained from. Copied from peft import PeftModel, PeftConfig config = PeftConfig.from_pretrained( "ybelkada/opt-350m-lora" ) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path) lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora" ) By default, the PeftModel is set for inference, but if you’d like to train the adapter some more you can set is_trainable=True . Copied lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora" , is_trainable= True ) The PeftModel.from_pretrained() method is the most flexible way to load a PeftModel because it doesn’t matter what model framework was used (Transformers, timm, a generic PyTorch model). Other classes, like AutoPeftModel , are just a convenient wrapper around the base PeftModel , and makes it easier to load PEFT models directly from the Hub or locally where the PEFT weights are stored. Copied from peft import AutoPeftModelForCausalLM lora_model = AutoPeftModelForCausalLM.from_pretrained( "ybelkada/opt-350m-lora" ) Take a look at the AutoPeftModel API reference to learn more about the AutoPeftModel classes. Next steps With the appropriate PeftConfig , you can apply it to any pretrained model to create a PeftModel and train large powerful models faster on freely available GPUs! 
To learn more about PEFT configurations and models, the following guide may be helpful: Learn how to configure a PEFT method for models that aren't from Transformers in the Working with custom models guide.
Use_custom_models.txt
Use custom models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation Use custom models Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Use custom models By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries , which should work out-of-the-box. You can customize this as follows: Settings Copied import { env } from '@huggingface/transformers' ; // Specify a custom location for models (defaults to '/models/'). env. localModelPath = '/path/to/models/' ; // Disable the loading of remote models from the Hugging Face Hub: env. allowRemoteModels = false ; // Set location of .wasm files. Defaults to use a CDN. env. backends . onnx . wasm . wasmPaths = '/path/to/files/' ; For a full list of available settings, check out the API Reference . Convert your models to ONNX We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🤗 Optimum to perform conversion and quantization of your model. Copied python -m scripts.convert --quantize --model_id <model_name_or_path> For example, convert and quantize bert-base-uncased using: Copied python -m scripts.convert --quantize --model_id bert-base-uncased This will save the following files to ./models/ : Copied bert-base-uncased/ ├── config .json ├── tokenizer .json ├── tokenizer_config .json └── onnx/ ├── model .onnx └── model_quantized.onnx For the full list of supported architectures, see the Optimum documentation . < > Update on GitHub ← The pipeline API Building a Vanilla JS Application → Use custom models Settings Convert your models to ONNX
Use_with_PyTorch.txt
Use with PyTorch Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Use with PyTorch Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Use with PyTorch This document is a quick introduction to using datasets with PyTorch, with a particular focus on how to get torch.Tensor objects out of our datasets, and how to use a PyTorch DataLoader and a Hugging Face Dataset with the best performance. Dataset format By default, datasets return regular python objects: integers, floats, strings, lists, etc. To get PyTorch tensors instead, you can set the format of the dataset to pytorch using Dataset.with_format() : Copied >>> from datasets import Dataset >>> data = [[ 1 , 2 ],[ 3 , 4 ]] >>> ds = Dataset.from_dict({ "data" : data}) >>> ds = ds.with_format( "torch" ) >>> ds[ 0 ] { 'data' : tensor([ 1 , 2 ])} >>> ds[: 2 ] { 'data' : tensor([[ 1 , 2 ], [ 3 , 4 ]])} A Dataset object is a wrapper of an Arrow table, which allows fast zero-copy reads from arrays in the dataset to PyTorch tensors. 
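If you only need some of the columns as tensors, the formatting can also be restricted to specific columns. The following is a minimal sketch that assumes the columns and output_all_columns arguments of with_format; columns left out are either dropped from the output or returned as plain Python objects.

from datasets import Dataset

ds = Dataset.from_dict({"data": [[1, 2], [3, 4]], "label": [0, 1]})

# Only the "data" column is returned, formatted as torch tensors.
ds = ds.with_format("torch", columns=["data"])
print(ds[0])  # {'data': tensor([1, 2])}

# Keep the remaining columns as regular Python objects alongside the tensors.
ds = ds.with_format("torch", columns=["data"], output_all_columns=True)
print(ds[0])  # {'data': tensor([1, 2]), 'label': 0}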
To load the data as tensors on a GPU, specify the device argument: Copied >>> import torch >>> device = torch.device( "cuda" if torch.cuda.is_available() else "cpu" ) >>> ds = ds.with_format( "torch" , device=device) >>> ds[ 0 ] { 'data' : tensor([ 1 , 2 ], device= 'cuda:0' )} N-dimensional arrays If your dataset consists of N-dimensional arrays, you will see that by default they are considered as the same tensor if the shape is fixed: Copied >>> from datasets import Dataset >>> data = [[[ 1 , 2 ],[ 3 , 4 ]],[[ 5 , 6 ],[ 7 , 8 ]]] # fixed shape >>> ds = Dataset.from_dict({ "data" : data}) >>> ds = ds.with_format( "torch" ) >>> ds[ 0 ] { 'data' : tensor([[ 1 , 2 ], [ 3 , 4 ]])} Copied >>> from datasets import Dataset >>> data = [[[ 1 , 2 ],[ 3 ]],[[ 4 , 5 , 6 ],[ 7 , 8 ]]] # varying shape >>> ds = Dataset.from_dict({ "data" : data}) >>> ds = ds.with_format( "torch" ) >>> ds[ 0 ] { 'data' : [tensor([ 1 , 2 ]), tensor([ 3 ])]} However this logic often requires slow shape comparisons and data copies. To avoid this, you must explicitly use the Array feature type and specify the shape of your tensors: Copied >>> from datasets import Dataset, Features, Array2D >>> data = [[[ 1 , 2 ],[ 3 , 4 ]],[[ 5 , 6 ],[ 7 , 8 ]]] >>> features = Features({ "data" : Array2D(shape=( 2 , 2 ), dtype= 'int32' )}) >>> ds = Dataset.from_dict({ "data" : data}, features=features) >>> ds = ds.with_format( "torch" ) >>> ds[ 0 ] { 'data' : tensor([[ 1 , 2 ], [ 3 , 4 ]])} >>> ds[: 2 ] { 'data' : tensor([[[ 1 , 2 ], [ 3 , 4 ]], [[ 5 , 6 ], [ 7 , 8 ]]])} Other feature types ClassLabel data are properly converted to tensors: Copied >>> from datasets import Dataset, Features, ClassLabel >>> labels = [ 0 , 0 , 1 ] >>> features = Features({ "label" : ClassLabel(names=[ "negative" , "positive" ])}) >>> ds = Dataset.from_dict({ "label" : labels}, features=features) >>> ds = ds.with_format( "torch" ) >>> ds[: 3 ] { 'label' : tensor([ 0 , 0 , 1 ])} String and binary objects are unchanged, since PyTorch only supports numbers. The Image and Audio feature types are also supported. To use the Image feature type, you’ll need to install the vision extra as pip install datasets[vision] . Copied >>> from datasets import Dataset, Features, Audio, Image >>> images = [ "path/to/image.png" ] * 10 >>> features = Features({ "image" : Image()}) >>> ds = Dataset.from_dict({ "image" : images}, features=features) >>> ds = ds.with_format( "torch" ) >>> ds[ 0 ][ "image" ].shape torch.Size([ 512 , 512 , 4 ]) >>> ds[ 0 ] { 'image' : tensor([[[ 255 , 215 , 106 , 255 ], [ 255 , 215 , 106 , 255 ], ..., [ 255 , 255 , 255 , 255 ], [ 255 , 255 , 255 , 255 ]]], dtype=torch.uint8)} >>> ds[: 2 ][ "image" ].shape torch.Size([ 2 , 512 , 512 , 4 ]) >>> ds[: 2 ] { 'image' : tensor([[[[ 255 , 215 , 106 , 255 ], [ 255 , 215 , 106 , 255 ], ..., [ 255 , 255 , 255 , 255 ], [ 255 , 255 , 255 , 255 ]]]], dtype=torch.uint8)} To use the Audio feature type, you’ll need to install the audio extra as pip install datasets[audio] . 
Copied >>> from datasets import Dataset, Features, Audio, Image >>> audio = [ "path/to/audio.wav" ] * 10 >>> features = Features({ "audio" : Audio()}) >>> ds = Dataset.from_dict({ "audio" : audio}, features=features) >>> ds = ds.with_format( "torch" ) >>> ds[ 0 ][ "audio" ][ "array" ] tensor([ 6.1035e-05 , 1.5259e-05 , 1.6785e-04 , ..., - 1.5259e-05 , - 1.5259e-05 , 1.5259e-05 ]) >>> ds[ 0 ][ "audio" ][ "sampling_rate" ] tensor( 44100 ) Data loading Like torch.utils.data.Dataset objects, a Dataset can be passed directly to a PyTorch DataLoader : Copied >>> import numpy as np >>> from datasets import Dataset >>> from torch.utils.data import DataLoader >>> data = np.random.rand( 16 ) >>> label = np.random.randint( 0 , 2 , size= 16 ) >>> ds = Dataset.from_dict({ "data" : data, "label" : label}).with_format( "torch" ) >>> dataloader = DataLoader(ds, batch_size= 4 ) >>> for batch in dataloader: ... print (batch) { 'data' : tensor([ 0.0047 , 0.4979 , 0.6726 , 0.8105 ]), 'label' : tensor([ 0 , 1 , 0 , 1 ])} { 'data' : tensor([ 0.4832 , 0.2723 , 0.4259 , 0.2224 ]), 'label' : tensor([ 0 , 0 , 0 , 0 ])} { 'data' : tensor([ 0.5837 , 0.3444 , 0.4658 , 0.6417 ]), 'label' : tensor([ 0 , 1 , 0 , 0 ])} { 'data' : tensor([ 0.7022 , 0.1225 , 0.7228 , 0.8259 ]), 'label' : tensor([ 1 , 1 , 1 , 1 ])} Optimize data loading There are several ways you can increase the speed your data is loaded which can save you time, especially if you are working with large datasets. PyTorch offers parallelized data loading, retrieving batches of indices instead of individually, and streaming to iterate over the dataset without downloading it on disk. Use multiple Workers You can parallelize data loading with the num_workers argument of a PyTorch DataLoader and get a higher throughput. Under the hood, the DataLoader starts num_workers processes. Each process reloads the dataset passed to the DataLoader and is used to query examples. Reloading the dataset inside a worker doesn’t fill up your RAM, since it simply memory-maps the dataset again from your disk. Copied >>> import numpy as np >>> from datasets import Dataset, load_from_disk >>> from torch.utils.data import DataLoader >>> data = np.random.rand( 10_000 ) >>> Dataset.from_dict({ "data" : data}).save_to_disk( "my_dataset" ) >>> ds = load_from_disk( "my_dataset" ).with_format( "torch" ) >>> dataloader = DataLoader(ds, batch_size= 32 , num_workers= 4 ) Stream data Stream a dataset by loading it as an IterableDataset . This allows you to progressively iterate over a remote dataset without downloading it on disk and or over local data files. Learn more about which type of dataset is best for your use case in the choosing between a regular dataset or an iterable dataset guide. An iterable dataset from datasets inherits from torch.utils.data.IterableDataset so you can pass it to a torch.utils.data.DataLoader : Copied >>> import numpy as np >>> from datasets import Dataset, load_dataset >>> from torch.utils.data import DataLoader >>> data = np.random.rand( 10_000 ) >>> Dataset.from_dict({ "data" : data}).push_to_hub( "<username>/my_dataset" ) # Upload to the Hugging Face Hub >>> my_iterable_dataset = load_dataset( "<username>/my_dataset" , streaming= True , split= "train" ) >>> dataloader = DataLoader(my_iterable_dataset, batch_size= 32 ) If the dataset is split in several shards (i.e. 
if the dataset consists of multiple data files), then you can stream in parallel using num_workers : Copied >>> my_iterable_dataset = load_dataset( "deepmind/code_contests" , streaming= True , split= "train" ) >>> my_iterable_dataset.num_shards 39 >>> dataloader = DataLoader(my_iterable_dataset, batch_size= 32 , num_workers= 4 ) In this case each worker is given a subset of the list of shards to stream from. Checkpoint and resume If you need a DataLoader that you can checkpoint and resume in the middle of training, you can use the StatefulDataLoader from torchdata : Copied >>> from torchdata.stateful_dataloader import StatefulDataLoader >>> my_iterable_dataset = load_dataset( "deepmind/code_contests" , streaming= True , split= "train" ) >>> dataloader = StatefulDataLoader(my_iterable_dataset, batch_size= 32 , num_workers= 4 ) >>> # save in the middle of training >>> state_dict = dataloader.state_dict() >>> # and resume later >>> dataloader.load_state_dict(state_dict) This is possible thanks to IterableDataset.state_dict() and IterableDataset.load_state_dict() . Distributed To split your dataset across your training nodes, you can use datasets.distributed.split_dataset_by_node() : Copied import os from datasets.distributed import split_dataset_by_node ds = split_dataset_by_node(ds, rank= int (os.environ[ "RANK" ]), world_size= int (os.environ[ "WORLD_SIZE" ])) This works for both map-style datasets and iterable datasets. The dataset is split for the node at rank rank in a pool of nodes of size world_size . For map-style datasets: Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset. For iterable datasets: If the dataset has a number of shards that is a factor of world_size (i.e. if dataset.num_shards % world_size == 0 ), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of world_size , skipping the other examples. This can also be combined with a torch.utils.data.DataLoader if you want each node to use multiple workers to load the data. < > Update on GitHub ← Use with TensorFlow Use with JAX → Use with Py Torch Dataset format N-dimensional arrays Other feature types Data loading Optimize data loading Use multiple Workers Stream data Checkpoint and resume Distributed
Using_PaddleNLP_at_Hugging_Face.txt
Using PaddleNLP at Hugging Face Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using PaddleNLP at Hugging Face Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Adapters AllenNLP BERTopic Asteroid Diffusers ESPnet fastai Flair Keras TF-Keras (legacy) ML-Agents mlx-image MLX OpenCLIP PaddleNLP peft RL-Baselines3-Zoo Sample Factory Sentence Transformers SetFit spaCy SpanMarker SpeechBrain Stable-Baselines3 Stanza TensorBoard timm Transformers Transformers.js Unity Sentis Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using PaddleNLP at Hugging Face Leveraging the PaddlePaddle framework, PaddleNLP is an easy-to-use and powerful NLP library with awesome pre-trained model zoo, supporting wide-range of NLP tasks from research to industrial applications. Exploring PaddleNLP in the Hub You can find PaddleNLP models by filtering at the left of the models page . All models on the Hub come up with the following features: An automatically generated model card with a brief description and metadata tags that help for discoverability. An interactive widget you can use to play out with the model directly in the browser. An Inference API that allows to make inference requests. Easily deploy your model as a Gradio app on Spaces. Installation To get started, you can follow PaddlePaddle Quick Start to install the PaddlePaddle Framework with your favorite OS, Package Manager and Compute Platform. paddlenlp offers a quick one-line install through pip: Copied pip install -U paddlenlp Using existing models Similar to transformer models, the paddlenlp library provides a simple one-liner to load models from the Hugging Face Hub by setting from_hf_hub=True ! 
Depending on how you want to use them, you can use the high-level API using the Taskflow function or you can use AutoModel and AutoTokenizer for more control. Copied # Taskflow provides a simple end-to-end capability and a more optimized experience for inference from paddlenlp.transformers import Taskflow taskflow = Taskflow( "fill-mask" , task_path= "PaddlePaddle/ernie-1.0-base-zh" , from_hf_hub= True ) # If you want more control, you will need to define the tokenizer and model. from paddlenlp.transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained( "PaddlePaddle/ernie-1.0-base-zh" , from_hf_hub= True ) model = AutoModelForMaskedLM.from_pretrained( "PaddlePaddle/ernie-1.0-base-zh" , from_hf_hub= True ) If you want to see how to load a specific model, you can click Use in paddlenlp and you will be given a working snippet that you can load it! Sharing your models You can share your PaddleNLP models by using the save_to_hf_hub method under all Model and Tokenizer classes. Copied from paddlenlp.transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained( "PaddlePaddle/ernie-1.0-base-zh" , from_hf_hub= True ) model = AutoModelForMaskedLM.from_pretrained( "PaddlePaddle/ernie-1.0-base-zh" , from_hf_hub= True ) tokenizer.save_to_hf_hub(repo_id= "<my_org_name>/<my_repo_name>" ) model.save_to_hf_hub(repo_id= "<my_org_name>/<my_repo_name>" ) Additional resources PaddlePaddle Installation guide . PaddleNLP GitHub Repo . PaddlePaddle on the Hugging Face Hub < > Update on GitHub ← OpenCLIP peft → Using PaddleNL P at Hugging Face Exploring PaddleNL P in the Hub Installation Using existing models Sharing your models Additional resources
Hotswapping_adapters.txt
Hotswapping adapters

The idea of hotswapping an adapter is the following: We can already load multiple adapters, e.g. two LoRAs, at the same time. But sometimes, we want to load one LoRA and then replace its weights in-place with the LoRA weights of another adapter. This is now possible with the hotswap_adapter function. In general, this should be faster than deleting one adapter and loading the new adapter in its place, which is how you would achieve the same final outcome without hotswapping. Another advantage of hotswapping is that it prevents re-compilation in case the PEFT model is already compiled using torch.compile. This can save quite a lot of time.

Copied
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter

model_id = ...
inputs = ...
device = ...
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# load lora 0
model = PeftModel.from_pretrained(model, <path-adapter-0>)
model = torch.compile(model)  # optionally compile the model

with torch.inference_mode():
    output_adapter_0 = model(inputs)

# replace the "default" lora adapter with the new one
hotswap_adapter(model, <path-adapter-1>, adapter_name="default", torch_device=device)

with torch.inference_mode():
    output_adapter_1 = model(inputs).logits

Hotswapping works with transformers models and diffusers models. However, there are some caveats:

It only works for the same PEFT method, so no swapping LoRA and LoHa, for example.
Right now, only LoRA is properly supported.
The adapters must be compatible (e.g. same LoRA alpha, same target modules); a quick way to check this is sketched after this list.
If you use torch.compile and want to avoid recompilation, the LoRA rank must be the same.
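Since compatibility is required, it can help to compare the two adapter configurations before attempting a swap. The snippet below is only a sketch of one way to do this; the adapter paths are placeholders, and the attribute list assumes LoRA adapters.

from peft import PeftConfig

# Placeholder paths or Hub repo ids -- replace with your own adapters.
config_0 = PeftConfig.from_pretrained("<path-adapter-0>")
config_1 = PeftConfig.from_pretrained("<path-adapter-1>")

# A swap is only expected to work if these match; when using torch.compile,
# the rank `r` must also match to avoid recompilation.
for attr in ("peft_type", "r", "lora_alpha", "target_modules"):
    v0 = getattr(config_0, attr, None)
    v1 = getattr(config_1, attr, None)
    print(f"{attr}: {v0} vs {v1} -> {'OK' if v0 == v1 else 'MISMATCH'}")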
peft.utils.hotswap.hotswap_adapter < source > ( model model_name_or_path adapter_name torch_device = None **kwargs ) Parameters model ( ~PeftModel ) — The PEFT model with the loaded adapter. model_name_or_path ( str ) — The name or path of the model to load the new adapter from. adapter_name ( str ) — The name of the adapter to swap, e.g. "default" . The name will stay the same after swapping. torch_device — ( str , optional , defaults to None): The device to load the new adapter onto. * *kwargs ( optional ) — Additional keyword arguments used for loading the config and weights. Substitute old adapter data with new adapter data, keeping the rest the same. As of now, only LoRA is supported. This function is useful when you want to replace the loaded adapter with a new adapter. The adapter name will remain the same, but the weights and other parameters will be swapped out. If the adapters are incomptabile, e.g. targeting different layers or having different alpha values, an error will be raised. Example: Copied >>> import torch >>> from transformers import AutoModelForCausalLM >>> from peft import PeftModel >>> from peft.utils.hotswap import hotswap_adapter >>> model_id = ... >>> inputs = ... >>> device = ... >>> model = AutoModelForCausalLM.from_pretrained(model_id).to(device) >>> # load lora 0 >>> model = PeftModel.from_pretrained(model, "path-adapter-0" ) >>> model = torch. compile (model) # optionally compile the model >>> with torch.inference_mode(): ... output_adapter_0 = model(inputs) >>> # replace the "default" lora adapter with the new one >>> hotswap_adapter(model, "path-adapter-1" , adapter_name= "default" , torch_device=device) >>> with torch.inference_mode(): ... output_adapter_1 = model(inputs).logits peft.utils.hotswap.hotswap_adapter_from_state_dict < source > ( model state_dict adapter_name parameter_prefix = 'lora_' ) Parameters model ( nn.Module ) — The model with the loaded adapter. state_dict ( dict[str, torch.Tensor] ) — The state dict of the new adapter, which needs to be compatible (targeting same modules etc.). adapter_name ( str ) — The name of the adapter that should be hot-swapped, e.g. "default" . The name will remain the same after swapping. parameter_prefix ( str , optional , defaults to "lora_" ) — The prefix used to identify the adapter’s keys in the state dict. For LoRA, this would be "lora_" (the default). Raises RuntimeError RuntimeError — If the old and the new adapter are not compatible, a RuntimeError is raised. Swap out the adapter weights from the model with the weights from state_dict. As of now, only LoRA is supported. This is a low-level function that assumes that the adapters have been checked for compatibility and that the state_dict has been correctly mapped to work with PEFT. For a high level function that performs this work for you, use hotswap_adapter instead. < > Update on GitHub ← Helpers Hotswapping adapters
Interface__SpaceRuntime.txt
Interface: SpaceRuntime Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: SpaceRuntime Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: SpaceRuntime Properties errorMessage • Optional errorMessage : string Defined in hub/src/types/public.ts:80 gcTimeout • Optional gcTimeout : null | number in seconds Defined in hub/src/types/public.ts:90 hardware • Optional hardware : Object Type declaration Name Type current null | SpaceHardwareFlavor currentPrettyName? string requested null | SpaceHardwareFlavor requestedPrettyName? 
string Defined in hub/src/types/public.ts:81 resources • Optional resources : SpaceResourceConfig when calling /spaces, those props are only fetched if ?full=true Defined in hub/src/types/public.ts:88 sdk • Optional sdk : SpaceSdk Defined in hub/src/types/public.ts:78 sdkVersion • Optional sdkVersion : string Defined in hub/src/types/public.ts:79 stage • stage : SpaceStage Defined in hub/src/types/public.ts:77
Distributed_inference.txt
Distributed inference Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Distributed inference Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Distributed inference Distributed inference can fall into three brackets: Loading an entire model onto each GPU and sending chunks of a batch through each GPU’s model copy at a time Loading parts of a model onto each GPU and processing a single input at one time Loading parts of a model onto each GPU and using what is called scheduled Pipeline Parallelism to combine the two prior techniques. We’re going to go through the first and the last bracket, showcasing how to do each as they are more realistic scenarios. Sending chunks of a batch automatically to each loaded model This is the most memory-intensive solution, as it requires each GPU to keep a full copy of the model in memory at a given time. Normally when doing this, users send the model to a specific device to load it from the CPU, and then move each prompt to a different device. 
A basic pipeline using the diffusers library might look something like so: Copied import torch import torch.distributed as dist from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5" , torch_dtype=torch.float16) Followed then by performing inference based on the specific prompt: Copied def run_inference ( rank, world_size ): dist.init_process_group( "nccl" , rank=rank, world_size=world_size) pipe.to(rank) if torch.distributed.get_rank() == 0 : prompt = "a dog" elif torch.distributed.get_rank() == 1 : prompt = "a cat" result = pipe(prompt).images[ 0 ] result.save( f"result_ {rank} .png" ) One will notice how we have to check the rank to know what prompt to send, which can be a bit tedious. A user might then also think that with Accelerate, using the Accelerator to prepare a dataloader for such a task might also be a simple way to manage this. (To learn more, check out the relevant section in the Quick Tour ) Can it manage it? Yes. Does it add unneeded extra code however: also yes. With Accelerate, we can simplify this process by using the Accelerator.split_between_processes() context manager (which also exists in PartialState and AcceleratorState ). This function will automatically split whatever data you pass to it (be it a prompt, a set of tensors, a dictionary of the prior data, etc.) across all the processes (with a potential to be padded) for you to use right away. Let’s rewrite the above example using this context manager: Copied from accelerate import PartialState # Can also be Accelerator or AcceleratorState from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5" , torch_dtype=torch.float16) distributed_state = PartialState() pipe.to(distributed_state.device) # Assume two processes with distributed_state.split_between_processes([ "a dog" , "a cat" ]) as prompt: result = pipe(prompt).images[ 0 ] result.save( f"result_ {distributed_state.process_index} .png" ) And then to launch the code, we can use the Accelerate: If you have generated a config file to be used using accelerate config : Copied accelerate launch distributed_inference.py If you have a specific config file you want to use: Copied accelerate launch --config_file my_config.json distributed_inference.py Or if don’t want to make any config files and launch on two GPUs: Note: You will get some warnings about values being guessed based on your system. To remove these you can do accelerate config default or go through accelerate config to create a config file. Copied accelerate launch --num_processes 2 distributed_inference.py We’ve now reduced the boilerplate code needed to split this data to a few lines of code quite easily. But what if we have an odd distribution of prompts to GPUs? For example, what if we have 3 prompts, but only 2 GPUs? Under the context manager, the first GPU would receive the first two prompts and the second GPU the third, ensuring that all prompts are split and no overhead is needed. However , what if we then wanted to do something with the results of all the GPUs ? (Say gather them all and perform some kind of post processing) You can pass in apply_padding=True to ensure that the lists of prompts are padded to the same length, with extra data being taken from the last sample. This way all GPUs will have the same number of prompts, and you can then gather the results. 
This is only needed when trying to perform an action such as gathering the results, where the data on each device needs to be the same length. Basic inference does not require this. For instance: Copied from accelerate import PartialState # Can also be Accelerator or AcceleratorState from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5" , torch_dtype=torch.float16) distributed_state = PartialState() pipe.to(distributed_state.device) # Assume two processes with distributed_state.split_between_processes([ "a dog" , "a cat" , "a chicken" ], apply_padding= True ) as prompt: result = pipe(prompt).images On the first GPU, the prompts will be ["a dog", "a cat"] , and on the second GPU it will be ["a chicken", "a chicken"] . Make sure to drop the final sample, as it will be a duplicate of the previous one. You can find more complex examples here such as how to use it with LLMs. Memory-efficient pipeline parallelism (experimental) This next part will discuss using pipeline parallelism . This is an experimental API that utilizes torch.distributed.pipelining as a native solution. The general idea with pipeline parallelism is: say you have 4 GPUs and a model big enough it can be split on four GPUs using device_map="auto" . With this method you can send in 4 inputs at a time (for example here, any amount works) and each model chunk will work on an input, then receive the next input once the prior chunk finished, making it much more efficient and faster than the method described earlier. Here’s a visual taken from the PyTorch repository: To illustrate how you can use this with Accelerate, we have created an example zoo showcasing a number of different models and situations. In this tutorial, we’ll show this method for GPT2 across two GPUs. Before you proceed, please make sure you have the latest PyTorch version installed by running the following: Copied pip install torch Start by creating the model on the CPU: Copied from transformers import GPT2ForSequenceClassification, GPT2Config config = GPT2Config () model = GPT2ForSequenceClassification (config) model. eval () Next you’ll need to create some example inputs to use. These help torch.distributed.pipelining trace the model. However you make this example will determine the relative batch size that will be used/passed through the model at a given time, so make sure to remember how many items there are! Copied input = torch.randint( low =0, high =config.vocab_size, size=(2, 1024), # bs x seq_len device = "cpu" , dtype =torch.int64, requires_grad = False , ) Next we need to actually perform the tracing and get the model ready. To do so, use the inference.prepare_pippy() function and it will fully wrap the model for pipeline parallelism automatically: Copied from accelerate.inference import prepare_pippy example_inputs = {"input_ids": input } model = prepare_pippy(model, example_args=( input ,)) There are a variety of parameters you can pass through to prepare_pippy : split_points lets you determine what layers to split the model at. By default we use wherever device_map="auto" declares, such as fc or conv1`. num_chunks determines how the batch will be split and sent to the model itself (so num_chunks=1 with four split points/four GPUs will have a naive MP where a single input gets passed between the four layer split points) From here, all that’s left is to actually perform the distributed inference! When passing inputs, we highly recommend to pass them in as a tuple of arguments. 
Using kwargs is supported, however, this approach is experimental.

Copied
args = some_more_arguments
with torch.no_grad():
    output = model(*args)

When finished, all the data will be on the last process only:

Copied
from accelerate import PartialState

if PartialState().is_last_process:
    print(output)

If you pass in gather_output=True to inference.prepare_pippy(), the output will be sent across to all the GPUs afterwards without needing the is_last_process check. This is False by default as it incurs a communication call. And that's it! To explore more, please check out the inference examples in the Accelerate repo and our documentation as we work on improving this integration.
DiffEdit.txt
DiffEdit Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation DiffEdit Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. 
The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab #!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt . These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears , then: Copied source_prompt = "a bowl of fruits" target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt , but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline pipeline = StableDiffusionDiffEditPipeline.from_pretrained( "stabilityai/stable-diffusion-2-1" , torch_dtype=torch.float16, safety_checker= None , use_safetensors= True , ) pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) pipeline.enable_model_cpu_offload() pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" raw_image = load_image(img_url).resize(( 768 , 768 )) raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image source_prompt = "a bowl of fruits" target_prompt = "a basket of pears" mask_image = pipeline.generate_mask( image=raw_image, source_prompt=source_prompt, target_prompt=target_prompt, ) Image.fromarray((mask_image.squeeze()* 255 ).astype( "uint8" ), "L" ).resize(( 768 , 768 )) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. 
The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt : Copied output_image = pipeline( prompt=target_prompt, mask_image=mask_image, image_latents=inv_latents, negative_prompt=source_prompt, ).images[ 0 ] mask_image = Image.fromarray((mask_image.squeeze()* 255 ).astype( "uint8" ), "L" ).resize(( 768 , 768 )) make_image_grid([raw_image, mask_image, output_image], rows= 1 , cols= 3 ) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch from transformers import AutoTokenizer, T5ForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained( "google/flan-t5-large" ) model = T5ForConditionalGeneration.from_pretrained( "google/flan-t5-large" , device_map= "auto" , torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" target_concept = "basket" source_text = f"Provide a caption for images containing a {source_concept} . " "The captions should be in English and should be no longer than 150 characters." target_text = f"Provide a caption for images containing a {target_concept} . " "The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() def generate_prompts ( input_prompt ): input_ids = tokenizer(input_prompt, return_tensors= "pt" ).input_ids.to( "cuda" ) outputs = model.generate( input_ids, temperature= 0.8 , num_return_sequences= 16 , do_sample= True , max_new_tokens= 128 , top_k= 10 ) return tokenizer.batch_decode(outputs, skip_special_tokens= True ) source_prompts = generate_prompts(source_text) target_prompts = generate_prompts(target_text) print (source_prompts) print (target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch from diffusers import StableDiffusionDiffEditPipeline pipeline = StableDiffusionDiffEditPipeline.from_pretrained( "stabilityai/stable-diffusion-2-1" , torch_dtype=torch.float16, use_safetensors= True ) pipeline.enable_model_cpu_offload() pipeline.enable_vae_slicing() @torch.no_grad() def embed_prompts ( sentences, tokenizer, text_encoder, device= "cuda" ): embeddings = [] for sent in sentences: text_inputs = tokenizer( sent, padding= "max_length" , max_length=tokenizer.model_max_length, truncation= True , return_tensors= "pt" , ) text_input_ids = text_inputs.input_ids prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask= None )[ 0 ] embeddings.append(prompt_embeds) return torch.concatenate(embeddings, dim= 0 ).mean(dim= 0 ).unsqueeze( 0 ) source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler from diffusers.utils import load_image, make_image_grid from PIL import Image pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" raw_image = load_image(img_url).resize((768, 768)) mask_image = pipeline.generate_mask( image=raw_image, - source_prompt=source_prompt, - target_prompt=target_prompt, + source_prompt_embeds=source_embeds, + target_prompt_embeds=target_embeds, ) inv_latents = pipeline.invert( - prompt=source_prompt, + prompt_embeds=source_embeds, image=raw_image, ).latents output_image = pipeline( mask_image=mask_image, image_latents=inv_latents, - prompt=target_prompt, - negative_prompt=source_prompt, + prompt_embeds=target_embeds, + negative_prompt_embeds=source_embeds, ).images[0] mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch from transformers import BlipForConditionalGeneration, BlipProcessor processor = BlipProcessor.from_pretrained( "Salesforce/blip-image-captioning-base" ) model = BlipForConditionalGeneration.from_pretrained( "Salesforce/blip-image-captioning-base" , torch_dtype=torch.float16, low_cpu_mem_usage= True ) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() def generate_caption ( images, caption_generator, caption_processor ): text = "a photograph of" inputs = caption_processor(images, text, return_tensors= "pt" ).to(device= "cuda" , dtype=caption_generator.dtype) caption_generator.to( "cuda" ) outputs = caption_generator.generate(**inputs, max_new_tokens= 128 ) # offload caption generator caption_generator.to( "cpu" ) caption = caption_processor.batch_decode(outputs, skip_special_tokens= True )[ 0 ] return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" raw_image = load_image(img_url).resize(( 768 , 768 )) caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! < > Update on GitHub ← Shap-E Trajectory Consistency Distillation-LoRA → Diff Edit Generate source and target embeddings Generate a caption for inversion
External_Resources.txt
External Resources

Adyen wrote a detailed article about the interplay between TGI's main components: router and server.

LLM inference at scale with TGI (Martin Iglesias Goyanes - Adyen, 2024)
How_to_configure_OIDC_SSO_with_Okta.txt
How to configure OIDC SSO with Okta Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation How to configure OIDC SSO with Okta Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security User Access Tokens Two-Factor Authentication Git over SSH Signing Commits with GPG Single Sign-On (SSO) How to configure OIDC with Okta in the Hub How to configure SAML with Okta in the Hub How to configure SAML with Azure in the Hub How to configure OIDC with Azure in the Hub Advanced Access Control (Resource Groups) Malware Scanning Pickle Scanning Secrets Scanning Protect AI Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started How to configure OIDC SSO with Okta In this guide, we will use Okta as the SSO provider and with the Open ID Connect (OIDC) protocol as our preferred identity protocol. This feature is part of the Enterprise Hub . Step 1: Create a new application in your Identity Provider Open a new tab/window in your browser and sign in to your Okta account. Navigate to “Admin/Applications” and click the “Create App Integration” button. Then choose an “OIDC - OpenID Connect” application, select the application type “Web Application” and click “Create”. Step 2: Configure your application in Okta Open a new tab/window in your browser and navigate to the SSO section of your organization’s settings. Select the OIDC protocol. Copy the “Redirection URI” from the organization’s settings on Hugging Face, and paste it in the “Sign-in redirect URI” field on Okta. The URL looks like this: https://huggingface.co/organizations/[organizationIdentifier]/oidc/consume . You can leave the optional Sign-out redirect URIs blank. Save your new application. 
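Before moving on to Step 3, you can optionally sanity-check the issuer you plan to use. OIDC providers expose a standard discovery document, and the sketch below fetches it from an Okta org. This is not part of the official setup flow; the tenant domain is a placeholder, and the snippet assumes the requests package is installed.

import requests

# Placeholder tenant -- replace with your own Okta domain.
issuer = "https://tenantId.okta.com"

# Standard OIDC discovery endpoint; a successful response whose "issuer"
# matches the URL above is a good sign it can be used as the Issuer URL in Step 3.
resp = requests.get(f"{issuer}/.well-known/openid-configuration", timeout=10)
resp.raise_for_status()
discovery = resp.json()

print(discovery["issuer"])
print(discovery["authorization_endpoint"])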
Step 3: Finalize configuration on Hugging Face

In your Okta application, under “General”, find the following fields:

Client ID
Client secret
Issuer URL

You will need these to finalize the SSO setup on Hugging Face. The Okta Issuer URL is generally a URL like https://tenantId.okta.com; you can refer to their guide for more details (an optional way to sanity-check this URL is shown at the end of this guide).

In the SSO section of your organization’s settings on Hugging Face, copy-paste these values from Okta:

Client ID
Client Secret

You can now click on “Update and Test OIDC configuration” to save the settings. You should be redirected to your SSO provider (IdP) login prompt. Once logged in, you’ll be redirected to your organization’s settings page. A green check mark near the OIDC selector will attest that the test was successful.

Step 4: Enable SSO in your organization

Now that Single Sign-On is configured and tested, you can enable it for members of your organization by clicking on the “Enable” button. Once enabled, members of your organization must complete the SSO authentication flow described in the How does it work? section.
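If you want to sanity-check the Issuer URL from Step 3 before testing the configuration, you can query the standard OIDC discovery endpoint that Okta exposes under the issuer. This is an optional check, not part of the official setup; the tenant name below is a placeholder for your own Okta domain.

# Replace "tenantId" with your own Okta tenant.
# The response should be a JSON document listing the authorization, token and userinfo endpoints.
curl https://tenantId.okta.com/.well-known/openid-configuration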
Building_a_Vanilla_JavaScript_Application.txt
Building a Vanilla JavaScript Application

In this tutorial, you’ll build a simple web application that detects objects in images using Transformers.js! To follow along, all you need is a code editor, a browser, and a simple server (e.g., VS Code Live Server).

Here’s how it works: the user clicks “Upload image” and selects an image using an input dialog. After analysing the image with an object detection model, the predicted bounding boxes are overlaid on top of the image.

Useful links:
Demo site
Interactive code walk-through (scrim)
Source code

Step 1: HTML and CSS setup

Before we start building with Transformers.js, we first need to lay the groundwork with some markup and styling. Create an index.html file with a basic HTML skeleton, and add the following <main> tag to the <body>:

<main class="container">
  <label class="custom-file-upload">
    <input id="file-upload" type="file" accept="image/*" />
    <img class="upload-icon" src="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/upload-icon.png" />
    Upload image
  </label>
  <div id="image-container"></div>
  <p id="status"></p>
</main>

Here’s a breakdown of this markup. We’re adding an <input> element with type="file" that accepts images. This allows the user to select an image from their local file system using a popup dialog. The default styling for this element looks quite bad, so let’s add some styling. The easiest way to achieve this is to wrap the <input> element in a <label>, hide the input, and then style the label as a button. We’re also adding an empty <div> container for displaying the image, plus an empty <p> tag that we’ll use to give status updates to the user while we download and run the model, since both of these operations take some time.
Next, add the following CSS rules in a style.css file and link it to the HTML:

html, body {
  font-family: Arial, Helvetica, sans-serif;
}

.container {
  margin: 40px auto;
  width: max(50vw, 400px);
  display: flex;
  flex-direction: column;
  align-items: center;
}

.custom-file-upload {
  display: flex;
  align-items: center;
  gap: 10px;
  border: 2px solid black;
  padding: 8px 16px;
  cursor: pointer;
  border-radius: 6px;
}

#file-upload {
  display: none;
}

.upload-icon {
  width: 30px;
}

#image-container {
  width: 100%;
  margin-top: 20px;
  position: relative;
}

#image-container > img {
  width: 100%;
}

At this point, the UI shows a single styled “Upload image” button.

Step 2: JavaScript setup

With the boring part out of the way, let’s start writing some JavaScript code! Create a file called index.js and link to it in index.html by adding the following to the end of the <body>:

<script src="./index.js" type="module"></script>

The type="module" attribute is important, as it turns our file into a JavaScript module, meaning that we’ll be able to use imports and exports.

Moving into index.js, let’s import Transformers.js by adding the following line to the top of the file:

import { pipeline, env } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";

Since we will be downloading the model from the Hugging Face Hub, we can skip the local model check by setting:

env.allowLocalModels = false;

Next, let’s create references to the various DOM elements we will access later:

const fileUpload = document.getElementById("file-upload");
const imageContainer = document.getElementById("image-container");
const status = document.getElementById("status");

Step 3: Create an object detection pipeline

We’re finally ready to create our object detection pipeline! As a reminder, a pipeline is a high-level interface provided by the library to perform a specific task. In our case, we will instantiate an object detection pipeline with the pipeline() helper function. Since this can take some time (especially the first time, when we have to download the ~40MB model), we first update the status paragraph so that the user knows that we’re about to load the model:

status.textContent = "Loading model...";

To keep this tutorial simple, we’ll be loading and running the model in the main (UI) thread. This is not recommended for production applications, since the UI will freeze while we’re performing these actions, because JavaScript is a single-threaded language. To overcome this, you can use a web worker to download and run the model in the background (a minimal sketch is included at the end of this step), but we won’t cover it in depth in this tutorial.

We can now call the pipeline() function that we imported at the top of our file to create our object detection pipeline:

const detector = await pipeline("object-detection", "Xenova/detr-resnet-50");

We’re passing two arguments into the pipeline() function: (1) task and (2) model. The first tells Transformers.js what kind of task we want to perform. In our case, that is object-detection, but there are many other tasks that the library supports, including text-generation, sentiment-analysis, summarization, and automatic-speech-recognition. See here for the full list. The second argument specifies which model we would like to use to solve the given task. We will use Xenova/detr-resnet-50, as it is a relatively small (~40MB) but powerful model for detecting objects in an image.
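As a side note on the web worker approach mentioned above, here is a minimal sketch of what offloading the pipeline to a worker could look like. It is not part of this tutorial’s final code: the file name worker.js and the message format are illustrative assumptions, and it requires a browser that supports module workers.

// worker.js (hypothetical): runs off the main thread
import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";

// Load the model once, when the worker starts.
const detector = await pipeline("object-detection", "Xenova/detr-resnet-50");

// Receive an image data URL from the main thread, run detection, and post the result back.
self.addEventListener("message", async (event) => {
  const output = await detector(event.data, { threshold: 0.5, percentage: true });
  self.postMessage(output);
});

On the main thread you would then create the worker and exchange messages with it instead of calling detector() directly:

// index.js (main thread)
const worker = new Worker("./worker.js", { type: "module" });
worker.onmessage = (event) => console.log("output", event.data);
// Later, e.g. inside the file-upload callback:
// worker.postMessage(image.src);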
Once the pipeline() function returns, we’ll tell the user that the app is ready to be used:

status.textContent = "Ready";

Step 4: Create the image uploader

The next step is to support uploading/selection of images. To achieve this, we will listen for “change” events from the fileUpload element. In the callback function, we use a FileReader() to read the contents of the image if one is selected (and do nothing otherwise).

fileUpload.addEventListener("change", function (e) {
  const file = e.target.files[0];
  if (!file) {
    return;
  }

  const reader = new FileReader();

  // Set up a callback when the file is loaded
  reader.onload = function (e2) {
    imageContainer.innerHTML = "";
    const image = document.createElement("img");
    image.src = e2.target.result;
    imageContainer.appendChild(image);
    // detect(image); // Uncomment this line to run the model
  };
  reader.readAsDataURL(file);
});

Once the image has been loaded into the browser, the reader.onload callback function will be invoked. In it, we append the new <img> element to the imageContainer to be displayed to the user.

Don’t worry about the detect(image) function call (which is commented out); we will explain it later! For now, try to run the app and upload an image to the browser. You should see your image displayed under the button.

Step 5: Run the model

We’re finally ready to start interacting with Transformers.js! Let’s uncomment the detect(image) function call from the snippet above. Then we’ll define the function itself:

async function detect(img) {
  status.textContent = "Analysing...";
  const output = await detector(img.src, {
    threshold: 0.5,
    percentage: true,
  });
  status.textContent = "";
  console.log("output", output);
  // ...
}

NOTE: The detect function needs to be asynchronous, since we’ll await the result of the model.

Once we’ve updated the status to “Analysing”, we’re ready to perform inference, which simply means to run the model with some data. This is done via the detector() function that was returned from pipeline(). The first argument we’re passing is the image data (img.src). The second argument is an options object:

We set the threshold property to 0.5. This means that we want the model to be at least 50% confident before claiming it has detected an object in the image. The lower the threshold, the more objects it’ll detect (but it may misidentify objects); the higher the threshold, the fewer objects it’ll detect (but it may miss objects in the scene).
We also specify percentage: true, which means that we want the bounding boxes for the objects to be returned as percentages (instead of pixels).

If you now try to run the app and upload an image, you should see the detection output logged to the console. In our example, we uploaded an image of two elephants, so the output variable holds an array with two objects, each containing a label (the string “elephant”), a score (indicating the model’s confidence in its prediction) and a box object (representing the bounding box of the detected entity).
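To make the structure of that output concrete, here is roughly what one entry of the array looks like. The numeric values below are illustrative rather than actual model output; with percentage: true, the box coordinates are fractions of the image width and height, which is why renderBox() (shown in the next step) multiplies them by 100.

// Illustrative shape of a single detection (values are made up)
const exampleDetection = {
  label: "elephant",          // the predicted class name
  score: 0.996,               // model confidence, between 0 and 1
  box: {                      // bounding box as fractions of the image size
    xmin: 0.10,
    ymin: 0.25,
    xmax: 0.55,
    ymax: 0.85,
  },
};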
Step 6: Render the boxes

The final step is to display the box coordinates as rectangles around each of the elephants. At the end of our detect() function, we’ll run the renderBox function on each object in the output array, using .forEach():

output.forEach(renderBox);

Here’s the code for the renderBox() function, with comments to help you understand what’s going on:

// Render a bounding box and label on the image
function renderBox({ box, label }) {
  const { xmax, xmin, ymax, ymin } = box;

  // Generate a random color for the box
  const color = "#" + Math.floor(Math.random() * 0xffffff).toString(16).padStart(6, "0");

  // Draw the box
  const boxElement = document.createElement("div");
  boxElement.className = "bounding-box";
  Object.assign(boxElement.style, {
    borderColor: color,
    left: 100 * xmin + "%",
    top: 100 * ymin + "%",
    width: 100 * (xmax - xmin) + "%",
    height: 100 * (ymax - ymin) + "%",
  });

  // Draw the label
  const labelElement = document.createElement("span");
  labelElement.textContent = label;
  labelElement.className = "bounding-box-label";
  labelElement.style.backgroundColor = color;

  boxElement.appendChild(labelElement);
  imageContainer.appendChild(boxElement);
}

The bounding box and label span also need some styling, so add the following to the style.css file:

.bounding-box {
  position: absolute;
  box-sizing: border-box;
  border-width: 2px;
  border-style: solid;
}

.bounding-box-label {
  color: white;
  position: absolute;
  font-size: 12px;
  margin-top: -16px;
  margin-left: -2px;
  padding: 1px;
}

And that’s it! You’ve now built your own fully functional AI application that detects objects in images, and it runs completely in your browser: no external server, APIs, or build tools. Pretty cool! 🥳

The app is live at the following URL: https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector
Efficient_Training_on_CPU.txt
Efficient Training on CPU

This guide focuses on training large models efficiently on CPU.

Mixed precision with IPEX

Mixed precision uses single-precision (fp32) and half-precision (bf16/fp16) data types in a model to accelerate training or inference while still preserving much of the single-precision accuracy. Modern CPUs such as 3rd, 4th, and 5th Gen Intel® Xeon® Scalable processors natively support bf16, and 6th Gen Intel® Xeon® Scalable processors natively support both bf16 and fp16, so you should get more performance out of the box by enabling mixed precision training with bf16 or fp16.

To further maximize training performance, you can use Intel® Extension for PyTorch (IPEX), a library built on PyTorch that adds support for additional CPU instruction set architecture (ISA) features such as Intel® Advanced Vector Extensions 512 Vector Neural Network Instructions (Intel® AVX512-VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX), for an extra performance boost on Intel CPUs. However, CPUs with only AVX2 (e.g., AMD or older Intel CPUs) are not guaranteed to perform better under IPEX.

Auto Mixed Precision (AMP) for CPU backends has been enabled since PyTorch 1.10. AMP support for bf16/fp16 on CPUs, along with bf16/fp16 operator optimization, is also available in IPEX and has been partially upstreamed to the main PyTorch branch. You can get better performance and user experience with IPEX AMP. Check the Auto Mixed Precision documentation for more details.

IPEX installation:

IPEX releases follow PyTorch releases. To install it via pip, match the IPEX version to your installed PyTorch version:

PyTorch Version   IPEX version
2.5.0             2.5.0+cpu
2.4.0             2.4.0+cpu
2.3.0             2.3.0+cpu
2.2.0             2.2.0+cpu

Run pip list | grep torch to get your pytorch_version, so you can pick the matching IPEX version_name:

pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu

You can check the latest versions in ipex-whl-stable-cpu if needed. Check the IPEX installation guide for more approaches.

Usage in Trainer

To enable auto mixed precision with IPEX in Trainer, users should add use_ipex, bf16 or fp16, and use_cpu to the training command arguments.
Take the Transformers question-answering example as a use case. Training with IPEX using bf16 auto mixed precision on CPU:

python examples/pytorch/question-answering/run_qa.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/ \
  --use_ipex \
  --bf16 \
  --use_cpu

If you want to enable use_ipex and bf16 in your script, add these parameters to TrainingArguments like this:

training_args = TrainingArguments(
    output_dir=args.output_path,
+   bf16=True,
+   use_ipex=True,
+   use_cpu=True,
    **kwargs
)

Practice example

Blog: Accelerating PyTorch Transformers with Intel Sapphire Rapids
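If you are not using Trainer, the same idea can be applied manually: let IPEX optimize the model (and optimizer) and run the forward pass under the CPU autocast context. The snippet below is a minimal sketch under those assumptions, not the exact recipe from this guide; the model choice and the training_step helper are placeholders for your own training loop.

import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification

# Placeholder model; swap in your own model and data pipeline.
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Apply IPEX optimizations for bf16 training to the model and optimizer.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

def training_step(batch):
    # batch is a dict of tensors that must include "labels" so the model returns a loss.
    # Run the forward pass in bf16 on CPU via AMP; backward and optimizer step stay as usual.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        outputs = model(**batch)
        loss = outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss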