filename | content
---|---|
Neuron_Model_Cache.txt | Neuron Model Cache

The Neuron Model Cache is a remote cache for compiled Neuron models in the NEFF format. It is integrated into the NeuronTrainer and NeuronModelForCausalLM classes to enable loading pretrained models from the cache instead of compiling them locally. Note: it is not available for models exported using any other NeuronModelXX classes, which use a different export mechanism. The Neuron Model Cache is hosted on the Hugging Face Hub and includes compiled files for all popular and supported optimum-neuron pre-trained models.

Before training a Transformers or Diffusion model or loading a NeuronModelForCausalLM on Neuron platforms, the model needs to be exported to Neuron format with torch-neuronx. When exporting a model, torch-neuronx will: convert it to a set of XLA subgraphs, then compile each subgraph with the neuronx compiler into a Neuron Executable File Format (NEFF) binary file. The first step is relatively fast, but the compilation takes a lot of time. To avoid recompiling all NEFF files every time a model is loaded on a NeuronX host, torch-neuronx stores NEFF files in a local directory, usually /var/tmp/neuron-compile-cache. However, this local cache is not shared between platforms, which means that every time you train or export a model on a new host, you need to recompile it. We created the Neuron Model Cache to remove this limitation by providing a public repository of precompiled model graphs. Note: we also support the creation of private, secured, remote model caches.
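To make the cache path concrete, here is a minimal sketch of how the cache is exercised through NeuronModelForCausalLM. The export arguments (batch_size, sequence_length, num_cores, auto_cast_type) mirror the fields of a cached entry shown in the lookup example further down; the values here are illustrative assumptions, so adjust them to your model and instance.

# Minimal sketch: export (or fetch from the Neuron Model Cache) a causal LM.
# The export arguments below are assumptions, not required values.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

# If matching NEFF files exist in the cache, they are downloaded instead of
# being recompiled; otherwise compilation happens locally.
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=2048,
    num_cores=2,          # number of NeuronCores to use; adjust to your instance
    auto_cast_type="fp16",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save the compiled artifacts so later runs skip the export step entirely.
model.save_pretrained("llama-2-7b-neuron")
tokenizer.save_pretrained("llama-2-7b-neuron")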
How to use the Neuron model cache

The public model cache will be used when you use the NeuronTrainer or NeuronModelForCausalLM classes. There are no additional changes needed. When exporting a model to Neuron format, optimum-neuron will simply look for cached NEFF files in the hub repository during the compilation of the model subgraphs. If the NEFF files are cached, they will be fetched from the hub and directly loaded instead of being recompiled.

How caching works

The Optimum Neuron Cache is built on top of the NeuronX compiler cache. It is important to understand that the cache operates on NEFF binaries, and not on the model itself. As explained previously, each model exported to Neuron using the NeuronTrainer or NeuronModelForCausalLM is composed of XLA subgraphs. Each subgraph is unique, and results from the combination of:
- the transformers or transformers_neuronx python modeling code,
- the transformers model config,
- the input_shapes selected during the export,
- the precision of the model (full-precision, fp16 or bf16).

When compiling a subgraph to a NEFF file, other parameters influence the result:
- the version of the Neuron X compiler,
- the number of Neuron cores used,
- the compilation parameters (such as the optimization level).

All these parameters are combined together to create a unique hash that identifies a NEFF file. This has two very important consequences:
- it is only when actually exporting a model that the associated NEFF files can be identified,
- even a small change in the model configuration will lead to a different set of NEFF files.

It is therefore very difficult to know in advance if the NEFFs associated with a specific model configuration are cached.

Neuron model cache lookup (inferentia only)

The neuron cache lookup is a feature allowing users to look for compatible cached model configurations before exporting a model for inference. It is based on a dedicated registry composed of stored cached configurations. Cached model configurations are stored as entries under a specific subfolder in the Neuron Model Cache:

neuronxcc-2.12.54.0+f631c2365
└── 0_REGISTRY
    └── 0.0.18
        └── llama
            └── meta-llama
                └── Llama-2-7b-chat-hf
                    └── 54c1f6689cd88f246fce.json

Each entry corresponds to the combination of a model configuration and its export parameters: this is as close as we can get to uniquely identifying the exported model. You can use the optimum-cli to look up compatible cached entries by passing it a hub model_id or the path to a file containing a model config.json.

$ optimum-cli neuron cache lookup meta-llama/Llama-2-7b-chat-hf

*** 1 entrie(s) found in cache for meta-llama/Llama-2-7b-chat-hf ***

task: text-generation
batch_size: 1
num_cores: 24
auto_cast_type: fp16
sequence_length: 2048
compiler_type: neuronx-cc
compiler_version: 2.12.54.0+f631c2365
checkpoint_id: meta-llama/Llama-2-7b-chat-hf
checkpoint_revision: c1b0db933684edbfe29a06fa47eb19cc48025e93

Note that even if compatible cached entries exist, this does not always guarantee that the model will not be recompiled during export if you modified the compilation parameters or updated the neuronx packages.

Advanced usage (trainium only)

How to use a private Neuron model cache (trainium only)

The repository for the public cache is aws-neuron/optimum-neuron-cache. This repository includes precompiled files for commonly used models, and it is publicly available and free to use for everyone.
But there are two limitations:
- you will not be able to push your own compiled files to this repo,
- it is public, and you might want to use a private repo for private models.

To address this, you can create your own private cache repository using the optimum-cli or set the environment variable CUSTOM_CACHE_REPO.

Using the Optimum CLI

The Optimum CLI offers two subcommands for cache creation and setting:
- create: creates a new cache repository that you can use as a private Neuron Model cache.
- set: sets the name of the Neuron cache repository locally; the repository needs to exist and will be used by default by optimum-neuron.

Create a new Neuron cache repository:

optimum-cli neuron cache create --help

usage: optimum-cli neuron cache create [-h] [-n NAME] [--public]

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  The name of the repo that will be used as a remote cache for the compilation files.
  --public              If set, the created repo will be public. By default the cache repo is private.

The -n / --name option allows you to specify a name for the Neuron cache repo; if not set, the default name will be used. The --public flag allows you to make your Neuron cache public, since it is created as a private repository by default.

Example:

optimum-cli neuron cache create

Neuron cache created on the Hugging Face Hub: michaelbenayoun/optimum-neuron-cache [private].
Neuron cache name set locally to michaelbenayoun/optimum-neuron-cache in /home/michael/.cache/huggingface/optimum_neuron_custom_cache.

Set a different Trainium cache repository:

usage: optimum-cli neuron cache set [-h] name

positional arguments:
  name        The name of the repo to use as remote cache.

optional arguments:
  -h, --help  show this help message and exit

Example:

optimum-cli neuron cache set michaelbenayoun/optimum-neuron-cache

Neuron cache name set locally to michaelbenayoun/optimum-neuron-cache in /home/michael/.cache/huggingface/optimum_neuron_custom_cache

The optimum-cli neuron cache set command is useful when working on a new instance to use your own cache.

Using the environment variable CUSTOM_CACHE_REPO

Using the CLI is not always feasible, and not very practical for small testing. In this case, you can simply set the environment variable CUSTOM_CACHE_REPO. For example, if your cache repo is called michaelbenayoun/my_custom_cache_repo, you just need to do:

CUSTOM_CACHE_REPO="michaelbenayoun/my_custom_cache_repo" torchrun ...

or:

export CUSTOM_CACHE_REPO="michaelbenayoun/my_custom_cache_repo"
torchrun ...

You have to be logged into the Hugging Face Hub to be able to push and pull files from your private cache repository.

Cache system flow

At the beginning of each training step, the NeuronTrainer computes a NeuronHash and checks the cache repo(s) (official and custom) on the Hugging Face Hub to see if there are compiled files associated with this hash. If that is the case, the files are downloaded directly to the local cache directory and no compilation is needed. Otherwise, compilation is performed.

Just as for downloading compiled files, the NeuronTrainer will keep track of the newly created compilation files at each training step, and upload them to the Hugging Face Hub at save time or when training ends. This assumes that you have write access to the cache repo, otherwise nothing will be pushed.
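Since the CLI is not always available inside a training job, the same setup can also be done from Python before the NeuronTrainer is created. Below is a minimal sketch; the repository name is a hypothetical placeholder, and the environment variable must be set before optimum-neuron reads it.

# Hedged sketch: select a private cache repo and check Hub credentials from
# Python. The repo name below is a placeholder, not a real repository.
import os
from huggingface_hub import whoami

os.environ["CUSTOM_CACHE_REPO"] = "my-org/my-private-neuron-cache"

try:
    user = whoami()  # raises if no valid Hugging Face token is configured
    print(f"Logged in as {user['name']}; cached NEFF files can be pushed and pulled.")
except Exception:
    print("Not logged in: run `huggingface-cli login` first, otherwise nothing will be pushed.")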
Optimum CLI The Optimum CLI can be used to perform various cache-related tasks, as described by the optimum-cli neuron cache command usage message: Copied usage: optimum-cli neuron cache [-h] {create, set , add , list } ... positional arguments: {create, set , add , list ,synchronize,lookup} create Create a model repo on the Hugging Face Hub to store Neuron X compilation files . set Set the name of the Neuron cache repo to use locally (trainium only ). add Add a model to the cache of your choice (trainium only ). list List models in a cache repo (trainium only ). synchronize Synchronize local compiler cache with the hub cache (inferentia only ). lookup Lookup the neuronx compiler hub cache for the specified model id (inferentia only ). optional arguments: -h, -- help show this help message and exit Add a model to the cache (trainium only) It is possible to add a model compilation files to a cache repo via the optimum-cli neuron cache add command: Copied usage: optimum-cli neuron cache add [-h] -m MODEL --task TASK --train_batch_size TRAIN_BATCH_SIZE [--eval_batch_size EVAL_BATCH_SIZE] [--sequence_length SEQUENCE_LENGTH] [--encoder_sequence_length ENCODER_SEQUENCE_LENGTH] [--decoder_sequence_length DECODER_SEQUENCE_LENGTH] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] --precision {fp,bf16} --num_cores { 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 } [--max_steps MAX_STEPS] When running this command a small training session will be run and the resulting compilation files will be pushed. Make sure that the Neuron cache repo to use is set up locally, this can be done by running the `optimum-cli neuron cache set` command. You also need to make sure that you are logged in to the Hugging Face Hub and that you have the writing rights for the specified cache repo, this can be done via the `huggingface-cli login` command. If at least one of those requirements is not met, the command will fail. Example: Copied optimum-cli neuron cache add \ --model prajjwal1/bert-tiny \ --task text-classification \ --train_batch_size 16 \ --eval_batch_size 16 \ --sequence_length 128 \ --gradient_accumulation_steps 32 \ --num_cores 32 \ --precision bf16 This will push compilation files for the prajjwal1/bert-tiny model on the Neuron cache repo that was set up for the specified parameters. List a cache repo It can also be convenient to request the cache repo to know which compilation files are available. This can be done via the optimum-cli neuron cache list command: Copied usage: optimum-cli neuron cache list [-h] [- m MODEL] [-v VERSION] [name] positional arguments: name The name of the repo to list . Will use the locally saved cache repo if left unspecified. optional arguments: -h, -- help show this help message and exit - m MODEL, --model MODEL The model name or path of the model to consider. If left unspecified, will list all available models. -v VERSION, -- version VERSION The version of the Neuron X Compiler to consider. Will list all available versions if left unspecified. As you can see, it is possible to: List all the models available for all compiler versions. List all the models available for a given compiler version by specifying the -v / --version argument. List all compilation files for a given model, there can be many for different input shapes and so on, by specifying the -m / --model argument. 
Example:

optimum-cli neuron cache list aws-neuron/optimum-neuron-cache
|
Fine_tune_BERT_for_Text_Classification_on_AWS_Trai.txt | Fine-tune BERT for Text Classification on AWS Trainium

There is a notebook version of this tutorial here. This tutorial will help you to get started with AWS Trainium and Hugging Face Transformers. It will cover how to set up a Trainium instance on AWS and load & fine-tune a transformers model for text classification. You will learn how to:
1. Set up the AWS environment
2. Load and process the dataset
3. Fine-tune BERT using Hugging Face Transformers and Optimum Neuron

Before we can start, make sure you have a Hugging Face Account to save artifacts and experiments.

Quick intro: AWS Trainium

AWS Trainium (Trn1) is a purpose-built EC2 instance family for deep learning (DL) training workloads. Trainium is the successor of AWS Inferentia, focused on high-performance training workloads and claiming up to 50% cost-to-train savings over comparable GPU-based instances. Trainium has been optimized for training natural language processing, computer vision, and recommender models. The accelerator supports a wide range of data types, including FP32, TF32, BF16, FP16, UINT8, and configurable FP8. The biggest Trainium instance, the trn1.32xlarge, comes with over 500GB of accelerator memory, making it easy to fine-tune ~10B parameter models on a single instance. Below you will find an overview of the available instance types.
More details here:

instance size | accelerators | accelerator memory | vCPU | CPU memory | price per hour
---|---|---|---|---|---
trn1.2xlarge | 1 | 32 GB | 8 | 32 GB | $1.34
trn1.32xlarge | 16 | 512 GB | 128 | 512 GB | $21.50
trn1n.32xlarge (2x bandwidth) | 16 | 512 GB | 128 | 512 GB | $24.78

Now we know what Trainium offers, let's get started.

Note: This tutorial was created on a trn1.2xlarge AWS EC2 Instance.

1. Setup AWS environment

In this example, we will use the trn1.2xlarge instance on AWS with 1 Accelerator, including two Neuron Cores, and the Hugging Face Neuron Deep Learning AMI. This blog post doesn't cover how to create the instance in detail. You can check out my previous blog about "Setting up AWS Trainium for Hugging Face Transformers", which includes a step-by-step guide on setting up the environment.

Once the instance is up and running, we can ssh into it. But instead of developing inside a terminal we want to use a Jupyter environment, which we can use for preparing our dataset and launching the training. For this, we need to add port forwarding to the ssh command, which will tunnel our localhost traffic to the Trainium instance.

PUBLIC_DNS=""  # IP address, e.g. ec2-3-80-....
KEY_PATH=""    # local path to key, e.g. ssh/trn.pem

ssh -L 8080:localhost:8080 -i ${KEY_PATH} ubuntu@$PUBLIC_DNS

We can now start our jupyter server.

python -m notebook --allow-root --port=8080

You should see a familiar jupyter output with a URL to the notebook.

http://localhost:8080/?token=8c1739aff1755bd7958c4cfccc8d08cb5da5234f61f129a9

We can click on it, and a jupyter environment opens in our local browser. We are going to use the Jupyter environment only for preparing the dataset and then torchrun for launching our training script on both neuron cores for distributed training. Let's create a new notebook and get started.

2. Load and process the dataset

We are training a Text Classification model on the emotion dataset to keep the example straightforward. emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. We will use the load_dataset() method from the 🤗 Datasets library to load it.

from datasets import load_dataset

# Dataset id from huggingface.co/dataset
dataset_id = "philschmid/emotion"

# Load raw dataset
raw_dataset = load_dataset(dataset_id)

print(f"Train dataset size: {len(raw_dataset['train'])}")
print(f"Test dataset size: {len(raw_dataset['test'])}")
# Train dataset size: 16000
# Test dataset size: 2000

Let's check out an example of the dataset.

from random import randrange

random_id = randrange(len(raw_dataset['train']))
raw_dataset['train'][random_id]
# {'text': 'i feel isolated and alone in my trade', 'label': 0}

We must convert our "Natural Language" to token IDs to train our model. This is done by a Tokenizer, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary). If you want to learn more about this, check out chapter 6 of the Hugging Face Course.

Our Neuron Accelerator expects a fixed shape of inputs. We need to truncate or pad all samples to the same length.
Copied from transformers import AutoTokenizer import os # Model id to load the tokenizer model_id = "bert-base-uncased" save_dataset_path = "lm_dataset" # Load Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_id) # Tokenize helper function def tokenize ( batch ): return tokenizer(batch[ 'text' ], padding= 'max_length' , truncation= True ,return_tensors= "pt" ) # Tokenize dataset raw_dataset = raw_dataset.rename_column( "label" , "labels" ) # to match Trainer tokenized_dataset = raw_dataset. map (tokenize, batched= True , remove_columns=[ "text" ]) tokenized_dataset = tokenized_dataset.with_format( "torch" ) # save dataset to disk tokenized_dataset[ "train" ].save_to_disk(os.path.join(save_dataset_path, "train" )) tokenized_dataset[ "test" ].save_to_disk(os.path.join(save_dataset_path, "eval" )) 3. Fine-tune BERT using Hugging Face Transformers Normally you would use the Trainer and TrainingArguments to fine-tune PyTorch-based transformer models. But together with AWS, we have developed a NeuronTrainer to improve performance, robustness, and safety when training on Trainium or Inferentia2 instances. The NeuronTrainer also comes with a model cache , which allows us to use precompiled models and configuration from Hugging Face Hub to skip the compilation step, which would be needed at the beginning of training. This can reduce the training time by ~3x. The NeuronTrainer is part of the optimum-neuron library and can be used as a 1-to-1 replacement for the Trainer . You only have to adjust the import in your training script. Copied - from transformers import Trainer, TrainingArguments + from optimum.neuron import NeuronTrainer as Trainer + from optimum.neuron import NeuronTrainingArguments as TrainingArguments We prepared a simple train.py training script based on the βGetting started with Pytorch 2.0 and Hugging Face Transformersβ blog post with the NeuronTrainer . Below is an excerpt Copied from transformers import TrainingArguments from optimum.neuron import NeuronTrainer as Trainer def parse_args (): ... def training_function ( args ): # load dataset from disk and tokenizer train_dataset = load_from_disk(os.path.join(args.dataset_path, "train" )) ... # Download the model from huggingface.co/models model = AutoModelForSequenceClassification.from_pretrained( args.model_id, num_labels=num_labels, label2id=label2id, id2label=id2label ) training_args = TrainingArguments( ... ) # Create Trainer instance trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, ) # Start training trainer.train() We can load the training script into our environment using the wget command or manually copy it into the notebook from here . Copied !wget https://raw.githubusercontent.com/huggingface/optimum-neuron/main/notebooks/text-classification/scripts/train.py We will use torchrun to launch our training script on both neuron cores for distributed training. torchrun is a tool that automatically distributes a PyTorch model across multiple accelerators. We can pass the number of accelerators as nproc_per_node arguments alongside our hyperparameters. Weβll use the following command to launch training: Copied !torchrun --nproc_per_node=2 train.py \ --model_id bert-base-uncased \ --dataset_path lm_dataset \ --lr 5e-5 \ --per_device_train_batch_size 16 \ --bf16 True \ --epochs 3 Note : If you see bad, bad accuracy, you might want to deactivate bf16 for now. 
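The f1 score reported next comes from the compute_metrics callback passed to the Trainer in train.py. The exact implementation lives in the linked script; the following is only a hedged sketch of the usual pattern with the evaluate library, not a copy of it.

# Hedged sketch of a compute_metrics callback producing an f1 score like the
# one reported below; the real train.py may differ.
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # "weighted" averaging handles the six emotion classes of the dataset.
    return f1_metric.compute(predictions=predictions, references=labels, average="weighted")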
After 9 minutes the training was completed and achieved an excellent f1 score of 0.914.

***** train metrics *****
  epoch                    = 3.0
  train_runtime            = 0:08:30
  train_samples            = 16000
  train_samples_per_second = 96.337

***** eval metrics *****
  eval_f1      = 0.914
  eval_runtime = 0:00:08

Last but not least, terminate the EC2 instance to avoid unnecessary charges. Looking at the price-performance, our training only cost about 20 cents ($1.34/h * 0.15h = $0.20). |
Run_training_on_Amazon_SageMaker.txt | Run training on Amazon SageMaker

This guide will show you how to train a 🤗 Transformers model with the HuggingFace SageMaker Python SDK. Learn how to:
- Install and set up your training environment.
- Prepare a training script.
- Create a Hugging Face Estimator.
- Run training with the fit method.
- Access your trained model.
- Perform distributed training.
- Create a spot instance.
- Load a training script from a GitHub repository.
- Collect training metrics.

Installation and setup

Before you can train a 🤗 Transformers model with SageMaker, you need to sign up for an AWS account. If you don't have an AWS account yet, learn more here. Once you have an AWS account, get started using one of the following:
- SageMaker Studio
- SageMaker notebook instance
- Local environment

To start training locally, you need to set up an appropriate IAM role. Upgrade to the latest sagemaker version:

pip install sagemaker --upgrade

SageMaker environment

Set up your SageMaker environment as shown below:

import sagemaker

sess = sagemaker.Session()
role = sagemaker.get_execution_role()

Note: The execution role is only available when running a notebook within SageMaker. If you run get_execution_role in a notebook not on SageMaker, expect a region error.

Local environment

Set up your local environment as shown below:

import sagemaker
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='role-name-of-your-iam-role-with-right-permissions')['Role']['Arn']
sess = sagemaker.Session()

Prepare a 🤗 Transformers fine-tuning script

Our training script is very similar to a training script you might run outside of SageMaker. However, you can access useful properties about the training environment through various environment variables (see here for a complete list), such as:
- SM_MODEL_DIR: A string representing the path to which the training job writes the model artifacts. After training, artifacts in this directory are uploaded to S3 for model hosting. SM_MODEL_DIR is always set to /opt/ml/model.
- SM_NUM_GPUS: An integer representing the number of GPUs available to the host.
- SM_CHANNEL_XXXX: A string representing the path to the directory that contains the input data for the specified channel.
For example, when you specify train and test in the Hugging Face Estimator fit method, the environment variables are set to SM_CHANNEL_TRAIN and SM_CHANNEL_TEST.

The hyperparameters defined in the Hugging Face Estimator are passed as named arguments and processed by ArgumentParser().

import transformers
import datasets
import argparse
import os

if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script
    parser.add_argument("--epochs", type=int, default=3)
    parser.add_argument("--per_device_train_batch_size", type=int, default=32)
    parser.add_argument("--model_name_or_path", type=str)

    # data, model, and output directories
    parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])
    parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"])
    parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"])

Note that SageMaker doesn't support argparse actions. For example, if you want to use a boolean hyperparameter, specify type as bool in your script and provide an explicit True or False value. Look at the train.py file for a complete example of a 🤗 Transformers training script.

Training Output Management

If output_dir in the TrainingArguments is set to '/opt/ml/model', the Trainer saves all training artifacts there, including logs, checkpoints, and models. Amazon SageMaker archives the whole '/opt/ml/model' directory as model.tar.gz and uploads it at the end of the training job to Amazon S3. Depending on your Hyperparameters and TrainingArguments this could lead to a large artifact (> 5GB), which can slow down deployment for Amazon SageMaker Inference. You can control how checkpoints, logs, and artifacts are saved by customizing the TrainingArguments. For example, by providing save_total_limit as a TrainingArgument you can limit the total number of checkpoints kept: older checkpoints in output_dir are deleted when new ones are saved and the maximum limit is reached.

In addition to the options already mentioned above, there is another option to save the training artifacts during the training session. Amazon SageMaker supports Checkpointing, which allows you to continuously save your artifacts during training to Amazon S3 rather than at the end of your training. To enable Checkpointing you need to provide the checkpoint_s3_uri parameter pointing to an Amazon S3 location in the HuggingFace estimator and set output_dir to /opt/ml/checkpoints. Note: If you set output_dir to /opt/ml/checkpoints, make sure to call trainer.save_model("/opt/ml/model") or model.save_pretrained("/opt/ml/model") / tokenizer.save_pretrained("/opt/ml/model") at the end of your training to be able to deploy your model seamlessly to Amazon SageMaker for Inference (a short sketch of this pattern follows the Estimator parameter list below).

Create a Hugging Face Estimator

Run 🤗 Transformers training scripts on SageMaker by creating a Hugging Face Estimator. The Estimator handles end-to-end SageMaker training. There are several parameters you should define in the Estimator:
- entry_point specifies which fine-tuning script to use.
- instance_type specifies an Amazon instance to launch. Refer here for a complete list of instance types.
- hyperparameters specifies training hyperparameters. View additional available hyperparameters in the train.py file.
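As promised above, here is a hedged sketch of how a training script that checkpoints to /opt/ml/checkpoints might still save a deployable copy to SM_MODEL_DIR at the end of training; the helper name is hypothetical, and the paths follow the SageMaker conventions described earlier.

# Hedged sketch: end of a SageMaker training script that checkpoints to
# /opt/ml/checkpoints but saves a deployable copy to SM_MODEL_DIR.
import os

def save_for_deployment(trainer, tokenizer):
    # Everything written here is archived as model.tar.gz and uploaded to S3.
    model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
    trainer.save_model(model_dir)
    tokenizer.save_pretrained(model_dir)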
The following code sample shows how to train with a custom script train.py with three hyperparameters (epochs, per_device_train_batch_size, and model_name_or_path):

from sagemaker.huggingface import HuggingFace

# hyperparameters which are passed to the training job
hyperparameters = {
    'epochs': 1,
    'per_device_train_batch_size': 32,
    'model_name_or_path': 'distilbert-base-uncased'
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    hyperparameters=hyperparameters
)

If you are running a TrainingJob locally, define instance_type='local' or instance_type='local_gpu' for GPU usage. Note that this will not work with SageMaker Studio.

Execute training

Start your TrainingJob by calling fit on a Hugging Face Estimator. Specify your input training data in fit. The input training data can be a:
- S3 URI such as s3://my-bucket/my-training-data.
- FileSystemInput for Amazon Elastic File System or FSx for Lustre. See here for more details about using these file systems as input.

Call fit to begin training:

huggingface_estimator.fit(
    {'train': 's3://sagemaker-us-east-1-558105141721/samples/datasets/imdb/train',
     'test': 's3://sagemaker-us-east-1-558105141721/samples/datasets/imdb/test'}
)

SageMaker starts and manages all the required EC2 instances and initiates the TrainingJob by running:

/opt/conda/bin/python train.py --epochs 1 --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32

Access trained model

Once training is complete, you can access your model through the AWS console or download it directly from S3.

from sagemaker.s3 import S3Downloader

S3Downloader.download(
    s3_uri=huggingface_estimator.model_data,  # S3 URI where the trained model is located
    local_path='.',                           # local path where *.tar.gz is saved
    sagemaker_session=sess                    # SageMaker session used for training the model
)

Distributed training

SageMaker provides two strategies for distributed training: data parallelism and model parallelism. Data parallelism splits a training set across several GPUs, while model parallelism splits a model across several GPUs.

Data parallelism

The Hugging Face Trainer supports SageMaker's data parallelism library. If your training script uses the Trainer API, you only need to define the distribution parameter in the Hugging Face Estimator:

# configuration for running training on smdistributed data parallel
distribution = {'smdistributed': {'dataparallel': {'enabled': True}}}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3dn.24xlarge',
    instance_count=2,
    role=role,
    transformers_version='4.26.0',
    pytorch_version='1.13.1',
    py_version='py39',
    hyperparameters=hyperparameters,
    distribution=distribution
)

Open the sagemaker-notebook.ipynb notebook for an example of how to run the data parallelism library with TensorFlow.

Model parallelism

The Hugging Face Trainer also supports SageMaker's model parallelism library.
If your training script uses the Trainer API, you only need to define the distribution parameter in the Hugging Face Estimator (see here for more detailed information about using model parallelism):

# configuration for running training on smdistributed model parallel
mpi_options = {
    "enabled": True,
    "processes_per_host": 8
}

smp_options = {
    "enabled": True,
    "parameters": {
        "microbatches": 4,
        "placement_strategy": "spread",
        "pipeline": "interleaved",
        "optimize": "speed",
        "partitions": 4,
        "ddp": True,
    }
}

distribution = {
    "smdistributed": {"modelparallel": smp_options},
    "mpi": mpi_options
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3dn.24xlarge',
    instance_count=2,
    role=role,
    transformers_version='4.26.0',
    pytorch_version='1.13.1',
    py_version='py39',
    hyperparameters=hyperparameters,
    distribution=distribution
)

Open the sagemaker-notebook.ipynb notebook for an example of how to run the model parallelism library.

Spot instances

The Hugging Face extension for the SageMaker Python SDK means we can benefit from fully-managed EC2 spot instances. This can help you save up to 90% of training costs!

Note: Unless your training job completes quickly, we recommend you use checkpointing with managed spot training. In this case, you need to define the checkpoint_s3_uri.

Set use_spot_instances=True and define your max_wait and max_run time in the Estimator to use spot instances:

# hyperparameters which are passed to the training job
hyperparameters = {
    'epochs': 1,
    'train_batch_size': 32,
    'model_name': 'distilbert-base-uncased',
    'output_dir': '/opt/ml/checkpoints'
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints',
    use_spot_instances=True,
    # max_wait should be equal to or greater than max_run in seconds
    max_wait=3600,
    max_run=1000,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    hyperparameters=hyperparameters
)

# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%

Open the sagemaker-notebook.ipynb notebook for an example of how to use spot instances.

Git repository

The Hugging Face Estimator can load a training script stored in a GitHub repository. Provide the relative path to the training script in entry_point and the relative path to the directory in source_dir. If you are using git_config to run the 🤗 Transformers example scripts, you need to configure the correct 'branch' in transformers_version (e.g. if you use transformers_version='4.4.2' you have to use 'branch':'v4.4.2').

Tip: Save your model to S3 by setting output_dir=/opt/ml/model in the hyperparameter of your training script.
# configure git settings
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.4.2'}  # the branch should match the transformers_version you use in the estimator

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='run_glue.py',
    source_dir='./examples/pytorch/text-classification',
    git_config=git_config,
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    hyperparameters=hyperparameters
)

SageMaker metrics

SageMaker metrics automatically parses training job logs for metrics and sends them to CloudWatch. If you want SageMaker to parse the logs, you must specify the metric's name and a regular expression for SageMaker to use to find the metric.

# define metrics definitions
metric_definitions = [
    {"Name": "train_runtime", "Regex": "train_runtime.*=\D*(.*?)$"},
    {"Name": "eval_accuracy", "Regex": "eval_accuracy.*=\D*(.*?)$"},
    {"Name": "eval_loss", "Regex": "eval_loss.*=\D*(.*?)$"},
]

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    transformers_version='4.26',
    pytorch_version='1.13',
    py_version='py39',
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters
)

Open the notebook for an example of how to capture metrics in SageMaker. |
Object_Detection.txt | Object detection

Object Detection models allow users to identify objects of certain defined classes. These models receive an image as input and output bounding boxes and labels for the detected objects. For more details about the object-detection task, check out its dedicated page! You will find examples and related materials.

Recommended models

- facebook/detr-resnet-50: Solid object detection model pre-trained on the COCO 2017 dataset.

Explore all available models and find the one that suits you best here.

Using the API

Python example (JavaScript and cURL variants are also available):

import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/detr-resnet-50"
headers = {"Authorization": "Bearer hf_***"}

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()

output = query("cats.jpg")

To use the Python client, see huggingface_hub's package reference.

API specification

Request

Payload:
- inputs* (string): The input image data as a base64-encoded string. If no parameters are provided, you can also provide the image data as a raw bytes payload.
- parameters (object):
  - threshold (number): The probability necessary to make a prediction.

Some options can be configured by passing headers to the Inference API. Here are the available headers:
- authorization (string): Authentication header in the form 'Bearer: hf_****' where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
- x-use-cache (boolean, default to true): There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here.
- x-wait-for-model (boolean, default to false): If the model is not ready, wait for it instead of receiving 503.
It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here.

For more information about Inference API headers, check out the parameters guide.

Response

Body (array of objects):
- label (string): The predicted label for the bounding box.
- score (number): The associated score / probability.
- box (object):
  - xmin (integer): The x-coordinate of the top-left corner of the bounding box.
  - xmax (integer): The x-coordinate of the bottom-right corner of the bounding box.
  - ymin (integer): The y-coordinate of the top-left corner of the bounding box.
  - ymax (integer): The y-coordinate of the bottom-right corner of the bounding box.
|
The_Command_Line.txt | The Command Line

Below is a list of all the available 🤗 Accelerate commands with their parameters.

accelerate config

Command: accelerate config or accelerate-config

Launches a series of prompts to create and save a default_config.yaml configuration file for your training system. Should always be run first on your machine.

Usage:

accelerate config [arguments]

Optional Arguments:
- --config_file CONFIG_FILE (str) — The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment HF_HOME suffixed with 'accelerate', or if you don't have such an environment variable, your cache directory (~/.cache or the content of XDG_CACHE_HOME) suffixed with huggingface.
- -h, --help (bool) — Show a help message and exit

accelerate config default

Command: accelerate config default or accelerate-config default

Create a default config file for Accelerate with only a few flags set.
Usage : Copied accelerate config default [arguments] Optional Arguments : --config_file CONFIG_FILE ( str ) β The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment HF_HOME suffixed with βaccelerateβ, or if you donβt have such an environment variable, your cache directory ( ~/.cache or the content of XDG_CACHE_HOME ) suffixed with huggingface . -h , --help ( bool ) β Show a help message and exit --mixed_precision {no,fp16,bf16} ( str ) β Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later. accelerate config update Command : accelerate config update or accelerate-config update Update an existing config file with the latest defaults while maintaining the old configuration. Usage : Copied accelerate config update [arguments] Optional Arguments : --config_file CONFIG_FILE ( str ) β The path to the config file to update. Will default to a file named default_config.yaml in the cache location, which is the content of the environment HF_HOME suffixed with βaccelerateβ, or if you donβt have such an environment variable, your cache directory ( ~/.cache or the content of XDG_CACHE_HOME ) suffixed with huggingface . -h , --help ( bool ) β Show a help message and exit accelerate env Command : accelerate env or accelerate-env or python -m accelerate.commands.env Lists the contents of the passed π€ Accelerate configuration file. Should always be used when opening an issue on the GitHub repository . Usage : Copied accelerate env [arguments] Optional Arguments : --config_file CONFIG_FILE ( str ) β The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment HF_HOME suffixed with βaccelerateβ, or if you donβt have such an environment variable, your cache directory ( ~/.cache or the content of XDG_CACHE_HOME ) suffixed with huggingface . -h , --help ( bool ) β Show a help message and exit accelerate launch Command : accelerate launch or accelerate-launch or python -m accelerate.commands.launch Launches a specified script on a distributed system with the right parameters. Usage : Copied accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ... Positional Arguments : {training_script} β The full path to the script to be launched in parallel --{training_script-argument-1} β Arguments of the training script Optional Arguments : -h , --help ( bool ) β Show a help message and exit --config_file CONFIG_FILE ( str )β The config file to use for the default values in the launching script. -m , --module ( bool ) β Change each process to interpret the launch script as a Python module, executing with the same behavior as βpython -mβ. --no_python ( bool ) β Skip prepending the training script with βpythonβ - just execute it directly. Useful when the script is not a Python script. --debug ( bool ) β Whether to print out the torch.distributed stack trace when something fails. -q , --quiet ( bool ) β Silence subprocess errors from the launch stack trace to only show the relevant tracebacks. (Only applicable to DeepSpeed and single-process configurations). The rest of these arguments are configured through accelerate config and are read in from the specified --config_file (or default configuration) for their values. They can also be passed in manually. 
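Before the argument groups below, here is a minimal sketch of the kind of {training_script} that accelerate launch expects: an ordinary Python script built around the Accelerator object. The model, optimizer, and data are throwaway placeholders, not part of the CLI documentation.

# Minimal sketch of a script runnable via `accelerate launch this_script.py`.
# The model, optimizer and data are placeholders for your own objects.
import torch
from accelerate import Accelerator

def main():
    accelerator = Accelerator()  # picks up the configuration created by `accelerate config`
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    dataset = [(torch.randn(10), torch.tensor(0)) for _ in range(32)]
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

    # prepare() moves objects to the right device(s) and wraps them for the
    # selected distributed backend (multi-GPU, TPU, DeepSpeed, FSDP, ...).
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()

if __name__ == "__main__":
    main()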
Hardware Selection Arguments : --cpu ( bool ) β Whether or not to force the training on the CPU. --multi_gpu ( bool ) β Whether or not this should launch a distributed GPU training. --tpu ( bool ) β Whether or not this should launch a TPU training. --ipex ( bool ) β Whether or not this should launch an Intel Pytorch Extension (IPEX) training. Resource Selection Arguments : The following arguments are useful for fine-tuning how available hardware should be used --mixed_precision {no,fp16,bf16,fp8} ( str ) β Whether or not to use mixed precision training. Choose between FP16 and BF16 (bfloat16) training. BF16 training is only supported on Nvidia Ampere GPUs and PyTorch 1.10 or later. --num_processes NUM_PROCESSES ( int ) β The total number of processes to be launched in parallel. --num_machines NUM_MACHINES ( int ) β The total number of machines used in this training. --num_cpu_threads_per_process NUM_CPU_THREADS_PER_PROCESS ( int ) β The number of CPU threads per process. Can be tuned for optimal performance. --enable_cpu_affinity ( bool ) β Whether or not CPU affinity and balancing should be enabled. Currently only supported on NVIDIA hardware. Training Paradigm Arguments : The following arguments are useful for selecting which training paradigm to use. --use_deepspeed ( bool ) β Whether or not to use DeepSpeed for training. --use_fsdp ( bool ) β Whether or not to use FullyShardedDataParallel for training. --use_megatron_lm ( bool ) β Whether or not to use Megatron-LM for training. --use_xpu ( bool ) β Whether to use IPEX plugin to speed up training on XPU specifically. Distributed GPU Arguments : The following arguments are only useful when multi_gpu is passed or multi-gpu training is configured through accelerate config : --gpu_ids ( str ) β What GPUs (by id) should be used for training on this machine as a comma-seperated list --same_network ( bool ) β Whether all machines used for multinode training exist on the same local network. --machine_rank ( int ) β The rank of the machine on which this script is launched. --main_process_ip ( str ) β The IP address of the machine of rank 0. --main_process_port ( int ) β The port to use to communicate with the machine of rank 0. -t , --tee ( str ) β Tee std streams into a log file and also to console. --log_dir ( str ) β Base directory to use for log files when using torchrun/torch.distributed.run as launcher. Use with βtee to redirect std streams info log files. --role ( str ) β User-defined role for the workers. --rdzv_backend ( str ) β The rendezvous method to use, such as βstaticβ (the default) or βc10dβ --rdzv_conf ( str ) β Additional rendezvous configuration (<key1>=<value1>,<key2>=<value2>,β¦). --max_restarts ( int ) β Maximum number of worker group restarts before failing. --monitor_interval ( int ) β Interval, in seconds, to monitor the state of workers. TPU Arguments : The following arguments are only useful when tpu is passed or TPU training is configured through accelerate config : --tpu_cluster ( bool ) β Whether to use a GCP TPU pod for training. --tpu_use_sudo ( bool ) β Whether to use sudo when running the TPU training script in each pod. --vm ( str ) β List of single Compute VM instance names. If not provided we assume usage of instance groups. For TPU pods. --env ( str ) β List of environment variables to set on the Compute VM instances. For TPU pods. --main_training_function ( str ) β The name of the main function to be executed in your script (only for TPU training). 
--downcast_bf16 ( bool ) β Whether when using bf16 precision on TPUs if both float and double tensors are cast to bfloat16 or if double tensors remain as float32. DeepSpeed Arguments : The following arguments are only useful when use_deepspeed is passed or deepspeed is configured through accelerate config : --deepspeed_config_file ( str ) β DeepSpeed config file. --zero_stage ( int ) β DeepSpeedβs ZeRO optimization stage. --offload_optimizer_device ( str ) β Decides where (none|cpu|nvme) to offload optimizer states. --offload_param_device ( str ) β Decides where (none|cpu|nvme) to offload parameters. --offload_optimizer_nvme_path ( str ) β Decides Nvme Path to offload optimizer states. --gradient_accumulation_steps ( int ) β No of gradient_accumulation_steps used in your training script. --gradient_clipping ( float ) β Gradient clipping value used in your training script. --zero3_init_flag ( str ) β Decides Whether (true|false) to enable deepspeed.zero.Init for constructing massive models. Only applicable with DeepSpeed ZeRO Stage-3. --zero3_save_16bit_model ( str ) β Decides Whether (true|false) to save 16-bit model weights when using ZeRO Stage-3. Only applicable with DeepSpeed ZeRO Stage-3. --deepspeed_hostfile ( str ) β DeepSpeed hostfile for configuring multi-node compute resources. --deepspeed_exclusion_filter ( str ) β DeepSpeed exclusion filter string when using mutli-node setup. --deepspeed_inclusion_filter ( str ) β DeepSpeed inclusion filter string when using mutli-node setup. --deepspeed_multinode_launcher ( str ) β DeepSpeed multi-node launcher to use. --deepspeed_moe_layer_cls_names ( str ) β comma-separated list of transformer MoE layer class names (case-sensitive) to wrap, e.g, MixtralSparseMoeBlock Qwen2MoeSparseMoeBlock , JetMoEAttention,JetMoEBlock Fully Sharded Data Parallelism Arguments : The following arguments are only useful when use_fsdp is passed or Fully Sharded Data Parallelism is configured through accelerate config : --fsdp_offload_params ( str ) β Decides Whether (true|false) to offload parameters and gradients to CPU. --fsdp_min_num_params ( int ) β FSDPβs minimum number of parameters for Default Auto Wrapping. --fsdp_sharding_strategy ( int ) β FSDPβs Sharding Strategy. --fsdp_auto_wrap_policy ( str ) β FSDPβs auto wrap policy. --fsdp_transformer_layer_cls_to_wrap ( str ) β Transformer layer class name (case-sensitive) to wrap, e.g, BertLayer , GPTJBlock , T5Block β¦ --fsdp_backward_prefetch_policy ( str ) β FSDPβs backward prefetch policy. --fsdp_state_dict_type ( str ) β FSDPβs state dict type. --fsdp_forward_prefetch ( str ) β FSDP forward prefetch. --fsdp_use_orig_params ( str ) β If True, allows non-uniform requires_grad mixed in a FSDP unit. --fsdp_cpu_ram_efficient_loading ( str ) β If true, only the first process loads the pretrained model checkoint while all other processes have empty weights. When using this, --fsdp_sync_module_states needs to True. --fsdp_sync_module_states ( str ) β If true, each individually wrapped FSDP unit will broadcast module parameters from rank 0. --fsdp_activation_checkpointing ( bool ) β Decides Whether intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder Megatron-LM Arguments : The following arguments are only useful when use_megatron_lm is passed or Megatron-LM is configured through accelerate config : --megatron_lm_tp_degree (β) β Megatron-LMβs Tensor Parallelism (TP) degree. --megatron_lm_pp_degree (β) β Megatron-LMβs Pipeline Parallelism (PP) degree. 
--megatron_lm_num_micro_batches (β) β Megatron-LMβs number of micro batches when PP degree > 1. --megatron_lm_sequence_parallelism (β) β Decides Whether (true|false) to enable Sequence Parallelism when TP degree > 1. --megatron_lm_recompute_activations (β) β Decides Whether (true|false) to enable Selective Activation Recomputation. --megatron_lm_use_distributed_optimizer (β) β Decides Whether (true|false) to use distributed optimizer which shards optimizer state and gradients across Data Parallel (DP) ranks. --megatron_lm_gradient_clipping (β) β Megatron-LMβs gradient clipping value based on global L2 Norm (0 to disable). FP8 Arguments : --fp8_backend ( str ) β Choose a backend to train with FP8 ( te or msamp ) --fp8_use_autocast_during_eval ( bool ) β Whether to use FP8 autocast during eval mode (useful only when --fp8_backend=te is passed). Generally better metrics are found when this is not passed. --fp8_margin ( int ) β The margin to use for the gradient scaling (useful only when --fp8_backend=te is passed). --fp8_interval ( int ) β The interval to use for how often the scaling factor is recomputed (useful only when --fp8_backend=te is passed). --fp8_format ( str ) β The format to use for the FP8 recipe (useful only when --fp8_backend=te is passed). --fp8_amax_history_len ( int ) β The length of the history to use for the scaling factor computation (useful only when --fp8_backend=te is passed). --fp8_amax_compute_algo ( str ) β The algorithm to use for the scaling factor computation. (useful only when --fp8_backend=te is passed). --fp8_override_linear_precision ( Tuple[bool, bool, bool] ) β Whether or not to execute fprop , dgrad , and wgrad GEMMS in higher precision. --fp8_opt_level ( str ) β What level of 8-bit collective communication should be used with MS-AMP (useful only when --fp8_backend=msamp is passed) AWS SageMaker Arguments : The following arguments are only useful when training in SageMaker --aws_access_key_id AWS_ACCESS_KEY_ID ( str ) β The AWS_ACCESS_KEY_ID used to launch the Amazon SageMaker training job --aws_secret_access_key AWS_SECRET_ACCESS_KEY ( str ) β The AWS_SECRET_ACCESS_KEY used to launch the Amazon SageMaker training job accelerate estimate-memory Command : accelerate estimate-memory or accelerate-estimate-memory or python -m accelerate.commands.estimate Estimates the total vRAM a particular model hosted on the Hub needs to be loaded in with an estimate for training. Requires that huggingface_hub be installed. When performing inference, typically add β€20% to the result as overall allocation as referenced here . We will have more extensive estimations in the future that will automatically be included in the calculation. Usage : Copied accelerate estimate-memory {MODEL_NAME} --library_name {LIBRARY_NAME} --dtypes {dtype_1} {dtype_2} ... Required Arguments : MODEL_NAME ( str )β The model name on the Hugging Face Hub Optional Arguments : --library_name {timm,transformers} ( str ) β The library the model has an integration with, such as transformers , needed only if this information is not stored on the Hub --dtypes {float32,float16,int8,int4} ( [{float32,float16,int8,int4} ...] ) β The dtypes to use for the model, must be one (or many) of float32 , float16 , int8 , and int4 --trust_remote_code ( bool ) β Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be passed for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
accelerate tpu-config accelerate tpu-config Usage : Copied accelerate tpu-config [arguments] Optional Arguments : -h , --help ( bool ) β Show a help message and exit Config Arguments : Arguments that can be configured through accelerate config . --config_file ( str ) β Path to the config file to use for accelerate. --tpu_name ( str ) β The name of the TPU to use. If not specified, will use the TPU specified in the config file. --tpu_zone ( str ) β The zone of the TPU to use. If not specified, will use the zone specified in the config file. TPU Arguments : Arguments for options run inside the TPU. --command_file ( str ) β The path to the file containing the commands to run on the pod on startup. --command ( str ) β A command to run on the pod. Can be passed multiple times. --install_accelerate ( bool ) β Whether to install accelerate on the pod. Defaults to False. --accelerate_version ( str ) β The version of accelerate to install on the pod. If not specified, will use the latest PyPI version. Specify "dev" to install from GitHub. --debug ( bool ) β If set, will print the command that would be run instead of running it. accelerate test accelerate test or accelerate-test Runs accelerate/test_utils/test_script.py to verify that 🤗 Accelerate has been properly configured on your system and runs. Usage : Copied accelerate test [arguments] Optional Arguments : --config_file CONFIG_FILE ( str ) β The path to use to store the config file. Will default to a file named default_config.yaml in the cache location, which is the content of the environment variable HF_HOME suffixed with "accelerate", or if you don't have such an environment variable, your cache directory ( ~/.cache or the content of XDG_CACHE_HOME ) suffixed with huggingface . -h , --help ( bool ) β Show a help message and exit |
Perplexity_of_fixed-length_models.txt
Perplexity of fixed-length models Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note that the metric applies specifically to classical language models (sometimes called autoregressive or causal language models) and is not well defined for masked language models like BERT (see summary of the models). Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence X = (x_0, x_1, \dots, x_t), then the perplexity of X is \text{PPL}(X) = \exp \left\{ -\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) \right\}, where \log p_\theta (x_i|x_{<i}) is the log-likelihood of the i-th token conditioned on the preceding tokens x_{<i} according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity, which should always be taken into consideration when comparing different models. This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this fantastic blog post on The Gradient. Calculating PPL with fixed-length models If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below. When working with approximate models, however, we typically have a constraint on the number of tokens the model can process. The largest version of GPT-2, for example, has a fixed length of 1024 tokens, so we cannot calculate p_\theta(x_t|x_{<t}) directly when t is greater than 1024.
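Before looking at how fixed-length models work around this limit, here is a quick numerical illustration of the perplexity definition above (a minimal sketch; the per-token log-likelihoods are made up for this example): perplexity is just the exponential of the average negative log-likelihood per token.

import torch

# made-up log-likelihoods log p(x_i | x_<i) for a 4-token sequence
log_probs = torch.tensor([-2.1, -0.9, -1.5, -3.0])

# PPL(X) = exp(-(1/t) * sum_i log p(x_i | x_<i))
ppl = torch.exp(-log_probs.mean())
print(ppl)  # tensor(6.5211), i.e. exp(1.875)

For a real model, these log-likelihoods come from the model's loss, as shown in the worked example later in this guide.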
Instead of conditioning on the entire preceding context, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max input size is k, we then approximate the likelihood of a token x_t by conditioning only on the k-1 tokens that precede it rather than the entire context. When evaluating the model's perplexity of a sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed log-likelihoods of each segment independently. This is quick to compute since the perplexity of each segment can be computed in one forward pass, but it serves as a poor approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will have less context at most of the prediction steps. Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly sliding the context window so that the model has more context when making each prediction. This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by 1 token at a time. This allows computation to proceed much faster while still giving the model a large context to make predictions at each step. Example: Calculating perplexity with GPT-2 in 🤗 Transformers Let's demonstrate this process with GPT-2. Copied from transformers import GPT2LMHeadModel, GPT2TokenizerFast from accelerate.test_utils.testing import get_backend device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) model_id = "openai-community/gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire dataset in memory. Copied from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") With 🤗 Transformers, we can simply pass the input_ids as the labels to our model, and the average negative log-likelihood for each token is returned as the loss. With our sliding-window approach, however, there is overlap in the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating as context to be included in our loss, so we can set these targets to -100 so that they are ignored. The following is an example of how we could do this with a stride of 512. This means that the model will have at least 512 tokens for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens available to condition on).
Copied import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size( 1 ) nll_sum = 0.0 n_tokens = 0 prev_end_loc = 0 for begin_loc in tqdm( range ( 0 , seq_len, stride)): end_loc = min (begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # may be different from stride on last loop input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = - 100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # loss is calculated using CrossEntropyLoss which averages over valid labels # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels # to the left by 1. neg_log_likelihood = outputs.loss # Accumulate the total negative log-likelihood and the total number of tokens num_valid_tokens = (target_ids != - 100 ). sum ().item() # number of valid tokens in target_ids batch_size = target_ids.size( 0 ) num_loss_tokens = num_valid_tokens - batch_size # subtract batch_size due to internal label shift nll_sum += neg_log_likelihood * num_loss_tokens n_tokens += num_loss_tokens prev_end_loc = end_loc if end_loc == seq_len: break avg_nll = nll_sum / n_tokens # average negative log-likelihood per token ppl = torch.exp(avg_nll) Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction, and the better the reported perplexity will typically be. When we run the above with stride = 1024 , i.e. no overlap, the resulting PPL is 19.44 , which is about the same as the 19.93 reported in the GPT-2 paper. By using stride = 512 and thereby employing our striding window strategy, this jumps down to 16.44 . This is not only a more favorable score, but is calculated in a way that is closer to the true autoregressive decomposition of a sequence likelihood. < > Update on GitHub β BERTology Pipelines for webserver inference β Perplexity of fixed-length models Calculating PP L with fixed-length models Example: Calculating perplexity with GP T-2 in π€ Transformers |
Get_Croissant_metadata.txt Get Croissant metadata The dataset viewer automatically generates the metadata in Croissant format (JSON-LD) for every dataset on the Hugging Face Hub. It lists the dataset's name, description, URL, and the distribution of the dataset as Parquet files, including the columns' metadata. The Croissant metadata is available for all the datasets that can be converted to Parquet format. What is Croissant? Croissant is a metadata format built on top of schema.org aimed at describing datasets used for machine learning, to help with indexing, searching and loading them programmatically. Get the metadata This guide shows you how to use the Hugging Face /croissant endpoint to retrieve the Croissant metadata associated with a dataset. The /croissant endpoint takes the dataset name in the URL, for example for the ibm/duorc dataset: Copied import requests headers = {"Authorization": f"Bearer {API_TOKEN}"} API_URL = "https://huggingface.co/api/datasets/ibm/duorc/croissant" def query(): response = requests.get(API_URL, headers=headers) return response.json() data = query() Under the hood it uses the https://datasets-server.huggingface.co/croissant-crumbs endpoint and enriches it with the Hub metadata. The endpoint response is a JSON-LD document containing the metadata in the Croissant format. For example, the ibm/duorc dataset has two subsets, ParaphraseRC and SelfRC (see the List splits and subsets guide for more details about splits and subsets).
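As a small, optional extension of the query above (reusing the data dictionary returned by query(); the field names are taken from the JSON-LD shown below), you can list the record sets declared in the metadata to discover these subsets programmatically:

# `data` is the JSON-LD dictionary returned by query() above
for record_set in data.get("recordSet", []):
    print(record_set["name"])
# ParaphraseRC
# SelfRC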
The metadata links to their Parquet files and describes the type of each of the six columns: plot_id , plot , title , question_id , question , and no_answer : Copied { "@context" : { "@language" : "en" , "@vocab" : "https://schema.org/" , "citeAs" : "cr:citeAs" , "column" : "cr:column" , "conformsTo" : "dct:conformsTo" , "cr" : "http://mlcommons.org/croissant/" , "data" : { "@id" : "cr:data" , "@type" : "@json" } , "dataBiases" : "cr:dataBiases" , "dataCollection" : "cr:dataCollection" , "dataType" : { "@id" : "cr:dataType" , "@type" : "@vocab" } , "dct" : "http://purl.org/dc/terms/" , "extract" : "cr:extract" , "field" : "cr:field" , "fileProperty" : "cr:fileProperty" , "fileObject" : "cr:fileObject" , "fileSet" : "cr:fileSet" , "format" : "cr:format" , "includes" : "cr:includes" , "isLiveDataset" : "cr:isLiveDataset" , "jsonPath" : "cr:jsonPath" , "key" : "cr:key" , "md5" : "cr:md5" , "parentField" : "cr:parentField" , "path" : "cr:path" , "personalSensitiveInformation" : "cr:personalSensitiveInformation" , "recordSet" : "cr:recordSet" , "references" : "cr:references" , "regex" : "cr:regex" , "repeated" : "cr:repeated" , "replace" : "cr:replace" , "sc" : "https://schema.org/" , "separator" : "cr:separator" , "source" : "cr:source" , "subField" : "cr:subField" , "transform" : "cr:transform" } , "@type" : "sc:Dataset" , "distribution" : [ { "@type" : "cr:FileObject" , "@id" : "repo" , "name" : "repo" , "description" : "The Hugging Face git repository." , "contentUrl" : "https://huggingface.co/datasets/ibm/duorc/tree/refs%2Fconvert%2Fparquet" , "encodingFormat" : "git+https" , "sha256" : "https://github.com/mlcommons/croissant/issues/80" } , { "@type" : "cr:FileSet" , "@id" : "parquet-files-for-config-ParaphraseRC" , "name" : "parquet-files-for-config-ParaphraseRC" , "description" : "The underlying Parquet files as converted by Hugging Face (see: https://huggingface.co/docs/dataset-viewer/parquet)." , "containedIn" : { "@id" : "repo" } , "encodingFormat" : "application/x-parquet" , "includes" : "ParaphraseRC/*/*.parquet" } , { "@type" : "cr:FileSet" , "@id" : "parquet-files-for-config-SelfRC" , "name" : "parquet-files-for-config-SelfRC" , "description" : "The underlying Parquet files as converted by Hugging Face (see: https://huggingface.co/docs/dataset-viewer/parquet)." , "containedIn" : { "@id" : "repo" } , "encodingFormat" : "application/x-parquet" , "includes" : "SelfRC/*/*.parquet" } ] , "recordSet" : [ { "@type" : "cr:RecordSet" , "@id" : "ParaphraseRC" , "name" : "ParaphraseRC" , "description" : "ibm/duorc - 'ParaphraseRC' subset\n\nAdditional information:\n- 3 splits: train, validation, test\n- 1 skipped column: answers" , "field" : [ { "@type" : "cr:Field" , "@id" : "ParaphraseRC/plot_id" , "name" : "ParaphraseRC/plot_id" , "description" : "Column 'plot_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "plot_id" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/plot" , "name" : "ParaphraseRC/plot" , "description" : "Column 'plot' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "plot" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/title" , "name" : "ParaphraseRC/title" , "description" : "Column 'title' from the Hugging Face parquet file." 
, "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "title" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/question_id" , "name" : "ParaphraseRC/question_id" , "description" : "Column 'question_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "question_id" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/question" , "name" : "ParaphraseRC/question" , "description" : "Column 'question' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "question" } } } , { "@type" : "cr:Field" , "@id" : "ParaphraseRC/no_answer" , "name" : "ParaphraseRC/no_answer" , "description" : "Column 'no_answer' from the Hugging Face parquet file." , "dataType" : "sc:Boolean" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-ParaphraseRC" } , "extract" : { "column" : "no_answer" } } } ] } , { "@type" : "cr:RecordSet" , "@id" : "SelfRC" , "name" : "SelfRC" , "description" : "ibm/duorc - 'SelfRC' subset\n\nAdditional information:\n- 3 splits: train, validation, test\n- 1 skipped column: answers" , "field" : [ { "@type" : "cr:Field" , "@id" : "SelfRC/plot_id" , "name" : "SelfRC/plot_id" , "description" : "Column 'plot_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "plot_id" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/plot" , "name" : "SelfRC/plot" , "description" : "Column 'plot' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "plot" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/title" , "name" : "SelfRC/title" , "description" : "Column 'title' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "title" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/question_id" , "name" : "SelfRC/question_id" , "description" : "Column 'question_id' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "question_id" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/question" , "name" : "SelfRC/question" , "description" : "Column 'question' from the Hugging Face parquet file." , "dataType" : "sc:Text" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "question" } } } , { "@type" : "cr:Field" , "@id" : "SelfRC/no_answer" , "name" : "SelfRC/no_answer" , "description" : "Column 'no_answer' from the Hugging Face parquet file." , "dataType" : "sc:Boolean" , "source" : { "fileSet" : { "@id" : "parquet-files-for-config-SelfRC" } , "extract" : { "column" : "no_answer" } } } ] } ] , "name" : "duorc" , "description" : "\n\t\n\t\t\n\t\n\t\n\t\tDataset Card for duorc\n\t\n\n\n\t\n\t\t\n\t\n\t\n\t\tDataset Summary\n\t\n\nThe DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. 
SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and theβ¦ See the full description on the dataset page: https://huggingface.co/datasets/ibm/duorc." , "alternateName" : [ "ibm/duorc" , "DuoRC" ] , "creator" : { "@type" : "Organization" , "name" : "IBM" , "url" : "https://huggingface.co/ibm" } , "keywords" : [ "question-answering" , "text2text-generation" , "abstractive-qa" , "extractive-qa" , "crowdsourced" , "crowdsourced" , "monolingual" , "100K<n<1M" , "10K<n<100K" , "original" , "English" , "mit" , "Croissant" , "arxiv:1804.07927" , "πΊπΈ Region: US" ] , "license" : "https://choosealicense.com/licenses/mit/" , "sameAs" : "https://duorc.github.io/" , "url" : "https://huggingface.co/datasets/ibm/duorc" } Load the dataset To load the dataset, you can use the mlcroissant library. It provides a simple way to load datasets from Croissant metadata. < > Update on GitHub β Explore dataset statistics Overview β Get Croissant metadata What is Croissant? Get the metadata Load the dataset |
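To make the "Load the dataset" step above concrete, here is a minimal sketch using the mlcroissant library. The record-set name is taken from the metadata shown above, but the exact Dataset / records API may differ slightly between mlcroissant versions, so treat this as illustrative rather than definitive:

import mlcroissant as mlc

# Point mlcroissant at the Croissant JSON-LD served by the Hub
ds = mlc.Dataset(jsonld="https://huggingface.co/api/datasets/ibm/duorc/croissant")

# Stream a few records from the SelfRC record set declared in the metadata above
for i, record in enumerate(ds.records(record_set="SelfRC")):
    print(record)
    if i == 2:
        break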
RLOO_Trainer.txt | RLOO Trainer Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation RLOO Trainer TRL π‘ View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started RLOO Trainer TRL supports training LLMs with REINFORCE Leave-One-Out (RLOO). The idea is that instead of using a value function, RLOO generates K completions for each prompt. For each completion, RLOO uses the mean scores from the other K-1 completions as a baseline to calculate the advantage. RLOO also models the entire completion as a single action, where as PPO models each token as an action. Note that REINFORCE / A2C is a special case of PPO, when the number of PPO epochs is 1 and the number of mini-batches is 1, which is how we implement RLOO in TRL. References: Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs A2C is a special case of PPO Fine-Tuning Language Models from Human Preferences Learning to Summarize from Human Feedback The N Implementation Details of RLHF with PPO The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization Get started To just run a RLOO script to make sure the trainer can run, you can run the following command to train a RLOO model with a dummy reward model. Copied python examples/scripts/rloo/rloo.py \ --dataset_name trl-internal-testing/descriptiveness-sentiment-trl-style \ --dataset_train_split descriptiveness \ --learning_rate 3e-6 \ --output_dir models/minimal/rloo \ --per_device_train_batch_size 64 \ --gradient_accumulation_steps 1 \ --total_episodes 10000 \ --model_name_or_path EleutherAI/pythia-14m \ --reward_model_path EleutherAI/pythia-14m \ --missing_eos_penalty 1.0 Explanation of the logged metrics The logged metrics are as follows. Here is an example tracked run at Weights and Biases eps : Tracks the number of episodes per second. objective/kl : The mean Kullback-Leibler (KL) divergence between the current policy and reference policy. objective/entropy : The mean entropy of the policy, indicating the randomness of the actions chosen by the policy. 
objective/non_score_reward : The mean reward from non-score-related sources, basically beta * kl.sum(1) , where beta is the KL penalty coefficient and kl is the per-token KL divergence. objective/rlhf_reward : The mean RLHF reward, which is score - non_score_reward . objective/scores : The mean scores returned by the reward model / environment. policy/approxkl_avg : The average approximate KL divergence between consecutive PPO policies. Note that this is not the same as objective/kl . policy/clipfrac_avg : The average fraction of policy updates that are clipped, indicating how often the policy updates are constrained to prevent large changes. loss/policy_avg : The average policy loss, indicating how well the policy is performing. val/clipfrac_avg : The average fraction of value function updates that are clipped, similar to policy/clipfrac_avg but for the value function. policy/entropy_avg : The average entropy of the policy during training, indicating how diverse the policyβs actions are. val/ratio : The mean ratio of the current policy probability to the old policy probability, providing a measure of how much the policy has changed. val/ratio_var : The variance of the val/ratio , indicating the variability in policy changes. val/num_eos_tokens : The number of end-of-sequence (EOS) tokens generated, which can indicate the number of complete responses. lr : lr: The current learning rate used by the optimizer. episode : episode: The current global step or episode count in the training process. Cookbook Debugging TIP: objective/rlhf_reward : this is the ultimate objective of the RLHF training. If training works as intended, this metric should keep going up. Debugging TIP: val/ratio : this number should float around 1.0, and it gets clipped by --cliprange 0.2 with PPOβs surrogate loss. So if this ratio is too high like 2.0 or 1000.0 or too small like 0.1, it means the updates between consecutive policies are too drastic. You should try undertand why this is happening and try to fix it. Memory TIP: If you are running out of memory, you can try to reduce the --per_device_train_batch_size or increase the --gradient_accumulation_steps to reduce the memory footprint. Memory TIP: If you have multiple GPUs, you can also run training with DeepSpeed stage 3 to reduce the memory footprint accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml . Usage TIP: We recommend to use the βEOS trickβ via --missing_eos_penalty , which subtracts a static scalar penalty from the score of completions that do not end with an EOS token. This can help the model learn to generate more coherent completions. What is my model doing exactly? To help you understand what your model is doing, we periodically log some sample completions from the model. Here is an example of a completion. In an example tracked run at Weights and Biases , it looks like the following, allowing you to see the modelβs response at different stages of training. By default we generate --num_sample_generations 10 during training, but you can customize the number of generations. In the logs the sampled generations look like Copied βββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββββββββββββ³βββββββββββ β query β model response β score β β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ© β SUBREDDIT: r/AskReddit β I'm in love with a friend, and β 3.921875 β β β I don't know how to get rid of β β β TITLE: How do you get someone β those feelings. I'm β β β out of your head? 
β desperate.<|endoftext|>[PAD][Pβ¦ β β β β β β β POST: Hi, β β β β I'm 22 , and I have been with my β β β β girlfriend for 5 years now. We β β β β recently moved together. We've β β β β always loved each other β β β β intensely. β β β β β β β β Problem, I recently started to β β β β have feelings for an other β β β β person (a friend). This person β β β β has had a boyfriend for now 3 β β β β years, and has absolutely no β β β β ideas. Those feelings were so β β β β strong, it was hard to hide β β β β them. After 2 months of me β β β β being distant and really sad, β β β β my girlfriend forced me to say β β β β what was bothering me . I'm not β β β β a good liar, and now she knows. β β β β β β β β We decided to give us a week β β β β alone, I went to my parents. β β β β β β β β Now, I'm completely lost. I β β β β keep on thinking about this β β β β person, and I hate that . I β β β β would like for those feelings β β β β to go away, to leave me alone. β β β β But I can't. β β β β β β β β What do I do? It's been 3 β β β β months now, and I'm just β β β β desperate. β β β β β β β β TL;DR: β β β βββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββΌβββββββββββ€ β SUBREDDIT: r/pettyrevenge β My mom woke me up with a loud β 6.84375 β β β TV. I blasted Gangnam Style on β β β TITLE: So, my mom woke me up β repeat , with the bass cranked β β β with a loud TV. β up as high as it could β β β β go.<|endoftext|>[PAD][PAD][PADβ¦ β β β POST: She was in her living β β β β room, watching TV. This was at β β β β about 8 : 30 in the morning, and β β β β she was exercising. She turned β β β β the TV up extra loud to hear it β β β β over her excercycle, and woke β β β β me up. I went in there asking β β β β for her to turn it down. She β β β β said she didn't have to ; I β β β β explained that I always used β β β β headphones so she didn't have β β β β to deal with my noise and that β β β β she should give me a little β β β β more respect, given that I paid β β β β rent at the time . β β β β β β β β She disagreed. I went back to β β β β my room, rather pissed off at β β β β the lack of equality. I had no β β β β lock on my door; but I had a β β β β dresser right next to it , so I β β β β pulled one of the drawers out β β β β enough so that it caused the β β β β door to not be openable. Then, β β β β I turned my speakers up really β β β β loud and blasted Gangnam Style β β β β on repeat , with the bass β β β β cranked up as high as it could β β β β go. β β β β β β β β If you hate Gangnam Style for β β β β being overplayed, you will see β β β β why I chose that particular β β β β song. I personally don't mind β β β β it . But here's the thing about β β β β my bass; it vibrates the walls, β β β β making one hell of a lot of β β β β noise. Needless to say , my mom β β β β was not pleased and shut off β β β β the internet. But it was oh so β β β β worth it . β β β β β β β β TL;DR: β β β βββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄βββββββββββ Implementation details The bulk of RLOOTrainer is based on the PPO implementation, which is based on the The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . 
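As a tiny, hand-checkable illustration of the leave-one-out baseline before the vectorized code below (the reward values are purely illustrative): with rloo_k = 4 completions for a single prompt, each completion is baselined against the mean reward of the other three.

rewards = [1.0, 2.0, 3.0, 4.0]   # illustrative rewards for the 4 completions of one prompt
k = len(rewards)
baselines = [(sum(rewards) - r) / (k - 1) for r in rewards]   # mean of the other k-1 rewards
advantages = [r - b for r, b in zip(rewards, baselines)]
print(advantages)  # approximately [-2.0, -0.67, 0.67, 2.0]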
Below is a vectorized advantage calculation for RLOO: Copied def test_rloo_reward (): local_batch_size = 3 rloo_k = 4 rlhf_reward = torch.tensor([ 1 , 2 , 3 , # first rlhf reward for three prompts 2 , 3 , 4 , # second rlhf reward for three prompts 5 , 6 , 7 , # third rlhf reward for three prompts 8 , 9 , 10 , # fourth rlhf reward for three prompts ]). float () # here we have 3 prompts which have 4 completions each baseline = (rlhf_reward. sum ( 0 ) - rlhf_reward) / (rloo_k - 1 ) advantages = torch.zeros_like(rlhf_reward) for i in range ( 0 , len (advantages), local_batch_size): other_response_rlhf_rewards = [] for j in range ( 0 , len (advantages), local_batch_size): if i != j: other_response_rlhf_rewards.append(rlhf_reward[j : j + local_batch_size]) advantages[i : i + local_batch_size] = rlhf_reward[i : i + local_batch_size] - torch.stack(other_response_rlhf_rewards).mean( 0 ) assert ( 1 - ( 2 + 5 + 8 ) / 3 - advantages[ 0 ].item()) < 1e-6 # First rlhf reward for the first prompt assert ( 6 - ( 3 + 2 + 9 ) / 3 - advantages[ 7 ].item()) < 1e-6 # Third rlhf reward for the second prompt # Vectorized implementation rlhf_reward = rlhf_reward.reshape(rloo_k, local_batch_size) baseline = (rlhf_reward. sum ( 0 ) - rlhf_reward) / (rloo_k - 1 ) vec_advantages = rlhf_reward - baseline torch.testing.assert_close(vec_advantages.flatten(), advantages) Benchmark experiments To validate the RLOO implementation works, we ran experiment on the 1B model. Here are the command we used to run the experiment. We take the SFT / RM models directly from The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . Copied accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \ --output_dir models/minimal/rloo_tldr \ --dataset_name trl-internal-testing/tldr-preference-sft-trl-style \ --dataset_test_split validation \ --num_ppo_epochs 2 \ --num_mini_batches 2 \ --learning_rate 3e-6 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 16 \ --total_episodes 1000000 \ --model_name_or_path EleutherAI/pythia-1b-deduped \ --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \ --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \ --local_rollout_forward_batch_size 16 \ --missing_eos_penalty 1.0 \ --stop_token eos \ --kl_coef 0.03 Checkpoints and experiment tracking are available at: π€ Model checkpoint π Tracked experiment To evaluate, we use vLLM to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR. For more information on how to use judges, see Judges . Copied $ python examples/scripts/evals/judge_tldr.py --model_name_or_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 33.00% $ python examples/scripts/evals/judge_tldr.py --model_name_or_path vwxyzjn/rloo_tldr --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 51.20% The RLOO checkpoint gets a 51.2% preferred rate vs the 33.0% preference rate of the SFT checkpoint. This is a good sign that the RLOO training is working as intended. 
Metrics: Copied # pip install openrlbenchmark==0.2.1a5 # see https://github.com/openrlbenchmark/openrlbenchmark#get-started for documentation # to use it, change `?we=huggingface&wpn=trl` to your own project and `?tag=pr-1540` to your own tag python -m openrlbenchmark.rlops_multi_metrics \ --filters '?we=huggingface&wpn=trl&xaxis=train/episode&ceik=output_dir&cen=sft_model_path&metrics=train/objective/rlhf_reward&metrics=train/objective/scores&metrics=train/objective/kl&metrics=train/objective/non_score_reward&metrics=train/objective/entropy&metrics=train/policy/approxkl_avg&metrics=train/policy/clipfrac_avg&metrics=train/loss/policy_avg&metrics=train/policy/entropy_avg&metrics=train/val/ratio&metrics=train/val/ratio_var&metrics=train/val/num_eos_tokens&metrics=train/lr&metrics=train/eps' \ "cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr?tag=pr-1540" \ --env-ids models/minimal/rloo_tldr \ --pc.ncols 4 \ --pc.ncols-legend 1 \ --pc.xlabel "Episode" \ --output-filename benchmark/trl/pr-1540/rloo \ --scan-history RLOOTrainer class trl. RLOOTrainer < source > ( config : RLOOConfig processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] policy : Module ref_policy : Module reward_model : Module train_dataset : Dataset data_collator : typing.Optional[transformers.data.data_collator.DataCollatorWithPadding] = None eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None optimizers : tuple = (None, None) callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None ) create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) β The name of the model. dataset_name ( str , optional , defaults to None ) β The name of the dataset used for training. tags ( str , list[str] or None , optional , defaults to None ) β Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer . RLOOConfig class trl. 
RLOOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-05 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 
'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False dataset_num_proc : typing.Optional[int] = None num_mini_batches : int = 1 total_episodes : typing.Optional[int] = None local_rollout_forward_batch_size : int = 64 num_sample_generations : int = 10 response_length : int = 53 stop_token : typing.Optional[typing.Literal['eos']] = None stop_token_id : typing.Optional[int] = None temperature : float = 0.7 missing_eos_penalty : typing.Optional[float] = None sft_model_path : str = 'EleutherAI/pythia-160m' world_size : typing.Optional[int] = None num_total_batches : typing.Optional[int] = None micro_batch_size : typing.Optional[int] = None local_batch_size : typing.Optional[int] = None batch_size : typing.Optional[int] = None local_mini_batch_size : typing.Optional[int] = None mini_batch_size : typing.Optional[int] = None exp_name : str = 'rloo_config' reward_model_path : str = 'EleutherAI/pythia-160m' num_ppo_epochs : int = 4 whiten_rewards : bool = False kl_coef : float = 0.05 cliprange : float = 0.2 rloo_k : int = 2 ) Parameters exp_name ( str , optional , defaults to os.path.basename(__file__)[ -- -len(".py")] ): Name of this experiment. reward_model_path ( str , optional , defaults to "EleutherAI/pythia-160m" ) β Path to the reward model. num_ppo_epochs ( int , optional , defaults to 4 ) β Number of epochs to train. whiten_rewards ( bool , optional , defaults to False ) β Whether to whiten the rewards. 
kl_coef ( float , optional , defaults to 0.05 ) β KL coefficient. cliprange ( float , optional , defaults to 0.2 ) β Clip range. rloo_k ( int , optional , defaults to 2 ) β REINFORCE Leave-One-Out (RLOO) number of online samples per prompt. Configuration class for the RLOOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. |
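To tie RLOOConfig and RLOOTrainer together, here is a minimal construction sketch based on the signatures documented above. The checkpoints, hyperparameters, and the sequence-classification reward head are illustrative assumptions, and the prompt tokenization that the example script performs before training is omitted here:

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer)
from trl import RLOOConfig, RLOOTrainer

# RLOOConfig can be built directly, or exposed as CLI flags via HfArgumentParser(RLOOConfig)
config = RLOOConfig(output_dir="models/minimal/rloo", rloo_k=2, kl_coef=0.05, total_episodes=10000)

model_id = "EleutherAI/pythia-14m"  # the small model used in the quick-start command above
tokenizer = AutoTokenizer.from_pretrained(model_id)
policy = AutoModelForCausalLM.from_pretrained(model_id)
ref_policy = AutoModelForCausalLM.from_pretrained(model_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

# The trainer expects a dataset of tokenized prompts (input_ids); that preprocessing is skipped here.
train_dataset = load_dataset("trl-internal-testing/descriptiveness-sentiment-trl-style",
                             split="descriptiveness")

trainer = RLOOTrainer(
    config=config,
    processing_class=tokenizer,
    policy=policy,
    ref_policy=ref_policy,
    reward_model=reward_model,
    train_dataset=train_dataset,
)
trainer.train()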
Training_FAQ.txt | Training FAQ Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation Training FAQ TRL π‘ View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Training FAQ What Metrics Should I Look at? When performing classical supervised fine-tuning of language models, the loss (especially the validation loss) serves as a good indicator of the training progress. However, in Reinforcement Learning (RL), the loss becomes less informative about the modelβs performance, and its value may fluctuate while the actual performance improves. To address this, we recommend focusing on two key metrics first: Mean Reward : The primary goal is to maximize the reward achieved by the model during RL training. Objective KL Divergence : KL divergence (Kullback-Leibler divergence) measures the dissimilarity between two probability distributions. In the context of RL training, we use it to quantify the difference between the current model and a reference model. Ideally, we want to keep the KL divergence between 0 and 10 to ensure the modelβs generated text remains close to what the reference model produces. However, there are more metrics that can be useful for debugging, checkout the logging section . Why Do We Use a Reference Model, and Whatβs the Purpose of KL Divergence? When training RL models, optimizing solely for reward may lead to unexpected behaviors, where the model exploits the environment in ways that donβt align with good language generation. In the case of RLHF, we use a reward model trained to predict whether a generated text is highly ranked by humans. However, the RL model being optimized against the reward model may learn patterns that yield high reward but do not represent good language. This can result in extreme cases where the model generates texts with excessive exclamation marks or emojis to maximize the reward. In some worst-case scenarios, the model may generate patterns completely unrelated to natural language yet receive high rewards, similar to adversarial attacks. Figure: Samples without a KL penalty from https://huggingface.co/papers/1909.08593 . 
To address this issue, we add a penalty to the reward function based on the KL divergence between the current model and the reference model. By doing this, we encourage the model to stay close to what the reference model generates. What Is the Concern with Negative KL Divergence? If you generate text by purely sampling from the model distribution things work fine in general. But when you use the generate method there are a few caveats because it does not always purely sample depending on the settings which can cause KL-divergence to go negative. Essentially when the active model achieves log_p_token_active < log_p_token_ref we get negative KL-div. This can happen in a several cases: top-k sampling : the model can smooth out the probability distribution causing the top-k tokens having a smaller probability than those of the reference model but they still are selected min_length : this ignores the EOS token until min_length is reached. thus the model can assign a very low log prob to the EOS token and very high probs to all others until min_length is reached These are just a few examples. Why is negative KL an issue? The total reward R is computed R = r - beta * KL so if the model can learn how to drive KL-divergence negative it effectively gets a positive reward. In many cases it can be much easier to exploit such a bug in the generation than actually learning the reward function. In addition the KL can become arbitrarily small thus the actual reward can be very small compared to it. So how should you generate text for PPO training? Letβs have a look! How to generate text for training? In order to avoid the KL issues described above we recommend to use the following settings: Copied generation_kwargs = { "min_length" : - 1 , # don't ignore the EOS token (see above) "top_k" : 0.0 , # no top-k sampling "top_p" : 1.0 , # no nucleus sampling "do_sample" : True , # yes, we want to sample "pad_token_id" : tokenizer.eos_token_id, # most decoder models don't have a padding token - use EOS token instead "max_new_tokens" : 32 , # specify how many tokens you want to generate at most } With these settings we usually donβt encounter any issues. You can also experiments with other settings but if you encounter issues with negative KL-divergence try to go back to these and see if they persist. How can debug your own use-case? Debugging the RL pipeline can be challenging due to its complexity. Here are some tips and suggestions to make the process easier: Start from a working example : Begin with a working example from the trl repository and gradually modify it to fit your specific use-case. Changing everything at once can make it difficult to identify the source of potential issues. For example, you can start by replacing the model in the example and once you figure out the best hyperparameters try to switch to your dataset and reward model. If you change everything at once you wonβt know where a potential problem comes from. Start small, scale later : Training large models can be very slow and take several hours or days until you see any improvement. For debugging this is not a convenient timescale so try to use small model variants during the development phase and scale up once that works. That being said you sometimes have to be careful as small models might not have the capacity to solve a complicated task either. Start simple : Try to start with a minimal example and build complexity from there. 
Your use case might require, for example, a complicated reward function consisting of many different rewards: try to use one signal first, see if you can optimize that, and then add more complexity.
Inspect the generations: It's always a good idea to inspect what the model is generating. Maybe there is a bug in your post-processing or your prompt, or bad settings cause generations to be cut off too soon. These things are very hard to see in the metrics but very obvious if you look at the generations.
Inspect the reward model: If your reward is not improving over time, maybe there's an issue with the reward model. You can look at extreme cases to see if it does what it should: e.g. in the sentiment case you can check whether simple positive and negative examples really get different rewards (a minimal sanity-check sketch is shown below). You can also look at the distribution of your dataset. Finally, maybe the reward is dominated by the query, which the model can't affect, so you might need to normalize it (e.g. reward of query+response minus reward of the query).

These are just a few tips that we find helpful - if you have more useful tricks feel free to open a PR to add them as well!
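As a quick illustration of the "inspect the reward model" tip, here is a minimal, self-contained sketch of such a sanity check. The model name and example sentences are arbitrary choices for the illustration, not something prescribed by this FAQ.

from transformers import pipeline

# Any sentiment classifier can be used for this check; this one is a common choice
# in sentiment-tuning examples.
sentiment = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

examples = [
    "This movie was absolutely wonderful, I loved every minute of it.",
    "This movie was a complete waste of time, the acting was terrible.",
]

for text in examples:
    # A healthy reward model should clearly separate the two examples;
    # if the labels/scores are nearly identical, the reward signal is suspect.
    print(text, "->", sentiment(text))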
LayerNorm_Tuning.txt | LayerNorm Tuning

LayerNorm Tuning (LN Tuning) is a PEFT method that only fine-tunes the parameters of the LayerNorm layers in a model. The paper tested this method on large language models and showed that it can achieve strong performance with a significant reduction in the number of trainable parameters and in GPU memory usage. However, the method is not limited to language models and can be applied to any model that uses LayerNorm layers.

In this implementation, the default is that all LayerNorm layers inside the model are fine-tuned, but the method can also be used to target other layer types, such as MLP or Attention layers, by specifying the target_modules in the LNTuningConfig (a short sketch is given at the end of this page).

The abstract from the paper is:

This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full parameter finetuning or LoRA, its benefits on efficiency are substantial. For example, when compared to LoRA on a 13B model scale, performance can be enhanced by an average of over 20% across five multi-modal tasks, and meanwhile, results in a significant reduction of trainable parameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning only with conversational data can improve efficiency further.
Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model.

LNTuningConfig

class peft.LNTuningConfig( task_type: typing.Union[str, TaskType, NoneType] = None, peft_type: typing.Union[str, PeftType, NoneType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: typing.Optional[str] = None, revision: typing.Optional[str] = None, inference_mode: bool = False, target_modules: Optional[Union[list[str], str]] = None, exclude_modules: Optional[Union[list[str], str]] = None, modules_to_save: Optional[Union[list[str], str]] = None )

Parameters:
target_modules (Optional[Union[List[str], str]]) - List of module names or a regex expression of the module names to replace with LNTuning, for example '.*decoder.*' or '.*encoder.*'. If this is not specified, modules are chosen according to the model architecture. If the architecture is not known, an error is raised; in this case, you should specify the target modules manually.
exclude_modules (Optional[Union[List[str], str]]) - The names of the modules to not apply the adapter to. When passing a string, a regex match is performed. When passing a list of strings, either an exact match is performed or it is checked whether the name of the module ends with any of the passed strings.
modules_to_save (Optional[Union[List[str], str]]) - List of modules to be set as trainable and saved in the final checkpoint. For example, in Sequence Classification or Token Classification tasks, the final layer classifier/score is randomly initialized and therefore needs to be trainable and saved.

This is the configuration class to store the configuration of a LNTuningModel.

LNTuningModel

class peft.LNTuningModel( model, config, adapter_name, low_cpu_mem_usage: bool = False ) -> torch.nn.Module

Parameters:
model (torch.nn.Module) - The model to be adapted.
config (LNTuningConfig) - The configuration of the LNTuning model.
adapter_name (str) - The name of the adapter, defaults to "default".
low_cpu_mem_usage (bool, optional, defaults to False) - This option has no effect on LN tuning but exists for consistency with other PEFT methods.

Returns: torch.nn.Module - The adapted model with LayerNorm tuning applied.

Creates LayerNorm tuning from a pretrained transformer model. The method is described in detail in https://arxiv.org/abs/2312.11420.

Example:

>>> from transformers import AutoModelForCausalLM
>>> from peft import get_peft_model, TaskType, LNTuningConfig

>>> peft_config = LNTuningConfig(
...     task_type=TaskType.CAUSAL_LM,
... )

>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
>>> model = get_peft_model(model, peft_config)
>>> model.print_trainable_parameters()

Attributes:
model (PreTrainedModel) - The model to be adapted.
peft_config (LNTuningConfig) - The configuration of the LNTuning model.

disable_adapter_layers() - Disable all adapters. When disabling all adapters, the model output corresponds to the output of the base model.

enable_adapter_layers() - Enable all adapters. Call this if you have previously disabled all adapters and want to re-enable them.
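As mentioned at the top of this page, target_modules can restrict LN Tuning to specific modules. The sketch below is a hedged illustration: the module names are typical for LLaMA-style architectures but are an assumption here, so adjust them to the actual layer names of your own model.

from transformers import AutoModelForCausalLM
from peft import get_peft_model, TaskType, LNTuningConfig

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Assumption: LLaMA-style models name their norm layers "input_layernorm",
# "post_attention_layernorm" and "norm"; other architectures use different names.
peft_config = LNTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["input_layernorm", "post_attention_layernorm", "norm"],
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the targeted modules are trainable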
Interface__TranslationOutputValue.txt | Interface: TranslationOutputValue

Properties

translation_text
• translation_text: string
The string after translation.
Defined in inference/src/tasks/nlp/translation.ts:16
Distributed_Training_with_optimum-neuron.txt | Distributed Training with optimum-neuron

AWS Trainium instances are great for training models. They can contain up to 16 Neuron devices, each with 2 Neuron cores and 32GB of memory (16GB per core). For example, a trn1.32xlarge instance has 16 x 32GB = 512GB of memory.

But there is a caveat: by default, each Neuron core is an independent data-parallel worker. This means that the model, the gradient state and the optimizer state, amounting to approximately 4 times the model size, must fit in each Neuron core (16GB) to be able to train, and if that is the case, the activations must also fit in the remaining memory.

To alleviate that, optimum-neuron supports parallelism features that enable you to harness the full power of your Trainium instance:

ZeRO-1: an optimization of data parallelism that shards the optimizer state (which usually represents half of the memory needed on the device) over the data-parallel ranks.
Tensor Parallelism: a technique that shards each of your model's matrix multiplications along a given axis (row or column) across multiple devices. It is also known as intra-layer model parallelism. The number of devices to shard your parameters on is called the tensor_parallel_size.
Sequence parallelism: an optimization on top of Tensor Parallelism that shards the activations on the sequence axis outside of the tensor-parallel regions. It is useful because it saves memory by sharding the activations.
Pipeline Parallelism: a technique that shards the model block layers across multiple devices. It is also known as inter-layer model parallelism. The number of devices to shard your layers on is called the pipeline_parallel_size.

The good news is that it is possible to combine those techniques, and optimum-neuron makes it very easy! All the example scripts provided in the optimum-neuron repo have those features implemented via the NeuronTrainer.

How to enable ZeRO-1?

Whether you use the NeuronTrainer or write your own training script that uses the NeuronAccelerator, it is very easy to enable the ZeRO-1 optimization.

Via the NeuronTrainer

from optimum.neuron import NeuronTrainingArguments, NeuronTrainer

# To enable ZeRO-1, set the `zero_1` argument to `True` in the training arguments.
training_args = NeuronTrainingArguments(
    ...,
    zero_1=True,
)

trainer = NeuronTrainer(
    model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()

Since the example scripts use the NeuronTrainer, you can enable ZeRO-1 when using them by adding the --zero_1 flag to your command line. For example:

torchrun --nproc_per_node=2 examples/language-modeling/run_clm.py \
  --model_name_or_path TinyLlama/TinyLlama-1.1B-Chat-v0.6 \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --do_train \
  --per_device_train_batch_size 1 \
  --block_size 1024 \
  --bf16 \
  --zero_1 \
  --output_dir my_training/

Via the NeuronAccelerator

There is a little bit more work to do when not using the NeuronTrainer:

(Optional) Wrap the optimizer class to make it lazy. When ZeRO-1 is enabled, the original optimizer is overridden to use a sharded version of it. Hence, it is possible to load the original optimizer lazily so that the optimizer state is not materialized until it is actually sharded.

from torch.optim import AdamW
from optimum.neuron.distributed import make_optimizer_constructor_lazy

lazy_adamw = make_optimizer_constructor_lazy(AdamW)

Set the zero_1 argument to True when instantiating the NeuronAccelerator.

accelerator = NeuronAccelerator(
    ...
    zero_1=True,
)

model = ...
lazy_optimizer = lazy_adamw(...)  # Actually instantiate the optimizer.

model, optimizer = accelerator.prepare(model, lazy_optimizer)

How to enable Tensor Parallelism?

Just as for ZeRO-1, it is possible to apply Tensor Parallelism either with the NeuronTrainer or with the NeuronAccelerator. When doing Tensor Parallelism, you have different settings:

The tensor_parallel_size. Ideally it should be the smallest value for which the model fits (a rough sizing sketch is given after this list).
Whether or not sequence parallelism should be enabled. Sequence parallelism shards the activations on the sequence axis outside of the tensor-parallel regions. It is useful because it saves memory by sharding the activations.
Whether or not parallelization of the embedding layer should be done. By default it is done because it offers multiple benefits:
- Parallelizing the embedding layer saves memory, which can enable fitting a bigger batch size and/or sequence length.
- For language models, where the embedding layer weights and the language-modeling head weights are usually tied, the language-modeling head ends up parallel as well and does not need to all-gather its output, since it is fed to a cross-entropy loss compatible with parallelism, saving expensive communication.
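To make the "smallest tensor_parallel_size for which the model fits" guideline concrete, here is a rough, illustrative sizing sketch. It only applies the "weights + gradients + optimizer state ~ 4x the model size" rule of thumb from the introduction and ignores activations; the byte count and candidate sizes are assumptions, so treat it as a starting point rather than an exact rule.

def smallest_tp_size(num_params_billion, bytes_per_param=2, overhead_factor=4,
                     core_memory_gb=16, candidate_sizes=(1, 2, 8, 32)):
    # bytes_per_param=2 assumes bf16 weights; overhead_factor=4 is the rough
    # "weights + gradients + optimizer state" multiplier quoted above.
    # candidate_sizes is an assumption; valid values depend on the instance and model.
    total_gb = num_params_billion * bytes_per_param * overhead_factor
    for tp in candidate_sizes:
        per_core_gb = total_gb / tp
        if per_core_gb <= core_memory_gb:
            return tp, per_core_gb
    return None, total_gb

# A 7B-parameter model needs roughly 7 * 2 * 4 = 56GB of training state, so the first
# candidate that fits in a 16GB core is tensor_parallel_size=8 (~7GB per core),
# leaving room for activations.
print(smallest_tp_size(7))

Batch size and sequence length then determine the activation footprint on top of this estimate, which is where sequence parallelism helps.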
On top of that, it is very important to make sure that the original model is loaded in an efficient manner: the training script is going to be called by torchrun, which will dispatch it to workers, one worker per core. If each worker (there are 32 of them in a trn1.32xlarge instance) loads the full model weights, it can take a lot of time and go out of memory really fast. optimum-neuron provides a context manager, distributed.lazy_load_for_parallelism(), that loads the model lazily to prevent that: only the parameters of the corresponding model shard are materialized in each worker.

Via the NeuronTrainer

from optimum.neuron import NeuronTrainingArguments, NeuronTrainer
from optimum.neuron.distributed import lazy_load_for_parallelism

# Specify the `tensor_parallel_size` in the training arguments.
training_args = NeuronTrainingArguments(
    ...,
    tensor_parallel_size=8,
    disable_embedding_parallelization=False,  # It is `False` by default.
    disable_sequence_parallel=False,  # It is `False` by default.
)

with lazy_load_for_parallelism(tensor_parallel_size=training_args.tensor_parallel_size):
    model = ...

trainer = NeuronTrainer(
    model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()

Since the example scripts use the NeuronTrainer, you can enable Tensor Parallelism when using them by specifying the --tensor_parallel_size argument, and optionally the --disable_embedding_parallelization and --disable_sequence_parallel flags, on your command line. For example:

torchrun --nproc_per_node=2 examples/language-modeling/run_clm.py \
  --model_name_or_path TinyLlama/TinyLlama-1.1B-Chat-v0.6 \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --do_train \
  --per_device_train_batch_size 1 \
  --block_size 1024 \
  --bf16 \
  --tensor_parallel_size 2 \
  --output_dir my_training/

Via the NeuronAccelerator

Just as for ZeRO-1, it is possible to wrap the optimizer class to make it lazy. Since the model parameters are going to be sharded, there is no need to materialize the optimizer state prior to model parallelization: the wrapper makes sure that it stays unmaterialized.

from torch.optim import AdamW
from optimum.neuron import NeuronAccelerator
from optimum.neuron.accelerate.utils import ModelParallelismPlugin
from optimum.neuron.distributed import lazy_load_for_parallelism, make_optimizer_constructor_lazy

tensor_parallel_size = 8
mp_plugin = ModelParallelismPlugin(
    tensor_parallel_size,
    parallelize_embeddings=True,
    sequence_parallel_enabled=True,
    checkpoint_dir=None,  # Can be specified when resuming from checkpoint.
)

accelerator = NeuronAccelerator(
    ...
    mp_plugin=mp_plugin,
)

with lazy_load_for_parallelism(tensor_parallel_size=tensor_parallel_size):
    model = ...

lazy_adamw = make_optimizer_constructor_lazy(AdamW)
lazy_optimizer = lazy_adamw(...)  # Actually instantiate the optimizer.

model, optimizer = accelerator.prepare(model, lazy_optimizer)

Checkpoint consolidation

Since Tensor Parallelism consists of sharding the model weights across different workers, only sharded checkpoints are saved during training. It is necessary to consolidate the sharded checkpoints to be able to share and use them outside of the specific training configuration they were created under.
The Optimum CLI provides a way of doing that very easily via the optimum-cli neuron consolidate command:

optimum-cli neuron consolidate --help

usage: optimum-cli neuron consolidate [-h] [-f {pytorch,safetensors}] checkpoint_dir output_dir

positional arguments:
  checkpoint_dir        The path to the directory containing the checkpoints.
  output_dir            The path to the output directory containing the consolidated checkpoint.

optional arguments:
  -h, --help            show this help message and exit
  -f {pytorch,safetensors}, --format {pytorch,safetensors}
                        The format used to save the consolidated checkpoint.

All you need to do is specify the sharded checkpoints directory and the output directory that will contain the consolidated checkpoints, and the command takes care of the rest. It is also possible to specify the output format of the consolidated checkpoints: by default they are exported to the safetensors format, which is the recommended format to use.

Example: training with Tensor Parallelism just completed and the output directory is called my_training. The directory looks like the following:

my_training/
├── README.md
├── all_results.json
├── checkpoint-10
│   ├── config.json
│   ├── scheduler.pt
│   ├── special_tokens_map.json
│   ├── tensor_parallel_shards
│   ├── tokenizer.json
│   ├── tokenizer.model
│   ├── tokenizer_config.json
│   ├── trainer_state.json
│   └── training_args.bin
├── config.json
├── special_tokens_map.json
├── tensor_parallel_shards
│   ├── tp_rank_00_pp_rank_00
│   ├── tp_rank_01_pp_rank_00
│   ├── tp_rank_02_pp_rank_00
│   ├── tp_rank_03_pp_rank_00
│   ├── tp_rank_04_pp_rank_00
│   ├── tp_rank_05_pp_rank_00
│   ├── tp_rank_06_pp_rank_00
│   └── tp_rank_07_pp_rank_00
├── tokenizer.json
├── tokenizer.model
├── tokenizer_config.json
├── train_results.json
├── trainer_state.json
└── training_args.bin

The sharded checkpoints are saved under a directory called tensor_parallel_shards. It is possible to consolidate the sharded checkpoints in my_training/tensor_parallel_shards, which correspond to the sharded checkpoints saved at the end of the training, by running the following command:

optimum-cli neuron consolidate my_training my_training_consolidated_checkpoint

The optimum-cli neuron consolidate command accepts as input either a directory that contains a tensor_parallel_shards directory, or the tensor_parallel_shards directory itself.
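Once consolidated, the checkpoint can typically be loaded back like a regular transformers checkpoint. The snippet below is a hedged sketch rather than an official recipe: it reuses the directory names from the example above and assumes the consolidated directory contains (or has been given a copy of) the config.json and tokenizer files from the training output.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Output of the consolidate command above. Assumption: config.json and tokenizer
# files are present here (copy them from my_training/ if needed).
checkpoint_dir = "my_training_consolidated_checkpoint"

tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
model = AutoModelForCausalLM.from_pretrained(checkpoint_dir)

inputs = tokenizer("Hello from Trainium!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))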
Interface__CachedFileInfo.txt | Interface: CachedFileInfo

Properties

blob
• blob: Object
Underlying file - which path is symlinked to.

Type declaration:
  lastAccessedAt: Date
  lastModifiedAt: Date
  path: string
  size: number

Defined in hub/src/lib/cache-management.ts:37

path
• path: string
Defined in hub/src/lib/cache-management.ts:33