Inference_pipelines_with_AWS_Neuron__Inf2_Trn1__e8.txt
Inference pipelines with AWS Neuron (Inf2/Trn1)

The pipeline() function makes it simple to use models from the Model Hub for accelerated inference on a variety of tasks such as text classification, question answering and image classification. You can also use the pipeline() function from Transformers and provide your NeuronModel model class. Currently the supported tasks are:

feature-extraction
fill-mask
text-classification
token-classification
question-answering
zero-shot-classification

Optimum pipeline usage

While each task has an associated pipeline class, it is simpler to use the general pipeline() function, which wraps all the task-specific pipelines in one object. The pipeline() function automatically loads a default model and tokenizer/feature-extractor capable of performing inference for your task.

Start by creating a pipeline by specifying an inference task:

>>> from optimum.neuron.pipelines import pipeline
>>> classifier = pipeline(task="text-classification")

Pass your input text/image to the pipeline() function:

>>> classifier("I like you. I love you.")
[{'label': 'POSITIVE', 'score': 0.9998838901519775}]

Note: The default models used in the pipeline() function are not optimized for inference or quantized, so there won't be a performance improvement compared to their PyTorch counterparts.

Using vanilla Transformers model and converting to AWS Neuron

The pipeline() function accepts any supported model from the Hugging Face Hub.
There are tags on the Model Hub that allow you to filter for a model you'd like to use for your task. To be able to load the model with the Neuron Runtime, the export to Neuron needs to be supported for the considered architecture. You can check the list of supported architectures here.

Once you have picked an appropriate model, you can create the pipeline() by specifying the model repo:

>>> from optimum.neuron.pipelines import pipeline

# The model will be loaded into a NeuronModelForQuestionAnswering.
>>> neuron_qa = pipeline("question-answering", model="deepset/roberta-base-squad2", export=True)
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."
>>> pred = neuron_qa(question=question, context=context)

It is also possible to load it with the from_pretrained(model_name_or_path, export=True) method associated with the NeuronModelForXXX class. For example, here is how you can load the NeuronModelForQuestionAnswering class for question answering:

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForQuestionAnswering, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
>>> # Loading the PyTorch checkpoint and converting to the neuron format by providing export=True
>>> model = NeuronModelForQuestionAnswering.from_pretrained(
...     "deepset/roberta-base-squad2",
...     export=True
... )
>>> neuron_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> question = "What's my name?"
>>> context = "My name is Philipp and I live in Nuremberg."
>>> pred = neuron_qa(question=question, context=context)

Defining Input Shapes

NeuronModels currently require static input shapes to run inference. If you pass export=True without specifying input shapes, default shapes are used. Below is an example of how to specify the input shapes for the sequence length and batch size.

>>> from optimum.neuron.pipelines import pipeline
>>> input_shapes = {"batch_size": 1, "sequence_length": 64}
>>> clt = pipeline("token-classification", model="dslim/bert-base-NER", export=True, input_shapes=input_shapes)
>>> context = "My name is Philipp and I live in Nuremberg."
>>> pred = clt(context)
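Because compiling a model for Neuron can take a while, it usually pays off to export it once and reload the compiled artifacts afterwards. The following is a minimal sketch of that workflow; it assumes NeuronModelForTokenClassification accepts the input-shape keyword arguments together with export=True and supports save_pretrained()/from_pretrained() like the other Optimum model classes, and the local directory name is only an illustration.

from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForTokenClassification, pipeline

model_id = "dslim/bert-base-NER"
input_shapes = {"batch_size": 1, "sequence_length": 64}

# Export once with static shapes, then persist the compiled model locally.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = NeuronModelForTokenClassification.from_pretrained(model_id, export=True, **input_shapes)
model.save_pretrained("bert-base-ner-neuron")   # hypothetical local path
tokenizer.save_pretrained("bert-base-ner-neuron")

# Later runs can reload the already-compiled model without export=True.
model = NeuronModelForTokenClassification.from_pretrained("bert-base-ner-neuron")
clt = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(clt("My name is Philipp and I live in Nuremberg."))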
Download_slices_of_rows.txt
Download slices of rows

The dataset viewer provides a /rows endpoint for visualizing any slice of rows of a dataset. This lets you walk through and inspect the data contained in a dataset.

Currently, only datasets with Parquet exports are supported, so the dataset viewer can extract any slice of rows without downloading the whole dataset.

This guide shows you how to use the dataset viewer's /rows endpoint to download slices of a dataset. Feel free to also try it out with Postman, RapidAPI, or ReDoc.

The /rows endpoint accepts five query parameters:

dataset: the dataset name, for example nyu-mll/glue or mozilla-foundation/common_voice_10_0
config: the subset name, for example cola
split: the split name, for example train
offset: the offset of the slice, for example 150
length: the length of the slice, for example 10 (maximum: 100)

For example, in Python:

import requests

headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/rows?dataset=ibm/duorc&config=SelfRC&split=train&offset=150&length=10"

def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()

data = query()

The endpoint response is a JSON containing two keys:

The features of a dataset, including the column's name and data type.
The slice of rows of a dataset and the content contained in each column of a specific row.
For example, here are the features and the slice of rows of the ibm/duorc / SelfRC train split from 150 to 151: Copied // https://datasets-server.huggingface.co/rows?dataset=ibm/duorc&config=SelfRC&split=train&offset=150&length=2 { "features" : [ { "feature_idx" : 0 , "name" : "plot_id" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 1 , "name" : "plot" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 2 , "name" : "title" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 3 , "name" : "question_id" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 4 , "name" : "question" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 5 , "name" : "answers" , "type" : { "feature" : { "dtype" : "string" , "_type" : "Value" } , "_type" : "Sequence" } } , { "feature_idx" : 6 , "name" : "no_answer" , "type" : { "dtype" : "bool" , "_type" : "Value" } } ] , "rows" : [ { "row_idx" : 150 , "row" : { "plot_id" : "/m/03wj_q" , "plot" : "The film is centered on Mortal Kombat, a fighting tournament between the representatives of the realms of Earth and Outworld conceived by the Elder Gods amid looming invasion of the Earth by Outworld. If the realm of Outworld wins Mortal Kombat ten consecutive times, its Emperor Shao Kahn will be able to invade and conquer the Earth realm.\nShaolin monk Liu Kang and his comrades, movie star Johnny Cage and military officer Sonya Blade were handpicked by Raiden, the god of thunder and defender of the Earth realm, to overcome their powerful adversaries in order to prevent Outworld from winning their tenth straight Mortal Kombat tournament. Each of the three has his or her own reason for competing: Liu seeks revenge against the tournament host Shang Tsung for killing his brother Chan; Sonya seeks revenge on an Australian crime lord Kano; and Cage, having been branded as a fake by the media, seeks to prove otherwise.\nAt Shang Tsung's island, Liu is attracted to Princess Kitana, Shao Kahn's adopted daughter. Aware that Kitana is a dangerous adversary because she is the rightful heir to Outworld and that she will attempt to ally herself with the Earth warriors, Tsung orders the creature Reptile to spy on her. Liu defeats his first opponent and Sonya gets her revenge on Kano by snapping his neck. Cage encounters and barely beats Scorpion. Liu engages in a brief duel with Kitana, who secretly offers him cryptic advice for his next battle. Liu's next opponent is Sub-Zero, whose defense seems untouched because of his freezing abilities, until Liu recalls Kitana's advice and uses it to kill Sub-Zero.\nPrince Goro enters the tournament and mercilessly crushes every opponent he faces. One of Cage's peers, Art Lean, is defeated by Goro as well and has his soul taken by Shang Tsung. Sonya worries that they may not win against Goro, but Raiden disagrees. He reveals their own fears and egos are preventing them from winning the tournament.\nDespite Sonya's warning, Cage comes to Tsung to request a fight with Goro. The sorcerer accepts on the condition that he be allowed to challenge any opponent of his choosing, anytime and anywhere he chooses. Raiden tries to intervene, but the conditions are agreed upon before he can do so. After Shang Tsung leaves, Raiden confronts Cage for what he has done in challenging Goro, but is impressed when Cage shows his awareness of the gravity of the tournament. 
Cage faces Goro and uses guile and the element of surprise to defeat the defending champion. Now desperate, Tsung takes Sonya hostage and takes her to Outworld, intending to fight her as his opponent. Knowing that his powers are ineffective there and that Sonya cannot defeat Tsung by herself, Raiden sends Liu and Cage into Outworld in order to rescue Sonya and challenge Tsung. In Outworld, Liu is attacked by Reptile, but eventually gains the upper hand and defeats him. Afterward, Kitana meets up with Cage and Liu, revealing to the pair the origins of both herself and Outworld. Kitana allies with them and helps them to infiltrate Tsung's castle.\nInside the castle tower, Shang Tsung challenges Sonya to fight him, claiming that her refusal to accept will result in the Earth realm forfeiting Mortal Kombat (this is, in fact, a lie on Shang's part). All seems lost for Earth realm until Kitana, Liu, and Cage appear. Kitana berates Tsung for his treachery to the Emperor as Sonya is set free. Tsung challenges Cage, but is counter-challenged by Liu. During the lengthy battle, Liu faces not only Tsung, but the souls that Tsung had forcibly taken in past tournaments. In a last-ditch attempt to take advantage, Tsung morphs into Chan. Seeing through the charade, Liu renews his determination and ultimately fires an energy bolt at the sorcerer, knocking him down and impaling him on a row of spikes. Tsung's death releases all of the captive souls, including Chan's. Before ascending to the afterlife, Chan tells Liu that he will remain with him in spirit until they are once again reunited, after Liu dies.\nThe warriors return to Earth realm, where a victory celebration is taking place at the Shaolin temple. The jubilation abruptly stops, however, when Shao Kahn's giant figure suddenly appears in the skies. When the Emperor declares that he has come for everyone's souls, the warriors take up fighting stances." , "title" : "Mortal Kombat" , "question_id" : "40c1866a-b214-11ba-be57-8979d2cefa90" , "question" : "Where is Sonya taken to?" , "answers" : [ "Outworld" ] , "no_answer" : false } , "truncated_cells" : [ ] } , { "row_idx" : 151 , "row" : { "plot_id" : "/m/03wj_q" , "plot" : "The film is centered on Mortal Kombat, a fighting tournament between the representatives of the realms of Earth and Outworld conceived by the Elder Gods amid looming invasion of the Earth by Outworld. If the realm of Outworld wins Mortal Kombat ten consecutive times, its Emperor Shao Kahn will be able to invade and conquer the Earth realm.\nShaolin monk Liu Kang and his comrades, movie star Johnny Cage and military officer Sonya Blade were handpicked by Raiden, the god of thunder and defender of the Earth realm, to overcome their powerful adversaries in order to prevent Outworld from winning their tenth straight Mortal Kombat tournament. Each of the three has his or her own reason for competing: Liu seeks revenge against the tournament host Shang Tsung for killing his brother Chan; Sonya seeks revenge on an Australian crime lord Kano; and Cage, having been branded as a fake by the media, seeks to prove otherwise.\nAt Shang Tsung's island, Liu is attracted to Princess Kitana, Shao Kahn's adopted daughter. Aware that Kitana is a dangerous adversary because she is the rightful heir to Outworld and that she will attempt to ally herself with the Earth warriors, Tsung orders the creature Reptile to spy on her. Liu defeats his first opponent and Sonya gets her revenge on Kano by snapping his neck. 
Cage encounters and barely beats Scorpion. Liu engages in a brief duel with Kitana, who secretly offers him cryptic advice for his next battle. Liu's next opponent is Sub-Zero, whose defense seems untouched because of his freezing abilities, until Liu recalls Kitana's advice and uses it to kill Sub-Zero.\nPrince Goro enters the tournament and mercilessly crushes every opponent he faces. One of Cage's peers, Art Lean, is defeated by Goro as well and has his soul taken by Shang Tsung. Sonya worries that they may not win against Goro, but Raiden disagrees. He reveals their own fears and egos are preventing them from winning the tournament.\nDespite Sonya's warning, Cage comes to Tsung to request a fight with Goro. The sorcerer accepts on the condition that he be allowed to challenge any opponent of his choosing, anytime and anywhere he chooses. Raiden tries to intervene, but the conditions are agreed upon before he can do so. After Shang Tsung leaves, Raiden confronts Cage for what he has done in challenging Goro, but is impressed when Cage shows his awareness of the gravity of the tournament. Cage faces Goro and uses guile and the element of surprise to defeat the defending champion. Now desperate, Tsung takes Sonya hostage and takes her to Outworld, intending to fight her as his opponent. Knowing that his powers are ineffective there and that Sonya cannot defeat Tsung by herself, Raiden sends Liu and Cage into Outworld in order to rescue Sonya and challenge Tsung. In Outworld, Liu is attacked by Reptile, but eventually gains the upper hand and defeats him. Afterward, Kitana meets up with Cage and Liu, revealing to the pair the origins of both herself and Outworld. Kitana allies with them and helps them to infiltrate Tsung's castle.\nInside the castle tower, Shang Tsung challenges Sonya to fight him, claiming that her refusal to accept will result in the Earth realm forfeiting Mortal Kombat (this is, in fact, a lie on Shang's part). All seems lost for Earth realm until Kitana, Liu, and Cage appear. Kitana berates Tsung for his treachery to the Emperor as Sonya is set free. Tsung challenges Cage, but is counter-challenged by Liu. During the lengthy battle, Liu faces not only Tsung, but the souls that Tsung had forcibly taken in past tournaments. In a last-ditch attempt to take advantage, Tsung morphs into Chan. Seeing through the charade, Liu renews his determination and ultimately fires an energy bolt at the sorcerer, knocking him down and impaling him on a row of spikes. Tsung's death releases all of the captive souls, including Chan's. Before ascending to the afterlife, Chan tells Liu that he will remain with him in spirit until they are once again reunited, after Liu dies.\nThe warriors return to Earth realm, where a victory celebration is taking place at the Shaolin temple. The jubilation abruptly stops, however, when Shao Kahn's giant figure suddenly appears in the skies. When the Emperor declares that he has come for everyone's souls, the warriors take up fighting stances." , "title" : "Mortal Kombat" , "question_id" : "f1fdefcf-1191-b5f9-4cae-4ce4d0a59da7" , "question" : "Who took Goro's soul?" , "answers" : [ "Shang Tsung." ] , "no_answer" : false } , "truncated_cells" : [ ] } ] , "num_rows_total" : 60721 , "num_rows_per_page" : 100 , "partial" : false } Image and audio samples Image and audio are represented by a URL that points to the file. Images Images are represented as a JSON object with three fields: src : URL to the image file. 
It's a signed URL that expires after a certain time.
height: height (in pixels) of the image
width: width (in pixels) of the image

Here is an example of an image, from the first row of the uoft-cs/cifar100 dataset:

// https://datasets-server.huggingface.co/rows?dataset=uoft-cs/cifar100&config=cifar100&split=train&offset=0&length=1
{
  "features": [
    { "feature_idx": 0, "name": "img", "type": { "_type": "Image" } },
    ...
  ],
  "rows": [
    {
      "row_idx": 0,
      "row": {
        "img": {
          "src": "https://datasets-server.huggingface.co/cached-assets/uoft-cs/cifar100/--/aadb3af77e9048adbea6b47c21a81e47dd092ae5/--/cifar100/train/0/img/image.jpg?Expires=1710283469&Signature=A1v0cG07nuaBxYbuPR5EUZpJ9Se072SBDr4935gEsOESHGVyeqvd3qmvdsy1fuqbHk0dnx~p6MLtQ-hg3aCBOJ8eIJ5ItIoyYT4riJRuPQC0VFUb~b1maEwU8LRoXXuvrSysSz2QhBbC~ofv6cQudm~~bgGxXWAslDs180KnmPDsMU55ySsKyKQYNEkQKyuYvrGIJbFeg4lEps0f5CEwUstAwRAwlk~mzRpzUDBq7nJ~DcujTlllLv36nJX~too8mMnFn6dCn2nfGOFYwUiyYM73Czv-laLhVaIVUzcuJum90No~KNGzfYeFZpPqktA7MjCzRLf1gz5kA7wBqnY-8Q__&Key-Pair-Id=K3EI6M078Z3AC3",
          "height": 32,
          "width": 32
        },
        "fine_label": 19,
        "coarse_label": 11
      },
      "truncated_cells": []
    }
  ],
  "num_rows_total": 50000,
  "num_rows_per_page": 100,
  "partial": false
}

If the result has partial: true, it means that the slices couldn't be run on the full dataset because it's too big.

Caching

The images and audio samples are cached by the dataset viewer temporarily. Internally we empty the cached assets of certain datasets from time to time based on usage. If a certain asset is not available, you may have to call /rows again.

Truncated responses

Unlike /first-rows, there is currently no truncation in /rows. The truncated_cells field is still there but is always empty.
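Since length is capped at 100, downloading a larger slice means paginating with offset. Below is a minimal sketch of such a loop; it assumes a valid API_TOKEN and relies only on the query parameters and the num_rows_total response field documented above (the early-stop threshold is an arbitrary choice for the sketch).

import requests

API_TOKEN = "hf_xxx"  # placeholder token
headers = {"Authorization": f"Bearer {API_TOKEN}"}
base_url = "https://datasets-server.huggingface.co/rows"
params = {"dataset": "ibm/duorc", "config": "SelfRC", "split": "train", "length": 100}

rows, offset = [], 0
while True:
    response = requests.get(base_url, params={**params, "offset": offset}, headers=headers)
    data = response.json()
    rows.extend(r["row"] for r in data["rows"])
    offset += params["length"]
    if offset >= data["num_rows_total"] or offset >= 500:  # stop early for this sketch
        break

print(f"Downloaded {len(rows)} rows")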
Tensor_Parallelism.txt
Tensor Parallelism

Tensor parallelism is a technique used to fit a large model on multiple GPUs. For example, when multiplying the input tensors with the first weight tensor, the matrix multiplication is equivalent to splitting the weight tensor column-wise, multiplying each column with the input separately, and then concatenating the separate outputs. These outputs are then transferred from the GPUs and concatenated together to get the final result, as sketched in the example below.

Tensor parallelism only works for officially supported models; it will not work when falling back to transformers. You can get more information about unsupported models here.

You can learn a lot more details about tensor parallelism from the transformers docs.
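As a small illustration of the equivalence described above (not TGI's actual implementation), the following sketch splits a weight matrix column-wise, multiplies each shard with the same input, and checks that concatenating the partial outputs reproduces the full matrix multiplication:

import torch

torch.manual_seed(0)
x = torch.randn(4, 16)   # input activations: (batch, hidden)
w = torch.randn(16, 32)  # weight matrix: (hidden, out_features)

# Full matmul, as it would run on a single device.
full = x @ w

# "Tensor parallel" version: split the weight column-wise across two shards,
# compute each partial output independently, then concatenate the results.
w_shards = torch.chunk(w, chunks=2, dim=1)
partial_outputs = [x @ shard for shard in w_shards]
combined = torch.cat(partial_outputs, dim=1)

print(torch.allclose(full, combined, atol=1e-6))  # True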
Merge_LoRAs.txt
Merge LoRAs

It can be fun and creative to use multiple LoRAs together to generate something entirely new and unique. This works by merging multiple LoRA weights together to produce images that are a blend of different styles. Diffusers provides a few methods to merge LoRAs depending on how you want to merge their weights, which can affect image quality.

This guide will show you how to merge LoRAs using the set_adapters() and add_weighted_adapter methods.
To improve inference speed and reduce memory usage of merged LoRAs, you'll also see how to use the fuse_lora() method to fuse the LoRA weights with the original weights of the underlying model.

For this guide, load a Stable Diffusion XL (SDXL) checkpoint and the ostris/ikea-instructions-lora-sdxl and lordjia/by-feng-zikai LoRAs with the load_lora_weights() method. You'll need to assign each LoRA an adapter_name to combine them later.

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")
pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")

set_adapters

The set_adapters() method merges LoRA adapters by concatenating their weighted matrices. Use the adapter name to specify which LoRAs to merge, and the adapter_weights parameter to control the scaling for each LoRA. For example, if adapter_weights=[0.5, 0.5], then the merged LoRA output is an average of both LoRAs. Try adjusting the adapter weights to see how it affects the generated image!

pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8])

generator = torch.manual_seed(0)
prompt = "A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai"
image = pipeline(prompt, generator=generator, cross_attention_kwargs={"scale": 1.0}).images[0]
image

add_weighted_adapter

This is an experimental method that adds PEFT's add_weighted_adapter method to Diffusers to enable more efficient merging methods. Check out this issue if you're interested in learning more about the motivation and design behind this integration.

The add_weighted_adapter method provides access to more efficient merging methods such as TIES and DARE. To use these merging methods, make sure you have the latest stable versions of Diffusers and PEFT installed:

pip install -U diffusers peft

There are three steps to merge LoRAs with the add_weighted_adapter method:

1. Create a PeftModel from the underlying model and LoRA checkpoint.
2. Load a base UNet model and the LoRA adapters.
3. Merge the adapters using the add_weighted_adapter method and the merging method of your choice.

Let's dive deeper into what these steps entail.

Load a UNet that corresponds to the UNet in the LoRA checkpoint. In this case, both LoRAs use the SDXL UNet as their base model.

from diffusers import UNet2DConditionModel
import torch

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    subfolder="unet",
).to("cuda")

Load the SDXL pipeline and the LoRA checkpoints, starting with the ostris/ikea-instructions-lora-sdxl LoRA.

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16, unet=unet
).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")

Now you'll create a PeftModel from the loaded LoRA checkpoint by combining the SDXL UNet and the LoRA UNet from the pipeline.
from peft import get_peft_model, LoraConfig
import copy

sdxl_unet = copy.deepcopy(unet)
ikea_peft_model = get_peft_model(
    sdxl_unet,
    pipeline.unet.peft_config["ikea"],
    adapter_name="ikea"
)

original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()}
ikea_peft_model.load_state_dict(original_state_dict, strict=True)

You can optionally push the ikea_peft_model to the Hub by calling ikea_peft_model.push_to_hub("ikea_peft_model", token=TOKEN).

Repeat this process to create a PeftModel from the lordjia/by-feng-zikai LoRA.

pipeline.delete_adapters("ikea")
sdxl_unet.delete_adapters("ikea")

pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")
pipeline.set_adapters(adapter_names="feng")

feng_peft_model = get_peft_model(
    sdxl_unet,
    pipeline.unet.peft_config["feng"],
    adapter_name="feng"
)

original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()}
feng_peft_model.load_state_dict(original_state_dict, strict=True)

Load a base UNet model and then load the adapters onto it.

from peft import PeftModel

base_unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    subfolder="unet",
).to("cuda")

model = PeftModel.from_pretrained(base_unet, "stevhliu/ikea_peft_model", use_safetensors=True, subfolder="ikea", adapter_name="ikea")
model.load_adapter("stevhliu/feng_peft_model", use_safetensors=True, subfolder="feng", adapter_name="feng")

Merge the adapters using the add_weighted_adapter method and the merging method of your choice (learn more about other merging methods in this blog post). For this example, let's use the "dare_linear" method to merge the LoRAs.

Keep in mind the LoRAs need to have the same rank to be merged!

model.add_weighted_adapter(
    adapters=["ikea", "feng"],
    weights=[1.0, 1.0],
    combination_type="dare_linear",
    adapter_name="ikea-feng"
)
model.set_adapters("ikea-feng")

Now you can generate an image with the merged LoRA.

model = model.to(dtype=torch.float16, device="cuda")

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=model, variant="fp16", torch_dtype=torch.float16,
).to("cuda")

image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0]
image

fuse_lora

Both the set_adapters() and add_weighted_adapter methods require loading the base model and the LoRA adapters separately, which incurs some overhead. The fuse_lora() method allows you to fuse the LoRA weights directly with the original weights of the underlying model. This way, you're only loading the model once, which can speed up inference and lower memory usage.

You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage.
For example, if you have a base model and adapters loaded and set as active with the following adapter weights:

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")
pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")

pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8])

Fuse these LoRAs into the UNet with the fuse_lora() method. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won't work if you try to pass scale to the cross_attention_kwargs in the pipeline.

pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0)

Then you should use unload_lora_weights() to unload the LoRA weights, since they've already been fused with the underlying base model. Finally, call save_pretrained() to save the fused pipeline locally, or call push_to_hub() to push the fused pipeline to the Hub.

pipeline.unload_lora_weights()
# save locally
pipeline.save_pretrained("path/to/fused-pipeline")
# save to the Hub
pipeline.push_to_hub("fused-ikea-feng")

Now you can quickly load the fused pipeline and use it for inference without needing to separately load the LoRA adapters.

pipeline = DiffusionPipeline.from_pretrained(
    "username/fused-ikea-feng", torch_dtype=torch.float16,
).to("cuda")

image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0]
image

You can call unfuse_lora() to restore the original model's weights (for example, if you want to use a different lora_scale value). However, this only works if you've only fused one LoRA adapter to the original model. If you've fused multiple LoRAs, you'll need to reload the model.

pipeline.unfuse_lora()

torch.compile

torch.compile can speed up your pipeline even more, but the LoRA weights must be fused first and then unloaded. Typically, the UNet is compiled because it is such a computationally intensive component of the pipeline.

from diffusers import DiffusionPipeline
import torch

# load base model and LoRAs
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")
pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")

# activate both LoRAs and set adapter weights
pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8])

# fuse LoRAs and unload weights
pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0)
pipeline.unload_lora_weights()

# torch.compile
pipeline.unet.to(memory_format=torch.channels_last)
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0]

Learn more about torch.compile in the Accelerate inference of text-to-image diffusion models guide.

Next steps

For more conceptual details about how each merging method works, take a look at the 🤗 PEFT welcomes new merging methods blog post!
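As a closing experiment tying back to the set_adapters() section, the sketch below sweeps a few adapter-weight combinations and saves one image per setting so they can be compared side by side. It only reuses the pipeline calls shown above; the weight grid and output file names are arbitrary choices for illustration.

import itertools
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea")
pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng")

prompt = "A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai"

# Try a small grid of adapter weights and save one image per combination.
for ikea_w, feng_w in itertools.product([0.5, 0.8], [0.5, 0.8]):
    pipeline.set_adapters(["ikea", "feng"], adapter_weights=[ikea_w, feng_w])
    image = pipeline(prompt, generator=torch.manual_seed(0)).images[0]
    image.save(f"ramen_ikea{ikea_w}_feng{feng_w}.png")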
Launchers.txt
Launchers

Functions for launching training on distributed processes.

notebook_launcher

accelerate.notebook_launcher(function, args=(), num_processes=None, mixed_precision='no', use_port='29500', master_addr='127.0.0.1', node_rank=0, num_nodes=1, rdzv_backend='static', rdzv_endpoint='', rdzv_conf=None, rdzv_id='none', max_restarts=0, monitor_interval=0.1, log_line_prefix_template=None)

Parameters:

function (Callable) — The training function to execute. If it accepts arguments, the first argument should be the index of the process run.
args (Tuple) — Tuple of arguments to pass to the function (it will receive *args).
num_processes (int, optional) — The number of processes to use for training. Will default to 8 in Colab/Kaggle if a TPU is available, to the number of GPUs available otherwise.
mixed_precision (str, optional, defaults to "no") — If fp16 or bf16, will use mixed precision training on multi-GPU.
use_port (str, optional, defaults to "29500") — The port to use to communicate between processes when launching a multi-GPU training.
master_addr (str, optional, defaults to "127.0.0.1") — The address to use for communication between processes.
node_rank (int, optional, defaults to 0) — The rank of the current node.
num_nodes (int, optional, defaults to 1) — The number of nodes to use for training.
rdzv_backend (str, optional, defaults to "static") — The rendezvous method to use, such as "static" (the default) or "c10d".
rdzv_endpoint (str, optional, defaults to "") — The endpoint of the rdzv sync storage.
rdzv_conf (Dict, optional, defaults to None) — Additional rendezvous configuration.
rdzv_id (str, optional, defaults to "none") — The unique run id of the job.
max_restarts (int, optional, defaults to 0) — The maximum number of restarts that the elastic agent will conduct on workers before failure.
monitor_interval (float, optional, defaults to 0.1) — The interval in seconds used by the elastic agent to monitor workers.
log_line_prefix_template (str, optional, defaults to None) — The prefix template for elastic launch logging. Available from PyTorch 2.2.0.

Launches a training function, using several processes or multiple nodes if it's possible in the current environment (a TPU with multiple cores, for instance).

To use this function, absolutely zero calls to a CUDA device must be made in the notebook session before calling it. If any have been made, you will need to restart the notebook and make sure no cells use any CUDA capability. Setting ACCELERATE_DEBUG_MODE="1" in your environment will run a test before truly launching to ensure that none of those calls have been made.

Example:

# Assume this is defined in a Jupyter Notebook on an instance with two GPUs
from accelerate import notebook_launcher

def train(*args):
    # Your training function here
    ...

notebook_launcher(train, args=(arg1, arg2), num_processes=2, mixed_precision="fp16")

debug_launcher

accelerate.debug_launcher(function, args=(), num_processes=2)

Parameters:

function (Callable) — The training function to execute.
args (Tuple) — Tuple of arguments to pass to the function (it will receive *args).
num_processes (int, optional, defaults to 2) — The number of processes to use for training.

Launches a training function using several processes on CPU for debugging purposes.

This function is provided for internal testing and debugging, but it's not intended for real trainings. It will only use the CPU.
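debug_launcher has no example in the reference above, so here is a minimal hedged sketch of how it might be used to smoke-test a training function on CPU before launching for real; the body of train() is purely illustrative.

from accelerate import Accelerator, debug_launcher

def train(*args):
    # Illustrative training function: set up an Accelerator and report the process rank.
    # debug_launcher forces CPU execution, so this is only a plumbing check.
    accelerator = Accelerator()
    print(f"process {accelerator.process_index} of {accelerator.num_processes}", *args)

# Spawn 2 CPU processes to check that the function runs end to end.
debug_launcher(train, args=("smoke-test",), num_processes=2)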
Judges.txt
Judges

TRL Judges is an experimental API which is subject to change at any time.

TRL provides judges to easily compare two completions.

Make sure to have installed the required dependencies by running:

pip install trl[judges]

Using the provided judges

TRL provides several judges out of the box. For example, you can use the HfPairwiseJudge to compare two completions using a pre-trained model from the Hugging Face model hub:

from trl import HfPairwiseJudge

judge = HfPairwiseJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completions=[["Paris", "Lyon"], ["Saturn", "Jupiter"]],
)  # Outputs: [0, 1]

Define your own judge

To define your own judge, we provide several base classes that you can subclass. For rank-based judges, you need to subclass BaseRankJudge and implement the BaseRankJudge.judge() method. For pairwise judges, you need to subclass BasePairwiseJudge and implement the BasePairwiseJudge.judge() method. If you want to define a judge that doesn't fit into these categories, you need to subclass BaseJudge and implement the BaseJudge.judge() method.

As an example, let's define a pairwise judge that prefers shorter completions:

from trl import BasePairwiseJudge

class PrefersShorterJudge(BasePairwiseJudge):
    def judge(self, prompts, completions, shuffle_order=False):
        return [0 if len(completion[0]) > len(completion[1]) else 1 for completion in completions]

You can then use this judge as follows:

judge = PrefersShorterJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completions=[["Paris", "The capital of France is Paris."], ["Jupiter is the biggest planet in the solar system.", "Jupiter"]],
)  # Outputs: [0, 1]

Provided judges

PairRMJudge

class trl.PairRMJudge()

LLM judge based on the PairRM model from AllenAI.
This judge uses the PairRM model to rank pairs of completions for given prompts. It's designed for pairwise comparison of language model outputs. The PairRM model is loaded using the llm-blender library and runs on the default Accelerator device.

Attributes:

blender (llm_blender.Blender) — An instance of the Blender class from llm-blender.

Example:

>>> pairrm_judge = PairRMJudge()
>>> prompts = ["Translate 'hello' to French", "What's the capital of Japan?"]
>>> completions = [["Bonjour", "Salut"], ["Kyoto", "Tokyo"]]
>>> results = pairrm_judge.judge(prompts, completions)
>>> print(results)  # [0, 1] (the first completion is preferred for the first prompt, the second for the second)

This class requires the llm-blender library to be installed. Install it with: pip install llm-blender.

judge(prompts: list, completions: list, shuffle_order: bool = True, return_scores: bool = False, temperature: float = 1.0) → Union[list[int, float]]

Parameters:

prompts (list[str]) — List of prompts to judge.
completions (list[list[str]]) — List of completion pairs for each prompt.
shuffle_order (bool, optional, defaults to True) — Whether to shuffle the order of the completions to avoid positional bias.
return_scores (bool, optional, defaults to False) — If True, return probability scores of the first completion instead of ranks (i.e. a soft judge).
temperature (float, optional, defaults to 1.0) — Temperature for scaling logits if return_scores is True.

Returns: Union[list[int, float]] — If return_scores is False, returns a list of ranks (0 or 1) for each prompt, indicating which completion is preferred. If return_scores is True, returns softmax probabilities for the first completion.

Raises: ValueError — If the number of completions per prompt is not exactly 2.

Judge the completion pairs for the given prompts using the PairRM model.

Note: Unlike llm-blender, ranks are 0-indexed (0 means the first completion is preferred).

HfPairwiseJudge

class trl.HfPairwiseJudge(model='meta-llama/Meta-Llama-3-70B-Instruct', token: Optional[str] = None, system_prompt: Optional[str] = None)

Parameters:

model (str, optional, defaults to "meta-llama/Meta-Llama-3-70B-Instruct") — Model to use for the judge.
token (str, optional) — Hugging Face API token to use for the huggingface_hub.InferenceClient.
system_prompt (str or None, optional, defaults to None) — The system prompt to be used for the judge. If not provided, a default prompt is used. Note that the system prompt should contain the following placeholders: {prompt}, {response0}, and {response1}. Also, the inference is called with max_tokens=1; consequently, the system prompt should ask for a single-token response.

Pairwise judge based on the Hugging Face API with chat completion.

This judge is relevant for assessing the quality of chat models, where the completion is a response to a given prompt.

OpenAIPairwiseJudge

class trl.OpenAIPairwiseJudge(model='gpt-4-turbo-preview', system_prompt: Optional[str] = None, max_requests: Optional[int] = 1000)

Parameters:

model (str, optional, defaults to "gpt-4-turbo-preview") — Model to use for the judge.
system_prompt (str or None, optional, defaults to None) — System prompt to be used for the judge. If not provided, a default prompt is used. Note that the system prompt should contain the following placeholders: {prompt}, {response0}, and {response1}.
Also, the inference is called with max_tokens=1; consequently, the system prompt should ask for a single-token response.
max_requests (int or None, optional, defaults to 1000) — Maximum number of requests to make to the OpenAI API. If set to None, there is no limit.

Judge based on the OpenAI API.

This judge is relevant for assessing the quality of chat models, where the completion is a response to a given prompt.

AllTrueJudge

class trl.AllTrueJudge(judges: list)

Parameters:

judges (list[BaseBinaryJudge]) — A list of BaseBinaryJudge instances whose decisions will be unified.

Unify the decision of multiple BaseBinaryJudge instances. Returns 1 only if all inner binary judges return 1. If any judge returns 0, it returns 0. If any judge returns -1, indicating a failure in its process, this judge will also return -1.

Implements the Mixture of Judges as described in the CGPO paper.

Base classes

BaseJudge

class trl.BaseJudge()

Base class for judges. The subclasses of this class should implement the judge method.

BaseBinaryJudge

class trl.BaseBinaryJudge()

Base class for binary judges.

judge(prompts: list, completions: list, gold_completions: Optional[list[str]] = None, shuffle_order: bool = True) → list[int]

Parameters:

prompts (list[str]) — List of prompts.
completions (list[str]) — List of completions.
gold_completions (list[str], optional) — List of gold completions if it exists.
shuffle_order (bool) — Whether to shuffle the order of the completions to avoid positional bias.

Returns: list[int] — A list of binary labels:
1 indicates that the completion satisfies the evaluated constraint.
0 indicates that the completion does not satisfy the evaluated constraint.

Judge the completion for a given prompt. Used to assess if a completion satisfies a constraint.

This base class should be used to implement binary evaluations, as done in section 4.1.4 of the CGPO paper. It is relevant for assessing whether or not a prompt-completion pair satisfies a specific constraint.

Note: If the judge returns -1 for any prompt, it indicates that the inner process used to compute the preference has failed. For instance, this could occur if the underlying language model or rule-based constraint returned an invalid answer. In such cases, the caller should handle these invalid indices appropriately, possibly by implementing fallback logic or error handling.

BaseRankJudge

class trl.BaseRankJudge()

Base class for LLM ranking judges.

Example:

class MyRankJudge(BaseRankJudge):
    def judge(self, prompts, completions, shuffle_order=True):
        return ...  # Your ranking logic here

judge = MyRankJudge()
judge.judge(
    prompts=["The capital of France is", "The capital of Germany is"],
    completions=[[" Paris", " Marseille", "Lyon"], [" Munich", " Berlin"]]
)  # [[0, 1, 2], [1, 0]]

judge(prompts: list, completions: list, shuffle_order: bool = True) → list[list[int]]

Parameters:

prompts (list[str]) — List of prompts.
completions (list[list[str]]) — List of completions list, where each element is a list of completions for the corresponding prompt.
shuffle_order (bool, optional, defaults to True) — Whether to shuffle the order of the completions to avoid positional bias.

Returns: list[list[int]] — List of lists of idxs, where each list contains the ranks of the completions for the corresponding prompt.
E.g., [1, 2, 0] means that the second completion (idx=1) is the best, followed by the third, and then the first.

Judge the completion for the given prompts and return the ranks of each completion.

BasePairwiseJudge

class trl.BasePairwiseJudge()

Base class for pairwise judges.

judge(prompts: list, completions: list, shuffle_order: bool = True) → list[int]

Parameters:

prompts (list[str]) — List of prompts.
completions (list[list[str]]) — List of completion pairs, where each element is a pair of completions for the corresponding prompt.
shuffle_order (bool, optional, defaults to True) — Whether to shuffle the order of the completions to avoid positional bias.

Returns: list[int] — List of idxs, where each idx is the rank of the best completion for the corresponding prompt. E.g., 1 means that the second completion (idx=1) is the best.

Judge the completion pairs for the given prompts.

Note: If the judge returns -1 for any prompt, it indicates that the inner process used to compute the preference has failed. For instance, this could occur if the underlying language model returned an invalid answer. In such cases, the caller should handle these invalid indices appropriately, possibly by implementing fallback logic or error handling.
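The base classes above are documented without a binary-judge example, so here is a hedged sketch of how BaseBinaryJudge subclasses could be combined with AllTrueJudge; the length and keyword constraints are invented for illustration, and only the documented judge() signature and the 0/1/-1 convention are assumed.

from trl import AllTrueJudge, BaseBinaryJudge

class MaxLengthJudge(BaseBinaryJudge):
    """Returns 1 if the completion is at most `max_chars` characters long (illustrative constraint)."""
    def __init__(self, max_chars=200):
        self.max_chars = max_chars

    def judge(self, prompts, completions, gold_completions=None, shuffle_order=True):
        return [1 if len(completion) <= self.max_chars else 0 for completion in completions]

class ContainsKeywordJudge(BaseBinaryJudge):
    """Returns 1 if the completion mentions the required keyword (illustrative constraint)."""
    def __init__(self, keyword):
        self.keyword = keyword.lower()

    def judge(self, prompts, completions, gold_completions=None, shuffle_order=True):
        return [1 if self.keyword in completion.lower() else 0 for completion in completions]

# AllTrueJudge returns 1 only when every inner judge returns 1.
judge = AllTrueJudge(judges=[MaxLengthJudge(max_chars=100), ContainsKeywordJudge("paris")])
print(judge.judge(
    prompts=["What is the capital of France?", "What is the capital of France?"],
    completions=["Paris.", "I am not sure."],
))  # Expected: [1, 0]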
Custom_hardware_for_training.txt
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Custom hardware for training The hardware you use to run model training and inference can have a big effect on performance. For a deep dive into GPUs make sure to check out Tim Dettmer’s excellent blog post . Let’s have a look at some practical advice for GPU setups. GPU When you train bigger models you have essentially three options: bigger GPUs more GPUs more CPU and NVMe (offloaded to by DeepSpeed-Infinity ) Let’s start at the case where you have a single GPU. Power and Cooling If you bought an expensive high end GPU make sure you give it the correct power and sufficient cooling. Power : Some high end consumer GPU cards have 2 and sometimes 3 PCI-E 8-Pin power sockets. Make sure you have as many independent 12V PCI-E 8-Pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as pigtail cable). That is if you have 2 sockets on the GPU, you want 2 PCI-E 8-Pin cables going from your PSU to the card and not one that has 2 PCI-E 8-Pin connectors at the end! You won’t get the full performance out of your card otherwise. Each PCI-E 8-Pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power. Some other cards may use a PCI-E 12-Pin connectors, and these can deliver up to 500-600W of power. Low end cards may use 6-Pin connectors, which supply up to 75W of power. Additionally you want the high-end PSU that has stable voltage. Some lower quality ones may not give the card the stable voltage it needs to function at its peak. And of course the PSU needs to have enough unused Watts to power the card. Cooling : When a GPU gets overheated it will start throttling down and will not deliver full performance and it can even shutdown if it gets too hot. It’s hard to tell the exact best temperature to strive for when a GPU is heavily loaded, but probably anything under +80C is good, but lower is better - perhaps 70-75C is an excellent range to be in. The throttling down is likely to start at around 84-90C. But other than throttling performance a prolonged very high temperature is likely to reduce the lifespan of a GPU. Next let’s have a look at one of the most important aspects when having multiple GPUs: connectivity. 
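Before moving on: if you want to keep an eye on the temperature and power-draw figures discussed above while a job is running, you can poll nvidia-smi from Python. This is a small illustrative helper, not part of the original guide; it assumes nvidia-smi is installed and on your PATH.
Copied
import subprocess


def gpu_health():
    """Print temperature (C), power draw (W) and utilization (%) for each GPU.

    Uses standard nvidia-smi query fields; assumes nvidia-smi is available.
    """
    fields = "temperature.gpu,power.draw,utilization.gpu"
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for idx, line in enumerate(out.strip().splitlines()):
        temp, power, util = (field.strip() for field in line.split(","))
        print(f"GPU{idx}: {temp} C, {power}, {util} utilization")


gpu_health()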
Multi-GPU Connectivity If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run: Copied nvidia-smi topo -m and it will tell you how the GPUs are inter-connected. On a machine with dual-GPU and which are connected with NVLink, you will most likely see something like: Copied GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0 - 23 N/A GPU1 NV2 X 0 - 23 N/A on a different machine w/o NVLink we may see: Copied GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0 - 11 N/A GPU1 PHB X 0 - 11 N/A The report includes this legend: Copied X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges ( without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV # = Connection traversing a bonded set of # NVLinks So the first report NV2 tells us the GPUs are interconnected with 2 NVLinks, and the second report PHB we have a typical consumer-level PCIe+Bridge setup. Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB). Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes super important to achieve faster training. NVlink NVLink is a wire-based serial multi-lane near-range communications link developed by Nvidia. Each new generation provides a faster bandwidth, e.g. here is a quote from Nvidia Ampere GA102 GPU Architecture : Third-Generation NVLink® GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links, with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. (Note that 3-Way and 4-Way SLI configurations are not supported.) So the higher X you get in the report of NVX in the output of nvidia-smi topo -m the better. The generation will depend on your GPU architecture. Let’s compare the execution of an openai-community/gpt2 language model training over a small sample of wikitext. The results are: NVlink Time Y 101s N 131s You can see that NVLink completes the training ~23% faster. In the second benchmark we use NCCL_P2P_DISABLE=1 to tell the GPUs not to use NVLink. 
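If you prefer to check from Python whether peer-to-peer access (what NCCL_P2P_DISABLE=1 turns off) is available between your GPUs, PyTorch exposes a query for it. A small sketch, not part of the original benchmark; note that it only tells you whether P2P is possible, not which interconnect (NVLink vs. PCIe) would be used, so nvidia-smi topo -m remains the authoritative check.
Copied
import itertools

import torch


def p2p_matrix():
    """Report whether each GPU pair supports direct peer-to-peer memory access."""
    n = torch.cuda.device_count()
    if n < 2:
        print("Fewer than two GPUs visible.")
        return
    for a, b in itertools.combinations(range(n), 2):
        ok = torch.cuda.can_device_access_peer(a, b)
        print(f"GPU{a} <-> GPU{b}: peer access {'available' if ok else 'NOT available'}")


p2p_matrix()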
Here is the full benchmark code and outputs: Copied # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 { 'train_runtime' : 101.9003, 'train_samples_per_second' : 1.963, 'epoch' : 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 { 'train_runtime' : 131.4367, 'train_samples_per_second' : 1.522, 'epoch' : 0.69} Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks ( NV2 in nvidia-smi topo -m ) Software: pytorch-1.8-to-be + cuda-11.0 / transformers==4.3.0.dev0 < > Update on GitHub ← PyTorch training on Apple silicon Hyperparameter Search using Trainer API → Custom hardware for training GPU Power and Cooling Multi-GP U Connectivity N Vlink
Hyperparameter_Search_using_Trainer_API.txt
Hyperparameter Search using Trainer API
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Hyperparameter Search using Trainer API 🤗 Transformers provides a Trainer class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The Trainer provides API for hyperparameter search. This doc shows how to enable it in example. Hyperparameter Search backend Trainer supports four hyperparameter search backends currently: optuna , sigopt , raytune and wandb . you should install them before using them as the hyperparameter search backend Copied pip install optuna/sigopt/wandb/ray[tune] How to enable Hyperparameter search in example Define the hyperparameter search space, different backends need different format. For sigopt, see sigopt object_parameter , it’s like following: Copied >>> def sigopt_hp_space ( trial ): ... return [ ... { "bounds" : { "min" : 1e-6 , "max" : 1e-4 }, "name" : "learning_rate" , "type" : "double" }, ... { ... "categorical_values" : [ "16" , "32" , "64" , "128" ], ... "name" : "per_device_train_batch_size" , ... "type" : "categorical" , ... }, ... ] For optuna, see optuna object_parameter , it’s like following: Copied >>> def optuna_hp_space ( trial ): ... return { ... "learning_rate" : trial.suggest_float( "learning_rate" , 1e-6 , 1e-4 , log= True ), ... "per_device_train_batch_size" : trial.suggest_categorical( "per_device_train_batch_size" , [ 16 , 32 , 64 , 128 ]), ... } Optuna provides multi-objective HPO. You can pass direction in hyperparameter_search and define your own compute_objective to return multiple objective values. The Pareto Front ( List[BestRun] ) will be returned in hyperparameter_search, you should refer to the test case TrainerHyperParameterMultiObjectOptunaIntegrationTest in test_trainer . It’s like following Copied >>> best_trials = trainer.hyperparameter_search( ... direction=[ "minimize" , "maximize" ], ... backend= "optuna" , ... hp_space=optuna_hp_space, ... n_trials= 20 , ... compute_objective=compute_objective, ... ) For raytune, see raytune object_parameter , it’s like following: Copied >>> def ray_hp_space ( trial ): ... return { ... "learning_rate" : tune.loguniform( 1e-6 , 1e-4 ), ... "per_device_train_batch_size" : tune.choice([ 16 , 32 , 64 , 128 ]), ... 
}
For wandb, see wandb object_parameter ; it looks like the following:
Copied
>>> def wandb_hp_space(trial):
...     return {
...         "method": "random",
...         "metric": {"name": "objective", "goal": "minimize"},
...         "parameters": {
...             "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
...             "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
...         },
...     }
Define a model_init function and pass it to the Trainer . For example:
Copied
>>> def model_init(trial):
...     return AutoModelForSequenceClassification.from_pretrained(
...         model_args.model_name_or_path,
...         from_tf=bool(".ckpt" in model_args.model_name_or_path),
...         config=config,
...         cache_dir=model_args.cache_dir,
...         revision=model_args.model_revision,
...         token=True if model_args.use_auth_token else None,
...     )
Create a Trainer with your model_init function, training arguments, training and test datasets, and evaluation function:
Copied
>>> trainer = Trainer(
...     model=None,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
...     processing_class=tokenizer,
...     model_init=model_init,
...     data_collator=data_collator,
... )
Call hyperparameter search to get the best trial parameters. The backend can be "optuna" , "sigopt" , "wandb" or "ray" . direction can be "minimize" or "maximize" , indicating whether the objective should be minimized or maximized. You can define your own compute_objective function; if it is not defined, the default compute_objective is used, which returns the sum of the evaluation metrics (such as f1) as the objective value.
Copied
>>> best_trial = trainer.hyperparameter_search(
...     direction="maximize",
...     backend="optuna",
...     hp_space=optuna_hp_space,
...     n_trials=20,
...     compute_objective=compute_objective,
... )
Hyperparameter search for DDP fine-tuning
Currently, hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero process generates the search trial and passes the arguments to the other ranks.
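Putting the pieces together, here is a minimal, self-contained sketch of a full search with the optuna backend. The model, dataset slice and metric (distilbert-base-uncased, a subset of imdb, accuracy) are illustrative choices, not part of the original example.
Copied
import evaluate
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Small slices keep every trial fast; the sizes are arbitrary for the example.
raw = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)
train_ds = raw["train"].shuffle(seed=42).select(range(1000)).map(tokenize, batched=True)
eval_ds = raw["test"].shuffle(seed=42).select(range(500)).map(tokenize, batched=True)

accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

def model_init(trial):
    # A fresh model for every trial, as required by hyperparameter_search
    return AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def optuna_hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [16, 32]
        ),
    }

training_args = TrainingArguments(output_dir="hp_search", eval_strategy="epoch", num_train_epochs=1)

trainer = Trainer(
    model=None,  # the model is created by model_init for every trial
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
    processing_class=tokenizer,
    model_init=model_init,
)

best_trial = trainer.hyperparameter_search(
    direction="maximize",
    backend="optuna",
    hp_space=optuna_hp_space,
    n_trials=5,  # keep the search small for the example
)
print(best_trial)
Since compute_objective is omitted here, the default described above is used, which in this case reduces to the evaluation accuracy.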
Supervised_Fine_Tuning_of_Llama_3_8B_on_one_AWS_Tr.txt
Supervised Fine-Tuning of Llama 3 8B on one AWS Trainium instance
Note: The complete script for this tutorial can be downloaded here .
This tutorial will teach you how to fine-tune open source LLMs like Llama 3 on AWS Trainium. In our example, we are going to leverage the Optimum Neuron , Transformers and Datasets libraries.
You will learn how to:
Setup AWS Environment
Load and process the dataset
Supervised Fine-Tuning of Llama on AWS Trainium with the NeuronSFTTrainer
Launch Training
Evaluate and test fine-tuned Llama model
While we will use Llama-3 8B in this tutorial, it is completely possible to use other models, simply by switching the model_id .
1. Setup AWS Environment
Before starting this tutorial, you will need to set up your environment:
Create an AWS Trainium instance. You will need a trn1.32xlarge , which contains 16 Neuron Devices. You can follow this guide to create one.
Make sure you are logged in on the Hugging Face Hub:
Copied
huggingface-cli login --token YOUR_TOKEN
Check that you have access to the model. Some open source models are gated, meaning that users need to apply to the model owner to be able to use the model weights.
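If you want to verify programmatically that your token can actually read the gated weights before launching a long job, you can query the Hub with huggingface_hub (installed alongside the libraries used here). This helper is not part of the tutorial; the repository id is the gated option discussed next.
Copied
from huggingface_hub import model_info
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError


def check_access(repo_id: str) -> bool:
    """Return True if the currently logged-in token can read `repo_id`."""
    try:
        model_info(repo_id)
        return True
    except GatedRepoError:
        print(f"{repo_id} is gated: request access on its model page first.")
    except RepositoryNotFoundError:
        print(f"{repo_id} does not exist or is not visible with this token.")
    return False


check_access("meta-llama/Meta-Llama-3-8B")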
Here we will be training Llama-3 8B, for which there are two possibilities: The official gated repo: meta-llama/Meta-Llama-3-8B The non-official un-gated repo: NousResearch/Meta-Llama-3-8B Clone the Optimum Neuron repository, which contains the complete script described in this tutorial: Copied git clone https://github.com/huggingface/optimum-neuron.git 2. Load and prepare the dataset For this tutorial, we will use Dolly , an open source dataset of instruction-following records on categories outlined in the InstructGPT paper , including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. Example: Copied { "instruction" : "What is world of warcraft" , "context" : "" , "response" : ( "World of warcraft is a massive online multi player role playing game. " "It was released in 2004 by blizarre entertainment" ) } We can use the load_dataset() method from the 🤗 Datasets library to load the dolly dataset very easily. Copied from datasets import load_dataset from random import randrange # Load dataset from the hub dataset = load_dataset( "databricks/databricks-dolly-15k" , split= "train" ) print ( f"dataset size: { len (dataset)} " ) print (dataset[randrange( len (dataset))]) # dataset size: 15011 To instruct fine-tune our model we need to: Convert our structured examples into collection of tasks described via instructions (Optional) Pack multiple examples to one sequence for more efficient training. In other words, we are stacking multiple examples into one example, and split them with the EOS token. We could do this manually, but we will use the NeuronSFTTrainer instead. 3. Supervised Fine-Tuning of Llama on AWS Trainium with the NeuronSFTTrainer Normally you would use the SFTConfig and SFTTrainer classes to perform supervised fine-tuning of PyTorch-based transformer models. Instead, here we will be using the NeuronSFTConfig and NeuronSFTTrainer . These classes replicate the ones from the trl library while making sure they work properly on Neuron cores. Formatting our dataset There are multiple ways to give a dataset to the NeuronSFTTrainer , and one of them consists in providing a formatting function. For dolly without packing the examples it looks as follows: Copied def format_dolly ( examples ): output_text = [] for i in range ( len (examples[ "instruction" ])): instruction = f"### Instruction\n {examples[ 'instruction' ][i]} " context = f"### Context\n {examples[ 'context' ][i]} " if len (examples[ "context" ][i]) > 0 else None response = f"### Answer\n {examples[ 'response' ][i]} " prompt = "\n\n" .join([i for i in [instruction, context, response] if i is not None ]) output_text.append(prompt) return output_text Preparing the model Since Llama-3 8B is a big model it will not fit on a single trn1.32xlarge instance, even with distributed training. To actually fine-tune a 8B model using only one Trainium instance we need to use both LoRA and distributed training. If you want to know more about distributed training you can take a look at the documentation . Here, we will use tensor parallelism in conjuction with LoRA. 
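The training code below uses a training_args object (an instance of NeuronTrainingArguments ) without showing how it is built. As a rough, illustrative sketch, it could be constructed like this; the argument names mirror the command-line flags of the launcher script shown later in this tutorial, so double-check them against your optimum-neuron version.
Copied
from optimum.neuron import NeuronTrainingArguments

# Illustrative values only, mirroring the launcher script used later in this tutorial.
training_args = NeuronTrainingArguments(
    output_dir="dolly_llama",
    num_train_epochs=1,
    do_train=True,
    learning_rate=5e-5,
    warmup_ratio=0.03,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    bf16=True,
    zero_1=False,                 # assumed flag name, taken from the launcher script
    tensor_parallel_size=2,       # must match the tensor_parallel_size used below
    logging_steps=1,
    save_total_limit=1,
    lr_scheduler_type="constant",
    overwrite_output_dir=True,
)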
Our training code will look as follows: Copied from peft import LoraConfig from optimum.neuron import NeuronSFTConfig, NeuronSFTTrainer from optimum.neuron.distributed import lazy_load_for_parallelism # Define the tensor_parallel_size tensor_parallel_size = 2 dataset = load_dataset( "databricks/databricks-dolly-15k" , split= "train" ) model_id = "meta-llama/Meta-Llama-3-8B" tokenizer = AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token = tokenizer.eos_token with lazy_load_for_parallelism(tensor_parallel_size=tensor_parallel_size): model = AutoModelForCausalLM.from_pretrained(model_id) config = LoraConfig( r= 16 , lora_alpha= 16 , lora_dropout= 0.05 , target_modules=[ "q_proj" , "gate_proj" , "v_proj" , "o_proj" , "k_proj" , "up_proj" , "down_proj" ], bias= "none" , task_type= "CAUSAL_LM" , ) # training_args is an instance of NeuronTrainingArguments args = training_args.to_dict() sft_config = NeuronSFTConfig( max_seq_length= 1024 , packing= False , **args, ) trainer = NeuronSFTTrainer( args=sft_config, model=model, peft_config=config, tokenizer=tokenizer, train_dataset=dataset, formatting_func=format_dolly, ) # Start training trainer.train() trainer.save_model() # Saves the tokenizer too for easy upload The key points here are: We use the lazy_load_for_parallelism context manager to lazily load the model. This will not load the full model weights on each worker, but instead only load the required weights (sharded or full). This is much more memory efficient, and often mandatory to use. We define a LoraConfig that specifies which layers should have adapters, and the hyperparameters for theses adapters. We create a NeuronSFTConfig from regular NeuronTrainingArguments . Here we specify that we do not want to pack our examples, and that the max sequence length should be 1024 , meaning that every example will be either padded or truncated to a length of 1024 . We use the NeuronSFTTrainer to perform training. It will take the lazily loaded model, along with lora_config , sft_config and format_dolly and prepare the dataset and model for supervised fine-tuning. 4. Launch Training We prepared a script called sft_lora_finetune_llm.py summing up everything mentioned in this tutorial. PyTorch Neuron uses torch_xla . It evaluates operations lazily during the execution of the training loops, which means it builds a symbolic graph in the background, and the graph is executed on the hardware only when the tensor is printed, transferred to CPU, or when xm.mark_step() is called. During execution, multiple graphs can be built depending on control-flow, and it can take time to compile each graph sequentially. To alleviate that, the Neuron SDK provides neuron_parallel_compile , a tool which performs a fast trial run that builds all the graphs and compile them in parallel. This step is usually called precompilation. Precompilation When training models on AWS Trainium we first need to compile our model with our training arguments. To ease this step, we added a model cache repository , which allows us to use precompiled models from the Hugging Face Hub to skip the compilation step. But be careful: every change in the model configuration might lead to a new compilation, which could result in some cache misses. To learn more about the caching system, and how you can create your own private cache repository, check this guide . 
The compilation command simply consists in calling your script as an input to the neuron_parallel_compile utility: Copied #!/bin/bash set -ex export NEURON_FUSE_SOFTMAX=1 export NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS=3 export MALLOC_ARENA_MAX=64 export NEURON_CC_FLAGS= "--model-type=transformer --distribution-strategy=llm-training --enable-saturate-infinity --cache_dir=/home/ubuntu/cache_dir_neuron/" PROCESSES_PER_NODE=8 NUM_EPOCHS=1 TP_DEGREE=2 PP_DEGREE=1 BS=1 GRADIENT_ACCUMULATION_STEPS=8 LOGGING_STEPS=1 MODEL_NAME= "meta-llama/Meta-Llama-3-8B" OUTPUT_DIR=output- $SLURM_JOB_ID if [ " $NEURON_EXTRACT_GRAPHS_ONLY " = "1" ]; then MAX_STEPS=$((LOGGING_STEPS + 5 )) else MAX_STEPS=-1 fi XLA_USE_BF16=1 neuron_parallel_compile torchrun --nproc_per_node $PROCESSES_PER_NODE docs/source/training_tutorials/sft_lora_finetune_llm.py \ --model_id $MODEL_NAME \ --num_train_epochs $NUM_EPOCHS \ --do_train \ --learning_rate 5e-5 \ --warmup_ratio 0.03 \ --max_steps $MAX_STEPS \ --per_device_train_batch_size $BS \ --per_device_eval_batch_size $BS \ --gradient_accumulation_steps $GRADIENT_ACCUMULATION_STEPS \ --gradient_checkpointing true \ --bf16 \ --zero_1 false \ --tensor_parallel_size $TP_DEGREE \ --pipeline_parallel_size $PP_DEGREE \ --logging_steps $LOGGING_STEPS \ --save_total_limit 1 \ --output_dir $OUTPUT_DIR \ --lr_scheduler_type "constant" \ --overwrite_output_dir Make sure to run this precompilation phase for around 10 training steps. It is usually enough to accumulate and compile all the graphs that will be needed during the actual training. Note: Compiling without a cache can take a while. It will also create dummy files in the dolly_llama_sharded during compilation you will have to remove them afterwards. We also need to add MALLOC_ARENA_MAX=64 to limit the CPU allocation to avoid potential crashes, don’t remove it for now. Copied # remove dummy artifacts which are created by the precompilation command rm -rf dolly_llama Actual Training After compilation is done we can start our actual training with a similar command, we just need to remove the use of neuron_parallel_compile . We will use torchrun to launch our training script. torchrun is a tool that automatically distributes a PyTorch model across multiple accelerators. We can pass the number of accelerators as nproc_per_node arguments alongside our hyperparameters. The difference to the compilation command is that we changed from max_steps=10 to num_train_epochs=3 . Launch the training, with the following command. 
Copied #!/bin/bash set -ex export NEURON_FUSE_SOFTMAX=1 export NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS=3 export MALLOC_ARENA_MAX=64 export NEURON_CC_FLAGS= "--model-type=transformer --distribution-strategy=llm-training --enable-saturate-infinity --cache_dir=/home/ubuntu/cache_dir_neuron/" PROCESSES_PER_NODE=8 NUM_EPOCHS=1 TP_DEGREE=2 PP_DEGREE=1 BS=1 GRADIENT_ACCUMULATION_STEPS=8 LOGGING_STEPS=1 MODEL_NAME= "meta-llama/Meta-Llama-3-8B" OUTPUT_DIR=output- $SLURM_JOB_ID if [ " $NEURON_EXTRACT_GRAPHS_ONLY " = "1" ]; then MAX_STEPS=$((LOGGING_STEPS + 5 )) else MAX_STEPS=-1 fi XLA_USE_BF16=1 torchrun --nproc_per_node $PROCESSES_PER_NODE docs/source/training_tutorials/sft_lora_finetune_llm.py \ --model_id $MODEL_NAME \ --num_train_epochs $NUM_EPOCHS \ --do_train \ --learning_rate 5e-5 \ --warmup_ratio 0.03 \ --max_steps $MAX_STEPS \ --per_device_train_batch_size $BS \ --per_device_eval_batch_size $BS \ --gradient_accumulation_steps $GRADIENT_ACCUMULATION_STEPS \ --gradient_checkpointing true \ --bf16 \ --zero_1 false \ --tensor_parallel_size $TP_DEGREE \ --pipeline_parallel_size $PP_DEGREE \ --logging_steps $LOGGING_STEPS \ --save_total_limit 1 \ --output_dir $OUTPUT_DIR \ --lr_scheduler_type "constant" \ --overwrite_output_dir That’s it, we successfully trained Llama-3 8B on AWS Trainium! But before we can share and test our model we need to consolidate our model. Since we used tensor parallelism during training, we saved sharded versions of the checkpoints. We need to consolidate them now. Consolidate the Checkpoint The Optimum CLI provides a way of doing that very easily via the optimum neuron consolidate [sharded_checkpoint] [output_dir] command: Copied optimum-cli neuron consolidate dolly_llama dolly_llama 5. Evaluate and test fine-tuned Llama model As for training, to be able to run inference on AWS Trainium or AWS Inferentia2 we need to compile our model. In this case, we will use our Trainium instance for the inference test, but we recommend customer to switch to Inferentia2 ( inf2.24xlarge ) for inference. Optimum Neuron implements similar to Transformers AutoModel classes for easy inference use. We will use the NeuronModelForCausalLM class to load our vanilla transformers checkpoint and convert it to neuron. Copied from optimum.neuron import NeuronModelForCausalLM from transformers import AutoTokenizer compiler_args = { "num_cores" : 2 , "auto_cast_type" : 'fp16' } input_shapes = { "batch_size" : 1 , "sequence_length" : 2048 } tokenizer = AutoTokenizer.from_pretrained( "dolly_llama" ) model = NeuronModelForCausalLM.from_pretrained( "dolly_llama" , export= True , **compiler_args, **input_shapes) Note: Inference compilation can take ~25minutes. Luckily, you need to only run this onces. Since you can save the model afterwards. If you are going to run on Inferentia2 you need to recompile again. The compilation is parameter and hardware specific. Copied # COMMENT IN if you want to save the compiled model # model.save_pretrained("compiled_dolly_llama") We can now test inference, but have to make sure we format our input to our prompt format we used for fine-tuning. Therefore we created a helper method, which accepts a dict with our instruction and optionally a context . 
Copied def format_dolly_inference ( sample ): instruction = f"### Instruction\n {sample[ 'instruction' ]} " context = f"### Context\n {sample[ 'context' ]} " if "context" in sample else None response = f"### Answer\n" prompt = "\n\n" .join([i for i in [instruction, context, response] if i is not None ]) return prompt def generate ( sample ): prompt = format_dolly_inference(sample) inputs = tokenizer(prompt, return_tensors= "pt" ) outputs = model.generate( **inputs, max_new_tokens= 512 , do_sample= True , temperature= 0.9 , top_k= 50 , top_p= 0.9 ) return tokenizer.decode(outputs[ 0 ], skip_special_tokens= False )[ len (prompt):] Let’s test inference. First we test without a context. Note: Inference is not expected to be super fast on AWS Trainium using 2 cores. For Inference we recommend using Inferentia2. Copied prompt = { "instruction" : "Can you tell me something about AWS?" } res = generate(prompt) print (res) AWS stands for Amazon Web Services. AWS is a suite of remote computing services offered by Amazon. The most widely used of these include Amazon Elastic Compute Cloud (Amazon EC2), which provides resizable compute capacity in the cloud; Amazon Simple Storage Service (Amazon S3), which is an object storage service; and Amazon Elastic Block Store (Amazon EBS), which is designed to provide high performance, durable block storage volumes for use with AWS instances. AWS also provides other services, such as AWS Identity and Access Management (IAM), a service that enables organizations to control access to their AWS resources, and AWS Key Management Service (AWS KMS), which helps customers create and control the use of encryption keys. That looks correct. Now, lets add some context, e.g. as you would do for RAG applications: Copied prompt = { "instruction" : "How can I train models on AWS Trainium?" , "context" : "🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS Accelerators including [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/?nc1=h_ls) and [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls). It provides a set of tools enabling easy model loading, training and inference on single- and multi-Accelerator settings for different downstream tasks." } res = generate(prompt) print (res) You can use the Optimum Neuron interface to train models on AWS Trainium. Awesome, our model also correctly uses the provided context. We are done. Congrats on fine-tuning Llama on AWS Trainium. ← Fine-tune Llama 3 8B on AWS Trainium Notebooks → Supervised Fine- Tuning of Llama 3 8 B on one AW S Trainium instance 1. Setup AW S Environment 2. Load and prepare the dataset 3. Supervised Fine- Tuning of Llama on AW S Trainium with the NeuronSFT Trainer Formatting our dataset Preparing the model 4. Launch Training Precompilation Actual Training Consolidate the Checkpoint 5. Evaluate and test fine-tuned Llama model
🤗_Optimum_notebooks.txt
🤗 Optimum notebooks
You can find here a list of the notebooks associated with each accelerator in 🤗 Optimum.
Optimum Habana
How to use DeepSpeed to train models with billions of parameters on Habana Gaudi : Show how to use DeepSpeed to pre-train/fine-tune the 1.6B-parameter GPT2-XL for causal language modeling on Habana Gaudi.
Optimum Intel
OpenVINO
How to run inference with OpenVINO : Explains how to export your model to OpenVINO and run inference with OpenVINO Runtime on various tasks.
How to quantize a question answering model with NNCF : Show how to apply post-training quantization on a question answering model using NNCF and to accelerate inference with OpenVINO.
Compare outputs of a quantized Stable Diffusion model with its full-precision counterpart : Show how to load and compare outputs from two Stable Diffusion models with different precision.
Neural Compressor
How to quantize a model with Intel Neural Compressor for text classification : Show how to apply quantization while training your model using Intel Neural Compressor for any GLUE task.
Optimum ONNX Runtime
How to quantize a model with ONNX Runtime for text classification : Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task.
How to fine-tune a model for text classification with ONNX Runtime : Show how to fine-tune a DistilBERT model on GLUE tasks using ONNX Runtime.
How to fine-tune a model for summarization with ONNX Runtime : Show how to fine-tune a T5 model on the BBC news corpus.
How to fine-tune DeBERTa for question-answering with ONNX Runtime : Show how to fine-tune a DeBERTa model on SQuAD.
Spaces_ZeroGPU__Dynamic_GPU_Allocation_for_Spaces_.txt
Spaces ZeroGPU: Dynamic GPU Allocation for Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Spaces ZeroGPU: Dynamic GPU Allocation for Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Spaces ZeroGPU: Dynamic GPU Allocation for Spaces ZeroGPU is a shared infrastructure that optimizes GPU usage for AI models and demos on Hugging Face Spaces. It dynamically allocates and releases NVIDIA A100 GPUs as needed, offering: Free GPU Access : Enables cost-effective GPU usage for Spaces. Multi-GPU Support : Allows Spaces to leverage multiple GPUs concurrently on a single application. Unlike traditional single-GPU allocations, ZeroGPU’s efficient system lowers barriers for developers, researchers, and organizations to deploy AI models by maximizing resource utilization and power efficiency. Using and hosting ZeroGPU Spaces Using existing ZeroGPU Spaces ZeroGPU Spaces are available to use for free to all users. (Visit the curated list ). PRO users get x5 more daily usage quota and highest priority in GPU queues when using any ZeroGPU Spaces. Hosting your own ZeroGPU Spaces Personal accounts: Subscribe to PRO to access ZeroGPU in the hardware options when creating a new Gradio SDK Space. Organizations: Subscribe to the Enterprise Hub to enable ZeroGPU Spaces for all organization members. Technical Specifications GPU Type : Nvidia A100 Available VRAM : 40GB per workload Compatibility ZeroGPU Spaces are designed to be compatible with most PyTorch-based GPU Spaces. While compatibility is enhanced for high-level Hugging Face libraries like transformers and diffusers , users should be aware that: Currently, ZeroGPU Spaces are exclusively compatible with the Gradio SDK . 
ZeroGPU Spaces may have limited compatibility compared to standard GPU Spaces. Unexpected issues may arise in some scenarios.
Supported Versions
Gradio: 4+
PyTorch: 2.0.1, 2.1.2, 2.2.2, 2.4.0 (Note: 2.3.x is not supported due to a PyTorch bug )
Python: 3.10.13
Getting started with ZeroGPU
To utilize ZeroGPU in your Space, follow these steps:
Make sure the ZeroGPU hardware is selected in your Space settings.
Import the spaces module.
Decorate GPU-dependent functions with @spaces.GPU .
This decoration process allows the Space to request a GPU when the function is called and release it upon completion.
Example Usage
Copied
import gradio as gr
import spaces
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(...)
pipe.to('cuda')

@spaces.GPU
def generate(prompt):
    return pipe(prompt).images

gr.Interface(
    fn=generate,
    inputs=gr.Text(),
    outputs=gr.Gallery(),
).launch()
Note: The @spaces.GPU decorator is designed to be effect-free in non-ZeroGPU environments, ensuring compatibility across different setups.
Duration Management
For functions expected to exceed the default 60 seconds of GPU runtime, you can specify a custom duration:
Copied
@spaces.GPU(duration=120)
def generate(prompt):
    return pipe(prompt).images
This sets the maximum function runtime to 120 seconds. Specifying shorter durations for quicker functions will improve queue priority for Space visitors (a fuller, self-contained sketch is shown at the end of this page).
Hosting Limitations
Personal accounts ( PRO subscribers ): Maximum of 10 ZeroGPU Spaces.
Organization accounts ( Enterprise Hub ): Maximum of 50 ZeroGPU Spaces.
By leveraging ZeroGPU, developers can create more efficient and scalable Spaces, maximizing GPU utilization while minimizing costs.
Feedback
You can share your feedback on Spaces ZeroGPU directly on the HF Hub: https://huggingface.co/spaces/zero-gpu-explorers/README/discussions
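As a fuller, self-contained sketch (referenced from the Duration Management section above), here is a complete app.py for a ZeroGPU Space that combines the loading pattern shown earlier with a custom duration. The model choice and generation settings are illustrative assumptions, not recommendations.
Copied
import gradio as gr
import spaces
import torch
from transformers import pipeline

# Load once at startup and place on CUDA, mirroring the diffusers example above.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # illustrative, ungated model
    torch_dtype=torch.bfloat16,
    device="cuda",
)


@spaces.GPU(duration=30)  # a short duration improves queue priority for visitors
def generate(prompt: str) -> str:
    return generator(prompt, max_new_tokens=128)[0]["generated_text"]


gr.Interface(fn=generate, inputs=gr.Text(), outputs=gr.Text()).launch()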
Trainer.txt
Trainer
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Trainer The Trainer class provides an API for feature-complete training in PyTorch, and it supports distributed training on multiple GPUs/TPUs, mixed precision for NVIDIA GPUs , AMD GPUs , and torch.amp for PyTorch. Trainer goes hand-in-hand with the TrainingArguments class, which offers a wide range of options to customize how a model is trained. Together, these two classes provide a complete training API. Seq2SeqTrainer and Seq2SeqTrainingArguments inherit from the Trainer and TrainingArguments classes and they’re adapted for training models for sequence-to-sequence tasks such as summarization or translation. The Trainer class is optimized for 🤗 Transformers models and can have surprising behaviors when used with other models. When using it with your own model, make sure: your model always return tuples or subclasses of ModelOutput your model can compute the loss if a labels argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples) your model can accept multiple label arguments (use label_names in TrainingArguments to indicate their name to the Trainer ) but none of them should be named "label" Trainer class transformers. 
Trainer < source > ( model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None args : TrainingArguments = None data_collator : typing.Optional[transformers.data.data_collator.DataCollator] = None train_dataset : typing.Union[torch.utils.data.dataset.Dataset, torch.utils.data.dataset.IterableDataset, ForwardRef('datasets.Dataset'), NoneType] = None eval_dataset : typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], ForwardRef('datasets.Dataset'), NoneType] = None processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None model_init : typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None compute_loss_func : typing.Optional[typing.Callable] = None compute_metrics : typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], typing.Dict]] = None callbacks : typing.Optional[typing.List[transformers.trainer_callback.TrainerCallback]] = None optimizers : typing.Tuple[typing.Optional[torch.optim.optimizer.Optimizer], typing.Optional[torch.optim.lr_scheduler.LambdaLR]] = (None, None) optimizer_cls_and_kwargs : typing.Optional[typing.Tuple[typing.Type[torch.optim.optimizer.Optimizer], typing.Dict[str, typing.Any]]] = None preprocess_logits_for_metrics : typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None ) Parameters model ( PreTrainedModel or torch.nn.Module , optional ) — The model to train, evaluate or use for predictions. If not provided, a model_init must be passed. Trainer is optimized to work with the PreTrainedModel provided by the library. You can still use your own models defined as torch.nn.Module as long as they work the same way as the 🤗 Transformers models. args ( TrainingArguments , optional ) — The arguments to tweak for training. Will default to a basic instance of TrainingArguments with the output_dir set to a directory named tmp_trainer in the current directory if not provided. data_collator ( DataCollator , optional ) — The function to use to form a batch from a list of elements of train_dataset or eval_dataset . Will default to default_data_collator() if no processing_class is provided, an instance of DataCollatorWithPadding otherwise if the processing_class is a feature extractor or tokenizer. train_dataset (Union[ torch.utils.data.Dataset , torch.utils.data.IterableDataset , datasets.Dataset ], optional ) — The dataset to use for training. If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. Note that if it’s a torch.utils.data.IterableDataset with some randomization and you are training in a distributed fashion, your iterable dataset should either use a internal attribute generator that is a torch.Generator for the randomization that must be identical on all processes (and the Trainer will manually set the seed of this generator at each epoch) or have a set_epoch() method that internally sets the seed of the RNGs used. eval_dataset (Union[ torch.utils.data.Dataset , Dict[str, torch.utils.data.Dataset , datasets.Dataset ]), optional ) — The dataset to use for evaluation. If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. 
If it is a dictionary, it will evaluate on each dataset prepending the dictionary key to the metric name. processing_class ( PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin , optional ) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model. This supercedes the tokenizer argument, which is now deprecated. model_init ( Callable[[], PreTrainedModel] , optional ) — A function that instantiates the model to be used. If provided, each call to train() will start from a new instance of the model as given by this function. The function may have zero argument, or a single one containing the optuna/Ray Tune/SigOpt trial object, to be able to choose different architectures according to hyper parameters (such as layer count, sizes of inner layers, dropout probabilities etc). compute_loss_func ( Callable , optional ) — A function that accepts the raw model outputs, labels, and the number of items in the entire accumulated batch (batch_size * gradient_accumulation_steps) and returns the loss. For example, see the default loss function used by Trainer . compute_metrics ( Callable[[EvalPrediction], Dict] , optional ) — The function that will be used to compute metrics at evaluation. Must take a EvalPrediction and return a dictionary string to metric values. Note When passing TrainingArgs with batch_eval_metrics set to True , your compute_metrics function must take a boolean compute_result argument. This will be triggered after the last eval batch to signal that the function needs to calculate and return the global summary statistics rather than accumulating the batch-level statistics callbacks (List of TrainerCallback , optional ) — A list of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in here . If you want to remove one of the default callbacks used, use the Trainer.remove_callback() method. optimizers ( Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] , optional , defaults to (None, None) ) — A tuple containing the optimizer and the scheduler to use. Will default to an instance of AdamW on your model and a scheduler given by get_linear_schedule_with_warmup() controlled by args . optimizer_cls_and_kwargs ( Tuple[Type[torch.optim.Optimizer], Dict[str, Any]] , optional ) — A tuple containing the optimizer class and keyword arguments to use. Overrides optim and optim_args in args . Incompatible with the optimizers argument. Unlike optimizers , this argument avoids the need to place model parameters on the correct devices before initializing the Trainer. preprocess_logits_for_metrics ( Callable[[torch.Tensor, torch.Tensor], torch.Tensor] , optional ) — A function that preprocess the logits right before caching them at each evaluation step. Must take two tensors, the logits and the labels, and return the logits once processed as desired. The modifications made by this function will be reflected in the predictions received by compute_metrics . Note that the labels (second parameter) will be None if the dataset does not have them. Trainer is a simple but feature-complete training and eval loop for PyTorch, optimized for 🤗 Transformers. Important attributes: model — Always points to the core model. If using a transformers model, it will be a PreTrainedModel subclass. 
model_wrapped — Always points to the most external model in case one or more other modules wrap the original model. This is the model that should be used for the forward pass. For example, under DeepSpeed , the inner model is wrapped in DeepSpeed and then again in torch.nn.DistributedDataParallel . If the inner model hasn’t been wrapped, then self.model_wrapped is the same as self.model . is_model_parallel — Whether or not a model has been switched to a model parallel mode (different from data parallelism, this means some of the model layers are split on different GPUs). place_model_on_device — Whether or not to automatically place the model on the device - it will be set to False if model parallel or deepspeed is used, or if the default TrainingArguments.place_model_on_device is overridden to return False . is_in_train — Whether or not a model is currently running train (e.g. when evaluate is called while in train ) add_callback < source > ( callback ) Parameters callback ( type or [`~transformers.TrainerCallback]`) — A TrainerCallback class or an instance of a TrainerCallback . In the first case, will instantiate a member of that class. Add a callback to the current list of TrainerCallback . autocast_smart_context_manager < source > ( cache_enabled : typing.Optional[bool] = True ) A helper wrapper that creates an appropriate context manager for autocast while feeding it the desired arguments, depending on the situation. compute_loss < source > ( model inputs return_outputs = False num_items_in_batch = None ) How the loss is computed by Trainer. By default, all models return the loss in the first element. Subclass and override for custom behavior. compute_loss_context_manager < source > ( ) A helper wrapper to group together context managers. create_model_card < source > ( language : typing.Optional[str] = None license : typing.Optional[str] = None tags : typing.Union[str, typing.List[str], NoneType] = None model_name : typing.Optional[str] = None finetuned_from : typing.Optional[str] = None tasks : typing.Union[str, typing.List[str], NoneType] = None dataset_tags : typing.Union[str, typing.List[str], NoneType] = None dataset : typing.Union[str, typing.List[str], NoneType] = None dataset_args : typing.Union[str, typing.List[str], NoneType] = None ) Parameters language ( str , optional ) — The language of the model (if applicable) license ( str , optional ) — The license of the model. Will default to the license of the pretrained model used, if the original model given to the Trainer comes from a repo on the Hub. tags ( str or List[str] , optional ) — Some tags to be included in the metadata of the model card. model_name ( str , optional ) — The name of the model. finetuned_from ( str , optional ) — The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo of the original model given to the Trainer (if it comes from the Hub). tasks ( str or List[str] , optional ) — One or several task identifiers, to be included in the metadata of the model card. dataset_tags ( str or List[str] , optional ) — One or several dataset tags, to be included in the metadata of the model card. dataset ( str or List[str] , optional ) — One or several dataset identifiers, to be included in the metadata of the model card. dataset_args ( str or List[str] , optional ) — One or several dataset arguments, to be included in the metadata of the model card. Creates a draft of a model card using the information available to the Trainer . 
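As noted in the compute_loss entry above, subclassing and overriding is the intended way to customize the training objective. Below is a brief illustrative sketch for a two-class sequence classification model with class-weighted cross-entropy; the weights are placeholders you would derive from your own label distribution.
Copied
import torch
from torch import nn
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    """Illustrative Trainer subclass using class-weighted cross-entropy."""

    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Placeholder weights for a 2-class problem; derive real values from your data.
        weight = torch.tensor([1.0, 2.0], device=logits.device, dtype=logits.dtype)
        loss = nn.functional.cross_entropy(logits, labels, weight=weight)
        return (loss, outputs) if return_outputs else loss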
create_optimizer < source > ( ) Setup the optimizer. We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers , or subclass and override this method in a subclass. create_optimizer_and_scheduler < source > ( num_training_steps : int ) Setup the optimizer and the learning rate scheduler. We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers , or subclass and override this method (or create_optimizer and/or create_scheduler ) in a subclass. create_scheduler < source > ( num_training_steps : int optimizer : Optimizer = None ) Parameters num_training_steps (int) — The number of training steps to do. Setup the scheduler. The optimizer of the trainer must have been set up either before this method is called or passed as an argument. evaluate < source > ( eval_dataset : typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], NoneType] = None ignore_keys : typing.Optional[typing.List[str]] = None metric_key_prefix : str = 'eval' ) Parameters eval_dataset (Union[ Dataset , Dict[str, Dataset ]), optional ) — Pass a dataset if you wish to override self.eval_dataset . If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. If it is a dictionary, it will evaluate on each dataset, prepending the dictionary key to the metric name. Datasets must implement the __len__ method. If you pass a dictionary with names of datasets as keys and datasets as values, evaluate will run separate evaluations on each dataset. This can be useful to monitor how training affects other datasets or simply to get a more fine-grained evaluation. When used with load_best_model_at_end , make sure metric_for_best_model references exactly one of the datasets. If you, for example, pass in {"data1": data1, "data2": data2} for two datasets data1 and data2 , you could specify metric_for_best_model="eval_data1_loss" for using the loss on data1 and metric_for_best_model="eval_data2_loss" for the loss on data2 . ignore_keys ( List[str] , optional ) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. metric_key_prefix ( str , optional , defaults to "eval" ) — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is “eval” (default) Run evaluation and returns metrics. The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument). You can also subclass and override this method to inject custom behavior. evaluation_loop < source > ( dataloader : DataLoader description : str prediction_loss_only : typing.Optional[bool] = None ignore_keys : typing.Optional[typing.List[str]] = None metric_key_prefix : str = 'eval' ) Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict() . Works both with or without labels. floating_point_ops < source > ( inputs : typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] ) → int Parameters inputs ( Dict[str, Union[torch.Tensor, Any]] ) — The inputs and targets of the model. Returns int The number of floating-point operations. For models that inherit from PreTrainedModel , uses that method to compute the number of floating point operations for every backward + forward pass. 
If using another model, either implement such a method in the model or subclass and override this method. get_decay_parameter_names < source > ( model ) Get all parameter names that weight decay will be applied to Note that some models implement their own layernorm instead of calling nn.LayerNorm, weight decay could still apply to those modules since this function only filter out instance of nn.LayerNorm get_eval_dataloader < source > ( eval_dataset : typing.Union[str, torch.utils.data.dataset.Dataset, NoneType] = None ) Parameters eval_dataset ( str or torch.utils.data.Dataset , optional ) — If a str , will use self.eval_dataset[eval_dataset] as the evaluation dataset. If a Dataset , will override self.eval_dataset and must implement __len__ . If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. Returns the evaluation ~torch.utils.data.DataLoader . Subclass and override this method if you want to inject some custom behavior. get_learning_rates < source > ( ) Returns the learning rate of each parameter from self.optimizer. get_num_trainable_parameters < source > ( ) Get the number of trainable parameters. get_optimizer_cls_and_kwargs < source > ( args : TrainingArguments model : typing.Optional[transformers.modeling_utils.PreTrainedModel] = None ) Parameters args ( transformers.training_args.TrainingArguments ) — The training arguments for the training session. Returns the optimizer class and optimizer parameters based on the training arguments. get_optimizer_group < source > ( param : typing.Union[str, torch.nn.parameter.Parameter, NoneType] = None ) Parameters param ( str or torch.nn.parameter.Parameter , optional ) — The parameter for which optimizer group needs to be returned. Returns optimizer group for a parameter if given, else returns all optimizer groups for params. get_test_dataloader < source > ( test_dataset : Dataset ) Parameters test_dataset ( torch.utils.data.Dataset , optional ) — The test dataset to use. If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. It must implement __len__ . Returns the test ~torch.utils.data.DataLoader . Subclass and override this method if you want to inject some custom behavior. get_train_dataloader < source > ( ) Returns the training ~torch.utils.data.DataLoader . Will use no sampler if train_dataset does not implement __len__ , a random sampler (adapted to distributed training if necessary) otherwise. Subclass and override this method if you want to inject some custom behavior. hyperparameter_search < source > ( hp_space : typing.Optional[typing.Callable[[ForwardRef('optuna.Trial')], typing.Dict[str, float]]] = None compute_objective : typing.Optional[typing.Callable[[typing.Dict[str, float]], float]] = None n_trials : int = 20 direction : typing.Union[str, typing.List[str]] = 'minimize' backend : typing.Union[ForwardRef('str'), transformers.trainer_utils.HPSearchBackend, NoneType] = None hp_name : typing.Optional[typing.Callable[[ForwardRef('optuna.Trial')], str]] = None **kwargs ) → [ trainer_utils.BestRun or List[trainer_utils.BestRun] ] Parameters hp_space ( Callable[["optuna.Trial"], Dict[str, float]] , optional ) — A function that defines the hyperparameter search space. Will default to default_hp_space_optuna() or default_hp_space_ray() or default_hp_space_sigopt() depending on your backend. 
compute_objective ( Callable[[Dict[str, float]], float] , optional ) — A function computing the objective to minimize or maximize from the metrics returned by the evaluate method. Will default to default_compute_objective() . n_trials ( int , optional , defaults to 20) — The number of trial runs to test. direction ( str or List[str] , optional , defaults to "minimize" ) — For single-objective optimization, direction is a str and can be "minimize" or "maximize" ; pick "minimize" when optimizing the validation loss and "maximize" when optimizing one or several metrics. For multi-objective optimization, direction is a List[str] of "minimize" and "maximize" values, following the same rule for each objective. backend ( str or ~training_utils.HPSearchBackend , optional ) — The backend to use for hyperparameter search. Will default to optuna or Ray Tune or SigOpt, depending on which one is installed. If all are installed, will default to optuna. hp_name ( Callable[["optuna.Trial"], str] , optional ) — A function that defines the trial/run name. Will default to None. kwargs ( Dict[str, Any] , optional ) — Additional keyword arguments for each backend: optuna : parameters from optuna.study.create_study and also the parameters timeout , n_jobs and gc_after_trial from optuna.study.Study.optimize ray : parameters from tune.run . If resources_per_trial is not set in the kwargs , it defaults to 1 CPU core and 1 GPU (if available). If progress_reporter is not set in the kwargs , ray.tune.CLIReporter is used. sigopt : the parameter proxies from sigopt.Connection.set_proxies . Returns [ trainer_utils.BestRun or List[trainer_utils.BestRun] ] All the information about the best run or best runs for multi-objective optimization. Experiment summary can be found in the run_summary attribute for the Ray backend. Launch a hyperparameter search using optuna or Ray Tune or SigOpt . The optimized quantity is determined by compute_objective , which defaults to a function returning the evaluation loss when no metric is provided, and the sum of all metrics otherwise. To use this method, you need to have provided a model_init when initializing your Trainer : we need to reinitialize the model at each new run. This is incompatible with the optimizers argument, so you need to subclass Trainer and override the method create_optimizer_and_scheduler() for a custom optimizer/scheduler. A usage sketch is shown further below. init_hf_repo < source > ( token : typing.Optional[str] = None ) Initializes a git repo in self.args.hub_model_id . is_local_process_zero < source > ( ) Whether or not this process is the local main process (e.g., on one machine, if training in a distributed fashion on several machines). is_world_process_zero < source > ( ) Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process). log < source > ( logs : typing.Dict[str, float] start_time : typing.Optional[float] = None ) Parameters logs ( Dict[str, float] ) — The values to log. start_time ( Optional[float] ) — The start of training. Log logs on the various objects watching training. Subclass and override this method to inject custom behavior.
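Tying the hyperparameter_search arguments above together, here is a hedged sketch using the optuna backend. The checkpoint name, datasets, tokenizer and search space are placeholders you would swap for your own, and model_init is required so that each trial starts from a fresh model.

Copied
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init(trial):
    # Re-instantiated for every trial; the checkpoint name is a placeholder.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def optuna_hp_space(trial):
    # Hypothetical search space over two hyperparameters.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [8, 16, 32]),
    }

trainer = Trainer(
    model=None,                   # the model comes from model_init instead
    model_init=model_init,
    args=TrainingArguments(output_dir="hp_search", eval_strategy="epoch"),
    train_dataset=train_dataset,  # assumed to be defined elsewhere
    eval_dataset=eval_dataset,    # assumed to be defined elsewhere
    processing_class=tokenizer,   # assumed to be defined elsewhere
)

best_run = trainer.hyperparameter_search(
    hp_space=optuna_hp_space,
    backend="optuna",
    n_trials=20,
    direction="minimize",         # minimize the evaluation loss (no compute_metrics provided)
)
print(best_run.hyperparameters)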
log_metrics < source > ( split metrics ) Parameters split ( str ) — Mode/split name: one of train , eval , test metrics ( Dict[str, float] ) — The metrics returned from train/evaluate/predict Log metrics in a specially formatted way. Under a distributed environment this is done only for a process with rank 0. Notes on memory reports: In order to get a memory usage report you need to install psutil . You can do that with pip install psutil . Now when this method is run, you will see a report that will include: Copied init_mem_cpu_alloc_delta = 1301 MB init_mem_cpu_peaked_delta = 154 MB init_mem_gpu_alloc_delta = 230 MB init_mem_gpu_peaked_delta = 0 MB train_mem_cpu_alloc_delta = 1345 MB train_mem_cpu_peaked_delta = 0 MB train_mem_gpu_alloc_delta = 693 MB train_mem_gpu_peaked_delta = 7 MB Understanding the reports: the first segment, e.g., train__ , tells you which stage the metrics are for. Reports starting with init_ will be added to the first stage that gets run. So that if only evaluation is run, the memory usage for the __init__ will be reported along with the eval_ metrics. the third segment is either cpu or gpu , and tells you whether it's the general RAM or the gpu0 memory metric. *_alloc_delta - is the difference in the used/allocated memory counter between the end and the start of the stage - it can be negative if a function released more memory than it allocated. *_peaked_delta - is any extra memory that was consumed and then freed - relative to the current allocated memory counter - it is never negative. When you look at the metrics of any stage you add up alloc_delta + peaked_delta and you know how much memory was needed to complete that stage. The reporting happens only for the process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the main process does the bulk of the work, but it may not be quite so if model parallelism is used, since other GPUs may then use a different amount of gpu memory. This is also not the same under DataParallel where gpu0 may require much more memory than the rest since it stores the gradient and optimizer states for all participating GPUs. Perhaps in the future these reports will evolve to measure those too. The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory which is unique to the process and the memory shared with other processes. It is important to note that it does not include swapped out memory, so the reports could be imprecise. The CPU peak memory is measured using a sampling thread. Due to python's GIL it may miss some of the peak memory if that thread didn't get a chance to run when the highest memory was used. Therefore this report can be less than reality. Using tracemalloc would have reported the exact peak memory, but it doesn't report memory allocations outside of python. So if some C++ CUDA extension allocated its own memory it won't be reported. And therefore it was dropped in favor of the memory sampling approach, which reads the current process memory usage. The GPU allocated and peak memory reporting is done with torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated() . This metric reports only "deltas" for pytorch-specific allocations, as the torch.cuda memory management system doesn't track any memory allocated outside of pytorch. For example, the very first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.
Note that this tracker doesn’t account for memory allocations outside of Trainer ’s __init__ , train , evaluate and predict calls. Because evaluation calls may happen during train , we can’t handle nested invocations because torch.cuda.max_memory_allocated is a single counter, so if it gets reset by a nested eval call, train ’s tracker will report incorrect info. If this pytorch issue gets resolved it will be possible to change this class to be re-entrant. Until then we will only track the outer level of train , evaluate and predict methods. Which means that if eval is called during train , it’s the latter that will account for its memory usage and that of the former. This also means that if any other tool that is used along the Trainer calls torch.cuda.reset_peak_memory_stats , the gpu peak memory stats could be invalid. And the Trainer will disrupt the normal behavior of any such tools that rely on calling torch.cuda.reset_peak_memory_stats themselves. For best performance you may want to consider turning the memory profiling off for production runs. metrics_format < source > ( metrics : typing.Dict[str, float] ) → metrics ( Dict[str, float] ) Parameters metrics ( Dict[str, float] ) — The metrics returned from train/evaluate/predict Returns metrics ( Dict[str, float] ) The reformatted metrics Reformat Trainer metrics values to a human-readable format num_examples < source > ( dataloader : DataLoader ) Helper to get number of samples in a ~torch.utils.data.DataLoader by accessing its dataset. When dataloader.dataset does not exist or has no length, estimates as best it can num_tokens < source > ( train_dl : DataLoader max_steps : typing.Optional[int] = None ) Helper to get number of tokens in a ~torch.utils.data.DataLoader by enumerating dataloader. pop_callback < source > ( callback ) → TrainerCallback Parameters callback ( type or [`~transformers.TrainerCallback]`) — A TrainerCallback class or an instance of a TrainerCallback . In the first case, will pop the first member of that class found in the list of callbacks. Returns TrainerCallback The callback removed, if found. Remove a callback from the current list of TrainerCallback and returns it. If the callback is not found, returns None (and no error is raised). predict < source > ( test_dataset : Dataset ignore_keys : typing.Optional[typing.List[str]] = None metric_key_prefix : str = 'test' ) Parameters test_dataset ( Dataset ) — Dataset to run the predictions on. If it is an datasets.Dataset , columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__ ignore_keys ( List[str] , optional ) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. metric_key_prefix ( str , optional , defaults to "test" ) — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “test_bleu” if the prefix is “test” (default) Run prediction and returns predictions and potential metrics. Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate() . If your predictions or labels have different sequence length (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100. 
Returns: NamedTuple A namedtuple with the following keys: predictions ( np.ndarray ): The predictions on test_dataset . label_ids ( np.ndarray , optional ): The labels (if the dataset contained some). metrics ( Dict[str, float] , optional ): The potential dictionary of metrics (if the dataset contained labels). prediction_loop < source > ( dataloader : DataLoader description : str prediction_loss_only : typing.Optional[bool] = None ignore_keys : typing.Optional[typing.List[str]] = None metric_key_prefix : str = 'eval' ) Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict() . Works both with or without labels. prediction_step < source > ( model : Module inputs : typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] prediction_loss_only : bool ignore_keys : typing.Optional[typing.List[str]] = None ) → Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]] Parameters model ( nn.Module ) — The model to evaluate. inputs ( Dict[str, Union[torch.Tensor, Any]] ) — The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels . Check your model’s documentation for all accepted arguments. prediction_loss_only ( bool ) — Whether or not to return the loss only. ignore_keys ( List[str] , optional ) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. Returns Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]] A tuple with the loss, logits and labels (each being optional). Perform an evaluation step on model using inputs . Subclass and override to inject custom behavior. propagate_args_to_deepspeed < source > ( auto_find_batch_size = False ) Sets values in the deepspeed plugin based on the Trainer args push_to_hub < source > ( commit_message : typing.Optional[str] = 'End of training' blocking : bool = True token : typing.Optional[str] = None revision : typing.Optional[str] = None **kwargs ) Parameters commit_message ( str , optional , defaults to "End of training" ) — Message to commit while pushing. blocking ( bool , optional , defaults to True ) — Whether the function should return only when the git push has finished. token ( str , optional , defaults to None ) — Token with write permission to overwrite Trainer’s original args. revision ( str , optional ) — The git revision to commit from. Defaults to the head of the “main” branch. kwargs ( Dict[str, Any] , optional ) — Additional keyword arguments passed along to create_model_card() . Upload self.model and self.processing_class to the 🤗 model hub on the repo self.args.hub_model_id . remove_callback < source > ( callback ) Parameters callback ( type or [`~transformers.TrainerCallback]`) — A TrainerCallback class or an instance of a TrainerCallback . In the first case, will remove the first member of that class found in the list of callbacks. Remove a callback from the current list of TrainerCallback . save_metrics < source > ( split metrics combined = True ) Parameters split ( str ) — Mode/split name: one of train , eval , test , all metrics ( Dict[str, float] ) — The metrics returned from train/evaluate/predict combined ( bool , optional , defaults to True ) — Creates combined metrics by updating all_results.json with metrics of this call Save metrics into a json file for that split, e.g. train_results.json . Under distributed environment this is done only for a process with rank 0. 
To understand the metrics please read the docstring of log_metrics() . The only difference is that raw unformatted numbers are saved in the current method. save_model < source > ( output_dir : typing.Optional[str] = None _internal_call : bool = False ) Will save the model, so you can reload it using from_pretrained() . Will only save from the main process. save_state < source > ( ) Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model Under distributed environment this is done only for a process with rank 0. train < source > ( resume_from_checkpoint : typing.Union[str, bool, NoneType] = None trial : typing.Union[ForwardRef('optuna.Trial'), typing.Dict[str, typing.Any]] = None ignore_keys_for_eval : typing.Optional[typing.List[str]] = None **kwargs ) Parameters resume_from_checkpoint ( str or bool , optional ) — If a str , local path to a saved checkpoint as saved by a previous instance of Trainer . If a bool and equals True , load the last checkpoint in args.output_dir as saved by a previous instance of Trainer . If present, training will resume from the model/optimizer/scheduler states loaded here. trial ( optuna.Trial or Dict[str, Any] , optional ) — The trial run or the hyperparameter dictionary for hyperparameter search. ignore_keys_for_eval ( List[str] , optional ) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training. kwargs ( Dict[str, Any] , optional ) — Additional keyword arguments used to hide deprecated arguments Main training entry point. training_step < source > ( model : Module inputs : typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] num_items_in_batch = None ) → torch.Tensor Parameters model ( nn.Module ) — The model to train. inputs ( Dict[str, Union[torch.Tensor, Any]] ) — The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument labels . Check your model’s documentation for all accepted arguments. Returns torch.Tensor The tensor with training loss on this batch. Perform a training step on a batch of inputs. Subclass and override to inject custom behavior. Seq2SeqTrainer class transformers. 
Seq2SeqTrainer < source > ( model : typing.Union[ForwardRef('PreTrainedModel'), torch.nn.modules.module.Module] = None args : TrainingArguments = None data_collator : typing.Optional[ForwardRef('DataCollator')] = None train_dataset : typing.Union[torch.utils.data.dataset.Dataset, ForwardRef('IterableDataset'), ForwardRef('datasets.Dataset'), NoneType] = None eval_dataset : typing.Union[torch.utils.data.dataset.Dataset, typing.Dict[str, torch.utils.data.dataset.Dataset], NoneType] = None processing_class : typing.Union[ForwardRef('PreTrainedTokenizerBase'), ForwardRef('BaseImageProcessor'), ForwardRef('FeatureExtractionMixin'), ForwardRef('ProcessorMixin'), NoneType] = None model_init : typing.Optional[typing.Callable[[], ForwardRef('PreTrainedModel')]] = None compute_loss_func : typing.Optional[typing.Callable] = None compute_metrics : typing.Optional[typing.Callable[[ForwardRef('EvalPrediction')], typing.Dict]] = None callbacks : typing.Optional[typing.List[ForwardRef('TrainerCallback')]] = None optimizers : typing.Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) preprocess_logits_for_metrics : typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None ) evaluate < source > ( eval_dataset : typing.Optional[torch.utils.data.dataset.Dataset] = None ignore_keys : typing.Optional[typing.List[str]] = None metric_key_prefix : str = 'eval' **gen_kwargs ) Parameters eval_dataset ( Dataset , optional ) — Pass a dataset if you wish to override self.eval_dataset . If it is an Dataset , columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method. ignore_keys ( List[str] , optional ) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. metric_key_prefix ( str , optional , defaults to "eval" ) — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is "eval" (default) max_length ( int , optional ) — The maximum target length to use when predicting with the generate method. num_beams ( int , optional ) — Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search. gen_kwargs — Additional generate specific kwargs. Run evaluation and returns metrics. The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument). You can also subclass and override this method to inject custom behavior. predict < source > ( test_dataset : Dataset ignore_keys : typing.Optional[typing.List[str]] = None metric_key_prefix : str = 'test' **gen_kwargs ) Parameters test_dataset ( Dataset ) — Dataset to run the predictions on. If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. Has to implement the method __len__ ignore_keys ( List[str] , optional ) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. metric_key_prefix ( str , optional , defaults to "eval" ) — An optional prefix to be used as the metrics key prefix. For example the metrics “bleu” will be named “eval_bleu” if the prefix is "eval" (default) max_length ( int , optional ) — The maximum target length to use when predicting with the generate method. 
num_beams ( int , optional ) — Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search. gen_kwargs — Additional generate specific kwargs. Run prediction and returns predictions and potential metrics. Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in evaluate() . If your predictions or labels have different sequence lengths (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100. Returns: NamedTuple A namedtuple with the following keys: predictions ( np.ndarray ): The predictions on test_dataset . label_ids ( np.ndarray , optional ): The labels (if the dataset contained some). metrics ( Dict[str, float] , optional ): The potential dictionary of metrics (if the dataset contained labels). TrainingArguments class transformers. TrainingArguments < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-05 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : 
typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False ) Parameters output_dir ( str ) — The output directory where the model predictions and checkpoints will be written. overwrite_output_dir ( bool , optional , defaults to False ) — If True , overwrite the content of the output directory. Use this to continue training if output_dir points to a checkpoint directory. 
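As a small, hedged illustration of how output_dir and overwrite_output_dir interact with resuming (the directory name is a placeholder):

Copied
from transformers import TrainingArguments

# Keep overwrite_output_dir=False so an existing checkpoint directory is preserved.
args = TrainingArguments(
    output_dir="checkpoints/my-run",   # checkpoints and predictions are written here
    overwrite_output_dir=False,
)
# Later, assuming a Trainer named `trainer` was built with these args:
# trainer.train(resume_from_checkpoint=True)   # resumes from the last checkpoint in output_dir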
do_train ( bool , optional , defaults to False ) — Whether to run training or not. This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. do_eval ( bool , optional ) — Whether to run evaluation on the validation set or not. Will be set to True if eval_strategy is different from "no" . This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. do_predict ( bool , optional , defaults to False ) — Whether to run predictions on the test set or not. This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. eval_strategy ( str or IntervalStrategy , optional , defaults to "no" ) — The evaluation strategy to adopt during training. Possible values are: "no" : No evaluation is done during training. "steps" : Evaluation is done (and logged) every eval_steps . "epoch" : Evaluation is done at the end of each epoch. prediction_loss_only ( bool , optional , defaults to False ) — When performing evaluation and generating predictions, only returns the loss. per_device_train_batch_size ( int , optional , defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training. per_device_eval_batch_size ( int , optional , defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for evaluation. gradient_accumulation_steps ( int , optional , defaults to 1) — Number of updates steps to accumulate the gradients for, before performing a backward/update pass. When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every gradient_accumulation_steps * xxx_step training examples. eval_accumulation_steps ( int , optional ) — Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/NPU/TPU before being moved to the CPU (faster but requires more memory). eval_delay ( float , optional ) — Number of epochs or steps to wait for before the first evaluation can be performed, depending on the eval_strategy. torch_empty_cache_steps ( int , optional ) — Number of steps to wait before calling torch.<device>.empty_cache() . If left unset or set to None, cache will not be emptied. This can help avoid CUDA out-of-memory errors by lowering peak VRAM usage at a cost of about 10% slower performance . learning_rate ( float , optional , defaults to 5e-5) — The initial learning rate for AdamW optimizer. weight_decay ( float , optional , defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in AdamW optimizer. adam_beta1 ( float , optional , defaults to 0.9) — The beta1 hyperparameter for the AdamW optimizer. adam_beta2 ( float , optional , defaults to 0.999) — The beta2 hyperparameter for the AdamW optimizer. adam_epsilon ( float , optional , defaults to 1e-8) — The epsilon hyperparameter for the AdamW optimizer. max_grad_norm ( float , optional , defaults to 1.0) — Maximum gradient norm (for gradient clipping). num_train_epochs( float , optional , defaults to 3.0) — Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training). 
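For example, a hedged sketch combining several of the optimization-related arguments described above (all values are illustrative, not recommendations):

Copied
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                 # placeholder
    eval_strategy="epoch",            # evaluate at the end of each epoch
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,    # effective batch size per device: 8 * 4 = 32
    learning_rate=5e-5,
    weight_decay=0.01,
    max_grad_norm=1.0,
    num_train_epochs=3,
)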
max_steps ( int , optional , defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs . For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until max_steps is reached. lr_scheduler_type ( str or SchedulerType , optional , defaults to "linear" ) — The scheduler type to use. See the documentation of SchedulerType for all possible values. lr_scheduler_kwargs (‘dict’, optional , defaults to {}) — The extra arguments for the lr_scheduler. See the documentation of each scheduler for possible values. warmup_ratio ( float , optional , defaults to 0.0) — Ratio of total training steps used for a linear warmup from 0 to learning_rate . warmup_steps ( int , optional , defaults to 0) — Number of steps used for a linear warmup from 0 to learning_rate . Overrides any effect of warmup_ratio . log_level ( str , optional , defaults to passive ) — Logger log level to use on the main process. Possible choices are the log levels as strings: ‘debug’, ‘info’, ‘warning’, ‘error’ and ‘critical’, plus a ‘passive’ level which doesn’t set anything and keeps the current log level for the Transformers library (which will be "warning" by default). log_level_replica ( str , optional , defaults to "warning" ) — Logger log level to use on replicas. Same choices as log_level ” log_on_each_node ( bool , optional , defaults to True ) — In multinode distributed training, whether to log using log_level once per node, or only on the main node. logging_dir ( str , optional ) — TensorBoard log directory. Will default to *output_dir/runs/ CURRENT_DATETIME_HOSTNAME* . logging_strategy ( str or IntervalStrategy , optional , defaults to "steps" ) — The logging strategy to adopt during training. Possible values are: "no" : No logging is done during training. "epoch" : Logging is done at the end of each epoch. "steps" : Logging is done every logging_steps . logging_first_step ( bool , optional , defaults to False ) — Whether to log the first global_step or not. logging_steps ( int or float , optional , defaults to 500) — Number of update steps between two logs if logging_strategy="steps" . Should be an integer or a float in range [0,1) . If smaller than 1, will be interpreted as ratio of total training steps. logging_nan_inf_filter ( bool , optional , defaults to True ) — Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead. logging_nan_inf_filter only influences the logging of loss values, it does not change the behavior the gradient is computed or applied to the model. save_strategy ( str or SaveStrategy , optional , defaults to "steps" ) — The checkpoint save strategy to adopt during training. Possible values are: "no" : No save is done during training. "epoch" : Save is done at the end of each epoch. "steps" : Save is done every save_steps . "best" : Save is done whenever a new best_metric is achieved. If "epoch" or "steps" is chosen, saving will also be performed at the very end of training, always. save_steps ( int or float , optional , defaults to 500) — Number of updates steps before two checkpoint saves if save_strategy="steps" . Should be an integer or a float in range [0,1) . If smaller than 1, will be interpreted as ratio of total training steps. save_total_limit ( int , optional ) — If a value is passed, will limit the total amount of checkpoints. 
Deletes the older checkpoints in output_dir . When load_best_model_at_end is enabled, the “best” checkpoint according to metric_for_best_model will always be retained in addition to the most recent ones. For example, for save_total_limit=5 and load_best_model_at_end , the four last checkpoints will always be retained alongside the best model. When save_total_limit=1 and load_best_model_at_end , it is possible that two checkpoints are saved: the last one and the best one (if they are different). save_safetensors ( bool , optional , defaults to True ) — Use safetensors saving and loading for state dicts instead of default torch.load and torch.save . save_on_each_node ( bool , optional , defaults to False ) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one. This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node. save_only_model ( bool , optional , defaults to False ) — When checkpointing, whether to only save the model, or also the optimizer, scheduler & rng state. Note that when this is true, you won’t be able to resume training from checkpoint. This enables you to save storage by not storing the optimizer, scheduler & rng state. You can only load the model using from_pretrained with this option set to True . restore_callback_states_from_checkpoint ( bool , optional , defaults to False ) — Whether to restore the callback states from the checkpoint. If True , will override callbacks passed to the Trainer if they exist in the checkpoint.” use_cpu ( bool , optional , defaults to False ) — Whether or not to use cpu. If set to False, we will use cuda or mps device if available. seed ( int , optional , defaults to 42) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the ~Trainer.model_init function to instantiate the model if it has some randomly initialized parameters. data_seed ( int , optional ) — Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as seed . This can be used to ensure reproducibility of data sampling, independent of the model seed. jit_mode_eval ( bool , optional , defaults to False ) — Whether or not to use PyTorch jit trace for inference. use_ipex ( bool , optional , defaults to False ) — Use Intel extension for PyTorch when it is available. IPEX installation . bf16 ( bool , optional , defaults to False ) — Whether to use bf16 16-bit (mixed) precision training instead of 32-bit training. Requires Ampere or higher NVIDIA architecture or using CPU (use_cpu) or Ascend NPU. This is an experimental API and it may change. fp16 ( bool , optional , defaults to False ) — Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training. fp16_opt_level ( str , optional , defaults to ‘O1’) — For fp16 training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation . fp16_backend ( str , optional , defaults to "auto" ) — This argument is deprecated. Use half_precision_backend instead. half_precision_backend ( str , optional , defaults to "auto" ) — The backend to use for mixed precision training. Must be one of "auto", "apex", "cpu_amp" . "auto" will use CPU/CUDA AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend. 
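A short, hedged sketch of the precision and reproducibility options above; whether bf16 is actually usable depends on your hardware, so treat the flags as assumptions to verify:

Copied
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",        # placeholder
    seed=42,                 # set at the beginning of training
    data_seed=123,           # decouples data sampling from the model seed
    bf16=True,               # assumes Ampere+ NVIDIA GPU, Ascend NPU, or CPU (use_cpu) support
    save_safetensors=True,   # the default: save state dicts as safetensors
    save_only_model=False,   # also keep optimizer/scheduler/rng state so training can resume
)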
bf16_full_eval ( bool , optional , defaults to False ) — Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values. This is an experimental API and it may change. fp16_full_eval ( bool , optional , defaults to False ) — Whether to use full float16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values. tf32 ( bool , optional ) — Whether to enable the TF32 mode, available in Ampere and newer GPU architectures. The default value depends on PyTorch’s version default of torch.backends.cuda.matmul.allow_tf32 . For more details please refer to the TF32 documentation. This is an experimental API and it may change. local_rank ( int , optional , defaults to -1) — Rank of the process during distributed training. ddp_backend ( str , optional ) — The backend to use for distributed training. Must be one of "nccl" , "mpi" , "ccl" , "gloo" , "hccl" . tpu_num_cores ( int , optional ) — When training on TPU, the number of TPU cores (automatically passed by launcher script). dataloader_drop_last ( bool , optional , defaults to False ) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not. eval_steps ( int or float , optional ) — Number of update steps between two evaluations if eval_strategy="steps" . Will default to the same value as logging_steps if not set. Should be an integer or a float in range [0,1) . If smaller than 1, will be interpreted as ratio of total training steps. dataloader_num_workers ( int , optional , defaults to 0) — Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process. past_index ( int , optional , defaults to -1) — Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems . run_name ( str , optional , defaults to output_dir ) — A descriptor for the run. Typically used for wandb , mlflow and comet logging. If not specified, will be the same as output_dir . disable_tqdm ( bool , optional ) — Whether or not to disable the tqdm progress bars and table of metrics produced by ~notebook.NotebookTrainingTracker in Jupyter Notebooks. Will default to True if the logging level is set to warn or lower (default), False otherwise. remove_unused_columns ( bool , optional , defaults to True ) — Whether or not to automatically remove the columns unused by the model forward method. label_names ( List[str] , optional ) — The list of keys in your dictionary of inputs that correspond to the labels. Will eventually default to the list of argument names accepted by the model that contain the word “label”, except if the model used is one of the XxxForQuestionAnswering in which case it will also include the ["start_positions", "end_positions"] keys. load_best_model_at_end ( bool , optional , defaults to False ) — Whether or not to load the best model found during training at the end of training. When this option is enabled, the best checkpoint will always be saved. See save_total_limit for more. When set to True , the parameters save_strategy needs to be the same as eval_strategy , and in the case it is “steps”, save_steps must be a round multiple of eval_steps . 
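To make load_best_model_at_end behave as described, the evaluation and save strategies must line up with each other and with the metric_for_best_model option documented just below. The sketch is hedged: the "accuracy" name assumes your compute_metrics returns such a key.

Copied
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                  # placeholder
    eval_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,                    # must be a round multiple of eval_steps
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # assumes compute_metrics returns {"accuracy": ...}
    greater_is_better=True,            # "accuracy" does not end in "loss", so higher is better
    save_total_limit=2,                # the best checkpoint is kept in addition to this limit
)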
metric_for_best_model ( str , optional ) — Use in conjunction with load_best_model_at_end to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix "eval_" . If not specified, this will default to "loss" when either load_best_model_at_end == True or lr_scheduler_type == SchedulerType.REDUCE_ON_PLATEAU (to use the evaluation loss). If you set this value, greater_is_better will default to True unless the name ends with “loss”. Don’t forget to set it to False if your metric is better when lower. greater_is_better ( bool , optional ) — Use in conjunction with load_best_model_at_end and metric_for_best_model to specify if better models should have a greater metric or not. Will default to: True if metric_for_best_model is set to a value that doesn’t end in "loss" . False if metric_for_best_model is not set, or set to a value that ends in "loss" . ignore_data_skip ( bool , optional , defaults to False ) — When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True , the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have. fsdp ( bool , str or list of FSDPOption , optional , defaults to '' ) — Use PyTorch Distributed Parallel Training (in distributed training only). A list of options along the following: "full_shard" : Shard parameters, gradients and optimizer states. "shard_grad_op" : Shard optimizer states and gradients. "hybrid_shard" : Apply FULL_SHARD within a node, and replicate parameters across nodes. "hybrid_shard_zero2" : Apply SHARD_GRAD_OP within a node, and replicate parameters across nodes. "offload" : Offload parameters and gradients to CPUs (only compatible with "full_shard" and "shard_grad_op" ). "auto_wrap" : Automatically recursively wrap layers with FSDP using default_auto_wrap_policy . fsdp_config ( str or dict , optional ) — Config to be used with fsdp (Pytorch Distributed Parallel Training). The value is either a location of fsdp json config file (e.g., fsdp_config.json ) or an already loaded json file as dict . A List of config and its options: min_num_params ( int , optional , defaults to 0 ): FSDP’s minimum number of parameters for Default Auto Wrapping. (useful only when fsdp field is passed). transformer_layer_cls_to_wrap ( List[str] , optional ): List of transformer layer class names (case-sensitive) to wrap, e.g, BertLayer , GPTJBlock , T5Block … (useful only when fsdp flag is passed). backward_prefetch ( str , optional ) FSDP’s backward prefetch mode. Controls when to prefetch next set of parameters (useful only when fsdp field is passed). A list of options along the following: "backward_pre" : Prefetches the next set of parameters before the current set of parameter’s gradient computation. "backward_post" : This prefetches the next set of parameters after the current set of parameter’s gradient computation. forward_prefetch ( bool , optional , defaults to False ) FSDP’s forward prefetch mode (useful only when fsdp field is passed). If "True" , then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. limit_all_gathers ( bool , optional , defaults to False ) FSDP’s limit_all_gathers (useful only when fsdp field is passed). If "True" , FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers. 
use_orig_params ( bool , optional , defaults to True ) If "True" , allows non-uniform requires_grad during init, which means support for interspersed frozen and trainable parameters. Useful in cases such as parameter-efficient fine-tuning. Please refer to this blog post: https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019 . sync_module_states ( bool , optional , defaults to True ) If "True" , each individually wrapped FSDP unit will broadcast module parameters from rank 0 to ensure they are the same across all ranks after initialization. cpu_ram_efficient_loading ( bool , optional , defaults to False ) If "True" , only the first process loads the pretrained model checkpoint while all other processes have empty weights. When this setting is "True" , sync_module_states must also be "True" , otherwise all the processes except the main process would have random weights, leading to unexpected behavior during training. activation_checkpointing ( bool , optional , defaults to False ): If "True" , use activation checkpointing, a technique to reduce memory usage by clearing the activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage. xla ( bool , optional , defaults to False ): Whether to use PyTorch/XLA Fully Sharded Data Parallel Training. This is an experimental feature and its API may evolve in the future. xla_fsdp_settings ( dict , optional ) The value is a dictionary which stores the XLA FSDP wrapping parameters. For a complete list of options, please see here . xla_fsdp_grad_ckpt ( bool , optional , defaults to False ): Will use gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through fsdp_min_num_params or fsdp_transformer_layer_cls_to_wrap. deepspeed ( str or dict , optional ) — Use Deepspeed . This is an experimental feature and its API may evolve in the future. The value is either the location of a DeepSpeed json config file (e.g., ds_config.json ) or an already loaded json file as a dict . If enabling any ZeRO-init, make sure that your model is not initialized until after initializing the TrainingArguments , else it will not be applied. accelerator_config ( str , dict , or AcceleratorConfig , optional ) — Config to be used with the internal Accelerator implementation. The value is either a location of an accelerator json config file (e.g., accelerator_config.json ), an already loaded json file as dict , or an instance of AcceleratorConfig . A list of config options: split_batches ( bool , optional , defaults to False ): Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If True , the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the num_processes you are using. If False , the actual batch size used will be the one set in your script multiplied by the number of processes. dispatch_batches ( bool , optional ): If set to True , the dataloader prepared by the Accelerator is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to True for DataLoader whose underlying dataset is an IterableDataset , False otherwise.
even_batches ( bool , optional , defaults to True ): If set to True , in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. use_seedable_sampler ( bool , optional , defaults to True ): Whether or not to use a fully seedable random sampler ( accelerate.data_loader.SeedableRandomSampler ). Ensures training results are fully reproducible using a different sampling technique. While seed-to-seed results may differ, on average the differences are negligible when using multiple different seeds to compare. Should also be run with ~utils.set_seed for the best results. use_configured_state ( bool , optional , defaults to False ): Whether or not to use a pre-configured AcceleratorState or PartialState defined before calling TrainingArguments . If True , an Accelerator or PartialState must be initialized. Note that by doing so, this could lead to issues with hyperparameter tuning. label_smoothing_factor ( float , optional , defaults to 0.0) — The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying onehot-encoded labels are changed from 0s and 1s to label_smoothing_factor/num_labels and 1 - label_smoothing_factor + label_smoothing_factor/num_labels respectively. debug ( str or list of DebugOption , optional , defaults to "" ) — Enable one or more debug features. This is an experimental feature. Possible options are: "underflow_overflow" : detects overflow in the model's input/outputs and reports the last frames that led to the event. "tpu_metrics_debug" : print debug metrics on TPU. The options should be separated by whitespaces. optim ( str or training_args.OptimizerNames , optional , defaults to "adamw_torch" ) — The optimizer to use, such as "adamw_hf", "adamw_torch", "adamw_torch_fused", "adamw_apex_fused", "adamw_anyprecision", "adafactor". See OptimizerNames in training_args.py for a full list of optimizers. optim_args ( str , optional ) — Optional arguments that are supplied to optimizers such as AnyPrecisionAdamW, AdEMAMix, and GaLore. group_by_length ( bool , optional , defaults to False ) — Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding. length_column_name ( str , optional , defaults to "length" ) — Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unless group_by_length is True and the dataset is an instance of Dataset . report_to ( str or List[str] , optional , defaults to "all" ) — The list of integrations to report the results and logs to. Supported platforms are "azure_ml" , "clearml" , "codecarbon" , "comet_ml" , "dagshub" , "dvclive" , "flyte" , "mlflow" , "neptune" , "tensorboard" , and "wandb" . Use "all" to report to all integrations installed, "none" for no integrations. ddp_find_unused_parameters ( bool , optional ) — When using distributed training, the value of the flag find_unused_parameters passed to DistributedDataParallel . Will default to False if gradient checkpointing is used, True otherwise. ddp_bucket_cap_mb ( int , optional ) — When using distributed training, the value of the flag bucket_cap_mb passed to DistributedDataParallel .
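As a hedged example of the optimizer and reporting arguments above (it assumes the fused AdamW build and the listed logging integrations are available in your environment):

Copied
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                   # placeholder
    optim="adamw_torch_fused",          # one of the OptimizerNames values listed above
    label_smoothing_factor=0.1,
    group_by_length=True,               # only useful together with dynamic padding
    length_column_name="length",        # use precomputed lengths if the column exists
    report_to=["tensorboard", "wandb"], # or "none" to disable all integrations
)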
ddp_broadcast_buffers ( bool , optional ) — When using distributed training, the value of the flag broadcast_buffers passed to DistributedDataParallel . Will default to False if gradient checkpointing is used, True otherwise. dataloader_pin_memory ( bool , optional , defaults to True ) — Whether you want to pin memory in data loaders or not. Will default to True . dataloader_persistent_workers ( bool , optional , defaults to False ) — If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers' Dataset instances alive and can potentially speed up training, but will increase RAM usage. Will default to False . dataloader_prefetch_factor ( int , optional ) — Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers. skip_memory_metrics ( bool , optional , defaults to True ) — Whether to skip adding memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed. push_to_hub ( bool , optional , defaults to False ) — Whether or not to push the model to the Hub every time the model is saved. If this is activated, output_dir will become a git directory synced with the repo (determined by hub_model_id ) and the content will be pushed each time a save is triggered (depending on your save_strategy ). Calling save_model() will also trigger a push. If output_dir exists, it needs to be a local clone of the repository to which the Trainer will be pushed. resume_from_checkpoint ( str , optional ) — The path to a folder with a valid checkpoint for your model. This argument is not directly used by Trainer , it's intended to be used by your training/evaluation scripts instead. See the example scripts for more details. hub_model_id ( str , optional ) — The name of the repository to keep in sync with the local output_dir . It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model" , which allows you to push to an organization you are a member of with "organization_name/model" . Will default to user_name/output_dir_name with output_dir_name being the name of output_dir . hub_strategy ( str or HubStrategy , optional , defaults to "every_save" ) — Defines the scope of what is pushed to the Hub and when. Possible values are: "end" : push the model, its configuration, the processing class (e.g. tokenizer, if passed along to the Trainer ) and a draft of a model card when the save_model() method is called. "every_save" : push the model, its configuration, the processing class (e.g. tokenizer, if passed along to the Trainer ) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the saves are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training. "checkpoint" : like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint") .
"all_checkpoints" : like "checkpoint" but all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository). hub_token ( str , optional ) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login . hub_private_repo ( bool , optional ) — Whether to make the repo private. If None (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists. hub_always_push ( bool , optional , defaults to False ) — Unless this is True , the Trainer will skip pushing a checkpoint when the previous push is not finished. gradient_checkpointing ( bool , optional , defaults to False ) — If True, use gradient checkpointing to save memory at the expense of a slower backward pass. gradient_checkpointing_kwargs ( dict , optional , defaults to None ) — Keyword arguments to be passed to the gradient_checkpointing_enable method. include_inputs_for_metrics ( bool , optional , defaults to False ) — This argument is deprecated. Use include_for_metrics instead, e.g., include_for_metrics = ["inputs"] . include_for_metrics ( List[str] , optional , defaults to [] ) — Include additional data in the compute_metrics function if needed for metrics computation. Possible options to add to the include_for_metrics list: "inputs" : Input data passed to the model, intended for calculating input-dependent metrics. "loss" : Loss values computed during evaluation, intended for calculating loss-dependent metrics. eval_do_concat_batches ( bool , optional , defaults to True ) — Whether to recursively concat inputs/losses/labels/predictions across batches. If False , will instead store them as lists, with each batch kept separate. auto_find_batch_size ( bool , optional , defaults to False ) — Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed ( pip install accelerate ). full_determinism ( bool , optional , defaults to False ) — If True , enable_full_determinism() is called instead of set_seed() to ensure reproducible results in distributed training. Important: this will negatively impact the performance, so only use it for debugging. torchdynamo ( str , optional ) — If set, the backend compiler for TorchDynamo. Possible choices are "eager" , "aot_eager" , "inductor" , "nvfuser" , "aot_nvfuser" , "aot_cudagraphs" , "ofi" , "fx2trt" , "onnxrt" and "ipex" . ray_scope ( str , optional , defaults to "last" ) — The scope to use when doing hyperparameter search with Ray. By default, "last" will be used. Ray will then use the last checkpoint of all trials, compare those, and select the best one. However, other options are also available. See the Ray documentation for more options. ddp_timeout ( int , optional , defaults to 1800) — The timeout for torch.distributed.init_process_group calls, used to avoid GPU socket timeouts when performing slow operations in distributed runs. Please refer to the PyTorch documentation ( https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group ) for more information. use_mps_device ( bool , optional , defaults to False ) — This argument is deprecated. The mps device will be used if it is available, similarly to the cuda device. torch_compile ( bool , optional , defaults to False ) — Whether or not to compile the model using PyTorch 2.0 torch.compile .
This will use the best defaults for the torch.compile API . You can customize the defaults with the arguments torch_compile_backend and torch_compile_mode , but we don't guarantee any of them will work as the support is progressively rolled out in PyTorch. This flag and the whole compile API are experimental and subject to change in future releases. torch_compile_backend ( str , optional ) — The backend to use in torch.compile . If set to any value, torch_compile will be set to True . Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions. This flag is experimental and subject to change in future releases. torch_compile_mode ( str , optional ) — The mode to use in torch.compile . If set to any value, torch_compile will be set to True . Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions. This flag is experimental and subject to change in future releases. split_batches ( bool , optional ) — Whether or not the accelerator should split the batches yielded by the dataloaders across the devices during distributed training. If set to True , the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the number of processes you are using (such as GPUs). include_tokens_per_second ( bool , optional ) — Whether or not to compute the number of tokens per second per device for training speed metrics. This will iterate over the entire training dataloader once beforehand, and will slow down the entire process. include_num_input_tokens_seen ( bool , optional ) — Whether or not to track the number of input tokens seen throughout training. May be slower in distributed training as gather operations must be called. neftune_noise_alpha ( Optional[float] ) — If not None , this will activate NEFTune noise embeddings. This can drastically improve model performance for instruction fine-tuning. Check out the original paper and the original code . Supports transformers PreTrainedModel and also PeftModel from peft. The original paper used values in the range [5.0, 15.0]. optim_target_modules ( Union[str, List[str]] , optional ) — The target modules to optimize, i.e. the module names that you would like to train. Right now this is used only for the GaLore algorithm ( https://arxiv.org/abs/2403.03507 ); see https://github.com/jiaweizzhao/GaLore for more details. You need to make sure to pass a valid GaLore optimizer, e.g. one of: "galore_adamw", "galore_adamw_8bit", "galore_adafactor", and make sure that the target modules are nn.Linear modules only. batch_eval_metrics ( Optional[bool] , defaults to False ) — If set to True , evaluation will call compute_metrics at the end of each batch to accumulate statistics rather than saving all eval logits in memory. When set to True , you must pass a compute_metrics function that takes a boolean argument compute_result , which when passed True , will trigger the final global summary statistics from the batch-level summary statistics you've accumulated over the evaluation set. eval_on_start ( bool , optional , defaults to False ) — Whether to perform an evaluation step (sanity check) before training to ensure the validation step works correctly. eval_use_gather_object ( bool , optional , defaults to False ) — Whether to recursively gather objects in a nested list/tuple/dictionary of objects from all devices. This should only be enabled if users are not just returning tensors, and it is actively discouraged by PyTorch.
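As a minimal sketch of the compilation and NEFTune options just described (the directory and values are illustrative; the NEFTune alpha is simply taken from the range reported in the paper):

>>> from transformers import TrainingArguments
>>> args = TrainingArguments(
...     "working_dir",             # hypothetical output directory
...     torch_compile=True,        # compile the model with the default torch.compile backend/mode
...     neftune_noise_alpha=5.0,   # activate NEFTune noise embeddings during training
... )
>>> args.torch_compile
True

Setting torch_compile_backend or torch_compile_mode explicitly would also flip torch_compile to True, as noted above.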
use_liger_kernel ( bool , optional , defaults to False ) — Whether to enable the Liger Kernel for LLM model training. It can effectively increase multi-GPU training throughput by ~20% and reduce memory usage by ~60%, and works out of the box with flash attention, PyTorch FSDP, and Microsoft DeepSpeed. Currently, it supports llama, mistral, mixtral and gemma models. TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. get_process_log_level < source > ( ) Returns the log level to be used depending on whether this process is the main process of node 0, main process of node non-0, or a non-main process. For the main process the log level defaults to the logging level set ( logging.WARNING if you didn't do anything) unless overridden by the log_level argument. For the replica processes the log level defaults to logging.WARNING unless overridden by the log_level_replica argument. The choice between the main and replica process settings is made according to the return value of should_log . get_warmup_steps < source > ( num_training_steps : int ) Get number of steps used for a linear warmup. main_process_first < source > ( local = True desc = 'work' ) Parameters local ( bool , optional , defaults to True ) — if True , first means process of rank 0 of each node; if False , first means process of rank 0 of node rank 0. In a multi-node environment with a shared filesystem you most likely will want to use local=False so that only the main process of the first node will do the processing. If, however, the filesystem is not shared, then the main process of each node will need to do the processing, which is the default behavior. desc ( str , optional , defaults to "work" ) — a work description to be used in debug logs A context manager for a torch distributed environment where one needs to do something on the main process while blocking replicas, releasing the replicas once it is finished. One such use is the datasets map feature which, to be efficient, should be run once on the main process; upon completion it saves a cached version of the results which then automatically gets loaded by the replicas. set_dataloader < source > ( train_batch_size : int = 8 eval_batch_size : int = 8 drop_last : bool = False num_workers : int = 0 pin_memory : bool = True persistent_workers : bool = False prefetch_factor : typing.Optional[int] = None auto_find_batch_size : bool = False ignore_data_skip : bool = False sampler_seed : typing.Optional[int] = None ) Parameters drop_last ( bool , optional , defaults to False ) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not. num_workers ( int , optional , defaults to 0) — Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process. pin_memory ( bool , optional , defaults to True ) — Whether you want to pin memory in data loaders or not. Will default to True . persistent_workers ( bool , optional , defaults to False ) — If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers' Dataset instances alive and can potentially speed up training, but will increase RAM usage. Will default to False . prefetch_factor ( int , optional ) — Number of batches loaded in advance by each worker.
2 means there will be a total of 2 * num_workers batches prefetched across all workers. auto_find_batch_size ( bool , optional , defaults to False ) — Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed ( pip install accelerate ) ignore_data_skip ( bool , optional , defaults to False ) — When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True , the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have. sampler_seed ( int , optional ) — Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as self.seed . This can be used to ensure reproducibility of data sampling, independent of the model seed. A method that regroups all arguments linked to the dataloaders creation. Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_dataloader(train_batch_size= 16 , eval_batch_size= 64 ) >>> args.per_device_train_batch_size 16 set_evaluate < source > ( strategy : typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'no' steps : int = 500 batch_size : int = 8 accumulation_steps : typing.Optional[int] = None delay : typing.Optional[float] = None loss_only : bool = False jit_mode : bool = False ) Parameters strategy ( str or IntervalStrategy , optional , defaults to "no" ) — The evaluation strategy to adopt during training. Possible values are: "no" : No evaluation is done during training. "steps" : Evaluation is done (and logged) every steps . "epoch" : Evaluation is done at the end of each epoch. Setting a strategy different from "no" will set self.do_eval to True . steps ( int , optional , defaults to 500) — Number of update steps between two evaluations if strategy="steps" . batch_size ( int optional , defaults to 8) — The batch size per device (GPU/TPU core/CPU…) used for evaluation. accumulation_steps ( int , optional ) — Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory). delay ( float , optional ) — Number of epochs or steps to wait for before the first evaluation can be performed, depending on the eval_strategy. loss_only ( bool , optional , defaults to False ) — Ignores all outputs except the loss. jit_mode ( bool , optional ) — Whether or not to use PyTorch jit trace for inference. A method that regroups all arguments linked to evaluation. Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_evaluate(strategy= "steps" , steps= 100 ) >>> args.eval_steps 100 set_logging < source > ( strategy : typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'steps' steps : int = 500 report_to : typing.Union[str, typing.List[str]] = 'none' level : str = 'passive' first_step : bool = False nan_inf_filter : bool = False on_each_node : bool = False replica_level : str = 'passive' ) Parameters strategy ( str or IntervalStrategy , optional , defaults to "steps" ) — The logging strategy to adopt during training. Possible values are: "no" : No logging is done during training. 
"epoch" : Logging is done at the end of each epoch. "steps" : Logging is done every logging_steps . steps ( int , optional , defaults to 500) — Number of update steps between two logs if strategy="steps" . level ( str , optional , defaults to "passive" ) — Logger log level to use on the main process. Possible choices are the log levels as strings: "debug" , "info" , "warning" , "error" and "critical" , plus a "passive" level which doesn’t set anything and lets the application set the level. report_to ( str or List[str] , optional , defaults to "all" ) — The list of integrations to report the results and logs to. Supported platforms are "azure_ml" , "clearml" , "codecarbon" , "comet_ml" , "dagshub" , "dvclive" , "flyte" , "mlflow" , "neptune" , "tensorboard" , and "wandb" . Use "all" to report to all integrations installed, "none" for no integrations. first_step ( bool , optional , defaults to False ) — Whether to log and evaluate the first global_step or not. nan_inf_filter ( bool , optional , defaults to True ) — Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead. nan_inf_filter only influences the logging of loss values, it does not change the behavior the gradient is computed or applied to the model. on_each_node ( bool , optional , defaults to True ) — In multinode distributed training, whether to log using log_level once per node, or only on the main node. replica_level ( str , optional , defaults to "passive" ) — Logger log level to use on replicas. Same choices as log_level A method that regroups all arguments linked to logging. Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_logging(strategy= "steps" , steps= 100 ) >>> args.logging_steps 100 set_lr_scheduler < source > ( name : typing.Union[str, transformers.trainer_utils.SchedulerType] = 'linear' num_epochs : float = 3.0 max_steps : int = -1 warmup_ratio : float = 0 warmup_steps : int = 0 ) Parameters name ( str or SchedulerType , optional , defaults to "linear" ) — The scheduler type to use. See the documentation of SchedulerType for all possible values. num_epochs( float , optional , defaults to 3.0) — Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training). max_steps ( int , optional , defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs . For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until max_steps is reached. warmup_ratio ( float , optional , defaults to 0.0) — Ratio of total training steps used for a linear warmup from 0 to learning_rate . warmup_steps ( int , optional , defaults to 0) — Number of steps used for a linear warmup from 0 to learning_rate . Overrides any effect of warmup_ratio . A method that regroups all arguments linked to the learning rate scheduler and its hyperparameters. 
Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_lr_scheduler(name= "cosine" , warmup_ratio= 0.05 ) >>> args.warmup_ratio 0.05 set_optimizer < source > ( name : typing.Union[str, transformers.training_args.OptimizerNames] = 'adamw_torch' learning_rate : float = 5e-05 weight_decay : float = 0 beta1 : float = 0.9 beta2 : float = 0.999 epsilon : float = 1e-08 args : typing.Optional[str] = None ) Parameters name ( str or training_args.OptimizerNames , optional , defaults to "adamw_torch" ) — The optimizer to use: "adamw_hf" , "adamw_torch" , "adamw_torch_fused" , "adamw_apex_fused" , "adamw_anyprecision" or "adafactor" . learning_rate ( float , optional , defaults to 5e-5) — The initial learning rate. weight_decay ( float , optional , defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights. beta1 ( float , optional , defaults to 0.9) — The beta1 hyperparameter for the adam optimizer or its variants. beta2 ( float , optional , defaults to 0.999) — The beta2 hyperparameter for the adam optimizer or its variants. epsilon ( float , optional , defaults to 1e-8) — The epsilon hyperparameter for the adam optimizer or its variants. args ( str , optional ) — Optional arguments that are supplied to AnyPrecisionAdamW (only useful when optim="adamw_anyprecision" ). A method that regroups all arguments linked to the optimizer and its hyperparameters. Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_optimizer(name= "adamw_torch" , beta1= 0.8 ) >>> args.optim 'adamw_torch' set_push_to_hub < source > ( model_id : str strategy : typing.Union[str, transformers.trainer_utils.HubStrategy] = 'every_save' token : typing.Optional[str] = None private_repo : typing.Optional[bool] = None always_push : bool = False ) Parameters model_id ( str ) — The name of the repository to keep in sync with the local output_dir . It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model" , which allows you to push to an organization you are a member of with "organization_name/model" . strategy ( str or HubStrategy , optional , defaults to "every_save" ) — Defines the scope of what is pushed to the Hub and when. Possible values are: "end" : push the model, its configuration, the processing_class e.g. tokenizer (if passed along to the Trainer ) and a draft of a model card when the save_model() method is called. "every_save" : push the model, its configuration, the processing_class e.g. tokenizer (if passed along to the Trainer ) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the save are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training. "checkpoint" : like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint") . "all_checkpoints" : like "checkpoint" but all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository) token ( str , optional ) — The token to use to push the model to the Hub. 
Will default to the token in the cache folder obtained with huggingface-cli login . private_repo ( bool , optional , defaults to False ) — Whether to make the repo private. If None (default), the repo will be public unless the organization’s default is private. This value is ignored if the repo already exists. always_push ( bool , optional , defaults to False ) — Unless this is True , the Trainer will skip pushing a checkpoint when the previous push is not finished. A method that regroups all arguments linked to synchronizing checkpoints with the Hub. Calling this method will set self.push_to_hub to True , which means the output_dir will begin a git directory synced with the repo (determined by model_id ) and the content will be pushed each time a save is triggered (depending on your self.save_strategy ). Calling save_model() will also trigger a push. Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_push_to_hub( "me/awesome-model" ) >>> args.hub_model_id 'me/awesome-model' set_save < source > ( strategy : typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'steps' steps : int = 500 total_limit : typing.Optional[int] = None on_each_node : bool = False ) Parameters strategy ( str or IntervalStrategy , optional , defaults to "steps" ) — The checkpoint save strategy to adopt during training. Possible values are: "no" : No save is done during training. "epoch" : Save is done at the end of each epoch. "steps" : Save is done every save_steps . steps ( int , optional , defaults to 500) — Number of updates steps before two checkpoint saves if strategy="steps" . total_limit ( int , optional ) — If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir . on_each_node ( bool , optional , defaults to False ) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one. This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node. A method that regroups all arguments linked to checkpoint saving. Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_save(strategy= "steps" , steps= 100 ) >>> args.save_steps 100 set_testing < source > ( batch_size : int = 8 loss_only : bool = False jit_mode : bool = False ) Parameters batch_size ( int optional , defaults to 8) — The batch size per device (GPU/TPU core/CPU…) used for testing. loss_only ( bool , optional , defaults to False ) — Ignores all outputs except the loss. jit_mode ( bool , optional ) — Whether or not to use PyTorch jit trace for inference. A method that regroups all basic arguments linked to testing on a held-out dataset. Calling this method will automatically set self.do_predict to True . Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_testing(batch_size= 32 ) >>> args.per_device_eval_batch_size 32 set_training < source > ( learning_rate : float = 5e-05 batch_size : int = 8 weight_decay : float = 0 num_epochs : float = 3 max_steps : int = -1 gradient_accumulation_steps : int = 1 seed : int = 42 gradient_checkpointing : bool = False ) Parameters learning_rate ( float , optional , defaults to 5e-5) — The initial learning rate for the optimizer. 
batch_size ( int , optional , defaults to 8) — The batch size per device (GPU/TPU core/CPU…) used for training. weight_decay ( float , optional , defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in the optimizer. num_epochs ( float , optional , defaults to 3.0) — Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training). max_steps ( int , optional , defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_epochs . For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until max_steps is reached. gradient_accumulation_steps ( int , optional , defaults to 1) — Number of update steps to accumulate the gradients for, before performing a backward/update pass. When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every gradient_accumulation_steps * xxx_step training examples. seed ( int , optional , defaults to 42) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the ~Trainer.model_init function to instantiate the model if it has some randomly initialized parameters. gradient_checkpointing ( bool , optional , defaults to False ) — If True, use gradient checkpointing to save memory at the expense of a slower backward pass. A method that regroups all basic arguments linked to the training. Calling this method will automatically set self.do_train to True . Example: Copied >>> from transformers import TrainingArguments >>> args = TrainingArguments( "working_dir" ) >>> args = args.set_training(learning_rate= 1e-4 , batch_size= 32 ) >>> args.learning_rate 1e-4 to_dict < source > ( ) Serializes this instance while replacing Enum members by their values (for JSON serialization support). It obfuscates the token values by removing their value. to_json_string < source > ( ) Serializes this instance to a JSON string. to_sanitized_dict < source > ( ) Sanitized serialization to use with TensorBoard's hparams Seq2SeqTrainingArguments class transformers.
Seq2SeqTrainingArguments < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-05 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : 
typing.Optional[str] = 'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False sortish_sampler : bool = False predict_with_generate : bool = False generation_max_length : typing.Optional[int] = None generation_num_beams : typing.Optional[int] = None generation_config : typing.Union[str, pathlib.Path, transformers.generation.configuration_utils.GenerationConfig, NoneType] = None ) Parameters output_dir ( str ) — The output directory where the model predictions and checkpoints will be written. overwrite_output_dir ( bool , optional , defaults to False ) — If True , overwrite the content of the output directory. Use this to continue training if output_dir points to a checkpoint directory. do_train ( bool , optional , defaults to False ) — Whether to run training or not. This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. do_eval ( bool , optional ) — Whether to run evaluation on the validation set or not. Will be set to True if eval_strategy is different from "no" . This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. do_predict ( bool , optional , defaults to False ) — Whether to run predictions on the test set or not. 
This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. eval_strategy ( str or IntervalStrategy , optional , defaults to "no" ) — The evaluation strategy to adopt during training. Possible values are: "no" : No evaluation is done during training. "steps" : Evaluation is done (and logged) every eval_steps . "epoch" : Evaluation is done at the end of each epoch. prediction_loss_only ( bool , optional , defaults to False ) — When performing evaluation and generating predictions, only returns the loss. per_device_train_batch_size ( int , optional , defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for training. per_device_eval_batch_size ( int , optional , defaults to 8) — The batch size per GPU/XPU/TPU/MPS/NPU core/CPU for evaluation. gradient_accumulation_steps ( int , optional , defaults to 1) — Number of updates steps to accumulate the gradients for, before performing a backward/update pass. When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every gradient_accumulation_steps * xxx_step training examples. eval_accumulation_steps ( int , optional ) — Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/NPU/TPU before being moved to the CPU (faster but requires more memory). eval_delay ( float , optional ) — Number of epochs or steps to wait for before the first evaluation can be performed, depending on the eval_strategy. torch_empty_cache_steps ( int , optional ) — Number of steps to wait before calling torch.<device>.empty_cache() . If left unset or set to None, cache will not be emptied. This can help avoid CUDA out-of-memory errors by lowering peak VRAM usage at a cost of about 10% slower performance . learning_rate ( float , optional , defaults to 5e-5) — The initial learning rate for AdamW optimizer. weight_decay ( float , optional , defaults to 0) — The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in AdamW optimizer. adam_beta1 ( float , optional , defaults to 0.9) — The beta1 hyperparameter for the AdamW optimizer. adam_beta2 ( float , optional , defaults to 0.999) — The beta2 hyperparameter for the AdamW optimizer. adam_epsilon ( float , optional , defaults to 1e-8) — The epsilon hyperparameter for the AdamW optimizer. max_grad_norm ( float , optional , defaults to 1.0) — Maximum gradient norm (for gradient clipping). num_train_epochs( float , optional , defaults to 3.0) — Total number of training epochs to perform (if not an integer, will perform the decimal part percents of the last epoch before stopping training). max_steps ( int , optional , defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs . For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until max_steps is reached. lr_scheduler_type ( str or SchedulerType , optional , defaults to "linear" ) — The scheduler type to use. See the documentation of SchedulerType for all possible values. lr_scheduler_kwargs (‘dict’, optional , defaults to {}) — The extra arguments for the lr_scheduler. See the documentation of each scheduler for possible values. 
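To make the constructor above concrete, here is a minimal sketch of building a Seq2SeqTrainingArguments instance with some of the options documented so far; the directory and values are illustrative assumptions, not tuned recommendations:

>>> from transformers import Seq2SeqTrainingArguments
>>> args = Seq2SeqTrainingArguments(
...     "working_dir",               # hypothetical output directory
...     eval_strategy="epoch",       # evaluate at the end of every epoch
...     learning_rate=5e-5,
...     num_train_epochs=3,
...     lr_scheduler_type="cosine",
...     predict_with_generate=True,  # Seq2Seq-specific: use generate() to produce eval predictions
... )
>>> args.predict_with_generate
True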
warmup_ratio ( float , optional , defaults to 0.0) — Ratio of total training steps used for a linear warmup from 0 to learning_rate . warmup_steps ( int , optional , defaults to 0) — Number of steps used for a linear warmup from 0 to learning_rate . Overrides any effect of warmup_ratio . log_level ( str , optional , defaults to "passive" ) — Logger log level to use on the main process. Possible choices are the log levels as strings: "debug" , "info" , "warning" , "error" and "critical" , plus a "passive" level which doesn't set anything and keeps the current log level for the Transformers library (which will be "warning" by default). log_level_replica ( str , optional , defaults to "warning" ) — Logger log level to use on replicas. Same choices as log_level . log_on_each_node ( bool , optional , defaults to True ) — In multinode distributed training, whether to log using log_level once per node, or only on the main node. logging_dir ( str , optional ) — TensorBoard log directory. Will default to *output_dir/runs/ CURRENT_DATETIME_HOSTNAME* . logging_strategy ( str or IntervalStrategy , optional , defaults to "steps" ) — The logging strategy to adopt during training. Possible values are: "no" : No logging is done during training. "epoch" : Logging is done at the end of each epoch. "steps" : Logging is done every logging_steps . logging_first_step ( bool , optional , defaults to False ) — Whether to log the first global_step or not. logging_steps ( int or float , optional , defaults to 500) — Number of update steps between two logs if logging_strategy="steps" . Should be an integer or a float in range [0,1) . If smaller than 1, will be interpreted as a ratio of total training steps. logging_nan_inf_filter ( bool , optional , defaults to True ) — Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead. logging_nan_inf_filter only influences the logging of loss values; it does not change how the gradient is computed or applied to the model. save_strategy ( str or SaveStrategy , optional , defaults to "steps" ) — The checkpoint save strategy to adopt during training. Possible values are: "no" : No save is done during training. "epoch" : Save is done at the end of each epoch. "steps" : Save is done every save_steps . "best" : Save is done whenever a new best_metric is achieved. If "epoch" or "steps" is chosen, saving will also be performed at the very end of training, always. save_steps ( int or float , optional , defaults to 500) — Number of update steps between two checkpoint saves if save_strategy="steps" . Should be an integer or a float in range [0,1) . If smaller than 1, will be interpreted as a ratio of total training steps. save_total_limit ( int , optional ) — If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir . When load_best_model_at_end is enabled, the "best" checkpoint according to metric_for_best_model will always be retained in addition to the most recent ones. For example, for save_total_limit=5 and load_best_model_at_end , the four last checkpoints will always be retained alongside the best model. When save_total_limit=1 and load_best_model_at_end , it is possible that two checkpoints are saved: the last one and the best one (if they are different).
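A minimal sketch (illustrative values) of the logging and checkpointing options just described, using a ratio-valued logging_steps and a checkpoint limit:

>>> from transformers import Seq2SeqTrainingArguments
>>> args = Seq2SeqTrainingArguments(
...     "working_dir",            # hypothetical output directory
...     logging_strategy="steps",
...     logging_steps=0.05,       # a float < 1 is interpreted as a ratio of total training steps
...     save_strategy="epoch",    # save a checkpoint at the end of each epoch
...     save_total_limit=2,       # keep only the two most recent checkpoints in output_dir
... )
>>> args.save_total_limit
2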
save_safetensors ( bool , optional , defaults to True ) — Use safetensors saving and loading for state dicts instead of default torch.load and torch.save . save_on_each_node ( bool , optional , defaults to False ) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one. This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node. save_only_model ( bool , optional , defaults to False ) — When checkpointing, whether to only save the model, or also the optimizer, scheduler & rng state. Note that when this is true, you won’t be able to resume training from checkpoint. This enables you to save storage by not storing the optimizer, scheduler & rng state. You can only load the model using from_pretrained with this option set to True . restore_callback_states_from_checkpoint ( bool , optional , defaults to False ) — Whether to restore the callback states from the checkpoint. If True , will override callbacks passed to the Trainer if they exist in the checkpoint.” use_cpu ( bool , optional , defaults to False ) — Whether or not to use cpu. If set to False, we will use cuda or mps device if available. seed ( int , optional , defaults to 42) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the ~Trainer.model_init function to instantiate the model if it has some randomly initialized parameters. data_seed ( int , optional ) — Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as seed . This can be used to ensure reproducibility of data sampling, independent of the model seed. jit_mode_eval ( bool , optional , defaults to False ) — Whether or not to use PyTorch jit trace for inference. use_ipex ( bool , optional , defaults to False ) — Use Intel extension for PyTorch when it is available. IPEX installation . bf16 ( bool , optional , defaults to False ) — Whether to use bf16 16-bit (mixed) precision training instead of 32-bit training. Requires Ampere or higher NVIDIA architecture or using CPU (use_cpu) or Ascend NPU. This is an experimental API and it may change. fp16 ( bool , optional , defaults to False ) — Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training. fp16_opt_level ( str , optional , defaults to ‘O1’) — For fp16 training, Apex AMP optimization level selected in [‘O0’, ‘O1’, ‘O2’, and ‘O3’]. See details on the Apex documentation . fp16_backend ( str , optional , defaults to "auto" ) — This argument is deprecated. Use half_precision_backend instead. half_precision_backend ( str , optional , defaults to "auto" ) — The backend to use for mixed precision training. Must be one of "auto", "apex", "cpu_amp" . "auto" will use CPU/CUDA AMP or APEX depending on the PyTorch version detected, while the other choices will force the requested backend. bf16_full_eval ( bool , optional , defaults to False ) — Whether to use full bfloat16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values. This is an experimental API and it may change. fp16_full_eval ( bool , optional , defaults to False ) — Whether to use full float16 evaluation instead of 32-bit. This will be faster and save memory but can harm metric values. tf32 ( bool , optional ) — Whether to enable the TF32 mode, available in Ampere and newer GPU architectures. 
The default value depends on PyTorch’s version default of torch.backends.cuda.matmul.allow_tf32 . For more details please refer to the TF32 documentation. This is an experimental API and it may change. local_rank ( int , optional , defaults to -1) — Rank of the process during distributed training. ddp_backend ( str , optional ) — The backend to use for distributed training. Must be one of "nccl" , "mpi" , "ccl" , "gloo" , "hccl" . tpu_num_cores ( int , optional ) — When training on TPU, the number of TPU cores (automatically passed by launcher script). dataloader_drop_last ( bool , optional , defaults to False ) — Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not. eval_steps ( int or float , optional ) — Number of update steps between two evaluations if eval_strategy="steps" . Will default to the same value as logging_steps if not set. Should be an integer or a float in range [0,1) . If smaller than 1, will be interpreted as ratio of total training steps. dataloader_num_workers ( int , optional , defaults to 0) — Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process. past_index ( int , optional , defaults to -1) — Some models like TransformerXL or XLNet can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems . run_name ( str , optional , defaults to output_dir ) — A descriptor for the run. Typically used for wandb , mlflow and comet logging. If not specified, will be the same as output_dir . disable_tqdm ( bool , optional ) — Whether or not to disable the tqdm progress bars and table of metrics produced by ~notebook.NotebookTrainingTracker in Jupyter Notebooks. Will default to True if the logging level is set to warn or lower (default), False otherwise. remove_unused_columns ( bool , optional , defaults to True ) — Whether or not to automatically remove the columns unused by the model forward method. label_names ( List[str] , optional ) — The list of keys in your dictionary of inputs that correspond to the labels. Will eventually default to the list of argument names accepted by the model that contain the word “label”, except if the model used is one of the XxxForQuestionAnswering in which case it will also include the ["start_positions", "end_positions"] keys. load_best_model_at_end ( bool , optional , defaults to False ) — Whether or not to load the best model found during training at the end of training. When this option is enabled, the best checkpoint will always be saved. See save_total_limit for more. When set to True , the parameters save_strategy needs to be the same as eval_strategy , and in the case it is “steps”, save_steps must be a round multiple of eval_steps . metric_for_best_model ( str , optional ) — Use in conjunction with load_best_model_at_end to specify the metric to use to compare two different models. Must be the name of a metric returned by the evaluation with or without the prefix "eval_" . If not specified, this will default to "loss" when either load_best_model_at_end == True or lr_scheduler_type == SchedulerType.REDUCE_ON_PLATEAU (to use the evaluation loss). If you set this value, greater_is_better will default to True unless the name ends with “loss”. Don’t forget to set it to False if your metric is better when lower. 
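The interaction between load_best_model_at_end , metric_for_best_model and the evaluation/save strategies described above can be sketched as follows; the metric name "bleu" is a hypothetical example of a key returned by your compute_metrics function:

>>> from transformers import Seq2SeqTrainingArguments
>>> args = Seq2SeqTrainingArguments(
...     "working_dir",                 # hypothetical output directory
...     eval_strategy="steps",
...     eval_steps=500,
...     save_strategy="steps",
...     save_steps=500,                # must line up with eval_steps when loading the best model
...     load_best_model_at_end=True,
...     metric_for_best_model="bleu",  # compared as "eval_bleu"; greater is treated as better
... )
>>> args.metric_for_best_model
'bleu'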
greater_is_better ( bool , optional ) — Use in conjunction with load_best_model_at_end and metric_for_best_model to specify if better models should have a greater metric or not. Will default to: True if metric_for_best_model is set to a value that doesn’t end in "loss" . False if metric_for_best_model is not set, or set to a value that ends in "loss" . ignore_data_skip ( bool , optional , defaults to False ) — When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True , the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have. fsdp ( bool , str or list of FSDPOption , optional , defaults to '' ) — Use PyTorch Distributed Parallel Training (in distributed training only). A list of options along the following: "full_shard" : Shard parameters, gradients and optimizer states. "shard_grad_op" : Shard optimizer states and gradients. "hybrid_shard" : Apply FULL_SHARD within a node, and replicate parameters across nodes. "hybrid_shard_zero2" : Apply SHARD_GRAD_OP within a node, and replicate parameters across nodes. "offload" : Offload parameters and gradients to CPUs (only compatible with "full_shard" and "shard_grad_op" ). "auto_wrap" : Automatically recursively wrap layers with FSDP using default_auto_wrap_policy . fsdp_config ( str or dict , optional ) — Config to be used with fsdp (Pytorch Distributed Parallel Training). The value is either a location of fsdp json config file (e.g., fsdp_config.json ) or an already loaded json file as dict . A List of config and its options: min_num_params ( int , optional , defaults to 0 ): FSDP’s minimum number of parameters for Default Auto Wrapping. (useful only when fsdp field is passed). transformer_layer_cls_to_wrap ( List[str] , optional ): List of transformer layer class names (case-sensitive) to wrap, e.g, BertLayer , GPTJBlock , T5Block … (useful only when fsdp flag is passed). backward_prefetch ( str , optional ) FSDP’s backward prefetch mode. Controls when to prefetch next set of parameters (useful only when fsdp field is passed). A list of options along the following: "backward_pre" : Prefetches the next set of parameters before the current set of parameter’s gradient computation. "backward_post" : This prefetches the next set of parameters after the current set of parameter’s gradient computation. forward_prefetch ( bool , optional , defaults to False ) FSDP’s forward prefetch mode (useful only when fsdp field is passed). If "True" , then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. limit_all_gathers ( bool , optional , defaults to False ) FSDP’s limit_all_gathers (useful only when fsdp field is passed). If "True" , FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers. use_orig_params ( bool , optional , defaults to True ) If "True" , allows non-uniform requires_grad during init, which means support for interspersed frozen and trainable paramteres. Useful in cases such as parameter-efficient fine-tuning. 
Please refer this [blog]( https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019 sync_module_states ( bool , optional , defaults to True ) If "True" , each individually wrapped FSDP unit will broadcast module parameters from rank 0 to ensure they are the same across all ranks after initialization cpu_ram_efficient_loading ( bool , optional , defaults to False ) If "True" , only the first process loads the pretrained model checkpoint while all other processes have empty weights. When this setting as "True" , sync_module_states also must to be "True" , otherwise all the processes except the main process would have random weights leading to unexpected behaviour during training. activation_checkpointing ( bool , optional , defaults to False ): If "True" , activation checkpointing is a technique to reduce memory usage by clearing activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage. xla ( bool , optional , defaults to False ): Whether to use PyTorch/XLA Fully Sharded Data Parallel Training. This is an experimental feature and its API may evolve in the future. xla_fsdp_settings ( dict , optional ) The value is a dictionary which stores the XLA FSDP wrapping parameters. For a complete list of options, please see here . xla_fsdp_grad_ckpt ( bool , optional , defaults to False ): Will use gradient checkpointing over each nested XLA FSDP wrapped layer. This setting can only be used when the xla flag is set to true, and an auto wrapping policy is specified through fsdp_min_num_params or fsdp_transformer_layer_cls_to_wrap. deepspeed ( str or dict , optional ) — Use Deepspeed . This is an experimental feature and its API may evolve in the future. The value is either the location of DeepSpeed json config file (e.g., ds_config.json ) or an already loaded json file as a dict ” If enabling any Zero-init, make sure that your model is not initialized until *after* initializing the `TrainingArguments`, else it will not be applied. accelerator_config ( str , dict , or AcceleratorConfig , optional ) — Config to be used with the internal Accelerator implementation. The value is either a location of accelerator json config file (e.g., accelerator_config.json ), an already loaded json file as dict , or an instance of AcceleratorConfig . A list of config and its options: split_batches ( bool , optional , defaults to False ): Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If True the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the num_processes you are using. If False , actual batch size used will be the one set in your script multiplied by the number of processes. dispatch_batches ( bool , optional ): If set to True , the dataloader prepared by the Accelerator is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to True for DataLoader whose underlying dataset is an IterableDataset , False otherwise. even_batches ( bool , optional , defaults to True ): If set to True , in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. 
use_seedable_sampler ( bool , optional , defaults to True ): Whether or not to use a fully seedable random sampler ( accelerate.data_loader.SeedableRandomSampler ). Ensures training results are fully reproducible using a different sampling technique. While seed-to-seed results may differ, on average the differences are negligible when using multiple different seeds to compare. Should also be run with ~utils.set_seed for the best results. use_configured_state ( bool , optional , defaults to False ): Whether or not to use a pre-configured AcceleratorState or PartialState defined before calling TrainingArguments . If True , an Accelerator or PartialState must be initialized. Note that by doing so, this could lead to issues with hyperparameter tuning. label_smoothing_factor ( float , optional , defaults to 0.0) — The label smoothing factor to use. Zero means no label smoothing, otherwise the underlying one-hot encoded labels are changed from 0s and 1s to label_smoothing_factor/num_labels and 1 - label_smoothing_factor + label_smoothing_factor/num_labels respectively. debug ( str or list of DebugOption , optional , defaults to "" ) — Enable one or more debug features. This is an experimental feature. Possible options are: "underflow_overflow" : detects overflow in the model’s inputs/outputs and reports the last frames that led to the event. "tpu_metrics_debug" : print debug metrics on TPU. The options should be separated by whitespace. optim ( str or training_args.OptimizerNames , optional , defaults to "adamw_torch" ) — The optimizer to use, such as “adamw_hf”, “adamw_torch”, “adamw_torch_fused”, “adamw_apex_fused”, “adamw_anyprecision” or “adafactor”. See OptimizerNames in training_args.py for a full list of optimizers. optim_args ( str , optional ) — Optional arguments that are supplied to optimizers such as AnyPrecisionAdamW, AdEMAMix, and GaLore. group_by_length ( bool , optional , defaults to False ) — Whether or not to group together samples of roughly the same length in the training dataset (to minimize padding applied and be more efficient). Only useful if applying dynamic padding. length_column_name ( str , optional , defaults to "length" ) — Column name for precomputed lengths. If the column exists, grouping by length will use these values rather than computing them on train startup. Ignored unless group_by_length is True and the dataset is an instance of Dataset . report_to ( str or List[str] , optional , defaults to "all" ) — The list of integrations to report the results and logs to. Supported platforms are "azure_ml" , "clearml" , "codecarbon" , "comet_ml" , "dagshub" , "dvclive" , "flyte" , "mlflow" , "neptune" , "tensorboard" , and "wandb" . Use "all" to report to all integrations installed, "none" for no integrations. ddp_find_unused_parameters ( bool , optional ) — When using distributed training, the value of the flag find_unused_parameters passed to DistributedDataParallel . Will default to False if gradient checkpointing is used, True otherwise. ddp_bucket_cap_mb ( int , optional ) — When using distributed training, the value of the flag bucket_cap_mb passed to DistributedDataParallel . ddp_broadcast_buffers ( bool , optional ) — When using distributed training, the value of the flag broadcast_buffers passed to DistributedDataParallel . Will default to False if gradient checkpointing is used, True otherwise. dataloader_pin_memory ( bool , optional , defaults to True ) — Whether you want to pin memory in data loaders or not. Will default to True .
dataloader_persistent_workers ( bool , optional , defaults to False ) — If True , the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers’ Dataset instances alive. Can potentially speed up training, but will increase RAM usage. Will default to False . dataloader_prefetch_factor ( int , optional ) — Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers. skip_memory_metrics ( bool , optional , defaults to True ) — Whether to skip adding memory profiler reports to metrics. This is skipped by default because it slows down the training and evaluation speed. push_to_hub ( bool , optional , defaults to False ) — Whether or not to push the model to the Hub every time the model is saved. If this is activated, output_dir will become a git directory synced with the repo (determined by hub_model_id ) and the content will be pushed each time a save is triggered (depending on your save_strategy ). Calling save_model() will also trigger a push. If output_dir exists, it needs to be a local clone of the repository to which the Trainer will push. resume_from_checkpoint ( str , optional ) — The path to a folder with a valid checkpoint for your model. This argument is not directly used by Trainer , it’s intended to be used by your training/evaluation scripts instead. See the example scripts for more details. hub_model_id ( str , optional ) — The name of the repository to keep in sync with the local output_dir . It can be a simple model ID, in which case the model will be pushed to your namespace. Otherwise it should be the whole repository name, for instance "user_name/model" , which allows you to push to an organization you are a member of with "organization_name/model" . Will default to user_name/output_dir_name with output_dir_name being the name of output_dir . hub_strategy ( str or HubStrategy , optional , defaults to "every_save" ) — Defines the scope of what is pushed to the Hub and when. Possible values are: "end" : push the model, its configuration, the processing class (e.g. tokenizer, if passed along to the Trainer ) and a draft of a model card when the save_model() method is called. "every_save" : push the model, its configuration, the processing class (e.g. tokenizer, if passed along to the Trainer ) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the saves are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training. "checkpoint" : like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint") . "all_checkpoints" : like "checkpoint" but all checkpoints are pushed as they appear in the output folder (so you will get one checkpoint folder per folder in your final repository). hub_token ( str , optional ) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login . hub_private_repo ( bool , optional ) — Whether to make the repo private. If None (default), the repo will be public unless the organization’s default is private. This value is ignored if the repo already exists.
hub_always_push ( bool , optional , defaults to False ) — Unless this is True , the Trainer will skip pushing a checkpoint when the previous push is not finished. gradient_checkpointing ( bool , optional , defaults to False ) — If True, use gradient checkpointing to save memory at the expense of slower backward pass. gradient_checkpointing_kwargs ( dict , optional , defaults to None ) — Key word arguments to be passed to the gradient_checkpointing_enable method. include_inputs_for_metrics ( bool , optional , defaults to False ) — This argument is deprecated. Use include_for_metrics instead, e.g, include_for_metrics = ["inputs"] . include_for_metrics ( List[str] , optional , defaults to [] ) — Include additional data in the compute_metrics function if needed for metrics computation. Possible options to add to include_for_metrics list: "inputs" : Input data passed to the model, intended for calculating input dependent metrics. "loss" : Loss values computed during evaluation, intended for calculating loss dependent metrics. eval_do_concat_batches ( bool , optional , defaults to True ) — Whether to recursively concat inputs/losses/labels/predictions across batches. If False , will instead store them as lists, with each batch kept separate. auto_find_batch_size ( bool , optional , defaults to False ) — Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed ( pip install accelerate ) full_determinism ( bool , optional , defaults to False ) — If True , enable_full_determinism() is called instead of set_seed() to ensure reproducible results in distributed training. Important: this will negatively impact the performance, so only use it for debugging. torchdynamo ( str , optional ) — If set, the backend compiler for TorchDynamo. Possible choices are "eager" , "aot_eager" , "inductor" , "nvfuser" , "aot_nvfuser" , "aot_cudagraphs" , "ofi" , "fx2trt" , "onnxrt" and "ipex" . ray_scope ( str , optional , defaults to "last" ) — The scope to use when doing hyperparameter search with Ray. By default, "last" will be used. Ray will then use the last checkpoint of all trials, compare those, and select the best one. However, other options are also available. See the Ray documentation for more options. ddp_timeout ( int , optional , defaults to 1800) — The timeout for torch.distributed.init_process_group calls, used to avoid GPU socket timeouts when performing slow operations in distributed runnings. Please refer the [PyTorch documentation] ( https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group ) for more information. use_mps_device ( bool , optional , defaults to False ) — This argument is deprecated. mps device will be used if it is available similar to cuda device. torch_compile ( bool , optional , defaults to False ) — Whether or not to compile the model using PyTorch 2.0 torch.compile . This will use the best defaults for the torch.compile API . You can customize the defaults with the argument torch_compile_backend and torch_compile_mode but we don’t guarantee any of them will work as the support is progressively rolled in in PyTorch. This flag and the whole compile API is experimental and subject to change in future releases. torch_compile_backend ( str , optional ) — The backend to use in torch.compile . If set to any value, torch_compile will be set to True . Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions. 
This flag is experimental and subject to change in future releases. torch_compile_mode ( str , optional ) — The mode to use in torch.compile . If set to any value, torch_compile will be set to True . Refer to the PyTorch doc for possible values and note that they may change across PyTorch versions. This flag is experimental and subject to change in future releases. split_batches ( bool , optional ) — Whether or not the accelerator should split the batches yielded by the dataloaders across the devices during distributed training. If set to True , the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the number of processes you are using (such as GPUs). include_tokens_per_second ( bool , optional ) — Whether or not to compute the number of tokens per second per device for training speed metrics. This will iterate over the entire training dataloader once beforehand, and will slow down the entire process. include_num_input_tokens_seen ( bool , optional ) — Whether or not to track the number of input tokens seen throughout training. May be slower in distributed training as gather operations must be called. neftune_noise_alpha ( Optional[float] ) — If not None , this will activate NEFTune noise embeddings. This can drastically improve model performance for instruction fine-tuning. Check out the original paper and the original code . Supports transformers PreTrainedModel and also PeftModel from peft. The original paper used values in the range [5.0, 15.0]. optim_target_modules ( Union[str, List[str]] , optional ) — The target modules to optimize, i.e. the module names that you would like to train. Right now this is used only for the GaLore algorithm ( https://arxiv.org/abs/2403.03507 ); see https://github.com/jiaweizzhao/GaLore for more details. You need to make sure to pass a valid GaLore optimizer, e.g. one of: “galore_adamw”, “galore_adamw_8bit”, “galore_adafactor” and make sure that the target modules are nn.Linear modules only. batch_eval_metrics ( Optional[bool] , defaults to False ) — If set to True , evaluation will call compute_metrics at the end of each batch to accumulate statistics rather than saving all eval logits in memory. When set to True , you must pass a compute_metrics function that takes a boolean argument compute_result , which when passed True , will trigger the final global summary statistics from the batch-level summary statistics you’ve accumulated over the evaluation set. eval_on_start ( bool , optional , defaults to False ) — Whether to perform an evaluation step (sanity check) before training to ensure the validation steps work correctly. eval_use_gather_object ( bool , optional , defaults to False ) — Whether to recursively gather objects in a nested list/tuple/dictionary of objects from all devices. This should only be enabled if users are not just returning tensors, and it is actively discouraged by PyTorch. use_liger_kernel ( bool , optional , defaults to False ) — Whether to enable the Liger Kernel for LLM model training. It can effectively increase multi-GPU training throughput by ~20% and reduce memory usage by ~60%; it works out of the box with flash attention, PyTorch FSDP, and Microsoft DeepSpeed. Currently, it supports the llama, mistral, mixtral and gemma models. predict_with_generate ( bool , optional , defaults to False ) — Whether to use generate to calculate generative metrics (ROUGE, BLEU).
generation_max_length ( int , optional ) — The max_length to use on each evaluation loop when predict_with_generate=True . Will default to the max_length value of the model configuration. generation_num_beams ( int , optional ) — The num_beams to use on each evaluation loop when predict_with_generate=True . Will default to the num_beams value of the model configuration. generation_config ( str or Path or GenerationConfig , optional ) — Allows loading a GenerationConfig from the from_pretrained method. This can be either: a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co; a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/ ; or a GenerationConfig object. TrainingArguments is the subset of the arguments we use in our example scripts which relate to the training loop itself. Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. to_dict < source > ( ) Serializes this instance while replacing Enum members by their values and GenerationConfig by dictionaries (for JSON serialization support). It obfuscates the token values by removing them.
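Many of these arguments only show their effect in combination. As a rough illustration (not a prescription from this reference), the sketch below configures FSDP sharding, gradient checkpointing and Hub pushing in a single TrainingArguments object; the output directory and the BertLayer wrapping class are placeholder values to replace with your own.
Copied
from transformers import TrainingArguments

# Hypothetical configuration combining several of the arguments described above.
training_args = TrainingArguments(
    output_dir="my-finetuned-model",                               # placeholder local directory
    fsdp="full_shard auto_wrap",                                   # shard params/grads/optimizer states, auto-wrap layers
    fsdp_config={"transformer_layer_cls_to_wrap": ["BertLayer"]},  # placeholder layer class name
    gradient_checkpointing=True,                                   # trade extra compute for lower memory
    report_to=["tensorboard"],                                     # log only to TensorBoard
    push_to_hub=True,                                              # sync output_dir with a Hub repo
    hub_strategy="checkpoint",                                     # also push the latest checkpoint
)

# to_dict() replaces Enum members by their values, so fsdp is serialized as a list of strings.
print(training_args.to_dict()["fsdp"])
Whether a given combination is sensible depends on your model and hardware; for instance, fsdp only takes effect under a distributed launch (e.g. with accelerate launch or torchrun).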
Advanced_Security.txt
Advanced Security Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Advanced Security Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Single Sign-On (SSO) Audit Logs Storage Regions Dataset viewer for Private datasets Resource Groups (Access Control) Advanced Compute Options Advanced Security Tokens Management Analytics Network Security Gating Group Collections Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Advanced Security This feature is part of the Enterprise Hub . Enterprise Hub organizations can improve their security with advanced security controls for both members and repositories. Members Security Configure additional security settings to protect your organization: Two-Factor Authentication (2FA) : Require all organization members to enable 2FA for enhanced account security. User Approval : For organizations with a verified domain name, require admin approval for new users with matching email addresses. This adds a verified badge to your organization page. Repository Visibility Controls Manage the default visibility of repositories in your organization: Public by default : New repositories are created with public visibility Private by default : New repositories are created with private visibility. Note that changing this setting will not affect existing repositories. Private only : Enforce private visibility for all new repositories, with only organization admins able to change visibility settings These settings help organizations maintain control of their ownership while enabling collaboration when needed. < > Update on GitHub ← Advanced Compute Options Tokens Management → Advanced Security Members Security Repository Visibility Controls
Structure_your_repository.txt
Structure your repository Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Structure your repository Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Structure your repository To host and share your dataset, create a dataset repository on the Hugging Face Hub and upload your data files. This guide will show you how to structure your dataset repository when you upload it. A dataset with a supported structure and file format ( .txt , .csv , .parquet , .jsonl , .mp3 , .jpg , .zip etc.) are loaded automatically with load_dataset() , and it’ll have a dataset viewer on its dataset page on the Hub. Main use-case The simplest dataset structure has two files: train.csv and test.csv (this works with any supported file format). Your repository will also contain a README.md file, the dataset card displayed on your dataset page. Copied my_dataset_repository/ ├── README.md ├── train. csv └── test. csv In this simple case, you’ll get a dataset with two splits: train (containing examples from train.csv ) and test (containing examples from test.csv ). Define your splits and subsets in YAML Splits If you have multiple files and want to define which file goes into which split, you can use the YAML configs field at the top of your README.md. 
For example, given a repository like this one: Copied my_dataset_repository / ├── README .md ├── data .csv └── holdout.csv You can define your splits by adding the configs field in the YAML block at the top of your README.md: Copied --- configs: - config_name: default data_files: - split: train path: "data.csv" - split: test path: "holdout.csv" --- You can select multiple files per split using a list of paths: Copied my_dataset_repository/ ├── README.md ├── data/ │ ├── abc. csv │ └── def. csv └── holdout/ └── ghi. csv Copied --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- Or you can use glob patterns to automatically list all the files you need: Copied --- configs: - config_name: default data_files: - split: train path: "data/*.csv" - split: test path: "holdout/*.csv" --- Note that config_name field is required even if you have a single configuration. Configurations Your dataset might have several subsets of data that you want to be able to load separately. In that case you can define a list of configurations inside the configs field in YAML: Copied my_dataset_repository/ ├── README.md ├── main_data. csv └── additional_data. csv Copied --- configs: - config_name: main_data data_files: "main_data.csv" - config_name: additional_data data_files: "additional_data.csv" --- Each configuration is shown separately on the Hugging Face Hub, and can be loaded by passing its name as a second parameter: Copied from datasets import load_dataset main_data = load_dataset( "my_dataset_repository" , "main_data" ) additional_data = load_dataset( "my_dataset_repository" , "additional_data" ) Builder parameters Not only data_files , but other builder-specific parameters can be passed via YAML, allowing for more flexibility on how to load the data while not requiring any custom code. For example, define which separator to use in which configuration to load your csv files: Copied --- configs: - config_name: tab data_files: "main_data.csv" sep: "\t" - config_name: comma data_files: "additional_data.csv" sep: "," --- Refer to specific builders’ documentation to see what configuration parameters they have. You can set a default configuration using default: true , e.g. you can run main_data = load_dataset("my_dataset_repository") if you set Copied - config_name: main_data data_files: "main_data.csv" default: true Automatic splits detection If no YAML is provided, 🤗 Datasets searches for certain patterns in the dataset repository to automatically infer the dataset splits. There is an order to the patterns, beginning with the custom filename split format to treating all files as a single split if no pattern is found. Directory name Your data files may also be placed into different directories named train , test , and validation where each directory contains the data files for that split: Copied my_dataset_repository/ ├── README.md └── data/ ├── train/ │ └── bees. csv ├── test/ │ └── more_bees. csv └── validation/ └── even_more_bees. csv Filename splits If you don’t have any non-traditional splits, then you can place the split name anywhere in the data file and it is automatically inferred. The only rule is that the split name must be delimited by non-word characters, like test-file.csv for example instead of testfile.csv . Supported delimiters include underscores, dashes, spaces, dots, and numbers. 
For example, the following file names are all acceptable: train split: train.csv , my_train_file.csv , train1.csv validation split: validation.csv , my_validation_file.csv , validation1.csv test split: test.csv , my_test_file.csv , test1.csv Here is an example where all the files are placed into a directory named data : Copied my_dataset_repository/ ├── README.md └── data/ ├── train. csv ├── test. csv └── validation. csv Custom filename split If your dataset splits have custom names that aren’t train , test , or validation , then you can name your data files like data/<split_name>-xxxxx-of-xxxxx.csv . Here is an example with three splits, train , test , and random : Copied my_dataset_repository/ ├── README.md └── data/ ├── train -00000 -of -00003 .csv ├── train -00001 -of -00003 .csv ├── train -00002 -of -00003 .csv ├── test -00000 -of -00001 .csv ├── random -00000 -of -00003 .csv ├── random -00001 -of -00003 .csv └── random -00002 -of -00003 .csv Single split When 🤗 Datasets can’t find any of the above patterns, then it’ll treat all the files as a single train split. If your dataset splits aren’t loading as expected, it may be due to an incorrect pattern. Split name keywords There are several ways to name splits. Validation splits are sometimes called “dev”, and test splits may be referred to as “eval”. These other split names are also supported, and the following keywords are equivalent: train, training validation, valid, val, dev test, testing, eval, evaluation The structure below is a valid repository: Copied my_dataset_repository/ ├── README.md └── data/ ├── training. csv ├── eval . csv └── valid. csv Multiple files per split If one of your splits comprises several files, 🤗 Datasets can still infer whether it is the train, validation, and test split from the file name. For example, if your train and test splits span several files: Copied my_dataset_repository/ ├── README.md ├── train_0. csv ├── train_1. csv ├── train_2. csv ├── train_3. csv ├── test_0. csv └── test_1. csv Make sure all the files of your train set have train in their names (same for test and validation). Even if you add a prefix or suffix to train in the file name (like my_train_file_00001.csv for example), 🤗 Datasets can still infer the appropriate split. For convenience, you can also place your data files into different directories. In this case, the split name is inferred from the directory name. Copied my_dataset_repository/ ├── README.md └── data/ ├── train/ │ ├── shard_0. csv │ ├── shard_1. csv │ ├── shard_2. csv │ └── shard_3. csv └── test/ ├── shard_0. csv └── shard_1. csv < > Update on GitHub ← Create a dataset card Create a dataset loading script → Structure your repository Main use-case Define your splits and subsets in YAML Splits Configurations Builder parameters Automatic splits detection Directory name Filename splits Custom filename split Single split Split name keywords Multiple files per split
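Once the repository follows one of the structures above, the splits and configurations resolve directly in load_dataset() . The snippet below is a small sketch using a hypothetical username/my_dataset_repository repo; substitute your own repository name and, if any, your own configuration names.
Copied
from datasets import load_dataset

# Load individual splits inferred from the file/directory names or defined in the YAML configs.
train_ds = load_dataset("username/my_dataset_repository", split="train")
test_ds = load_dataset("username/my_dataset_repository", split="test")

# If the README.md defines several configurations, pass the config name as the second argument.
main_data = load_dataset("username/my_dataset_repository", "main_data", split="train")

print(train_ds)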
Interface__ZeroShotClassificationOutputValue.txt
Interface: ZeroShotClassificationOutputValue Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: ZeroShotClassificationOutputValue Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: ZeroShotClassificationOutputValue Properties labels • labels : string [] Defined in inference/src/tasks/nlp/zeroShotClassification.ts:24 scores • scores : number [] Defined in inference/src/tasks/nlp/zeroShotClassification.ts:25 sequence • sequence : string Defined in inference/src/tasks/nlp/zeroShotClassification.ts:26 < > Update on GitHub ← VisualQuestionAnsweringOutput ZeroShotImageClassificationOutputValue → Interface: Zero Shot Classification Output Value Properties labels Defined in scores Defined in sequence Defined in
Accessing_Private_Gated_Models.txt
Accessing Private/Gated Models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation Accessing Private/Gated Models Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Accessing Private/Gated Models Due to the possibility of leaking access tokens to users of your website or web application, we only support accessing private/gated models from server-side environments (e.g., Node.js) that have access to the process’ environment variables. Step 1: Generating a User Access Token User Access Tokens are the preferred way to authenticate an application to Hugging Face services. To generate an access token, navigate to the Access Tokens tab in your settings and click on the New token button. Choose a name for your token and click Generate a token (we recommend keeping the “Role” as read-only). You can then click the Copy button next to your newly-created token to copy it to your clipboard. To delete or refresh User Access Tokens, you can click the Manage button. Step 2: Using the access token in Transformers.js Transformers.js will attach an Authorization header to requests made to the Hugging Face Hub when the HF_TOKEN environment variable is set and visible to the process. One way to do this is to call your program with the environment variable set. For example, let’s say you have a file called llama.js with the following code: Copied import { AutoTokenizer } from '@huggingface/transformers' ; // Load tokenizer for a gated repository. const tokenizer = await AutoTokenizer . from_pretrained ( 'meta-llama/Llama-2-7b-hf' ); // Encode text. const text = 'Hello world!' ; const encoded = tokenizer. encode (text); console . log (encoded); You can then use the following command to set the HF_TOKEN environment variable and run the file: Copied HF_TOKEN=hf_... node tests/llama.js (remember to replace hf_... with your actual access token). If done correctly, you should see the following output: Copied [ 1, 15043, 3186, 29991 ] Alternatively, you can set the environment variable directly in your code: Copied // Set access token (NB: Keep this private!) process. env . HF_TOKEN = 'hf_...' ; // ... 
rest of your code

Distributed_training_with_🤗_Accelerate.txt
Distributed training with 🤗 Accelerate Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Distributed training with 🤗 Accelerate Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Distributed training with 🤗 Accelerate As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the 🤗 Accelerate library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPU’s on one machine or multiple GPU’s across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment. Setup Get started by installing 🤗 Accelerate: Copied pip install accelerate Then import and create an Accelerator object. The Accelerator will automatically detect your type of distributed setup and initialize all the necessary components for training. You don’t need to explicitly place your model on a device. Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() Prepare to accelerate The next step is to pass all the relevant training objects to the prepare method. This includes your training and evaluation DataLoaders, a model and an optimizer: Copied >>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( ... train_dataloader, eval_dataloader, model, optimizer ... ) Backward The last addition is to replace the typical loss.backward() in your training loop with 🤗 Accelerate’s backward method: Copied >>> for epoch in range (num_epochs): ... for batch in train_dataloader: ... outputs = model(**batch) ... loss = outputs.loss ... accelerator.backward(loss) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update( 1 ) As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training! 
Copied + from accelerate import Accelerator from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler + accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) optimizer = AdamW(model.parameters(), lr=3e-5) - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") - model.to(device) + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare( + train_dataloader, eval_dataloader, model, optimizer + ) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: - batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss - loss.backward() + accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) Train Once you’ve added the relevant lines of code, launch your training in a script or a notebook like Colaboratory. Train with a script If you are running your training from a script, run the following command to create and save a configuration file: Copied accelerate config Then launch your training with: Copied accelerate launch train.py Train with a notebook 🤗 Accelerate can also run in a notebook if you’re planning on using Colaboratory’s TPUs. Wrap all the code responsible for training in a function, and pass it to notebook_launcher : Copied >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) For more information about 🤗 Accelerate and its rich features, refer to the documentation . < > Update on GitHub ← Train with a script Load and train adapters with 🤗 PEFT → Distributed training with 🤗 Accelerate Setup Prepare to accelerate Backward Train Train with a script Train with a notebook
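If you are unsure what training_function should contain, here is a self-contained sketch that follows the same pattern as the loop above but uses a tiny stand-in model and random data (placeholders, not part of the original example) so it can be launched as-is; in a real notebook you would build your 🤗 Transformers model, optimizer and dataloaders inside the function instead.
Copied
import torch
from accelerate import Accelerator, notebook_launcher

def training_function():
    # Everything is created inside the function so each launched process gets its own copy.
    accelerator = Accelerator()
    model = torch.nn.Linear(10, 2)  # stand-in for a Transformers model
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
    dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer)

    model.train()
    for epoch in range(3):
        for inputs, labels in dataloader:
            loss = torch.nn.functional.cross_entropy(model(inputs), labels)
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()
    accelerator.print("done")

# Launch on the available devices (TPU cores in Colab, GPUs or CPU locally).
notebook_launcher(training_function, num_processes=1)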
Get_the_number_of_rows_and_the_size_in_bytes.txt
Get the number of rows and the size in bytes Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation Get the number of rows and the size in bytes Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Get the number of rows and the size in bytes This guide shows you how to use the dataset viewer’s /size endpoint to retrieve a dataset’s size programmatically. Feel free to also try it out with ReDoc . The /size endpoint accepts the dataset name as its query parameter: Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://datasets-server.huggingface.co/size?dataset=ibm/duorc" def query (): response = requests.get(API_URL, headers=headers) return response.json() data = query() The endpoint response is a JSON containing the size of the dataset, as well as each of its subsets and splits. It provides the number of rows, the number of colums (where applicable) and the size in bytes for the different forms of the data: original files, size in memory (RAM) and auto-converted parquet files. For example, the ibm/duorc dataset has 187.213 rows along all its subsets and splits, for a total of 97MB. 
Copied { "size" : { "dataset" : { "dataset" : "ibm/duorc" , "num_bytes_original_files" : 58710973 , "num_bytes_parquet_files" : 58710973 , "num_bytes_memory" : 1060742354 , "num_rows" : 187213 } , "configs" : [ { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "num_bytes_original_files" : 37709127 , "num_bytes_parquet_files" : 37709127 , "num_bytes_memory" : 704394283 , "num_rows" : 100972 , "num_columns" : 7 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "num_bytes_original_files" : 21001846 , "num_bytes_parquet_files" : 21001846 , "num_bytes_memory" : 356348071 , "num_rows" : 86241 , "num_columns" : 7 } ] , "splits" : [ { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "split" : "train" , "num_bytes_parquet_files" : 26005668 , "num_bytes_memory" : 494389683 , "num_rows" : 69524 , "num_columns" : 7 } , { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "split" : "validation" , "num_bytes_parquet_files" : 5566868 , "num_bytes_memory" : 106733319 , "num_rows" : 15591 , "num_columns" : 7 } , { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "split" : "test" , "num_bytes_parquet_files" : 6136591 , "num_bytes_memory" : 103271281 , "num_rows" : 15857 , "num_columns" : 7 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "split" : "train" , "num_bytes_parquet_files" : 14851720 , "num_bytes_memory" : 248966361 , "num_rows" : 60721 , "num_columns" : 7 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "split" : "validation" , "num_bytes_parquet_files" : 3114390 , "num_bytes_memory" : 56359392 , "num_rows" : 12961 , "num_columns" : 7 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "split" : "test" , "num_bytes_parquet_files" : 3035736 , "num_bytes_memory" : 51022318 , "num_rows" : 12559 , "num_columns" : 7 } ] } , "pending" : [ ] , "failed" : [ ] , "partial" : false } If the size has partial: true it means that the actual size of the dataset couldn’t been determined because it’s too big. In that case the number of rows and bytes can be inferior to the actual numbers. < > Update on GitHub ← List Parquet files Explore dataset statistics → Get the number of rows and the size in bytes
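To work with this response programmatically, a short sketch like the one below (reusing the public ibm/duorc example; the Authorization header is only needed for private or gated datasets) summarizes row counts and Parquet sizes per subset.
Copied
import requests

API_URL = "https://datasets-server.huggingface.co/size?dataset=ibm/duorc"
size_info = requests.get(API_URL).json()["size"]

# Top-level totals, then one line per configuration (subset).
print(f"total rows: {size_info['dataset']['num_rows']}")
for config in size_info["configs"]:
    parquet_mb = config["num_bytes_parquet_files"] / 1024**2
    print(f"{config['config']}: {config['num_rows']} rows, {parquet_mb:.1f} MB of Parquet files")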
Cache_management.txt
Cache management Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Cache management Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Cache management When you download a dataset from Hugging Face, the data are stored locally on your computer. Files from Hugging Face are stored as usual in the huggingface_hub cache, which is at ~/.cache/huggingface/hub by default. See the Hub cache documentation for more details and how to change its location. The Hub cache allows 🤗 Datasets to avoid re-downloading dataset files from Hugging Face every time you use them. 🤗 Datasets also has its own cache to store datasets converted in Arrow format (the format used by Dataset objects). This guide focuses on the 🤗 Datasets cache and will show you how to: Change the cache directory. Control how a dataset is loaded from the cache. Clean up cache files in the directory. Enable or disable caching. Cache directory The default 🤗 Datasets cache directory is ~/.cache/huggingface/datasets . Change the cache location by setting the shell environment variable, HF_HOME to another directory: Copied $ export HF_HOME = "/path/to/another/directory/datasets" When you load a dataset, you also have the option to change where the data is cached. 
Change the cache_dir parameter to the path you want: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( 'username/dataset' , cache_dir= "/path/to/another/directory/datasets" ) Download mode After you download a dataset, control how it is loaded by load_dataset() with the download_mode parameter. By default, 🤗 Datasets will reuse a dataset if it exists. But if you need the original dataset without any processing functions applied, re-download the files as shown below: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( 'squad' , download_mode= 'force_redownload' ) Refer to DownloadMode for a full list of download modes. Cache files Clean up the Arrow cache files in the directory with Dataset.cleanup_cache_files() : Copied # Returns the number of removed cache files >>> dataset.cleanup_cache_files() 2 Enable or disable caching If you’re using a cached file locally, it will automatically reload the dataset with any previous transforms you applied to the dataset. Disable this behavior by setting the argument load_from_cache_file=False in Dataset.map() : Copied >>> updated_dataset = small_dataset. map (add_prefix, load_from_cache_file= False ) In the example above, 🤗 Datasets will execute the function add_prefix over the entire dataset again instead of loading the dataset from its previous state. Disable caching on a global scale with disable_caching() : Copied >>> from datasets import disable_caching >>> disable_caching() When you disable caching, 🤗 Datasets will no longer reload cached files when applying transforms to datasets. Any transform you apply on your dataset will be need to be reapplied. If you want to reuse a dataset from scratch, try setting the download_mode parameter in load_dataset() instead. Improve performance Disabling the cache and copying the dataset in-memory will speed up dataset operations. There are two options for copying the dataset in-memory: Set datasets.config.IN_MEMORY_MAX_SIZE to a nonzero value (in bytes) that fits in your RAM memory. Set the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE to a nonzero value. Note that the first method takes higher precedence. < > Update on GitHub ← Use with Spark Cloud storage → Cache management Cache directory Download mode Cache files Enable or disable caching Improve performance
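As a concrete sketch of the in-memory option (the ~4 GB threshold and the squad dataset are arbitrary examples, not recommendations), you can set the threshold before loading:
Copied
import datasets
from datasets import load_dataset

# Allow datasets up to ~4 GB (value in bytes) to be copied into RAM
# instead of being memory-mapped from the on-disk cache.
datasets.config.IN_MEMORY_MAX_SIZE = 4 * 1024**3

# Alternatively, set the environment variable before starting Python:
#   export HF_DATASETS_IN_MEMORY_MAX_SIZE=4000000000

ds = load_dataset("squad", split="train")
print(ds.cache_files)  # an in-memory dataset reports an empty list of cache files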
PPO_Trainer.txt
PPO Trainer Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation PPO Trainer TRL 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started PPO Trainer TRL supports training LLMs with Proximal Policy Optimization (PPO) . References: Fine-Tuning Language Models from Human Preferences Learning to Summarize from Human Feedback The N Implementation Details of RLHF with PPO The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization Get started To just run a PPO script to make sure the trainer can run, you can run the following command to train a PPO model with a dummy reward model. Copied python examples/scripts/ppo/ppo.py \ --dataset_name trl-internal-testing/descriptiveness-sentiment-trl-style \ --dataset_train_split descriptiveness \ --learning_rate 3e-6 \ --num_ppo_epochs 1 \ --num_mini_batches 1 \ --output_dir models/minimal/ppo \ --per_device_train_batch_size 64 \ --gradient_accumulation_steps 1 \ --total_episodes 10000 \ --model_name_or_path EleutherAI/pythia-1b-deduped \ --missing_eos_penalty 1.0 Explanation of the logged metrics The logged metrics are as follows. Here is an example tracked run at Weights and Biases eps : Tracks the number of episodes per second. objective/kl : The mean Kullback-Leibler (KL) divergence between the current policy and reference policy. objective/entropy : The mean entropy of the policy, indicating the randomness of the actions chosen by the policy. objective/non_score_reward : The mean reward from non-score-related sources, basically beta * kl.sum(1) , where beta is the KL penalty coefficient and kl is the per-token KL divergence. objective/rlhf_reward : The mean RLHF reward, which is score - non_score_reward . objective/scores : The mean scores returned by the reward model / environment. policy/approxkl_avg : The average approximate KL divergence between consecutive PPO policies. Note that this is not the same as objective/kl . policy/clipfrac_avg : The average fraction of policy updates that are clipped, indicating how often the policy updates are constrained to prevent large changes. 
loss/policy_avg : The average policy loss, indicating how well the policy is performing. loss/value_avg : The average value loss, indicating the difference between the predicted value and the actual reward. val/clipfrac_avg : The average fraction of value function updates that are clipped, similar to policy/clipfrac_avg but for the value function. policy/entropy_avg : The average entropy of the policy during training, indicating how diverse the policy’s actions are. val/ratio : The mean ratio of the current policy probability to the old policy probability, providing a measure of how much the policy has changed. val/ratio_var : The variance of the val/ratio , indicating the variability in policy changes. val/num_eos_tokens : The number of end-of-sequence (EOS) tokens generated, which can indicate the number of complete responses. lr : lr: The current learning rate used by the optimizer. episode : episode: The current global step or episode count in the training process. Cookbook Debugging TIP: objective/rlhf_reward : this is the ultimate objective of the RLHF training. If training works as intended, this metric should keep going up. Debugging TIP: val/ratio : this number should float around 1.0, and it gets clipped by --cliprange 0.2 with PPO’s surrogate loss. So if this ratio is too high like 2.0 or 1000.0 or too small like 0.1, it means the updates between consecutive policies are too drastic. You should try undertand why this is happening and try to fix it. Memory TIP: If you are running out of memory, you can try to reduce the --per_device_train_batch_size or increase the --gradient_accumulation_steps to reduce the memory footprint. Memory TIP: If you have multiple GPUs, you can also run training with DeepSpeed stage 3 to reduce the memory footprint accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml . Usage TIP: We recommend to use the “EOS trick” via --missing_eos_penalty , which subtracts a static scalar penalty from the score of completions that do not end with an EOS token. This can help the model learn to generate more coherent completions. What is my model doing exactly? To help you understand what your model is doing, we periodically log some sample completions from the model. Here is an example of a completion. In an example tracked run at Weights and Biases , it looks like the following, allowing you to see the model’s response at different stages of training. By default we generate --num_sample_generations 10 during training, but you can customize the number of generations. In the logs the sampled generations look like Copied ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓ ┃ query ┃ model response ┃ score ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩ │ SUBREDDIT: r/AskReddit │ I'm in love with a friend, and │ 3.921875 │ │ │ I don't know how to get rid of │ │ │ TITLE: How do you get someone │ those feelings. I'm │ │ │ out of your head? │ desperate.<|endoftext|>[PAD][P… │ │ │ │ │ │ │ POST: Hi, │ │ │ │ I'm 22 , and I have been with my │ │ │ │ girlfriend for 5 years now. We │ │ │ │ recently moved together. We've │ │ │ │ always loved each other │ │ │ │ intensely. │ │ │ │ │ │ │ │ Problem, I recently started to │ │ │ │ have feelings for an other │ │ │ │ person (a friend). This person │ │ │ │ has had a boyfriend for now 3 │ │ │ │ years, and has absolutely no │ │ │ │ ideas. Those feelings were so │ │ │ │ strong, it was hard to hide │ │ │ │ them. 
After 2 months of me │ │ │ │ being distant and really sad, │ │ │ │ my girlfriend forced me to say │ │ │ │ what was bothering me . I'm not │ │ │ │ a good liar, and now she knows. │ │ │ │ │ │ │ │ We decided to give us a week │ │ │ │ alone, I went to my parents. │ │ │ │ │ │ │ │ Now, I'm completely lost. I │ │ │ │ keep on thinking about this │ │ │ │ person, and I hate that . I │ │ │ │ would like for those feelings │ │ │ │ to go away, to leave me alone. │ │ │ │ But I can't. │ │ │ │ │ │ │ │ What do I do? It's been 3 │ │ │ │ months now, and I'm just │ │ │ │ desperate. │ │ │ │ │ │ │ │ TL;DR: │ │ │ ├─────────────────────────────────┼─────────────────────────────────┼──────────┤ │ SUBREDDIT: r/pettyrevenge │ My mom woke me up with a loud │ 6.84375 │ │ │ TV. I blasted Gangnam Style on │ │ │ TITLE: So, my mom woke me up │ repeat , with the bass cranked │ │ │ with a loud TV. │ up as high as it could │ │ │ │ go.<|endoftext|>[PAD][PAD][PAD… │ │ │ POST: She was in her living │ │ │ │ room, watching TV. This was at │ │ │ │ about 8 : 30 in the morning, and │ │ │ │ she was exercising. She turned │ │ │ │ the TV up extra loud to hear it │ │ │ │ over her excercycle, and woke │ │ │ │ me up. I went in there asking │ │ │ │ for her to turn it down. She │ │ │ │ said she didn't have to ; I │ │ │ │ explained that I always used │ │ │ │ headphones so she didn't have │ │ │ │ to deal with my noise and that │ │ │ │ she should give me a little │ │ │ │ more respect, given that I paid │ │ │ │ rent at the time . │ │ │ │ │ │ │ │ She disagreed. I went back to │ │ │ │ my room, rather pissed off at │ │ │ │ the lack of equality. I had no │ │ │ │ lock on my door; but I had a │ │ │ │ dresser right next to it , so I │ │ │ │ pulled one of the drawers out │ │ │ │ enough so that it caused the │ │ │ │ door to not be openable. Then, │ │ │ │ I turned my speakers up really │ │ │ │ loud and blasted Gangnam Style │ │ │ │ on repeat , with the bass │ │ │ │ cranked up as high as it could │ │ │ │ go. │ │ │ │ │ │ │ │ If you hate Gangnam Style for │ │ │ │ being overplayed, you will see │ │ │ │ why I chose that particular │ │ │ │ song. I personally don't mind │ │ │ │ it . But here's the thing about │ │ │ │ my bass; it vibrates the walls, │ │ │ │ making one hell of a lot of │ │ │ │ noise. Needless to say , my mom │ │ │ │ was not pleased and shut off │ │ │ │ the internet. But it was oh so │ │ │ │ worth it . │ │ │ │ │ │ │ │ TL;DR: │ │ │ └─────────────────────────────────┴─────────────────────────────────┴──────────┘ Implementation details This PPO implementation is based on the The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . Benchmark experiments To validate the PPO implementation works, we ran experiment on the 1B model. Here are the command we used to run the experiment. We take the SFT / RM models directly from The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . 
Copied accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \ examples/scripts/ppo/ppo_tldr.py \ --output_dir models/minimal/ppo_tldr \ --learning_rate 3e-6 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 4 \ --total_episodes 1000000 \ --model_name_or_path EleutherAI/pythia-1b-deduped \ --sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \ --reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \ --local_rollout_forward_batch_size 16 \ --missing_eos_penalty 1.0 \ --stop_token eos Checkpoints and experiment tracking are available at: 🤗 Model checkpoint 🐝 Tracked experiment To evaluate, we use vLLM to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR. For more information on how to use judges, see Judges . Copied $ python examples/scripts/evals/judge_tldr.py --model_name_or_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 33.00% $ python examples/scripts/evals/judge_tldr.py --model_name_or_path vwxyzjn/ppo_tldr --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 64.70% The PPO checkpoint achieves a 64.7% win rate versus the 33.0% win rate of the SFT checkpoint, a good sign that the PPO training is working as intended. Metrics: Copied # pip install openrlbenchmark==0.2.1a5 # see https://github.com/openrlbenchmark/openrlbenchmark#get-started for documentation # to use it, change `?we=huggingface&wpn=trl` to your own project and `?tag=pr-1540` to your own tag python -m openrlbenchmark.rlops_multi_metrics \ --filters '?we=huggingface&wpn=trl&xaxis=train/episode&ceik=output_dir&cen=sft_model_path&metrics=train/objective/rlhf_reward&metrics=train/objective/scores&metrics=train/objective/kl&metrics=train/objective/non_score_reward&metrics=train/objective/entropy&metrics=train/policy/approxkl_avg&metrics=train/policy/clipfrac_avg&metrics=train/loss/policy_avg&metrics=train/loss/value_avg&metrics=train/val/clipfrac_avg&metrics=train/policy/entropy_avg&metrics=train/val/ratio&metrics=train/val/ratio_var&metrics=train/val/num_eos_tokens&metrics=train/lr&metrics=train/eps' \ "cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr?tag=pr-1540" \ --env-ids models/minimal/ppo_tldr \ --pc.ncols 4 \ --pc.ncols-legend 1 \ --pc.xlabel "Episode" \ --output-filename benchmark/trl/pr-1540/ppo \ --scan-history PPOTrainer class trl.
PPOTrainer < source > ( args : PPOConfig processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] model : Module ref_model : typing.Optional[torch.nn.modules.module.Module] reward_model : Module train_dataset : Dataset value_model : typing.Optional[torch.nn.modules.module.Module] = None data_collator : typing.Optional[transformers.data.data_collator.DataCollatorWithPadding] = None eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None optimizers : tuple = (None, None) callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None peft_config : typing.Optional[ForwardRef('PeftConfig')] = None ) create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) — The name of the model. dataset_name ( str , optional , defaults to None ) — The name of the dataset used for training. tags ( str , list[str] or None , optional , defaults to None ) — Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer . null_ref_context < source > ( ) Context manager for handling null reference model (that is, peft adapter manipulation). PPOConfig class trl. PPOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-05 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool 
= False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False 
eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False dataset_num_proc : typing.Optional[int] = None num_mini_batches : int = 1 total_episodes : typing.Optional[int] = None local_rollout_forward_batch_size : int = 64 num_sample_generations : int = 10 response_length : int = 53 stop_token : typing.Optional[typing.Literal['eos']] = None stop_token_id : typing.Optional[int] = None temperature : float = 0.7 missing_eos_penalty : typing.Optional[float] = None sft_model_path : str = 'EleutherAI/pythia-160m' world_size : typing.Optional[int] = None num_total_batches : typing.Optional[int] = None micro_batch_size : typing.Optional[int] = None local_batch_size : typing.Optional[int] = None batch_size : typing.Optional[int] = None local_mini_batch_size : typing.Optional[int] = None mini_batch_size : typing.Optional[int] = None exp_name : str = 'ppo_config' reward_model_path : str = 'EleutherAI/pythia-160m' model_adapter_name : typing.Optional[str] = None ref_adapter_name : typing.Optional[str] = None num_ppo_epochs : int = 4 whiten_rewards : bool = False kl_coef : float = 0.05 cliprange : float = 0.2 vf_coef : float = 0.1 cliprange_value : float = 0.2 gamma : float = 1.0 lam : float = 0.95 ) Parameters exp_name ( str , optional , defaults to os.path.basename(__file__)[ ---3] ): Name of this experiment. reward_model_path ( str , optional , defaults to "EleutherAI/pythia-160m" ) — Path to the reward model. model_adapter_name ( Optional[str] , optional , defaults to None ) — Name of the train target PEFT adapter, when using LoRA with multiple adapters. ref_adapter_name ( Optional[str] , optional , defaults to None ) — Name of the reference PEFT adapter, when using LoRA with multiple adapters. num_ppo_epochs ( int , optional , defaults to 4 ) — Number of epochs to train. whiten_rewards ( bool , optional , defaults to False ) — Whether to whiten the rewards. kl_coef ( float , optional , defaults to 0.05 ) — KL coefficient. cliprange ( float , optional , defaults to 0.2 ) — Clip range. vf_coef ( float , optional , defaults to 0.1 ) — Value function coefficient. cliprange_value ( float , optional , defaults to 0.2 ) — Clip range for the value function. gamma ( float , optional , defaults to 1.0 ) — Discount factor. lam ( float , optional , defaults to 0.95 ) — Lambda value for GAE. Configuration class for the PPOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. < > Update on GitHub ← ORPO PRM → PP O Trainer Get started Explanation of the logged metrics Cookbook What is my model doing exactly? Implementation details Benchmark experiments PPO Trainer PPO Config
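To tie the configuration reference above back to the quick-start command, here is a hedged, minimal sketch of how PPOConfig and PPOTrainer fit together in Python. It mirrors what examples/scripts/ppo/ppo.py does with the quick-start hyperparameters, but it is an illustration rather than the exact script: the model choices are placeholders, and the prompt-tokenization step that the full script applies to the dataset (producing input_ids) is assumed rather than shown.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
from trl import PPOConfig, PPOTrainer

model_id = "EleutherAI/pythia-1b-deduped"  # placeholder base model, as in the quick start

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "[PAD]"})

# Policy and reference policy are causal LMs; reward and value models use scalar heads.
policy = AutoModelForCausalLM.from_pretrained(model_id)
ref_policy = AutoModelForCausalLM.from_pretrained(model_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
value_model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

# The full example script tokenizes the prompts into `input_ids` before training;
# that preprocessing is omitted here for brevity.
train_dataset = load_dataset(
    "trl-internal-testing/descriptiveness-sentiment-trl-style", split="descriptiveness"
)

training_args = PPOConfig(
    output_dir="models/minimal/ppo",
    learning_rate=3e-6,
    num_ppo_epochs=1,
    num_mini_batches=1,
    per_device_train_batch_size=64,
    total_episodes=10_000,
    missing_eos_penalty=1.0,
)

trainer = PPOTrainer(
    args=training_args,
    processing_class=tokenizer,
    model=policy,
    ref_model=ref_policy,
    reward_model=reward_model,
    value_model=value_model,
    train_dataset=train_dataset,
)
trainer.train()

Launching a script like this with accelerate, as in the commands above, gives the multi-GPU behaviour described earlier.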
Chat_Completion.txt
Chat Completion Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Chat Completion api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Chat Completion Generate a response given a list of messages in a conversational context, supporting both conversational Language Models (LLMs) and conversational Vision-Language Models (VLMs). This is a subtask of text-generation and image-text-to-text . Recommended models Conversational Large Language Models (LLMs) google/gemma-2-2b-it : A text-generation model trained to follow instructions. meta-llama/Meta-Llama-3.1-8B-Instruct : Very powerful text generation model trained to follow instructions. microsoft/Phi-3-mini-4k-instruct : Small yet powerful text generation model. Qwen/Qwen2.5-7B-Instruct : Strong text generation model to follow instructions. Conversational Vision-Language Models (VLMs) meta-llama/Llama-3.2-11B-Vision-Instruct : Powerful vision language model with great visual understanding and reasoning capabilities. Qwen/Qwen2-VL-7B-Instruct : Strong image-text-to-text model. API Playground For Chat Completion models, we provide an interactive UI Playground for easier testing: Quickly iterate on your prompts from the UI. Set and override system, assistant and user messages. Browse and select models currently available on the Inference API. Compare the output of two models side-by-side. Adjust requests parameters from the UI. Easily switch between UI view and code snippets. Access the Inference UI Playground and start exploring: https://huggingface.co/playground Using the API The API supports: Using the chat completion API compatible with the OpenAI SDK. Using grammars, constraints, and tools. Streaming the output Code snippet example for conversational LLMs Python JavaScript cURL Using huggingface_hub : Copied from huggingface_hub import InferenceClient client = InferenceClient(api_key= "hf_***" ) messages = [ { "role" : "user" , "content" : "What is the capital of France?" 
} ] stream = client.chat.completions.create( model= "google/gemma-2-2b-it" , messages=messages, max_tokens= 500 , stream= True ) for chunk in stream: print (chunk.choices[ 0 ].delta.content, end= "" ) Using openai : Copied from openai import OpenAI client = OpenAI( base_url= "https://api-inference.huggingface.co/v1/" , api_key= "hf_***" ) messages = [ { "role" : "user" , "content" : "What is the capital of France?" } ] stream = client.chat.completions.create( model= "google/gemma-2-2b-it" , messages=messages, max_tokens= 500 , stream= True ) for chunk in stream: print (chunk.choices[ 0 ].delta.content, end= "" ) To use the Python client, see huggingface_hub ’s package reference . Code snippet example for conversational VLMs Python JavaScript cURL Using huggingface_hub : Copied from huggingface_hub import InferenceClient client = InferenceClient(api_key= "hf_***" ) messages = [ { "role" : "user" , "content" : [ { "type" : "text" , "text" : "Describe this image in one sentence." }, { "type" : "image_url" , "image_url" : { "url" : "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } } ] } ] stream = client.chat.completions.create( model= "meta-llama/Llama-3.2-11B-Vision-Instruct" , messages=messages, max_tokens= 500 , stream= True ) for chunk in stream: print (chunk.choices[ 0 ].delta.content, end= "" ) Using openai : Copied from openai import OpenAI client = OpenAI( base_url= "https://api-inference.huggingface.co/v1/" , api_key= "hf_***" ) messages = [ { "role" : "user" , "content" : [ { "type" : "text" , "text" : "Describe this image in one sentence." }, { "type" : "image_url" , "image_url" : { "url" : "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } } ] } ] stream = client.chat.completions.create( model= "meta-llama/Llama-3.2-11B-Vision-Instruct" , messages=messages, max_tokens= 500 , stream= True ) for chunk in stream: print (chunk.choices[ 0 ].delta.content, end= "" ) To use the Python client, see huggingface_hub ’s package reference . API specification Request Payload frequency_penalty number Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. logprobs boolean Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. max_tokens integer The maximum number of tokens that can be generated in the chat completion. messages* object[] A list of messages comprising the conversation so far. content* unknown One of the following: (#1) string (#2) object[] (#1) object text* string type* enum Possible values: text. (#2) object image_url* object url* string type* enum Possible values: image_url. name string role* string presence_penalty number Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics response_format unknown One of the following: (#1) object type* enum Possible values: json. value* unknown A string that represents a JSON Schema . JSON Schema is a declarative language that allows to annotate JSON documents with types and descriptions. (#2) object type* enum Possible values: regex. value* string seed integer stop string[] Up to 4 sequences where the API will stop generating further tokens. 
stream boolean stream_options object include_usage* boolean If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. temperature number What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. tool_choice unknown One of the following: (#1) enum Possible values: auto. (#2) enum Possible values: none. (#3) enum Possible values: required. (#4) object function* object name* string tool_prompt string A prompt to be appended before the tools tools object[] A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. function* object arguments* unknown description string name* string type* string top_logprobs integer An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used. top_p number An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here . x-wait-for-model boolean, default to false If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here . For more information about Inference API headers, check out the parameters guide . Response Output type depends on the stream input parameter. If stream is false (default), the response will be a JSON object with the following fields: Body choices object[] finish_reason string index integer logprobs object content object[] logprob number token string top_logprobs object[] logprob number token string message unknown One of the following: (#1) object content string role string (#2) object role string tool_calls object[] function object arguments unknown description string name string id string type string created integer id string model string system_fingerprint string usage object completion_tokens integer prompt_tokens integer total_tokens integer If stream is true , generated tokens are returned as a stream, using Server-Sent Events (SSE). 
For more information about streaming, check out this guide . Body choices object[] delta unknown One of the following: (#1) object content string role string (#2) object role string tool_calls object function object arguments string name string id string index integer type string finish_reason string index integer logprobs object content object[] logprob number token string top_logprobs object[] logprob number token string created integer id string model string system_fingerprint string usage object completion_tokens integer prompt_tokens integer total_tokens integer
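As a complement to the streaming snippets above, here is a minimal sketch of a non-streaming call with huggingface_hub; the model and the "hf_***" token are the same placeholders used earlier, and the printed fields correspond to the non-streaming response body documented above:

from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")  # placeholder token

completion = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    # stream defaults to False, so a single response object is returned
)

print(completion.choices[0].message.content)
print(completion.usage)  # prompt_tokens, completion_tokens, total_tokens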
Supported_Transformers_&_Diffusers_Tasks.txt
Supported Transformers & Diffusers Tasks Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Inference Endpoints (dedicated) documentation Supported Transformers & Diffusers Tasks Inference Endpoints (dedicated) 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Overview 🤗 Inference Endpoints Security & Compliance Supported Tasks API Reference (Swagger) Autoscaling Pricing Help & Support FAQ Guides Access the solution (UI) Create your first Endpoint Send Requests to Endpoints Update your Endpoint Advanced Setup (Instance Types, Auto Scaling, Versioning) Create a Private Endpoint with AWS PrivateLink Add custom Dependencies Create custom Inference Handler Use a custom Container Image Access and read Logs Access and view Metrics Change Organization or Account Pause and Resume your Endpoint Deploying a llama.cpp Container Others Inference Endpoints Version Serialization & Deserialization for Requests Inference Endpoints Container Types Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Supported Transformers & Diffusers Tasks Inference Endpoints offers out-of-the-box support for Machine Learning tasks from the following libraries: Transformers Sentence-Transformers Diffusers (for the Text To Image task) Below is a table of Hugging Face managed supported tasks for Inference Endpoint. These tasks don’t require any form of code or “custom container” to deploy an Endpoint. If you want to customize any of the tasks below, or want to write your own custom task, check out the “Create your own inference handler” section for more information. Most of the tasks below uses the pipeline object, and more information about what additional parameters can be sent to the endpoint is available here . Task Framework Out of the box Support Text To Image Diffusers ✅ Text Classification Transformers ✅ Zero Shot Classification Transformers ✅ Token Classifiation Transformers ✅ Question Answering Transformers ✅ Fill Mask Transformers ✅ Summarization Transformers ✅ Translation Transformers ✅ Text to Text Generation Transformers ✅ Text Generation Transformers ✅ Feature Extraction Transformers ✅ Sentence Embeddings Sentence Transformers ✅ Sentence Similarity Sentence Transformers ✅ Ranking Sentence Transformers ✅ Image Classification Transformers ✅ Automatic Speech Recognition Transformers ✅ Audio Classification Transformers ✅ Object Detection Transformers ✅ Image Segmentation Transformers ✅ Table Question Answering Transformers ✅ Conversational Transformers ✅ Custom Custom ✅ Visual Question Answering Transformers ❌ Zero Shot Image Classification Transformers ❌ Example Request payloads See the following request examples for some of the tasks: Custom Handler Copied { "inputs" : "This is a sample input" , "moreData" : 1 , "customTask" : true } Text Classification For additional parameters, see this reference . 
Classifying a single text Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Classifying a text pair Copied { "inputs" : { "text" : "This sound track was beautiful!" , "text_pair" : "It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } } Zero Shot Classification For additional parameters, see this reference . Copied { "inputs" : "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!" , "parameters" : { "candidate_labels" : [ "refund" , "legal" , "faq" ] } } Token Classifiation For additional parameters, see this reference . Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Question Answering For additional parameters, see this reference . Copied { "inputs" : { "question" : "What is used for inference?" , "context" : "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference." } } Fill Mask For additional parameters, see this reference . Copied { "inputs" : "This sound track was <mask>! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Summarization For additional parameters, see this reference . Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Translation For additional parameters, see this reference . Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Text to Text Generation For additional parameters, see this reference . Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Text Generation For additional parameters, see this reference . Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Feature Extraction For additional parameters, see this reference . Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Sentence Embeddings If using a TEI container , see this reference for additional parameters. Copied { "inputs" : "This sound track was beautiful! It paints the scenery in your mind so well I would recomend it even to people who hate vid. game music!" } Sentence similarity Copied { "inputs" : { "sentences" : [ "This sound track was beautiful!" , "It paints the scenery in your mind so well" ] , "source_sentence" : "What a wonderful day to listen to music" } } Ranking Copied { "inputs" : [ "This sound track was beautiful!" , "It paints the scenery in your mind so well" ] } Image Classification Image Classification can receive json payloads or binary data from a image directly. JSON Copied { "inputs" : "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgI" } Binary Copied curl --request POST \ --url https://{ENDPOINT}/ \ --header 'Content-Type: image/jpg' \ --header 'Authorization: Bearer {HF_TOKEN}' \ --data-binary '@test.jpg' Automatic Speech Recognition Automatic Speech Recognition can receive json payloads or binary data from a audio directly. 
For additional parameters, see this reference . JSON Copied { "inputs" : "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgI" } Binary Copied curl --request POST \ --url https://{ENDPOINT}/ \ --header 'Content-Type: audio/x-flac' \ --header 'Authorization: Bearer {HF_TOKEN}' \ --data-binary '@sample.flac' Audio Classification Audio Classification can receive json payloads or binary data from a audio directly. For additional parameters, see this reference . JSON Copied { "inputs" : "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgI" } Binary Copied curl --request POST \ --url https://{ENDPOINT}/ \ --header 'Content-Type: audio/x-flac' \ --header 'Authorization: Bearer {HF_TOKEN}' \ --data-binary '@sample.flac' Object Detection Object Detection can receive json payloads or binary data from a image directly. For additional parameters, see this reference . JSON Copied { "inputs" : "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgI" } Binary Copied curl --request POST \ --url https://{ENDPOINT}/ \ --header 'Content-Type: image/jpg' \ --header 'Authorization: Bearer {HF_TOKEN}' \ --data-binary '@test.jpg' Image Segmentation Image Segmentation can receive json payloads or binary data from a image directly. For additional parameters, see this reference . JSON Copied { "inputs" : "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgI" } Binary Copied curl --request POST \ --url https://{ENDPOINT}/ \ --header 'Content-Type: image/jpg' \ --header 'Authorization: Bearer {HF_TOKEN}' \ --data-binary '@test.jpg' Table Question Answering For additional parameters, see this reference . Copied { "inputs" : { "query" : "How many stars does the transformers repository have?" , "table" : { "Repository" : [ "Transformers" , "Datasets" , "Tokenizers" ] , "Stars" : [ "36542" , "4512" , "3934" ] , "Contributors" : [ "651" , "77" , "34" ] , "Programming language" : [ "Python" , "Python" , "Rust, Python and NodeJS" ] } } } Conversational For additional parameters, see this reference . Copied { "inputs" : [ { "role" : "user" , "content" : "Which movie is the best ?" } , { "role" : "assistant" , "content" : "It's Die Hard for sure." } , { "role" : "user" , "content" : "Can you explain why?" } ] } Text To Image Copied { "inputs" : "realistic render portrait realistic render portrait of group of flying blue whales towards the moon, intricate, toy, sci - fi, extremely detailed, digital painting, sculpted in zbrush, artstation, concept art, smooth, sharp focus, illustration, chiaroscuro lighting, golden ratio, incredible art by artgerm and greg rutkowski and alphonse mucha and simon stalenhag" , } For text-to-image models, note that currently your model repo needs to be a diffusers model with the full weights in it (i.e., not just a LoRA). Additional parameters You can add additional parameters, which are supported by the pipelines api from transformers. For Example if you have a text-generation pipeline you can provide generation_kwargs for repetition_penalty or max_length Copied { "inputs" : "Hugging Face, the winner of VentureBeat’s Innovation in Natural Language Process/Understanding Award for 2021, is looking to level the playing field. The team, launched by Clément Delangue and Julien Chaumond in 2016, was recognized for its work in democratizing NLP, the global market value for which is expected to hit $35.1 billion by 2026. This week, Google’s former head of Ethical AI Margaret Mitchell joined the team." 
, "parameters" : { "repetition_penalty" : 4.0 , "max_length" : 128 } } < > Update on GitHub ← Security & Compliance API Reference (Swagger) → Supported Transformers & Diffusers Tasks Example Request payloads Custom Handler Text Classification Classifying a single text Classifying a text pair Zero Shot Classification Token Classifiation Question Answering Fill Mask Summarization Translation Text to Text Generation Text Generation Feature Extraction Sentence Embeddings Sentence similarity Ranking Image Classification Automatic Speech Recognition Audio Classification Object Detection Image Segmentation Table Question Answering Conversational Text To Image Additional parameters
KTO_Trainer.txt
KTO Trainer Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation KTO Trainer TRL 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started KTO Trainer Overview Kahneman-Tversky Optimization (KTO) was introduced in KTO: Model Alignment as Prospect Theoretic Optimization by Kawin Ethayarajh , Winnie Xu , Niklas Muennighoff , Dan Jurafsky, Douwe Kiela . The abstract from the paper is the following: Kahneman & Tversky’s prospect theory tells us that humans perceive random variables in a biased but well-defined manner; for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases — the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them being human-aware loss functions (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach Kahneman-Tversky Optimization (KTO), and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B. Crucially, KTO does not need preferences — only a binary signal of whether an output is desirable or undesirable for a given input. This makes it far easier to use in the real world, where preference data is scarce and expensive. The official code can be found in ContextualAI/HALOs . This post-training method was contributed by Kashif Rasul , Younes Belkada , Lewis Tunstall and Pablo Vicente. Quick start This example demonstrates how to train a model using the KTO method. We use the Qwen 0.5B model as the base model. We use the preference data from the KTO Mix 14k . 
You can view the data in the dataset here: Below is the script to train the model: Copied # train_kto.py from datasets import load_dataset from trl import KTOConfig, KTOTrainer from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) train_dataset = load_dataset( "trl-lib/kto-mix-14k" , split= "train" ) training_args = KTOConfig(output_dir= "Qwen2-0.5B-KTO" , logging_steps= 10 ) trainer = KTOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset) trainer.train() Execute the script using the following command: Copied accelerate launch train_kto.py Distributed across 8 x H100 GPUs, the training takes approximately 30 minutes. You can verify the training progress by checking the reward graph. An increasing trend in the reward margin indicates that the model is improving and generating better responses over time. To see how the trained model performs, you can use the TRL Chat CLI . $ trl chat --model_name_or_path trl-lib/Qwen2-0.5B-KTO <quentin_gallouedec>: What is the best programming language? <trl-lib/Qwen2-0.5B-KTO>: The best programming language can vary depending on individual preferences, industry-specific requirements, technical skills, and familiarity with the specific use case or task. Here are some widely-used programming languages that have been noted as popular and widely used: Here are some other factors to consider when choosing a programming language for a project: 1 JavaScript : JavaScript is at the heart of the web and can be used for building web applications, APIs, and interactive front-end applications like frameworks like React and Angular. It's similar to C, C++, and F# in syntax structure and is accessible and easy to learn, making it a popular choice for beginners and professionals alike. 2 Java : Known for its object-oriented programming (OOP) and support for Java 8 and .NET, Java is used for developing enterprise-level software applications, high-performance games, as well as mobile apps, game development, and desktop applications. 3 C++ : Known for its flexibility and scalability, C++ offers comprehensive object-oriented programming and is a popular choice for high-performance computing and other technical fields. It's a powerful platform for building real-world applications and games at scale. 4 Python : Developed by Guido van Rossum in 1991, Python is a high-level, interpreted, and dynamically typed language known for its simplicity, readability, and versatility. Expected dataset format KTO requires an unpaired preference dataset . Alternatively, you can provide a paired preference dataset (also known simply as a preference dataset ). In this case, the trainer will automatically convert it to an unpaired format by separating the chosen and rejected responses, assigning label = True to the chosen completions and label = False to the rejected ones. The KTOTrainer supports both conversational and standard dataset format. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset. In theory, the dataset should contain at least one chosen and one rejected completion. However, some users have successfully run KTO using only chosen or only rejected data. If using only rejected data, it is advisable to adopt a conservative learning rate. Example script We provide an example script to train a model using the KTO method. 
The script is available in trl/scripts/kto.py To test the KTO script with the Qwen2 0.5B model on the UltraFeedback dataset , run the following command: Copied accelerate launch trl/scripts/kto.py \ --model_name_or_path Qwen/Qwen2-0.5B-Instruct \ --dataset_name trl-lib/kto-mix-14k \ --num_train_epochs 1 \ --logging_steps 25 \ --output_dir Qwen2-0.5B-KTO Usage tips For Mixture of Experts Models: Enabling the auxiliary loss MOEs are the most efficient if the load is about equally distributed between experts. To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss. This option is enabled by setting output_router_logits=True in the model config (e.g. MixtralConfig ). To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter router_aux_loss_coef=... (default: 0.001 ) in the model config. Batch size recommendations Use a per-step batch size that is at least 4, and an effective batch size between 16 and 128. Even if your effective batch size is large, if your per-step batch size is poor, then the KL estimate in KTO will be poor. Learning rate recommendations Each choice of beta has a maximum learning rate it can tolerate before learning performance degrades. For the default setting of beta = 0.1 , the learning rate should typically not exceed 1e-6 for most models. As beta decreases, the learning rate should also be reduced accordingly. In general, we strongly recommend keeping the learning rate between 5e-7 and 5e-6 . Even with small datasets, we advise against using a learning rate outside this range. Instead, opt for more epochs to achieve better results. Imbalanced data The desirable_weight and undesirable_weight of the KTOConfig refer to the weights placed on the losses for desirable/positive and undesirable/negative examples. By default, they are both 1. However, if you have more of one or the other, then you should upweight the less common type such that the ratio of ( desirable_weight × \times × number of positives) to ( undesirable_weight × \times × number of negatives) is in the range 1:1 to 4:3. Logged metrics While training and evaluating we record the following reward metrics: rewards/chosen : the mean log probabilities of the policy model for the chosen responses scaled by beta rewards/rejected : the mean log probabilities of the policy model for the rejected responses scaled by beta rewards/margins : the mean difference between the chosen and corresponding rejected rewards logps/chosen : the mean log probabilities of the chosen completions logps/rejected : the mean log probabilities of the rejected completions logits/chosen : the mean logits of the chosen completions logits/rejected : the mean logits of the rejected completions kl : the KL divergence between the policy model and the reference model KTOTrainer class trl. 
KTOTrainer < source > ( model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str] = None ref_model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str, NoneType] = None args : KTOConfig = None train_dataset : typing.Optional[datasets.arrow_dataset.Dataset] = None eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None data_collator : typing.Optional[transformers.data.data_collator.DataCollator] = None model_init : typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers : tuple = (None, None) preprocess_logits_for_metrics : typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None peft_config : typing.Optional[dict] = None compute_metrics : typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], dict]] = None model_adapter_name : typing.Optional[str] = None ref_adapter_name : typing.Optional[str] = None ) Parameters model ( transformers.PreTrainedModel ) — The model to train, preferably an AutoModelForSequenceClassification . ref_model ( PreTrainedModelWrapper ) — Hugging Face transformer model with a casual language modelling head. Used for implicit reward computation and loss. If no reference model is provided, the trainer will create a reference model with the same architecture as the model to be optimized. args ( KTOConfig ) — The arguments to use for training. train_dataset ( datasets.Dataset ) — The dataset to use for training. eval_dataset ( datasets.Dataset ) — The dataset to use for evaluation. processing_class ( PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin , optional ) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model. data_collator ( transformers.DataCollator , optional , defaults to None ) — The data collator to use for training. If None is specified, the default data collator ( DPODataCollatorWithPadding ) will be used which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences. model_init ( Callable[[], transformers.PreTrainedModel] ) — The model initializer to use for training. If None is specified, the default model initializer will be used. callbacks ( list[transformers.TrainerCallback] ) — The callbacks to use for training. optimizers ( tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] ) — The optimizer and scheduler to use for training. preprocess_logits_for_metrics ( Callable[[torch.Tensor, torch.Tensor], torch.Tensor] ) — The function to use to preprocess the logits before computing the metrics. peft_config ( dict , defaults to None ) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model. disable_dropout ( bool , defaults to True ) — Whether or not to disable dropouts in model and ref_model . 
compute_metrics ( Callable[[EvalPrediction], dict] , optional ) — The function to use to compute the metrics. Must take a EvalPrediction and return a dictionary string to metric values. model_adapter_name ( str , defaults to None ) — Name of the train target PEFT adapter, when using LoRA with multiple adapters. ref_adapter_name ( str , defaults to None ) — Name of the reference PEFT adapter, when using LoRA with multiple adapters. Initialize KTOTrainer. compute_reference_log_probs < source > ( padded_batch : dict ) Computes log probabilities of the reference model for a single padded batch of a KTO specific dataset. create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) — The name of the model. dataset_name ( str , optional , defaults to None ) — The name of the dataset used for training. tags ( str , list[str] or None , optional , defaults to None ) — Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer . evaluation_loop < source > ( dataloader : DataLoader description : str prediction_loss_only : typing.Optional[bool] = None ignore_keys : typing.Optional[list[str]] = None metric_key_prefix : str = 'eval' ) Overriding built-in evaluation loop to store metrics for each batch. Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict() . Works both with or without labels. generate_from_model_and_ref < source > ( model batch : dict ) Generate samples from the model and reference model for the given batch of inputs. get_batch_logps < source > ( logits : FloatTensor labels : LongTensor average_log_prob : bool = False label_pad_token_id : int = -100 is_encoder_decoder : bool = False ) Parameters logits — Logits of the model (unnormalized). Shape: (batch_size, sequence_length, vocab_size) labels — Labels for which to compute the log probabilities. Label tokens with a value of label_pad_token_id are ignored. Shape: (batch_size, sequence_length) average_log_prob — If True, return the average log probability per (non-masked) token. Otherwise, return the sum of the log probabilities of the (non-masked) tokens. Compute the log probabilities of the given labels under the given logits. get_batch_loss_metrics < source > ( model batch : dict ) Compute the KTO loss and other metrics for the given batch of inputs for train or test. get_eval_dataloader < source > ( eval_dataset : typing.Optional[datasets.arrow_dataset.Dataset] = None ) Parameters eval_dataset ( torch.utils.data.Dataset , optional ) — If provided, will override self.eval_dataset . If it is a Dataset , columns not accepted by the model.forward() method are automatically removed. It must implement __len__ . Returns the evaluation ~torch.utils.data.DataLoader . Subclass of transformers.src.transformers.trainer.get_eval_dataloader to precompute ref_log_probs . get_train_dataloader < source > ( ) Returns the training ~torch.utils.data.DataLoader . Subclass of transformers.src.transformers.trainer.get_train_dataloader to precompute ref_log_probs . 
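To make the get_batch_logps description above concrete, here is a minimal PyTorch sketch of the computation it documents for decoder-only models. It illustrates the idea (gather the log probability of each realized label token, mask out label_pad_token_id positions, then sum or average) and is not the trainer's exact implementation:

import torch
import torch.nn.functional as F

def batch_logps(logits, labels, average_log_prob=False, label_pad_token_id=-100):
    # For decoder-only models, logits at position t predict the token at position t + 1,
    # so shift the labels left and drop the final logit.
    logits = logits[:, :-1, :]
    labels = labels[:, 1:].clone()

    mask = labels != label_pad_token_id
    labels[~mask] = 0  # any valid index; these positions are zeroed out by the mask below

    # Log probability of each realized label token.
    per_token_logps = torch.gather(
        F.log_softmax(logits, dim=-1), dim=2, index=labels.unsqueeze(2)
    ).squeeze(2)

    summed = (per_token_logps * mask).sum(-1)
    if average_log_prob:
        return summed / mask.sum(-1)
    return summed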
kto_loss < source > ( policy_chosen_logps : FloatTensor policy_rejected_logps : FloatTensor policy_KL_logps : FloatTensor reference_chosen_logps : FloatTensor reference_rejected_logps : FloatTensor reference_KL_logps : FloatTensor ) → A tuple of four tensors Parameters policy_chosen_logps — Log probabilities of the policy model for the chosen responses. Shape: (num(chosen) in batch_size,) policy_rejected_logps — Log probabilities of the policy model for the rejected responses. Shape: (num(rejected) in batch_size,) policy_KL_logps — Log probabilities of the policy model for the KL responses. Shape: (batch_size,) reference_chosen_logps — Log probabilities of the reference model for the chosen responses. Shape: (num(chosen) in batch_size,) reference_rejected_logps — Log probabilities of the reference model for the rejected responses. Shape: (num(rejected) in batch_size,) reference_KL_logps — Log probabilities of the reference model for the KL responses. Shape: (batch_size,) Returns A tuple of four tensors (losses, chosen_rewards, rejected_rewards, KL). The losses tensor contains the KTO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively. The KL tensor contains the detached KL divergence estimate between the policy and reference models. Compute the KTO loss for a batch of policy and reference model log probabilities. log < source > ( logs : dict start_time : typing.Optional[float] = None ) Parameters logs ( dict[str, float] ) — The values to log. start_time ( float or None , optional , defaults to None ) — Start time of the training. Log logs on the various objects watching training, including stored metrics. null_ref_context < source > ( ) Context manager for handling null reference model (that is, peft adapter manipulation). KTOConfig class trl. 
KTOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 1e-06 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 
'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False max_length : typing.Optional[int] = None max_prompt_length : typing.Optional[int] = None max_completion_length : typing.Optional[int] = None beta : float = 0.1 loss_type : typing.Literal['kto', 'apo_zero_unpaired'] = 'kto' desirable_weight : float = 1.0 undesirable_weight : float = 1.0 label_pad_token_id : int = -100 padding_value : typing.Optional[int] = None truncation_mode : str = 'keep_end' generate_during_eval : bool = False is_encoder_decoder : typing.Optional[bool] = None disable_dropout : bool = True precompute_ref_log_probs : bool = False model_init_kwargs : typing.Optional[dict[str, typing.Any]] = None ref_model_init_kwargs : typing.Optional[dict[str, typing.Any]] = None dataset_num_proc : typing.Optional[int] = None ) Parameters learning_rate ( float , optional , defaults to 5e-7 ) — Initial learning rate for AdamW optimizer. The default value replaces that of TrainingArguments . max_length ( Optional[int] , optional , defaults to None ) — Maximum length of the sequences (prompt + completion) in the batch. This argument is required if you want to use the default data collator. max_prompt_length ( Optional[int] , optional , defaults to None ) — Maximum length of the prompt. This argument is required if you want to use the default data collator. max_completion_length ( Optional[int] , optional , defaults to None ) — Maximum length of the completion. 
This argument is required if you want to use the default data collator and your model is an encoder-decoder. beta ( float , optional , defaults to 0.1 ) — Parameter controlling the deviation from the reference model. Higher β means less deviation from the reference model. loss_type ( str , optional , defaults to "kto" ) — Type of loss to use. Possible values are: "kto" : KTO loss from the KTO paper. "apo_zero_unpaired" : Unpaired variant of APO-zero loss from the APO paper. desirable_weight ( float , optional , defaults to 1.0 ) — Desirable losses are weighed by this factor to counter an unequal number of desirable and undesirable pairs. undesirable_weight ( float , optional , defaults to 1.0 ) — Undesirable losses are weighed by this factor to counter an unequal number of desirable and undesirable pairs. label_pad_token_id ( int , optional , defaults to -100 ) — Label pad token id. This argument is required if you want to use the default data collator. padding_value ( Optional[int] , optional , defaults to None ) — Padding value to use. If None , the padding value of the tokenizer is used. truncation_mode ( str , optional , defaults to "keep_end" ) — Truncation mode to use when the prompt is too long. Possible values are "keep_end" or "keep_start" . This argument is required if you want to use the default data collator. generate_during_eval ( bool , optional , defaults to False ) — If True , generates and logs completions from both the model and the reference model to W&B during evaluation. is_encoder_decoder ( Optional[bool] , optional , defaults to None ) — When using the model_init argument (callable) to instantiate the model instead of the model argument, you need to specify if the model returned by the callable is an encoder-decoder model. precompute_ref_log_probs ( bool , optional , defaults to False ) — Whether to precompute reference model log probabilities for training and evaluation datasets. This is useful when training without the reference model to reduce the total GPU memory needed. model_init_kwargs ( Optional[dict[str, Any]] , optional , defaults to None ) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string. ref_model_init_kwargs ( Optional[dict[str, Any]] , optional , defaults to None ) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the reference model from a string. dataset_num_proc ( Optional[int] , optional , defaults to None ) — Number of processes to use for processing the dataset. disable_dropout ( bool , optional , defaults to True ) — Whether to disable dropout in the model. Configuration class for the KTOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
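To make the reference above concrete, here is a minimal, hedged sketch of how KTOConfig and KTOTrainer fit together. It is not taken from the TRL documentation: the base model and the unpaired-preference dataset are placeholder choices, and most arguments are left at their defaults.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # example base model, swap in your own
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Unpaired preference data: each row has a prompt, a completion, and a boolean
# "label" marking the completion as desirable or undesirable.
train_dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # example dataset

training_args = KTOConfig(
    output_dir="kto-model",
    per_device_train_batch_size=4,
    beta=0.1,                # deviation from the reference model
    desirable_weight=1.0,    # reweight if desirable/undesirable counts are imbalanced
    undesirable_weight=1.0,
)

trainer = KTOTrainer(
    model=model,             # ref_model is omitted, so a copy of `model` is used
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()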
Training_from_memory.txt
Training from memory In the Quicktour , we saw how to build and train a tokenizer using text files, but we can actually use any Python iterator. In this section we'll see a few different ways of training our tokenizer. For all the examples listed below, we'll use the same Tokenizer and Trainer , built as follows: Copied from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers tokenizer = Tokenizer(models.Unigram()) tokenizer.normalizer = normalizers.NFKC() tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel() tokenizer.decoder = decoders.ByteLevel() trainer = trainers.UnigramTrainer( vocab_size= 20000 , initial_alphabet=pre_tokenizers.ByteLevel.alphabet(), special_tokens=[ "<PAD>" , "<BOS>" , "<EOS>" ], ) This tokenizer is based on the Unigram model. It normalizes the input using the NFKC Unicode normalization method, and uses a ByteLevel pre-tokenizer with the corresponding decoder. For more information on the components used here, you can check here . The most basic way As you probably guessed already, the easiest way to train our tokenizer is by using a List : Copied # First few lines of the "Zen of Python" https://www.python.org/dev/peps/pep-0020/ data = [ "Beautiful is better than ugly." , "Explicit is better than implicit." , "Simple is better than complex." , "Complex is better than complicated." , "Flat is better than nested." , "Sparse is better than dense." , "Readability counts." ] tokenizer.train_from_iterator(data, trainer=trainer) Easy, right? You can use anything that works as an iterator here, be it a List , a Tuple , or a numpy array. Anything works as long as it yields strings. Using the 🤗 Datasets library An awesome way to access one of the many datasets that exist out there is by using the 🤗 Datasets library. For more information about it, you should check the official documentation here . Let's start by loading our dataset: Copied import datasets dataset = datasets.load_dataset( "wikitext" , "wikitext-103-raw-v1" , split= "train+test+validation" ) The next step is to build an iterator over this dataset.
The easiest way to do this is probably by using a generator: Copied def batch_iterator ( batch_size= 1000 ): # Only keep the text column to avoid decoding the rest of the columns unnecessarily tok_dataset = dataset.select_columns( "text" ) for batch in tok_dataset. iter (batch_size): yield batch[ "text" ] As you can see here, for improved efficiency we can provide batches of examples to train on, instead of iterating over them one by one. By doing so, we can expect performance very similar to what we got while training directly from files. With our iterator ready, we just need to launch the training. In order to improve the look of our progress bars, we can specify the total length of the dataset: Copied tokenizer.train_from_iterator(batch_iterator(), trainer=trainer, length= len (dataset)) And that's it! Using gzip files Since gzip files in Python can be used as iterators, it is extremely simple to train on such files: Copied import gzip with gzip. open ( "data/my-file.0.gz" , "rt" ) as f: tokenizer.train_from_iterator(f, trainer=trainer) Now if we wanted to train from multiple gzip files, it wouldn't be much harder: Copied files = [ "data/my-file.0.gz" , "data/my-file.1.gz" , "data/my-file.2.gz" ] def gzip_iterator (): for path in files: with gzip. open (path, "rt" ) as f: for line in f: yield line tokenizer.train_from_iterator(gzip_iterator(), trainer=trainer) And voilà!
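Once training finishes, you will usually want to persist the tokenizer. The following short sketch is not part of the original guide; it assumes the tokenizer trained above, writes it to a local JSON file, reloads it, and optionally wraps it for use with the transformers library (assuming transformers is installed).

tokenizer.save("unigram-tokenizer.json")

from tokenizers import Tokenizer
reloaded = Tokenizer.from_file("unigram-tokenizer.json")
print(reloaded.encode("Simple is better than complex.").tokens)

# Optional: wrap the saved file so it behaves like any other fast tokenizer.
from transformers import PreTrainedTokenizerFast
hf_tokenizer = PreTrainedTokenizerFast(tokenizer_file="unigram-tokenizer.json")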
Access_and_read_Logs.txt
Access and read Logs Hugging Face Inference Endpoints provides access to the logs of your Endpoints through the UI, in the "Logs" tab of your Endpoint. You will have access to the build logs of your image artifacts as well as to the Container Logs during inference. The Container Logs are only available when your Endpoint is in the "Running" state. Note: If your Endpoint creation is in the "Failed" state, you can check the Build Logs to see what the reason was, e.g. a wrong version of a dependency. Build Logs: (screenshot) Container Logs: (screenshot)
Using_Sentence_Transformers_at_Hugging_Face.txt
Using Sentence Transformers at Hugging Face sentence-transformers is a library that provides easy methods to compute embeddings (dense vector representations) for sentences, paragraphs and images. Texts are embedded in a vector space such that similar text is close, which enables applications such as semantic search, clustering, and retrieval. Exploring sentence-transformers in the Hub You can find over 500 sentence-transformers models by filtering at the left of the models page . Most of these models support different tasks, such as feature-extraction to generate the embedding, and sentence-similarity to determine how similar a given sentence is to others. You can also find an overview of the official pre-trained models in the official docs . All models on the Hub come with the following features: An automatically generated model card with a description, example code snippets, architecture overview, and more. Metadata tags that help with discoverability and contain information such as the license. An interactive widget you can use to play with the model directly in the browser. An Inference API that allows you to make inference requests (see the short sketch after this list).
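As a hedged illustration of that last point (not from the original page), the snippet below calls the hosted Inference API for the sentence-similarity task through the huggingface_hub client; the model id is one of the official sentence-transformers checkpoints, and a valid Hugging Face token is assumed to be configured.

from huggingface_hub import InferenceClient

client = InferenceClient()
scores = client.sentence_similarity(
    "How big is London",
    other_sentences=[
        "London has 9,787,426 inhabitants at the 2011 census",
        "London is known for its financial district",
    ],
    model="sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
)
print(scores)  # one similarity score per candidate sentence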
Using existing models The pre-trained models on the Hub can be loaded with a single line of code: Copied from sentence_transformers import SentenceTransformer model = SentenceTransformer( 'model_name' ) Here is an example that encodes sentences and then computes the distance between them for doing semantic search. Copied from sentence_transformers import SentenceTransformer, util model = SentenceTransformer( 'multi-qa-MiniLM-L6-cos-v1' ) query_embedding = model.encode( 'How big is London' ) passage_embedding = model.encode([ 'London has 9,787,426 inhabitants at the 2011 census' , 'London is known for its financial district' ]) print ( "Similarity:" , util.dot_score(query_embedding, passage_embedding)) If you want to see how to load a specific model, you can click Use in sentence-transformers and you will be given a working snippet showing how to load it! Sharing your models You can share your Sentence Transformers models by using the save_to_hub method of a trained model. Copied from sentence_transformers import SentenceTransformer # Load or train a model model.save_to_hub( "my_new_model" ) This command creates a repository with an automatically generated model card, an inference widget, example code snippets, and more! Here is an example. Additional resources Sentence Transformers library . Sentence Transformers docs . Integration with Hub announcement .
Efficient_Training_on_Multiple_GPUs.txt
Efficient Training on Multiple GPUs
If training a model on a single GPU is too slow or if the model’s weights do not fit in a single GPU’s memory, transitioning to a multi-GPU setup may be a viable option. Prior to making this transition, thoroughly explore all the strategies covered in the Methods and tools for efficient training on a single GPU as they are universally applicable to model training on any number of GPUs. Once you have employed those strategies and found them insufficient for your case on a single GPU, consider moving to multiple GPUs. Transitioning from a single GPU to multiple GPUs requires the introduction of some form of parallelism, as the workload must be distributed across the resources. Multiple techniques can be employed to achieve parallelism, such as data parallelism, tensor parallelism, and pipeline parallelism. It’s important to note that there isn’t a one-size-fits-all solution, and the optimal settings depend on the specific hardware configuration you are using. This guide offers an in-depth overview of individual types of parallelism, as well as guidance on ways to combine techniques and choosing an appropriate approach. For step-by-step tutorials on distributed training, please refer to the 🤗 Accelerate documentation . While the main concepts discussed in this guide are likely applicable across frameworks, here we focus on PyTorch-based implementations. Before diving deeper into the specifics of each technique, let’s go over the rough decision process when training large models on a large infrastructure. Scalability strategy Begin by estimating how much vRAM is required to train your model. For models hosted on the 🤗 Hub, use our Model Memory Calculator , which gives you accurate calculations within a few percent margin. Parallelization strategy for a single Node / multi-GPU setup When training a model on a single node with multiple GPUs, your choice of parallelization strategy can significantly impact performance.
Here’s a breakdown of your options: Case 1: Your model fits onto a single GPU If your model can comfortably fit onto a single GPU, you have two primary options: DDP - Distributed DataParallel Zero Redundancy Optimizer (ZeRO) - depending on the situation and configuration used, this method may or may not be faster, however, it’s worth experimenting with it. Case 2: Your model doesn’t fit onto a single GPU: If your model is too large for a single GPU, you have several alternatives to consider: PipelineParallel (PP) ZeRO TensorParallel (TP) With very fast inter-node connectivity (e.g., NVLINK or NVSwitch) all three strategies (PP, ZeRO, TP) should result in similar performance. However, without these, PP will be faster than TP or ZeRO. The degree of TP may also make a difference. It’s best to experiment with your specific setup to determine the most suitable strategy. TP is almost always used within a single node. That is TP size <= GPUs per node. Case 3: Largest layer of your model does not fit onto a single GPU If you are not using ZeRO, you have to use TensorParallel (TP), because PipelineParallel (PP) alone won’t be sufficient to accommodate the large layer. If you are using ZeRO, additionally adopt techniques from the Methods and tools for efficient training on a single GPU . Parallelization strategy for a multi-Node / multi-GPU setup When you have fast inter-node connectivity (e.g., NVLINK or NVSwitch) consider using one of these options: ZeRO - as it requires close to no modifications to the model A combination of PipelineParallel(PP) with TensorParallel(TP) and DataParallel(DP) - this approach will result in fewer communications, but requires significant changes to the model When you have slow inter-node connectivity and still low on GPU memory: Employ a combination of DataParallel(DP) with PipelineParallel(PP), TensorParallel(TP), and ZeRO. In the following sections of this guide we dig deeper into how these different parallelism methods work. Data Parallelism Even with only 2 GPUs, you can readily leverage the accelerated training capabilities offered by PyTorch’s built-in features, such as DataParallel (DP) and DistributedDataParallel (DDP). Note that PyTorch documentation recommends to prefer DistributedDataParallel (DDP) over DataParallel (DP) for multi-GPU training as it works for all models. Let’s take a look at how these two methods work and what makes them different. DataParallel vs DistributedDataParallel To understand the key differences in inter-GPU communication overhead between the two methods, let’s review the processes per batch: DDP : At the start time the main process replicates the model once from GPU 0 to the rest of GPUs Then for each batch: Each GPU directly consumes its mini-batch of data. During backward , once the local gradients are ready, they are averaged across all processes. DP : For each batch: GPU 0 reads the batch of data and then sends a mini-batch to each GPU. The up-to-date model is replicated from GPU 0 to each GPU. forward is executed, and output from each GPU is sent to GPU 0 to compute the loss. The loss is distributed from GPU 0 to all GPUs, and backward is run. Gradients from each GPU are sent to GPU 0 and averaged. Key differences include: DDP performs only a single communication per batch - sending gradients, while DP performs five different data exchanges per batch. DDP copies data using torch.distributed , while DP copies data within the process via Python threads (which introduces limitations associated with GIL). 
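To make the DDP flow above concrete, here is a minimal, self-contained sketch (not taken from this guide): torchrun starts one process per GPU, each process joins the NCCL process group, wraps the model in DistributedDataParallel, and local gradients are averaged across processes during backward(). The tiny Linear model and random batch are placeholders for a real model and dataloader.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")           # reads RANK/WORLD_SIZE set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 512).to(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    inputs = torch.randn(8, 512, device=local_rank)   # each rank consumes its own mini-batch
    loss = model(inputs).pow(2).mean()
    loss.backward()                                   # local gradients are all-reduced here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

# Launch with: torchrun --nproc_per_node=2 ddp_sketch.py

DataParallel, by contrast, needs no launcher or process group: wrapping the model in torch.nn.DataParallel is enough, at the cost of the extra per-batch traffic described above.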
As a result, DistributedDataParallel (DDP) is generally faster than DataParallel (DP) unless you have slow GPU card inter-connectivity. Under DP, GPU 0 performs significantly more work than other GPUs, resulting in GPU under-utilization. DDP supports distributed training across multiple machines, whereas DP does not. This is not an exhaustive list of differences between DP and DDP, however, other nuances are out of scope of this guide. You can get a deeper understanding of these methods by reading this article . Let’s illustrate the differences between DP and DDP with an experiment. We’ll benchmark the differences between DP and DDP with an added context of NVLink presence: Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks ( NV2 in nvidia-smi topo -m ). Software: pytorch-1.8-to-be + cuda-11.0 / transformers==4.3.0.dev0 . To disable the NVLink feature on one of the benchmarks, we use NCCL_P2P_DISABLE=1 . Here is the benchmarking code and outputs: DP Copied rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 { 'train_runtime' : 110.5948, 'train_samples_per_second' : 1.808, 'epoch' : 0.69} DDP w/ NVlink Copied rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 { 'train_runtime' : 101.9003, 'train_samples_per_second' : 1.963, 'epoch' : 0.69} DDP w/o NVlink Copied rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path openai-community/gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 { 'train_runtime' : 131.4367, 'train_samples_per_second' : 1.522, 'epoch' : 0.69} Here are the same benchmarking results gathered in a table for convenience: Type NVlink Time 2:DP Y 110s 2:DDP Y 101s 2:DDP N 131s As you can see, in this case DP is ~10% slower than DDP with NVlink, but ~15% faster than DDP without NVlink. The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync, the more a slow link will impede the overall runtime. ZeRO Data Parallelism ZeRO-powered data parallelism (ZeRO-DP) is illustrated in the following diagram from this blog post . While it may appear complex, it is a very similar concept to DataParallel (DP). The difference is that instead of replicating the full model parameters, gradients and optimizer states, each GPU stores only a slice of it. Then, at run-time when the full layer parameters are needed just for the given layer, all GPUs synchronize to give each other parts that they miss. To illustrate this idea, consider a simple model with 3 layers (La, Lb, and Lc), where each layer has 3 parameters. 
Layer La, for example, has weights a0, a1 and a2: Copied La | Lb | Lc --- | ---- | --- a0 | b0 | c0 a1 | b1 | c1 a2 | b2 | c2 If we have 3 GPUs, ZeRO-DP splits the model onto 3 GPUs like so: Copied GPU0: La | Lb | Lc --- | ---- | --- a0 | b0 | c0 GPU1: La | Lb | Lc --- | ---- | --- a1 | b1 | c1 GPU2: La | Lb | Lc --- | ---- | --- a2 | b2 | c2 In a way, this is the same horizontal slicing as tensor parallelism, as opposed to Vertical slicing, where one puts whole layer-groups on different GPUs. Now let’s see how this works: Each of these GPUs will get the usual mini-batch as it works in DP: Copied x0 = > GPU0 x1 = > GPU1 x2 = > GPU2 The inputs are passed without modifications as if they would be processed by the original model. First, the inputs get to the layer La . What happens at this point? On GPU0: the x0 mini-batch requires the a0, a1, a2 parameters to do its forward path through the layer, but the GPU0 has only a0. It will get a1 from GPU1 and a2 from GPU2, bringing all the pieces of the model together. In parallel, GPU1 gets another mini-batch - x1. GPU1 has the a1 parameter, but needs a0 and a2, so it gets those from GPU0 and GPU2. Same happens to GPU2 that gets the mini-batch x2. It gets a0 and a1 from GPU0 and GPU1. This way each of the 3 GPUs gets the full tensors reconstructed and makes a forward pass with its own mini-batch. As soon as the calculation is done, the data that is no longer needed gets dropped - it’s only used during the calculation. The reconstruction is done efficiently via a pre-fetch. Then the whole process is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La. This mechanism is similar to an efficient group backpacking strategy: person A carries the tent, person B carries the stove, and person C carries the axe. Each night they all share what they have with others and get from others what they don’t have, and in the morning they pack up their allocated type of gear and continue on their way. This is what ZeRO DP/Sharded DDP is. Compare this strategy to the simple one where each person has to carry their own tent, stove and axe (similar to DataParallel (DP and DDP) in PyTorch), which would be far more inefficient. While reading the literature on this topic you may encounter the following synonyms: Sharded, Partitioned. If you pay close attention the way ZeRO partitions the model’s weights - it looks very similar to tensor parallelism which will be discussed later. This is because it partitions/shards each layer’s weights, unlike vertical model parallelism which is discussed next. Implementations: DeepSpeed ZeRO-DP stages 1+2+3 Accelerate integration transformers integration From Naive Model Parallelism to Pipeline Parallelism To explain Pipeline parallelism, we’ll first look into Naive Model Parallelism (MP), also known as Vertical MP. This approach involves distributing groups of model layers across multiple GPUs by assigning specific layers to specific GPUs with .to() . As data flows through these layers, it is moved to the same GPU as the layer, while the other layers remain untouched. We refer to this Model parallelism as “Vertical” because of how models are typically visualized. 
For example, the following diagram shows an 8-layer model split vertically into two slices, placing layers 0-3 onto GPU0 and 4-7 to GPU1: Copied ================ | Layer | | | 0 | | | 1 | GPU0 | | 2 | | | 3 | | ================ | Layer | | | 4 | | | 5 | GPU1 | | 6 | | | 7 | | ================ In this example, when data moves from layer 0 to 3, it’s no different from regular forward pass. However, passing data from layer 3 to 4 requires moving it from GPU0 to GPU1, introducing a communication overhead. If the participating GPUs are on the same compute node (e.g. same physical machine) this copying is fast, but if the GPUs are distributed across different compute nodes (e.g. multiple machines), the communication overhead could be substantially greater. Following that, layers 4 to 7 work as they would in the original model. Upon completion of the 7th layer, there is often a need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer). Now the loss can be computed and the optimizer can do its work. Naive Model Parallelism comes several shortcomings: All but one GPU are idle at any given moment : if 4 GPUs are used, it’s nearly identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware. Overhead in data transfer between devices : E.g. 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, but a single 24GB card will complete the training faster, because it doesn’t have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model you can with 4x 40GB cards (but barely because of the gradient and optimizer states) Copying shared embeddings : Shared embeddings may need to get copied back and forth between GPUs. Now that you are familiar with how the naive approach to model parallelism works and its shortcomings, let’s look at Pipeline Parallelism (PP). PP is almost identical to a naive MP, but it solves the GPU idling problem by chunking the incoming batch into micro-batches and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process. The following illustration from the GPipe paper shows the naive MP on the top, and PP on the bottom: At the bottom of the diagram, you can observe that the Pipeline Parallelism (PP) approach minimizes the number of idle GPU zones, referred to as ‘bubbles’. Both parts of the diagram show a parallelism level of degree 4, meaning that 4 GPUs are involved in the pipeline. You can see that there’s a forward path of 4 pipe stages (F0, F1, F2 and F3) followed by a backward path in reverse order (B3, B2, B1, and B0). PP introduces a new hyperparameter to tune - chunks , which determines how many data chunks are sent in a sequence through the same pipe stage. For example, in the bottom diagram you can see chunks=4 . GPU0 performs the same forward path on chunk 0, 1, 2 and 3 (F0,0, F0,1, F0,2, F0,3) and then it waits for other GPUs to do complete their work. Only when the other GPUs begin to complete their work, GPU0 starts to work again doing the backward path for chunks 3, 2, 1 and 0 (B0,3, B0,2, B0,1, B0,0). Note that this is the same concept as gradient accumulation steps. PyTorch uses chunks , while DeepSpeed refers to the same hyperparameter as gradient accumulation steps. Because of the chunks, PP introduces the notion of micro-batches (MBS). 
DP splits the global data batch size into mini-batches, so if you have a DP degree of 4, a global batch size of 1024 gets split up into 4 mini-batches of 256 each (1024/4). And if the number of chunks (or GAS) is 32 we end up with a micro-batch size of 8 (256/32). Each Pipeline stage works with a single micro-batch at a time. To calculate the global batch size of the DP + PP setup, use the formula: mbs * chunks * dp_degree ( 8 * 32 * 4 = 1024 ). With chunks=1 you end up with the naive MP, which is inefficient. With a large chunks value you end up with tiny micro-batch sizes which is also inefficient. For this reason, we encourage to experiment with the chunks value to find the one that leads to the most efficient GPUs utilization. You may notice a bubble of “dead” time on the diagram that can’t be parallelized because the last forward stage has to wait for backward to complete the pipeline. The purpose of finding the best value for chunks is to enable a high concurrent GPU utilization across all participating GPUs which translates to minimizing the size of the bubble. Pipeline API solutions have been implemented in: PyTorch DeepSpeed Megatron-LM These come with some shortcomings: They have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into a nn.Sequential sequence of the same, which may require changes to the design of the model. Currently the Pipeline API is very restricted. If you had a bunch of Python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have a batch size as the very first dimension, since pipeline is going to chunk the mini batch into micro-batches. Possible improvements are being discussed here https://github.com/pytorch/pytorch/pull/50693 Conditional control flow at the level of pipe stages is not possible - e.g., Encoder-Decoder models like T5 require special workarounds to handle a conditional encoder stage. They have to arrange each layer so that the output of one layer becomes an input to the other layer. More recent solutions include: Varuna Sagemaker We have not experimented with Varuna and SageMaker but their papers report that they have overcome the list of problems mentioned above and that they require smaller changes to the user’s model. Implementations: PyTorch (initial support in pytorch-1.8, and progressively getting improved in 1.9 and more so in 1.10). Some examples DeepSpeed Megatron-LM has an internal implementation - no API. Varuna SageMaker - this is a proprietary solution that can only be used on AWS. OSLO - this is implemented based on the Hugging Face Transformers. 🤗 Transformers status: as of this writing none of the models supports full-PP. GPT2 and T5 models have naive MP support. The main obstacle is being unable to convert the models to nn.Sequential and have all the inputs to be Tensors. This is because currently the models include many features that make the conversion very complicated, and will need to be removed to accomplish that. DeepSpeed and Megatron-LM integrations are available in 🤗 Accelerate Other approaches: DeepSpeed, Varuna and SageMaker use the concept of an Interleaved Pipeline Here the bubble (idle time) is further minimized by prioritizing backward passes. Varuna further attempts to improve the schedule by using simulations to discover the most efficient scheduling. 
OSLO has pipeline parallelism implementation based on the Transformers without nn.Sequential conversion. Tensor Parallelism In Tensor Parallelism, each GPU processes a slice of a tensor and only aggregates the full tensor for operations requiring it. To describe this method, this section of the guide relies on the concepts and diagrams from the Megatron-LM paper: Efficient Large-Scale Language Model Training on GPU Clusters . The main building block of any transformer is a fully connected nn.Linear followed by a nonlinear activation GeLU . The dot dot-product part of it, following the Megatron’s paper notation, can be written as Y = GeLU(XA) , where X is an input vector, Y is the output vector, and A is the weight matrix. If we look at the computation in matrix form, you can see how the matrix multiplication can be split between multiple GPUs: If we split the weight matrix A column-wise across N GPUs and perform matrix multiplications XA_1 through XA_n in parallel, then we will end up with N output vectors Y_1, Y_2, ..., Y_n which can be fed into GeLU independently: Using this principle, we can update a multi-layer perceptron of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The Megatron-LM paper authors provide a helpful illustration for that: Parallelizing the multi-headed attention layers is even simpler, since they are already inherently parallel, due to having multiple independent heads! Special considerations: TP requires very fast network, and therefore it’s not advisable to do TP across more than one node. Practically, if a node has 4 GPUs, the highest TP degree is therefore 4. If you need a TP degree of 8, you need to use nodes that have at least 8 GPUs. This section is based on the original much more detailed TP overview . by @anton-l . Alternative names: DeepSpeed calls it tensor slicing Implementations: Megatron-LM has an internal implementation, as it’s very model-specific parallelformers (only inference at the moment) SageMaker - this is a proprietary solution that can only be used on AWS. OSLO has the tensor parallelism implementation based on the Transformers. SageMaker combines TP with DP for a more efficient processing. 🤗 Transformers status: core: not yet implemented in the core but if you want inference parallelformers provides this support for most of our models. So until this is implemented in the core you can use theirs. And hopefully training mode will be supported too. Deepspeed-Inference also supports our BERT, GPT-2, and GPT-Neo models in their super-fast CUDA-kernel-based inference mode, see more here 🤗 Accelerate integrates with TP from Megatron-LM . Data Parallelism + Pipeline Parallelism The following diagram from the DeepSpeed pipeline tutorial demonstrates how one can combine DP with PP. Here it’s important to see how DP rank 0 doesn’t see GPU2 and DP rank 1 doesn’t see GPU3. To DP there is just GPUs 0 and 1 where it feeds data as if there were just 2 GPUs. GPU0 “secretly” offloads some of its load to GPU2 using PP. And GPU1 does the same by enlisting GPU3 to its aid. Since each dimension requires at least 2 GPUs, here you’d need at least 4 GPUs. Implementations: DeepSpeed Megatron-LM Varuna SageMaker OSLO 🤗 Transformers status: not yet implemented Data Parallelism + Pipeline Parallelism + Tensor Parallelism To get an even more efficient training a 3D parallelism is used where PP is combined with TP and DP. This can be seen in the following diagram. 
This diagram is from a blog post 3D parallelism: Scaling to trillion-parameter models , which is a good read as well. Since each dimension requires at least 2 GPUs, here you’d need at least 8 GPUs. Implementations: DeepSpeed - DeepSpeed also includes an even more efficient DP, which they call ZeRO-DP. Megatron-LM Varuna SageMaker OSLO 🤗 Transformers status: not yet implemented, since we have no PP and TP. ZeRO Data Parallelism + Pipeline Parallelism + Tensor Parallelism One of the main features of DeepSpeed is ZeRO, which is a super-scalable extension of DP. It has already been discussed in ZeRO Data Parallelism . Normally it’s a standalone feature that doesn’t require PP or TP. But it can be combined with PP and TP. When ZeRO-DP is combined with PP (and optionally TP) it typically enables only ZeRO stage 1 (optimizer sharding). While it’s theoretically possible to use ZeRO stage 2 (gradient sharding) with Pipeline Parallelism, it will have negative performance impacts. There would need to be an additional reduce-scatter collective for every micro-batch to aggregate the gradients before sharding, which adds a potentially significant communication overhead. By nature of Pipeline Parallelism, small micro-batches are used and instead the focus is on trying to balance arithmetic intensity (micro-batch size) with minimizing the Pipeline bubble (number of micro-batches). Therefore those communication costs are going to impact the performance. In addition, there are already fewer layers than normal due to PP and so the memory savings won’t be huge. PP already reduces gradient size by 1/PP , and so gradient sharding savings on top of that are less significant than pure DP. ZeRO stage 3 is not a good choice either for the same reason - more inter-node communications required. And since we have ZeRO, the other benefit is ZeRO-Offload. Since this is stage 1 optimizer states can be offloaded to CPU. Implementations: Megatron-DeepSpeed and Megatron-Deepspeed from BigScience , which is the fork of the former repo. OSLO Important papers: Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model 🤗 Transformers status: not yet implemented, since we have no PP and TP. FlexFlow FlexFlow also solves the parallelization problem in a slightly different approach. Paper: “Beyond Data and Model Parallelism for Deep Neural Networks” by Zhihao Jia, Matei Zaharia, Alex Aiken It performs a sort of 4D Parallelism over Sample-Operator-Attribute-Parameter. Sample = Data Parallelism (sample-wise parallel) Operator = Parallelize a single operation into several sub-operations Attribute = Data Parallelism (length-wise parallel) Parameter = Model Parallelism (regardless of dimension - horizontal or vertical) Examples: Sample Let’s take 10 batches of sequence length 512. If we parallelize them by sample dimension into 2 devices, we get 10 x 512 which becomes 5 x 2 x 512. Operator If we perform layer normalization, we compute std first and mean second, and then we can normalize data. Operator parallelism allows computing std and mean in parallel. So if we parallelize them by operator dimension into 2 devices (cuda:0, cuda:1), first we copy input data into both devices, and cuda:0 computes std, cuda:1 computes mean at the same time. Attribute We have 10 batches of 512 length. If we parallelize them by attribute dimension into 2 devices, 10 x 512 will be 10 x 2 x 256. Parameter It is similar with tensor model parallelism or naive layer-wise model parallelism. 
The significance of this framework is that it takes resources like (1) GPU/TPU/CPU vs. (2) RAM/DRAM vs. (3) fast-intra-connect/slow-inter-connect and it automatically optimizes all these algorithmically deciding which parallelisation to use where. One very important aspect is that FlexFlow is designed for optimizing DNN parallelizations for models with static and fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations. So the promise is very attractive - it runs a 30min simulation on the cluster of choice and it comes up with the best strategy to utilise this specific environment. If you add/remove/replace any parts it’ll run and re-optimize the plan for that. And then you can train. A different setup will have its own custom optimization. 🤗 Transformers status: Transformers models are FX-trace-able via transformers.utils.fx , which is a prerequisite for FlexFlow, however, changes are required on the FlexFlow side to make it work with Transformers models. GPU selection When training on multiple GPUs, you can specify the number of GPUs to use and in what order. This can be useful for instance when you have GPUs with different computing power and want to use the faster GPU first. The selection process works for both DistributedDataParallel and DataParallel to use only a subset of the available GPUs, and you don’t need Accelerate or the DeepSpeed integration . Number of GPUs For example, if you have 4 GPUs and you only want to use the first 2: torchrun Accelerate DeepSpeed Use the --nproc_per_node to select how many GPUs to use. Copied torchrun --nproc_per_node=2 trainer-program.py ... Order of GPUs Now, to select which GPUs to use and their order, you’ll use the CUDA_VISIBLE_DEVICES environment variable. It is easiest to set the environment variable in a ~/bashrc or another startup config file. CUDA_VISIBLE_DEVICES is used to map which GPUs are used. For example, if you have 4 GPUs (0, 1, 2, 3) and you only want to run GPUs 0 and 2: Copied CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ... Only the 2 physical GPUs (0 and 2) are “visible” to PyTorch and these are mapped to cuda:0 and cuda:1 respectively. You can also reverse the order of the GPUs to use 2 first. Now, the mapping is cuda:1 for GPU 0 and cuda:0 for GPU 2. Copied CUDA_VISIBLE_DEVICES=2,0 torchrun trainer-program.py ... You can also set the CUDA_VISIBLE_DEVICES environment variable to an empty value to create an environment without GPUs. Copied CUDA_VISIBLE_DEVICES= python trainer-program.py ... As with any environment variable, they can be exported instead of being added to the command line. However, this is not recommended because it can be confusing if you forget how the environment variable was setup and you end up using the wrong GPUs. Instead, it is common practice to set the environment variable for a specific training run on the same command line. CUDA_DEVICE_ORDER is an alternative environment variable you can use to control how the GPUs are ordered. You can either order them by: PCIe bus ID’s that matches the order of nvidia-smi and rocm-smi for NVIDIA and AMD GPUs respectively Copied export CUDA_DEVICE_ORDER=PCI_BUS_ID GPU compute ability Copied export CUDA_DEVICE_ORDER=FASTEST_FIRST The CUDA_DEVICE_ORDER is especially useful if your training setup consists of an older and newer GPU, where the older GPU appears first, but you cannot physically swap the cards to make the newer GPU appear first. 
In this case, set CUDA_DEVICE_ORDER=FASTEST_FIRST to always use the newer and faster GPU first (nvidia-smi or rocm-smi still reports the GPUs in their PCIe order). Alternatively, you could set export CUDA_VISIBLE_DEVICES=1,0 to achieve the same effect.
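If you want to double-check which physical GPUs are visible and how they are mapped, you can run a small script like the minimal sketch below (saved as, say, check_gpus.py, a hypothetical filename) under different CUDA_VISIBLE_DEVICES settings: Copied
import os
import torch

print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("visible GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    # cuda:i is the i-th *visible* device, not necessarily physical GPU i
    print(f"cuda:{i} ->", torch.cuda.get_device_name(i))
For example, CUDA_VISIBLE_DEVICES=2,0 python check_gpus.py should report two visible devices, with cuda:0 corresponding to physical GPU 2 and cuda:1 to physical GPU 0.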
Pipeline_callbacks.txt
Pipeline callbacks
The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. The callback function is executed at the end of each step, and modifies the pipeline attributes and variables for the next step. This is really useful for dynamically adjusting certain pipeline attributes or modifying tensor variables. This versatility allows for interesting use cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale.
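In practice, a callback is just a function that the pipeline calls with itself, the current step index, the current timestep, and a dict of tensor variables, and that returns this dict. A minimal sketch of the expected signature (the concrete examples later in this guide follow the same pattern): Copied
def my_callback(pipeline, step_index, timestep, callback_kwargs):
    # inspect or modify pipeline attributes (e.g. pipeline._guidance_scale)
    # and/or the tensor variables in callback_kwargs here
    return callback_kwargs  # always return the (possibly modified) dict

# then pass it to a pipeline call:
# image = pipeline(prompt, callback_on_step_end=my_callback).images[0]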
With callbacks, you can implement new features without modifying the underlying code! 🤗 Diffusers currently only supports callback_on_step_end , but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! This guide will demonstrate how callbacks work by a few features you can implement with them. Official callbacks We provide a list of callbacks you can plug into an existing pipeline and modify the denoising loop. This is the current list of official callbacks: SDCFGCutoffCallback : Disables the CFG after a certain number of steps for all SD 1.5 pipelines, including text-to-image, image-to-image, inpaint, and controlnet. SDXLCFGCutoffCallback : Disables the CFG after a certain number of steps for all SDXL pipelines, including text-to-image, image-to-image, inpaint, and controlnet. IPAdapterScaleCutoffCallback : Disables the IP Adapter after a certain number of steps for all pipelines supporting IP-Adapter. If you want to add a new official callback, feel free to open a feature request or submit a PR . To set up a callback, you need to specify the number of denoising steps after which the callback comes into effect. You can do so by using either one of these two arguments cutoff_step_ratio : Float number with the ratio of the steps. cutoff_step_index : Integer number with the exact number of the step. Copied import torch from diffusers import DPMSolverMultistepScheduler, StableDiffusionXLPipeline from diffusers.callbacks import SDXLCFGCutoffCallback callback = SDXLCFGCutoffCallback(cutoff_step_ratio= 0.4 ) # can also be used with cutoff_step_index # callback = SDXLCFGCutoffCallback(cutoff_step_ratio=None, cutoff_step_index=10) pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , ).to( "cuda" ) pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, use_karras_sigmas= True ) prompt = "a sports car at the road, best quality, high quality, high detail, 8k resolution" generator = torch.Generator(device= "cpu" ).manual_seed( 2628670641 ) out = pipeline( prompt=prompt, negative_prompt= "" , guidance_scale= 6.5 , num_inference_steps= 25 , generator=generator, callback_on_step_end=callback, ) out.images[ 0 ].save( "official_callback.png" ) without SDXLCFGCutoffCallback with SDXLCFGCutoffCallback Dynamic classifier-free guidance Dynamic classifier-free guidance (CFG) is a feature that allows you to disable CFG after a certain number of inference steps which can help you save compute with minimal cost to performance. The callback function for this should have the following arguments: pipeline (or the pipeline instance) provides access to important properties such as num_timesteps and guidance_scale . You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipeline._guidance_scale=0.0 . step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timesteps . callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. 
Some common variables include latents and prompt_embeds . For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. Your callback function should look something like this: Copied def callback_dynamic_cfg ( pipe, step_index, timestep, callback_kwargs ): # adjust the batch_size of prompt_embeds according to guidance_scale if step_index == int (pipeline.num_timesteps * 0.4 ): prompt_embeds = callback_kwargs[ "prompt_embeds" ] prompt_embeds = prompt_embeds.chunk( 2 )[- 1 ] # update guidance_scale and prompt_embeds pipeline._guidance_scale = 0.0 callback_kwargs[ "prompt_embeds" ] = prompt_embeds return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs . Copied import torch from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16) pipeline = pipeline.to( "cuda" ) prompt = "a photo of an astronaut riding a horse on mars" generator = torch.Generator(device= "cuda" ).manual_seed( 1 ) out = pipeline( prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=[ 'prompt_embeds' ] ) out.images[ 0 ].save( "out_custom_cfg.png" ) Interrupt the diffusion process The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline . Stopping the diffusion process early is useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. This callback function should take the following arguments: pipeline , i , t , and callback_kwargs (this must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" ) pipeline.enable_model_cpu_offload() num_inference_steps = 50 def interrupt_callback ( pipeline, i, t, callback_kwargs ): stop_idx = 10 if i == stop_idx: pipeline._interrupt = True return callback_kwargs pipeline( "A photo of a cat" , num_inference_steps=num_inference_steps, callback_on_step_end=interrupt_callback, ) Display image after each generation step This tip was contributed by asomoza . Display an image after each generation step by accessing and converting the latents after each step into an image. The latent space is compressed to 128x128, so the images are also 128x128 which is useful for a quick preview. Use the function below to convert the SDXL latents (4 channels) to RGB tensors (3 channels) as explained in the Explaining the SDXL latent space blog post. 
Copied def latents_to_rgb ( latents ): weights = ( ( 60 , - 60 , 25 , - 70 ), ( 60 , - 5 , 15 , - 50 ), ( 60 , 10 , - 5 , - 35 ), ) weights_tensor = torch.t(torch.tensor(weights, dtype=latents.dtype).to(latents.device)) biases_tensor = torch.tensor(( 150 , 140 , 130 ), dtype=latents.dtype).to(latents.device) rgb_tensor = torch.einsum( "...lxy,lr -> ...rxy" , latents, weights_tensor) + biases_tensor.unsqueeze(- 1 ).unsqueeze(- 1 ) image_array = rgb_tensor.clamp( 0 , 255 ).byte().cpu().numpy().transpose( 1 , 2 , 0 ) return Image.fromarray(image_array) Create a function to decode and save the latents into an image. Copied def decode_tensors ( pipe, step, timestep, callback_kwargs ): latents = callback_kwargs[ "latents" ] image = latents_to_rgb(latents[ 0 ]) image.save( f" {step} .png" ) return callback_kwargs Pass the decode_tensors function to the callback_on_step_end parameter to decode the tensors after each step. You also need to specify what you want to modify in the callback_on_step_end_tensor_inputs parameter, which in this case are the latents. Copied from diffusers import AutoPipelineForText2Image import torch from PIL import Image pipeline = AutoPipelineForText2Image.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) image = pipeline( prompt= "A croissant shaped like a cute bear." , negative_prompt= "Deformed, ugly, bad anatomy" , callback_on_step_end=decode_tensors, callback_on_step_end_tensor_inputs=[ "latents" ], ).images[ 0 ] step 0 step 19 step 29 step 39 step 49 < > Update on GitHub ← Scheduler features Reproducible pipelines → Pipeline callbacks Official callbacks Dynamic classifier-free guidance Interrupt the diffusion process Display image after each generation step
List_Parquet_files.txt
List Parquet files
Datasets can be published in any format (CSV, JSONL, directories of images, etc.) to the Hub, and they are easily accessed with the 🤗 Datasets library. For a more performant experience (especially when it comes to large datasets), the dataset viewer automatically converts every dataset to the Parquet format.
What is Parquet? Parquet is a columnar storage format optimized for querying and processing large datasets. Parquet is a popular choice for big data processing and analytics and is widely used for data processing and machine learning. In Parquet, data is divided into chunks called “row groups”, and within each row group, it is stored in columns rather than rows. Each row group column is compressed separately using the best compression algorithm depending on the data, and contains metadata and statistics (min/max value, number of NULL values) about the data it contains. This structure allows for efficient data reading and querying:
- only the necessary columns are read from disk (projection pushdown); no need to read the entire file, which reduces the memory requirement for working with Parquet data
- entire row groups are skipped if the statistics stored in their metadata do not match the data of interest (automatic filtering)
- the data is compressed, which reduces the amount of data that needs to be stored and transferred
A Parquet file contains a single table. If a dataset has multiple tables (e.g. multiple splits or subsets), each table is stored in a separate Parquet file.
Conversion to Parquet The Parquet files are published to the Hub on a specific refs/convert/parquet branch (like this fancyzhx/amazon_polarity branch for example) that parallels the main branch. In order for the dataset viewer to generate a Parquet version of a dataset, the dataset must be public, or owned by a PRO user or an Enterprise Hub organization.
Using the dataset viewer API This guide shows you how to use the dataset viewer’s /parquet endpoint to retrieve a list of a dataset’s files converted to Parquet.
Feel free to also try it out with Postman , RapidAPI , or ReDoc . The /parquet endpoint accepts the dataset name as its query parameter: Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://datasets-server.huggingface.co/parquet?dataset=ibm/duorc" def query (): response = requests.get(API_URL, headers=headers) return response.json() data = query() The endpoint response is a JSON containing a list of the dataset’s files in the Parquet format. For example, the ibm/duorc dataset has six Parquet files, which corresponds to the test , train and validation splits of its two subsets, ParaphraseRC and SelfRC (see the List splits and subsets guide for more details about splits and subsets). The endpoint also gives the filename and size of each file: Copied { "parquet_files" : [ { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "split" : "test" , "url" : "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/test/0000.parquet" , "filename" : "0000.parquet" , "size" : 6136591 } , { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "split" : "train" , "url" : "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/train/0000.parquet" , "filename" : "0000.parquet" , "size" : 26005668 } , { "dataset" : "ibm/duorc" , "config" : "ParaphraseRC" , "split" : "validation" , "url" : "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/validation/0000.parquet" , "filename" : "0000.parquet" , "size" : 5566868 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "split" : "test" , "url" : "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/test/0000.parquet" , "filename" : "0000.parquet" , "size" : 3035736 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "split" : "train" , "url" : "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/train/0000.parquet" , "filename" : "0000.parquet" , "size" : 14851720 } , { "dataset" : "ibm/duorc" , "config" : "SelfRC" , "split" : "validation" , "url" : "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/validation/0000.parquet" , "filename" : "0000.parquet" , "size" : 3114390 } ] , "pending" : [ ] , "failed" : [ ] , "partial" : false } Sharded Parquet files Big datasets are partitioned into Parquet files (shards) of about 500MB each. The filename contains the name of the dataset, the split, the shard index, and the total number of shards ( dataset-name-train-0000-of-0004.parquet ). For a given split, the elements in the list are sorted by their shard index, in ascending order. 
For example, the train split of the fancyzhx/amazon_polarity dataset is partitioned into 4 shards: Copied { "parquet_files" : [ { "dataset" : "fancyzhx/amazon_polarity" , "config" : "amazon_polarity" , "split" : "test" , "url" : "https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/test/0000.parquet" , "filename" : "0000.parquet" , "size" : 117422360 } , { "dataset" : "fancyzhx/amazon_polarity" , "config" : "amazon_polarity" , "split" : "train" , "url" : "https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0000.parquet" , "filename" : "0000.parquet" , "size" : 259761770 } , { "dataset" : "fancyzhx/amazon_polarity" , "config" : "amazon_polarity" , "split" : "train" , "url" : "https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0001.parquet" , "filename" : "0001.parquet" , "size" : 258363554 } , { "dataset" : "fancyzhx/amazon_polarity" , "config" : "amazon_polarity" , "split" : "train" , "url" : "https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0002.parquet" , "filename" : "0002.parquet" , "size" : 255471883 } , { "dataset" : "fancyzhx/amazon_polarity" , "config" : "amazon_polarity" , "split" : "train" , "url" : "https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0003.parquet" , "filename" : "0003.parquet" , "size" : 254410930 } ] , "pending" : [ ] , "failed" : [ ] , "partial" : false } To read and query the Parquet files, take a look at the Query datasets from the dataset viewer API guide. Partially converted datasets The Parquet version can be partial in two cases: if the dataset is already in Parquet format but it contains row groups bigger than the recommended size (100-300MB uncompressed). This size is better for memory usage since Parquet is streamed row group per row group in most data libraries. if the dataset is not already in Parquet format or if it is bigger than 5GB. In that case the Parquet files are generated up to 5GB and placed in a split directory prefixed with “partial”, e.g. “partial-train” instead of “train”. You can check the row groups size directly on Hugging Face using the Parquet metadata sidebar, for example here : Parquet-native datasets When the dataset is already in Parquet format, the data are not converted and the files in refs/convert/parquet are links to the original files. This rule suffers an exception to ensure the dataset viewer API to stay fast: if the row group size of the original Parquet files is too big, new Parquet files are generated. Using the Hugging Face Hub API For convenience, you can directly use the Hugging Face Hub /api/parquet endpoint which returns the list of Parquet URLs: Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://huggingface.co/api/datasets/ibm/duorc/parquet" def query (): response = requests.get(API_URL, headers=headers) return response.json() urls = query() The endpoint response is a JSON containing a list of the dataset’s files URLs in the Parquet format for each split and subset. For example, the ibm/duorc dataset has one Parquet file for the train split of the “ParaphraseRC” subset (see the List splits and subsets guide for more details about splits and subsets). 
Copied { "ParaphraseRC" : { "test" : [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/test/0.parquet" ] , "train" : [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet" ] , "validation" : [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/validation/0.parquet" ] } , "SelfRC" : { "test" : [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/SelfRC/test/0.parquet" ] , "train" : [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/SelfRC/train/0.parquet" ] , "validation" : [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/SelfRC/validation/0.parquet" ] } } Optionally you can specify which subset name to return, as well as which split: Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train" def query (): response = requests.get(API_URL, headers=headers) return response.json() urls = query() Copied [ "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet" ] Each parquet file can also be accessed using its shard index: https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet redirects to https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/train/0000.parquet for example. < > Update on GitHub ← Filter rows in a dataset Get the number of rows and the bytes size → List Parquet files What is Parquet? Conversion to Parquet Using the dataset viewer API Sharded Parquet files Partially converted datasets Parquet-native datasets Using the Hugging Face Hub API
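As a closing example, once you have a Parquet URL from either endpoint, you can read it with standard Parquet tooling; the Query datasets guides referenced above cover this in more detail. Below is a minimal sketch assuming pandas with a Parquet engine such as pyarrow installed (the column names assume the duorc schema); the columns argument mirrors the projection pushdown described earlier, so only the requested columns are loaded: Copied
import pandas as pd

url = "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet"

# read only two columns instead of the whole table
df = pd.read_parquet(url, columns=["question", "answers"])
print(df.shape)
print(df.head())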
Quickstart_with_Python.txt
Quickstart with Python Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AutoTrain documentation Quickstart with Python AutoTrain 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.8.24 v0.7.129 v0.6.48 v0.5.2 EN Getting Started 🤗 AutoTrain How much does it cost? Get help and support Frequently Asked Questions Quickstart Train on Spaces Python SDK Train Locally Config File Tasks LLM Finetuning Text Classification/Regression Extractive QA Sentence Transformer Image Classification / Regression Object Detection Seq2Seq Token Classification Tabular Miscellaneous Understanding Column Mapping AutoTrain API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.8.24 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Quickstart with Python AutoTrain is a library that allows you to train state of the art models on Hugging Face Spaces, or locally. It provides a simple and easy-to-use interface to train models for various tasks like llm finetuning, text classification, image classification, object detection, and more. In this quickstart guide, we will show you how to train a model using AutoTrain in Python. Getting Started AutoTrain can be installed using pip: Copied $ pip install autotrain-advanced The example code below shows how to finetune an LLM model using AutoTrain in Python: Copied import os from autotrain.params import LLMTrainingParams from autotrain.project import AutoTrainProject params = LLMTrainingParams( model= "meta-llama/Llama-3.2-1B-Instruct" , data_path= "HuggingFaceH4/no_robots" , chat_template= "tokenizer" , text_column= "messages" , train_split= "train" , trainer= "sft" , epochs= 3 , batch_size= 1 , lr= 1e-5 , peft= True , quantization= "int4" , target_modules= "all-linear" , padding= "right" , optimizer= "paged_adamw_8bit" , scheduler= "cosine" , gradient_accumulation= 8 , mixed_precision= "bf16" , merge_adapter= True , project_name= "autotrain-llama32-1b-finetune" , log= "tensorboard" , push_to_hub= True , username=os.environ.get( "HF_USERNAME" ), token=os.environ.get( "HF_TOKEN" ), ) backend = "local" project = AutoTrainProject(params=params, backend=backend, process= True ) project.create() In this example, we are finetuning the meta-llama/Llama-3.2-1B-Instruct model on the HuggingFaceH4/no_robots dataset. We are training the model for 3 epochs with a batch size of 1 and a learning rate of 1e-5 . We are using the paged_adamw_8bit optimizer and the cosine scheduler. We are also using mixed precision training with a gradient accumulation of 8. The final model will be pushed to the Hugging Face Hub after training. 
To train the model, run the following command: Copied $ export HF_USERNAME=<your-hf-username> $ export HF_TOKEN=<your-hf-write-token> $ python train.py This will create a new project directory with the name autotrain-llama32-1b-finetune and start the training process. Once the training is complete, the model will be pushed to the Hugging Face Hub. Your HF_TOKEN and HF_USERNAME are only required if you want to push the model or if you are accessing a gated model or dataset. AutoTrainProject Class class autotrain.project. AutoTrainProject < source > ( params : typing.Union[autotrain.trainers.clm.params.LLMTrainingParams, autotrain.trainers.text_classification.params.TextClassificationParams, autotrain.trainers.tabular.params.TabularParams, autotrain.trainers.seq2seq.params.Seq2SeqParams, autotrain.trainers.image_classification.params.ImageClassificationParams, autotrain.trainers.text_regression.params.TextRegressionParams, autotrain.trainers.object_detection.params.ObjectDetectionParams, autotrain.trainers.token_classification.params.TokenClassificationParams, autotrain.trainers.sent_transformers.params.SentenceTransformersParams, autotrain.trainers.image_regression.params.ImageRegressionParams, autotrain.trainers.extractive_question_answering.params.ExtractiveQuestionAnsweringParams, autotrain.trainers.vlm.params.VLMTrainingParams] backend : str process : bool = False ) A class to train an AutoTrain project Attributes params : Union[ LLMTrainingParams, TextClassificationParams, TabularParams, Seq2SeqParams, ImageClassificationParams, TextRegressionParams, ObjectDetectionParams, TokenClassificationParams, SentenceTransformersParams, ImageRegressionParams, ExtractiveQuestionAnsweringParams, VLMTrainingParams, ] The parameters for the AutoTrain project. backend : str The backend to be used for the AutoTrain project. It should be one of the following: local spaces-a10g-large spaces-a10g-small spaces-a100-large spaces-t4-medium spaces-t4-small spaces-cpu-upgrade spaces-cpu-basic spaces-l4x1 spaces-l4x4 spaces-l40sx1 spaces-l40sx4 spaces-l40sx8 spaces-a10g-largex2 spaces-a10g-largex4 process : bool Flag to indicate if the params and dataset should be processed. If your data format is not AutoTrain-readable, set it to True. Set it to True when in doubt. Defaults to False. Methods post_init (): Validates the backend attribute. create(): Creates a runner based on the backend and initializes the AutoTrain project. Parameters Text Tasks class autotrain.trainers.clm.params. 
LLMTrainingParams < source > ( model : str = 'gpt2' project_name : str = 'project-name' data_path : str = 'data' train_split : str = 'train' valid_split : typing.Optional[str] = None add_eos_token : bool = True block_size : typing.Union[int, typing.List[int]] = -1 model_max_length : int = 2048 padding : typing.Optional[str] = 'right' trainer : str = 'default' use_flash_attention_2 : bool = False log : str = 'none' disable_gradient_checkpointing : bool = False logging_steps : int = -1 eval_strategy : str = 'epoch' save_total_limit : int = 1 auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None lr : float = 3e-05 epochs : int = 1 batch_size : int = 2 warmup_ratio : float = 0.1 gradient_accumulation : int = 4 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 chat_template : typing.Optional[str] = None quantization : typing.Optional[str] = 'int4' target_modules : typing.Optional[str] = 'all-linear' merge_adapter : bool = False peft : bool = False lora_r : int = 16 lora_alpha : int = 32 lora_dropout : float = 0.05 model_ref : typing.Optional[str] = None dpo_beta : float = 0.1 max_prompt_length : int = 128 max_completion_length : typing.Optional[int] = None prompt_text_column : typing.Optional[str] = None text_column : str = 'text' rejected_text_column : typing.Optional[str] = None push_to_hub : bool = False username : typing.Optional[str] = None token : typing.Optional[str] = None unsloth : bool = False distributed_backend : typing.Optional[str] = None ) Parameters model (str) — Model name to be used for training. Default is “gpt2”. project_name (str) — Name of the project and output directory. Default is “project-name”. data_path (str) — Path to the dataset. Default is “data”. train_split (str) — Configuration for the training data split. Default is “train”. valid_split (Optional[str]) — Configuration for the validation data split. Default is None. add_eos_token (bool) — Whether to add an EOS token at the end of sequences. Default is True. block_size (Union[int, List[int]]) — Size of the blocks for training, can be a single integer or a list of integers. Default is -1. model_max_length (int) — Maximum length of the model input. Default is 2048. padding (Optional[str]) — Side on which to pad sequences (left or right). Default is “right”. trainer (str) — Type of trainer to use. Default is “default”. use_flash_attention_2 (bool) — Whether to use flash attention version 2. Default is False. log (str) — Logging method for experiment tracking. Default is “none”. disable_gradient_checkpointing (bool) — Whether to disable gradient checkpointing. Default is False. logging_steps (int) — Number of steps between logging events. Default is -1. eval_strategy (str) — Strategy for evaluation (e.g., ‘epoch’). Default is “epoch”. save_total_limit (int) — Maximum number of checkpoints to keep. Default is 1. auto_find_batch_size (bool) — Whether to automatically find the optimal batch size. Default is False. mixed_precision (Optional[str]) — Type of mixed precision to use (e.g., ‘fp16’, ‘bf16’, or None). Default is None. lr (float) — Learning rate for training. Default is 3e-5. epochs (int) — Number of training epochs. Default is 1. batch_size (int) — Batch size for training. Default is 2. warmup_ratio (float) — Proportion of training to perform learning rate warmup. Default is 0.1. gradient_accumulation (int) — Number of steps to accumulate gradients before updating. Default is 4. 
optimizer (str) — Optimizer to use for training. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler to use. Default is “linear”. weight_decay (float) — Weight decay to apply to the optimizer. Default is 0.0. max_grad_norm (float) — Maximum norm for gradient clipping. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. chat_template (Optional[str]) — Template for chat-based models, options include: None, zephyr, chatml, or tokenizer. Default is None. quantization (Optional[str]) — Quantization method to use (e.g., ‘int4’, ‘int8’, or None). Default is “int4”. target_modules (Optional[str]) — Target modules for quantization or fine-tuning. Default is “all-linear”. merge_adapter (bool) — Whether to merge the adapter layers. Default is False. peft (bool) — Whether to use Parameter-Efficient Fine-Tuning (PEFT). Default is False. lora_r (int) — Rank of the LoRA matrices. Default is 16. lora_alpha (int) — Alpha parameter for LoRA. Default is 32. lora_dropout (float) — Dropout rate for LoRA. Default is 0.05. model_ref (Optional[str]) — Reference model for DPO trainer. Default is None. dpo_beta (float) — Beta parameter for DPO trainer. Default is 0.1. max_prompt_length (int) — Maximum length of the prompt. Default is 128. max_completion_length (Optional[int]) — Maximum length of the completion. Default is None. prompt_text_column (Optional[str]) — Column name for the prompt text. Default is None. text_column (str) — Column name for the text data. Default is “text”. rejected_text_column (Optional[str]) — Column name for the rejected text data. Default is None. push_to_hub (bool) — Whether to push the model to the Hugging Face Hub. Default is False. username (Optional[str]) — Hugging Face username for authentication. Default is None. token (Optional[str]) — Hugging Face token for authentication. Default is None. unsloth (bool) — Whether to use the unsloth library. Default is False. distributed_backend (Optional[str]) — Backend to use for distributed training. Default is None. LLMTrainingParams: Parameters for training a language model using the autotrain library. class autotrain.trainers.sent_transformers.params. SentenceTransformersParams < source > ( data_path : str = None model : str = 'microsoft/mpnet-base' lr : float = 3e-05 epochs : int = 3 max_seq_length : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 trainer : str = 'pair_score' sentence1_column : str = 'sentence1' sentence2_column : str = 'sentence2' sentence3_column : typing.Optional[str] = None target_column : typing.Optional[str] = None ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the pre-trained model to use. Default is “microsoft/mpnet-base”. lr (float) — Learning rate for training. Default is 3e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length for the input. Default is 128. 
batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Proportion of training to perform learning rate warmup. Default is 0.1. gradient_accumulation (int) — Number of steps to accumulate gradients before updating. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler to use. Default is “linear”. weight_decay (float) — Weight decay to apply. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. Default is None. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find the optimal batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). Default is None. save_total_limit (int) — Maximum number of checkpoints to save. Default is 1. token (Optional[str]) — Token for accessing Hugging Face Hub. Default is None. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy to use. Default is “epoch”. username (Optional[str]) — Hugging Face username. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement after which training will be stopped. Default is 5. early_stopping_threshold (float) — Threshold for measuring the new optimum, to qualify as an improvement. Default is 0.01. trainer (str) — Name of the trainer to use. Default is “pair_score”. sentence1_column (str) — Name of the column containing the first sentence. Default is “sentence1”. sentence2_column (str) — Name of the column containing the second sentence. Default is “sentence2”. sentence3_column (Optional[str]) — Name of the column containing the third sentence (if applicable). Default is None. target_column (Optional[str]) — Name of the column containing the target variable. Default is None. SentenceTransformersParams is a configuration class for setting up parameters for training sentence transformers. class autotrain.trainers.seq2seq.params. 
Seq2SeqParams < source > ( data_path : str = None model : str = 'google/flan-t5-base' username : typing.Optional[str] = None seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None project_name : str = 'project-name' token : typing.Optional[str] = None push_to_hub : bool = False text_column : str = 'text' target_column : str = 'target' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 max_target_length : int = 128 batch_size : int = 2 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 logging_steps : int = -1 eval_strategy : str = 'epoch' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 peft : bool = False quantization : typing.Optional[str] = 'int8' lora_r : int = 16 lora_alpha : int = 32 lora_dropout : float = 0.05 target_modules : str = 'all-linear' log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to be used. Default is “google/flan-t5-base”. username (Optional[str]) — Hugging Face Username. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. project_name (str) — Name of the project or output directory. Default is “project-name”. token (Optional[str]) — Hub Token for authentication. push_to_hub (bool) — Whether to push the model to the Hugging Face Hub. Default is False. text_column (str) — Name of the text column in the dataset. Default is “text”. target_column (str) — Name of the target text column in the dataset. Default is “target”. lr (float) — Learning rate for training. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length for input text. Default is 128. max_target_length (int) — Maximum sequence length for target text. Default is 128. batch_size (int) — Training batch size. Default is 2. warmup_ratio (float) — Proportion of warmup steps. Default is 0.1. gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to be used. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler to be used. Default is “linear”. weight_decay (float) — Weight decay for the optimizer. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. Default is 1.0. logging_steps (int) — Number of steps between logging. Default is -1 (disabled). eval_strategy (str) — Evaluation strategy. Default is “epoch”. auto_find_batch_size (bool) — Whether to automatically find the batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). save_total_limit (int) — Maximum number of checkpoints to save. Default is 1. peft (bool) — Whether to use Parameter-Efficient Fine-Tuning (PEFT). Default is False. quantization (Optional[str]) — Quantization mode (int4, int8, or None). Default is “int8”. lora_r (int) — LoRA-R parameter for PEFT. Default is 16. lora_alpha (int) — LoRA-Alpha parameter for PEFT. Default is 32. lora_dropout (float) — LoRA-Dropout parameter for PEFT. Default is 0.05. target_modules (str) — Target modules for PEFT. Default is “all-linear”. log (str) — Logging method for experiment tracking. Default is “none”. 
early_stopping_patience (int) — Patience for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping. Default is 0.01. Seq2SeqParams is a configuration class for sequence-to-sequence training parameters. class autotrain.trainers.token_classification.params. TokenClassificationParams < source > ( data_path : str = None model : str = 'bert-base-uncased' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None tokens_column : str = 'tokens' tags_column : str = 'tags' logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to use. Default is “bert-base-uncased”. lr (float) — Learning rate. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length. Default is 128. batch_size (int) — Training batch size. Default is 8. warmup_ratio (float) — Warmup proportion. Default is 0.1. gradient_accumulation (int) — Gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Scheduler to use. Default is “linear”. weight_decay (float) — Weight decay. Default is 0.0. max_grad_norm (float) — Maximum gradient norm. Default is 1.0. seed (int) — Random seed. Default is 42. train_split (str) — Name of the training split. Default is “train”. valid_split (Optional[str]) — Name of the validation split. Default is None. tokens_column (str) — Name of the tokens column. Default is “tokens”. tags_column (str) — Name of the tags column. Default is “tags”. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find the batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision setting (fp16, bf16, or None). Default is None. save_total_limit (int) — Total number of checkpoints to save. Default is 1. token (Optional[str]) — Hub token for authentication. Default is None. push_to_hub (bool) — Whether to push the model to the Hugging Face hub. Default is False. eval_strategy (str) — Evaluation strategy. Default is “epoch”. username (Optional[str]) — Hugging Face username. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Patience for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping. Default is 0.01. TokenClassificationParams is a configuration class for token classification training parameters. class autotrain.trainers.extractive_question_answering.params. 
ExtractiveQuestionAnsweringParams < source > ( data_path : str = None model : str = 'bert-base-uncased' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 max_doc_stride : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None text_column : str = 'context' question_column : str = 'question' answer_column : str = 'answers' logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Pre-trained model name. Default is “bert-base-uncased”. lr (float) — Learning rate for the optimizer. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length for inputs. Default is 128. max_doc_stride (int) — Maximum document stride for splitting context. Default is 128. batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Warmup proportion for learning rate scheduler. Default is 0.1. gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1. optimizer (str) — Optimizer type. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler type. Default is “linear”. weight_decay (float) — Weight decay for the optimizer. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. Default is None. text_column (str) — Column name for context/text. Default is “context”. question_column (str) — Column name for questions. Default is “question”. answer_column (str) — Column name for answers. Default is “answers”. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Automatically find optimal batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). Default is None. save_total_limit (int) — Maximum number of checkpoints to save. Default is 1. token (Optional[str]) — Authentication token for Hugging Face Hub. Default is None. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy during training. Default is “epoch”. username (Optional[str]) — Hugging Face username for authentication. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping improvement. Default is 0.01. ExtractiveQuestionAnsweringParams class autotrain.trainers.text_classification.params. 
TextClassificationParams < source > ( data_path : str = None model : str = 'bert-base-uncased' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None text_column : str = 'text' target_column : str = 'target' logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to use. Default is “bert-base-uncased”. lr (float) — Learning rate. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length. Default is 128. batch_size (int) — Training batch size. Default is 8. warmup_ratio (float) — Warmup proportion. Default is 0.1. gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Scheduler to use. Default is “linear”. weight_decay (float) — Weight decay. Default is 0.0. max_grad_norm (float) — Maximum gradient norm. Default is 1.0. seed (int) — Random seed. Default is 42. train_split (str) — Name of the training split. Default is “train”. valid_split (Optional[str]) — Name of the validation split. Default is None. text_column (str) — Name of the text column in the dataset. Default is “text”. target_column (str) — Name of the target column in the dataset. Default is “target”. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find the batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision setting (fp16, bf16, or None). Default is None. save_total_limit (int) — Total number of checkpoints to save. Default is 1. token (Optional[str]) — Hub token for authentication. Default is None. push_to_hub (bool) — Whether to push the model to the hub. Default is False. eval_strategy (str) — Evaluation strategy. Default is “epoch”. username (Optional[str]) — Hugging Face username. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement after which training will be stopped. Default is 5. early_stopping_threshold (float) — Threshold for measuring the new optimum to continue training. Default is 0.01. TextClassificationParams is a configuration class for text classification training parameters. class autotrain.trainers.text_regression.params. 
TextRegressionParams < source > ( data_path : str = None model : str = 'bert-base-uncased' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None text_column : str = 'text' target_column : str = 'target' logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the pre-trained model to use. Default is “bert-base-uncased”. lr (float) — Learning rate for the optimizer. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length for the inputs. Default is 128. batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Proportion of training to perform learning rate warmup. Default is 0.1. gradient_accumulation (int) — Number of steps to accumulate gradients before updating. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler to use. Default is “linear”. weight_decay (float) — Weight decay to apply. Default is 0.0. max_grad_norm (float) — Maximum norm for the gradients. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. Default is None. text_column (str) — Name of the column containing text data. Default is “text”. target_column (str) — Name of the column containing target data. Default is “target”. logging_steps (int) — Number of steps between logging. Default is -1 (no logging). project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find the batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). Default is None. save_total_limit (int) — Maximum number of checkpoints to save. Default is 1. token (Optional[str]) — Token for accessing Hugging Face Hub. Default is None. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy to use. Default is “epoch”. username (Optional[str]) — Hugging Face username. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement after which training will be stopped. Default is 5. early_stopping_threshold (float) — Threshold for measuring the new optimum, to qualify as an improvement. Default is 0.01. TextRegressionParams is a configuration class for setting up text regression training parameters. Image Tasks class autotrain.trainers.image_classification.params. 
ImageClassificationParams < source > ( data_path : str = None model : str = 'google/vit-base-patch16-224' username : typing.Optional[str] = None lr : float = 5e-05 epochs : int = 3 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' image_column : str = 'image' target_column : str = 'target' log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Pre-trained model name or path. Default is “google/vit-base-patch16-224”. username (Optional[str]) — Hugging Face account username. lr (float) — Learning rate for the optimizer. Default is 5e-5. epochs (int) — Number of epochs for training. Default is 3. batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Warmup ratio for learning rate scheduler. Default is 0.1. gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1. optimizer (str) — Optimizer type. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler type. Default is “linear”. weight_decay (float) — Weight decay for the optimizer. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Automatically find optimal batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). save_total_limit (int) — Maximum number of checkpoints to keep. Default is 1. token (Optional[str]) — Hugging Face Hub token for authentication. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy during training. Default is “epoch”. image_column (str) — Column name for images in the dataset. Default is “image”. target_column (str) — Column name for target labels in the dataset. Default is “target”. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping. Default is 0.01. ImageClassificationParams is a configuration class for image classification training parameters. class autotrain.trainers.image_regression.params. 
ImageRegressionParams < source > ( data_path : str = None model : str = 'google/vit-base-patch16-224' username : typing.Optional[str] = None lr : float = 5e-05 epochs : int = 3 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' image_column : str = 'image' target_column : str = 'target' log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to use. Default is “google/vit-base-patch16-224”. username (Optional[str]) — Hugging Face Username. lr (float) — Learning rate. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. batch_size (int) — Training batch size. Default is 8. warmup_ratio (float) — Warmup proportion. Default is 0.1. gradient_accumulation (int) — Gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Scheduler to use. Default is “linear”. weight_decay (float) — Weight decay. Default is 0.0. max_grad_norm (float) — Max gradient norm. Default is 1.0. seed (int) — Random seed. Default is 42. train_split (str) — Train split name. Default is “train”. valid_split (Optional[str]) — Validation split name. logging_steps (int) — Logging steps. Default is -1. project_name (str) — Output directory name. Default is “project-name”. auto_find_batch_size (bool) — Whether to auto find batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision type (fp16, bf16, or None). save_total_limit (int) — Save total limit. Default is 1. token (Optional[str]) — Hub Token. push_to_hub (bool) — Whether to push to hub. Default is False. eval_strategy (str) — Evaluation strategy. Default is “epoch”. image_column (str) — Image column name. Default is “image”. target_column (str) — Target column name. Default is “target”. log (str) — Logging using experiment tracking. Default is “none”. early_stopping_patience (int) — Early stopping patience. Default is 5. early_stopping_threshold (float) — Early stopping threshold. Default is 0.01. ImageRegressionParams is a configuration class for image regression training parameters. class autotrain.trainers.object_detection.params. 
ObjectDetectionParams < source > ( data_path : str = None model : str = 'google/vit-base-patch16-224' username : typing.Optional[str] = None lr : float = 5e-05 epochs : int = 3 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' image_column : str = 'image' objects_column : str = 'objects' log : str = 'none' image_square_size : typing.Optional[int] = 600 early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to be used. Default is “google/vit-base-patch16-224”. username (Optional[str]) — Hugging Face Username. lr (float) — Learning rate. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. batch_size (int) — Training batch size. Default is 8. warmup_ratio (float) — Warmup proportion. Default is 0.1. gradient_accumulation (int) — Gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to be used. Default is “adamw_torch”. scheduler (str) — Scheduler to be used. Default is “linear”. weight_decay (float) — Weight decay. Default is 0.0. max_grad_norm (float) — Max gradient norm. Default is 1.0. seed (int) — Random seed. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Whether to automatically find batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision type (fp16, bf16, or None). save_total_limit (int) — Total number of checkpoints to save. Default is 1. token (Optional[str]) — Hub Token for authentication. push_to_hub (bool) — Whether to push the model to the Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy. Default is “epoch”. image_column (str) — Name of the image column in the dataset. Default is “image”. objects_column (str) — Name of the target column in the dataset. Default is “objects”. log (str) — Logging method for experiment tracking. Default is “none”. image_square_size (Optional[int]) — Longest size to which the image will be resized, then padded to square. Default is 600. early_stopping_patience (int) — Number of epochs with no improvement after which training will be stopped. Default is 5. early_stopping_threshold (float) — Minimum change to qualify as an improvement. Default is 0.01. ObjectDetectionParams is a configuration class for object detection training parameters. Tabular Tasks class autotrain.trainers.tabular.params. 
TabularParams < source > ( data_path : str = None model : str = 'xgboost' username : typing.Optional[str] = None seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None project_name : str = 'project-name' token : typing.Optional[str] = None push_to_hub : bool = False id_column : str = 'id' target_columns : typing.Union[typing.List[str], str] = ['target'] categorical_columns : typing.Optional[typing.List[str]] = None numerical_columns : typing.Optional[typing.List[str]] = None task : str = 'classification' num_trials : int = 10 time_limit : int = 600 categorical_imputer : typing.Optional[str] = None numerical_imputer : typing.Optional[str] = None numeric_scaler : typing.Optional[str] = None ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to use. Default is "xgboost". username (Optional[str]) — Hugging Face Username. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is "train". valid_split (Optional[str]) — Name of the validation data split. project_name (str) — Name of the output directory. Default is "project-name". token (Optional[str]) — Hub Token for authentication. push_to_hub (bool) — Whether to push the model to the hub. Default is False. id_column (str) — Name of the ID column. Default is "id". target_columns (Union[List[str], str]) — Target column(s) in the dataset. Default is ["target"]. categorical_columns (Optional[List[str]]) — List of categorical columns. numerical_columns (Optional[List[str]]) — List of numerical columns. task (str) — Type of task (e.g., "classification"). Default is "classification". num_trials (int) — Number of trials for hyperparameter optimization. Default is 10. time_limit (int) — Time limit for training in seconds. Default is 600. categorical_imputer (Optional[str]) — Imputer strategy for categorical columns. numerical_imputer (Optional[str]) — Imputer strategy for numerical columns. numeric_scaler (Optional[str]) — Scaler strategy for numerical columns. TabularParams is a configuration class for tabular data training parameters.
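To make the reference above concrete, here is a minimal, illustrative sketch of building a tabular configuration in Python. It assumes the class is importable from autotrain.trainers.tabular.params (the module path shown above), and the paths, column names, and project name are placeholders; the text and image parameter classes in the previous sections are constructed the same way.

# Minimal sketch: constructing a tabular training configuration.
# All paths, column names, and the project name below are placeholders.
from autotrain.trainers.tabular.params import TabularParams

params = TabularParams(
    data_path="data/",                      # local folder or Hub dataset containing the CSV files
    model="xgboost",                        # default gradient-boosting backend
    task="classification",
    id_column="id",
    target_columns=["target"],
    categorical_columns=["color", "city"],  # hypothetical feature names
    numerical_columns=["age", "income"],
    num_trials=25,                          # hyperparameter-search budget
    time_limit=900,                         # seconds
    project_name="tabular-demo",
    push_to_hub=False,
)

# The configuration is then handed to the corresponding AutoTrain tabular trainer
# (or written to a config file for the CLI); see the AutoTrain docs for the exact entry point.
print(params)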
Interface__SecurityFileStatus.txt
Interface: SecurityFileStatus Properties status • status : string Defined in hub/src/lib/paths-info.ts:20
Scikit-Learn.txt
Scikit-Learn
To run the scikit-learn examples make sure you have installed the following library: Copied pip install -U scikit-learn The metrics in evaluate can be easily integrated with a Scikit-Learn estimator or pipeline . However, these metrics require that we generate the predictions from the model. The predictions and labels from the estimators can be passed to evaluate metrics to compute the required values. Copied import numpy as np np.random.seed( 0 ) import evaluate from sklearn.compose import ColumnTransformer from sklearn.datasets import fetch_openml from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split Load data from https://www.openml.org/d/40945 : Copied X, y = fetch_openml( "titanic" , version= 1 , as_frame= True , return_X_y= True ) Alternatively, if you load the dataset as a single Bunch object with titanic = fetch_openml( "titanic" , version= 1 , as_frame= True ), X and y can be obtained directly from its frame attribute: Copied X = titanic.frame.drop( 'survived' , axis= 1 ) y = titanic.frame[ 'survived' ] We create the preprocessing pipelines for both numeric and categorical data. Note that pclass could either be treated as a categorical or numeric feature. Copied numeric_features = [ "age" , "fare" ] numeric_transformer = Pipeline( steps=[( "imputer" , SimpleImputer(strategy= "median" )), ( "scaler" , StandardScaler())] ) categorical_features = [ "embarked" , "sex" , "pclass" ] categorical_transformer = OneHotEncoder(handle_unknown= "ignore" ) preprocessor = ColumnTransformer( transformers=[ ( "num" , numeric_transformer, numeric_features), ( "cat" , categorical_transformer, categorical_features), ] ) Append the classifier to the preprocessing pipeline. Now we have a full prediction pipeline.
Copied clf = Pipeline( steps=[( "preprocessor" , preprocessor), ( "classifier" , LogisticRegression())] ) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2 , random_state= 0 ) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) As Evaluate metrics use lists as inputs for references and predictions, we need to convert them to Python lists. Copied # Evaluate metrics accept lists as inputs for values of references and predictions y_test = y_test.tolist() y_pred = y_pred.tolist() # Accuracy accuracy_metric = evaluate.load( "accuracy" ) accuracy = accuracy_metric.compute(references=y_test, predictions=y_pred) print ( "Accuracy:" , accuracy) # Accuracy: 0.79 You can use any suitable evaluate metric with the estimators as long as they are compatible with the task and predictions.
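The same pattern extends to other classification metrics. The following sketch reuses y_test and y_pred from above and casts the labels to integers, since the Titanic survived column is stored as categorical strings; adapt the metric names to your own task.

# Sketch: computing additional metrics with the predictions from the pipeline above.
# Labels are cast to int because the Titanic "survived" column holds categorical strings.
import evaluate

references = [int(v) for v in y_test]
predictions = [int(v) for v in y_pred]

f1_metric = evaluate.load("f1")
precision_metric = evaluate.load("precision")
recall_metric = evaluate.load("recall")

print("F1:", f1_metric.compute(references=references, predictions=predictions))
print("Precision:", precision_metric.compute(references=references, predictions=predictions))
print("Recall:", recall_metric.compute(references=references, predictions=predictions))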
Share_a_dataset_to_the_Hub.txt
Share a dataset to the Hub
The Hub is home to an extensive collection of community-curated and popular research datasets. We encourage you to share your dataset to the Hub to help grow the ML community and accelerate progress for everyone. All contributions are welcome; adding a dataset is just a drag and drop away! Start by creating a Hugging Face Hub account if you don't have one yet. Upload with the Hub UI The Hub's web-based interface allows users without any developer experience to upload a dataset. Create a repository A repository hosts all your dataset files, including the revision history, making storing more than one dataset version possible. Click on your profile and select New Dataset to create a new dataset repository. Pick a name for your dataset, and choose whether it is a public or private dataset. A public dataset is visible to anyone, whereas a private dataset can only be viewed by you or members of your organization. Upload dataset Once you've created a repository, navigate to the Files and versions tab to add a file. Select Add file to upload your dataset files. We support many text, audio, and image data extensions such as .csv , .mp3 , and .jpg among many others.
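If you prefer to script the repository creation and the first file upload instead of using the web UI (the UI flow continues below, and programmatic uploads with 🤗 Datasets are covered later on this page), a minimal sketch with huggingface_hub might look like this; the repository id and file name are placeholders, and you need to be logged in.

# Sketch: creating a dataset repository and uploading a file with huggingface_hub.
# "username/my-dataset" and "train.csv" are placeholders; run `huggingface-cli login` first.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("username/my-dataset", repo_type="dataset", private=False)
api.upload_file(
    path_or_fileobj="train.csv",
    path_in_repo="train.csv",
    repo_id="username/my-dataset",
    repo_type="dataset",
    commit_message="Add training split",
)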
For text data extensions like .csv , .json , .jsonl , and .txt , we recommend compressing them before uploading to the Hub (to .zip or .gz file extension for example). Text file extensions are not tracked by Git LFS by default, and if they’re greater than 10MB, they will not be committed and uploaded. Take a look at the .gitattributes file in your repository for a complete list of tracked file extensions. For this tutorial, you can use the following sample .csv files since they’re small: train.csv , test.csv . Drag and drop your dataset files and add a brief descriptive commit message. After uploading your dataset files, they are stored in your dataset repository. Create a Dataset card Adding a Dataset card is super valuable for helping users find your dataset and understand how to use it responsibly. Click on Create Dataset Card to create a Dataset card. This button creates a README.md file in your repository. At the top, you’ll see the Metadata UI with several fields to select from like license, language, and task categories. These are the most important tags for helping users discover your dataset on the Hub. When you select an option from each field, they’ll be automatically added to the top of the dataset card. You can also look at the Dataset Card specifications , which has a complete set of (but not required) tag options like annotations_creators , to help you choose the appropriate tags. Click on the Import dataset card template link at the top of the editor to automatically create a dataset card template. Filling out the template is a great way to introduce your dataset to the community and help users understand how to use it. For a detailed example of what a good Dataset card should look like, take a look at the CNN DailyMail Dataset card . Load dataset Once your dataset is stored on the Hub, anyone can load it with the load_dataset() function: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "stevhliu/demo" ) Upload with Python Users who prefer to upload a dataset programmatically can use the huggingface_hub library. This library allows users to interact with the Hub from Python. Begin by installing the library: Copied pip install huggingface_hub To upload a dataset on the Hub in Python, you need to log in to your Hugging Face account: Copied huggingface-cli login Use the push_to_hub() function to help you add, commit, and push a file to your repository: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "stevhliu/demo" ) # dataset = dataset.map(...) # do all your processing here >>> dataset.push_to_hub( "stevhliu/processed_demo" ) To set your dataset as private, set the private parameter to True . This parameter will only work if you are creating a repository for the first time. Copied >>> dataset.push_to_hub( "stevhliu/private_processed_demo" , private= True ) To add a new configuration (or subset) to a dataset or to add a new split (train/validation/test), please refer to the Dataset.push_to_hub() documentation. Privacy A private dataset is only accessible by you. Similarly, if you share a dataset within your organization, then members of the organization can also access the dataset. Load a private dataset by providing your authentication token to the token parameter: Copied >>> from datasets import load_dataset # Load a private individual dataset >>> dataset = load_dataset( "stevhliu/demo" , token= True ) # Load a private organization dataset >>> dataset = load_dataset( "organization/dataset_name" , token= True ) What’s next? 
Congratulations, you've completed the tutorials! 🥳 From here, you can go on to: Learn more about how to use 🤗 Datasets' other functions to process your dataset . Stream large datasets without downloading them locally. Define your dataset splits and configurations and share your dataset with the community. If you have any questions about 🤗 Datasets, feel free to join and ask the community on our forum .
Check_dataset_validity.txt
Check dataset validity
Before you download a dataset from the Hub, it is helpful to know if a specific dataset you're interested in is available. The dataset viewer provides the /is-valid endpoint to check if a specific dataset works without any errors. The API endpoint will return an error for datasets that cannot be loaded with the 🤗 Datasets library, for example, because the data hasn't been uploaded or the format is not supported. The largest datasets are partially supported by the dataset viewer. If they are streamable , Datasets Server can extract the first 100 rows without downloading the whole dataset. This is especially useful for previewing large datasets where downloading the whole dataset may take hours! See the preview field in the response of /is-valid to check if a dataset is partially supported. This guide shows you how to check dataset validity programmatically, but feel free to try it out with Postman , RapidAPI , or ReDoc . Check if a dataset is valid /is-valid checks whether a specific dataset loads without any error.
This endpoint's query parameter requires you to specify the name of the dataset: Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes" def query (): response = requests.get(API_URL, headers=headers) return response.json() data = query() The response looks like this if a dataset is valid: Copied { "viewer" : true , "preview" : true , "search" : true , "filter" : true , "statistics" : true , } The response looks like this if a dataset is valid but /search is not available for it: Copied { "viewer" : true , "preview" : true , "search" : false , "filter" : true , "statistics" : true , } The response looks like this if a dataset is valid but /filter is not available for it: Copied { "viewer" : true , "preview" : true , "search" : true , "filter" : false , "statistics" : true , } Similarly, if the statistics are not available: Copied { "viewer" : true , "preview" : true , "search" : true , "filter" : true , "statistics" : false , } If only the first rows of a dataset are available, then the response looks like: Copied { "viewer" : false , "preview" : true , "search" : true , "filter" : true , "statistics" : true , } Finally, if the dataset is not valid at all, then the response is: Copied { "viewer" : false , "preview" : false , "search" : false , "filter" : false , "statistics" : false , } Some cases where a dataset is not valid are: the dataset viewer is disabled the dataset is gated but the access is not granted: no token is passed or the passed token is not authorized the dataset is private but the owner is not a PRO user or an Enterprise Hub org the dataset contains no data or the data format is not supported Remember if a dataset is gated , you'll need to provide your user token to submit a successful query!
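To act on these responses in code, a small helper along the lines of the sketch below can decide how much of the dataset viewer is usable before you try to load a dataset; the dataset name is just an example and the token is optional for public datasets.

# Sketch: interpreting the /is-valid response before loading a dataset.
import requests

API_URL = "https://datasets-server.huggingface.co/is-valid"

def dataset_capabilities(dataset, token=None):
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.get(API_URL, params={"dataset": dataset}, headers=headers)
    response.raise_for_status()
    return response.json()

caps = dataset_capabilities("cornell-movie-review-data/rotten_tomatoes")
if caps["viewer"]:
    print("Full dataset viewer available.")
elif caps["preview"]:
    print("Only the first rows are available (partial support).")
else:
    print("Dataset is not valid: viewer and preview are both disabled.")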
Manual_Configuration.txt
Manual Configuration
This guide will show you how to configure a custom structure for your dataset repository. The companion collection of example datasets showcases each section of the documentation. A dataset with a supported structure and file formats automatically has a Dataset Viewer on its dataset page on the Hub. You can use YAML to define the splits, subsets and builder parameters that are used by the Viewer. It is also possible to define multiple subsets (also called "configurations") for the same dataset (e.g. if the dataset has various independent files). Splits If you have multiple files and want to define which file goes into which split, you can use YAML at the top of your README.md. For example, given a repository like this one: Copied my_dataset_repository/ ├── README.md ├── data.csv └── holdout.csv You can define a subset for your splits by adding the configs field in the YAML block at the top of your README.md: Copied --- configs: - config_name: default data_files: - split: train path: "data.csv" - split: test path: "holdout.csv" --- You can select multiple files per split using a list of paths: Copied my_dataset_repository/ ├── README.md ├── data/ │ ├── abc.csv │ └── def.csv └── holdout/ └── ghi.csv
Copied --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- Or you can use glob patterns to automatically list all the files you need: Copied --- configs: - config_name: default data_files: - split: train path: "data/*.csv" - split: test path: "holdout/*.csv" --- Note that the config_name field is required even if you have a single subset. Multiple Subsets Your dataset might have several subsets of data that you want to be able to use separately. For example, each subset has its own dropdown in the Dataset Viewer on the Hugging Face Hub. In that case you can define a list of subsets inside the configs field in YAML: Copied my_dataset_repository/ ├── README.md ├── main_data.csv └── additional_data.csv Copied --- configs: - config_name: main_data data_files: "main_data.csv" - config_name: additional_data data_files: "additional_data.csv" --- You can set a default subset using default: true Copied - config_name: main_data data_files: "main_data.csv" default: true This is useful to set which subset the Dataset Viewer shows first, and which subset data libraries load by default. Builder parameters Not only data_files , but other builder-specific parameters can be passed via YAML, allowing for more flexibility on how to load the data while not requiring any custom code. For example, define which separator to use in which subset to load your csv files: Copied --- configs: - config_name: tab data_files: "main_data.csv" sep: "\t" - config_name: comma data_files: "additional_data.csv" sep: "," --- Refer to the specific builders' documentation to see what parameters they have.
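Once this YAML is in place, each config_name maps directly onto the configuration name that data libraries use. As a sketch (the repository id is a placeholder), loading the subsets defined above with 🤗 Datasets would look like:

# Sketch: loading the subsets declared in the README YAML with the datasets library.
# "username/my_dataset_repository" is a placeholder repository id.
from datasets import load_dataset

# Without a config name, the subset marked `default: true` (or the only subset) is loaded.
default_subset = load_dataset("username/my_dataset_repository", split="train")

# A specific subset is selected by passing its config_name.
main_data = load_dataset("username/my_dataset_repository", "main_data", split="train")
additional_data = load_dataset("username/my_dataset_repository", "additional_data", split="train")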
Understanding_pipelines,_models_and_schedulers.txt
Understanding pipelines, models and schedulers
🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems.
In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline >>> ddpm = DDPMPipeline.from_pretrained( "google/ddpm-cat-256" , use_safetensors= True ).to( "cuda" ) >>> image = ddpm(num_inference_steps= 25 ).images[ 0 ] >>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler . The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel >>> scheduler = DDPMScheduler.from_pretrained( "google/ddpm-cat-256" ) >>> model = UNet2DModel.from_pretrained( "google/ddpm-cat-256" , use_safetensors= True ).to( "cuda" ) Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps( 50 ) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps tensor([ 980 , 960 , 940 , 920 , 900 , 880 , 860 , 840 , 820 , 800 , 780 , 760 , 740 , 720 , 700 , 680 , 660 , 640 , 620 , 600 , 580 , 560 , 540 , 520 , 500 , 480 , 460 , 440 , 420 , 400 , 380 , 360 , 340 , 320 , 300 , 280 , 260 , 240 , 220 , 200 , 180 , 160 , 140 , 120 , 100 , 80 , 60 , 40 , 20 , 0 ]) Create some random noise with the same shape as the desired output: Copied >>> import torch >>> sample_size = model.config.sample_size >>> noise = torch.randn(( 1 , 3 , sample_size, sample_size), device= "cuda" ) Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise >>> for t in scheduler.timesteps: ... with torch.no_grad(): ... noisy_residual = model( input , t).sample ... previous_noisy_sample = scheduler.step(noisy_residual, t, input ).prev_sample ... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image >>> import numpy as np >>> image = ( input / 2 + 0.5 ).clamp( 0 , 1 ).squeeze() >>> image = (image.permute( 1 , 2 , 0 ) * 255 ). 
round ().to(torch.uint8).cpu().numpy() >>> image = Image.fromarray(image) >>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep ’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained stable-diffusion-v1-5/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image >>> import torch >>> from transformers import CLIPTextModel, CLIPTokenizer >>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler >>> vae = AutoencoderKL.from_pretrained( "CompVis/stable-diffusion-v1-4" , subfolder= "vae" , use_safetensors= True ) >>> tokenizer = CLIPTokenizer.from_pretrained( "CompVis/stable-diffusion-v1-4" , subfolder= "tokenizer" ) >>> text_encoder = CLIPTextModel.from_pretrained( ... "CompVis/stable-diffusion-v1-4" , subfolder= "text_encoder" , use_safetensors= True ... ) >>> unet = UNet2DConditionModel.from_pretrained( ... "CompVis/stable-diffusion-v1-4" , subfolder= "unet" , use_safetensors= True ... ) Instead of the default PNDMScheduler , exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler >>> scheduler = UniPCMultistepScheduler.from_pretrained( "CompVis/stable-diffusion-v1-4" , subfolder= "scheduler" ) To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" >>> vae.to(torch_device) >>> text_encoder.to(torch_device) >>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! 
Copied >>> prompt = [ "a photograph of an astronaut riding a horse" ] >>> height = 512 # default height of Stable Diffusion >>> width = 512 # default width of Stable Diffusion >>> num_inference_steps = 25 # Number of denoising steps >>> guidance_scale = 7.5 # Scale for classifier-free guidance >>> generator = torch.manual_seed( 0 ) # Seed generator to create the initial latent noise >>> batch_size = len (prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( ... prompt, padding= "max_length" , max_length=tokenizer.model_max_length, truncation= True , return_tensors= "pt" ... ) >>> with torch.no_grad(): ... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[ 0 ] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape ( batch_size and seq_length ) as the conditional text_embeddings : Copied >>> max_length = text_input.input_ids.shape[- 1 ] >>> uncond_input = tokenizer([ "" ] * batch_size, padding= "max_length" , max_length=max_length, return_tensors= "pt" ) >>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[ 0 ] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** ( len (vae.config.block_out_channels) - 1 ) == 8 Copied >>> latents = torch.randn( ... (batch_size, unet.config.in_channels, height // 8 , width // 8 ), ... generator=generator, ... device=torch_device, ... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma , the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler : Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm >>> scheduler.set_timesteps(num_inference_steps) >>> for t in tqdm(scheduler.timesteps): ... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. ... latent_model_input = torch.cat([latents] * 2 ) ... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) ... # predict the noise residual ... with torch.no_grad(): ... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample ... # perform guidance ... noise_pred_uncond, noise_pred_text = noise_pred.chunk( 2 ) ... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) ... # compute the previous noisy sample x_t -> x_t-1 ... 
latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample : Copied # scale and decode the image latents with vae latents = 1 / 0.18215 * latents with torch.no_grad(): image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5 ).clamp( 0 , 1 ).squeeze() >>> image = (image.permute( 1 , 2 , 0 ) * 255 ).to(torch.uint8).cpu().numpy() >>> image = Image.fromarray(image) >>> image Next steps From basic to complex pipelines, you've seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler's timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can't wait to see what you'll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately.
Dataset_Cards.txt
Dataset Cards
What are Dataset Cards? Each dataset may be documented by the README.md file in the repository. This file is called a dataset card , and the Hugging Face Hub will render its contents on the dataset's main page. To inform users about how to responsibly use the data, it's a good idea to include information about any potential biases within the dataset. Generally, dataset cards help users understand the contents of the dataset and give context for how the dataset should be used. You can also add dataset metadata to your card. The metadata describes important information about a dataset such as its license, language, and size. It also contains tags to help users discover a dataset on the Hub, and data files configuration options. Tags are defined in a YAML metadata section at the top of the README.md file. Dataset card metadata A dataset repo will render its README.md as a dataset card. To control how the Hub displays the card, you should create a YAML section in the README file to define some metadata. Start by adding three --- at the top, then include all of the relevant metadata, and close the section with another group of --- like the example below: Copied --- language: - "List of ISO 639-1 code for your language" - lang1 - lang2 pretty_name: "Pretty Name of the Dataset" tags: - tag1 - tag2 license: "any valid license identifier" task_categories: - task1 - task2 --- The metadata that you add to the dataset card enables certain interactions on the Hub. For example: Allow users to filter and discover datasets at https://huggingface.co/datasets .
If you choose a license using the keywords listed in the right column of this table , the license will be displayed on the dataset page. When creating a README.md file in a dataset repository on the Hub, use Metadata UI to fill the main metadata: To see metadata fields, see the detailed Dataset Card specifications . Dataset card creation guide For a step-by-step guide on creating a dataset card, check out the Create a dataset card guide. Reading through existing dataset cards, such as the ELI5 dataset card , is a great way to familiarize yourself with the common conventions. Linking a Paper If the dataset card includes a link to a paper on arXiv, the Hub will extract the arXiv ID and include it in the dataset tags with the format arxiv:<PAPER ID> . Clicking on the tag will let you: Visit the Paper page Filter for other models on the Hub that cite the same paper. Read more about paper pages here . Force set a dataset modality The Hub will automatically detect the modality of a dataset based on the files it contains (audio, video, geospatial, etc.). If you want to force a specific modality, you can add a tag to the dataset card metadata: 3d , audio , geospatial , image , tabular , text , timeseries , video . For example, to force the modality to audio , add the following to the dataset card metadata: Copied tags: - audio Associate a library to the dataset The dataset page automatically shows libraries and tools that are able to natively load the dataset, but if you want to show another specific library, you can add a tag to the dataset card metadata: argilla , dask , datasets , distilabel , fiftyone , mlcroissant , pandas , webdataset . See the list of supported libraries for more information, or to propose to add a new library. For example, to associate the argilla library to the dataset card, add the following to the dataset card metadata: Copied tags: - argilla
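The metadata described above can also be inspected and updated programmatically rather than by editing README.md by hand. The following is a minimal sketch using the card utilities in huggingface_hub; the repository id is a placeholder and pushing requires write access to the repo.

# Sketch: reading and updating dataset card metadata with huggingface_hub.
# "username/my-dataset" is a placeholder; pushing requires write access.
from huggingface_hub import DatasetCard

card = DatasetCard.load("username/my-dataset")
print(card.data.to_dict())                  # current YAML metadata as a dictionary

# Add tags, for example to force the audio modality and associate the argilla library.
tags = card.data.tags or []
for tag in ("audio", "argilla"):
    if tag not in tags:
        tags.append(tag)
card.data.tags = tags
card.data.license = "mit"                   # any valid license identifier

card.push_to_hub("username/my-dataset")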
Interface__SafetensorsIndexJson.txt
Interface: SafetensorsIndexJson Properties dtype • Optional dtype : string Defined in hub/src/lib/parse-safetensors-metadata.ts:58 metadata • Optional metadata : Record<string, string> Defined in hub/src/lib/parse-safetensors-metadata.ts:60 weight_map • weight_map : Record<string, string> Defined in hub/src/lib/parse-safetensors-metadata.ts:62
Using_ESPnet_at_Hugging_Face.txt
Using ESPnet at Hugging Face

espnet is an end-to-end toolkit for speech processing, including automatic speech recognition, text to speech, speech enhancement, diarization and other tasks.

Exploring ESPnet in the Hub
You can find hundreds of espnet models by filtering at the left of the models page. All models on the Hub come with useful features:
An automatically generated model card with a description, a training configuration, licenses and more.
Metadata tags that help with discoverability and contain information such as license, language and datasets.
An interactive widget you can use to play with the model directly in the browser.
An Inference API that allows you to make inference requests.

Using existing models
For a full guide on loading pre-trained models, we recommend checking out the official guide. If you're interested in doing inference, different classes for different tasks have a from_pretrained method that allows loading models from the Hub. For example:
Speech2Text for Automatic Speech Recognition (an ASR sketch is shown at the end of this page).
Text2Speech for Text to Speech.
SeparateSpeech for Audio Source Separation.
Here is an inference example:
Copied
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("model_name")
speech = text2speech("foobar")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")

If you want to see how to load a specific model, you can click Use in ESPnet and you will be given a working snippet to load it!

Sharing your models
ESPnet outputs a zip file that can be uploaded to Hugging Face easily. For a full guide on sharing models, we recommend checking out the official guide. The run.sh script allows you to upload a given model to a Hugging Face repository.
Copied
./run.sh --stage 15 --skip_upload_hf false --hf_repo username/model_repo

Additional resources
ESPnet docs.
ESPnet model zoo repository.
Integration docs.
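To complement the text-to-speech example above, the other task classes follow the same from_pretrained pattern. The sketch below shows automatic speech recognition with Speech2Text; the model name is a placeholder, and the exact shape of the returned hypotheses may vary between ESPnet versions.
Copied
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load a pretrained ASR model from the Hub (placeholder model name).
speech2text = Speech2Text.from_pretrained("model_name")

# Read a mono waveform and transcribe it; each n-best hypothesis is a tuple
# whose first element is the decoded text.
speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)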
Habana_Gaudi.txt
Habana Gaudi

🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana:
Copied
python -m pip install --upgrade-strategy eager optimum[habana]

To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two components:
GaudiStableDiffusionPipeline, a pipeline for text-to-image generation.
GaudiDDIMScheduler, a Gaudi-optimized scheduler.
When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs. To get the fastest possible generation, you should also enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig, which can be downloaded from the Habana organization on the Hub.
Copied
from optimum.habana import GaudiConfig
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "stabilityai/stable-diffusion-2-base"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,
    gaudi_config="Habana/stable-diffusion-2",
)

Now you can call the pipeline to generate images in batches from one or several prompts:
Copied
outputs = pipeline(
    prompt=[
        "High quality photo of an astronaut riding a horse in space",
        "Face of a yellow cat, high resolution, sitting on a park bench",
    ],
    num_images_per_prompt=10,
    batch_size=4,
)

For more information, check out 🤗 Optimum Habana's documentation and the example provided in the official GitHub repository.

Benchmark
We benchmarked Habana's first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance.

For Stable Diffusion v1.5 on 512x512 images:
first-generation Gaudi: latency 3.80s (batch size = 1), throughput 0.308 images/s (batch size = 8)
Gaudi2: latency 1.33s (batch size = 1), throughput 1.081 images/s (batch size = 8)

For Stable Diffusion v2.1 on 768x768 images:
first-generation Gaudi: latency 10.2s (batch size = 1), throughput 0.108 images/s (batch size = 4)
Gaudi2: latency 3.17s (batch size = 1), throughput 0.379 images/s (batch size = 8)
Configure_the_Dataset_Viewer.txt
Configure the Dataset Viewer

The Dataset Viewer supports many data file formats, from text to tabular and from image to audio formats. It also separates the train/validation/test splits based on file and folder names. To configure the Dataset Viewer for your dataset, first make sure your dataset is in a supported data format.

Configure dropdowns for splits or subsets
In the Dataset Viewer you can view the train/validation/test splits of datasets, and sometimes additionally choose between multiple subsets (e.g. one per language). To define those dropdowns, you can name the data files or their folders after their split names (train/validation/test). It is also possible to customize your splits manually using YAML. For more information, feel free to check out the documentation on Data files Configuration and the collections of example datasets. The Image Dataset doc page proposes various methods to structure a dataset with images.

Disable the viewer
The dataset viewer can be disabled. To do this, add a YAML section to the dataset's README.md file (create one if it does not already exist) and add a viewer property with the value false.
Copied
---
viewer: false
---

Private datasets
For private datasets, the Dataset Viewer is enabled for PRO users and Enterprise Hub organizations.
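Once your files follow the split naming conventions, a quick way to check that the splits were picked up as intended is to load the dataset and inspect what was detected. This is a minimal sketch; username/my-dataset is a placeholder repository id.
Copied
from datasets import load_dataset

# Load every split that was detected from the file and folder names.
ds = load_dataset("username/my-dataset")

# The keys should match the dropdowns shown in the Dataset Viewer,
# e.g. {"train": ..., "validation": ..., "test": ...}.
print(ds)
print({split: ds[split].num_rows for split in ds})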
PagedAttention.txt
PagedAttention

LLMs struggle with memory limitations during generation. In the decoding part of generation, all the attention keys and values generated for previous tokens are stored in GPU memory for reuse. This is called the KV cache, and it may take up a large amount of memory for large models and long sequences.

PagedAttention optimizes memory use by partitioning the KV cache into blocks that are accessed through a lookup table. Thus, the KV cache does not need to be stored in contiguous memory, and blocks are allocated as needed. The improved memory efficiency can increase GPU utilization on memory-bound workloads, so more inference batches can be supported.

The use of a lookup table to access the memory blocks can also help with KV sharing across multiple generations. This is helpful for techniques such as parallel sampling, where multiple outputs are generated simultaneously for the same prompt. In this case, the cached KV blocks can be shared among the generations.

TGI's PagedAttention implementation leverages the custom CUDA kernels developed by the vLLM Project. You can learn more about this technique in the project's page.
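To make the block-table idea more concrete, here is a toy Python sketch of a paged KV cache. It is purely illustrative (it ignores the attention math, dtypes, and GPU allocation) and is not how TGI or vLLM implement it internally.
Copied
class PagedKVCache:
    """Toy paged KV cache: fixed-size blocks plus a per-sequence block table."""

    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))   # physical block ids
        self.block_tables = {}                       # seq_id -> [block ids]
        self.lengths = {}                            # seq_id -> tokens stored

    def append_token(self, seq_id: int):
        """Reserve room for one more token, allocating a new block only
        when the last block is full (blocks need not be contiguous)."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.lengths.get(seq_id, 0)
        if length % self.block_size == 0:            # last block is full
            table.append(self.free_blocks.pop())
        self.lengths[seq_id] = length + 1

    def lookup(self, seq_id: int, position: int):
        """Map a logical token position to (physical block, offset)."""
        table = self.block_tables[seq_id]
        return table[position // self.block_size], position % self.block_size


cache = PagedKVCache(num_blocks=8, block_size=4)
for _ in range(6):
    cache.append_token(seq_id=0)
print(cache.block_tables[0])        # two non-contiguous physical blocks
print(cache.lookup(0, position=5))  # (block id, offset within the block)

A real implementation stores the actual key/value tensors inside the blocks and can map the block tables of several sequences generated from the same prompt onto the same physical blocks.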
Outpainting.txt
Outpainting

Outpainting extends an image beyond its original boundaries, allowing you to add, replace, or modify visual elements in an image while preserving the original image. Like inpainting, you want to fill the white area (in this case, the area outside of the original image) with new visual elements while keeping the original image (represented by a mask of black pixels). There are a couple of ways to outpaint, such as with a ControlNet or with Differential Diffusion. This guide will show you how to outpaint with an inpainting model, ControlNet, and a ZoeDepth estimator.
Before you begin, make sure you have the controlnet_aux library installed so you can use the ZoeDepth estimator. Copied !pip install -q controlnet_aux Image preparation Start by picking an image to outpaint with and remove the background with a Space like BRIA-RMBG-1.4 . For example, remove the background from this image of a pair of shoes. original image background removed Stable Diffusion XL (SDXL) models work best with 1024x1024 images, but you can resize the image to any size as long as your hardware has enough memory to support it. The transparent background in the image should also be replaced with a white background. Create a function (like the one below) that scales and pastes the image onto a white background. Copied import random import requests import torch from controlnet_aux import ZoeDetector from PIL import Image, ImageOps from diffusers import ( AutoencoderKL, ControlNetModel, StableDiffusionXLControlNetPipeline, StableDiffusionXLInpaintPipeline, ) def scale_and_paste ( original_image ): aspect_ratio = original_image.width / original_image.height if original_image.width > original_image.height: new_width = 1024 new_height = round (new_width / aspect_ratio) else : new_height = 1024 new_width = round (new_height * aspect_ratio) resized_original = original_image.resize((new_width, new_height), Image.LANCZOS) white_background = Image.new( "RGBA" , ( 1024 , 1024 ), "white" ) x = ( 1024 - new_width) // 2 y = ( 1024 - new_height) // 2 white_background.paste(resized_original, (x, y), resized_original) return resized_original, white_background original_image = Image. open ( requests.get( "https://huggingface.co/datasets/stevhliu/testing-images/resolve/main/no-background-jordan.png" , stream= True , ).raw ).convert( "RGBA" ) resized_img, white_bg_image = scale_and_paste(original_image) To avoid adding unwanted extra details, use the ZoeDepth estimator to provide additional guidance during generation and to ensure the shoes remain consistent with the original image. Copied zoe = ZoeDetector.from_pretrained( "lllyasviel/Annotators" ) image_zoe = zoe(white_bg_image, detect_resolution= 512 , image_resolution= 1024 ) image_zoe Outpaint Once your image is ready, you can generate content in the white area around the shoes with controlnet-inpaint-dreamer-sdxl , a SDXL ControlNet trained for inpainting. Load the inpainting ControlNet, ZoeDepth model, VAE and pass them to the StableDiffusionXLControlNetPipeline . Then you can create an optional generate_image function (for convenience) to outpaint an initial image. 
Copied controlnets = [ ControlNetModel.from_pretrained( "destitech/controlnet-inpaint-dreamer-sdxl" , torch_dtype=torch.float16, variant= "fp16" ), ControlNetModel.from_pretrained( "diffusers/controlnet-zoe-depth-sdxl-1.0" , torch_dtype=torch.float16 ), ] vae = AutoencoderKL.from_pretrained( "madebyollin/sdxl-vae-fp16-fix" , torch_dtype=torch.float16).to( "cuda" ) pipeline = StableDiffusionXLControlNetPipeline.from_pretrained( "SG161222/RealVisXL_V4.0" , torch_dtype=torch.float16, variant= "fp16" , controlnet=controlnets, vae=vae ).to( "cuda" ) def generate_image ( prompt, negative_prompt, inpaint_image, zoe_image, seed: int = None ): if seed is None : seed = random.randint( 0 , 2 ** 32 - 1 ) generator = torch.Generator(device= "cpu" ).manual_seed(seed) image = pipeline( prompt, negative_prompt=negative_prompt, image=[inpaint_image, zoe_image], guidance_scale= 6.5 , num_inference_steps= 25 , generator=generator, controlnet_conditioning_scale=[ 0.5 , 0.8 ], control_guidance_end=[ 0.9 , 0.6 ], ).images[ 0 ] return image prompt = "nike air jordans on a basketball court" negative_prompt = "" temp_image = generate_image(prompt, negative_prompt, white_bg_image, image_zoe, 908097 ) Paste the original image over the initial outpainted image. You’ll improve the outpainted background in a later step. Copied x = ( 1024 - resized_img.width) // 2 y = ( 1024 - resized_img.height) // 2 temp_image.paste(resized_img, (x, y), resized_img) temp_image Now is a good time to free up some memory if you’re running low! Copied pipeline= None torch.cuda.empty_cache() Now that you have an initial outpainted image, load the StableDiffusionXLInpaintPipeline with the RealVisXL model to generate the final outpainted image with better quality. Copied pipeline = StableDiffusionXLInpaintPipeline.from_pretrained( "OzzyGT/RealVisXL_V4.0_inpainting" , torch_dtype=torch.float16, variant= "fp16" , vae=vae, ).to( "cuda" ) Prepare a mask for the final outpainted image. To create a more natural transition between the original image and the outpainted background, blur the mask to help it blend better. Copied mask = Image.new( "L" , temp_image.size) mask.paste(resized_img.split()[ 3 ], (x, y)) mask = ImageOps.invert(mask) final_mask = mask.point( lambda p: p > 128 and 255 ) mask_blurred = pipeline.mask_processor.blur(final_mask, blur_factor= 20 ) mask_blurred Create a better prompt and pass it to the generate_outpaint function to generate the final outpainted image. Again, paste the original image over the final outpainted background. Copied def generate_outpaint ( prompt, negative_prompt, image, mask, seed: int = None ): if seed is None : seed = random.randint( 0 , 2 ** 32 - 1 ) generator = torch.Generator(device= "cpu" ).manual_seed(seed) image = pipeline( prompt, negative_prompt=negative_prompt, image=image, mask_image=mask, guidance_scale= 10.0 , strength= 0.8 , num_inference_steps= 30 , generator=generator, ).images[ 0 ] return image prompt = "high quality photo of nike air jordans on a basketball court, highly detailed" negative_prompt = "" final_image = generate_outpaint(prompt, negative_prompt, temp_image, mask_blurred, 7688778 ) x = ( 1024 - resized_img.width) // 2 y = ( 1024 - resized_img.height) // 2 final_image.paste(resized_img, (x, y), resized_img) final_image < > Update on GitHub ← Prompt techniques CogVideoX → Outpainting Image preparation Outpaint
Hugging_Face_-_Documentation.txt
Hugging Face - Documentation Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Documentations Hub Host Git-based models, datasets and Spaces on the Hugging Face Hub. Transformers State-of-the-art ML for Pytorch, TensorFlow, and JAX. Diffusers State-of-the-art diffusion models for image and audio generation in PyTorch. Datasets Access and share datasets for computer vision, audio, and NLP tasks. Gradio Build machine learning demos and other web apps, in just a few lines of Python. Hub Python Library Client library for the HF Hub: manage repositories from your Python runtime. Huggingface.js A collection of JS libraries to interact with Hugging Face, with TS types included. Transformers.js State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server. Inference API (serverless) Experiment with over 200k models easily using the serverless tier of Inference Endpoints. Inference Endpoints (dedicated) Easily deploy models to production on dedicated, fully managed infrastructure. PEFT Parameter efficient finetuning methods for large models. Accelerate Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision. Optimum Fast training and inference of HF Transformers with easy to use hardware optimization tools. AWS Trainium & Inferentia Train and Deploy Transformers & Diffusers with AWS Trainium and AWS Inferentia via Optimum. Tokenizers Fast tokenizers, optimized for both research and production. Evaluate Evaluate and report model performance easier and more standardized. Tasks All things about ML tasks: demos, use cases, models, datasets, and more! Dataset viewer API to access the contents, metadata and basic statistics of all Hugging Face Hub datasets. TRL Train transformer language models with reinforcement learning. Amazon SageMaker Train and Deploy Transformer models with Amazon SageMaker and Hugging Face DLCs. timm State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities. Safetensors Simple, safe way to store and distribute neural networks weights safely and quickly. Text Generation Inference Toolkit to serve Large Language Models. AutoTrain AutoTrain API and UI. Text Embeddings Inference Toolkit to serve Text Embedding Models. Competitions Create your own competitions on Hugging Face. Bitsandbytes Toolkit to optimize and quantize models. Sentence Transformers Multilingual Sentence & Image Embeddings Google Cloud Train and Deploy Transformer models with Hugging Face DLCs on Google Cloud. Google TPUs Deploy models on Google TPUs via Optimum. Chat UI Open source chat frontend, powers the HuggingChat app. Leaderboards Create your own Leaderboards on Hugging Face. Lighteval Your all-in-one toolkit for evaluating LLMs across multiple backends. Argilla Collaboration tool for AI engineers and domain experts who need to build high quality datasets. Distilabel The framework for synthetic data generation and AI feedback. Hugging Face Generative AI Services (HUGS) Optimized, zero-configuration inference microservices designed to simplify and accelerate the development of AI applications with open models smolagents Barebones library for agents. Agents write python code to call tools and orchestrate other agents. Community Blog Learn Discord Forum Github System theme Company TOS Privacy About Jobs Website Models Datasets Spaces Pricing Docs
Downloading_models.txt
Downloading models

Integrated libraries
If a model on the Hub is tied to a supported library, loading the model can be done in just a few lines. For information on accessing the model, you can click on the "Use in Library" button on the model page to see how to do so. For example, distilbert/distilgpt2 shows how to do so with 🤗 Transformers below.

Using the Hugging Face Client Library
You can use the huggingface_hub library to create, delete, update and retrieve information from repos. You can also download files from repos or integrate them into your library! For example, you can quickly load a Scikit-learn model with a few lines.
Copied
from huggingface_hub import hf_hub_download
import joblib

REPO_ID = "YOUR_REPO_ID"
FILENAME = "sklearn_model.joblib"

model = joblib.load(
    hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
)

Using Git
Since all models on the Model Hub are Git repositories, you can clone the models locally by running:
Copied
git lfs install
git clone git@hf.co:<MODEL ID> # example: git clone git@hf.co:bigscience/bloom

If you have write-access to the particular model repo, you'll also have the ability to commit and push revisions to the model. Add your SSH public key to your user settings to push changes and/or access private repos.

Faster downloads
If you are running on a machine with high bandwidth, you can increase your download speed with hf_transfer, a Rust-based library developed to speed up file transfers with the Hub.
Copied
pip install "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download ...
hf_transfer is a power user tool! It is tested and production-ready, but it lacks user-friendly features like advanced error handling or proxies. For more details, please take a look at this guide.
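The same speed-up can also be enabled from Python. The sketch below assumes hf_transfer is installed and uses a public model id only as an example; the environment variable has to be set before huggingface_hub is imported so the flag is picked up.
Copied
import os

# Must be set before importing huggingface_hub.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Download a full repository snapshot using the accelerated transfer backend.
local_dir = snapshot_download(repo_id="distilbert/distilgpt2")
print(local_dir)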
Torch_shared_tensors.txt
Torch shared tensors

TL;DR
Use the dedicated functions below, which should work in most cases. This is not without side effects.
Copied
from safetensors.torch import load_model, save_model

save_model(model, "model.safetensors")
# Instead of save_file(model.state_dict(), "model.safetensors")

load_model(model, "model.safetensors")
# Instead of model.load_state_dict(load_file("model.safetensors"))

What are shared tensors?
Pytorch uses shared tensors for some computation. This is extremely useful for reducing memory usage in general. One very classic use case is in transformers, where the embeddings are shared with lm_head. By using the same matrix, the model uses fewer parameters, and gradients flow much better to the embeddings (which sit at the start of the model, where gradients do not flow easily, whereas lm_head is at the tail of the model, where gradients are strong; since they are the same tensor, both benefit).
Copied
import torch
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(100, 100)
        self.b = self.a

    def forward(self, x):
        return self.b(self.a(x))

model = Model()
print(model.state_dict())
# odict_keys(['a.weight', 'a.bias', 'b.weight', 'b.bias'])
torch.save(model.state_dict(), "model.bin")
# This file is 41k instead of ~80k, because `a` and `b` are the same weight,
# hence only one copy is saved on disk, with both `a` and `b` pointing to the same buffer.

Why are shared tensors not saved in safetensors?
There are multiple reasons for that:
Not all frameworks support them. Tensorflow, for instance, does not. So if someone saves shared tensors in torch, there is no way to load them in a similar fashion, and we could not keep the same Dict[str, Tensor] API.
It keeps lazy loading fast and simple. Lazy loading is the ability to load only some tensors, or part of tensors, for a given file. This is trivial to do without sharing tensors, but consider the following with tensor sharing:
Copied
with safe_open("model.safetensors", framework="pt") as f:
    a = f.get_tensor("a")
    b = f.get_tensor("b")

Now it's impossible with this code to "reshare" buffers after the fact: once we have given out the a tensor, we have no way to hand back the same memory when you ask for b.
(In this particular example we could keep track of given buffers, but this is not the case in general, since you could do arbitrary work with a, like sending it to another device, before asking for b.)
It can lead to a much larger file than necessary. If you are saving a shared tensor which is only a fraction of a larger tensor, then saving it with pytorch leads to saving the entire buffer instead of saving just what is needed.
Copied
a = torch.zeros((100, 100))
b = a[:1, :]
torch.save({"b": b}, "model.bin")
# File is 41k instead of the expected 400 bytes
# In practice it could happen that you save several 10GB instead of 1GB.

With all those reasons mentioned, nothing here is set in stone. Shared tensors do not cause unsafety or denial-of-service potential, so this decision could be revisited if the current workarounds are not satisfactory.

How does it work?
The design is rather simple. We look for all shared tensors, then look for all tensors covering the entire buffer (there can be multiple such tensors). That gives us multiple names which could be saved; we simply choose the first one.
During load_model, we load a bit like load_state_dict does, except we look into the model itself to check for shared buffers, and ignore the "missing keys" which were actually covered by virtue of buffer sharing (they were properly loaded since a buffer covering them was loaded under the hood). Every other error is raised as-is.
Caveat: this means we are dropping some keys within the file, so if you check the keys saved on disk, or use load_state_dict, you will see some "missing tensors". Unless we start supporting shared tensors directly in the format, there is no real way around it.
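As a quick sanity check of the behaviour described above, the sketch below saves the shared-tensor Model from this page with save_model and inspects which keys actually end up in the file (only one name per shared buffer survives). It is a minimal illustration under those assumptions, not part of the official test suite.
Copied
import torch
from torch import nn
from safetensors import safe_open
from safetensors.torch import load_model, save_model

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(100, 100)
        self.b = self.a  # shared with `a`

    def forward(self, x):
        return self.b(self.a(x))

model = Model()
save_model(model, "model.safetensors")

# Only one name per shared buffer is kept on disk.
with safe_open("model.safetensors", framework="pt") as f:
    print(list(f.keys()))  # e.g. ['a.bias', 'a.weight'] (or the 'b.*' names)

# load_model restores the weights; `a` and `b` still point to the same storage.
restored = Model()
load_model(restored, "model.safetensors")
assert restored.a.weight.data_ptr() == restored.b.weight.data_ptr()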
Using_TGI_with_Google_TPUs.txt
Using TGI with Google TPUs

Check out this guide on how to serve models with TGI on TPUs.
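Once a TGI server is running on your TPU instance, it exposes the same HTTP API as on any other hardware, so a client-side sketch looks the same everywhere. The endpoint URL below is a placeholder for wherever your server is listening.
Copied
from huggingface_hub import InferenceClient

# Point the client at the running TGI server (placeholder URL).
client = InferenceClient("http://localhost:8080")

output = client.text_generation(
    "What is Deep Learning?",
    max_new_tokens=64,
)
print(output)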
Automatic_Speech_Recognition.txt
Automatic Speech Recognition Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Automatic Speech Recognition api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Automatic Speech Recognition Automatic Speech Recognition (ASR), also known as Speech to Text (STT), is the task of transcribing a given audio to text. Example applications: Transcribing a podcast Building a voice assistant Generating subtitles for a video For more details about the automatic-speech-recognition task, check out its dedicated page ! You will find examples and related materials. Recommended models openai/whisper-large-v3 : A powerful ASR model by OpenAI. pyannote/speaker-diarization-3.1 : Powerful speaker diarization model. Explore all available models and find the one that suits you best here . Using the API Python JavaScript cURL Copied import requests API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3" headers = { "Authorization" : "Bearer hf_***" } def query ( filename ): with open (filename, "rb" ) as f: data = f.read() response = requests.post(API_URL, headers=headers, data=data) return response.json() output = query( "sample1.flac" ) To use the Python client, see huggingface_hub ’s package reference . API specification Request Payload inputs* string The input audio data as a base64-encoded string. If no parameters are provided, you can also provide the audio data as a raw bytes payload. parameters object return_timestamps boolean Whether to output corresponding timestamps with the generated text generation_parameters object temperature number The value used to modulate the next token probabilities. top_k integer The number of highest probability vocabulary tokens to keep for top-k-filtering. top_p number If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. typical_p number Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. 
If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details. epsilon_cutoff number If set to float strictly between 0 and 1, only tokens with a conditional probability greater than epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details. eta_cutoff number Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details. max_length integer The maximum length (in tokens) of the generated text, including the input. max_new_tokens integer The maximum number of tokens to generate. Takes precedence over max_length. min_length integer The minimum length (in tokens) of the generated text, including the input. min_new_tokens integer The minimum number of tokens to generate. Takes precedence over min_length. do_sample boolean Whether to use sampling instead of greedy decoding when generating new tokens. early_stopping enum Possible values: never, true, false. num_beams integer Number of beams to use for beam search. num_beam_groups integer Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details. penalty_alpha number The value balances the model confidence and the degeneration penalty in contrastive search decoding. use_cache boolean Whether the model should use the past last key/values attentions to speed up decoding Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here . x-wait-for-model boolean, default to false If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here . For more information about Inference API headers, check out the parameters guide . Response Body text string The recognized text. chunks object[] When returnTimestamps is enabled, chunks contains a list of audio chunks identified by the model. 
text string A chunk of text identified by the model
timestamps number[] The start and end timestamps corresponding to the text
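Putting the request and response specifications together: a request that also asks for timestamps has to send the audio as a base64-encoded string inside a JSON payload rather than as raw bytes. The sketch below follows the documented fields; actual behaviour may vary by model, and the token is a placeholder.
Copied
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
headers = {"Authorization": "Bearer hf_***"}

# When parameters are provided, the audio must be sent base64-encoded.
with open("sample1.flac", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {"return_timestamps": True},
}

response = requests.post(API_URL, headers=headers, json=payload)
result = response.json()

print(result["text"])  # full transcription
for chunk in result.get("chunks", []):
    print(chunk["timestamps"], chunk["text"])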
Use_a_custom_Container_Image.txt
Use a custom Container Image

Inference Endpoints not only allows you to customize your inference handler, it also allows you to provide a custom container image. These can be public images like tensorflow/serving:2.7.3 or private images hosted on Docker Hub, AWS ECR, Azure ACR, or Google GCR.

The creation flow of your image artifacts from a custom image is the same as for the base image. This means Inference Endpoints will create a unique image artifact derived from your provided image, including all model artifacts. The model artifacts (weights) are stored under /repository. For example, if you use tensorflow/serving as your custom image, then you have to set model_base_path="/repository":
Copied
tensorflow_model_server \
  --rest_api_port=5000 \
  --model_name=my_model \
  --model_base_path="/repository"
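For reference, a container started this way serves the standard TensorFlow Serving REST API, so once the endpoint is up you can query it over HTTP. This is only an illustrative sketch: the endpoint URL and token are placeholders, the input format depends on your SavedModel's signature, and how the path is forwarded depends on your endpoint configuration.
Copied
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder
headers = {
    "Authorization": "Bearer hf_***",
    "Content-Type": "application/json",
}

# TensorFlow Serving's REST predict route for the model named above.
payload = {"instances": [[1.0, 2.0, 5.0]]}
response = requests.post(
    f"{ENDPOINT_URL}/v1/models/my_model:predict",
    headers=headers,
    json=payload,
)
print(response.json())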
Evaluating_Diffusion_Models.txt
Evaluating Diffusion Models

Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. However, quantitative metrics don't necessarily correspond to image quality.
So, usually, a combination of both qualitative and quantitative evaluations provides a stronger signal when choosing one model over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers . The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline ). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline ). Class-conditioned image generation models (such as the DiTPipeline ). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website : PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts , which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game : For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard : The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset . Copied from datasets import load_dataset # prompts = load_dataset("nateraw/parti-prompts", split="train") # prompts = prompts.shuffle() # sample_prompts = [prompts[i]["Prompt"] for i in range(5)] # Fixing these sample prompts in the interest of reproducibility. sample_prompts = [ "a corgi" , "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky" , "a car with no windows" , "a cube made of porcupine" , 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.' , ] Now we can use these prompts to generate some images using Stable Diffusion ( v1-4 checkpoint ): Copied import torch seed = 0 generator = torch.manual_seed(seed) images = sd_pipeline(sample_prompts, num_images_per_prompt= 1 , generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. 
Running the same pipeline but with a different checkpoint ( v1-5 ), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the training progress. In our training scripts , we support this utility with additional support for logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. Let’s first load a StableDiffusionPipeline : Copied from diffusers import StableDiffusionPipeline import torch model_ckpt = "CompVis/stable-diffusion-v1-4" sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to( "cuda" ) Generate some images with multiple prompts: Copied prompts = [ "a photo of an astronaut riding a horse on mars" , "A high tech solarpunk utopia in the Amazon rainforest" , "A pikachu fine dining with a view to the Eiffel Tower" , "A mecha robot in a favela in expressionist style" , "an insect robot preparing a delicious meal" , "A small cabin on top of a snowy mountain in the style of Disney, artstation" , ] images = sd_pipeline(prompts, num_images_per_prompt= 1 , output_type= "np" ).images print (images.shape) # (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score from functools import partial clip_score_fn = partial(clip_score, model_name_or_path= "openai/clip-vit-base-patch16" ) def calculate_clip_score ( images, prompts ): images_int = (images * 255 ).astype( "uint8" ) clip_score = clip_score_fn(torch.from_numpy(images_int).permute( 0 , 3 , 1 , 2 ), prompts).detach() return round ( float (clip_score), 4 ) sd_clip_score = calculate_clip_score(images, prompts) print ( f"CLIP score: {sd_clip_score} " ) # CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. 
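Before moving on to that comparison, here is a sketch of the averaging mentioned above. It is a hypothetical helper that assumes the clip_score_fn defined earlier and relies on the pipeline grouping its outputs by prompt when num_images_per_prompt > 1:

import torch

def calculate_mean_clip_score(pipeline, prompts, images_per_prompt=4):
    # Score every generated image against its own prompt, then average over all pairs.
    images = pipeline(prompts, num_images_per_prompt=images_per_prompt, output_type="np").images
    repeated_prompts = [prompt for prompt in prompts for _ in range(images_per_prompt)]
    images_int = (images * 255).astype("uint8")
    score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), repeated_prompts).detach()
    return round(float(score), 4)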
First, we generate images with a fixed seed with the v1-4 Stable Diffusion checkpoint : Copied seed = 0 generator = torch.manual_seed(seed) images = sd_pipeline(prompts, num_images_per_prompt= 1 , generator=generator, output_type= "np" ).images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "stable-diffusion-v1-5/stable-diffusion-v1-5" sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=torch.float16).to( "cuda" ) images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt= 1 , generator=generator, output_type= "np" ).images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) print ( f"CLIP Score with v-1-4: {sd_clip_score_1_4} " ) # CLIP Score with v-1-4: 34.9102 sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) print ( f"CLIP Score with v-1-5: {sd_clip_score_1_5} " ) # CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset were crawled from the web and extracted from alt and similar tags associated an image on the internet. They are not necessarily representative of what a human being would use to describe an image. Hence we had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline , as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators ). This is referred to as the ” CLIP directional similarity “. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset dataset = load_dataset( "sayakpaul/instructpix2pix-demo" , split= "train" ) dataset.features Copied { 'input' : Value(dtype= 'string' , id =None), 'edit' : Value(dtype= 'string' , id =None), 'output' : Value(dtype= 'string' , id =None), 'image' : Image(decode=True, id =None)} Here we have: input is a caption corresponding to the image . edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 print ( f"Original caption: {dataset[idx][ 'input' ]} " ) print ( f"Edit instruction: {dataset[idx][ 'edit' ]} " ) print ( f"Modified caption: {dataset[idx][ 'output' ]} " ) Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for' , according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' Edit instruction: make the isles all white marble Modified caption: 2. 
WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for' , according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx][ "image" ] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. Let’s first load the StableDiffusionInstructPix2PixPipeline : Copied from diffusers import StableDiffusionInstructPix2PixPipeline instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( "timbrooks/instruct-pix2pix" , torch_dtype=torch.float16 ).to( "cuda" ) Now, we perform the edits: Copied import numpy as np def edit_image ( input_image, instruction ): image = instruct_pix2pix_pipeline( instruction, image=input_image, output_type= "np" , generator=generator, ).images[ 0 ] return image input_images = [] original_captions = [] modified_captions = [] edited_images = [] for idx in range ( len (dataset)): input_image = dataset[idx][ "image" ] edit_instruction = dataset[idx][ "edit" ] edited_image = edit_image(input_image, edit_instruction) input_images.append(np.array(input_image)) original_captions.append(dataset[idx][ "input" ]) modified_captions.append(dataset[idx][ "output" ]) edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( CLIPTokenizer, CLIPTextModelWithProjection, CLIPVisionModelWithProjection, CLIPImageProcessor, ) clip_id = "openai/clip-vit-large-patch14" tokenizer = CLIPTokenizer.from_pretrained(clip_id) text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to( "cuda" ) image_processor = CLIPImageProcessor.from_pretrained(clip_id) image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to( "cuda" ) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14 . This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation . 
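Before wiring these encoders into a metric, a quick sanity check that both project into the same embedding space can save debugging time later. A small sketch (the caption is arbitrary):

with torch.no_grad():
    text_inputs = tokenizer(["a photo of an island"], padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt").to("cuda")
    text_embeds = text_encoder(**text_inputs).text_embeds
    image_inputs = image_processor(dataset[0]["image"], return_tensors="pt").to("cuda")
    image_embeds = image_encoder(**image_inputs).image_embeds

# Both should share the projection dimension (768 for openai/clip-vit-large-patch14).
print(text_embeds.shape, image_embeds.shape)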
Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn import torch.nn.functional as F class DirectionalSimilarity (nn.Module): def __init__ ( self, tokenizer, text_encoder, image_processor, image_encoder ): super ().__init__() self.tokenizer = tokenizer self.text_encoder = text_encoder self.image_processor = image_processor self.image_encoder = image_encoder def preprocess_image ( self, image ): image = self.image_processor(image, return_tensors= "pt" )[ "pixel_values" ] return { "pixel_values" : image.to( "cuda" )} def tokenize_text ( self, text ): inputs = self.tokenizer( text, max_length=self.tokenizer.model_max_length, padding= "max_length" , truncation= True , return_tensors= "pt" , ) return { "input_ids" : inputs.input_ids.to( "cuda" )} def encode_image ( self, image ): preprocessed_image = self.preprocess_image(image) image_features = self.image_encoder(**preprocessed_image).image_embeds image_features = image_features / image_features.norm(dim= 1 , keepdim= True ) return image_features def encode_text ( self, text ): tokenized_text = self.tokenize_text(text) text_features = self.text_encoder(**tokenized_text).text_embeds text_features = text_features / text_features.norm(dim= 1 , keepdim= True ) return text_features def compute_directional_similarity ( self, img_feat_one, img_feat_two, text_feat_one, text_feat_two ): sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) return sim_direction def forward ( self, image_one, image_two, caption_one, caption_two ): img_feat_one = self.encode_image(image_one) img_feat_two = self.encode_image(image_two) text_feat_one = self.encode_text(caption_one) text_feat_two = self.encode_text(caption_two) directional_similarity = self.compute_directional_similarity( img_feat_one, img_feat_two, text_feat_one, text_feat_two ) return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) scores = [] for i in range ( len (input_images)): original_image = input_images[i] original_caption = original_captions[i] edited_image = edited_images[i] modified_caption = modified_captions[i] similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) scores.append( float (similarity_score.detach().cpu())) print ( f"CLIP directional similarity: {np.mean(scores)} " ) # CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one) . For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline . Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. 
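Coming back to that idea, a small sketch that reuses the encoders wrapped above to also track how much of the original image is preserved:

image_preservation_scores = []
for i in range(len(input_images)):
    img_feat_one = dir_similarity.encode_image(input_images[i])
    img_feat_two = dir_similarity.encode_image(edited_images[i])
    # Higher values mean the edit kept more of the original image content.
    image_preservation_scores.append(float(F.cosine_similarity(img_feat_two, img_feat_one).detach().cpu()))

print(f"Mean image-image similarity: {np.mean(image_preservation_scores)}")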
Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset ). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT . It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k . Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ( Heusel et al. ). We show how to compute it with the DiTPipeline , which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource : Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile import requests def download ( url, local_filepath ): r = requests.get(url) with open (local_filepath, "wb" ) as f: f.write(r.content) return local_filepath dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" local_filepath = download(dummy_dataset_url, dummy_dataset_url.split( "/" )[- 1 ]) with ZipFile(local_filepath, "r" ) as zipper: zipper.extractall( "." ) Copied from PIL import Image import os import numpy as np dataset_path = "sample-imagenet-images" image_paths = sorted ([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) real_images = [np.array(Image. open (path).convert( "RGB" )) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F import torch def preprocess_image ( image ): image = torch.tensor(image).unsqueeze( 0 ) image = image.permute( 0 , 3 , 1 , 2 ) / 255.0 return F.center_crop(image, ( 256 , 256 )) real_images = torch.cat([preprocess_image(image) for image in real_images]) print (real_images.shape) # torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. 
Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler dit_pipeline = DiTPipeline.from_pretrained( "facebook/DiT-XL-2-256" , torch_dtype=torch.float16) dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) dit_pipeline = dit_pipeline.to( "cuda" ) seed = 0 generator = torch.manual_seed(seed) words = [ "cassette player" , "chainsaw" , "chainsaw" , "church" , "gas pump" , "gas pump" , "gas pump" , "parachute" , "parachute" , "tench" , ] class_ids = dit_pipeline.get_label_ids(words) output = dit_pipeline(class_labels=class_ids, generator=generator, output_type= "np" ) fake_images = output.images fake_images = torch.tensor(fake_images) fake_images = fake_images.permute( 0 , 3 , 1 , 2 ) print (fake_images.shape) # torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics . Copied from torchmetrics.image.fid import FrechetInceptionDistance fid = FrechetInceptionDistance(normalize= True ) fid.update(real_images, real= True ) fid.update(fake_images, real= False ) print ( f"FID: { float (fid.compute())} " ) # FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is hard to reproduce paper results unless the authors carefully disclose the FID measurement code. These points apply to other related metrics too, such as KID and IS. As a final step, let's visually inspect the fake_images . Fake images.
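Coming back to the earlier point about seeds and inference steps, a rough sketch of averaging FID over a handful of seeds (the seed list is arbitrary) could look like this:

fid_scores = []
for seed in [0, 1, 2]:
    generator = torch.manual_seed(seed)
    output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np")
    fake_images = torch.tensor(output.images).permute(0, 3, 1, 2)

    fid = FrechetInceptionDistance(normalize=True)
    fid.update(real_images, real=True)
    fid.update(fake_images, real=False)
    fid_scores.append(float(fid.compute()))

print(f"FID over seeds: {np.mean(fid_scores):.2f} +/- {np.std(fid_scores):.2f}")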
Using_Datasets_with_TensorFlow.txt
Using Datasets with TensorFlow
This document is a quick introduction to using datasets with TensorFlow, with a particular focus on how to get tf.Tensor objects out of our datasets, and how to stream data from Hugging Face Dataset objects to Keras methods like model.fit() . Dataset format By default, datasets return regular Python objects: integers, floats, strings, lists, etc. To get TensorFlow tensors instead, you can set the format of the dataset to tf : Copied >>> from datasets import Dataset >>> data = [[ 1 , 2 ],[ 3 , 4 ]] >>> ds = Dataset.from_dict({ "data" : data}) >>> ds = ds.with_format( "tf" ) >>> ds[ 0 ] { 'data' : <tf.Tensor: shape=( 2 ,), dtype=int64, numpy=array([ 1 , 2 ])>} >>> ds[: 2 ] { 'data' : <tf.Tensor: shape=( 2 , 2 ), dtype=int64, numpy= array([[ 1 , 2 ], [ 3 , 4 ]])>} A Dataset object is a wrapper of an Arrow table, which allows fast reads from arrays in the dataset to TensorFlow tensors. This can be useful for converting your dataset to a dict of Tensor objects, or for writing a generator to load TF samples from it.
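As an illustration of the generator route, a minimal sketch that wraps the formatted dataset in tf.data.Dataset.from_generator (the column name and shapes are just those of the toy dataset above):

import tensorflow as tf

def sample_generator():
    # `ds` is the tf-formatted dataset from above, so each example is a dict of tf.Tensors.
    for example in ds:
        yield example["data"]

tf_dataset = tf.data.Dataset.from_generator(
    sample_generator,
    output_signature=tf.TensorSpec(shape=(2,), dtype=tf.int64),
)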
If you wish to convert the entire dataset to Tensor , simply query the full dataset: Copied >>> ds[:] { 'data' : <tf.Tensor: shape=( 2 , 2 ), dtype=int64, numpy= array([[ 1 , 2 ], [ 3 , 4 ]])>} N-dimensional arrays If your dataset consists of N-dimensional arrays, you will see that by default they are considered as the same tensor if the shape is fixed: Copied >>> from datasets import Dataset >>> data = [[[ 1 , 2 ],[ 3 , 4 ]],[[ 5 , 6 ],[ 7 , 8 ]]] # fixed shape >>> ds = Dataset.from_dict({ "data" : data}) >>> ds = ds.with_format( "tf" ) >>> ds[ 0 ] { 'data' : <tf.Tensor: shape=( 2 , 2 ), dtype=int64, numpy= array([[ 1 , 2 ], [ 3 , 4 ]])>} Otherwise, a TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor: Copied >>> from datasets import Dataset >>> data = [[[ 1 , 2 ],[ 3 ]],[[ 4 , 5 , 6 ],[ 7 , 8 ]]] # varying shape >>> ds = Dataset.from_dict({ "data" : data}) >>> ds = ds.with_format( "tf" ) >>> ds[ 0 ] { 'data' : <tf.RaggedTensor [[ 1 , 2 ], [ 3 ]]>} However this logic often requires slow shape comparisons and data copies. To avoid this, you must explicitly use the Array feature type and specify the shape of your tensors: Copied >>> from datasets import Dataset, Features, Array2D >>> data = [[[ 1 , 2 ],[ 3 , 4 ]],[[ 5 , 6 ],[ 7 , 8 ]]] >>> features = Features({ "data" : Array2D(shape=( 2 , 2 ), dtype= 'int32' )}) >>> ds = Dataset.from_dict({ "data" : data}, features=features) >>> ds = ds.with_format( "tf" ) >>> ds[ 0 ] { 'data' : <tf.Tensor: shape=( 2 , 2 ), dtype=int64, numpy= array([[ 1 , 2 ], [ 3 , 4 ]])>} >>> ds[: 2 ] { 'data' : <tf.Tensor: shape=( 2 , 2 , 2 ), dtype=int64, numpy= array([[[ 1 , 2 ], [ 3 , 4 ]], [[ 5 , 6 ], [ 7 , 8 ]]])>} Other feature types ClassLabel data are properly converted to tensors: Copied >>> from datasets import Dataset, Features, ClassLabel >>> labels = [ 0 , 0 , 1 ] >>> features = Features({ "label" : ClassLabel(names=[ "negative" , "positive" ])}) >>> ds = Dataset.from_dict({ "label" : labels}, features=features) >>> ds = ds.with_format( "tf" ) >>> ds[: 3 ] { 'label' : <tf.Tensor: shape=( 3 ,), dtype=int64, numpy=array([ 0 , 0 , 1 ])>} Strings and binary objects are also supported: Copied >>> from datasets import Dataset, Features >>> text = [ "foo" , "bar" ] >>> data = [ 0 , 1 ] >>> ds = Dataset.from_dict({ "text" : text, "data" : data}) >>> ds = ds.with_format( "tf" ) >>> ds[: 2 ] { 'text' : <tf.Tensor: shape=( 2 ,), dtype=string, numpy=array([ b'foo' , b'bar' ], dtype= object )>, 'data' : <tf.Tensor: shape=( 2 ,), dtype=int64, numpy=array([ 0 , 1 ])>} You can also explicitly format certain columns and leave the other columns unformatted: Copied >>> ds = ds.with_format( "tf" , columns=[ "data" ], output_all_columns= True ) >>> ds[: 2 ] { 'data' : <tf.Tensor: shape=( 2 ,), dtype=int64, numpy=array([ 0 , 1 ])>, 'text' : [ 'foo' , 'bar' ]} Columns that are left unformatted are returned as regular Python objects, which is why the text column above comes back as a list of strings. The Image and Audio feature types are also supported. To use the Image feature type, you'll need to install the vision extra as pip install datasets[vision] .
Copied >>> from datasets import Dataset, Features, Audio, Image >>> images = [ "path/to/image.png" ] * 10 >>> features = Features({ "image" : Image()}) >>> ds = Dataset.from_dict({ "image" : images}, features=features) >>> ds = ds.with_format( "tf" ) >>> ds[ 0 ] { 'image' : <tf.Tensor: shape=( 512 , 512 , 4 ), dtype=uint8, numpy= array([[[ 255 , 215 , 106 , 255 ], [ 255 , 215 , 106 , 255 ], ..., [ 255 , 255 , 255 , 255 ], [ 255 , 255 , 255 , 255 ]]], dtype=uint8)>} >>> ds[: 2 ] { 'image' : <tf.Tensor: shape=( 2 , 512 , 512 , 4 ), dtype=uint8, numpy= array([[[[ 255 , 215 , 106 , 255 ], [ 255 , 215 , 106 , 255 ], ..., [ 255 , 255 , 255 , 255 ], [ 255 , 255 , 255 , 255 ]]]], dtype=uint8)>} To use the Audio feature type, you’ll need to install the audio extra as pip install datasets[audio] . Copied >>> from datasets import Dataset, Features, Audio, Image >>> audio = [ "path/to/audio.wav" ] * 10 >>> features = Features({ "audio" : Audio()}) >>> ds = Dataset.from_dict({ "audio" : audio}, features=features) >>> ds = ds.with_format( "tf" ) >>> ds[ 0 ][ "audio" ][ "array" ] <tf.Tensor: shape=( 202311 ,), dtype=float32, numpy= array([ 6.1035156e-05 , 1.5258789e-05 , 1.6784668e-04 , ..., - 1.5258789e-05 , - 1.5258789e-05 , 1.5258789e-05 ], dtype=float32)> >>> ds[ 0 ][ "audio" ][ "sampling_rate" ] <tf.Tensor: shape=(), dtype=int32, numpy= 44100 > Data loading Although you can load individual samples and batches just by indexing into your dataset, this won’t work if you want to use Keras methods like fit() and predict() . You could write a generator function that shuffles and loads batches from your dataset and fit() on that, but that sounds like a lot of unnecessary work. Instead, if you want to stream data from your dataset on-the-fly, we recommend converting your dataset to a tf.data.Dataset using the to_tf_dataset() method. The tf.data.Dataset class covers a wide range of use-cases - it is often created from Tensors in memory, or using a load function to read files on disc or external storage. The dataset can be transformed arbitrarily with the map() method, or methods like batch() and shuffle() can be used to create a dataset that’s ready for training. These methods do not modify the stored data in any way - instead, the methods build a data pipeline graph that will be executed when the dataset is iterated over, usually during model training or inference. This is different from the map() method of Hugging Face Dataset objects, which runs the map function immediately and saves the new or changed columns. Since the entire data preprocessing pipeline can be compiled in a tf.data.Dataset , this approach allows for massively parallel, asynchronous data loading and training. However, the requirement for graph compilation can be a limitation, particularly for Hugging Face tokenizers, which are usually not (yet!) compilable as part of a TF graph. As a result, we usually advise pre-processing the dataset as a Hugging Face dataset, where arbitrary Python functions can be used, and then converting to tf.data.Dataset afterwards using to_tf_dataset() to get a batched dataset ready for training. To see examples of this approach, please see the examples or notebooks for transformers . Using to_tf_dataset() Using to_tf_dataset() is straightforward. 
Once your dataset is preprocessed and ready, simply call it like so: Copied >>> from datasets import Dataset >>> data = { "inputs" : [[ 1 , 2 ],[ 3 , 4 ]], "labels" : [ 0 , 1 ]} >>> ds = Dataset.from_dict(data) >>> tf_ds = ds.to_tf_dataset( columns=[ "inputs" ], label_cols=[ "labels" ], batch_size= 2 , shuffle= True ) The returned tf_ds object here is now fully ready to train on, and can be passed directly to model.fit() . Note that you set the batch size when creating the dataset, and so you don’t need to specify it when calling fit() : Copied >>> model.fit(tf_ds, epochs= 2 ) For a full description of the arguments, please see the to_tf_dataset() documentation. In many cases, you will also need to add a collate_fn to your call. This is a function that takes multiple elements of the dataset and combines them into a single batch. When all elements have the same length, the built-in default collator will suffice, but for more complex tasks a custom collator may be necessary. In particular, many tasks have samples with varying sequence lengths which will require a data collator that can pad batches correctly. You can see examples of this in the transformers NLP examples and notebooks , where variable sequence lengths are very common. If you find that loading with to_tf_dataset is slow, you can also use the num_workers argument. This spins up multiple subprocesses to load data in parallel. This feature is recent and still somewhat experimental - please file an issue if you encounter any bugs while using it! When to use to_tf_dataset The astute reader may have noticed at this point that we have offered two approaches to achieve the same goal - if you want to pass your dataset to a TensorFlow model, you can either convert the dataset to a Tensor or dict of Tensors using .with_format('tf') , or you can convert the dataset to a tf.data.Dataset with to_tf_dataset() . Either of these can be passed to model.fit() , so which should you choose? The key thing to recognize is that when you convert the whole dataset to Tensor s, it is static and fully loaded into RAM. This is simple and convenient, but if any of the following apply, you should probably use to_tf_dataset() instead: Your dataset is too large to fit in RAM. to_tf_dataset() streams only one batch at a time, so even very large datasets can be handled with this method. You want to apply random transformations using dataset.with_transform() or the collate_fn . This is common in several modalities, such as image augmentations when training vision models, or random masking when training masked language models. Using to_tf_dataset() will apply those transformations at the moment when a batch is loaded, which means the same samples will get different augmentations each time they are loaded. This is usually what you want. Your data has a variable dimension, such as input texts in NLP that consist of varying numbers of tokens. When you create a batch with samples with a variable dimension, the standard solution is to pad the shorter samples to the length of the longest one. When you stream samples from a dataset with to_tf_dataset , you can apply this padding to each batch via your collate_fn . However, if you want to convert such a dataset to dense Tensor s, then you will have to pad samples to the length of the longest sample in the entire dataset! This can result in huge amounts of padding, which wastes memory and reduces your model’s speed. 
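For illustration, a minimal hand-rolled collate_fn for a hypothetical variable-length input_ids column (padding with 0) might look like the sketch below; in practice you would often reach for a ready-made collator such as the ones in transformers:

import numpy as np

def pad_collate_fn(batch):
    # `batch` is a list of examples (dicts); pad input_ids to the longest sequence in the batch.
    max_len = max(len(example["input_ids"]) for example in batch)
    input_ids = np.array(
        [example["input_ids"] + [0] * (max_len - len(example["input_ids"])) for example in batch]
    )
    labels = np.array([example["labels"] for example in batch])
    return {"input_ids": input_ids, "labels": labels}

tf_ds = ds.to_tf_dataset(
    columns=["input_ids"],
    label_cols=["labels"],
    batch_size=16,
    shuffle=True,
    collate_fn=pad_collate_fn,
)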
Caveats and limitations Right now, to_tf_dataset() always returns a batched dataset - we will add support for unbatched datasets soon!
Autoscaling.txt
Autoscaling
Autoscaling allows you to dynamically adjust the number of endpoint replicas running your models based on traffic and accelerator utilization. By leveraging autoscaling, you can seamlessly handle varying workloads while optimizing costs and ensuring high availability. Scaling Criteria The autoscaling process is triggered based on the accelerator's utilization metrics. The criteria for scaling differ depending on the type of accelerator being used: CPU Accelerators : A new replica is added when the average CPU utilization of all replicas reaches 80%. GPU Accelerators : A new replica is added when the average GPU utilization of all replicas over a 1-minute window reaches 80%. It's important to note that the scale-up process runs every minute and the scale-down process every 2 minutes. This frequency ensures a balance between responsiveness and stability of the autoscaling system, with a 300-second stabilization window after scaling down. Scaling based on pending requests (beta feature) You can change the scaling criteria to be based on pending requests instead of utilization metrics. This is currently an experimental feature and we advise testing prior to using it for production workloads. Pending requests are requests that have not yet received an HTTP status, meaning they include in-flight requests and requests currently being processed. By default, if there are more than 1.5 pending requests per replica in the past 20 seconds, it triggers an autoscaling event and adds a replica to your deployment. You can adjust this threshold to meet your specific requirements under Endpoint settings.
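As a concrete illustration of that rule (with made-up numbers): if an endpoint is running 4 replicas and 9 requests are pending over the 20-second window, that is 9 / 4 = 2.25 pending requests per replica, which exceeds the default threshold of 1.5, so one additional replica would be provisioned.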
Considerations for Effective Autoscaling While autoscaling offers convenient resource management, certain considerations should be kept in mind to ensure its effectiveness: Model Initialization Time : During the initialization of a new replica, the model is downloaded and loaded into memory. If your replicas have a long initialization time, autoscaling may not be as effective. This is because the average GPU utilization might fall below the threshold during that time, triggering the automatic scaling down of your endpoint. Enterprise Plan Control : If you have an enterprise plan , you have full control over the autoscaling definitions. This allows you to customize the scaling thresholds, behavior and criteria based on your specific requirements. Scaling to 0 Inference Endpoints also supports autoscaling to 0, which means reducing the number of replicas to 0 when there is no incoming traffic. This feature is based on request patterns rather than accelerator utilization. When an endpoint remains idle without receiving any requests for over 15 minutes, the system automatically scales down the endpoint to 0 replicas. To enable the feature, go to the Settings page and you'll find a section called "Automatic Scale-to-Zero". Scaling to 0 replicas helps optimize cost savings by minimizing resource usage during periods of inactivity. However, it's important to be aware that scaling to 0 implies a cold start period when the endpoint receives a new request. Additionally, the HTTP server will respond with a status code 502 Bad Gateway while the new replica is initializing. Please note that there is currently no queueing system in place for incoming requests. Therefore, we recommend developing your own request queue client-side with proper error handling to optimize throughput and latency. The duration of the cold start period varies depending on your model's size. It is recommended to consider the potential latency impact when enabling scaling to 0 and managing user expectations.
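For example, a minimal client-side retry loop along those lines might look like the sketch below (the endpoint URL and payload are placeholders; tune the backoff and retry count to your latency budget):

import time
import requests

def query_with_retry(url, payload, token, max_retries=8, backoff_seconds=10):
    headers = {"Authorization": f"Bearer {token}"}
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code == 502:
            # The endpoint is likely cold-starting from 0 replicas; wait and retry.
            time.sleep(backoff_seconds)
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError("Endpoint did not become available within the retry budget")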
ZenML_on_Spaces.txt
ZenML on Spaces
ZenML is an extensible, open-source MLOps framework for creating portable, production-ready MLOps pipelines. It's built for Data Scientists, ML Engineers, and MLOps Developers to collaborate as they develop to production. ZenML offers a simple and flexible syntax, is cloud- and tool-agnostic, and has interfaces/abstractions catered toward ML workflows. With ZenML you'll have all your favorite tools in one place, so you can tailor a workflow that caters to your specific needs. The ZenML Huggingface Space allows you to get up and running with a deployed version of ZenML with just a few clicks. Within a few minutes, you'll have a default ZenML dashboard deployed and ready for you to connect to from your local machine. In the sections that follow, you'll learn to deploy your own instance of ZenML and use it to view and manage your machine learning pipelines right from the Hub. ZenML on Huggingface Spaces is a self-contained application completely hosted on the Hub using Docker . Visit the ZenML documentation to learn more about its features and how to get started with running your machine learning pipelines through your Huggingface Spaces deployment.
You can check out some small sample examples of ZenML pipelines to get started or take your pick of some more complex production-grade projects at the ZenML Projects repository . ZenML integrates with many of your favorite tools out of the box, including Huggingface of course! If there’s something else you want to use, we’re built to be extensible and you can easily make it work with whatever your custom tool or workflow is. ⚡️ Deploy ZenML on Spaces You can deploy ZenML on Spaces with just a few clicks: To set up your ZenML app, you need to specify three main components: the Owner (either your personal account or an organization), a Space name, and the Visibility (a bit lower down the page). Note that the space visibility needs to be set to ‘Public’ if you wish to connect to the ZenML server from your local machine. You have the option here to select a higher tier machine to use for your server. The advantage of selecting a paid CPU instance is that it is not subject to auto-shutdown policies and thus will stay up as long as you leave it up. In order to make use of a persistent CPU, you’ll likely want to create and set up a MySQL database to connect to (see below). To personalize your Space’s appearance, such as the title, emojis, and colors, navigate to “Files and Versions” and modify the metadata in your README.md file. Full information on Spaces configuration parameters can be found on the HuggingFace documentation reference guide . After creating your Space, you’ll notice a ‘Building’ status along with logs displayed on the screen. When this switches to ‘Running’, your Space is ready for use. If the ZenML login UI isn’t visible, try refreshing the page. In the upper-right hand corner of your space you’ll see a button with three dots which, when you click on it, will offer you a menu option to “Embed this Space”. (See the HuggingFace documentation for more details on this feature.) Copy the “Direct URL” shown in the box that you can now see on the screen. This should look something like this: https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space . Open that URL and use our default login to access the dashboard (username: ‘default’, password: (leave it empty)). Connecting to your ZenML Server from your Local Machine Once you have your ZenML server up and running, you can connect to it from your local machine. To do this, you’ll need to get your Space’s ‘Direct URL’ (see above). Your Space's URL will only be available and usable for connecting from your local machine if the visibility of the space is set to 'Public'. You can use the ‘Direct URL’ to connect to your ZenML server from your local machine with the following CLI command (after installing ZenML, and using your custom URL instead of the placeholder): Copied zenml connect --url '<YOUR_HF_SPACES_DIRECT_URL>' --username='default' --password='' You can also use the Direct URL in your browser to use the ZenML dashboard as a fullscreen application (i.e. without the HuggingFace Spaces wrapper around it). The ZenML dashboard will currently not work when viewed from within the Huggingface webpage (i.e. wrapped in the main `https://huggingface.co/...` website). This is on account of a limitation in how cookies are handled between ZenML and Huggingface. You **must** view the dashboard from the 'Direct URL' (see above). Extra Configuration Options By default the ZenML application will be configured to use a SQLite non-persistent database. 
If you want to use a persistent database, you can configure this by amending the Dockerfile in your Space's root directory. For full details on the various parameters you can change, see our reference documentation on configuring ZenML when deployed with Docker. If you are using the space just for testing and experimentation, you don't need to make any changes to the configuration. Everything will work out of the box. You can also use an external secrets backend together with your HuggingFace Spaces as described in our documentation . You should be sure to use HuggingFace's inbuilt 'Repository secrets' functionality to configure any secrets you need to use in your Dockerfile configuration. See the documentation for more details on how to set this up. If you wish to use a cloud secrets backend together with ZenML for secrets management, **you must take the following minimal security precautions** on your ZenML Server on the Dashboard: change your password on the default account that you get when you start. You can do this from the Dashboard or via the CLI. create a new user account with a password and assign it the admin role. This can also be done from the Dashboard (by 'inviting' a new user) or via the CLI. reconnect to the server using the new user account and password as described above, and use this new user account as your working account. This is because the default user created by the HuggingFace Spaces deployment process has no password assigned to it and as the Space is publicly accessible (since the Space is public) potentially anyone could access your secrets without this extra step. To change your password navigate to the Settings page by clicking the button in the upper right hand corner of the Dashboard and then click 'Update Password'. Upgrading your ZenML Server on HF Spaces The default space will use the latest version of ZenML automatically. If you want to update your version, you can simply select the 'Factory reboot' option within the 'Settings' tab of the space. Note that this will wipe any data contained within the space and so if you are not using a MySQL persistent database (as described above) you will lose any data contained within your ZenML deployment on the space. You can also configure the space to use an earlier version by updating the Dockerfile 's FROM import statement at the very top. Next Steps As a next step, check out our Starter Guide to MLOps with ZenML which is a series of short practical pages on how to get going quickly. Alternatively, check out our quickstart example which is a full end-to-end example of many of the features of ZenML. 🤗 Feedback and support If you are having trouble with your ZenML server on HuggingFace Spaces, you can view the logs by clicking on the "Open Logs" button at the top of the space. This will give you more context of what's happening with your server. If you have suggestions or need specific support for anything else which isn't working, please join the ZenML Slack community and we'll be happy to help you out!
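As a quick end-to-end check that your local client really talks to the Spaces-hosted server, a minimal pipeline is enough; its run should then appear in the dashboard. A sketch using ZenML's decorator API (recent ZenML versions; adapt it to the version pinned in your Space):

from zenml import pipeline, step

@step
def load_number() -> int:
    return 42

@step
def double(value: int) -> int:
    return value * 2

@pipeline
def smoke_test_pipeline():
    double(load_number())

if __name__ == "__main__":
    # Runs against whichever server `zenml connect` pointed your client at.
    smoke_test_pipeline()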
Sentence_Transformers_on_AWS_Inferentia_with_Optim.txt
Sentence Transformers on AWS Inferentia with Optimum Neuron
Text Models There is a notebook version of this tutorial here . This guide explains how to compile, load, and use Sentence Transformers (SBERT) models on AWS Inferentia2 with Optimum Neuron, enabling efficient calculation of embeddings. Sentence Transformers are powerful models for generating sentence embeddings. You can use these Sentence Transformers models to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared e.g. with cosine-similarity to find sentences with a similar meaning. This can be useful for semantic textual similarity, semantic search, or paraphrase mining. Convert Sentence Transformers model to AWS Inferentia2 First, you need to convert your Sentence Transformers model to a format compatible with AWS Inferentia2. You can compile Sentence Transformers models with Optimum Neuron using the optimum-cli or NeuronModelForSentenceTransformers class. Below you will find an example for both approaches. We have to make sure sentence-transformers is installed. That's only needed for exporting the model. Copied pip install sentence-transformers Here we will use the NeuronModelForSentenceTransformers , which can be used to convert any Sentence Transformers model to a format compatible with AWS Inferentia2 or load already converted models. When exporting models with the NeuronModelForSentenceTransformers you need to set export=True and define the input shape and batch size.
The input shape is defined by the sequence_length and the batch size by batch_size . Copied from optimum.neuron import NeuronModelForSentenceTransformers # Sentence Transformers model from HuggingFace model_id = "BAAI/bge-small-en-v1.5" input_shapes = { "batch_size" : 1 , "sequence_length" : 384 } # mandatory shapes # Load Transformers model and export it to AWS Inferentia2 model = NeuronModelForSentenceTransformers.from_pretrained(model_id, export= True , **input_shapes) # Save model to disk model.save_pretrained( "bge_emb_inf2/" ) Here we will use the optimum-cli to convert the model. Similar to the NeuronModelForSentenceTransformers we need to define our input shape and batch size. The input shape is defined by the sequence_length and the batch size by batch_size . The optimum-cli will automatically convert the model to a format compatible with AWS Inferentia2 and save it to the specified output directory. Copied optimum-cli export neuron -m BAAI/bge-small-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb_inf2/ Load compiled Sentence Transformers model and run inference Once we have a compiled Sentence Transformers model, which we either exported ourselves or is available on the Hugging Face Hub, we can load it and run inference. For loading the model we can use the NeuronModelForSentenceTransformers class, which is an abstraction layer for the SentenceTransformer class. The NeuronModelForSentenceTransformers class will automatically pad the input to the specified sequence_length and run inference on AWS Inferentia2. Copied from optimum.neuron import NeuronModelForSentenceTransformers from transformers import AutoTokenizer model_id_or_path = "bge_emb_inf2/" tokenizer_id = "BAAI/bge-small-en-v1.5" # Load model and tokenizer model = NeuronModelForSentenceTransformers.from_pretrained(model_id_or_path) tokenizer = AutoTokenizer.from_pretrained(tokenizer_id) # Run inference prompt = "I like to eat apples" encoded_input = tokenizer(prompt, return_tensors= 'pt' ) outputs = model(**encoded_input) token_embeddings = outputs.token_embeddings sentence_embedding = outputs.sentence_embedding print ( f"token embeddings: {token_embeddings.shape} " ) # torch.Size([1, 7, 384]) print ( f"sentence_embedding: {sentence_embedding.shape} " ) # torch.Size([1, 384]) Production Usage For deploying these models in a production environment, refer to the Amazon SageMaker Blog . CLIP Compile CLIP for AWS Inferentia2 You can compile CLIP models with Optimum Neuron either by using the optimum-cli or NeuronModelForSentenceTransformers class. 
Adopt one approach that you prefer: With the Optimum CLI Copied optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_length 64 --text_batch_size 3 --image_batch_size 1 --num_channels 3 --height 224 --width 224 --task feature-extraction --subfolder 0_CLIPModel clip_emb/ With the NeuronModelForSentenceTransformers class Copied from optimum.neuron import NeuronModelForSentenceTransformers model_id = "sentence-transformers/clip-ViT-B-32" # configs for compiling model input_shapes = { "num_channels" : 3 , "height" : 224 , "width" : 224 , "text_batch_size" : 3 , "image_batch_size" : 1 , "sequence_length" : 64 , } emb_model = NeuronModelForSentenceTransformers.from_pretrained( model_id, subfolder= "0_CLIPModel" , export= True , library_name= "sentence_transformers" , dynamic_batch_size= False , **input_shapes ) # Save locally or upload to the HuggingFace Hub save_directory = "clip_emb/" emb_model.save_pretrained(save_directory) Load compiled Sentence Transformers model and run inference Copied from PIL import Image from sentence_transformers import util from transformers import CLIPProcessor from optimum.neuron import NeuronModelForSentenceTransformers save_directory = "clip_emb" emb_model = NeuronModelForSentenceTransformers.from_pretrained(save_directory) processor = CLIPProcessor.from_pretrained(save_directory) inputs = processor( text=[ "Two dogs in the snow" , 'A cat on a table' , 'A picture of London at night' ], images=Image. open ( "two_dogs_in_snow.jpg" ), return_tensors= "pt" , padding= True ) outputs = emb_model(**inputs) # Compute cosine similarities cos_scores = util.cos_sim(outputs.image_embeds, outputs.text_embeds) print (cos_scores) # tensor([[0.3072, 0.1016, 0.1095]]) Caveat Since compiled models with dynamic batching enabled only accept input tensors with the same batch size, we cannot set dynamic_batch_size=True if the input texts and images have different batch sizes. And as the NeuronModelForSentenceTransformers class pads the inputs to the batch sizes ( text_batch_size and image_batch_size ) used during the compilation, you could use relatively larger batch sizes during the compilation for flexibility with the trade-off of compute. For example, if you want to encode 3, 4, or 5 texts and 1 image, you could set text_batch_size = 5 = max(3, 4, 5) and image_batch_size = 1 during the compilation.
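Going back to the text model compiled at the start of this guide, a short sketch of using its embeddings for simple semantic similarity (the sentences are arbitrary; since that model was compiled with batch_size=1, we encode one sentence at a time):

from sentence_transformers import util

sentences = ["I like to eat apples", "Apples are my favourite fruit", "The stock market fell today"]
embeddings = []
for sentence in sentences:
    encoded = tokenizer(sentence, return_tensors="pt")
    embeddings.append(model(**encoded).sentence_embedding)

# Compare the first sentence against the other two; the first pair should score higher.
print(util.cos_sim(embeddings[0], embeddings[1]))
print(util.cos_sim(embeddings[0], embeddings[2]))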
Text_or_image-to-video.txt
Text or image-to-video
Driven by the success of text-to-image diffusion models, generative video models are able to generate short clips of video from a text prompt or an initial image. These models extend a pretrained diffusion model to generate videos by adding some type of temporal and/or spatial convolution layer to the architecture. A mixed dataset of images and videos is used to train the model, which learns to output a series of video frames based on the text or image conditioning.
This guide will show you how to generate videos, how to configure video model parameters, and how to control video generation. Popular models Discover other cool and trending video generation models on the Hub here ! Stable Video Diffusions (SVD) , I2VGen-XL , AnimateDiff , and ModelScopeT2V are popular models used for video diffusion. Each model is distinct. For example, AnimateDiff inserts a motion modeling module into a frozen text-to-image model to generate personalized animated images, whereas SVD is entirely pretrained from scratch with a three-stage training process to generate short high-quality videos. CogVideoX is another popular video generation model. The model is a multidimensional transformer that integrates text, time, and space. It employs full attention in the attention module and includes an expert block at the layer level to spatially align text and video. CogVideoX CogVideoX uses a 3D Variational Autoencoder (VAE) to compress videos along the spatial and temporal dimensions. Begin by loading the CogVideoXPipeline and passing an initial text or image to generate a video. CogVideoX is available for image-to-video and text-to-video. THUDM/CogVideoX-5b-I2V uses the CogVideoXImageToVideoPipeline for image-to-video. THUDM/CogVideoX-5b and THUDM/CogVideoX-2b are available for text-to-video with the CogVideoXPipeline . Copied import torch from diffusers import CogVideoXImageToVideoPipeline from diffusers.utils import export_to_video, load_image prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion." image = load_image(image= "cogvideox_rocket.png" ) pipe = CogVideoXImageToVideoPipeline.from_pretrained( "THUDM/CogVideoX-5b-I2V" , torch_dtype=torch.bfloat16 ) pipe.vae.enable_tiling() pipe.vae.enable_slicing() video = pipe( prompt=prompt, image=image, num_videos_per_prompt= 1 , num_inference_steps= 50 , num_frames= 49 , guidance_scale= 6 , generator=torch.Generator(device= "cuda" ).manual_seed( 42 ), ).frames[ 0 ] export_to_video(video, "output.mp4" , fps= 8 ) initial image generated video Stable Video Diffusion SVD is based on the Stable Diffusion 2.1 model and it is trained on images, then low-resolution videos, and finally a smaller dataset of high-resolution videos. This model generates a short 2-4 second video from an initial image. You can learn more details about model, like micro-conditioning, in the Stable Video Diffusion guide. Begin by loading the StableVideoDiffusionPipeline and passing an initial image to generate a video from. 
Copied import torch from diffusers import StableVideoDiffusionPipeline from diffusers.utils import load_image, export_to_video pipeline = StableVideoDiffusionPipeline.from_pretrained( "stabilityai/stable-video-diffusion-img2vid-xt" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png" ) image = image.resize(( 1024 , 576 )) generator = torch.manual_seed( 42 ) frames = pipeline(image, decode_chunk_size= 8 , generator=generator).frames[ 0 ] export_to_video(frames, "generated.mp4" , fps= 7 ) initial image generated video I2VGen-XL I2VGen-XL is a diffusion model that can generate higher resolution videos than SVD and it is also capable of accepting text prompts in addition to images. The model is trained with two hierarchical encoders (detail and global encoder) to better capture low and high-level details in images. These learned details are used to train a video diffusion model which refines the video resolution and details in the generated video. You can use I2VGen-XL by loading the I2VGenXLPipeline , and passing a text and image prompt to generate a video. Copied import torch from diffusers import I2VGenXLPipeline from diffusers.utils import export_to_gif, load_image pipeline = I2VGenXLPipeline.from_pretrained( "ali-vilab/i2vgen-xl" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" image = load_image(image_url).convert( "RGB" ) prompt = "Papers were floating in the air on a table in the library" negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" generator = torch.manual_seed( 8888 ) frames = pipeline( prompt=prompt, image=image, num_inference_steps= 50 , negative_prompt=negative_prompt, guidance_scale= 9.0 , generator=generator ).frames[ 0 ] export_to_gif(frames, "i2v.gif" ) initial image generated video AnimateDiff AnimateDiff is an adapter model that inserts a motion module into a pretrained diffusion model to animate an image. The adapter is trained on video clips to learn motion which is used to condition the generation process to create a video. It is faster and easier to only train the adapter and it can be loaded into most diffusion models, effectively turning them into “video models”. Start by loading a MotionAdapter . Copied import torch from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter from diffusers.utils import export_to_gif adapter = MotionAdapter.from_pretrained( "guoyww/animatediff-motion-adapter-v1-5-2" , torch_dtype=torch.float16) Then load a finetuned Stable Diffusion model with the AnimateDiffPipeline . Copied pipeline = AnimateDiffPipeline.from_pretrained( "emilianJR/epiCRealism" , motion_adapter=adapter, torch_dtype=torch.float16) scheduler = DDIMScheduler.from_pretrained( "emilianJR/epiCRealism" , subfolder= "scheduler" , clip_sample= False , timestep_spacing= "linspace" , beta_schedule= "linear" , steps_offset= 1 , ) pipeline.scheduler = scheduler pipeline.enable_vae_slicing() pipeline.enable_model_cpu_offload() Create a prompt and generate the video. 
Copied output = pipeline( prompt= "A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution" , negative_prompt= "bad quality, worse quality, low resolution" , num_frames= 16 , guidance_scale= 7.5 , num_inference_steps= 50 , generator=torch.Generator( "cpu" ).manual_seed( 49 ), ) frames = output.frames[ 0 ] export_to_gif(frames, "animation.gif" ) ModelscopeT2V ModelscopeT2V adds spatial and temporal convolutions and attention to a UNet, and it is trained on image-text and video-text datasets to enhance what it learns during training. The model takes a prompt, encodes it and creates text embeddings which are denoised by the UNet, and then decoded by a VQGAN into a video. ModelScopeT2V generates watermarked videos due to the datasets it was trained on. To use a watermark-free model, try the cerspense/zeroscope_v2_76w model with the TextToVideoSDPipeline first, and then upscale it’s output with the cerspense/zeroscope_v2_XL checkpoint using the VideoToVideoSDPipeline . Load a ModelScopeT2V checkpoint into the DiffusionPipeline along with a prompt to generate a video. Copied import torch from diffusers import DiffusionPipeline from diffusers.utils import export_to_video pipeline = DiffusionPipeline.from_pretrained( "damo-vilab/text-to-video-ms-1.7b" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() pipeline.enable_vae_slicing() prompt = "Confident teddy bear surfer rides the wave in the tropics" video_frames = pipeline(prompt).frames[ 0 ] export_to_video(video_frames, "modelscopet2v.mp4" , fps= 10 ) Configure model parameters There are a few important parameters you can configure in the pipeline that’ll affect the video generation process and quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Number of frames The num_frames parameter determines how many video frames are generated per second. A frame is an image that is played in a sequence of other frames to create motion or a video. This affects video length because the pipeline generates a certain number of frames per second (check a pipeline’s API reference for the default value). To increase the video duration, you’ll need to increase the num_frames parameter. Copied import torch from diffusers import StableVideoDiffusionPipeline from diffusers.utils import load_image, export_to_video pipeline = StableVideoDiffusionPipeline.from_pretrained( "stabilityai/stable-video-diffusion-img2vid" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png" ) image = image.resize(( 1024 , 576 )) generator = torch.manual_seed( 42 ) frames = pipeline(image, decode_chunk_size= 8 , generator=generator, num_frames= 25 ).frames[ 0 ] export_to_video(frames, "generated.mp4" , fps= 7 ) num_frames=14 num_frames=25 Guidance scale The guidance_scale parameter controls how closely aligned the generated video and text prompt or initial image is. A higher guidance_scale value means your generated video is more aligned with the text prompt or initial image, while a lower guidance_scale value means your generated video is less aligned which could give the model more “creativity” to interpret the conditioning input. SVD uses the min_guidance_scale and max_guidance_scale parameters for applying guidance to the first and last frames respectively. 
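For SVD, these frame-wise bounds are passed directly in the pipeline call. The snippet below is a minimal sketch (not part of the original guide) that reuses the rocket image from the earlier SVD example with illustrative values; the I2VGen-XL example that follows then shows the standard guidance_scale argument set to 1.0, for comparison with the guidance_scale=9.0 result from earlier.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipeline = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
# min_guidance_scale applies to the first frame and max_guidance_scale to the last,
# with guidance increasing across the frames in between (values here are illustrative).
frames = pipeline(
    image,
    decode_chunk_size=8,
    generator=generator,
    min_guidance_scale=1.0,
    max_guidance_scale=3.0,
).frames[0]
export_to_video(frames, "generated_guidance.mp4", fps=7)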
Copied import torch from diffusers import I2VGenXLPipeline from diffusers.utils import export_to_gif, load_image pipeline = I2VGenXLPipeline.from_pretrained( "ali-vilab/i2vgen-xl" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" image = load_image(image_url).convert( "RGB" ) prompt = "Papers were floating in the air on a table in the library" negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" generator = torch.manual_seed( 0 ) frames = pipeline( prompt=prompt, image=image, num_inference_steps= 50 , negative_prompt=negative_prompt, guidance_scale= 1.0 , generator=generator ).frames[ 0 ] export_to_gif(frames, "i2v.gif" ) guidance_scale=9.0 guidance_scale=1.0 Negative prompt A negative prompt deters the model from generating things you don’t want it to. This parameter is commonly used to improve overall generation quality by removing poor or bad features such as “low resolution” or “bad details”. Copied import torch from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter from diffusers.utils import export_to_gif adapter = MotionAdapter.from_pretrained( "guoyww/animatediff-motion-adapter-v1-5-2" , torch_dtype=torch.float16) pipeline = AnimateDiffPipeline.from_pretrained( "emilianJR/epiCRealism" , motion_adapter=adapter, torch_dtype=torch.float16) scheduler = DDIMScheduler.from_pretrained( "emilianJR/epiCRealism" , subfolder= "scheduler" , clip_sample= False , timestep_spacing= "linspace" , beta_schedule= "linear" , steps_offset= 1 , ) pipeline.scheduler = scheduler pipeline.enable_vae_slicing() pipeline.enable_model_cpu_offload() output = pipeline( prompt= "360 camera shot of a sushi roll in a restaurant" , negative_prompt= "Distorted, discontinuous, ugly, blurry, low resolution, motionless, static" , num_frames= 16 , guidance_scale= 7.5 , num_inference_steps= 50 , generator=torch.Generator( "cpu" ).manual_seed( 0 ), ) frames = output.frames[ 0 ] export_to_gif(frames, "animation.gif" ) no negative prompt negative prompt applied Model-specific parameters There are some pipeline parameters that are unique to each model such as adjusting the motion in a video or adding noise to the initial image. Stable Video Diffusion Text2Video-Zero Stable Video Diffusion provides additional micro-conditioning for the frame rate with the fps parameter and for motion with the motion_bucket_id parameter. Together, these parameters allow for adjusting the amount of motion in the generated video. There is also a noise_aug_strength parameter that increases the amount of noise added to the initial image. Varying this parameter affects how similar the generated video and initial image are. A higher noise_aug_strength also increases the amount of motion. To learn more, read the Micro-conditioning guide. Control video generation Video generation can be controlled similar to how text-to-image, image-to-image, and inpainting can be controlled with a ControlNetModel . The only difference is you need to use the CrossFrameAttnProcessor so each frame attends to the first frame. Text2Video-Zero Text2Video-Zero video generation can be conditioned on pose and edge images for even greater control over a subject’s motion in the generated video or to preserve the identity of a subject/object in the video. 
You can also use Text2Video-Zero with InstructPix2Pix for editing videos with text. The following example demonstrates conditioning on pose images. Start by downloading a video and extracting the pose images from it. Copied
from huggingface_hub import hf_hub_download
from PIL import Image
import imageio

filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4"
repo_id = "PAIR/Text2Video-Zero"
video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)

reader = imageio.get_reader(video_path, "ffmpeg")
frame_count = 8
pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
Load a ControlNetModel for pose estimation and a checkpoint into the StableDiffusionControlNetPipeline. Then you'll use the CrossFrameAttnProcessor for the UNet and ControlNet. Copied
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

pipeline.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipeline.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
Fix the latents for all the frames, and then pass your prompt and extracted pose images to the model to generate a video. Copied
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)

prompt = "Darth Vader dancing in a desert"
result = pipeline(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)
Optimize Video generation requires a lot of memory because you're generating many video frames at once. You can reduce the memory requirements at the expense of some inference speed. Try: offloading pipeline components that are no longer needed to the CPU; feed-forward chunking, which runs the feed-forward layer in a loop instead of all at once; and breaking up the number of frames the VAE has to decode into chunks instead of decoding them all at once. Copied
- pipeline.enable_model_cpu_offload()
- frames = pipeline(image, decode_chunk_size=8, generator=generator).frames[0]
+ pipeline.enable_model_cpu_offload()
+ pipeline.unet.enable_forward_chunking()
+ frames = pipeline(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
If memory is not an issue and you want to optimize for speed, try wrapping the UNet with torch.compile. Copied
- pipeline.enable_model_cpu_offload()
+ pipeline.to("cuda")
+ pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
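Putting the memory-saving options above together, a complete run might look like the following sketch, which reuses the SVD setup from earlier in this guide (actual savings depend on your hardware):
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipeline = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
# Offload idle components to the CPU and run the feed-forward layers in chunks to lower peak memory.
pipeline.enable_model_cpu_offload()
pipeline.unet.enable_forward_chunking()

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
# A small decode_chunk_size makes the VAE decode the frames in chunks instead of all at once.
frames = pipeline(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
export_to_video(frames, "generated.mp4", fps=7)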
Contribute_to_🤗_Transformers.txt
Contribute to 🤗 Transformers Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Contribute to 🤗 Transformers Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Contribute to 🤗 Transformers Everyone is welcome to contribute, and we value everybody’s contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable. It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you. However you choose to contribute, please be mindful and respect our code of conduct . This guide was heavily inspired by the awesome scikit-learn guide to contributing . Ways to contribute There are several ways you can contribute to 🤗 Transformers: Fix outstanding issues with the existing code. Submit issues related to bugs or desired new features. Implement new models. Contribute to the examples or to the documentation. If you don’t know where to start, there is a special Good First Issue listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source. The best way to do that is to open a Pull Request and link it to the issue that you’d like to work on. We try to give priority to opened PRs as we can easily track the progress of the fix, and if the contributor does not have time anymore, someone else can take the PR over. For something slightly more challenging, you can also take a look at the Good Second Issue list. In general though, if you feel like you know what you’re doing, go for it and we’ll help you get there! 🚀 All contributions are equally valuable to the community. 🥰 Fixing outstanding issues If you notice an issue with the existing code and have a fix in mind, feel free to start contributing and open a Pull Request! Submitting a bug-related issue or feature request Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback. Did you find a bug? The 🤗 Transformers library is robust and reliable thanks to users who report the problems they encounter. 
Before you report an issue, we would really appreciate it if you could make sure the bug was not already reported (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you’re unsure whether the bug is in your code or the library, please ask in the forum or on our discord first. This helps us respond quicker to fixing issues related to the library versus general questions. We have a docs bot , and we highly encourage you to ask all your questions there. There is always a chance your bug can be fixed with a simple flag 👾🔫 Once you’ve confirmed the bug hasn’t already been reported, please include the following information in your issue so we can quickly resolve it: Your OS type and version and Python , PyTorch and TensorFlow versions when applicable. A short, self-contained, code snippet that allows us to reproduce the bug in less than 30s. The full traceback if an exception is raised. Attach any other additional information, like screenshots, you think may help. To get the OS and software versions automatically, run the following command: Copied transformers-cli env You can also run the same command from the root of the repository: Copied python src/transformers/commands/transformers_cli.py env Do you want a new feature? If there is a new feature you’d like to see in 🤗 Transformers, please open an issue and describe: What is the motivation behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community? Whatever it is, we’d love to hear about it! Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we’ll be able to help you. Provide a code snippet that demonstrates the features usage. If the feature is related to a paper, please include a link. If your issue is well written we’re already 80% of the way there by the time you create it. We have added templates to help you get started with your issue. Do you want to implement a new model? New models are constantly released and if you want to implement a new model, please provide the following information: A short description of the model and a link to the paper. Link to the implementation if it is open-sourced. Link to the model weights if they are available. If you are willing to contribute the model yourself, let us know so we can help you add it to 🤗 Transformers! We have a technical guide for how to add a model to 🤗 Transformers . Do you want to add documentation? We’re always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved such as typos and any content that is missing, unclear or inaccurate. We’ll be happy to make the changes or help you make a contribution if you’re interested! For more details about how to generate, build, and write the documentation, take a look at the documentation README . Create a Pull Request Before writing any code, we strongly advise you to search through the existing PRs or issues to make sure nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to contribute to 🤗 Transformers. While git is not the easiest tool to use, it has the greatest manual. Type git --help in a shell and enjoy! 
If you prefer books, Pro Git is a very good reference. You’ll need Python 3.9 or above to contribute to 🤗 Transformers. Follow the steps below to start contributing: Fork the repository by clicking on the Fork button on the repository’s page. This creates a copy of the code under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied git clone [email protected]:<your Github handle>/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git Create a new branch to hold your development changes: Copied git checkout -b a-descriptive-name-for-my-changes 🚨 Do not work on the main branch! Set up a development environment by running the following command in a virtual environment: Copied pip install -e ".[dev]" If 🤗 Transformers was already installed in the virtual environment, remove it with pip uninstall transformers before reinstalling it in editable mode with the -e flag. Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that’s the case make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do: Copied pip install -e ".[quality]" which should be enough for most use cases. Develop the features in your branch. As you work on your code, you should make sure the test suite passes. Run the tests impacted by your changes like this: Copied pytest tests/<TEST_TO_RUN>.py For more information about tests, check out the Testing guide. 🤗 Transformers relies on black and ruff to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can’t be automated in one go with: Copied make fixup This target is also optimized to only work with files modified by the PR you’re working on. If you prefer to run the checks one after the other, the following command applies the style corrections: Copied make style 🤗 Transformers also uses ruff and a few custom scripts to check for coding mistakes. Quality controls are run by the CI, but you can run the same checks with: Copied make quality Finally, we have a lot of scripts to make sure we don’t forget to update some files when adding a new model. You can run these scripts with: Copied make repo-consistency To learn more about those checks and how to fix any issues with them, check out the Checks on a Pull Request guide. If you’re modifying documents under the docs/source directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check make sure you install the documentation builder: Copied pip install ".[docs]" Run the following command from the root of the repository: Copied doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build This will build the documentation in the ~/tmp/test-build folder where you can inspect the generated Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request. Once you’re happy with your changes, add the changed files with git add and record your changes locally with git commit : Copied git add modified_file.py git commit Please remember to write good commit messages to clearly communicate the changes you made! 
To keep your copy of the code up to date with the original repository, rebase your branch on upstream/branch before you open a pull request or if requested by a maintainer: Copied git fetch upstream git rebase upstream/main Push your changes to your branch: Copied git push -u origin a-descriptive-name-for-my-changes If you’ve already opened a pull request, you’ll need to force push with the --force flag. Otherwise, if the pull request hasn’t been opened yet, you can just push your changes normally. Now you can go to your fork of the repository on GitHub and click on Pull Request to open a pull request. Make sure you tick off all the boxes on our checklist below. When you’re ready, you can send your changes to the project maintainers for review. It’s ok if maintainers request changes, it happens to our core contributors too! So everyone can see the changes in the pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request. Pull request checklist ☐ The pull request title should summarize your contribution. ☐ If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people viewing the issue know you are working on it). ☐ To indicate a work in progress please prefix the title with [WIP] . These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged. ☐ Make sure existing tests pass. ☐ If adding a new feature, also add tests for it. If you are adding a new model, make sure you use ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...) to trigger the common tests. If you are adding new @slow tests, make sure they pass using RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py . If you are adding a new tokenizer, write tests and make sure RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py passes. CircleCI does not run the slow tests, but GitHub Actions does every night! ☐ All public methods must have informative docstrings (see modeling_bert.py for an example). ☐ Due to the rapidly growing repository, don’t add any images, videos and other non-text files that’ll significantly weigh down the repository. Instead, use a Hub repository such as hf-internal-testing to host these files and reference them by URL. We recommend placing documentation related images in the following repository: huggingface/documentation-images . You can open a PR on this dataset repository and ask a Hugging Face member to merge it. For more information about the checks run on a pull request, take a look at our Checks on a Pull Request guide. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the tests folder and examples tests in the examples folder. We like pytest and pytest-xdist because it’s faster. From the root of the repository, specify a path to a subfolder or a test file to run the test: Copied python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model Similarly, for the examples directory, specify a path to a subfolder or test file to run the test. 
For example, the following command tests the text classification subfolder in the PyTorch examples directory: Copied pip install -r examples/xxx/requirements.txt # only needed the first time python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification In fact, this is actually how our make test and make test-examples commands are implemented (not including the pip install )! You can also specify a smaller set of tests in order to test only the feature you’re working on. By default, slow tests are skipped but you can set the RUN_SLOW environment variable to yes to run them. This will download many gigabytes of models so make sure you have enough disk space, a good internet connection or a lot of patience! Remember to specify a path to a subfolder or a test file to run the test. Otherwise, you’ll run all the tests in the tests or examples folder, which will take a very long time! Copied RUN_SLOW= yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model RUN_SLOW= yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification Like the slow tests, there are other environment variables available which are not enabled by default during testing: RUN_CUSTOM_TOKENIZERS : Enables tests for custom tokenizers. RUN_PT_FLAX_CROSS_TESTS : Enables tests for PyTorch + Flax integration. RUN_PT_TF_CROSS_TESTS : Enables tests for TensorFlow + PyTorch integration. More environment variables and additional information can be found in the testing_utils.py . 🤗 Transformers uses pytest as a test runner only. It doesn’t use any pytest -specific features in the test suite itself. This means unittest is fully supported. Here’s how to run tests with unittest : Copied python -m unittest discover -s tests -t . -v python -m unittest discover -s examples -t examples -v Style guide For documentation strings, 🤗 Transformers follows the Google Python Style Guide . Check our documentation writing guide for more information. Develop on Windows On Windows (unless you’re working in Windows Subsystem for Linux or WSL), you need to configure git to transform Windows CRLF line endings to Linux LF line endings: Copied git config core.autocrlf input One way to run the make command on Windows is with MSYS2: Download MSYS2 , and we assume it’s installed in C:\msys64 . Open the command line C:\msys64\msys2.exe (it should be available from the Start menu). Run in the shell: pacman -Syu and install make with pacman -S make . Add C:\msys64\usr\bin to your PATH environment variable. You can now use make from any terminal (PowerShell, cmd.exe, etc.)! 🎉 Sync a forked repository with upstream main (the Hugging Face repository) When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository which adds reference notes to each upstream PR, and sends unnecessary notifications to the developers involved in these PRs. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied git checkout -b your-branch-for-syncing git pull --squash --no-commit upstream main git commit -m '<your message without GitHub references>' git push --set-upstream origin your-branch-for-syncing < > Update on GitHub ← Optimize inference using `torch.compile()` How to add a model to 🤗 Transformers? 
🤗_Hugging_Face_Space_Header.txt
🤗 Hugging Face Space Header Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation 🤗 Hugging Face Space Header Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started 🤗 Hugging Face Space Header A Typescript powered wrapper for the Space mini_header feature. Install Copied pnpm add @huggingface/space-header npm add @huggingface/space-header yarn add @huggingface/space-header Deno Copied // esm.sh import { init } from "https://esm.sh/@huggingface/space-header" // or npm: import { init } from "npm:@huggingface/space-header" Initialize Copied import { init } from "@huggingface/space-header" ; // ... init ( ":user/:spaceId" ); // init("enzostvs/lora-studio") for example ❗Important note: The init method must be called on the client side. Usage Uses the target option to inject the space-header into another DOM element Copied const app = document . getElementById ( "app" ); // ... 
init(":user/:spaceId", { target: app });
If you already have the space data, you can also pass it as a parameter to avoid a fetch Copied
init(space);
// space = {
//   id: string;
//   likes: number;
//   author: string;
// }
Embed_the_Dataset_Viewer_in_a_webpage.txt
Embed the Dataset Viewer in a webpage Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Embed the Dataset Viewer in a webpage Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Configure the Dataset Viewer Embed the Dataset Viewer in a webpage SQL Console Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Embed the Dataset Viewer in a webpage You can embed the Dataset Viewer in your own webpage using an iframe. The URL to use is https://huggingface.co/datasets/<namespace>/<dataset-name>/embed/viewer , where <namespace> is the owner of the dataset (user or organization) and <dataset-name> is the name of the dataset. You can also pass other parameters like the subset, split, filter, search or selected row. For example, the following iframe embeds the Dataset Viewer for the glue dataset from the nyu-mll organization: Copied < iframe src = "https://huggingface.co/datasets/nyu-mll/glue/embed/viewer" frameborder = "0" width = "100%" height = "560px" > </ iframe > You can also get the embed code directly from the Dataset Viewer interface. Click on the Embed button in the top right corner of the Dataset Viewer: It will open a modal with the iframe code that you can copy and paste into your webpage: Parameters All the parameters of the dataset viewer page can also be passed to the embedded viewer (filter, search, specific split, etc.) by adding them to the iframe URL. 
For example, to show the results of the search on mangrove in the test split of the rte subset of the nyu-mll/glue dataset, you can use the following URL: Copied
<iframe
  src="https://huggingface.co/datasets/nyu-mll/glue/embed/viewer/rte/split?search=mangrove"
  frameborder="0"
  width="100%"
  height="560px"
></iframe>
You can get this code directly from the Dataset Viewer interface by performing the search, clicking on the ⋮ button, then Embed. It will open a modal with the iframe code that you can copy and paste into your webpage. Examples The embedded dataset viewer is used in multiple Machine Learning tools and platforms to display datasets. Here are a few examples. Open a pull request if you want to appear in this section! Tool: ZenML htahir1 shares a blog post showing how you can use the ZenML integration with the Datasets Viewer to visualize a Hugging Face dataset within a ZenML pipeline. Tool: Metaflow + Outerbounds eddie-OB shows in a demo video how to include the dataset viewer in Metaflow cards on Outerbounds. Tool: AutoTrain abhishek showcases how the dataset viewer is integrated into AutoTrain in a demo video. Datasets: Alpaca-style datasets gallery davanstrien showcases the collection of Alpaca-style datasets in a space. Datasets: Docmatix andito uses the embedded viewer in the blog post announcing the release of Docmatix, a huge dataset for Document Visual Question Answering (DocVQA). App: Electric Vehicle Charge Finder cfahlgren1 embeds the dataset viewer in the Electric Vehicle Charge Finder app. App: Masader - Arabic NLP data catalogue Zaid showcases the dataset viewer in Masader - the Arabic NLP data catalogue.
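If you prefer to build these embed URLs programmatically, for example to generate several iframes at once, a small sketch like the one below assembles the path segments and query parameters described above. The helper function is hypothetical; the dataset, subset, split, and search values are taken from the examples on this page.
from urllib.parse import urlencode


def embed_viewer_url(namespace: str, dataset: str, subset: str | None = None, split: str | None = None, **params) -> str:
    """Build a Dataset Viewer embed URL of the form .../embed/viewer[/<subset>[/<split>]]?<params>."""
    url = f"https://huggingface.co/datasets/{namespace}/{dataset}/embed/viewer"
    if subset:
        url += f"/{subset}"
    if split:
        url += f"/{split}"
    if params:
        url += "?" + urlencode(params)
    return url


# Same URLs as the iframe examples above.
print(embed_viewer_url("nyu-mll", "glue"))
print(embed_viewer_url("nyu-mll", "glue", subset="rte", split="split", search="mangrove"))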
How_to_contribute_to_Diffusers_🧨.txt
How to contribute to Diffusers 🧨 Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation How to contribute to Diffusers 🧨 Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. 
Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues and discussions to adding new diffusion models to the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord . Opening new issues on the GitHub Issues tab or new discussions on the GitHub Discussions tab . Answering issues on the GitHub Issues tab or discussions on the GitHub Discussions tab . Fix a simple issue, marked by the “Good first issue” label, see here . Contribute to the documentation . Contribute a Community Pipeline . Contribute to the examples . Fix a more difficult issue, marked by the “Good second issue” label, see here . Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy . As said before, all contributions are valuable to the community . In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request . 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord . Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly share knowledge and might very well help a beginner in the future who has the same question you’re having. Please do pose any questions you might have. In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. In short, a high quality question or answer is precise , concise , relevant , easy-to-understand , accessible , and well-formatted/well-posed . For more information, please have a look through the How to write a good issue section. 
NOTE about channels : The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. In addition, questions and answers posted in the forum can easily be linked to. In contrast, Discord has a chat-like format that invites fast back-and-forth communication. While it will most likely take less time for you to get an answer to your question on Discord, your question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord . Please consider the following guidelines when opening a new issue : Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file . Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, (s)he cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. 
If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here . 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear about it! Is it something you worked on and think could benefit the community? Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here . 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here . If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here . 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on why this part of the code is difficult to understand. You can open an issue about a technical question here . 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here . 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. 
If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great help to the maintainers if you can answer such issues, encouraging the author of the issue to be more precise, provide the link to a duplicated issue or redirect them to the forum or Discord . If you have verified that the issued bug report is correct and requires a correction in the source code, please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already explains how a potential solution should look so that it is easier to fix. If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly valuable contribution . Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source . Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Read the Community pipelines guide to learn more about the difference between a GitHub and Hugging Face Hub community pipeline. If you’re interested in why we have community pipelines, take a look at GitHub Issue #841 (basically, we can’t maintain all the possible ways diffusion models can be used for inference but we also don’t want to prevent the community from building them). 
Contributing a community pipeline is a great way to share your creativity and work with the community. It lets you build on top of the DiffusionPipeline so that anyone can load and use it by setting the custom_pipeline parameter. This section will walk you through how to create a simple pipeline where the UNet only does a single forward pass and calls the scheduler once (a “one-step” pipeline). Create a one_step_unet.py file for your community pipeline. This file can contain whatever package you want to use as long as it’s installed by the user. Make sure you only have one pipeline class that inherits from DiffusionPipeline to load model weights and the scheduler configuration from the Hub. Add a UNet and scheduler to the __init__ function. You should also add the register_modules function to ensure your pipeline and its components can be saved with save_pretrained() . Copied from diffusers import DiffusionPipeline import torch class UnetSchedulerOneForwardPipeline ( DiffusionPipeline ): def __init__ ( self, unet, scheduler ): super ().__init__() self.register_modules(unet=unet, scheduler=scheduler) In the forward pass (which we recommend defining as __call__ ), you can add any feature you’d like. For the “one-step” pipeline, create a random image and call the UNet and scheduler once by setting timestep=1 . Copied from diffusers import DiffusionPipeline import torch class UnetSchedulerOneForwardPipeline ( DiffusionPipeline ): def __init__ ( self, unet, scheduler ): super ().__init__() self.register_modules(unet=unet, scheduler=scheduler) def __call__ ( self ): image = torch.randn( ( 1 , self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ) timestep = 1 model_output = self.unet(image, timestep).sample scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample return scheduler_output Now you can run the pipeline by passing a UNet and scheduler to it or load pretrained weights if the pipeline structure is identical. Copied from diffusers import DDPMScheduler, UNet2DModel scheduler = DDPMScheduler() unet = UNet2DModel() pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) output = pipeline() # load pretrained weights pipeline = UnetSchedulerOneForwardPipeline.from_pretrained( "google/ddpm-cifar10-32" , use_safetensors= True ) output = pipeline() You can either share your pipeline as a GitHub community pipeline or Hub community pipeline. GitHub pipeline Hub pipeline Share your GitHub pipeline by opening a pull request on the Diffusers repository and add the one_step_unet.py file to the examples/community subfolder. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples . We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. 
If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied cd diffusers pip install -r examples/<your-example-folder>/requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file . Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python <your-example>.py --args . Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated with Diffusers. Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here . A link to some training results (logs, models, etc.) that show what the user can expect as shown here . If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here . If you are contributing to the official training examples, please also make sure to add a test to its folder such as examples/dreambooth/test_dreambooth.py . This is not necessary for non-official training examples. 8. Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are usually more complicated to solve than Good first issues . The issue description usually gives less guidance on how to fix the issue and requires a decent understanding of the library by the interested contributor. If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. 
Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. They provide easy access to state-of-the-art diffusion technologies and thus allow the community to build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies . For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt . Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt class AltDiffusionPipelineOutput ( BaseOutput ): """ Output class for Alt Diffusion pipelines. Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. nsfw_content_detected (`List[bool]`) List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed. """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report , Feature Request , Feedback about API Design , New model/pipeline/scheduler addition , Forum , or a blank issue. Make sure to pick the correct one when opening a new issue . Be precise : Give your issue a fitting title. Try to formulate your issue description as simple as possible. 
The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility : No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic : Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. 
If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP] . These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged; Try to formulate and format your text as explained in How to write a good issue . Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using RUN_SLOW=1 python -m pytest tests/test_my_new_model.py . CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like hf-internal-testing or huggingface/documentation-images to place these files. If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or issues to make sure that nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to 🧨 Diffusers. git is not the easiest tool to use but it has the greatest manual. Type git --help in a shell and enjoy. If you prefer books, Pro Git is a very good reference. Follow these steps to start contributing ( supported Python versions ): Fork the repository by clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone [email protected]:<your GitHub handle>/diffusers.git $ cd diffusers $ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the library. Develop the features on your branch. As you work on the features, you should make sure that the test suite passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/<TEST_TO_RUN>.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes a beefy machine to produce a result in a decent amount of time now that Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. 
Quality control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and make a commit with git commit to record your changes locally: Copied $ git add modified_file.py $ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors too! So everyone can see the changes in the Pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the tests folder . We like pytest and pytest-xdist because it’s faster. From the root of the repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to yes to run them. This will download many gigabytes of models — make sure you have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW= yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v $ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing $ git pull --squash --no-commit upstream main $ git commit -m '<your message without GitHub references>' $ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style . < > Update on GitHub ← Controlled generation Diffusers' Ethical Guidelines → How to contribute to Diffusers 🧨 Overview 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord 2. Opening new issues on the Git Hub issues tab 2.1. Reproducible, minimal bug reports 2.2. Feature requests 2.3 Feedback 2.4 Technical questions 2.5 Proposal to add a new model, scheduler, or pipeline 3. Answering issues on the Git Hub issues tab 4. Fixing a “ Good first issue” 5. Contribute to the documentation 6. Contribute a community pipeline 7. Contribute to training examples 8. Fixing a “ Good second issue” 9. 
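To make the Style guide above concrete, here is a short, hypothetical example of a Google-style docstring in the flavour used throughout Diffusers; the function and its arguments are invented purely for illustration and are not part of the Diffusers API:
def scale_latents(latents, scaling_factor=0.18215):
    """Scales a batch of latents before decoding.

    Args:
        latents (`torch.Tensor`):
            The latents to rescale, of shape `(batch_size, channels, height, width)`.
        scaling_factor (`float`, *optional*, defaults to 0.18215):
            The factor the latents are divided by.

    Returns:
        `torch.Tensor`: The rescaled latents.
    """
    return latents / scaling_factor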
Authentication.txt
Authentication
In order to access private or gated datasets, you need to authenticate first. Authentication works by providing an access token which will be used to authenticate and authorize your access to gated and private datasets. The first step is to create an access token for your account. This can be done by visiting Hugging Face Settings - Tokens . There are three ways to provide the token: setting an environment variable, passing a parameter to the reader, or using the Hugging Face CLI.
Environment variable If you set the environment variable HF_TOKEN , Polars will automatically use it when requesting datasets from Hugging Face. Copied export HF_TOKEN= "hf_xxxxxxxxxxxxx"
Parameters You can also explicitly provide the access token to the reader (e.g. read_parquet ) through the storage_options parameter. For a full overview of all the parameters, check out the API reference guide . Copied pl.read_parquet( "hf://datasets/roneneldan/TinyStories/data/train-*.parquet" , storage_options={ "token" : ACCESS_TOKEN}, )
CLI Alternatively, you can use the Hugging Face CLI to authenticate. After successfully logging in with huggingface-cli login , an access token will be stored in the HF_HOME directory, which defaults to ~/.cache/huggingface . Polars will then use this token for authentication.
If multiple methods are specified, they are prioritized in the following order:
1. Parameters ( storage_options )
2. Environment variable ( HF_TOKEN )
3. CLI
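As a small sketch of this precedence, reusing the read_parquet call from the Parameters section above: a token passed through storage_options wins over the HF_TOKEN environment variable and over any token stored by huggingface-cli login. Here the token is simply read from the environment, but it could be any valid access token string.
import os

import polars as pl

# The explicit token passed via `storage_options` takes precedence over HF_TOKEN
# and over the token stored by `huggingface-cli login`.
token = os.environ.get("HF_TOKEN")

df = pl.read_parquet(
    "hf://datasets/roneneldan/TinyStories/data/train-*.parquet",
    storage_options={"token": token},
)
print(df.head())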
Modular_transformers.txt
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Modular transformers transformers is an opinionated framework; our philosophy is defined in the following conceptual guide . The core of that philosophy is exemplified by the single model, single file aspect of the library. This component’s downside is that it limits the inheritance and importability of components from files to others in the toolkit. As a result, model components tend to be repeated across many files. There are as many attention layers defined in transformers as there are models, and a significant number of those are identical to each other. The unfortunate consequence is that independent implementations tend to diverge as fixes and changes get applied to specific parts of the code. In order to balance this issue, we introduced the concept of “copies” across the library. By adding a comment indicating that code is a copy of another, we can enforce through CI and local commands that copies do not diverge. However, while the complexity is low, this is often quite tedious to do. And, finally, this contributes to adding a significant overhead to contributing models which we would like to remove. This approach often requires model contributions to add modeling code (~1k lines), processor (~500 lines), tests, docs, etc. Model contribution PRs rarely add less than 3-5k lines of code, with much of this code being boilerplate. This raises the bar for contributions, and with Modular Transformers, we’re aiming to lower the bar to a much more acceptable point. If you plan to add a model to transformers make sure you read How to add a model to 🤗 Transformers? . For any kind of contributions, see CONTRIBUTING.md . What is it? Modular Transformers introduces the concept of a “modular” file to a model folder. This modular file accepts code that isn’t typically accepted in modeling/processing files, as it allows importing from neighbouring models as well as inheritance from classes to others. This modular file defines models, processors, and the configuration class that would otherwise be defined in their respective modules. Finally, this feature introduces a new linter which will “unravel” the modular file into the “single model, single file” directory structure. 
These files will get auto-generated every time the script is run; reducing the required contributions to the modular file, and therefore only to the changes between the contributed model and others. Model users will end up importing and using the single-file interface, so no change is expected here. Doing this, we hope to combine the best of both worlds: enabling simple contributions while sticking to our philosophy. This is therefore a replacement for the # Copied from markers, and previously contributed models can be expected to be moved to the new Modular Transformers format in the coming months. Details To generate a single file from the modular file, run the following command. Copied python utils/modular_model_converter.py --files-to-parse src/transformers/models/<your_model>/modular_<your_model>.py The “linter”, which unravels the inheritance and creates all single-files from the modular file, will flatten the inheritance while trying to be invisible to Python users. At this time, the linter flattens a single level of inheritance. For example: If a configuration class inherits from another and adds/deletes an argument, the generated file will either directly reference it (in case of addition) or completely remove it (in case of deletion). If a class inherits from another, for example: class GemmaModel(LlamaModel):, dependencies are automatically inferred. All submodules will be automatically inferred from the superclass. If you define new functions in the modular and use them inside classes, the linter will automatically infer the You should be able to write everything (the tokenizer, the image processor, the model, the config) in this modular file, and the corresponding files will be created for you. Enforcement Run the command below to ensure the generated content matches modular_<your_model>.py Copied python utils/check_modular_conversion.py --files src/transformers/models/<your_model>/modular_<your_model>.py Examples Here is a quick example with BERT and RoBERTa. The two models are intimately related: their modeling implementation differs solely by a change in the embedding layer. Instead of redefining the model entirely, here is what the modular_roberta.py file looks like for the modeling & configuration classes (for the sake of the example, the tokenizer is ignored at this time as very different). Copied from torch import nn from ..bert.configuration_bert import BertConfig from ..bert.modeling_bert import ( BertModel, BertEmbeddings, BertForMaskedLM ) # The RoBERTa config is identical to BERT's config class RobertaConfig ( BertConfig ): model_type = 'roberta' # We redefine the embeddings here to highlight the padding ID difference, and we redefine the position embeddings class RobertaEmbeddings ( BertEmbeddings ): def __init__ ( self, config ): super ().__init__(config()) self.padding_idx = config.pad_token_id self.position_embeddings = nn.Embedding( config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx ) # The RoBERTa model is identical to the BERT model, except for the embedding layer. 
# We redefine the embeddings above, so here there is no need to do additional work class RobertaModel ( BertModel ): def __init__ ( self, config ): super ().__init__(config) self.embeddings = RobertaEmbeddings(config) # The heads now only need to redefine the model inside to the correct `RobertaModel` class RobertaForMaskedLM ( BertForMaskedLM ): def __init__ ( self, config ): super ().__init__(config) self.model = RobertaModel(config) Note that if you do not use the dependency that you defined, you will have the following error: Copied ValueError: You defined `RobertaEmbeddings` in the modular_roberta.py, it should be used when you define `BertModel`, as it is one of it 's direct dependencies. Make sure you use it in the `__init__` function. Additionally, you may find a list of examples here: What it is not It is not a replacement for the modeling code (yet?), and if your model is not based on anything else that ever existed, then you can add a modeling file as usual. Advanced usage Removing attributes and functions To remove attributes that are not used in your modular model, and that you don’t want to see in the unravelled modeling: Copied class GemmaModel ( LlamaModel ): | class GemmaModel ( PreTrainedModel ): def __init__ ( self, config ): | def __init__ ( self, config ): super ().__init__(self, eos_token) | super ().__init__(config) del self.embed_tokens | self.padding_idx = config.pad_token_id | self.vocab_size = config.vocab_size | | self.layers = nn.ModuleList( | [LlamaDecoderLayer(config, layer_idx) for layer_idx in range (config.num_hidden_layers)] | ) | self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) | self.rotary_emb = LlamaRotaryEmbedding(config=config) | self.gradient_checkpointing = False | | # Initialize weights and apply final processing | self.post_init() If you check the original LlamaModel , it has a embed_tokens which was removed here (as you would expect!) Removing a function is pretty similar, you just need to write it with a raise ValueError("") to mimick the behaviour you actually want when you remove a parent function in python. Copied class GemmaTokenizer ( LlamaTokenizer ): ... def get_spm_processor ( self ): raise AttributeError( "Not needed for Gemma" ) def unk_token_length ( self ): raise AttributeError( "Not needed for Gemma" ) Define new functions If you define a new function in the modular file to be used inside a class, say Copied def my_new_function ( *args, **kwargs ): # Do something here pass class GemmaModel ( LlamaModel ): def forward ( *args, **kwargs ): # Call the function example = my_new_function(*args, **kwargs) # continue here the my_new_function function (and, recursively, any other new functions called in its body) will be automatically copy-pasted in the file where it is used. Calling super() We recently shipped a few features that allow you to go from: Copied class GemmaTokenizer (LlamaTokenizer, PretrainedTokenizerFast): | class GemmaModel (nn.Module): def __init__ ( self, eos_token= "</s>" ): | def __init__ ( self ): eos_token = AddedToken(eos_token) | eos_token = AddedToken(eos_token) PretrainedTokenizerFast.__init__(self, eos_token) | super ().__init__(eos_token) This is useful want you don’t want to unravel the call to super() , and you want to differentiate which super init call you are doing! Special naming We now also support special cases like Copied class GemmaVisionModel ( CLIPModel ): pass where the name of your class GemmaVision is not the same as the modular Gemma . 
This is super useful for composite models.
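To tie the sections above together, here is a hypothetical sketch of a complete modular file, modular_mymodel.py, that combines configuration inheritance, model inheritance, and the special naming case. MyModel and all of its classes are invented for illustration and simply follow the RoBERTa and Gemma patterns shown earlier.
from ..clip.modeling_clip import CLIPModel
from ..llama.configuration_llama import LlamaConfig
from ..llama.modeling_llama import LlamaForCausalLM, LlamaModel


# The config only changes the model type, mirroring RobertaConfig above
class MyModelConfig(LlamaConfig):
    model_type = "mymodel"


# The decoder is identical to Llama's, so the classes only need to be renamed;
# the linter unravels them into modeling_mymodel.py
class MyModelModel(LlamaModel):
    pass


class MyModelForCausalLM(LlamaForCausalLM):
    pass


# Special naming: the vision tower reuses CLIP even though its prefix ("MyModelVision")
# does not match the modular name ("MyModel")
class MyModelVisionModel(CLIPModel):
    pass
Running the converter command from the Details section on such a file would then generate the usual configuration_mymodel.py and modeling_mymodel.py files.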
Kwargs_handlers.txt
Kwargs handlers
The following objects can be passed to the main Accelerator to customize how some PyTorch objects related to distributed training or mixed precision are created.
AutocastKwargs class accelerate. AutocastKwargs < source > ( enabled : bool = True cache_enabled : bool = None ) Use this object in your Accelerator to customize how torch.autocast behaves. Please refer to the documentation of this context manager for more information on each argument. Example: Copied from accelerate import Accelerator from accelerate.utils import AutocastKwargs kwargs = AutocastKwargs(cache_enabled= True ) accelerator = Accelerator(kwargs_handlers=[kwargs]) DistributedDataParallelKwargs class accelerate.
DistributedDataParallelKwargs < source > ( dim : int = 0 broadcast_buffers : bool = True bucket_cap_mb : int = 25 find_unused_parameters : bool = False check_reduction : bool = False gradient_as_bucket_view : bool = False static_graph : bool = False comm_hook : DDPCommunicationHookType = <DDPCommunicationHookType.NO: 'no'> comm_wrapper : typing.Literal[<DDPCommunicationHookType.NO: 'no'>, <DDPCommunicationHookType.FP16: 'fp16'>, <DDPCommunicationHookType.BF16: 'bf16'>] = <DDPCommunicationHookType.NO: 'no'> comm_state_option : dict = <factory> ) Use this object in your Accelerator to customize how your model is wrapped in a torch.nn.parallel.DistributedDataParallel . Please refer to the documentation of this wrapper for more information on each argument. gradient_as_bucket_view is only available in PyTorch 1.7.0 and later versions. static_graph is only available in PyTorch 1.11.0 and later versions. Example: Copied from accelerate import Accelerator from accelerate.utils import DistributedDataParallelKwargs kwargs = DistributedDataParallelKwargs(find_unused_parameters= True ) accelerator = Accelerator(kwargs_handlers=[kwargs]) FP8RecipeKwargs class accelerate.utils. FP8RecipeKwargs < source > ( backend : typing.Literal['MSAMP', 'TE'] = None use_autocast_during_eval : bool = None opt_level : typing.Literal['O1', 'O2'] = None margin : int = None interval : int = None fp8_format : typing.Literal['E4M3', 'HYBRID'] = None amax_history_len : int = None amax_compute_algo : typing.Literal['max', 'most_recent'] = None override_linear_precision : typing.Tuple[bool, bool, bool] = None ) Parameters backend ( str , optional ) — Which FP8 engine to use. Must be one of "msamp" (MS-AMP) or "te" (TransformerEngine). If not passed, will use whichever is available in the environment, prioritizing MS-AMP. use_autocast_during_eval ( bool , optional , default to False ) — Whether to use FP8 autocast during eval mode. Generally better metrics are found when this is False . margin ( int , optional , default to 0) — The margin to use for the gradient scaling. interval ( int , optional , default to 1) — The interval to use for how often the scaling factor is recomputed. fp8_format ( str , optional , default to “HYBRID”) — The format to use for the FP8 recipe. Must be one of HYBRID or E4M3 . (Generally HYBRID for training, E4M3 for evaluation) amax_history_len ( int , optional , default to 1024) — The length of the history to use for the scaling factor computation amax_compute_algo ( str , optional , default to “most_recent”) — The algorithm to use for the scaling factor computation. Must be one of max or most_recent . override_linear_precision ( tuple of three bool , optional , default to (False, False, False) ) — Whether or not to execute fprop , dgrad , and wgrad GEMMS in higher precision. optimization_level ( str ), one of O1 , O2 . (default is O2 ) — What level of 8-bit collective communication should be used with MS-AMP. In general: O1: Weight gradients and all_reduce communications are done in fp8, reducing GPU memory usage and communication bandwidth O2: First-order optimizer states are in 8-bit, and second order states are in FP16. Only available when using Adam or AdamW. This maintains accuracy and can potentially save the highest memory. 03: Specifically for DeepSpeed, implements capabilities so weights and master weights of models are stored in FP8. If fp8 is selected and deepspeed is enabled, will be used by default. (Not available currently). 
Use this object in your Accelerator to customize the initialization of the recipe for FP8 mixed precision training with transformer-engine or ms-amp . For more information on transformer-engine args, please refer to the API documentation . For more information on the ms-amp args, please refer to the Optimization Level documentation . Copied from accelerate import Accelerator from accelerate.utils import FP8RecipeKwargs kwargs = FP8RecipeKwargs(backend= "te" , fp8_format= "HYBRID" ) accelerator = Accelerator(mixed_precision= "fp8" , kwargs_handlers=[kwargs]) To use MS-AMP as an engine, pass backend="msamp" and the optimization_level : Copied kwargs = FP8RecipeKwargs(backend= "msamp" , optimization_level= "02" ) ProfileKwargs class accelerate. ProfileKwargs < source > ( activities : typing.Optional[typing.List[typing.Literal['cpu', 'xpu', 'mtia', 'cuda']]] = None schedule_option : typing.Optional[typing.Dict[str, int]] = None on_trace_ready : typing.Optional[typing.Callable] = None record_shapes : bool = False profile_memory : bool = False with_stack : bool = False with_flops : bool = False with_modules : bool = False output_trace_dir : typing.Optional[str] = None ) Parameters activities ( List[str] , optional , default to None ) — The list of activity groups to use in profiling. Must be one of "cpu" , "xpu" , "mtia" , or "cuda" . schedule_option ( Dict[str, int] , optional , default to None ) — The schedule option to use for the profiler. Available keys are wait , warmup , active , repeat and skip_first . The profiler will skip the first skip_first steps, then wait for wait steps, then do the warmup for the next warmup steps, then do the active recording for the next active steps and then repeat the cycle starting with wait steps. The optional number of cycles is specified with the repeat parameter, the zero value means that the cycles will continue until the profiling is finished. on_trace_ready ( Callable , optional , default to None ) — Callable that is called at each step when schedule returns ProfilerAction.RECORD_AND_SAVE during the profiling. record_shapes ( bool , optional , default to False ) — Save information about operator’s input shapes. profile_memory ( bool , optional , default to False ) — Track tensor memory allocation/deallocation with_stack ( bool , optional , default to False ) — Record source information (file and line number) for the ops. with_flops ( bool , optional , default to False ) — Use formula to estimate the FLOPS of specific operators with_modules ( bool , optional , default to False ) — Record module hierarchy (including function names) corresponding to the callstack of the op. output_trace_dir ( str , optional , default to None ) — Exports the collected trace in Chrome JSON format. Chrome use ‘chrome://tracing’ view json file. Defaults to None, which means profiling does not store json files. Use this object in your Accelerator to customize the initialization of the profiler. Please refer to the documentation of this context manager for more information on each argument. torch.profiler is only available in PyTorch 1.8.1 and later versions. Example: Copied from accelerate import Accelerator from accelerate.utils import ProfileKwargs kwargs = ProfileKwargs(activities=[ "cpu" , "cuda" ]) accelerator = Accelerator(kwargs_handlers=[kwargs]) build < source > ( ) → torch.profiler.profile Returns torch.profiler.profile The profiler object. Build a profiler object with the current configuration. GradScalerKwargs class accelerate. 
GradScalerKwargs < source > ( init_scale : float = 65536.0 growth_factor : float = 2.0 backoff_factor : float = 0.5 growth_interval : int = 2000 enabled : bool = True ) Use this object in your Accelerator to customize the behavior of mixed precision, specifically how the torch.cuda.amp.GradScaler used is created. Please refer to the documentation of this scaler for more information on each argument. GradScaler is only available in PyTorch 1.5.0 and later versions. Example: Copied from accelerate import Accelerator from accelerate.utils import GradScalerKwargs kwargs = GradScalerKwargs(backoff_factor= 0.25 ) accelerator = Accelerator(kwargs_handlers=[kwargs]) InitProcessGroupKwargs class accelerate. InitProcessGroupKwargs < source > ( backend : typing.Optional[str] = 'nccl' init_method : typing.Optional[str] = None timeout : typing.Optional[datetime.timedelta] = None ) Use this object in your Accelerator to customize the initialization of the distributed processes. Please refer to the documentation of this method for more information on each argument. Note: If timeout is set to None , the default will be based upon how backend is set. Copied from datetime import timedelta from accelerate import Accelerator from accelerate.utils import InitProcessGroupKwargs kwargs = InitProcessGroupKwargs(timeout=timedelta(seconds= 800 )) accelerator = Accelerator(kwargs_handlers=[kwargs]) KwargsHandler class accelerate.utils. KwargsHandler < source > ( ) Internal mixin that implements a to_kwargs() method for a dataclass. to_kwargs < source > ( ) Returns a dictionary containing the attributes with values different from the default of this class. < > Update on GitHub ← Pipeline parallelism FP8 → Kwargs handlers Autocast Kwargs Distributed Data Parallel Kwargs F P8 Recipe Kwargs Profile Kwargs Grad Scaler Kwargs Init Process Group Kwargs Kwargs Handler
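As a closing illustration (a minimal sketch, not taken from the Accelerate docs themselves), several of the handlers documented above can be passed to kwargs_handlers at the same time; the concrete values below are arbitrary.
from datetime import timedelta

from accelerate import Accelerator
from accelerate.utils import (
    DistributedDataParallelKwargs,
    GradScalerKwargs,
    InitProcessGroupKwargs,
)

# Customize DDP wrapping, the mixed-precision gradient scaler, and process-group init
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True, bucket_cap_mb=50)
scaler_kwargs = GradScalerKwargs(growth_interval=1000)
pg_kwargs = InitProcessGroupKwargs(timeout=timedelta(seconds=1800))

accelerator = Accelerator(
    mixed_precision="fp16",
    kwargs_handlers=[ddp_kwargs, scaler_kwargs, pg_kwargs],
)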
Example_Zoo.txt
Example Zoo
Below is a non-exhaustive list of tutorials and scripts showcasing Accelerate.
Official Accelerate Examples: Basic Examples These examples showcase the base features of Accelerate and are a great starting point Barebones NLP example Barebones distributed NLP example in a Jupyter Notebook Barebones computer vision example Barebones distributed computer vision example in a Jupyter Notebook Using Accelerate in Kaggle Feature Specific Examples These examples showcase specific features that the Accelerate framework offers Automatic memory-aware gradient accumulation Checkpointing states Cross validation DeepSpeed Fully Sharded Data Parallelism Gradient accumulation Memory-aware batch size finder Metric Computation Using Trackers Using Megatron-LM Full Examples These examples showcase every feature in Accelerate at once that was shown in “Feature Specific Examples” Complete NLP example Complete computer vision example Very complete and extensible vision example showcasing SLURM, hydra, and a very extensible usage of the framework Causal language model fine-tuning example Masked language model fine-tuning example Speech pretraining example Translation fine-tuning example Text classification fine-tuning example Semantic segmentation fine-tuning example Question answering fine-tuning example Beam search question answering fine-tuning example Multiple choice question answering fine-tuning example Named entity recognition fine-tuning example Image classification fine-tuning example Summarization fine-tuning example End-to-end examples on how to use AWS SageMaker integration of Accelerate Megatron-LM examples for various NLp tasks Integration Examples These are tutorials from libraries that integrate with Accelerate: Don’t find your integration here? Make a PR to include it! Amphion Training Text-to-Speech Models with Amphion Training Singing Voice Conversion Models with Amphion Training Vocoders with Amphion Catalyst Distributed training tutorial with Catalyst DALLE2-pytorch Fine-tuning DALLE2 Diffusers Performing textual inversion with diffusers Training DreamBooth with diffusers fastai Distributed training from Jupyter Notebooks with fastai Basic distributed training examples with fastai GradsFlow Auto Image Classification with GradsFlow imagen-pytorch Fine-tuning Imagen Kornia Fine-tuning vision models with Kornia’s Trainer PyTorch Accelerated Quickstart distributed training tutorial with PyTorch Accelerated PyTorch3D Perform Deep Learning with 3D data Stable-Dreamfusion Training with Stable-Dreamfusion to convert text to a 3D model Tez Leaf disease detection with Tez and Accelerate trlx How to implement a sentiment learning task with trlx Comfy-UI Enabling using large Stable Diffusion Models in low-vram settings using Accelerate In Science Below contains a non-exhaustive list of papers utilizing Accelerate. Don’t find your paper here? Make a PR to include it! Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, Omer Levy: “Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation”, 2023; arXiv:2305.01569 . Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim: “Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models”, 2023; arXiv:2305.04091 . Arthur Câmara, Claudia Hauff: “Moving Stuff Around: A study on efficiency of moving documents into memory for Neural IR models”, 2022; arXiv:2205.08343 . Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E. 
Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: “High-throughput Generative Inference of Large Language Models with a Single GPU”, 2023; arXiv:2303.06865 . Peter Melchior, Yan Liang, ChangHoon Hahn, Andy Goulding: “Autoencoding Galaxy Spectra I: Architecture”, 2022; arXiv:2211.07890 . Jiaao Chen, Aston Zhang, Mu Li, Alex Smola, Diyi Yang: “A Cheaper and Better Diffusion Language Model with Soft-Masked Noise”, 2023; arXiv:2304.04746 . Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa: “Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions”, 2023; arXiv:2303.12789 . Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi: “RealFusion: 360° Reconstruction of Any Object from a Single Image”, 2023; arXiv:2302.10663 . Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, Hongsheng Li: “Better Aligning Text-to-Image Models with Human Preference”, 2023; arXiv:2303.14420 . Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang: “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace”, 2023; arXiv:2303.17580 . Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen: “Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination”, 2022; arXiv:2210.12261 . Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho: “How to Backdoor Diffusion Models?”, 2022; arXiv:2212.05400 . Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim: “Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation”, 2023; arXiv:2303.07937 . Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or: “Localizing Object-level Shape Variations with Text-to-Image Diffusion Models”, 2023; arXiv:2303.11306 . Dídac Surís, Sachit Menon, Carl Vondrick: “ViperGPT: Visual Inference via Python Execution for Reasoning”, 2023; arXiv:2303.08128 . Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, Qifeng Chen: “FateZero: Fusing Attentions for Zero-shot Text-based Video Editing”, 2023; arXiv:2303.09535 . Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi: “NaturalProver: Grounded Mathematical Proof Generation with Language Models”, 2022; arXiv:2205.12910 . Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or: “TEXTure: Text-Guided Texturing of 3D Shapes”, 2023; arXiv:2302.01721 . Puijin Cheng, Li Lin, Yijin Huang, Huaqing He, Wenhan Luo, Xiaoying Tang: “Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement”, 2023; arXiv:2303.04603 . Shun Shao, Yftah Ziser, Shay Cohen: “Erasure of Unaligned Attributes from Neural Representations”, 2023; arXiv:2302.02997 . Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo: “In-Context Instruction Learning”, 2023; arXiv:2302.14691 . Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar: “Prismer: A Vision-Language Model with An Ensemble of Experts”, 2023; arXiv:2303.02506 . Haoyu Chen, Zhihua Wang, Yang Yang, Qilin Sun, Kede Ma: “Learning a Deep Color Difference Metric for Photographic Images”, 2023; arXiv:2303.14964 . Van-Hoang Le, Hongyu Zhang: “Log Parsing with Prompt-based Few-shot Learning”, 2023; arXiv:2302.07435 . Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui: “Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?”, 2023; arXiv:2302.07866 . 
Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, Prithviraj Ammanabrolu: “Behavior Cloned Transformers are Neurosymbolic Reasoners”, 2022; arXiv:2210.07382 . Martin Wessel, Tomáš Horych, Terry Ruas, Akiko Aizawa, Bela Gipp, Timo Spinde: “Introducing MBIB — the first Media Bias Identification Benchmark Task and Dataset Collection”, 2023; arXiv:2304.13148 . DOI: 10.1145/3539618.3591882 (https://dx.doi.org/10.1145/3539618.3591882). Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or: “Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models”, 2023; arXiv:2301.13826 . Marcio Fonseca, Yftah Ziser, Shay B. Cohen: “Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents”, 2022; arXiv:2205.12486 . Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov: “On the Blind Spots of Model-Based Evaluation Metrics for Text Generation”, 2022; arXiv:2212.10020 . Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham: “In-Context Retrieval-Augmented Language Models”, 2023; arXiv:2302.00083 . Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang: “MPCFormer: fast, performant and private Transformer inference with MPC”, 2022; arXiv:2211.01452 . Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao: “GODEL: Large-Scale Pre-Training for Goal-Directed Dialog”, 2022; arXiv:2206.11309 . Egil Rønningstad, Erik Velldal, Lilja Øvrelid: “Entity-Level Sentiment Analysis (ELSA): An exploratory task survey”, 2023, Proceedings of the 29th International Conference on Computational Linguistics, 2022, pages 6773-6783; arXiv:2304.14241 . Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine: “Offline RL for Natural Language Generation with Implicit Language Q Learning”, 2022; arXiv:2206.11871 . Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig: “Execution-Based Evaluation for Open-Domain Code Generation”, 2022; arXiv:2212.10481 . Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang: “Expeditious Saliency-guided Mix-up through Random Gradient Thresholding”, 2022; arXiv:2212.04875 . Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng: “MagicMix: Semantic Mixing with Diffusion Models”, 2022; arXiv:2210.16056 . Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao: “LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners”, 2021; arXiv:2110.06274 .
Integrate_your_library_with_the_Hub.txt
Integrate your library with the Hub

The Hugging Face Hub aims to facilitate sharing machine learning models, checkpoints, and artifacts. This endeavor includes integrating the Hub into many of the amazing third-party libraries in the community. Some of the ones already integrated include spaCy, Sentence Transformers, OpenCLIP, and timm, among many others. Integration means users can download and upload files to the Hub directly from your library. We hope you will integrate your library and join us in democratizing artificial intelligence for everyone.

Integrating the Hub with your library provides many benefits, including:
Free model hosting for you and your users.
Built-in file versioning - even for huge files - made possible by Git-LFS.
Community features (discussions, pull requests, likes).
Usage metrics for all models run with your library.

This tutorial will help you integrate the Hub into your library so your users can benefit from all the features offered by the Hub. Before you begin, we recommend you create a Hugging Face account from which you can manage your repositories and files. If you need help with the integration, feel free to open an issue, and we would be more than happy to help you.

Implementation

Implementing an integration of a library with the Hub often means providing built-in methods to load models from the Hub and allow users to push new models to the Hub. This section will cover the basics of how to do that using the huggingface_hub library.
For more in-depth guidance, check out this guide.

Installation

To integrate your library with the Hub, you will need to add the huggingface_hub library as a dependency:

pip install huggingface_hub

For more details about huggingface_hub installation, check out this guide.

In this guide, we will focus on Python libraries. If you’ve implemented your library in JavaScript, you can use @huggingface/hub instead. The rest of the logic (i.e. hosting files, code samples, etc.) does not depend on the code language.

npm add @huggingface/hub

Users will need to authenticate once they have successfully installed the huggingface_hub library. The easiest way to authenticate is to save the token on the machine. Users can do that from the terminal using the login() command:

huggingface-cli login

The command tells them if they are already logged in and prompts them for their token. The token is then validated and saved in their HF_HOME directory (defaults to ~/.cache/huggingface/token). Any script or library interacting with the Hub will use this token when sending requests.

Alternatively, users can log in programmatically using login() in a notebook or a script:

from huggingface_hub import login
login()

Authentication is optional when downloading files from public repos on the Hub.

Download files from the Hub

Integrations allow users to download a model from the Hub and instantiate it directly from your library. This is often made possible by providing a method (usually called from_pretrained or load_from_hf) that has to be specific to your library. To instantiate a model from the Hub, your library has to:
download files from the Hub. This is what we will discuss now.
instantiate the Python model from these files.

Use the hf_hub_download method to download files from a repository on the Hub. Downloaded files are stored in the cache: ~/.cache/huggingface/hub. Users won’t have to re-download the file the next time they use it, which saves a lot of time for large files. Furthermore, if the repository is updated with a new version of the file, huggingface_hub will automatically download the latest version and store it in the cache. Users don’t have to worry about updating their files manually.

For example, download the config.json file from the lysandre/arxiv-nlp repository:

>>> from huggingface_hub import hf_hub_download
>>> config_path = hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json")
>>> config_path
'/home/lysandre/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade/config.json'

config_path now contains a path to the downloaded file. You are guaranteed that the file exists and is up-to-date.

If your library needs to download an entire repository, use snapshot_download. It will take care of downloading all the files in parallel. The return value is a path to the directory containing the downloaded files.

>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="lysandre/arxiv-nlp")
'/home/lysandre/.cache/huggingface/hub/models--lysandre--arxiv-nlp/snapshots/894a9adde21d9a3e3843e6d5aeaaf01875c7fade'

Many options exist to download files from a specific revision, to filter which files to download, to provide a custom cache directory, to download to a local directory, etc. Check out the download guide for more details.

Upload files to the Hub

You might also want to provide a method so that users can push their own models to the Hub.
This allows the community to build an ecosystem of models compatible with your library. The huggingface_hub library offers methods to create repositories and upload files: create_repo creates a repository on the Hub. upload_file and upload_folder upload files to a repository on the Hub. The create_repo method creates a repository on the Hub. Use the repo_id parameter to provide a name for your repository: Copied >>> from huggingface_hub import create_repo >>> create_repo(repo_id= "test-model" ) 'https://huggingface.co/lysandre/test-model' When you check your Hugging Face account, you should now see a test-model repository under your namespace. The upload_file method uploads a file to the Hub. This method requires the following: A path to the file to upload. The final path in the repository. The repository you wish to push the files to. For example: Copied >>> from huggingface_hub import upload_file >>> upload_file( ... path_or_fileobj= "/home/lysandre/dummy-test/README.md" , ... path_in_repo= "README.md" , ... repo_id= "lysandre/test-model" ... ) 'https://huggingface.co/lysandre/test-model/blob/main/README.md' If you check your Hugging Face account, you should see the file inside your repository. Usually, a library will serialize the model to a local directory and then upload to the Hub the entire folder at once. This can be done using upload_folder : Copied >>> from huggingface_hub import upload_folder >>> upload_folder( ... folder_path= "/home/lysandre/dummy-test" , ... repo_id= "lysandre/test-model" , ... ) For more details about how to upload files, check out the upload guide . Model cards Model cards are files that accompany the models and provide handy information. Under the hood, model cards are simple Markdown files with additional metadata. Model cards are essential for discoverability, reproducibility, and sharing! You can find a model card as the README.md file in any model repo. See the model cards guide for more details about how to create a good model card. If your library allows pushing a model to the Hub, it is recommended to generate a minimal model card with prefilled metadata (typically library_name , pipeline_tag or tags ) and information on how the model has been trained. This will help having a standardized description for all models built with your library. Register your library Well done! You should now have a library able to load a model from the Hub and eventually push new models. The next step is to make sure that your models on the Hub are well-documented and integrated with the platform. To do so, libraries can be registered on the Hub, which comes with a few benefits for the users: a pretty label can be shown on the model page (e.g. KerasNLP instead of keras-nlp ) a link to your library repository and documentation is added to each model page a custom download count rule can be defined code snippets can be generated to show how to load the model using your library To register a new library, please open a Pull Request here following the instructions below: The library id should be lowercased and hyphen-separated (example: "adapter-transformers" ). Make sure to preserve alphabetical order when opening the PR. set repoName and prettyLabel with user-friendly casing (example: DeepForest ). set repoUrl with a link to the library source code (usually a GitHub repository). (optional) set docsUrl with a link to the docs of the library. If the documentation is in the GitHub repo referenced above, no need to set it twice. set filter to false . 
(optional) define how downloads must be counted by setting countDownload. Downloads can be tracked by file extensions or filenames. Make sure to not duplicate the counting. For instance, if loading a model requires 3 files, the download count rule must count downloads only on 1 of the 3 files. Otherwise, the download count will be overestimated. Note: if the library uses one of the default config files (config.json, config.yaml, hyperparams.yaml, and meta.yaml, see here), there is no need to manually define a download count rule. (optional) define snippets to let the user know how they can quickly instantiate a model. More details below. Before opening the PR, make sure that at least one model is referenced on https://huggingface.co/models?other=my-library-name . If not, the model card metadata of the relevant models must be updated with library_name: my-library-name (see example). If you are not the owner of the models on the Hub, please open PRs (see example). Here is a minimal example adding integration for VFIMamba.

Code snippets

We recommend adding a code snippet to explain how to use a model in your downstream library. To add a code snippet, you should update the model-libraries-snippets.ts file with instructions for your model. For example, the Asteroid integration includes a brief code snippet for how to load and use an Asteroid model:

const asteroid = (model: ModelData) =>
  `from asteroid.models import BaseModel

model = BaseModel.from_pretrained("${model.id}")`;

Doing so will also add a tag to your model so users can quickly identify models from your library. Once your snippet has been added to model-libraries-snippets.ts, you can reference it in model-libraries.ts as described above.

Document your library

Finally, you can add your library to the Hub’s documentation. Check for example the Setfit PR that added SetFit to the documentation.
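To tie the earlier sections together, here is a rough sketch of what the pair of methods described in this guide can look like inside a library. It is only an illustration: the MyModel class, the config.json/weights layout, and the serialization logic are hypothetical placeholders, not a structure the Hub requires.

import json
import os

from huggingface_hub import create_repo, hf_hub_download, upload_folder


class MyModel:
    """Hypothetical library class used to illustrate a Hub integration."""

    def __init__(self, config: dict):
        self.config = config

    @classmethod
    def from_pretrained(cls, repo_id: str) -> "MyModel":
        # Download individual files from the Hub (cached under ~/.cache/huggingface/hub).
        config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
        with open(config_path) as f:
            config = json.load(f)
        # ... load weights here as well, e.g. hf_hub_download(repo_id, filename="model.bin") ...
        return cls(config)

    def push_to_hub(self, repo_id: str, local_dir: str = "./export") -> None:
        # Serialize locally, create the repo if needed, then upload the whole folder.
        os.makedirs(local_dir, exist_ok=True)
        with open(os.path.join(local_dir, "config.json"), "w") as f:
            json.dump(self.config, f)
        create_repo(repo_id=repo_id, exist_ok=True)
        upload_folder(folder_path=local_dir, repo_id=repo_id)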
Zero-Shot_Classification.txt
Zero-Shot Classification

Zero-shot text classification is super useful to try out classification with zero code: you simply pass a sentence/paragraph and the possible labels for that sentence, and you get a result. The model has not necessarily been trained on the labels you provide, but it can still predict the correct label.

For more details about the zero-shot-classification task, check out its dedicated page! You will find examples and related materials.

Recommended models
facebook/bart-large-mnli: Powerful zero-shot text classification model.
MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7: Powerful zero-shot multilingual text classification model that can accomplish multiple tasks.
Explore all available models and find the one that suits you best here.

Using the API

Python example:

import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-mnli"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!",
    "parameters": {"candidate_labels": ["refund", "legal", "faq"]},
})

To use the Python client, see huggingface_hub’s package reference.

API specification

Request
Payload:
inputs* (string): The text to classify.
parameters* (object):
    candidate_labels* (string[]): The set of possible class labels to classify the text into.
    hypothesis_template (string): The sentence used in conjunction with candidate_labels to attempt the text classification by replacing the placeholder with the candidate labels.
    multi_label (boolean): Whether multiple candidate labels can be true. If false, the scores are normalized such that the sum of the label likelihoods for each sequence is 1. If true, the labels are considered independent and probabilities are normalized for each candidate.

Some options can be configured by passing headers to the Inference API.
Here are the available headers:

Headers
authorization (string): Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
x-use-cache (boolean, defaults to true): There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here.
x-wait-for-model (boolean, defaults to false): If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here.

For more information about Inference API headers, check out the parameters guide.

Response
Body (array of objects): Output is an array of objects.
label (string): The predicted class label.
score (number): The corresponding probability.
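As a complement to the raw HTTP example above, the same request can be made through the InferenceClient in huggingface_hub. This is only a minimal sketch, not the page's official client example; replace the token placeholder with a real user access token.

from huggingface_hub import InferenceClient

client = InferenceClient(model="facebook/bart-large-mnli", token="hf_***")

result = client.zero_shot_classification(
    "Hi, I recently bought a device from your company but it is not working as advertised "
    "and I would like to get reimbursed!",
    ["refund", "legal", "faq"],
)
print(result)  # a list of label/score pairs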
Installation_from_source.txt
Installation from source

Installing TGI from source is not the recommended usage. We strongly recommend using TGI through Docker; check the Quick Tour, Installation for Nvidia GPUs, and Installation for AMD GPUs to learn how to use TGI with Docker.

Install CLI

You can use the TGI command-line interface (CLI) to download weights, serve and quantize models, or get information on serving parameters. To install the CLI, you need to first clone the TGI repository and then run make.

git clone https://github.com/huggingface/text-generation-inference.git && cd text-generation-inference
make install

If you would like to serve models with custom kernels, run

BUILD_EXTENSIONS=True make install

Local Installation from Source

Before you start, you will need to set up your environment and install Text Generation Inference. Text Generation Inference is tested on Python 3.9+.

Text Generation Inference is available on pypi, conda and GitHub. To install and launch locally, first install Rust and create a Python virtual environment with at least Python 3.9, e.g. using conda:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

conda create -n text-generation-inference python=3.9
conda activate text-generation-inference

You may also need to install Protoc.
On Linux:

PROTOC_ZIP=protoc-21.12-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP

On MacOS, using Homebrew:

brew install protobuf

Then run the following to install Text Generation Inference:

git clone https://github.com/huggingface/text-generation-inference.git && cd text-generation-inference
BUILD_EXTENSIONS=True make install

On some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run:

sudo apt-get install libssl-dev gcc -y

Once installation is done, simply run:

make run-falcon-7b-instruct

This will serve the Falcon 7B Instruct model on port 8080, which we can then query.
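For instance, a query against the freshly started server might look like the following minimal sketch. It assumes the default port 8080 and TGI's /generate endpoint; the prompt and generation parameters are arbitrary.

import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    headers={"Content-Type": "application/json"},
)
print(response.json())  # e.g. {"generated_text": "..."}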
Datasets_Download_Stats.txt
Datasets Download Stats

How are downloads counted for datasets?

Counting the number of downloads for datasets is not a trivial task, as a single dataset repository might contain multiple files, from multiple subsets and splits (e.g. train/validation/test) and sometimes with many files in a single split. To solve this issue and avoid counting one person’s download multiple times, we treat all files downloaded by a user (based on their IP address) within a 5-minute window as a single dataset download. This counting happens automatically on our servers when files are downloaded (through GET or HEAD requests), with no need to collect any user information or make additional calls.

Before September 2024

The Hub used to provide download stats only for the datasets loadable via the datasets library. To determine the number of downloads, the Hub previously counted every time load_dataset was called in Python, excluding Hugging Face’s CI tooling on GitHub. No information was sent from the user, and no additional calls were made for this. The count was done server-side as we served files for downloads.

This means that:
The download count was the same regardless of whether the data was directly stored in the Hub repo or the repository had a script to load the data from an external source.
If a user manually downloaded the data using tools like wget or the Hub’s user interface (UI), those downloads were not included in the download count.
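The download counts described above are also exposed through the Hub API, so they can be read programmatically. The following is a minimal sketch using huggingface_hub; the dataset id is only an example, and the exact set of fields returned may vary with the library version.

from huggingface_hub import dataset_info

info = dataset_info("rajpurkar/squad")  # any public dataset id works
print(info.downloads)  # aggregated download count as reported by the Hub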
Langfuse_on_Spaces.txt
Langfuse on Spaces

This guide shows you how to deploy Langfuse on Hugging Face Spaces and start instrumenting your LLM application for observability. This integration helps you experiment with LLM APIs on the Hugging Face Hub, manage your prompts in one place, and evaluate model outputs.

What is Langfuse?

Langfuse is an open-source LLM engineering platform that helps teams collaboratively debug, evaluate, and iterate on their LLM applications.

Key features of Langfuse include LLM tracing to capture the full context of your application’s execution flow, prompt management for centralized and collaborative prompt iteration, evaluation metrics to assess output quality, dataset creation for testing and benchmarking, and a playground to experiment with prompts and model configurations.

[Video: a 10 min walkthrough of the Langfuse features]

Why LLM Observability?

As language models become more prevalent, understanding their behavior and performance is important. LLM observability involves monitoring and understanding the internal states of an LLM application through its outputs. It is essential for addressing challenges such as:
Complex control flows with repeated or chained calls, making debugging challenging.
Non-deterministic outputs, adding complexity to consistent quality assessment.
Varied user intents, requiring deep understanding to improve user experience.

Building LLM applications involves intricate workflows, and observability helps in managing these complexities.

Step 1: Set up Langfuse on Spaces

The Langfuse Hugging Face Space allows you to get up and running with a deployed version of Langfuse with just a few clicks. To get started, follow these steps:
Create a new Hugging Face Space.
Select Docker as the Space SDK.
Select Langfuse as the Space template.
Enable persistent storage to ensure your Langfuse data is persisted across restarts.
Ensure the space is set to public visibility so the Langfuse API/SDKs can access the app (see note below for more details).
[Optional but recommended] For a secure deployment, replace the default values of the environment variables:
NEXTAUTH_SECRET: Used to validate login session cookies; generate a secret with at least 256 bits of entropy using openssl rand -base64 32.
SALT: Used to salt hashed API keys; generate a secret with at least 256 bits of entropy using openssl rand -base64 32.
ENCRYPTION_KEY: Used to encrypt sensitive data. Must be 256 bits, 64 string characters in hex format; generate via openssl rand -hex 32.
Click Create Space!

User Access

Your Langfuse Space is pre-configured with Hugging Face OAuth for secure authentication, so you’ll need to authorize read access to your Hugging Face account upon first login by following the instructions in the pop-up. Once inside the app, you can use the native Langfuse features to manage Organizations, Projects, and Users.

The Langfuse space must be set to public visibility so that the Langfuse API/SDKs can reach the app. This means that by default, any logged-in Hugging Face user will be able to access the Langfuse space. You can prevent new users from signing up and accessing the space via two different methods:

1. (Recommended) Hugging Face native org-level OAuth restrictions
If you want to restrict access to only members of specified organization(s), you can simply set the hf_oauth_authorized_org metadata field in the space’s README.md file, as shown here. Once configured, only users who are members of the specified organization(s) will be able to access the space.

2. Manual access control
You can also restrict access on a per-user basis by setting the AUTH_DISABLE_SIGNUP environment variable to true. Be sure that you’ve first signed in and authenticated to the space before setting this variable, else your own user profile won’t be able to authenticate.

Note: If you’ve set the AUTH_DISABLE_SIGNUP environment variable to true to restrict access, and want to grant a new user access to the space, you’ll need to first set it back to false (wait for the rebuild to complete), add the user and have them authenticate with OAuth, and then set it back to true.

Step 2: Use Langfuse

Now that you have Langfuse running, you can start instrumenting your LLM application to capture traces and manage your prompts. Let’s see how!

Monitor Any Application

Langfuse is model agnostic and can be used to trace any application. Follow the get-started guide in the Langfuse documentation to see how you can instrument your code. Langfuse maintains native integrations with many popular LLM frameworks, including Langchain, LlamaIndex and OpenAI, and offers Python and JS/TS SDKs to instrument your code.
Langfuse also offers various API endpoints to ingest data and has been integrated by other open source projects such as Langflow, Dify and Haystack.

Example 1: Trace Calls to HF Serverless API

As a simple example, here’s how to trace LLM calls to the HF Serverless API using the Langfuse Python SDK. Be sure to first configure your LANGFUSE_HOST, LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables, and make sure you’ve authenticated with your Hugging Face account.

from langfuse.openai import openai
from huggingface_hub import get_token

client = openai.OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key=get_token(),
)

messages = [{"role": "user", "content": "What is observability for LLMs?"}]

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=messages,
    max_tokens=100,
)

Example 2: Monitor a Gradio Application

We created a Gradio template space that shows how to create a simple chat application using a Hugging Face model and trace model calls and user feedback in Langfuse - without leaving Hugging Face. To get started, duplicate this Gradio template space and follow the instructions in the README.

Step 3: View Traces in Langfuse

Once you have instrumented your application and ingested traces or user feedback into Langfuse, you can view your traces in Langfuse.

[Screenshot: example trace in the Langfuse UI]

Additional Resources and Support
Langfuse documentation
Langfuse GitHub repository
Langfuse Discord
Langfuse template Space

For more help, open a support thread on GitHub discussions or open an issue.
Logging_methods.txt
Logging methods

🤗 Evaluate strives to be transparent and explicit about how it works, but this can be quite verbose at times. We have included a series of logging methods which allow you to easily adjust the level of verbosity of the entire library. Currently the default verbosity of the library is set to WARNING.

To change the level of verbosity, use one of the direct setters. For instance, here is how to change the verbosity to the INFO level:

import evaluate
evaluate.logging.set_verbosity_info()

You can also use the environment variable EVALUATE_VERBOSITY to override the default verbosity, and set it to one of the following: debug, info, warning, error, critical:

EVALUATE_VERBOSITY=error ./myprogram.py

All the methods of this logging module are documented below. The main ones are:
logging.get_verbosity() to get the current level of verbosity in the logger
logging.set_verbosity() to set the verbosity to the level of your choice

In order from the least to the most verbose (with their corresponding int values):
logging.CRITICAL or logging.FATAL (int value, 50): only report the most critical errors.
logging.ERROR (int value, 40): only report errors.
logging.WARNING or logging.WARN (int value, 30): only report errors and warnings. This is the default level used by the library.
logging.INFO (int value, 20): report errors, warnings and basic information.
logging.DEBUG (int value, 10): report all information.

By default, tqdm progress bars will be displayed during evaluate download and processing. logging.disable_progress_bar() and logging.enable_progress_bar() can be used to suppress or unsuppress this behavior.

Functions

evaluate.utils.logging.get_verbosity()
Return the current level for the 🤗 Evaluate library’s root logger.
The 🤗 Evaluate library has the following logging levels:
evaluate.logging.CRITICAL, evaluate.logging.FATAL
evaluate.logging.ERROR
evaluate.logging.WARNING, evaluate.logging.WARN
evaluate.logging.INFO
evaluate.logging.DEBUG

evaluate.utils.logging.set_verbosity(verbosity: int)
Set the level for the 🤗 Evaluate library’s root logger.

evaluate.utils.logging.set_verbosity_info()
Set the level for the 🤗 Evaluate library’s root logger to INFO. This will display most of the logging information and tqdm bars. Shortcut to evaluate.logging.set_verbosity(evaluate.logging.INFO).

evaluate.utils.logging.set_verbosity_warning()
Set the level for the 🤗 Evaluate library’s root logger to WARNING. This will display only the warning and error logging information and tqdm bars. Shortcut to evaluate.logging.set_verbosity(evaluate.logging.WARNING).

evaluate.utils.logging.set_verbosity_debug()
Set the level for the 🤗 Evaluate library’s root logger to DEBUG. This will display all the logging information and tqdm bars. Shortcut to evaluate.logging.set_verbosity(evaluate.logging.DEBUG).

evaluate.utils.logging.set_verbosity_error()
Set the level for the 🤗 Evaluate library’s root logger to ERROR. This will display only the error logging information and tqdm bars. Shortcut to evaluate.logging.set_verbosity(evaluate.logging.ERROR).

evaluate.utils.logging.disable_propagation()
Disable propagation of the library log outputs. Note that log propagation is disabled by default.

evaluate.utils.logging.enable_propagation()
Enable propagation of the library log outputs. Please disable the 🤗 Evaluate library’s default handler to prevent double logging if the root logger has been configured.

evaluate.utils.logging.get_logger(name: typing.Optional[str] = None)
Return a logger with the specified name. This function can be used in dataset and metrics scripts.

evaluate.enable_progress_bar()
Enable tqdm progress bars.

evaluate.disable_progress_bar()
Disable tqdm progress bars.

Levels
evaluate.logging.CRITICAL = 50
evaluate.logging.DEBUG = 10
evaluate.logging.ERROR = 40
evaluate.logging.FATAL = 50
evaluate.logging.INFO = 20
evaluate.logging.NOTSET = 0
evaluate.logging.WARN = 30
evaluate.logging.WARNING = 30
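As a quick recap of the functions documented above, the following minimal sketch lowers the verbosity and silences progress bars around a metric computation, then restores the previous level (the accuracy metric and the toy predictions are arbitrary choices):

import evaluate

previous_level = evaluate.logging.get_verbosity()
evaluate.logging.set_verbosity(evaluate.logging.ERROR)  # only report errors
evaluate.disable_progress_bar()

accuracy = evaluate.load("accuracy")
print(accuracy.compute(references=[0, 1, 1], predictions=[0, 1, 0]))

evaluate.enable_progress_bar()
evaluate.logging.set_verbosity(previous_level)  # restore the original verbosity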
How_to_handle_URL_parameters_in_Spaces.txt
How to handle URL parameters in Spaces

You can use URL query parameters as a data sharing mechanism, for instance to be able to deep-link into an app with a specific state. On a Space page (https://huggingface.co/spaces/<user>/<app>), the actual application page (https://*.hf.space/) is embedded in an iframe. The query string and the hash attached to the parent page URL are propagated to the embedded app on initial load, so the embedded app can read these values without special consideration.

In contrast, updating the query string and the hash of the parent page URL from the embedded app is slightly more complex. If you want to do this in a Docker or static Space, you need to add the following JS code, which sends a message containing a queryString and/or hash key to the parent page.

const queryString = "...";
const hash = "...";
window.parent.postMessage({
  queryString,
  hash,
}, "https://huggingface.co");

This is only for Docker or static Spaces. For Streamlit apps, Spaces automatically syncs the URL parameters. Gradio apps can read the query parameters from the Spaces page, but do not sync updated URL parameters with the parent page.

Note that the URL parameters of the parent page are propagated to the embedded app only on the initial load.
So location.hash in the embedded app will not change even if the parent URL hash is updated using this method.

An example of this method can be found in this static Space, whitphx/static-url-param-sync-example.
Using_SpeechBrain_at_Hugging_Face.txt
Using SpeechBrain at Hugging Face

speechbrain is an open-source and all-in-one conversational toolkit for audio/speech. The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, speech separation, language identification, multi-microphone signal processing, and many others.

Exploring SpeechBrain in the Hub

You can find speechbrain models by filtering at the left of the models page.

All models on the Hub come with the following features:
An automatically generated model card with a brief description.
Metadata tags that help with discoverability, with information such as the language, license, paper, and more.
An interactive widget you can use to play with the model directly in the browser.
An Inference API that allows you to make inference requests.

Using existing models

speechbrain offers different interfaces to manage pretrained models for different tasks, such as EncoderClassifier, SepformerSeparation, and SpectralMaskEnhancement.
These classes have a from_hparams method you can use to load a model from the Hub. Here is an example of running inference for sound recognition in urban sounds.

import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(source="speechbrain/urbansound8k_ecapa")
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/urbansound8k_ecapa/dog_bark.wav')

If you want to see how to load a specific model, you can click Use in speechbrain and you will be given a working snippet that you can use to load it!

Additional resources
SpeechBrain website.
SpeechBrain docs.
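Beyond the classification example above, the same from_hparams pattern applies to SpeechBrain's other pretrained interfaces. The sketch below shows speech recognition with EncoderDecoderASR; the repository name is one of the ASR models published under the speechbrain organization, and the audio path is a placeholder you need to replace.

from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
)
print(asr_model.transcribe_file("path/to/your_audio.wav"))  # returns the transcription as text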
Performance_and_Scalability.txt
Performance and Scalability
Performance and Scalability
Training large transformer models and deploying them to production present various challenges. During training, the model may require more GPU memory than available or exhibit slow training speed. In the deployment phase, the model can struggle to handle the required throughput in a production environment.
This documentation aims to help you overcome these challenges and find the optimal settings for your use case. The guides are divided into training and inference sections, as each comes with different challenges and solutions. Within each section you'll find separate guides for different hardware configurations, such as single GPU vs. multi-GPU for training or CPU vs. GPU for inference.
Use this document as your starting point to navigate further to the methods that match your scenario.
Training
Training large transformer models efficiently requires an accelerator such as a GPU or TPU. The most common case is where you have a single GPU. The methods you can apply to improve training efficiency on a single GPU extend to other setups such as multiple GPUs. However, there are also techniques that are specific to multi-GPU or CPU training. We cover them in separate sections.
Methods and tools for efficient training on a single GPU : start here to learn common approaches that can help optimize GPU memory utilization, speed up the training, or both.
Multi-GPU training section : explore this section to learn about further optimization methods that apply to multi-GPU settings, such as data, tensor, and pipeline parallelism.
CPU training section : learn about mixed precision training on CPU.
Efficient Training on Multiple CPUs : learn about distributed CPU training.
Training on TPU with TensorFlow : if you are new to TPUs, refer to this section for an opinionated introduction to training on TPUs and using XLA.
Custom hardware for training : find tips and tricks when building your own deep learning rig.
Hyperparameter Search using Trainer API
Inference
Efficient inference with large models in a production environment can be as challenging as training them. In the following sections we go through the steps to run inference on CPU and single/multi-GPU setups.
Inference on a single CPU
Inference on a single GPU
Multi-GPU inference
XLA Integration for TensorFlow Models
Training and inference
Here you'll find techniques, tips and tricks that apply whether you are training a model or running inference with it.
Instantiating a big model
Troubleshooting performance issues
Contribute
This document is far from complete and a lot more needs to be added, so if you have additions or corrections to make, please don't hesitate to open a PR, or if you aren't sure, start an Issue and we can discuss the details there.
When making contributions claiming that A is better than B, please try to include a reproducible benchmark and/or a link to the source of that information (unless it comes directly from you).
Create_a_dataset_for_training.txt
Create a dataset for training
There are many datasets on the Hub to train a model on, but if you can't find one you're interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation.
This guide will show you two ways to create a dataset to finetune on:
provide a folder of images to the --train_data_dir argument
upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument
💡 Learn more about how to create an image dataset for training in the Create an image dataset guide.
Provide a dataset as a folder
For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like:
Copied
data_dir/xxx.png
data_dir/xxy.png
data_dir/[...]/xxz.png
Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training:
Copied
accelerate launch train_unconditional.py \
    --train_data_dir <path-to-train-directory> \
    <other-arguments>
Upload your data to the Hub
💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post.
Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images.
You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test :
Copied
from datasets import load_dataset

# example 1: local folder
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")

# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset("imagefolder", data_files="path_to_zip_file")

# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset(
    "imagefolder",
    data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip",
)

# example 4: providing several splits
dataset = load_dataset(
    "imagefolder",
    data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]},
)
Then use the push_to_hub method to upload the dataset to the Hub:
Copied
# assuming you have run the huggingface-cli login command in a terminal
dataset.push_to_hub("name_of_your_dataset")

# if you want to push to a private repo, simply pass private=True:
dataset.push_to_hub("name_of_your_dataset", private=True)
Now the dataset is available for training by passing the dataset name to the --dataset_name argument:
Copied
accelerate launch --mixed_precision="fp16" train_text_to_image.py \
    --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
    --dataset_name="name_of_your_dataset" \
    <other-arguments>
Next steps
Now that you've created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script.
For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation !
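As a final check, you can confirm the upload worked by loading the dataset back from the Hub by name. This is a minimal sketch; the repo id username/name_of_your_dataset is only a placeholder for the dataset created above.
Copied
from datasets import load_dataset

# Replace with your own <username>/<dataset_name>; this repo id is only an example.
dataset = load_dataset("username/name_of_your_dataset", split="train")

# The ImageFolder builder stores PIL-encoded images in the "image" column.
print(dataset[0]["image"])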
Encode_Inputs.txt
Encode Inputs
These types represent all the different kinds of input that a Tokenizer accepts when using encode_batch() .
TextEncodeInput
tokenizers.TextEncodeInput
Represents a textual input for encoding. Can be either:
A single sequence: TextInputSequence
A pair of sequences: a Tuple of TextInputSequence, or a List of TextInputSequence of size 2
alias of Union[str, Tuple[str, str], List[str]] .
PreTokenizedEncodeInput
tokenizers.PreTokenizedEncodeInput
Represents a pre-tokenized input for encoding. Can be either:
A single sequence: PreTokenizedInputSequence
A pair of sequences: a Tuple of PreTokenizedInputSequence, or a List of PreTokenizedInputSequence of size 2
alias of Union[List[str], Tuple[str], Tuple[Union[List[str], Tuple[str]], Union[List[str], Tuple[str]]], List[Union[List[str], Tuple[str]]]] .
EncodeInput
tokenizers.EncodeInput
Represents all the possible types of input for encoding. Can be:
When is_pretokenized=False : TextEncodeInput
When is_pretokenized=True : PreTokenizedEncodeInput
alias of Union[str, Tuple[str, str], List[str], Tuple[str], Tuple[Union[List[str], Tuple[str]], Union[List[str], Tuple[str]]], List[Union[List[str], Tuple[str]]]] .
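To make these types concrete, here is a short sketch of passing textual and pre-tokenized inputs to encode_batch() using the Python bindings. The bert-base-uncased checkpoint is just an example; any tokenizer on the Hub with a tokenizer.json file would work the same way.
Copied
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# TextEncodeInput: plain sequences
encodings = tokenizer.encode_batch(["Hello, world!", "How are you?"])
print(encodings[0].tokens)

# TextEncodeInput: a pair of sequences passed as a tuple
pair = tokenizer.encode_batch([("What is a widget?", "A widget lets you run inference in the browser.")])
print(pair[0].type_ids)

# PreTokenizedEncodeInput: the text is already split into words
pretok = tokenizer.encode_batch([["Hello", ",", "world", "!"]], is_pretokenized=True)
print(pretok[0].ids)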
Contribute_to_PEFT.txt
Contribute to PEFT
We are happy to accept contributions to PEFT. If you plan to contribute, please read this to make the process as smooth as possible.
Installation
For code contributions to PEFT, you should choose the "source" installation method.
If you are new to creating a pull request, follow the Creating a pull request guide by GitHub.
Tests and code quality checks
Regardless of the contribution type (unless it's only about the docs), you should run tests and code quality checks before creating a PR to ensure your contribution doesn't break anything and follows the project standards.
We provide a Makefile to execute the necessary tests. Run the command below for the unit tests:
Copied
make test
Run one of the following to either only check, or check and fix, code quality and style:
Copied
make quality  # just check
make style    # check and fix
You can also set up pre-commit to run these fixes automatically as Git commit hooks.
Copied
$ pip install pre-commit
$ pre-commit install
Running all the tests can take a couple of minutes, so during development it can be more efficient to only run tests specific to your change:
Copied
pytest tests/ -k <name-of-test>
This should finish much quicker and allow for faster iteration. However, you should still run the whole test suite before creating a PR because your change can inadvertently break tests that at first glance are unrelated.
If your change is specific to a hardware setting (e.g., it requires CUDA), take a look at tests/test_gpu_examples.py and tests/test_common_gpu.py to see if it makes sense to add tests there.
If your change could have an effect on saving and loading models, please run the tests with the --regression flag to trigger regression tests.
It can happen that while you're working on your PR, the underlying code base changes due to other changes being merged. If that happens – especially when there is a merge conflict – please update your branch with the latest changes. This can be a merge or a rebase, and we'll squash and merge the PR once it's ready.
PR description
When opening a PR, please provide a nice description of the change you're proposing. If it relates to other issues or PRs, please reference them. Providing a good description not only helps the reviewers review your code better and faster, it can also be used later (as a basis) for the commit message, which helps with the long-term maintenance of the project.
If your code makes some non-trivial changes, it may also be a good idea to add comments to the code to explain those changes. For example, if you had to iterate on your implementation multiple times because the most obvious way didn't work, it's a good indication that a code comment is needed.
Bugfixes
Please give a description of the circumstances that led to the bug. If there is an existing issue, please link to it (e.g., "Resolves #12345").
Ideally, a bugfix should be accompanied by a test for the bug. The test should fail with the current code and pass with the bugfix. Add a comment to the test that references the issue or PR. Without a test, it is more difficult to prevent regressions in the future.
Add a new fine-tuning method
New parameter-efficient fine-tuning methods are developed all the time. If you would like to add a new and promising method to PEFT, please follow these steps.
Before you start to implement the new method, please open a GitHub issue with your proposal. This way, the maintainers can give you some early feedback.
Please add a link to the source (usually a paper) of the method. Some evidence should be provided that there is general interest in using the method. We will not add new methods that are freshly published but for which there is no evidence of demand.
When implementing the method, it makes sense to look for existing implementations to use as a guide. Moreover, when you structure your code, please take inspiration from the other PEFT methods. For example, if your method is similar to LoRA, it makes sense to structure your code similarly or even reuse some functions or classes where it makes sense (some code duplication is okay, but don't overdo it).
Ideally, in addition to the implementation of the new method, there should also be examples (notebooks, scripts), documentation, and an extensive test suite that proves the method works with a variety of tasks. However, this can be more challenging, so it is acceptable to only provide the implementation and at least one working example. Documentation and tests can be added in follow-up PRs.
Once you have something that seems to be working, don't hesitate to create a draft PR even if it's not in a mergeable state yet. The maintainers are happy to give you feedback and guidance along the way.
Add other features
It is best if you first open an issue on GitHub with a proposal to add the new feature. This way, you can discuss with the maintainers if it makes sense to add the feature before spending too much time on implementing it.
New features should generally be accompanied by tests and documentation or examples. Without the latter, users will have a hard time discovering your cool new feature.
Changes to the code should be implemented in a backward-compatible way.
For example, existing code should continue to work the same way after the feature is merged.
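As a rough illustration of what backward compatibility means in practice (the function below is purely hypothetical and not part of PEFT), new behavior is usually added behind a keyword argument whose default reproduces the old behavior, so existing call sites keep working unchanged:
Copied
# Hypothetical helper, shown only to illustrate a backward-compatible change.
def merge_adapter_weights(weights, scale=1.0, safe_merge=False):
    # `safe_merge` is the newly added option; its default keeps the old behaviour,
    # so calls written before the feature existed still work unchanged.
    merged = [w * scale for w in weights]
    if safe_merge:
        # New, opt-in code path: reject NaNs (a NaN never compares equal to itself).
        assert all(w == w for w in merged), "NaN detected in merged weights"
    return merged

# Existing call sites keep working:
merge_adapter_weights([0.1, 0.2])
# New call sites can opt in to the new behaviour:
merge_adapter_weights([0.1, 0.2], safe_merge=True)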
Spaces_Dev_Mode__Seamless_development_in_Spaces_99.txt
Spaces Dev Mode: Seamless development in Spaces
This feature is still in Beta stage. The Spaces Dev Mode is part of PRO and Enterprise Hub subscriptions.
Spaces Dev Mode
Spaces Dev Mode is a feature that eases the debugging of your application and makes iterating on Spaces faster. Whenever you commit some changes to your Space repo, the underlying Docker image gets rebuilt, and then a new virtual machine is provisioned to host the new container.
Dev Mode allows you to update your Space much more quickly by overriding the Docker image. The Dev Mode Docker image starts your application as a sub-process, allowing you to restart it without stopping the Space container itself. It also starts a VS Code server and an SSH server in the background for you to connect to the Space.
The ability to connect to the running Space unlocks several use cases:
You can make changes to the app code without the Space rebuilding every time
You can debug a running application and monitor resources live
Overall it makes developing and experimenting with Spaces much faster by skipping the Docker image rebuild phase.
Interface
Once Dev Mode is enabled on your Space, you should see a modal like the following.
The application does not restart automatically when you change the code. For your changes to appear in the Space, you need to use the Refresh button, which will restart the app.
If you're using the Streamlit or Gradio SDK, or if your application is Python-based, note that requirements are not installed automatically.
You will need to manually run `pip install` from VS Code or SSH.
SSH connection and VS Code
Dev Mode allows you to connect to your Space's Docker container using SSH. Instructions to connect are listed in the Dev Mode controls modal.
You will need to add your machine's SSH public key to your user account to be able to connect to the Space using SSH. Check out the Git over SSH documentation for more detailed instructions.
You can also use a local install of VS Code to connect to the Space container. To do so, you will need to install the SSH Remote extension.
Persisting changes
The changes you make when Dev Mode is enabled are not persisted to the Space repo automatically. By default, they will be discarded when Dev Mode is disabled or when the Space goes to sleep.
If you wish to persist changes made while Dev Mode is enabled, you need to use `git` from inside the Space container (using VS Code or SSH). For example:
Copied
# Add changes and commit them
git add .
git commit -m "Persist changes from Dev Mode"

# Push the commit to persist them in the repo
git push
The modal will display a warning if you have uncommitted or unpushed changes in the Space:
Enabling Dev Mode
You can enable Dev Mode on your Space from the web interface.
You can also create a Space with Dev Mode enabled:
Limitations
Dev Mode is currently not available for static Spaces. Docker Spaces also have some additional requirements.
Docker Spaces
Dev Mode is supported for Docker Spaces. However, your Space needs to comply with the following rules for Dev Mode to work properly.
The following packages must be installed:
bash (required to establish SSH connections)
curl , wget and procps (required by the VS Code server process)
git and git-lfs to be able to commit and push changes from your Dev Mode environment
Your application code must be located in the /app folder for the Dev Mode daemon to be able to detect changes.
The /app folder must be owned by the user with uid 1000 to allow you to make changes to the code.
The Dockerfile must contain a CMD instruction for startup. Check out Docker's documentation about the CMD instruction for more details.
Dev Mode works well when the base image is Debian-based (e.g., Ubuntu). More exotic Linux distros (e.g., Alpine) are not tested, and Dev Mode is not guaranteed to work on them.
Example of compatible Dockerfiles
This is an example of a Dockerfile compatible with Spaces Dev Mode. It installs the required packages with apt-get , along with a couple more for developer convenience (namely: htop , vim and nano ). It then starts a NodeJS application from /app .
Copied
FROM node:19-slim

RUN apt-get update && \
    apt-get install -y \
    bash \
    git git-lfs \
    wget curl procps \
    htop vim nano && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY --link ./ /app

RUN npm i

RUN chown 1000 /app
USER 1000

CMD ["node", "index.js"]
There are several examples of Dev Mode compatible Docker Spaces in this organization. Feel free to duplicate them in your namespace!
Example Python app (FastAPI HTTP server): https://huggingface.co/spaces/dev-mode-explorers/dev-mode-python
Example JavaScript app (Express.js HTTP server): https://huggingface.co/spaces/dev-mode-explorers/dev-mode-javascript
Feedback
You can share your feedback on Spaces Dev Mode directly on the HF Hub: https://huggingface.co/spaces/dev-mode-explorers/README/discussions
Optimum_Neuron_Distributed.txt
Optimum Neuron Distributed
The optimum.neuron.distributed module provides a set of tools to perform distributed training and inference.
Parallelization
The main task in distributed training / inference is being able to shard things such as the model weights, the gradients, and/or the optimizer state. We built Parallelizer classes to handle the sharding.
Base Parallelizer
The Parallelizer class is the abstract base class from which every model-specific Parallelizer is derived. It provides methods to parallelize the model and to save and load sharded checkpoints.
class optimum.neuron.distributed.Parallelizer ( )
Base abstract class that handles model parallelism.
_parallelize ( model: PreTrainedModel, device: Optional[torch.device] = None, parallelize_embeddings: bool = True, sequence_parallel_enabled: bool = False, should_parallelize_layer_predicate_func: Optional[Callable[[torch.nn.Module], bool]] = None, **parallel_layer_specific_kwargs ) → PreTrainedModel
Parameters
model ( PreTrainedModel ) — The model to parallelize.
device ( Optional[torch.device] , defaults to None ) — The device where the new parallel layers should be put.
parallelize_embeddings ( bool , defaults to True ) — Whether or not the embeddings should be parallelized. This can be disabled when the TP size does not divide the vocabulary size.
sequence_parallel_enabled ( bool , defaults to False ) — Whether or not sequence parallelism is enabled.
should_parallelize_layer_predicate_func ( Optional[Callable[[torch.nn.Module], bool]] , defaults to None ) — A function that takes a layer as input and returns a boolean specifying if the input layer should be parallelized. This is useful to skip unnecessary parallelization, for instance for pipeline parallelism.
**parallel_layer_specific_kwargs ( Dict[str, Any] ) — Keyword arguments specific to some parallel layers; they will be ignored by the other parallel layers.
Returns
PreTrainedModel
The parallelized model.
Parallelizes the model by transforming regular layers into their parallel counterparts. Each concrete class must implement it.
parallelize ( model: Union[PreTrainedModel, NeuronPeftModel], device: Optional[torch.device] = None, parallelize_embeddings: bool = True, sequence_parallel_enabled: bool = False, kv_size_multiplier: Optional[int] = None, pipeline_parallel_input_names: Optional[Union[Tuple[str, ...], Dict[str, Tuple[str, ...]]]] = None, pipeline_parallel_num_microbatches: int = 1, pipeline_parallel_use_zero1_optimizer: bool = False, pipeline_parallel_gradient_checkpointing_enabled: bool = False, checkpoint_dir: Optional[Union[str, Path]] = None, num_local_ranks_per_step: int = 8 ) → PreTrainedModel
Parameters
model ( Union[PreTrainedModel, NeuronPeftModel] ) — The model to parallelize.
device ( Optional[torch.device] , defaults to None ) — The device where the new parallel layers should be put.
parallelize_embeddings ( bool , defaults to True ) — Whether or not the embeddings should be parallelized. This can be disabled when the TP size does not divide the vocabulary size.
sequence_parallel_enabled ( bool , defaults to False ) — Whether or not sequence parallelism is enabled.
kv_size_multiplier ( Optional[int] , defaults to None ) — The number of times to replicate the KV heads when the TP size is bigger than the number of KV heads. If left unspecified, the smallest multiplier that makes the number of KV heads divisible by the TP size will be used.
pipeline_parallel_num_microbatches ( int , defaults to 1 ) — The number of microbatches used for pipeline execution.
pipeline_parallel_use_zero1_optimizer ( bool , defaults to False ) — When the ZeRO-1 optimizer is used, set this to True so the PP model knows that ZeRO-1 handles data-parallel gradient averaging.
pipeline_parallel_gradient_checkpointing_enabled ( bool , defaults to False ) — Whether or not gradient checkpointing should be enabled when doing pipeline parallelism.
checkpoint_dir ( Optional[Union[str, Path]] ) — Path to a sharded checkpoint. If specified, the checkpoint weights will be loaded into the parallelized model.
num_local_ranks_per_step ( int , defaults to 8 ) — The number of local ranks that can initialize and load the model weights at the same time. If the value is lower than 0, the maximum number of ranks will be used.
Returns
PreTrainedModel
The parallelized model.
Parallelizes the model by transforming regular layers into their parallel counterparts using cls._parallelize() . It also makes sure that each parameter has loaded its weights or has been initialized if there are no pre-trained weights associated with it.
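As a rough sketch of how this method is typically used, you would retrieve the model-specific Parallelizer through the ParallelizersManager factory (described below) and call parallelize on your loaded model. The checkpoint name is only an example, and this assumes the script is launched on a Trainium instance with the Neuron distributed environment (tensor parallel groups, etc.) already initialized.
Copied
from transformers import AutoModelForCausalLM
from optimum.neuron.distributed import ParallelizersManager

# Hypothetical checkpoint; any supported architecture works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Retrieve the Parallelizer class associated with this model.
parallelizer_cls = ParallelizersManager.parallelizer_for_model(model)

# Shard the model across the tensor parallel ranks, with sequence parallelism enabled.
model = parallelizer_cls.parallelize(model, sequence_parallel_enabled=True)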
optimizer_for_mp ( optimizer: torch.optim.Optimizer, orig_param_to_parallel_param_on_xla: Mapping[int, torch.nn.Parameter] ) → torch.optim.Optimizer
Parameters
optimizer ( torch.optim.Optimizer ) — The original optimizer.
orig_param_to_parallel_param_on_xla ( Mapping[int, torch.nn.Parameter] ) — A mapping (e.g. dict-like) that maps the id of a parameter in optimizer to the id of its parallelized counterpart on an XLA device.
Returns
torch.optim.Optimizer
The tensor-parallelism-ready optimizer.
Creates an optimizer ready for a parallelized model from an existing optimizer. There are two cases:
The optimizer has been created via a lazy constructor from optimum.neuron.distributed.utils.make_optimizer_constructor_lazy , in which case the exact intended optimizer is created for tensor parallelism.
The optimizer was created with a regular constructor. In this case the optimizer for tensor parallelism is created as close as possible to what was intended, but that is not guaranteed.
save_model_sharded_checkpoint ( model: Union[PreTrainedModel, NxDPPModel], output_dir: Union[str, Path], optimizer: Optional[torch.optim.Optimizer] = None, use_xser: bool = True, async_save: bool = False, num_local_ranks_per_step: int = 8 )
load_model_sharded_checkpoint ( model: PreTrainedModel, load_dir: Union[str, Path] )
Selecting Model-Specific Parallelizer Classes
Each model that supports parallelization in optimum-neuron has its own derived Parallelizer class. The factory class ParallelizersManager allows you to retrieve such model-specific Parallelizers easily.
class optimum.neuron.distributed.ParallelizersManager ( )
get_supported_model_types ( )
Provides the list of supported model types for parallelization.
is_model_supported ( model_type_or_model: Union[str, PreTrainedModel, NeuronPeftModel] )
Parameters
model_type_or_model ( Union[str, PreTrainedModel] ) — Either the model type or an instance of the model.
Returns a tuple of 3 booleans where:
The first element indicates if tensor parallelism can be used for this model,
The second element indicates if sequence parallelism can be used on top of tensor parallelism for this model,
The third element indicates if pipeline parallelism can be used for this model.
parallelizer_for_model ( model_type_or_model: Union[str, PreTrainedModel, NeuronPeftModel] )
Parameters
model_type_or_model ( Union[str, PreTrainedModel] ) — Either the model type or an instance of the model.
Returns the parallelizer class associated with the model.
Utils
Lazy Loading
Distributed training / inference is usually needed when the model is too big to fit on one device. Tools that allow for lazy loading of model weights and optimizer states are thus needed to avoid going out-of-memory before parallelization.
optimum.neuron.distributed.lazy_load_for_parallelism ( tensor_parallel_size: int = 1, pipeline_parallel_size: int = 1 )
Parameters
tensor_parallel_size ( int , defaults to 1 ) — The tensor parallel size considered.
pipeline_parallel_size ( int , defaults to 1 ) — The pipeline parallel size considered.
Context manager that makes the loading of a model lazy for model parallelism:
Every torch.nn.Linear is put on the torch.device("meta") device, meaning that it takes no memory to instantiate.
Every torch.nn.Embedding is also put on the torch.device("meta") device.
No state dict is actually loaded; instead a weight map is created and attached to the model. For more information, read the optimum.neuron.distributed.utils.from_pretrained_for_mp docstring.
If both tensor_parallel_size and pipeline_parallel_size are set to 1, no lazy loading is performed.
optimum.neuron.distributed.make_optimizer_constructor_lazy ( optimizer_cls: Type[torch.optim.Optimizer] )
Transforms an optimizer constructor (optimizer class) to make it lazy by not initializing the parameters. This makes the optimizer lightweight and usable to create a "real" optimizer once the model has been parallelized.
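Putting the two utilities together, here is a sketch of how lazy loading is typically combined with a lazy optimizer constructor. The checkpoint name and the hyperparameters are only examples, and this assumes a Trainium instance where the model is subsequently parallelized, at which point optimizer_for_mp() can turn the lazy optimizer into a real one.
Copied
import torch
from transformers import AutoModelForCausalLM
from optimum.neuron.distributed import lazy_load_for_parallelism, make_optimizer_constructor_lazy

# Instantiate the model on the meta device so that no memory is allocated yet.
with lazy_load_for_parallelism(tensor_parallel_size=8):
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Create a lazy optimizer: parameters are not materialized at this point.
lazy_adamw = make_optimizer_constructor_lazy(torch.optim.AdamW)
optimizer = lazy_adamw(model.parameters(), lr=1e-5)

# After parallelizing the model, the "real" optimizer is created with
# Parallelizer.optimizer_for_mp(optimizer, orig_param_to_parallel_param_on_xla).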
Widgets.txt
Widgets
What's a widget?
Many model repos have a widget that allows anyone to run inference directly in the browser! Here are some examples:
Named Entity Recognition using spaCy .
Image Classification using 🤗 Transformers .
Text to Speech using ESPnet .
Sentence Similarity using Sentence Transformers .
You can try out all the widgets here .
Enabling a widget
A widget is automatically created for your model when you upload it to the Hub. To determine which pipeline and widget to display ( text-classification , token-classification , translation , etc.), we analyze information in the repo, such as the metadata provided in the model card and configuration files. This information is mapped to a single pipeline_tag . We choose to expose only one widget per model for simplicity.
For most use cases, we determine the model type from the tags. For example, if there is tag: text-classification in the model card metadata , the inferred pipeline_tag will be text-classification .
For some libraries, such as 🤗 Transformers , the model type is inferred automatically based on configuration files ( config.json ). The architecture can determine the type: for example, AutoModelForTokenClassification corresponds to token-classification . If you're interested in this, you can see pseudo-code in this gist .
You can always manually override your pipeline type with pipeline_tag: xxx in your model card metadata . (You can also use the metadata GUI editor to do this).
How can I control my model's widget example input?
You can specify the widget input in the model card metadata section: Copied widget: - text: "Jens Peter Hansen kommer fra Danmark" You can provide more than one example input. In the examples dropdown menu of the widget, they will appear as Example 1 , Example 2 , etc. Optionally, you can supply example_title as well. Copied widget: - text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy." example_title: "Sentiment analysis" - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..." example_title: "Coreference resolution" - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..." example_title: "Logic puzzles" - text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..." example_title: "Reading comprehension" Moreover, you can specify non-text example inputs in the model card metadata. Refer here for a complete list of sample input formats for all widget types. For vision & audio widget types, provide example inputs with src rather than text . For example, allow users to choose from two sample audio files for automatic speech recognition tasks by: Copied widget: - src: https://example.org/somewhere/speech_samples/sample1.flac example_title: Speech sample 1 - src: https://example.org/somewhere/speech_samples/sample2.flac example_title: Speech sample 2 Note that you can also include example files in your model repository and use them as: Copied widget: - src: https://huggingface.co/username/model_repo/resolve/main/sample1.flac example_title: Custom Speech Sample 1 But even more convenient, if the file lives in the corresponding model repo, you can just use the filename or file path inside the repo: Copied widget: - src: sample1.flac example_title: Custom Speech Sample 1 or if it was nested inside the repo: Copied widget: - src: nested/directory/sample1.flac We provide example inputs for some languages and most widget types in default-widget-inputs.ts file . If some examples are missing, we welcome PRs from the community to add them! Example outputs As an extension to example inputs, for each widget example, you can also optionally describe the corresponding model output, directly in the output property. This is useful when the model is not yet supported by the Inference API (for instance, the model library is not yet supported or the model is too large) so that the model page can still showcase how the model works and what results it gives. For instance, for an automatic-speech-recognition model: Copied widget: - src: sample1.flac output: text: "Hello my name is Julien" The output property should be a YAML dictionary that represents the Inference API output. For a model that outputs text, see the example above. For a model that outputs labels (like a text-classification model for instance), output should look like this: Copied widget: - text: "I liked this movie" output: - label: POSITIVE score: 0.8 - label: NEGATIVE score: 0.2 Finally, for a model that outputs an image, audio, or any other kind of asset, the output should include a url property linking to either a file name or path inside the repo or a remote URL. 
For example, for a text-to-image model:
Copied
widget:
  - text: "picture of a futuristic tiger, artstation"
    output:
      url: images/tiger.jpg
We can also surface the example outputs in the Hugging Face UI, for instance, for a text-to-image model to display a gallery of cool image generations.
What are all the possible task/widget types?
You can find all the supported tasks in the pipelines.ts file . Here are some links to examples:
text-classification , for instance FacebookAI/roberta-large-mnli
token-classification , for instance dbmdz/bert-large-cased-finetuned-conll03-english
question-answering , for instance distilbert/distilbert-base-uncased-distilled-squad
translation , for instance google-t5/t5-base
summarization , for instance facebook/bart-large-cnn
conversational , for instance facebook/blenderbot-400M-distill
text-generation , for instance openai-community/gpt2
fill-mask , for instance distilbert/distilroberta-base
zero-shot-classification (implemented on top of an NLI text-classification model), for instance facebook/bart-large-mnli
table-question-answering , for instance google/tapas-base-finetuned-wtq
sentence-similarity , for instance osanseviero/full-sentence-distillroberta2
How can I control my model's widget Inference API parameters?
Generally, the Inference API for a model uses the default pipeline settings associated with each task. But if you'd like to change the pipeline's default settings and specify additional inference parameters, you can configure the parameters directly through the model card metadata. Refer here for some of the most commonly used parameters associated with each task.
For example, if you want to specify an aggregation strategy for a NER task in the widget:
Copied
inference:
  parameters:
    aggregation_strategy: "none"
Or if you'd like to change the temperature for a summarization task in the widget:
Copied
inference:
  parameters:
    temperature: 0.7
The Serverless Inference API allows you to send HTTP requests to models in the Hugging Face Hub programmatically. ⚡⚡ Learn more about it by reading the Inference API documentation .
Finally, you can also deploy all those models to dedicated Inference Endpoints .
Using_TGI_with_Inferentia.txt
Using TGI with Inferentia
Check out this guide on how to serve models with TGI on Inferentia2.
@huggingface_inference.txt
@huggingface/inference
Classes
HfInference HfInferenceEndpoint InferenceOutputError
Interfaces
AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue
Type Aliases
AudioClassificationArgs
Ƭ AudioClassificationArgs : BaseArgs & { data : Blob | ArrayBuffer }
Defined in inference/src/tasks/audio/audioClassification.ts:5
AudioClassificationReturn
Ƭ AudioClassificationReturn :
AudioClassificationOutputValue [] Defined in inference/src/tasks/audio/audioClassification.ts:24 AudioToAudioArgs Ƭ AudioToAudioArgs : BaseArgs & { data : Blob | ArrayBuffer } Defined in inference/src/tasks/audio/audioToAudio.ts:5 AudioToAudioReturn Ƭ AudioToAudioReturn : AudioToAudioOutputValue [] Defined in inference/src/tasks/audio/audioToAudio.ts:29 AutomaticSpeechRecognitionArgs Ƭ AutomaticSpeechRecognitionArgs : BaseArgs & { data : Blob | ArrayBuffer } Defined in inference/src/tasks/audio/automaticSpeechRecognition.ts:6 DocumentQuestionAnsweringArgs Ƭ DocumentQuestionAnsweringArgs : BaseArgs & { inputs : { image : Blob | ArrayBuffer ; question : string } } Defined in inference/src/tasks/multimodal/documentQuestionAnswering.ts:8 FeatureExtractionArgs Ƭ FeatureExtractionArgs : BaseArgs & { inputs : string | string [] } Defined in inference/src/tasks/nlp/featureExtraction.ts:6 FeatureExtractionOutput Ƭ FeatureExtractionOutput : ( number | number [] | number [][])[] Returned values are a multidimensional array of floats (dimension depending on if you sent a string or a list of string, and if the automatic reduction, usually mean_pooling for instance was applied for you or not. This should be explained on the model’s README). Defined in inference/src/tasks/nlp/featureExtraction.ts:19 FillMaskArgs Ƭ FillMaskArgs : BaseArgs & { inputs : string } Defined in inference/src/tasks/nlp/fillMask.ts:5 FillMaskOutput Ƭ FillMaskOutput : { score : number ; sequence : string ; token : number ; token_str : string }[] Defined in inference/src/tasks/nlp/fillMask.ts:9 ImageClassificationArgs Ƭ ImageClassificationArgs : BaseArgs & { data : Blob | ArrayBuffer } Defined in inference/src/tasks/cv/imageClassification.ts:5 ImageClassificationOutput Ƭ ImageClassificationOutput : ImageClassificationOutputValue [] Defined in inference/src/tasks/cv/imageClassification.ts:23 ImageSegmentationArgs Ƭ ImageSegmentationArgs : BaseArgs & { data : Blob | ArrayBuffer } Defined in inference/src/tasks/cv/imageSegmentation.ts:5 ImageSegmentationOutput Ƭ ImageSegmentationOutput : ImageSegmentationOutputValue [] Defined in inference/src/tasks/cv/imageSegmentation.ts:27 ImageToImageArgs Ƭ ImageToImageArgs : BaseArgs & { inputs : Blob | ArrayBuffer ; parameters? : { guess_mode? : boolean ; guidance_scale? : number ; height? : number ; negative_prompt? : string ; num_inference_steps? : number ; prompt? : string ; strength? : number ; width? 
: number } } Defined in inference/src/tasks/cv/imageToImage.ts:6 ImageToImageOutput Ƭ ImageToImageOutput : Blob Defined in inference/src/tasks/cv/imageToImage.ts:55 ImageToTextArgs Ƭ ImageToTextArgs : BaseArgs & { data : Blob | ArrayBuffer } Defined in inference/src/tasks/cv/imageToText.ts:5 InferenceProvider Ƭ InferenceProvider : typeof INFERENCE_PROVIDERS [ number ] Defined in inference/src/types.ts:49 InferenceTask Ƭ InferenceTask : Exclude \< PipelineType , "other" > Defined in inference/src/types.ts:46 ModelId Ƭ ModelId : string HF model id, like “meta-llama/Llama-3.3-70B-Instruct” Defined in inference/src/types.ts:7 ObjectDetectionArgs Ƭ ObjectDetectionArgs : BaseArgs & { data : Blob | ArrayBuffer } Defined in inference/src/tasks/cv/objectDetection.ts:5 ObjectDetectionOutput Ƭ ObjectDetectionOutput : ObjectDetectionOutputValue [] Defined in inference/src/tasks/cv/objectDetection.ts:33 ProviderMapping Ƭ ProviderMapping \< ProviderId >: Partial \< Record \< WidgetType , Partial \< Record \< ModelId , ProviderId >>>> Type parameters Name Type ProviderId extends string Defined in inference/src/providers/types.ts:4 QuestionAnsweringArgs Ƭ QuestionAnsweringArgs : BaseArgs & { inputs : { context : string ; question : string } } Defined in inference/src/tasks/nlp/questionAnswering.ts:5 RequestArgs Ƭ RequestArgs : BaseArgs & { data : Blob | ArrayBuffer } | { inputs : unknown } | ChatCompletionInput & { accessToken? : string ; parameters? : Record \< string , unknown > } Defined in inference/src/types.ts:86 SentenceSimilarityArgs Ƭ SentenceSimilarityArgs : BaseArgs & { inputs : Record \< string , unknown > | Record \< string , unknown >[] } Defined in inference/src/tasks/nlp/sentenceSimilarity.ts:6 SentenceSimilarityOutput Ƭ SentenceSimilarityOutput : number [] Returned values are a list of floats Defined in inference/src/tasks/nlp/sentenceSimilarity.ts:19 SummarizationArgs Ƭ SummarizationArgs : BaseArgs & { inputs : string ; parameters? : { max_length? : number ; max_time? : number ; min_length? : number ; repetition_penalty? : number ; temperature? : number ; top_k? : number ; top_p? 
: number } } Defined in inference/src/tasks/nlp/summarization.ts:5 TableQuestionAnsweringArgs Ƭ TableQuestionAnsweringArgs : BaseArgs & { inputs : { query : string ; table : Record \< string , string []> } } Defined in inference/src/tasks/nlp/tableQuestionAnswering.ts:5 TabularClassificationArgs Ƭ TabularClassificationArgs : BaseArgs & { inputs : { data : Record \< string , string []> } } Defined in inference/src/tasks/tabular/tabularClassification.ts:5 TabularClassificationOutput Ƭ TabularClassificationOutput : number [] A list of predicted labels for each row Defined in inference/src/tasks/tabular/tabularClassification.ts:17 TabularRegressionArgs Ƭ TabularRegressionArgs : BaseArgs & { inputs : { data : Record \< string , string []> } } Defined in inference/src/tasks/tabular/tabularRegression.ts:5 TabularRegressionOutput Ƭ TabularRegressionOutput : number [] a list of predicted values for each row Defined in inference/src/tasks/tabular/tabularRegression.ts:17 TextClassificationArgs Ƭ TextClassificationArgs : BaseArgs & { inputs : string } Defined in inference/src/tasks/nlp/textClassification.ts:5 TextClassificationOutput Ƭ TextClassificationOutput : { label : string ; score : number }[] Defined in inference/src/tasks/nlp/textClassification.ts:12 TextGenerationStreamFinishReason Ƭ TextGenerationStreamFinishReason : "length" | "eos_token" | "stop_sequence" Defined in inference/src/tasks/nlp/textGenerationStream.ts:46 TextToImageArgs Ƭ TextToImageArgs : BaseArgs & { input? : { prompt : string } ; inputs : string ; parameters? : { guidance_scale? : number ; height? : number ; negative_prompt? : string ; num_inference_steps? : number ; width? : number } ; prompt? : string ; response_format? : "base64" } Defined in inference/src/tasks/cv/textToImage.ts:5 TextToImageOutput Ƭ TextToImageOutput : Blob Defined in inference/src/tasks/cv/textToImage.ts:44 TextToSpeechArgs Ƭ TextToSpeechArgs : BaseArgs & { inputs : string } Defined in inference/src/tasks/audio/textToSpeech.ts:5 TextToSpeechOutput Ƭ TextToSpeechOutput : Blob Defined in inference/src/tasks/audio/textToSpeech.ts:12 TokenClassificationArgs Ƭ TokenClassificationArgs : BaseArgs & { inputs : string ; parameters? : { aggregation_strategy? : "none" | "simple" | "first" | "average" | "max" } } Defined in inference/src/tasks/nlp/tokenClassification.ts:6 TokenClassificationOutput Ƭ TokenClassificationOutput : TokenClassificationOutputValue [] Defined in inference/src/tasks/nlp/tokenClassification.ts:52 TranslationArgs Ƭ TranslationArgs : BaseArgs & { inputs : string | string [] } Defined in inference/src/tasks/nlp/translation.ts:5 TranslationOutput Ƭ TranslationOutput : TranslationOutputValue | TranslationOutputValue [] Defined in inference/src/tasks/nlp/translation.ts:19 VisualQuestionAnsweringArgs Ƭ VisualQuestionAnsweringArgs : BaseArgs & { inputs : { image : Blob | ArrayBuffer ; question : string } } Defined in inference/src/tasks/multimodal/visualQuestionAnswering.ts:6 ZeroShotClassificationArgs Ƭ ZeroShotClassificationArgs : BaseArgs & { inputs : string | string [] ; parameters : { candidate_labels : string [] ; multi_label? 
: boolean } } Defined in inference/src/tasks/nlp/zeroShotClassification.ts:6 ZeroShotClassificationOutput Ƭ ZeroShotClassificationOutput : ZeroShotClassificationOutputValue [] Defined in inference/src/tasks/nlp/zeroShotClassification.ts:29 ZeroShotImageClassificationArgs Ƭ ZeroShotImageClassificationArgs : BaseArgs & { inputs : { image : Blob | ArrayBuffer } ; parameters : { candidate_labels : string [] } } Defined in inference/src/tasks/cv/zeroShotImageClassification.ts:7 ZeroShotImageClassificationOutput Ƭ ZeroShotImageClassificationOutput : ZeroShotImageClassificationOutputValue [] Defined in inference/src/tasks/cv/zeroShotImageClassification.ts:27 Variables FAL _ AI _ SUPPORTED _ MODEL _ IDS • Const FAL_AI_SUPPORTED_MODEL_IDS : ProviderMapping \< FalAiId > Defined in inference/src/providers/fal-ai.ts:7 INFERENCE _ PROVIDERS • Const INFERENCE_PROVIDERS : readonly [ "fal-ai" , "replicate" , "sambanova" , "together" , "hf-inference" ] Defined in inference/src/types.ts:48 REPLICATE _ SUPPORTED _ MODEL _ IDS • Const REPLICATE_SUPPORTED_MODEL_IDS : ProviderMapping \< ReplicateId > Defined in inference/src/providers/replicate.ts:7 SAMBANOVA _ SUPPORTED _ MODEL _ IDS • Const SAMBANOVA_SUPPORTED_MODEL_IDS : ProviderMapping \< SambanovaId > Defined in inference/src/providers/sambanova.ts:7 TOGETHER _ SUPPORTED _ MODEL _ IDS • Const TOGETHER_SUPPORTED_MODEL_IDS : ProviderMapping \< TogetherId > https://docs.together.ai/reference/models-1 Defined in inference/src/providers/together.ts:13 Functions audioClassification ▸ audioClassification ( args , options? ): Promise \< AudioClassificationReturn > This task reads some audio input and outputs the likelihood of classes. Recommended model: superb/hubert-large-superb-er Parameters Name Type args AudioClassificationArgs options? Options Returns Promise \< AudioClassificationReturn > Defined in inference/src/tasks/audio/audioClassification.ts:30 audioToAudio ▸ audioToAudio ( args , options? ): Promise \< AudioToAudioReturn > This task reads some audio input and outputs one or multiple audio files. Example model: speechbrain/sepformer-wham does audio source separation. Parameters Name Type args AudioToAudioArgs options? Options Returns Promise \< AudioToAudioReturn > Defined in inference/src/tasks/audio/audioToAudio.ts:35 automaticSpeechRecognition ▸ automaticSpeechRecognition ( args , options? ): Promise \< AutomaticSpeechRecognitionOutput > This task reads some audio input and outputs the said words within the audio files. Recommended model (english language): facebook/wav2vec2-large-960h-lv60-self Parameters Name Type args AutomaticSpeechRecognitionArgs options? Options Returns Promise \< AutomaticSpeechRecognitionOutput > Defined in inference/src/tasks/audio/automaticSpeechRecognition.ts:24 chatCompletion ▸ chatCompletion ( args , options? ): Promise \< ChatCompletionOutput > Use the chat completion endpoint to generate a response to a prompt, using OpenAI message completion API no stream Parameters Name Type args BaseArgs & ChatCompletionInput options? Options Returns Promise \< ChatCompletionOutput > Defined in inference/src/tasks/nlp/chatCompletion.ts:9 chatCompletionStream ▸ chatCompletionStream ( args , options? ): AsyncGenerator \< ChatCompletionStreamOutput > Use to continue text from a prompt. Same as textGeneration but returns generator that can be read one token at a time Parameters Name Type args BaseArgs & ChatCompletionInput options? 
Options Returns AsyncGenerator \< ChatCompletionStreamOutput > Defined in inference/src/tasks/nlp/chatCompletionStream.ts:8 documentQuestionAnswering ▸ documentQuestionAnswering ( args , options? ): Promise \< DocumentQuestionAnsweringOutput > Answers a question on a document image. Recommended model: impira/layoutlm-document-qa. Parameters Name Type args DocumentQuestionAnsweringArgs options? Options Returns Promise \< DocumentQuestionAnsweringOutput > Defined in inference/src/tasks/multimodal/documentQuestionAnswering.ts:42 featureExtraction ▸ featureExtraction ( args , options? ): Promise \< FeatureExtractionOutput > This task reads some text and outputs raw float values, that are usually consumed as part of a semantic database/semantic search. Parameters Name Type args FeatureExtractionArgs options? Options Returns Promise \< FeatureExtractionOutput > Defined in inference/src/tasks/nlp/featureExtraction.ts:24 fillMask ▸ fillMask ( args , options? ): Promise \< FillMaskOutput > Tries to fill in a hole with a missing word (token to be precise). That’s the base task for BERT models. Parameters Name Type args FillMaskArgs options? Options Returns Promise \< FillMaskOutput > Defined in inference/src/tasks/nlp/fillMask.ts:31 imageClassification ▸ imageClassification ( args , options? ): Promise \< ImageClassificationOutput > This task reads some image input and outputs the likelihood of classes. Recommended model: google/vit-base-patch16-224 Parameters Name Type args ImageClassificationArgs options? Options Returns Promise \< ImageClassificationOutput > Defined in inference/src/tasks/cv/imageClassification.ts:29 imageSegmentation ▸ imageSegmentation ( args , options? ): Promise \< ImageSegmentationOutput > This task reads some image input and outputs the likelihood of classes & bounding boxes of detected objects. Recommended model: facebook/detr-resnet-50-panoptic Parameters Name Type args ImageSegmentationArgs options? Options Returns Promise \< ImageSegmentationOutput > Defined in inference/src/tasks/cv/imageSegmentation.ts:33 imageToImage ▸ imageToImage ( args , options? ): Promise \< ImageToImageOutput > This task reads some text input and outputs an image. Recommended model: lllyasviel/sd-controlnet-depth Parameters Name Type args ImageToImageArgs options? Options Returns Promise \< ImageToImageOutput > Defined in inference/src/tasks/cv/imageToImage.ts:61 imageToText ▸ imageToText ( args , options? ): Promise \< ImageToTextOutput > This task reads some image input and outputs the text caption. Parameters Name Type args ImageToTextArgs options? Options Returns Promise \< ImageToTextOutput > Defined in inference/src/tasks/cv/imageToText.ts:22 objectDetection ▸ objectDetection ( args , options? ): Promise \< ObjectDetectionOutput > This task reads some image input and outputs the likelihood of classes & bounding boxes of detected objects. Recommended model: facebook/detr-resnet-50 Parameters Name Type args ObjectDetectionArgs options? Options Returns Promise \< ObjectDetectionOutput > Defined in inference/src/tasks/cv/objectDetection.ts:39 questionAnswering ▸ questionAnswering ( args , options? ): Promise \< QuestionAnsweringOutput > Want to have a nice know-it-all bot that can answer any question?. Recommended model: deepset/roberta-base-squad2 Parameters Name Type args QuestionAnsweringArgs options? Options Returns Promise \< QuestionAnsweringOutput > Defined in inference/src/tasks/nlp/questionAnswering.ts:34 request ▸ request \< T >( args , options? 
): Promise \< T > Primitive to make custom calls to the inference provider Type parameters Name T Parameters Name Type args RequestArgs options? Options & { chatCompletion? : boolean ; task? : string ; taskHint? : InferenceTask } Returns Promise \< T > Defined in inference/src/tasks/custom/request.ts:7 sentenceSimilarity ▸ sentenceSimilarity ( args , options? ): Promise \< SentenceSimilarityOutput > Calculate the semantic similarity between one text and a list of other sentences by comparing their embeddings. Parameters Name Type args SentenceSimilarityArgs options? Options Returns Promise \< SentenceSimilarityOutput > Defined in inference/src/tasks/nlp/sentenceSimilarity.ts:24 streamingRequest ▸ streamingRequest \< T >( args , options? ): AsyncGenerator \< T > Primitive to make custom inference calls that expect server-sent events, and returns the response through a generator Type parameters Name T Parameters Name Type args RequestArgs options? Options & { chatCompletion? : boolean ; task? : string ; taskHint? : InferenceTask } Returns AsyncGenerator \< T > Defined in inference/src/tasks/custom/streamingRequest.ts:9 summarization ▸ summarization ( args , options? ): Promise \< SummarizationOutput > This task is well known to summarize longer text into shorter text. Be careful, some models have a maximum length of input. That means that the summary cannot handle full books for instance. Be careful when choosing your model. Parameters Name Type args SummarizationArgs options? Options Returns Promise \< SummarizationOutput > Defined in inference/src/tasks/nlp/summarization.ts:52 tableQuestionAnswering ▸ tableQuestionAnswering ( args , options? ): Promise \< TableQuestionAnsweringOutput > Don’t know SQL? Don’t want to dive into a large spreadsheet? Ask questions in plain english! Recommended model: google/tapas-base-finetuned-wtq. Parameters Name Type args TableQuestionAnsweringArgs options? Options Returns Promise \< TableQuestionAnsweringOutput > Defined in inference/src/tasks/nlp/tableQuestionAnswering.ts:40 tabularClassification ▸ tabularClassification ( args , options? ): Promise \< TabularClassificationOutput > Predicts target label for a given set of features in tabular form. Typically, you will want to train a classification model on your training data and use it with your new data of the same format. Example model: vvmnnnkv/wine-quality Parameters Name Type args TabularClassificationArgs options? Options Returns Promise \< TabularClassificationOutput > Defined in inference/src/tasks/tabular/tabularClassification.ts:24 tabularRegression ▸ tabularRegression ( args , options? ): Promise \< TabularRegressionOutput > Predicts target value for a given set of features in tabular form. Typically, you will want to train a regression model on your training data and use it with your new data of the same format. Example model: scikit-learn/Fish-Weight Parameters Name Type args TabularRegressionArgs options? Options Returns Promise \< TabularRegressionOutput > Defined in inference/src/tasks/tabular/tabularRegression.ts:24 textClassification ▸ textClassification ( args , options? ): Promise \< TextClassificationOutput > Usually used for sentiment-analysis this will output the likelihood of classes of an input. Recommended model: distilbert-base-uncased-finetuned-sst-2-english Parameters Name Type args TextClassificationArgs options? Options Returns Promise \< TextClassificationOutput > Defined in inference/src/tasks/nlp/textClassification.ts:26 textGeneration ▸ textGeneration ( args , options? 
): Promise \< TextGenerationOutput > Use to continue text from a prompt. This is a very generic task. Recommended model: gpt2 (it’s a simple model, but fun to play with). Parameters Name Type args BaseArgs & TextGenerationInput options? Options Returns Promise \< TextGenerationOutput > Defined in inference/src/tasks/nlp/textGeneration.ts:27 textGenerationStream ▸ textGenerationStream ( args , options? ): AsyncGenerator \< TextGenerationStreamOutput > Use to continue text from a prompt. Same as textGeneration but returns generator that can be read one token at a time Parameters Name Type args BaseArgs & TextGenerationInput options? Options Returns AsyncGenerator \< TextGenerationStreamOutput > Defined in inference/src/tasks/nlp/textGenerationStream.ts:88 textToImage ▸ textToImage ( args , options? ): Promise \< TextToImageOutput > This task reads some text input and outputs an image. Recommended model: stabilityai/stable-diffusion-2 Parameters Name Type args TextToImageArgs options? Options Returns Promise \< TextToImageOutput > Defined in inference/src/tasks/cv/textToImage.ts:59 textToSpeech ▸ textToSpeech ( args , options? ): Promise \< TextToSpeechOutput > This task synthesize an audio of a voice pronouncing a given text. Recommended model: espnet/kan-bayashi_ljspeech_vits Parameters Name Type args TextToSpeechArgs options? Options Returns Promise \< TextToSpeechOutput > Defined in inference/src/tasks/audio/textToSpeech.ts:18 tokenClassification ▸ tokenClassification ( args , options? ): Promise \< TokenClassificationOutput > Usually used for sentence parsing, either grammatical, or Named Entity Recognition (NER) to understand keywords contained within text. Recommended model: dbmdz/bert-large-cased-finetuned-conll03-english Parameters Name Type args TokenClassificationArgs options? Options Returns Promise \< TokenClassificationOutput > Defined in inference/src/tasks/nlp/tokenClassification.ts:57 translation ▸ translation ( args , options? ): Promise \< TranslationOutput > This task is well known to translate text from one language to another. Recommended model: Helsinki-NLP/opus-mt-ru-en. Parameters Name Type args TranslationArgs options? Options Returns Promise \< TranslationOutput > Defined in inference/src/tasks/nlp/translation.ts:24 visualQuestionAnswering ▸ visualQuestionAnswering ( args , options? ): Promise \< VisualQuestionAnsweringOutput > Answers a question on an image. Recommended model: dandelin/vilt-b32-finetuned-vqa. Parameters Name Type args VisualQuestionAnsweringArgs options? Options Returns Promise \< VisualQuestionAnsweringOutput > Defined in inference/src/tasks/multimodal/visualQuestionAnswering.ts:32 zeroShotClassification ▸ zeroShotClassification ( args , options? ): Promise \< ZeroShotClassificationOutput > This task is super useful to try out classification with zero code, you simply pass a sentence/paragraph and the possible labels for that sentence, and you get a result. Recommended model: facebook/bart-large-mnli. Parameters Name Type args ZeroShotClassificationArgs options? Options Returns Promise \< ZeroShotClassificationOutput > Defined in inference/src/tasks/nlp/zeroShotClassification.ts:34 zeroShotImageClassification ▸ zeroShotImageClassification ( args , options? ): Promise \< ZeroShotImageClassificationOutput > Classify an image to specified classes. Recommended model: openai/clip-vit-large-patch14-336 Parameters Name Type args ZeroShotImageClassificationArgs options? 
Options Returns Promise \< ZeroShotImageClassificationOutput > Defined in inference/src/tasks/cv/zeroShotImageClassification.ts:33
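To make the reference above concrete, here is a brief, hedged usage sketch showing how these task methods are typically called through the HfInference client. The access-token placeholder and the model IDs are illustrative choices, not requirements of the library:

Copied
import { HfInference } from "@huggingface/inference";

// Placeholder token; many public models can also be queried without one.
const hf = new HfInference("hf_xxx");

// textClassification resolves to an array of { label, score } objects
const sentiment = await hf.textClassification({
  model: "distilbert-base-uncased-finetuned-sst-2-english",
  inputs: "I like you. I love you.",
});
console.log(sentiment);

// chatCompletionStream is an async generator that yields tokens as they arrive
for await (const chunk of hf.chatCompletionStream({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [{ role: "user", content: "What is the capital of France?" }],
  max_tokens: 64,
})) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

The same task methods are also available on HfInferenceEndpoint, which targets a dedicated endpoint URL instead of the serverless API.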
Server-side_Audio_Processing_in_Node.js.txt
Server-side Audio Processing in Node.js

A major benefit of writing code for the web is that you can access the multitude of APIs that are available in modern browsers. Unfortunately, when writing server-side code, we are not afforded such luxury, so we have to find another way. In this tutorial, we will design a simple Node.js application that uses Transformers.js for speech recognition with Whisper, and in the process, learn how to process audio on the server.

The main problem we need to solve is that the Web Audio API is not available in Node.js, meaning we can’t use the AudioContext class to process audio. So, we will need to install third-party libraries to obtain the raw audio data. For this example, we will only consider .wav files, but the same principles apply to other audio formats. This tutorial will be written as an ES module, but you can easily adapt it to use CommonJS instead. For more information, see the node tutorial.

Useful links: Source code Documentation

Prerequisites
Node.js version 18+
npm version 9+

Getting started
Let’s start by creating a new Node.js project and installing Transformers.js via NPM:

Copied
npm init -y
npm i @huggingface/transformers

Remember to add "type": "module" to your package.json to indicate that your project uses ECMAScript modules. Next, let’s install the wavefile package, which we will use for loading .wav files:

Copied
npm i wavefile

Creating the application
Start by creating a new file called index.js, which will be the entry point for our application. Let’s also import the necessary modules:

Copied
import { pipeline } from '@huggingface/transformers';
import wavefile from 'wavefile';

For this tutorial, we will use the Xenova/whisper-tiny.en model, but feel free to choose one of the other whisper models from the Hugging Face Hub.
Let’s create our pipeline with:

Copied
let transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

Next, let’s load an audio file and convert it to the format required by Transformers.js:

Copied
// Load audio data
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav';
let buffer = Buffer.from(await fetch(url).then(x => x.arrayBuffer()));

// Read .wav file and convert it to required format
let wav = new wavefile.WaveFile(buffer);
wav.toBitDepth('32f'); // Pipeline expects input as a Float32Array
wav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000
let audioData = wav.getSamples();
if (Array.isArray(audioData)) {
  if (audioData.length > 1) {
    const SCALING_FACTOR = Math.sqrt(2);

    // Merge channels (into first channel to save memory)
    for (let i = 0; i < audioData[0].length; ++i) {
      audioData[0][i] = SCALING_FACTOR * (audioData[0][i] + audioData[1][i]) / 2;
    }
  }

  // Select first channel
  audioData = audioData[0];
}

Finally, let’s run the model and measure execution duration.

Copied
let start = performance.now();
let output = await transcriber(audioData);
let end = performance.now();
console.log(`Execution duration: ${(end - start) / 1000} seconds`);
console.log(output);

You can now run the application with node index.js. Note that when running the script for the first time, it may take a while to download and cache the model. Subsequent requests will use the cached model, and model loading will be much faster. You should see output similar to:

Copied
Execution duration: 0.6460317999720574 seconds
{ text: ' And so my fellow Americans ask not what your country can do for you. Ask what you can do for your country.' }

That’s it! You’ve successfully created a Node.js application that uses Transformers.js for speech recognition with Whisper. You can now use this as a starting point for your own applications.
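If your audio is stored on disk rather than fetched over HTTP, the same conversion steps apply. Below is a minimal, hedged sketch that assumes a local file named audio.wav (not part of the tutorial’s assets) and reuses the transcriber pipeline created above:

Copied
import { readFileSync } from 'node:fs';
import wavefile from 'wavefile';

// Read the .wav file from disk instead of fetching it over HTTP
const wav = new wavefile.WaveFile(readFileSync('audio.wav'));
wav.toBitDepth('32f');   // pipeline expects Float32Array samples
wav.toSampleRate(16000); // Whisper expects 16 kHz audio

let audioData = wav.getSamples();
if (Array.isArray(audioData)) {
  audioData = audioData[0]; // keep the first channel for simplicity
}

const output = await transcriber(audioData);
console.log(output);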
Splits_and_subsets.txt
Splits and subsets

Machine learning datasets are commonly organized in splits and they may also have subsets (also called configurations). These internal structures provide the scaffolding for building out a dataset, and determine how a dataset should be split and organized. Understanding a dataset’s structure can help you create your own dataset, and know which subset of data you should use during model training and evaluation.

Splits
Every processed and cleaned dataset contains splits, specific parts of the data reserved for specific needs. The most common splits are:
train: data used to train a model; this data is exposed to the model
validation: data reserved for evaluation and improving model hyperparameters; this data is hidden from the model
test: data reserved for evaluation only; this data is completely hidden from the model and ourselves
The validation and test sets are especially important to ensure a model is actually learning instead of overfitting, or just memorizing the data.

Subsets
A subset (also called a configuration) is a higher-level internal structure than a split, and a subset contains splits. You can think of a subset as a sub-dataset contained within a larger dataset. It is a useful structure for adding additional layers of organization to a dataset. For example, if you take a look at the Multilingual LibriSpeech (MLS) dataset, you’ll notice there are eight different languages. While you can create a dataset containing all eight languages, it’s probably neater to create a dataset with each language as a subset. This way, users can instantly load a dataset with their language of interest instead of preprocessing the dataset to filter for a specific language.

Subsets are flexible, and can be used to organize a dataset along whatever objective you’d like. For example, the SceneParse150 dataset uses subsets to organize the dataset by task. One subset is dedicated to segmenting the whole image, while the other subset is for instance segmentation.
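To see how splits and subsets surface in practice, the sketch below uses the 🤗 Datasets library to list a dataset’s subsets and splits and then load one of them. The dataset and subset names are illustrative; check the dataset card for the exact configuration names:

Copied
from datasets import get_dataset_config_names, get_dataset_split_names, load_dataset

# Subsets (configurations), e.g. one per language for a multilingual dataset
print(get_dataset_config_names("facebook/multilingual_librispeech"))

# Splits available inside one subset
print(get_dataset_split_names("facebook/multilingual_librispeech", "german"))

# Load only the split you need from the subset of interest
dataset = load_dataset("facebook/multilingual_librispeech", "german", split="train")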
Load_text_data.txt
Load text data

This guide shows you how to load text datasets. To learn how to load any type of dataset, take a look at the general loading guide. Text files are one of the most common file types for storing a dataset. By default, 🤗 Datasets samples a text file line by line to build the dataset.
Copied
>>> from datasets import load_dataset
>>> dataset = load_dataset("text", data_files={"train": ["my_text_1.txt", "my_text_2.txt"], "test": "my_test_file.txt"})
# Load from a directory
>>> dataset = load_dataset("text", data_dir="path/to/text/dataset")

To sample a text file by paragraph or even an entire document, use the sample_by parameter:

Copied
# Sample by paragraph
>>> dataset = load_dataset("text", data_files={"train": "my_train_file.txt", "test": "my_test_file.txt"}, sample_by="paragraph")
# Sample by document
>>> dataset = load_dataset("text", data_files={"train": "my_train_file.txt", "test": "my_test_file.txt"}, sample_by="document")

You can also use grep patterns to load specific files:

Copied
>>> from datasets import load_dataset
>>> c4_subset = load_dataset("allenai/c4", data_files="en/c4-train.0000*-of-01024.json.gz")

To load remote text files via HTTP, pass the URLs instead:

Copied
>>> dataset = load_dataset("text", data_files="https://huggingface.co/datasets/lhoestq/test/resolve/main/some_text.txt")

To load XML data you can use the "xml" loader, which is equivalent to "text" with sample_by="document":

Copied
>>> from datasets import load_dataset
>>> dataset = load_dataset("xml", data_files={"train": ["my_xml_1.xml", "my_xml_2.xml"], "test": "my_xml_file.xml"})
# Load from a directory
>>> dataset = load_dataset("xml", data_dir="path/to/xml/dataset")
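Whichever sampling strategy you choose, the loader returns a DatasetDict whose rows expose a single text column. As a quick, hedged sketch of inspecting the result (assuming my_text_1.txt from the examples above exists locally):

Copied
>>> from datasets import load_dataset
>>> dataset = load_dataset("text", data_files={"train": "my_text_1.txt"})
>>> dataset["train"].column_names
['text']
>>> dataset["train"][0]  # the first sampled line of the file
{'text': '...'}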
Text-generation-launcher_arguments.txt
Text-generation-launcher arguments

Copied
Text Generation Launcher
Usage: text-generation-launcher [OPTIONS]
Options:

MODEL_ID
Copied --model-id <MODEL_ID>
The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by `save_pretrained(...)` methods of transformers [env: MODEL_ID=] [default: bigscience/bloom-560m]

REVISION
Copied --revision <REVISION>
The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2` [env: REVISION=]

VALIDATION_WORKERS
Copied --validation-workers <VALIDATION_WORKERS>
The number of tokenizer workers used for payload validation and truncation inside the router [env: VALIDATION_WORKERS=] [default: 2]

SHARDED
Copied --sharded <SHARDED>
Whether to shard the model across multiple GPUs. By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard` [env: SHARDED=] [possible values: true, false]

NUM_SHARD
Copied --num-shard <NUM_SHARD>
The number of shards to use if you don't want to use all GPUs on a given machine. You can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num_shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num_shard 2` to launch 2 copies with 2 shards each on a given machine with 4 GPUs for instance [env: NUM_SHARD=]

QUANTIZE
Copied --quantize <QUANTIZE>
Quantization method to use for the model.
It is not necessary to specify this option for pre-quantized models, since the quantization method is read from the model configuration. Marlin kernels will be used automatically for GPTQ/AWQ models. [env: QUANTIZE=] Possible values: - awq: 4 bit quantization. Requires a specific AWQ quantized model: <https://hf.co/models?search=awq>. Should replace GPTQ models wherever possible because of the better latency - compressed-tensors: Compressed tensors, which can be a mixture of different quantization methods - eetq: 8 bit quantization, doesn't require specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from <https://github.com/NetEase-FuXi/EETQ.git> - exl2: Variable bit quantization. Requires a specific EXL2 quantized model: <https://hf.co/models?search=exl2>. Requires exllama2 kernels and does not support tensor parallelism (num_shard > 1) - gptq: 4 bit quantization. Requires a specific GTPQ quantized model: <https://hf.co/models?search=gptq>. text-generation-inference will use exllama (faster) kernels wherever possible, and use triton kernel (wider support) when it's not. AWQ has faster kernels - marlin: 4 bit quantization. Requires a specific Marlin quantized model: <https://hf.co/models?search=marlin> - bitsandbytes: Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16 - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16 - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for you model - fp8: [FP8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/) (e4m3) works on H100 and above This dtype has native ops should be the fastest if available. This is currently not the fastest because of local unpacking + padding to satisfy matrix multiplication limitations SPECULATE Copied --speculate <SPECULATE> The number of input_ids to speculate on If using a medusa model, the heads will be picked up automatically Other wise, it will use n-gram speculation which is relatively free in terms of compute, but the speedup heavily depends on the task [env: SPECULATE=] DTYPE Copied --dtype <DTYPE> The dtype to be forced upon the model. This option cannot be used with `--quantize` [env: DTYPE=] [possible values: float16, bfloat16] KV_CACHE_DTYPE Copied --kv-cache-dtype <KV_CACHE_DTYPE> Specify the dtype for the key-value cache. When this option is not provided, the dtype of the model is used (typically `float16` or `bfloat16`). Currently the only supported value are `fp8_e4m3fn` and `fp8_e5m2` on CUDA [env: KV_CACHE_DTYPE=] [possible values: fp8_e4m3fn, fp8_e5m2] TRUST_REMOTE_CODE Copied --trust-remote-code Whether you want to execute hub modelling code. Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision [env: TRUST_REMOTE_CODE=] MAX_CONCURRENT_REQUESTS Copied --max-concurrent-requests <MAX_CONCURRENT_REQUESTS> The maximum amount of concurrent requests for this particular deployment. 
Having a low limit will refuse clients requests instead of having them wait for too long and is usually good to handle backpressure correctly [env: MAX_CONCURRENT_REQUESTS=] [default: 128] MAX_BEST_OF Copied --max-best-of <MAX_BEST_OF> This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and return the best in terms of overall log probability over the entire generated sequence [env: MAX_BEST_OF=] [default: 2] MAX_STOP_SEQUENCES Copied --max-stop-sequences <MAX_STOP_SEQUENCES> This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt [env: MAX_STOP_SEQUENCES=] [default: 4] MAX_TOP_N_TOKENS Copied --max-top-n-tokens <MAX_TOP_N_TOKENS> This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like for classification or ranking [env: MAX_TOP_N_TOKENS=] [default: 5] MAX_INPUT_TOKENS Copied --max-input-tokens <MAX_INPUT_TOKENS> This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompt users can send which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequence they can handle. Default to min(max_allocatable, max_position_embeddings) - 1 [env: MAX_INPUT_TOKENS=] MAX_INPUT_LENGTH Copied --max-input-length <MAX_INPUT_LENGTH> Legacy version of [`Args::max_input_tokens`] [env: MAX_INPUT_LENGTH=] MAX_TOTAL_TOKENS Copied --max-total-tokens <MAX_TOTAL_TOKENS> This is the most important value to set as it defines the "memory budget" of running clients requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. with a value of `1512` users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the larger amount each request will be in your RAM and the less effective batching can be. Default to min(max_allocatable, max_position_embeddings) [env: MAX_TOTAL_TOKENS=] WAITING_SERVED_RATIO Copied --waiting-served-ratio <WAITING_SERVED_RATIO> This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` Means when 12 queries are waiting and there's only 10 queries left in the current batch we check if we can fit those 12 waiting queries into the batching strategy, and if yes, then batching happens delaying the 10 running queries by a `prefill` run. This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`. [env: WAITING_SERVED_RATIO=] [default: 0.3] MAX_BATCH_PREFILL_TOKENS Copied --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS> Limits the number of tokens for the prefill operation. Since this operation take the most memory and is compute bound, it is interesting to limit the number of requests that can be sent. 
Default to `max_input_tokens + 50` to give a bit of room [env: MAX_BATCH_PREFILL_TOKENS=] MAX_BATCH_TOTAL_TOKENS Copied --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS> **IMPORTANT** This is one critical control to allow maximum usage of the available hardware. This represents the total amount of potential tokens within a batch. When using padding (not recommended) this would be equivalent of `batch_size` * `max_total_tokens`. However in the non-padded (flash attention) version this can be much finer. For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens. Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters like if you're using quantization, flash attention or the model implementation, text-generation-inference cannot infer this number automatically. [env: MAX_BATCH_TOTAL_TOKENS=] MAX_WAITING_TOKENS Copied --max-waiting-tokens <MAX_WAITING_TOKENS> This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch. With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot. With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy that means that requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s. This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users. [env: MAX_WAITING_TOKENS=] [default: 20] MAX_BATCH_SIZE Copied --max-batch-size <MAX_BATCH_SIZE> Enforce a maximum number of requests per batch Specific flag for hardware targets that do not support unpadded inference [env: MAX_BATCH_SIZE=] CUDA_GRAPHS Copied --cuda-graphs <CUDA_GRAPHS> Specify the batch sizes to compute cuda graphs for. Use "0" to disable. Default = "1,2,4,8,16,32" [env: CUDA_GRAPHS=] HOSTNAME Copied --hostname <HOSTNAME> The IP address to listen on [env: HOSTNAME=] [default: 0.0.0.0] PORT Copied -p, --port <PORT> The port to listen on [env: PORT=] [default: 3000] SHARD_UDS_PATH Copied --shard-uds-path <SHARD_UDS_PATH> The name of the socket for gRPC communication between the webserver and the shards [env: SHARD_UDS_PATH=] [default: /tmp/text-generation-server] MASTER_ADDR Copied --master-addr <MASTER_ADDR> The address the master shard will listen on. (setting used by torch distributed) [env: MASTER_ADDR=] [default: localhost] MASTER_PORT Copied --master-port <MASTER_PORT> The address the master port will listen on. (setting used by torch distributed) [env: MASTER_PORT=] [default: 29500] HUGGINGFACE_HUB_CACHE Copied --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE> The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance [env: HUGGINGFACE_HUB_CACHE=] WEIGHTS_CACHE_OVERRIDE Copied --weights-cache-override <WEIGHTS_CACHE_OVERRIDE> The location of the huggingface hub cache. 
Used to override the location if you want to provide a mounted disk for instance [env: WEIGHTS_CACHE_OVERRIDE=] DISABLE_CUSTOM_KERNELS Copied --disable-custom-kernels For some models (like bloom), text-generation-inference implemented custom cuda kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues [env: DISABLE_CUSTOM_KERNELS=] CUDA_MEMORY_FRACTION Copied --cuda-memory-fraction <CUDA_MEMORY_FRACTION> Limit the CUDA available memory. The allowed value equals the total visible memory multiplied by cuda-memory-fraction [env: CUDA_MEMORY_FRACTION=] [default: 1.0] ROPE_SCALING Copied --rope-scaling <ROPE_SCALING> Rope scaling will only be used for RoPE models and allow rescaling the position rotary to accomodate for larger prompts. Goes together with `rope_factor`. `--rope-factor 2.0` gives linear scaling with a factor of 2.0 `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0 `--rope-scaling linear` gives linear scaling with a factor of 1.0 (Nothing will be changed basically) `--rope-scaling linear --rope-factor` fully describes the scaling you want [env: ROPE_SCALING=] [possible values: linear, dynamic] ROPE_FACTOR Copied --rope-factor <ROPE_FACTOR> Rope scaling will only be used for RoPE models See `rope_scaling` [env: ROPE_FACTOR=] JSON_OUTPUT Copied --json-output Outputs the logs in JSON format (useful for telemetry) [env: JSON_OUTPUT=] OTLP_ENDPOINT Copied --otlp-endpoint <OTLP_ENDPOINT> [env: OTLP_ENDPOINT=] OTLP_SERVICE_NAME Copied --otlp-service-name <OTLP_SERVICE_NAME> [env: OTLP_SERVICE_NAME=] [default: text-generation-inference.router] CORS_ALLOW_ORIGIN Copied --cors-allow-origin <CORS_ALLOW_ORIGIN> [env: CORS_ALLOW_ORIGIN=] API_KEY Copied --api-key <API_KEY> [env: API_KEY=] WATERMARK_GAMMA Copied --watermark-gamma <WATERMARK_GAMMA> [env: WATERMARK_GAMMA=] WATERMARK_DELTA Copied --watermark-delta <WATERMARK_DELTA> [env: WATERMARK_DELTA=] NGROK Copied --ngrok Enable ngrok tunneling [env: NGROK=] NGROK_AUTHTOKEN Copied --ngrok-authtoken <NGROK_AUTHTOKEN> ngrok authentication token [env: NGROK_AUTHTOKEN=] NGROK_EDGE Copied --ngrok-edge <NGROK_EDGE> ngrok edge [env: NGROK_EDGE=] TOKENIZER_CONFIG_PATH Copied --tokenizer-config-path <TOKENIZER_CONFIG_PATH> The path to the tokenizer config file. This path is used to load the tokenizer configuration which may include a `chat_template`. If not provided, the default config will be used from the model hub [env: TOKENIZER_CONFIG_PATH=] DISABLE_GRAMMAR_SUPPORT Copied --disable-grammar-support Disable outlines grammar constrained generation. This is a feature that allows you to generate text that follows a specific grammar [env: DISABLE_GRAMMAR_SUPPORT=] ENV Copied -e, --env Display a lot of information about your runtime environment MAX_CLIENT_BATCH_SIZE Copied --max-client-batch-size <MAX_CLIENT_BATCH_SIZE> Control the maximum number of inputs that a client can send in a single request [env: MAX_CLIENT_BATCH_SIZE=] [default: 4] LORA_ADAPTERS Copied --lora-adapters <LORA_ADAPTERS> Lora Adapters a list of adapter ids i.e. `repo/adapter1,repo/adapter2` to load during startup that will be available to callers via the `adapter_id` field in a request [env: LORA_ADAPTERS=] USAGE_STATS Copied --usage-stats <USAGE_STATS> Control if anonymous usage stats are collected. 
Options are "on", "off" and "no-stack" Defaul is on [env: USAGE_STATS=] [default: on] Possible values: - on: Default option, usage statistics are collected anonymously - off: Disables all collection of usage statistics - no-stack: Doesn't send the error stack trace or error type, but allows sending a crash event PAYLOAD_LIMIT Copied --payload-limit <PAYLOAD_LIMIT> Payload size limit in bytes Default is 2MB [env: PAYLOAD_LIMIT=] [default: 2000000] ENABLE_PREFILL_LOGPROBS Copied --enable-prefill-logprobs Enables prefill logprobs Logprobs in the prompt are deactivated by default because they consume a large amount of VRAM (especially for long prompts). Using this flag reallows users to ask for them. [env: ENABLE_PREFILL_LOGPROBS=] HELP Copied -h, --help Print help (see a summary with '-h') VERSION Copied -V, --version Print version < > Update on GitHub ← TensorRT-LLM Exported Metrics → Text-generation-launcher arguments MODE L_ID REVISION VALIDATIO N_WORKERS SHARDED NU M_SHARD QUANTIZE SPECULATE DTYPE K V_CACH E_DTYPE TRUS T_REMOT E_CODE MA X_CONCURREN T_REQUESTS MA X_BES T_OF MA X_STO P_SEQUENCES MA X_TO P_ N_TOKENS MA X_INPU T_TOKENS MA X_INPU T_LENGTH MA X_TOTA L_TOKENS WAITIN G_SERVE D_RATIO MA X_BATC H_PREFIL L_TOKENS MA X_BATC H_TOTA L_TOKENS MA X_WAITIN G_TOKENS MA X_BATC H_SIZE CUD A_GRAPHS HOSTNAME PORT SHAR D_UD S_PATH MASTE R_ADDR MASTE R_PORT HUGGINGFAC E_HU B_CACHE WEIGHT S_CACH E_OVERRIDE DISABL E_CUSTO M_KERNELS CUD A_MEMOR Y_FRACTION ROP E_SCALING ROP E_FACTOR JSO N_OUTPUT OTL P_ENDPOINT OTL P_SERVIC E_NAME COR S_ALLO W_ORIGIN AP I_KEY WATERMAR K_GAMMA WATERMAR K_DELTA NGROK NGRO K_AUTHTOKEN NGRO K_EDGE TOKENIZE R_CONFI G_PATH DISABL E_GRAMMA R_SUPPORT ENV MA X_CLIEN T_BATC H_SIZE LOR A_ADAPTERS USAG E_STATS PAYLOA D_LIMIT ENABL E_PREFIL L_LOGPROBS HELP VERSION
How_to_configure_SAML_SSO_with_Azure.txt
How to configure SAML SSO with Azure

In this guide, we will use Azure as the SSO provider, with the Security Assertion Markup Language (SAML) protocol as our preferred identity protocol. We currently support SP-initiated and IdP-initiated authentication. User provisioning is not yet supported. This feature is part of the Enterprise Hub.

Step 1: Create a new application in your Identity Provider
Open a new tab/window in your browser and sign in to the Azure portal of your organization. Navigate to “Enterprise applications” and click the “New application” button. You’ll be redirected to this page; click on “Create your own application”, fill in the name of your application, and then “Create” the application. Then select “Single Sign-On”, and select SAML.

Step 2: Configure your application on Azure
Open a new tab/window in your browser and navigate to the SSO section of your organization’s settings. Select the SAML protocol. Copy the “SP Entity Id” from the organization’s settings on Hugging Face, and paste it in the “Identifier (Entity Id)” field on Azure (1).
Copy the “Assertion Consumer Service URL” from the organization’s settings on Hugging Face, and paste it in the “Reply URL” field on Azure (2). The URL looks like this: https://huggingface.co/organizations/[organizationIdentifier]/saml/consume . Then under “SAML Certificates”, verify that “Signing Option” is set to “Sign SAML response and assertion”. Save your new application. Step 3: Finalize configuration on Hugging Face In your Azure application, under “Set up”, find the following field: Login Url And under “SAML Certificates”: Download the “Certificate (base64)” You will need both to finalize the SSO setup on Hugging Face. In the SSO section of your organization’s settings, copy-paste these values from Azure: Login Url -> Sign-on URL Certificate -> Public certificate The public certificate must have the following format: Copied ----- BEGIN CERTIFICATE ----- {certificate} ----- END CERTIFICATE ----- You can now click on “Update and Test SAML configuration” to save the settings. You should be redirected to your SSO provider (IdP) login prompt. Once logged in, you’ll be redirected to your organization’s settings page. A green check mark near the SAML selector will attest that the test was successful. Step 4: Enable SSO in your organization Now that Single Sign-On is configured and tested, you can enable it for members of your organization by clicking on the “Enable” button. Once enabled, members of your organization must complete the SSO authentication flow described in How does it work? .
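If you want to sanity-check the downloaded certificate file before pasting it into the SSO settings, a small illustrative helper such as the one below can verify that it already carries the expected PEM framing. This helper and the file name are not part of the Hugging Face or Azure tooling; they are only an assumption for illustration.
Copied
from pathlib import Path

def looks_like_pem(path: str) -> bool:
    # True if the file starts and ends with the standard PEM certificate markers
    text = Path(path).read_text().strip()
    return text.startswith("-----BEGIN CERTIFICATE-----") and text.endswith("-----END CERTIFICATE-----")

print(looks_like_pem("azure_certificate_base64.cer"))  # expect True before pasting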
Audit_Logs.txt
Audit Logs This feature is part of the Enterprise Hub . Audit Logs enable organization admins to easily review actions taken by members, including organization membership, repository settings and billing changes. Accessing Audit Logs Audit Logs are accessible through your organization settings. Each log entry includes: Who performed the action What type of action was taken A description of the change Location and anonymized IP address Date and time of the action You can also download the complete audit log as a JSON file for further analysis. What Events Are Tracked?
Organization Management & Security
- Core organization changes: creation, deletion, and restoration; name changes and settings updates
- Security management: security token rotation; token approval system (enabling/disabling, authorization requests, approvals, denials, revocations); SSO events (logins and joins)
Membership and Access Control
- Member lifecycle: invitations (sending, accepting) and automatic joins; adding and removing members; role changes and departures
- Join settings: domain-based access; automatic join configurations
Content and Resource Management
- Repository administration: core actions (creation, deletion, moving, duplication); settings and configuration changes; enabling/disabling repositories; DOI management; resource group assignments
- Collections: creation and deletion events
- Repository security: secrets management (individual and bulk); variables handling (individual and bulk)
- Spaces configuration: storage modifications; hardware settings; sleep time adjustments
Billing and AWS Integration
- Payment management: payment methods (adding/removing); customer account creation; AWS integration setup and removal
- Subscription lifecycle: starting and renewing; updates and cancellations; cancellation reversals
Resource Groups
- Administrative actions: creation and deletion; settings modifications
- Member management: adding and removing users; role assignments and changes
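As mentioned above, the full audit log can be downloaded as a JSON file. The sketch below shows one way to summarize such an export in Python; the exact schema is not documented here, so the field names ("events", "action") are assumptions to adjust to whatever keys your file actually contains.
Copied
import json
from collections import Counter

with open("audit_log.json") as f:   # file downloaded from the organization settings
    data = json.load(f)

# Some exports may nest the entries under a key; adjust to your file's layout.
events = data if isinstance(data, list) else data.get("events", [])

# Count how many events of each action type were recorded
by_action = Counter(event.get("action", "unknown") for event in events)
for action, count in by_action.most_common(10):
    print(f"{action}: {count}")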
Share_a_dataset_using_the_CLI.txt
Share a dataset using the CLI At Hugging Face, we are on a mission to democratize good Machine Learning and we believe in the value of open source. That’s why we designed 🤗 Datasets so that anyone can share a dataset with the greater ML community. There are currently thousands of datasets in over 100 languages in the Hugging Face Hub, and the Hugging Face team always welcomes new contributions! Dataset repositories offer features such as: Free dataset hosting Dataset versioning Commit history and diffs Metadata for discoverability Dataset cards for documentation, licensing, limitations, etc. Dataset Viewer This guide will show you how to share a dataset folder or repository that can be easily accessed by anyone. Add a dataset You can share your dataset with the community with a dataset repository on the Hugging Face Hub. It can also be a private dataset if you want to control who has access to it. In a dataset repository, you can host all your data files and configure your dataset to define which file goes to which split. The following formats are supported: CSV, TSV, JSON, JSON lines, text, Parquet, Arrow, SQLite, WebDataset. Many kinds of compressed file types are also supported: GZ, BZ2, LZ4, LZMA or ZSTD.
For example, your dataset can be made of .json.gz files. On the other hand, if your dataset is not in a supported format or if you want more control over how your dataset is loaded, you can write your own dataset script. Note that some features are not available for datasets defined using a loading script, such as the Dataset Viewer. Users also have to pass trust_remote_code=True to load the dataset. It is generally recommended for datasets to not rely on a loading script if possible, to benefit from all the Hub’s features. When loading a dataset from the Hub, all the files in the supported formats are loaded, following the repository structure . However, if there’s a dataset script, it is downloaded and executed to download and prepare the dataset instead. For more information on how to load a dataset from the Hub, take a look at the load a dataset from the Hub tutorial. Create the repository Sharing a community dataset will require you to create an account on hf.co if you don’t have one yet. You can directly create a new dataset repository from your account on the Hugging Face Hub, but this guide will show you how to upload a dataset from the terminal. Make sure you are in the virtual environment where you installed Datasets, and run the following command: Copied huggingface-cli login Log in using your Hugging Face Hub credentials, and create a new dataset repository: Copied huggingface-cli repo create my-cool-dataset --type dataset Add the --organization flag to create a repository under a specific organization: Copied huggingface-cli repo create my-cool-dataset --type dataset --organization your-org-name Prepare your files Check your directory to ensure the only files you’re uploading are: The data files of the dataset The dataset card README.md (optional) your_dataset_name.py , your dataset loading script (optional if your data files are already in the supported formats csv/jsonl/json/parquet/txt). To create a dataset script, see the dataset script page. Note that some features are not available for datasets defined using a loading script, such as the Dataset Viewer. Users also have to pass trust_remote_code=True to load the dataset. It is generally recommended for datasets to not rely on a loading script if possible, to benefit from all the Hub’s features. huggingface-cli upload Use the huggingface-cli upload command to upload files to the Hub directly. Internally, it uses the same upload_file and upload_folder helpers described in the Upload guide . In the examples below, we will walk through the most common use cases. For a full list of available options, you can run: Copied >>> huggingface-cli upload --help For more general information about huggingface-cli you can check the CLI guide . Upload an entire folder The default usage for this command is: Copied # Usage: huggingface-cli upload [dataset_repo_id] [local_path] [path_in_repo] --repo-type dataset To upload the current directory at the root of the repo, use: Copied >>> huggingface-cli upload my-cool-dataset . . --repo-type dataset https://huggingface.co/datasets/Wauplin/my-cool-dataset/tree/main/ If the repo doesn’t exist yet, it will be created automatically. You can also upload a specific folder: Copied >>> huggingface-cli upload my-cool-dataset ./data .
--repo-type dataset https://huggingface.co/datasets/Wauplin/my-cool-dataset/tree/main/ Finally, you can upload a folder to a specific destination on the repo: Copied >>> huggingface-cli upload my-cool-dataset ./path/to/curated/data /data/train --repo-type dataset https://huggingface.co/datasets/Wauplin/my-cool-dataset/tree/main/data/train Upload a single file You can also upload a single file by setting local_path to point to a file on your machine. If that’s the case, path_in_repo is optional and will default to the name of your local file: Copied >>> huggingface-cli upload Wauplin/my-cool-dataset ./files/train.csv --repo-type dataset https://huggingface.co/datasets/Wauplin/my-cool-dataset/blob/main/train.csv If you want to upload a single file to a specific directory, set path_in_repo accordingly: Copied >>> huggingface-cli upload Wauplin/my-cool-dataset ./files/train.csv /data/train.csv --repo-type dataset https://huggingface.co/datasets/Wauplin/my-cool-dataset/blob/main/data/train.csv Upload multiple files To upload multiple files from a folder at once without uploading the entire folder, use the --include and --exclude patterns. It can also be combined with the --delete option to delete files on the repo while uploading new ones. In the example below, we sync the local dataset by deleting remote files and uploading all CSV files: Copied # Sync local dataset with Hub (upload new CSV files, delete removed files) >>> huggingface-cli upload Wauplin/my-cool-dataset --repo-type dataset --include= "/data/*.csv" --delete= "*" --commit-message= "Sync local dataset with Hub" ... Upload to an organization To upload content to a repo owned by an organization instead of a personal repo, you must explicitly specify it in the repo_id : Copied >>> huggingface-cli upload MyCoolOrganization/my-cool-dataset . . --repo-type dataset https://huggingface.co/datasets/MyCoolOrganization/my-cool-dataset/tree/main/ Upload to a specific revision By default, files are uploaded to the main branch. If you want to upload files to another branch or reference, use the --revision option: Copied # Upload files to a PR huggingface-cli upload bigcode/the-stack . . --repo-type dataset --revision refs/pr/104 ... Note: if revision does not exist and --create-pr is not set, a branch will be created automatically from the main branch. Upload and create a PR If you don’t have the permission to push to a repo, you must open a PR and let the authors know about the changes you want to make. This can be done by setting the --create-pr option: Copied # Create a PR and upload the files to it >>> huggingface-cli upload bigcode/the-stack --repo-type dataset --revision refs/pr/104 --create-pr . . https://huggingface.co/datasets/bigcode/the-stack/blob/refs%2Fpr%2F104/ Upload at regular intervals In some cases, you might want to push regular updates to a repo. For example, this is useful if your dataset is growing over time and you want to upload the data folder every 10 minutes. You can do this using the --every option: Copied # Upload new logs every 10 minutes huggingface-cli upload my-cool-dynamic-dataset data/ --every=10 Specify a commit message Use the --commit-message and --commit-description options to set a custom message and description for your commit instead of the default ones: Copied >>> huggingface-cli upload Wauplin/my-cool-dataset ./data . --repo-type dataset --commit-message= "Version 2" --commit-description= "Train size: 4321. Check Dataset Viewer for more details." ...
https://huggingface.co/datasets/Wauplin/my-cool-dataset/tree/main Specify a token To upload files, you must use a token. By default, the token saved locally (using huggingface-cli login ) will be used. If you want to authenticate explicitly, use the --token option: Copied >>> huggingface-cli upload Wauplin/my-cool-dataset ./data . --repo-type dataset --token=hf_**** ... https://huggingface.co/datasets/Wauplin/my-cool-dataset/tree/main Quiet mode By default, the huggingface-cli upload command will be verbose. It will print details such as warning messages, information about the uploaded files, and progress bars. If you want to silence all of this, use the --quiet option. Only the last line (i.e. the URL to the uploaded files) is printed. This can prove useful if you want to pass the output to another command in a script. Copied >>> huggingface-cli upload Wauplin/my-cool-dataset ./data . --repo-type dataset --quiet https://huggingface.co/datasets/Wauplin/my-cool-dataset/tree/main Enjoy! Congratulations, your dataset has now been uploaded to the Hugging Face Hub where anyone can load it in a single line of code! 🥳 Copied dataset = load_dataset( "Wauplin/my-cool-dataset" ) If your dataset is supported, it should also have a Dataset Viewer for everyone to explore the dataset content. Finally, don’t forget to enrich the dataset card to document your dataset and make it discoverable! Check out the Create a dataset card guide to learn more.
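Since huggingface-cli upload is built on the upload_file and upload_folder helpers from huggingface_hub mentioned earlier in this guide, the same workflow can also be scripted in Python. The sketch below reuses the placeholder repository id and paths from this guide and assumes you are already logged in.
Copied
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login` by default

# Create the dataset repository if it does not exist yet
api.create_repo("Wauplin/my-cool-dataset", repo_type="dataset", exist_ok=True)

# Upload a local folder to a specific destination in the repo
api.upload_folder(
    folder_path="./path/to/curated/data",
    path_in_repo="data/train",
    repo_id="Wauplin/my-cool-dataset",
    repo_type="dataset",
    commit_message="Version 2",
)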
Accelerator.txt
Accelerator The Accelerator is the main class for enabling distributed training on any type of training setup. Read the Add Accelerator to your code tutorial to learn more about how to add the Accelerator to your script. Accelerator class accelerate.
Accelerator < source > ( device_placement : bool = True split_batches : bool = <object object at 0x7f8a023d8f20> mixed_precision : PrecisionType | str | None = None gradient_accumulation_steps : int = 1 cpu : bool = False dataloader_config : DataLoaderConfiguration | None = None deepspeed_plugin : DeepSpeedPlugin | dict[str, DeepSpeedPlugin] | None = None fsdp_plugin : FullyShardedDataParallelPlugin | None = None megatron_lm_plugin : MegatronLMPlugin | None = None rng_types : list[str | RNGType] | None = None log_with : str | LoggerType | GeneralTracker | list[str | LoggerType | GeneralTracker] | None = None project_dir : str | os.PathLike | None = None project_config : ProjectConfiguration | None = None gradient_accumulation_plugin : GradientAccumulationPlugin | None = None step_scheduler_with_optimizer : bool = True kwargs_handlers : list[KwargsHandler] | None = None dynamo_backend : DynamoBackend | str | None = None dynamo_plugin : TorchDynamoPlugin | None = None deepspeed_plugins : DeepSpeedPlugin | dict[str, DeepSpeedPlugin] | None = None ) Parameters device_placement ( bool , optional , defaults to True ) — Whether or not the accelerator should put objects on device (tensors yielded by the dataloader, model, etc…). mixed_precision ( str , optional ) — Whether or not to use mixed precision training. Choose from ‘no’,‘fp16’,‘bf16’ or ‘fp8’. Will default to the value in the environment variable ACCELERATE_MIXED_PRECISION , which will use the default value in the accelerate config of the current system or the flag passed with the accelerate.launch command. ‘fp8’ requires the installation of transformers-engine. gradient_accumulation_steps ( int , optional , default to 1) — The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with Accelerator.accumulate . If not passed, will default to the value in the environment variable ACCELERATE_GRADIENT_ACCUMULATION_STEPS . Can also be configured through a GradientAccumulationPlugin . cpu ( bool , optional ) — Whether or not to force the script to execute on CPU. Will ignore GPU available if set to True and force the execution on one process only. dataloader_config ( DataLoaderConfiguration , optional ) — A configuration for how the dataloaders should be handled in distributed scenarios. deepspeed_plugin ( DeepSpeedPlugin or dict of str — DeepSpeedPlugin , optional ): Tweak your DeepSpeed related args using this argument. This argument is optional and can be configured directly using accelerate config . If using multiple plugins, use the configured key property of each plugin to access them from accelerator.state.get_deepspeed_plugin(key) . Alias for deepspeed_plugins . fsdp_plugin ( FullyShardedDataParallelPlugin , optional ) — Tweak your FSDP related args using this argument. This argument is optional and can be configured directly using accelerate config megatron_lm_plugin ( MegatronLMPlugin , optional ) — Tweak your MegatronLM related args using this argument. This argument is optional and can be configured directly using accelerate config rng_types (list of str or RNGType ) — The list of random number generators to synchronize at the beginning of each iteration in your prepared dataloaders. 
Should be one or several of: "torch" : the base torch random number generator "cuda" : the CUDA random number generator (GPU only) "xla" : the XLA random number generator (TPU only) "generator" : the torch.Generator of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type. Will default to ["torch"] for PyTorch versions <=1.5.1 and ["generator"] for PyTorch versions >= 1.6. log_with (list of str , LoggerType or GeneralTracker , optional ) — A list of loggers to be setup for experiment tracking. Should be one or several of: "all" "tensorboard" "wandb" "comet_ml" If "all" is selected, will pick up all available trackers in the environment and initialize them. Can also accept implementations of GeneralTracker for custom trackers, and can be combined with "all" . project_config ( ProjectConfiguration , optional ) — A configuration for how saving the state can be handled. project_dir ( str , os.PathLike , optional ) — A path to a directory for storing data such as logs of locally-compatible loggers and potentially saved checkpoints. step_scheduler_with_optimizer ( bool , optional , defaults to True ) — Set True if the learning rate scheduler is stepped at the same time as the optimizer, False if only done under certain circumstances (at the end of each epoch, for instance). kwargs_handlers (list of KwargsHandler , optional ) — A list of KwargsHandler to customize how the objects related to distributed training, profiling or mixed precision are created. See kwargs for more information. dynamo_backend ( str or DynamoBackend , optional , defaults to "no" ) — Set to one of the possible dynamo backends to optimize your training with torch dynamo. dynamo_plugin ( TorchDynamoPlugin , optional ) — A configuration for how torch dynamo should be handled, if more tweaking than just the backend or mode is needed. gradient_accumulation_plugin ( GradientAccumulationPlugin , optional ) — A configuration for how gradient accumulation should be handled, if more tweaking than just the gradient_accumulation_steps is needed. Creates an instance of an accelerator for distributed training (on multi-GPU, TPU) or mixed precision training. Available attributes: device ( torch.device ) — The device to use. distributed_type ( DistributedType ) — The distributed training configuration. local_process_index ( int ) — The process index on the current machine. mixed_precision ( str ) — The configured mixed precision mode. num_processes ( int ) — The total number of processes used for training. optimizer_step_was_skipped ( bool ) — Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which case the learning rate should not be changed. process_index ( int ) — The overall index of the current process among all processes. state ( AcceleratorState ) — The distributed setup state. sync_gradients ( bool ) — Whether the gradients are currently being synced across all processes. use_distributed ( bool ) — Whether the current configuration is for distributed training. accumulate < source > ( *models ) Parameters *models (list of torch.nn.Module ) — PyTorch Modules that were prepared with Accelerator.prepare . 
Models passed to accumulate() will skip gradient syncing during backward pass in distributed training A context manager that will lightly wrap around and perform gradient accumulation automatically Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps= 1 ) >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> for input , output in dataloader: ... with accelerator.accumulate(model): ... outputs = model( input ) ... loss = loss_func(outputs) ... loss.backward() ... optimizer.step() ... scheduler.step() ... optimizer.zero_grad() autocast < source > ( autocast_handler : AutocastKwargs = None ) Will apply automatic mixed-precision inside the block inside this context manager, if it is enabled. Nothing different will happen otherwise. A different autocast_handler can be passed in to override the one set in the Accelerator object. This is useful in blocks under autocast where you want to revert to fp32. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(mixed_precision= "fp16" ) >>> with accelerator.autocast(): ... train() backward < source > ( loss **kwargs ) Scales the gradients in accordance to the GradientAccumulationPlugin and calls the correct backward() based on the configuration. Should be used in lieu of loss.backward() . Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps= 2 ) >>> outputs = model(inputs) >>> loss = loss_fn(outputs, labels) >>> accelerator.backward(loss) check_trigger < source > ( ) Checks if the internal trigger tensor has been set to 1 in any of the processes. If so, will return True and reset the trigger tensor to 0. Note: Does not require wait_for_everyone() Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume later in the training script >>> # `should_do_breakpoint` is a custom function to monitor when to break, >>> # e.g. when the loss is NaN >>> if should_do_breakpoint(loss): ... accelerator.set_trigger() >>> # Assume later in the training script >>> if accelerator.check_trigger(): ... break clear < source > ( *objects ) Alias for Accelerate.free_memory , releases all references to the internal objects stored and call the garbage collector. You should call this method between two trainings with different models/optimizers. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer, scheduler = ... >>> model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler) >>> model, optimizer, scheduler = accelerator.clear(model, optimizer, scheduler) clip_grad_norm_ < source > ( parameters max_norm norm_type = 2 ) → torch.Tensor Returns torch.Tensor Total norm of the parameter gradients (viewed as a single vector). Should be used in place of torch.nn.utils.clip_grad_norm_ . Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps= 2 ) >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> for input , target in dataloader: ... optimizer.zero_grad() ... output = model( input ) ... loss = loss_func(output, target) ... accelerator.backward(loss) ... if accelerator.sync_gradients: ... accelerator.clip_grad_norm_(model.parameters(), max_grad_norm) ... 
optimizer.step() clip_grad_value_ < source > ( parameters clip_value ) Should be used in place of torch.nn.utils.clip_grad_value_ . Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps= 2 ) >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> for input , target in dataloader: ... optimizer.zero_grad() ... output = model( input ) ... loss = loss_func(output, target) ... accelerator.backward(loss) ... if accelerator.sync_gradients: ... accelerator.clip_grad_value_(model.parameters(), clip_value) ... optimizer.step() end_training < source > ( ) Runs any special end training behaviors, such as stopping trackers on the main process only or destoying process group. Should always be called at the end of your script if using experiment tracking. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(log_with= "tensorboard" ) >>> accelerator.init_trackers( "my_project" ) >>> # Do training >>> accelerator.end_training() free_memory < source > ( *objects ) Will release all references to the internal objects stored and call the garbage collector. You should call this method between two trainings with different models/optimizers. Also will reset Accelerator.step to 0. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer, scheduler = ... >>> model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler) >>> model, optimizer, scheduler = accelerator.free_memory(model, optimizer, scheduler) gather < source > ( tensor ) → torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor Parameters tensor ( torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor ) — The tensors to gather across all processes. Returns torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor The gathered tensor(s). Note that the first dimension of the result is num_processes multiplied by the first dimension of the input tensors. Gather the values in tensor across all processes and concatenate them on the first dimension. Useful to regroup the predictions from all processes when doing evaluation. Note: This gather happens in all processes. Example: Copied >>> # Assuming four processes >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> process_tensor = torch.tensor([accelerator.process_index]) >>> gathered_tensor = accelerator.gather(process_tensor) >>> gathered_tensor tensor([ 0 , 1 , 2 , 3 ]) gather_for_metrics < source > ( input_data use_gather_object = False ) Parameters input ( torch.Tensor , object , a nested tuple/list/dictionary of torch.Tensor , or a nested tuple/list/dictionary of object ) — The tensors or objects for calculating metrics across all processes use_gather_object( bool ) — Whether to forcibly use gather_object instead of gather (which is already done if all objects passed do not contain tensors). This flag can be useful for gathering tensors with different sizes that we don’t want to pad and concatenate along the first dimension. Using it with GPU tensors is not well supported and inefficient as it incurs GPU -> CPU transfer since tensors would be pickled. Gathers input_data and potentially drops duplicates in the last batch if on a distributed system. Should be used for gathering the inputs and targets for metric calculation. 
Example: Copied >>> # Assuming two processes, with a batch size of 5 on a dataset with 9 samples >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader = torch.utils.data.DataLoader( range ( 9 ), batch_size= 5 ) >>> dataloader = accelerator.prepare(dataloader) >>> batch = next ( iter (dataloader)) >>> gathered_items = accelerator.gather_for_metrics(batch) >>> len (gathered_items) 9 get_state_dict < source > ( model unwrap = True ) → dict Parameters model ( torch.nn.Module ) — A PyTorch model sent through Accelerator.prepare() unwrap ( bool , optional , defaults to True ) — Whether to return the original underlying state_dict of model or to return the wrapped state_dict Returns dict The state dictionary of the model potentially without full precision. Returns the state dictionary of a model sent through Accelerator.prepare() potentially without full precision. Example: Copied >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> net = torch.nn.Linear( 2 , 2 ) >>> net = accelerator.prepare(net) >>> state_dict = accelerator.get_state_dict(net) get_tracker < source > ( name : str unwrap : bool = False ) → GeneralTracker Parameters name ( str ) — The name of a tracker, corresponding to the .name property. unwrap ( bool ) — Whether to return the internal tracking mechanism or to return the wrapped tracker instead (recommended). Returns GeneralTracker The tracker corresponding to name if it exists. Returns a tracker from self.trackers based on name on the main process only. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(log_with= "tensorboard" ) >>> accelerator.init_trackers( "my_project" ) >>> tensorboard_tracker = accelerator.get_tracker( "tensorboard" ) join_uneven_inputs < source > ( joinables even_batches = None ) Parameters joinables ( list[torch.distributed.algorithms.Joinable] ) — A list of models or optimizers that subclass torch.distributed.algorithms.Joinable . Most commonly, a PyTorch Module that was prepared with Accelerator.prepare for DistributedDataParallel training. even_batches ( bool , optional ) — If set, this will override the value of even_batches set in the Accelerator . If it is not provided, the default Accelerator value wil be used. A context manager that facilitates distributed training or evaluation on uneven inputs, which acts as a wrapper around torch.distributed.algorithms.join . This is useful when the total batch size does not evenly divide the length of the dataset. join_uneven_inputs is only supported for Distributed Data Parallel training on multiple GPUs. For any other configuration, this method will have no effect. Overidding even_batches will not affect iterable-style data loaders. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator(even_batches= True ) >>> ddp_model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader) >>> with accelerator.join_uneven_inputs([ddp_model], even_batches= False ): ... for input , output in dataloader: ... outputs = model( input ) ... loss = loss_func(outputs) ... loss.backward() ... optimizer.step() ... optimizer.zero_grad() load_state < source > ( input_dir : str = None **load_model_func_kwargs ) Parameters input_dir ( str or os.PathLike ) — The name of the folder all relevant weights and states were saved in. Can be None if automatic_checkpoint_naming is used, and will pick up from the latest checkpoint. 
load_model_func_kwargs ( dict , optional ) — Additional keyword arguments for loading model which can be passed to the underlying load function, such as optional arguments for DeepSpeed’s load_checkpoint function or a map_location to load the model and optimizer on. Loads the current states of the model, optimizer, scaler, RNG generators, and registered objects. Should only be used in conjunction with Accelerator.save_state() . If a file is not registered for checkpointing, it will not be loaded if stored in the directory. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer, lr_scheduler = ... >>> model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler) >>> accelerator.load_state( "my_checkpoint" ) local_main_process_first < source > ( ) Lets the local main process go inside a with block. The other processes will enter the with block after the main process exits. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> with accelerator.local_main_process_first(): ... # This will be printed first by local process 0 then in a seemingly ... # random order by the other processes. ... print ( f"This will be printed by process {accelerator.local_process_index} " ) lomo_backward < source > ( loss : torch.Tensor learning_rate : float ) Runs backward pass on LOMO optimizers. main_process_first < source > ( ) Lets the main process go first inside a with block. The other processes will enter the with block after the main process exits. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> with accelerator.main_process_first(): ... # This will be printed first by process 0 then in a seemingly ... # random order by the other processes. ... print ( f"This will be printed by process {accelerator.process_index} " ) no_sync < source > ( model ) Parameters model ( torch.nn.Module ) — PyTorch Module that was prepared with Accelerator.prepare A context manager to disable gradient synchronizations across DDP processes by calling torch.nn.parallel.DistributedDataParallel.no_sync . If model is not in DDP, this context manager does nothing Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer) >>> input_a = next ( iter (dataloader)) >>> input_b = next ( iter (dataloader)) >>> with accelerator.no_sync(): ... outputs = model(input_a) ... loss = loss_func(outputs) ... accelerator.backward(loss) ... # No synchronization across processes, only accumulate gradients >>> outputs = model(input_b) >>> accelerator.backward(loss) >>> # Synchronization across all processes >>> optimizer.step() >>> optimizer.zero_grad() on_last_process < source > ( function : Callable[..., Any] ) Parameters function ( Callable ) — The function to decorate. A decorator that will run the decorated function on the last process only. Can also be called using the PartialState class. Example: Copied # Assume we have 4 processes. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_last_process def print_something (): print ( f"Printed on process {accelerator.process_index} " ) print_something() "Printed on process 3" on_local_main_process < source > ( function : Callable[..., Any] = None ) Parameters function ( Callable ) — The function to decorate. A decorator that will run the decorated function on the local main process only. 
Can also be called using the PartialState class. Example: Copied # Assume we have 2 servers with 4 processes each. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_local_main_process def print_something (): print ( "This will be printed by process 0 only on each server." ) print_something() # On server 1: "This will be printed by process 0 only" # On server 2: "This will be printed by process 0 only" on_local_process < source > ( function : Callable[..., Any] = None local_process_index : int = None ) Parameters function ( Callable , optional ) — The function to decorate. local_process_index ( int , optional ) — The index of the local process on which to run the function. A decorator that will run the decorated function on a given local process index only. Can also be called using the PartialState class. Example: Copied # Assume we have 2 servers with 4 processes each. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_local_process( local_process_index= 2 ) def print_something (): print ( f"Printed on process {accelerator.local_process_index} " ) print_something() # On server 1: "Printed on process 2" # On server 2: "Printed on process 2" on_main_process < source > ( function : Callable[..., Any] = None ) Parameters function ( Callable ) — The function to decorate. A decorator that will run the decorated function on the main process only. Can also be called using the PartialState class. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> @accelerator.on_main_process ... def print_something (): ... print ( "This will be printed by process 0 only." ) >>> print_something() "This will be printed by process 0 only" on_process < source > ( function : Callable[..., Any] = None process_index : int = None ) Parameters function ( Callable , optional ) — The function to decorate. process_index ( int , optional ) — The index of the process on which to run the function. A decorator that will run the decorated function on a given process index only. Can also be called using the PartialState class. Example: Copied # Assume we have 4 processes. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_process( process_index= 2 ) def print_something (): print ( f"Printed on process {accelerator.process_index} " ) print_something() "Printed on process 2" pad_across_processes < source > ( tensor dim = 0 pad_index = 0 pad_first = False ) → torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to gather. dim ( int , optional , defaults to 0) — The dimension on which to pad. pad_index ( int , optional , defaults to 0) — The value with which to pad. pad_first ( bool , optional , defaults to False ) — Whether to pad at the beginning or the end. Returns torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor The padded tensor(s). Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered. 
Example: Copied >>> # Assuming two processes, with the first processes having a tensor of size 1 and the second of size 2 >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> process_tensor = torch.arange(accelerator.process_index + 1 ).to(accelerator.device) >>> padded_tensor = accelerator.pad_across_processes(process_tensor) >>> padded_tensor.shape torch.Size([ 2 ]) prepare < source > ( *args device_placement = None ) Parameters *args (list of objects) — Any of the following type of objects: torch.utils.data.DataLoader : PyTorch Dataloader torch.nn.Module : PyTorch Module torch.optim.Optimizer : PyTorch Optimizer torch.optim.lr_scheduler.LRScheduler : PyTorch LR Scheduler device_placement ( list[bool] , optional ) — Used to customize whether automatic device placement should be performed for each object passed. Needs to be a list of the same length as args . Not compatible with DeepSpeed or FSDP. Prepare all objects passed in args for distributed training and mixed precision, then return them in the same order. You don’t need to prepare a model if you only use it for inference without any kind of mixed precision Examples: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume a model, optimizer, data_loader and scheduler are defined >>> model, optimizer, data_loader, scheduler = accelerator.prepare(model, optimizer, data_loader, scheduler) Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume a model, optimizer, data_loader and scheduler are defined >>> device_placement = [ True , True , False , False ] >>> # Will place the first two items passed in automatically to the right device but not the last two. >>> model, optimizer, data_loader, scheduler = accelerator.prepare( ... model, optimizer, data_loader, scheduler, device_placement=device_placement ... ) prepare_data_loader < source > ( data_loader : torch.utils.data.DataLoader device_placement = None slice_fn_for_dispatch = None ) Parameters data_loader ( torch.utils.data.DataLoader ) — A vanilla PyTorch DataLoader to prepare device_placement ( bool , optional ) — Whether or not to place the batches on the proper device in the prepared dataloader. Will default to self.device_placement . slice_fn_for_dispatch ( Callable , optional ) -- If passed, this function will be used to slice tensors across num_processes . Will default to [slice_tensors()](/docs/accelerate/v1.3.0/en/package_reference/utilities#accelerate.utils.slice_tensors). This argument is used only when dispatch_batches is set to True` and will be ignored otherwise. Prepares a PyTorch DataLoader for training in any distributed setup. It is recommended to use Accelerator.prepare() instead. Example: Copied >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> data_loader = torch.utils.data.DataLoader(...) >>> data_loader = accelerator.prepare_data_loader(data_loader, device_placement= True ) prepare_model < source > ( model : torch.nn.Module device_placement : bool = None evaluation_mode : bool = False ) Parameters model ( torch.nn.Module ) — A PyTorch model to prepare. You don’t need to prepare a model if it is used only for inference without any kind of mixed precision device_placement ( bool , optional ) — Whether or not to place the model on the proper device. Will default to self.device_placement . 
evaluation_mode ( bool , optional , defaults to False ) — Whether or not to set the model for evaluation only, by just applying mixed precision and torch.compile (if configured in the Accelerator object). Prepares a PyTorch model for training in any distributed setup. It is recommended to use Accelerator.prepare() instead. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume a model is defined >>> model = accelerator.prepare_model(model) prepare_optimizer < source > ( optimizer : torch.optim.Optimizer device_placement = None ) Parameters optimizer ( torch.optim.Optimizer ) — A vanilla PyTorch optimizer to prepare device_placement ( bool , optional ) — Whether or not to place the optimizer on the proper device. Will default to self.device_placement . Prepares a PyTorch Optimizer for training in any distributed setup. It is recommended to use Accelerator.prepare() instead. Example: Copied >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> optimizer = torch.optim.Adam(...) >>> optimizer = accelerator.prepare_optimizer(optimizer, device_placement= True ) prepare_scheduler < source > ( scheduler : LRScheduler ) Parameters scheduler ( torch.optim.lr_scheduler.LRScheduler ) — A vanilla PyTorch scheduler to prepare Prepares a PyTorch Scheduler for training in any distributed setup. It is recommended to use Accelerator.prepare() instead. Example: Copied >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> optimizer = torch.optim.Adam(...) >>> scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...) >>> scheduler = accelerator.prepare_scheduler(scheduler) print < source > ( *args **kwargs ) Drop in replacement of print() to only print once per server. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> accelerator. print ( "Hello world!" ) profile < source > ( profile_handler : ProfileKwargs | None = None ) Parameters profile_handler ( ProfileKwargs , optional ) — The profile handler to use for this context manager. If not passed, will use the one set in the Accelerator object. Will profile the code inside the context manager. The profile will be saved to a Chrome Trace file if profile_handler.output_trace_dir is set. A different profile_handler can be passed in to override the one set in the Accelerator object. Example: Copied # Profile with default settings from accelerate import Accelerator from accelerate.utils import ProfileKwargs accelerator = Accelerator() with accelerator.profile() as prof: train() accelerator. print (prof.key_averages().table()) # Profile with the custom handler def custom_handler ( prof ): print (prof.key_averages().table(sort_by= "self_cpu_time_total" , row_limit= 10 )) kwargs = ProfileKwargs(schedule_option= dict (wait= 1 , warmup= 1 , active= 1 ), on_trace_ready=custom_handler) accelerator = Accelerator(kwarg_handler=[kwargs]) with accelerator.profile() as prof: for _ in range ( 10 ): train_iteration() prof.step() # Profile and export to Chrome Trace kwargs = ProfileKwargs(output_trace_dir= "output_trace" ) accelerator = Accelerator(kwarg_handler=[kwargs]) with accelerator.profile(): train() reduce < source > ( tensor reduction = 'sum' scale = 1.0 ) → torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor Parameters tensor ( torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor ) — The tensors to reduce across all processes. 
reduction ( str , optional , defaults to “sum”) — A reduction type, can be one of ‘sum’, ‘mean’, or ‘none’. If ‘none’, will not perform any operation. scale ( float , optional , defaults to 1.0) — A default scaling value to be applied after the reduce, only valied on XLA. Returns torch.Tensor , or a nested tuple/list/dictionary of torch.Tensor The reduced tensor(s). Reduce the values in tensor across all processes based on reduction . Note: All processes get the reduced value. Example: Copied >>> # Assuming two processes >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> process_tensor = torch.arange(accelerator.num_processes) + 1 + ( 2 * accelerator.process_index) >>> process_tensor = process_tensor.to(accelerator.device) >>> reduced_tensor = accelerator.reduce(process_tensor, reduction= "sum" ) >>> reduced_tensor tensor([ 4 , 6 ]) register_for_checkpointing < source > ( *objects ) Makes note of objects and will save or load them in during save_state or load_state . These should be utilized when the state is being loaded or saved in the same script. It is not designed to be used in different scripts. Every object must have a load_state_dict and state_dict function to be stored. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume `CustomObject` has a `state_dict` and `load_state_dict` function. >>> obj = CustomObject() >>> accelerator.register_for_checkpointing(obj) >>> accelerator.save_state( "checkpoint.pt" ) register_load_state_pre_hook < source > ( hook : Callable[..., None] ) → torch.utils.hooks.RemovableHandle Parameters hook ( Callable ) — A function to be called in Accelerator.load_state() before load_checkpoint . Returns torch.utils.hooks.RemovableHandle a handle that can be used to remove the added hook by calling handle.remove() Registers a pre hook to be run before load_checkpoint is called in Accelerator.load_state() . The hook should have the following signature: hook(models: list[torch.nn.Module], input_dir: str) -> None The models argument are the models as saved in the accelerator state under accelerator._models , and the input_dir argument is the input_dir argument passed to Accelerator.load_state() . Should only be used in conjunction with Accelerator.register_save_state_pre_hook() . Can be useful to load configurations in addition to model weights. Can also be used to overwrite model loading with a customized method. In this case, make sure to remove already loaded models from the models list. register_save_state_pre_hook < source > ( hook : Callable[..., None] ) → torch.utils.hooks.RemovableHandle Parameters hook ( Callable ) — A function to be called in Accelerator.save_state() before save_checkpoint . Returns torch.utils.hooks.RemovableHandle a handle that can be used to remove the added hook by calling handle.remove() Registers a pre hook to be run before save_checkpoint is called in Accelerator.save_state() . The hook should have the following signature: hook(models: list[torch.nn.Module], weights: list[dict[str, torch.Tensor]], input_dir: str) -> None The models argument are the models as saved in the accelerator state under accelerator._models , weigths argument are the state dicts of the models , and the input_dir argument is the input_dir argument passed to Accelerator.load_state() . Should only be used in conjunction with Accelerator.register_load_state_pre_hook() . Can be useful to save configurations in addition to model weights. 
Can also be used to overwrite model saving with a customized method. In this case, make sure to remove already loaded weights from the weights list. save < source > ( obj f safe_serialization = False ) Parameters obj ( object ) — The object to save. f ( str or os.PathLike ) — Where to save the content of obj . safe_serialization ( bool , optional , defaults to False ) — Whether to save obj using safetensors Save the object passed to disk once per machine. Use in place of torch.save . Note: If save_on_each_node was passed in as a ProjectConfiguration , will save the object once per node, rather than only once on the main node. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> arr = [ 0 , 1 , 2 , 3 ] >>> accelerator.save(arr, "array.pkl" ) save_model < source > ( model : torch.nn.Module save_directory : Union[str, os.PathLike] max_shard_size : Union[int, str] = '10GB' safe_serialization : bool = True ) Parameters model — ( torch.nn.Module ): Model to be saved. The model can be wrapped or unwraped. save_directory ( str or os.PathLike ) — Directory to which to save. Will be created if it doesn’t exist. max_shard_size ( int or str , optional , defaults to "10GB" ) — The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB" ). If a single weight of the model is bigger than max_shard_size , it will be in its own checkpoint shard which will be bigger than max_shard_size . safe_serialization ( bool , optional , defaults to True ) — Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle ). Save a model so that it can be re-loaded using load_checkpoint_in_model Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model = ... >>> accelerator.save_model(model, save_directory) save_state < source > ( output_dir : str = None safe_serialization : bool = True **save_model_func_kwargs ) Parameters output_dir ( str or os.PathLike ) — The name of the folder to save all relevant weights and states. safe_serialization ( bool , optional , defaults to True ) — Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle ). save_model_func_kwargs ( dict , optional ) — Additional keyword arguments for saving model which can be passed to the underlying save function, such as optional arguments for DeepSpeed’s save_checkpoint function. Saves the current states of the model, optimizer, scaler, RNG generators, and registered objects to a folder. If a ProjectConfiguration was passed to the Accelerator object with automatic_checkpoint_naming enabled then checkpoints will be saved to self.project_dir/checkpoints . If the number of current saves is greater than total_limit then the oldest save is deleted. Each checkpoint is saved in seperate folders named checkpoint_<iteration> . Otherwise they are just saved to output_dir . Should only be used when wanting to save a checkpoint during training and restoring the state in the same environment. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer, lr_scheduler = ... >>> model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler) >>> accelerator.save_state(output_dir= "my_checkpoint" ) set_trigger < source > ( ) Sets the internal trigger tensor to 1 on the current process. 
A latter check should follow using this which will check across all processes. Note: Does not require wait_for_everyone() Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume later in the training script >>> # `should_do_breakpoint` is a custom function to monitor when to break, >>> # e.g. when the loss is NaN >>> if should_do_breakpoint(loss): ... accelerator.set_trigger() >>> # Assume later in the training script >>> if accelerator.check_breakpoint(): ... break skip_first_batches < source > ( dataloader num_batches : int = 0 ) Parameters dataloader ( torch.utils.data.DataLoader ) — The data loader in which to skip batches. num_batches ( int , optional , defaults to 0) — The number of batches to skip Creates a new torch.utils.data.DataLoader that will efficiently skip the first num_batches . Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> skipped_dataloader = accelerator.skip_first_batches(dataloader, num_batches= 2 ) >>> # for the first epoch only >>> for input , target in skipped_dataloader: ... optimizer.zero_grad() ... output = model( input ) ... loss = loss_func(output, target) ... accelerator.backward(loss) ... optimizer.step() >>> # subsequent epochs >>> for input , target in dataloader: ... optimizer.zero_grad() ... ... split_between_processes < source > ( inputs : list | tuple | dict | torch.Tensor apply_padding : bool = False ) Parameters inputs ( list , tuple , torch.Tensor , or dict of list / tuple / torch.Tensor ) — The input to split between processes. apply_padding ( bool , optional , defaults to False ) — Whether to apply padding by repeating the last element of the input so that all processes have the same number of elements. Useful when trying to perform actions such as Accelerator.gather() on the outputs or passing in less inputs than there are processes. If so, just remember to drop the padded elements afterwards. Splits input between self.num_processes quickly and can be then used on that process. Useful when doing distributed inference, such as with different prompts. Note that when using a dict , all keys need to have the same number of elements. Example: Copied # Assume there are two processes from accelerate import Accelerator accelerator = Accelerator() with accelerator.split_between_processes([ "A" , "B" , "C" ]) as inputs: print (inputs) # Process 0 [ "A" , "B" ] # Process 1 [ "C" ] with accelerator.split_between_processes([ "A" , "B" , "C" ], apply_padding= True ) as inputs: print (inputs) # Process 0 [ "A" , "B" ] # Process 1 [ "C" , "C" ] trigger_sync_in_backward < source > ( model ) Parameters model ( torch.nn.Module ) — The model for which to trigger the gradient synchronization. Trigger the sync of the gradients in the next backward pass of the model after multiple forward passes under Accelerator.no_sync (only applicable in multi-GPU scenarios). If the script is not launched in distributed mode, this context manager does nothing. Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer) >>> with accelerator.no_sync(): ... loss_a = loss_func(model(input_a)) # first forward pass ... 
loss_b = loss_func(model(input_b)) # second forward pass >>> accelerator.backward(loss_a) # No synchronization across processes, only accumulate gradients >>> with accelerator.trigger_sync_in_backward(model): ... accelerator.backward(loss_b) # Synchronization across all processes >>> optimizer.step() >>> optimizer.zero_grad() unscale_gradients < source > ( optimizer = None ) Parameters optimizer ( torch.optim.Optimizer or list[torch.optim.Optimizer] , optional ) — The optimizer(s) for which to unscale gradients. If not set, will unscale gradients on all optimizers that were passed to prepare() . Unscale the gradients in mixed precision training with AMP. This is a noop in all other settings. Likely should be called through Accelerator.clip grad_norm () or Accelerator.clip grad_value () Example: Copied >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer = accelerator.prepare(model, optimizer) >>> outputs = model(inputs) >>> loss = loss_fn(outputs, labels) >>> accelerator.backward(loss) >>> accelerator.unscale_gradients(optimizer=optimizer) unwrap_model < source > ( model keep_fp32_wrapper : bool = True keep_torch_compile : bool = True ) → torch.nn.Module Parameters model ( torch.nn.Module ) — The model to unwrap. keep_fp32_wrapper ( bool , optional , defaults to True ) — Whether to not remove the mixed precision hook if it was added. keep_torch_compile ( bool , optional , defaults to True ) — Whether to not unwrap compiled model if compiled. Returns torch.nn.Module The unwrapped model. Unwraps the model from the additional layer possible added by prepare() . Useful before saving the model. Example: Copied >>> # Assuming two GPU processes >>> from torch.nn.parallel import DistributedDataParallel >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model = accelerator.prepare(MyModel()) >>> print (model.__class__.__name__) DistributedDataParallel >>> model = accelerator.unwrap_model(model) >>> print (model.__class__.__name__) MyModel verify_device_map < source > ( model : torch.nn.Module ) Verifies that model has not been prepared with big model inference with a device-map resembling auto . wait_for_everyone < source > ( ) Will stop the execution of the current process until every other process has reached that point (so this does nothing when the script is only run in one process). Useful to do before saving a model. Example: Copied >>> # Assuming two GPU processes >>> import time >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> if accelerator.is_main_process: ... time.sleep( 2 ) >>> else : ... print ( "I'm waiting for the main process to finish its sleep..." ) >>> accelerator.wait_for_everyone() >>> # Should print on every process at the same time >>> print ( "Everyone is here" ) Utilities accelerate.utils.gather_object < source > ( object : typing.Any ) Parameters object (nested list/tuple/dictionary of picklable object) — The data to gather. Recursively gather object in a nested list/tuple/dictionary of objects from all devices. < > Update on GitHub ← Training on TPUs Stateful classes → Accelerator Accelerator Utilities
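To round out the gather_object utility documented above, here is a minimal usage sketch; it assumes the script is launched on two processes with accelerate launch , and the string payloads are purely illustrative. Copied
from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# Each process produces its own list of picklable, non-tensor objects.
local_outputs = [f"prediction from process {accelerator.process_index}"]

# Recursively gathers the objects from all processes; every process receives the full list.
all_outputs = gather_object(local_outputs)

if accelerator.is_main_process:
    print(all_outputs)
    # e.g. ['prediction from process 0', 'prediction from process 1']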
Share_a_model.txt
Share a model Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Share a model Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Share a model The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources. In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the Model Hub : Programmatically push your files to the Hub. Drag-and-drop your files to the Hub with the web interface. To share a model with the community, you need an account on huggingface.co . You can also join an existing organization or create a new one. Repository features Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences. The Model Hub’s built-in versioning is based on git and git-lfs . In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows revisions , a method for pinning a specific version of a model with a commit hash, tag or branch. As a result, you can load a specific model version with the revision parameter: Copied >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small" , revision= "4c77982" # tag name, or branch name, or commit hash ... ) Files are also easily edited in a repository, and you can view the commit history as well as the differences: Setup Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder ( ~/.cache/ by default): Copied huggingface-cli login If you are using a notebook like Jupyter or Colaboratory, make sure you have the huggingface_hub library installed. This library allows you to programmatically interact with the Hub. 
Copied pip install huggingface_hub Then use notebook_login to sign-in to the Hub, and follow the link here to generate a token to login with: Copied >>> from huggingface_hub import notebook_login >>> notebook_login() Convert a model for all frameworks To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly. Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see here for installation instructions), and then find the specific model for your task in the other framework. Pytorch Hide Pytorch content Specify from_tf=True to convert a checkpoint from TensorFlow to PyTorch: Copied >>> pt_model = DistilBertForSequenceClassification.from_pretrained( "path/to/awesome-name-you-picked" , from_tf= True ) >>> pt_model.save_pretrained( "path/to/awesome-name-you-picked" ) TensorFlow Hide TensorFlow content Specify from_pt=True to convert a checkpoint from PyTorch to TensorFlow: Copied >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained( "path/to/awesome-name-you-picked" , from_pt= True ) Then you can save your new TensorFlow model with its new checkpoint: Copied >>> tf_model.save_pretrained( "path/to/awesome-name-you-picked" ) JAX Hide JAX content If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax: Copied >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked" , from_pt= True ... ) Push a model during training Pytorch Hide Pytorch content Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the fine-tuning tutorial , the TrainingArguments class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set push_to_hub=True in your TrainingArguments : Copied >>> training_args = TrainingArguments(output_dir= "my-awesome-model" , push_to_hub= True ) Pass your training arguments as usual to Trainer : Copied >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) After you fine-tune your model, call push_to_hub() on Trainer to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card! Copied >>> trainer.push_to_hub() TensorFlow Hide TensorFlow content Share a model to the Hub with PushToHubCallback . In the PushToHubCallback function, add: An output directory for your model. A tokenizer. The hub_model_id , which is your Hub username and model name. Copied >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir= "./your_model_save_path" , tokenizer=tokenizer, hub_model_id= "your-username/my-awesome-model" ... ) Add the callback to fit , and 🤗 Transformers will push the trained model to the Hub: Copied >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs= 3 , callbacks=push_to_hub_callback) Use the push_to_hub function You can also call push_to_hub directly on your model to upload it to the Hub. 
Specify your model name in push_to_hub : Copied >>> pt_model.push_to_hub( "my-awesome-model" ) This creates a repository under your username with the model name my-awesome-model . Users can now load your model with the from_pretrained function: Copied >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained( "your_username/my-awesome-model" ) If you belong to an organization and want to push your model under the organization name instead, just add it to the repo_id : Copied >>> pt_model.push_to_hub( "my-awesome-org/my-awesome-model" ) The push_to_hub function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository: Copied >>> tokenizer.push_to_hub( "my-awesome-model" ) Or perhaps you’d like to add the TensorFlow version of your fine-tuned PyTorch model: Copied >>> tf_model.push_to_hub( "my-awesome-model" ) Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the Files tab will display all the files you’ve uploaded to the repository. For more details on how to create and upload files to a repository, refer to the Hub documentation here . Upload with the web interface Users who prefer a no-code approach are able to upload a model through the Hub’s web interface. Visit huggingface.co/new to create a new repository: From here, add some information about your model: Select the owner of the repository. This can be yourself or any of the organizations you belong to. Pick a name for your model, which will also be the repository name. Choose whether your model is public or private. Specify the license usage for your model. Now click on the Files tab and click on the Add file button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message. Add a model card To make sure users understand your model’s capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the README.md file. You can add a model card by: Manually creating and uploading a README.md file. Clicking on the Edit model card button in your model repository. Take a look at the DistilBert model card for a good example of the type of information a model card should include. For more details about other options you can control in the README.md file such as a model’s carbon footprint or widget examples, refer to the documentation here . < > Update on GitHub ← Load and train adapters with 🤗 PEFT Agents 101 → Share a model Repository features Setup Convert a model for all frameworks Push a model during training Use the push_to_hub function Upload with the web interface Add a model card
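As a possible complement to the model card steps above (an approach not covered on this page itself), the huggingface_hub library also exposes a ModelCard helper for creating the README.md programmatically. This is a rough sketch, assuming you are already logged in and the repository name your_username/my-awesome-model is hypothetical. Copied
from huggingface_hub import ModelCard

# Minimal card content; repository metadata lives in the YAML header.
content = """---
license: apache-2.0
---

# my-awesome-model

Fine-tuned DistilBERT for sentiment classification (example description).
"""

card = ModelCard(content)
card.push_to_hub("your_username/my-awesome-model")  # uploads the card as README.md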
Streamlit_Spaces.txt
Streamlit Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Streamlit Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Streamlit Spaces Streamlit gives users freedom to build a full-featured web app with Python in a reactive way. Your code is rerun each time the state of the app changes. Streamlit is also great for data visualization and supports several charting libraries such as Bokeh, Plotly, and Altair. Read this blog post about building and hosting Streamlit apps in Spaces. Selecting Streamlit as the SDK when creating a new Space will initialize your Space with the latest version of Streamlit by setting the sdk property to streamlit in your README.md file’s YAML block. If you’d like to change the Streamlit version, you can edit the sdk_version property. To use Streamlit in a Space, select Streamlit as the SDK when you create a Space through the New Space form . This will create a repository with a README.md that contains the following properties in the YAML configuration block: Copied sdk: streamlit sdk_version: 1.25 .0 # The latest supported version You can edit the sdk_version , but note that issues may occur when you use an unsupported Streamlit version. Not all Streamlit versions are supported, so please refer to the reference section to see which versions are available. For in-depth information about Streamlit, refer to the Streamlit documentation . Only port 8501 is allowed for Streamlit Spaces (default port). As a result if you provide a `config.toml` file for your Space make sure the default port is not overriden. Your First Streamlit Space: Hot Dog Classifier In the following sections, you’ll learn the basics of creating a Space, configuring it, and deploying your code to it. 
We’ll create a Hot Dog Classifier Space with Streamlit that’ll be used to demo the julien-c/hotdog-not-hotdog model, which can detect whether a given picture contains a hot dog 🌭 You can find a completed version of this hosted at NimaBoscarino/hotdog-streamlit . Create a new Streamlit Space We’ll start by creating a brand new Space and choosing Streamlit as our SDK. Hugging Face Spaces are Git repositories, meaning that you can work on your Space incrementally (and collaboratively) by pushing commits. Take a look at the Getting Started with Repositories guide to learn about how you can create and edit files before continuing. Add the dependencies For the Hot Dog Classifier we’ll be using a 🤗 Transformers pipeline to use the model, so we need to start by installing a few dependencies. This can be done by creating a requirements.txt file in our repository, and adding the following dependencies to it: Copied transformers torch The Spaces runtime will handle installing the dependencies! Create the Streamlit app To create the Streamlit app, make a new file in the repository called app.py , and add the following code: Copied import streamlit as st from transformers import pipeline from PIL import Image pipeline = pipeline(task= "image-classification" , model= "julien-c/hotdog-not-hotdog" ) st.title( "Hot Dog? Or Not?" ) file_name = st.file_uploader( "Upload a hot dog candidate image" ) if file_name is not None : col1, col2 = st.columns( 2 ) image = Image. open (file_name) col1.image(image, use_column_width= True ) predictions = pipeline(image) col2.header( "Probabilities" ) for p in predictions: col2.subheader( f" { p[ 'label' ] } : { round (p[ 'score' ] * 100 , 1 )} %" ) This Python script uses a 🤗 Transformers pipeline to load the julien-c/hotdog-not-hotdog model, which is used by the Streamlit interface. The Streamlit app will expect you to upload an image, which it’ll then classify as hot dog or not hot dog . Once you’ve saved the code to the app.py file, visit the App tab to see your app in action! Embed Streamlit Spaces on other webpages You can use the HTML <iframe> tag to embed a Streamlit Space as an inline frame on other webpages. Simply include the URL of your Space, ending with the .hf.space suffix. To find the URL of your Space, you can use the “Embed this Space” button from the Spaces options. For example, the demo above can be embedded in these docs with the following tag: Copied <iframe src = "https://NimaBoscarino-hotdog-streamlit.hf.space?embed=true" title = "My awesome Streamlit Space" ></iframe> Please note that we have added ?embed=true to the URL, which activates the embed mode of the Streamlit app, removing some spacers and the footer for slim embeds. Embed Streamlit Spaces with auto-resizing IFrames Streamlit has supported automatic iframe resizing since 1.17.0 so that the size of the parent iframe is automatically adjusted to fit the content volume of the embedded Streamlit application. It relies on the iFrame Resizer library, for which you need to add a few lines of code, as in the following example where id is set to <iframe /> that is used to specify the auto-resize target. The iFrame Resizer is loaded via the script tag. The iFrameResize() function is called with the ID of the target iframe element, so that its size changes automatically. We can pass options to the first argument of iFrameResize() . See the document for the details. 
Copied < iframe id = "your-iframe-id" src = "https://<space-subdomain>.hf.space" frameborder = "0" width = "850" height = "450" > </ iframe > < script src = "https://cdn.jsdelivr.net/npm/[email protected]/js/iframeResizer.min.js" > </ script > < script > iFrameResize ({}, "#your-iframe-id" ) </ script > Additionally, you can check out our documentation . < > Update on GitHub ← Gradio Spaces Static HTML Spaces → Streamlit Spaces Your First Streamlit Space: Hot Dog Classifier Create a new Streamlit Space Add the dependencies Create the Streamlit app Embed Streamlit Spaces on other webpages Embed Streamlit Spaces with auto-resizing IFrames
Utilities_for_Trainer.txt
Utilities for Trainer Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Utilities for Trainer Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Utilities for Trainer This page lists all the utility functions used by Trainer . Most of those are only useful if you are studying the code of the Trainer in the library. Utilities class transformers. EvalPrediction < source > ( predictions : typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray]] label_ids : typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray]] inputs : typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray], NoneType] = None losses : typing.Union[numpy.ndarray, typing.Tuple[numpy.ndarray], NoneType] = None ) Parameters predictions ( np.ndarray ) — Predictions of the model. label_ids ( np.ndarray ) — Targets to be matched. inputs ( np.ndarray , optional ) — Input data passed to the model. losses ( np.ndarray , optional ) — Loss values computed during evaluation. Evaluation output (always contains labels), to be used to compute metrics. class transformers. IntervalStrategy < source > ( value names = None module = None qualname = None type = None start = 1 ) An enumeration. transformers.enable_full_determinism < source > ( seed : int warn_only : bool = False ) Helper function for reproducible behavior during distributed training. See https://pytorch.org/docs/stable/notes/randomness.html for pytorch https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism for tensorflow transformers.set_seed < source > ( seed : int deterministic : bool = False ) Parameters seed ( int ) — The seed to set. deterministic ( bool , optional , defaults to False ) — Whether to use deterministic algorithms where available. Can slow down training. Helper function for reproducible behavior to set the seed in random , numpy , torch and/or tf (if installed). transformers.torch_distributed_zero_first < source > ( local_rank : int ) Parameters local_rank ( int ) — The rank of the local process. Decorator to make all processes in distributed training wait for each local_master to do something. Callbacks internals class transformers.trainer_callback. CallbackHandler < source > ( callbacks model processing_class optimizer lr_scheduler ) Internal class that just calls the list of callbacks in order. Distributed Evaluation class transformers.trainer_pt_utils. 
DistributedTensorGatherer < source > ( world_size num_samples make_multiple_of = None padding_index = -100 ) Parameters world_size ( int ) — The number of processes used in the distributed training. num_samples ( int ) — The number of samples in our dataset. make_multiple_of ( int , optional ) — If passed, the class assumes the datasets passed to each process are made to be a multiple of this argument (by adding samples). padding_index ( int , optional , defaults to -100) — The padding index to use if the arrays don’t all have the same sequence length. A class responsible for properly gathering tensors (or nested list/tuple of tensors) on the CPU by chunks. If our dataset has 16 samples with a batch size of 2 on 3 processes and we gather then transfer on CPU at every step, our sampler will generate the following indices: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1] to get something of size a multiple of 3 (so that each process gets the same dataset length). Then process 0, 1 and 2 will be responsible of making predictions for the following samples: P0: [0, 1, 2, 3, 4, 5] P1: [6, 7, 8, 9, 10, 11] P2: [12, 13, 14, 15, 0, 1] The first batch treated on each process will be P0: [0, 1] P1: [6, 7] P2: [12, 13] So if we gather at the end of the first batch, we will get a tensor (nested list/tuple of tensor) corresponding to the following indices: [0, 1, 6, 7, 12, 13] If we directly concatenate our results without taking any precautions, the user will then get the predictions for the indices in this order at the end of the prediction loop: [0, 1, 6, 7, 12, 13, 2, 3, 8, 9, 14, 15, 4, 5, 10, 11, 0, 1] For some reason, that’s not going to roll their boat. This class is there to solve that problem. add_arrays < source > ( arrays ) Add arrays to the internal storage, Will initialize the storage to the full size at the first arrays passed so that if we’re bound to get an OOM, it happens at the beginning. finalize < source > ( ) Return the properly gathered arrays and truncate to the number of samples (since the sampler added some extras to get each process a dataset of the same length). Trainer Argument Parser class transformers. HfArgumentParser < source > ( dataclass_types : typing.Union[transformers.hf_argparser.DataClassType, typing.Iterable[transformers.hf_argparser.DataClassType]] **kwargs ) This subclass of argparse.ArgumentParser uses type hints on dataclasses to generate arguments. The class is designed to play well with the native argparse. In particular, you can add more (non-dataclass backed) arguments to the parser after initialization and you’ll get the output back after parsing as an additional namespace. Optional: To create sub argument groups use the _argument_group_name attribute in the dataclass. parse_args_into_dataclasses < source > ( args = None return_remaining_strings = False look_for_args_file = True args_filename = None args_file_flag = None ) → Tuple consisting of Parameters args — List of strings to parse. The default is taken from sys.argv. (same as argparse.ArgumentParser) return_remaining_strings — If true, also return a list of remaining argument strings. look_for_args_file — If true, will look for a “.args” file with the same base name as the entry point script for this process, and will append its potential content to the command line args. args_filename — If not None, will uses this file instead of the “.args” file specified in the previous argument. args_file_flag — If not None, will look for a file in the command-line args specified with this flag. 
The flag can be specified multiple times and precedence is determined by the order (last one wins). Returns Tuple consisting of the dataclass instances in the same order as they were passed to the initializer.abspath if applicable, an additional namespace for more (non-dataclass backed) arguments added to the parser after initialization. The potential list of remaining argument strings. (same as argparse.ArgumentParser.parse_known_args) Parse command-line args into instances of the specified dataclass types. This relies on argparse’s ArgumentParser.parse_known_args . See the doc at: docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args parse_dict < source > ( args : typing.Dict[str, typing.Any] allow_extra_keys : bool = False ) → Tuple consisting of Parameters args ( dict ) — dict containing config values allow_extra_keys ( bool , optional , defaults to False ) — Defaults to False. If False, will raise an exception if the dict contains keys that are not parsed. Returns Tuple consisting of the dataclass instances in the same order as they were passed to the initializer. Alternative helper method that does not use argparse at all, instead uses a dict and populating the dataclass types. parse_json_file < source > ( json_file : typing.Union[str, os.PathLike] allow_extra_keys : bool = False ) → Tuple consisting of Parameters json_file ( str or os.PathLike ) — File name of the json file to parse allow_extra_keys ( bool , optional , defaults to False ) — Defaults to False. If False, will raise an exception if the json file contains keys that are not parsed. Returns Tuple consisting of the dataclass instances in the same order as they were passed to the initializer. Alternative helper method that does not use argparse at all, instead loading a json file and populating the dataclass types. parse_yaml_file < source > ( yaml_file : typing.Union[str, os.PathLike] allow_extra_keys : bool = False ) → Tuple consisting of Parameters yaml_file ( str or os.PathLike ) — File name of the yaml file to parse allow_extra_keys ( bool , optional , defaults to False ) — Defaults to False. If False, will raise an exception if the json file contains keys that are not parsed. Returns Tuple consisting of the dataclass instances in the same order as they were passed to the initializer. Alternative helper method that does not use argparse at all, instead loading a yaml file and populating the dataclass types. Debug Utilities class transformers.debug_utils. DebugUnderflowOverflow < source > ( model max_frames_to_save = 21 trace_batch_nums = [] abort_after_batch_num = None ) Parameters model ( nn.Module ) — The model to debug. max_frames_to_save ( int , optional , defaults to 21) — How many frames back to record trace_batch_nums( List[int] , optional , defaults to [] ) — Which batch numbers to trace (turns detection off) abort_after_batch_num (`int“, optional ) — Whether to abort after a certain batch number has finished This debug class helps detect and understand where the model starts getting very large or very small, and more importantly nan or inf weight and activation elements. 
There are 2 working modes: Underflow/overflow detection (default) Specific batch absolute min/max tracing without detection Mode 1: Underflow/overflow detection To activate the underflow/overflow detection, initialize the object with the model : Copied debug_overflow = DebugUnderflowOverflow(model) then run the training as normal and if nan or inf gets detected in at least one of the weight, input or output elements this module will throw an exception and will print max_frames_to_save frames that lead to this event, each frame reporting the fully qualified module name plus the class name whose forward was run the absolute min and max value of all elements for each module weights, and the inputs and output For example, here is the header and the last few frames in detection report for google/mt5-small run in fp16 mixed precision : Copied Detected inf/nan during batch_number= 0 Last 21 forward frames: abs min abs max metadata [...] encoder .block. 2 .layer. 1 .DenseReluDense.wi_0 Linear 2 . 17 e- 07 4 . 50 e+ 00 weight 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 2 . 68 e- 06 3 . 70 e+ 01 output encoder .block. 2 .layer. 1 .DenseReluDense.wi_1 Linear 8 . 08 e- 07 2 . 66 e+ 01 weight 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 1 . 27 e- 04 2 . 37 e+ 02 output encoder .block. 2 .layer. 1 .DenseReluDense.wo Linear 1 . 01 e- 06 6 . 44 e+ 00 weight 0 . 00 e+ 00 9 . 74 e+ 03 input[ 0 ] 3 . 18 e- 04 6 . 27 e+ 04 output encoder .block. 2 .layer. 1 .DenseReluDense T5DenseGatedGeluDense 1 . 79 e- 06 4 . 65 e+ 00 input[ 0 ] 3 . 18 e- 04 6 . 27 e+ 04 output encoder .block. 2 .layer. 1 .dropout Dropout 3 . 18 e- 04 6 . 27 e+ 04 input[ 0 ] 0 . 00 e+ 00 inf output You can see here, that T5DenseGatedGeluDense.forward resulted in output activations, whose absolute max value was around 62.7K, which is very close to fp16’s top limit of 64K. In the next frame we have Dropout which renormalizes the weights, after it zeroed some of the elements, which pushes the absolute max value to more than 64K, and we get an overlow. As you can see it’s the previous frames that we need to look into when the numbers start going into very large for fp16 numbers. The tracking is done in a forward hook, which gets invoked immediately after forward has completed. By default the last 21 frames are printed. You can change the default to adjust for your needs. For example : Copied debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save= 100 ) To validate that you have set up this debugging feature correctly, and you intend to use it in a training that may take hours to complete, first run it with normal tracing enabled for one of a few batches as explained in the next section. Mode 2. Specific batch absolute min/max tracing without detection The second work mode is per-batch tracing with the underflow/overflow detection feature turned off. Let’s say you want to watch the absolute min and max values for all the ingredients of each forward call of a given batch, and only do that for batches 1 and 3. Then you instantiate this class as : Copied debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[ 1 , 3 ]) And now full batches 1 and 3 will be traced using the same format as explained above. Batches are 0-indexed. This is helpful if you know that the program starts misbehaving after a certain batch number, so you can fast-forward right to that area. 
Early stopping: You can also specify the batch number after which to stop the training, with : Copied debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[ 1 , 3 ], abort_after_batch_num= 3 ) This feature is mainly useful in the tracing mode, but you can use it for any mode. Performance : As this module measures absolute min /` max of each weight of the model on every forward it’ll slow the training down. Therefore remember to turn it off once the debugging needs have been met. < > Update on GitHub ← Utilities for Tokenizers Utilities for Generation → Utilities for Trainer Utilities Callbacks internals Distributed Evaluation Trainer Argument Parser Debug Utilities
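Returning to the HfArgumentParser entry above, which lists its parsing methods without an example, here is a minimal sketch; the DataArguments dataclass and its fields are hypothetical, and TrainingArguments is included only as a typical companion dataclass. Copied
from dataclasses import dataclass, field

from transformers import HfArgumentParser, TrainingArguments

@dataclass
class DataArguments:
    # Hypothetical script-specific arguments.
    dataset_name: str = field(default="imdb")
    max_seq_length: int = field(default=128)

parser = HfArgumentParser((DataArguments, TrainingArguments))

# Parse from the command line, e.g.:
#   python train.py --output_dir out --dataset_name sst2 --num_train_epochs 3
data_args, training_args = parser.parse_args_into_dataclasses()

# Or bypass argparse entirely and populate the dataclasses from a dict:
data_args, training_args = parser.parse_dict(
    {"dataset_name": "sst2", "max_seq_length": 256, "output_dir": "out"}
)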
Benchmarks.txt
Benchmarks Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Benchmarks Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Benchmarks Hugging Face’s Benchmarking tools are deprecated and it is advised to use external Benchmarking libraries to measure the speed and memory complexity of Transformer models. Let’s take a look at how 🤗 Transformers models can be benchmarked, best practices, and already available benchmarks. A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found here . How to benchmark 🤗 Transformers models The classes PyTorchBenchmark and TensorFlowBenchmark allow to flexibly benchmark 🤗 Transformers models. The benchmark classes allow us to measure the peak memory usage and required time for both inference and training . Here, inference is defined by a single forward pass, and training is defined by a single forward pass and backward pass. The benchmark classes PyTorchBenchmark and TensorFlowBenchmark expect an object of type PyTorchBenchmarkArguments and TensorFlowBenchmarkArguments , respectively, for instantiation. PyTorchBenchmarkArguments and TensorFlowBenchmarkArguments are data classes and contain all relevant configurations for their corresponding benchmark class. In the following example, it is shown how a BERT model of type bert-base-cased can be benchmarked. Pytorch Hide Pytorch content Copied >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments >>> args = PyTorchBenchmarkArguments(models=[ "google-bert/bert-base-uncased" ], batch_sizes=[ 8 ], sequence_lengths=[ 8 , 32 , 128 , 512 ]) >>> benchmark = PyTorchBenchmark(args) TensorFlow Hide TensorFlow content Copied >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments >>> args = TensorFlowBenchmarkArguments( ... models=[ "google-bert/bert-base-uncased" ], batch_sizes=[ 8 ], sequence_lengths=[ 8 , 32 , 128 , 512 ] ... ) >>> benchmark = TensorFlowBenchmark(args) Here, three arguments are given to the benchmark argument data classes, namely models , batch_sizes , and sequence_lengths . The argument models is required and expects a list of model identifiers from the model hub The list arguments batch_sizes and sequence_lengths define the size of the input_ids on which the model is benchmarked. There are many more parameters that can be configured via the benchmark argument data classes. 
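For example, a sketch of enabling a couple of them is shown below; it assumes the fp16 and training fields exposed by the argument data class (the exact field names can be checked in the files referenced next). Copied
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Assumed fields: `fp16` benchmarks in half precision, `training` additionally
# benchmarks a forward + backward pass instead of inference only.
args = PyTorchBenchmarkArguments(
    models=["google-bert/bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[32, 128],
    fp16=True,
    training=True,
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()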
For more detail on these one can either directly consult the files src/transformers/benchmark/benchmark_args_utils.py , src/transformers/benchmark/benchmark_args.py (for PyTorch) and src/transformers/benchmark/benchmark_args_tf.py (for Tensorflow). Alternatively, running the following shell commands from root will print out a descriptive list of all configurable parameters for PyTorch and Tensorflow respectively. Pytorch Hide Pytorch content Copied python examples/pytorch/benchmarking/run_benchmark.py -- help An instantiated benchmark object can then simply be run by calling benchmark.run() . Copied >>> results = benchmark.run() >>> print (results) ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 0.006 google-bert/bert-base-uncased 8 32 0.006 google-bert/bert-base-uncased 8 128 0.018 google-bert/bert-base-uncased 8 512 0.088 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 1227 google-bert/bert-base-uncased 8 32 1281 google-bert/bert-base-uncased 8 128 1307 google-bert/bert-base-uncased 8 512 1539 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11 .0 - framework: PyTorch - use_torchscript: False - framework_version: 1.4 .0 - python_version: 3.6 .10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020 -06- 29 - time: 08: 58 : 43.371351 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False TensorFlow Hide TensorFlow content Copied python examples/tensorflow/benchmarking/run_benchmark_tf.py -- help An instantiated benchmark object can then simply be run by calling benchmark.run() . 
Copied >>> results = benchmark.run() >>> print (results) >>> results = benchmark.run() >>> print (results) ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 0.005 google-bert/bert-base-uncased 8 32 0.008 google-bert/bert-base-uncased 8 128 0.022 google-bert/bert-base-uncased 8 512 0.105 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 1330 google-bert/bert-base-uncased 8 32 1330 google-bert/bert-base-uncased 8 128 1330 google-bert/bert-base-uncased 8 512 1770 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11 .0 - framework: Tensorflow - use_xla: False - framework_version: 2.2 .0 - python_version: 3.6 .10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020 -06- 29 - time: 09: 26 : 35.617317 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False By default, the time and the required memory for inference are benchmarked. In the example output above the first two sections show the result corresponding to inference time and inference memory . In addition, all relevant information about the computing environment, e.g. the GPU type, the system, the library versions, etc… are printed out in the third section under ENVIRONMENT INFORMATION . This information can optionally be saved in a .csv file when adding the argument save_to_csv=True to PyTorchBenchmarkArguments and TensorFlowBenchmarkArguments respectively. In this case, every section is saved in a separate .csv file. The path to each .csv file can optionally be defined via the argument data classes. Instead of benchmarking pre-trained models via their model identifier, e.g. google-bert/bert-base-uncased , the user can alternatively benchmark an arbitrary configuration of any available model class. In this case, a list of configurations must be inserted with the benchmark args as follows. Pytorch Hide Pytorch content Copied >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig >>> args = PyTorchBenchmarkArguments( ... models=[ "bert-base" , "bert-384-hid" , "bert-6-lay" ], batch_sizes=[ 8 ], sequence_lengths=[ 8 , 32 , 128 , 512 ] ... 
) >>> config_base = BertConfig() >>> config_384_hid = BertConfig(hidden_size= 384 ) >>> config_6_lay = BertConfig(num_hidden_layers= 6 ) >>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay]) >>> benchmark.run() ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base 8 128 0.006 bert-base 8 512 0.006 bert-base 8 128 0.018 bert-base 8 512 0.088 bert- 384 -hid 8 8 0.006 bert- 384 -hid 8 32 0.006 bert- 384 -hid 8 128 0.011 bert- 384 -hid 8 512 0.054 bert- 6 -lay 8 8 0.003 bert- 6 -lay 8 32 0.004 bert- 6 -lay 8 128 0.009 bert- 6 -lay 8 512 0.044 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base 8 8 1277 bert-base 8 32 1281 bert-base 8 128 1307 bert-base 8 512 1539 bert- 384 -hid 8 8 1005 bert- 384 -hid 8 32 1027 bert- 384 -hid 8 128 1035 bert- 384 -hid 8 512 1255 bert- 6 -lay 8 8 1097 bert- 6 -lay 8 32 1101 bert- 6 -lay 8 128 1127 bert- 6 -lay 8 512 1359 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11 .0 - framework: PyTorch - use_torchscript: False - framework_version: 1.4 .0 - python_version: 3.6 .10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020 -06- 29 - time: 09: 35 : 25.143267 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False TensorFlow Hide TensorFlow content Copied >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig >>> args = TensorFlowBenchmarkArguments( ... models=[ "bert-base" , "bert-384-hid" , "bert-6-lay" ], batch_sizes=[ 8 ], sequence_lengths=[ 8 , 32 , 128 , 512 ] ... 
) >>> config_base = BertConfig() >>> config_384_hid = BertConfig(hidden_size= 384 ) >>> config_6_lay = BertConfig(num_hidden_layers= 6 ) >>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay]) >>> benchmark.run() ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base 8 8 0.005 bert-base 8 32 0.008 bert-base 8 128 0.022 bert-base 8 512 0.106 bert- 384 -hid 8 8 0.005 bert- 384 -hid 8 32 0.007 bert- 384 -hid 8 128 0.018 bert- 384 -hid 8 512 0.064 bert- 6 -lay 8 8 0.002 bert- 6 -lay 8 32 0.003 bert- 6 -lay 8 128 0.0011 bert- 6 -lay 8 512 0.074 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base 8 8 1330 bert-base 8 32 1330 bert-base 8 128 1330 bert-base 8 512 1770 bert- 384 -hid 8 8 1330 bert- 384 -hid 8 32 1330 bert- 384 -hid 8 128 1330 bert- 384 -hid 8 512 1540 bert- 6 -lay 8 8 1330 bert- 6 -lay 8 32 1330 bert- 6 -lay 8 128 1330 bert- 6 -lay 8 512 1540 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11 .0 - framework: Tensorflow - use_xla: False - framework_version: 2.2 .0 - python_version: 3.6 .10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020 -06- 29 - time: 09: 38 : 15.487125 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False Again, inference time and required memory for inference are measured, but this time for customized configurations of the BertModel class. This feature can especially be helpful when deciding for which configuration the model should be trained. Benchmark best practices This section lists a couple of best practices one should be aware of when benchmarking a model. Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user specifies on which device the code should be run by setting the CUDA_VISIBLE_DEVICES environment variable in the shell, e.g. export CUDA_VISIBLE_DEVICES=0 before running the code. The option no_multi_processing should only be set to True for testing and debugging. To ensure accurate memory measurement it is recommended to run each memory benchmark in a separate process by making sure no_multi_processing is set to True . One should always state the environment information when sharing the results of a model benchmark. Results can vary heavily between different GPU devices, library versions, etc., as a consequence, benchmark results on their own are not very useful for the community. Sharing your benchmark Previously all available core models (10 at the time) have been benchmarked for inference time , across many different settings: using PyTorch, with and without TorchScript, using TensorFlow, with and without XLA. All of those tests were done across CPUs (except for TensorFlow XLA) and GPUs. 
The approach is detailed in the following blogpost and the results are available here . With the new benchmark tools, it is easier than ever to share your benchmark results with the community: PyTorch Benchmarking Results and TensorFlow Benchmarking Results .
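When sharing results, it also helps to write each result section to disk with the save_to_csv option mentioned earlier. The snippet below is a minimal sketch of that workflow; the CSV file names are illustrative choices, and the GPU is pinned by exporting CUDA_VISIBLE_DEVICES in the shell before launching Python, as recommended in the best practices above.

# Run `export CUDA_VISIBLE_DEVICES=0` in the shell first to pin the benchmark to one GPU.
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["google-bert/bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[8, 32, 128, 512],
    save_to_csv=True,                                  # write every result section to a .csv file
    inference_time_csv_file="inference_time.csv",      # speed section (illustrative file name)
    inference_memory_csv_file="inference_memory.csv",  # memory section (illustrative file name)
    env_info_csv_file="env_info.csv",                  # environment information (illustrative file name)
)

benchmark = PyTorchBenchmark(args)
results = benchmark.run()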
LyCORIS.txt
LyCORIS Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation LyCORIS PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started LyCORIS LyCORIS (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) are LoRA-like matrix decomposition adapters that modify the cross-attention layer of the UNet. The LoHa and LoKr methods inherit from the Lycoris classes here. LycorisConfig class peft.tuners.lycoris_utils. LycorisConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False rank_pattern : Optional[dict] = <factory> alpha_pattern : Optional[dict] = <factory> ) A base config for LyCORIS like adapters LycorisLayer class peft.tuners.lycoris_utils. LycorisLayer < source > ( base_layer : nn.Module ) A base layer for LyCORIS like adapters merge < source > ( safe_merge : bool = False adapter_names : Optional[list[str]] = None ) Parameters safe_merge ( bool , optional ) — If True , the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to False . adapter_names ( List[str] , optional ) — The list of adapter names that should be merged. If None , all active adapters will be merged. Defaults to None . Merge the active adapter weights into the base weights unmerge < source > ( ) This method unmerges all merged adapter layers from the base weights. LycorisTuner class peft.tuners.lycoris_utils. LycorisTuner < source > ( model config adapter_name low_cpu_mem_usage : bool = False ) Parameters model ( torch.nn.Module ) — The model to be adapted. 
config ( LoraConfig ) — The configuration of the Lora model. adapter_name ( str ) — The name of the adapter, defaults to "default" . low_cpu_mem_usage ( bool , optional , defaults to False ) — Create empty adapter weights on meta device. Useful to speed up the loading process. A base tuner for LyCORIS like adapters delete_adapter < source > ( adapter_name : str ) Parameters adapter_name ( str ) — Name of the adapter to be deleted. Deletes an existing adapter. disable_adapter_layers < source > ( ) Disable all adapters. When disabling all adapters, the model output corresponds to the output of the base model. enable_adapter_layers < source > ( ) Enable all adapters. Call this if you have previously disabled all adapters and want to re-enable them. merge_and_unload < source > ( progressbar : bool = False safe_merge : bool = False adapter_names : Optional[list[str]] = None ) Parameters progressbar ( bool ) — whether to show a progressbar indicating the unload and merge process safe_merge ( bool ) — whether to activate the safe merging check to check if there is any potential Nan in the adapter weights adapter_names ( List[str] , optional ) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None . This method merges the adapter layers into the base model. This is needed if someone wants to use the base model as a standalone model. set_adapter < source > ( adapter_name : str | list[str] ) Parameters adapter_name ( str or list[str] ) — Name of the adapter(s) to be activated. Set the active adapter(s). Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, use the following code. Copied >>> for name, param in model_peft.named_parameters(): ... if ...: # some check on name (ex. if 'lora' in name) ... param.requires_grad = False unload < source > ( ) Gets back the base model by removing all the lora modules without merging. This gives back the original base model. < > Update on GitHub ← X-LoRA Multitask Prompt Tuning → LyCORIS Lycoris Config Lycoris Layer Lycoris Tuner
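The LycorisTuner methods documented above are typically reached through a PEFT model built from one of the LyCORIS-style configs. Below is a minimal sketch using LoHa on a Transformers model; the base checkpoint and target module names are illustrative assumptions, so substitute modules that exist in your own model.

from transformers import AutoModelForSequenceClassification
from peft import LoHaConfig, get_peft_model

# Attach a LoHa (LyCORIS-style) adapter to an assumed base model.
base_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
config = LoHaConfig(r=8, alpha=16, target_modules=["query", "value"])

peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()

# After training, the tuner methods described above fold the adapter back into the base weights.
merged_model = peft_model.merge_and_unload()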
Train_a_diffusion_model.txt
Train a diffusion model Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Train a diffusion model Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub , but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 
💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab #!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login >>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs !git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass >>> @dataclass ... class TrainingConfig : ... image_size = 128 # the generated image resolution ... train_batch_size = 16 ... eval_batch_size = 16 # how many images to sample during evaluation ... num_epochs = 50 ... gradient_accumulation_steps = 1 ... learning_rate = 1e-4 ... lr_warmup_steps = 500 ... save_image_epochs = 10 ... save_model_epochs = 30 ... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision ... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub ... push_to_hub = True # whether to upload the saved model to the HF Hub ... hub_model_id = "<your-username>/<my-awesome-model>" # the name of the repository to create on the HF Hub ... hub_private_repo = None ... overwrite_output_dir = True # overwrite the old model when re-running the notebook ... seed = 0 >>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset >>> config.dataset_name = "huggan/smithsonian_butterflies_subset" >>> dataset = load_dataset(config.dataset_name, split= "train" ) 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder . Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt >>> fig, axs = plt.subplots( 1 , 4 , figsize=( 16 , 4 )) >>> for i, image in enumerate (dataset[: 4 ][ "image" ]): ... axs[i].imshow(image) ... axs[i].set_axis_off() >>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size . RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms >>> preprocess = transforms.Compose( ... 
[ ... transforms.Resize((config.image_size, config.image_size)), ... transforms.RandomHorizontalFlip(), ... transforms.ToTensor(), ... transforms.Normalize([ 0.5 ], [ 0.5 ]), ... ] ... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform ( examples ): ... images = [preprocess(image.convert( "RGB" )) for image in examples[ "image" ]] ... return { "images" : images} >>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch >>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle= True ) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel : Copied >>> from diffusers import UNet2DModel >>> model = UNet2DModel( ... sample_size=config.image_size, # the target image resolution ... in_channels= 3 , # the number of input channels, 3 for RGB images ... out_channels= 3 , # the number of output channels ... layers_per_block= 2 , # how many ResNet layers to use per UNet block ... block_out_channels=( 128 , 128 , 256 , 256 , 512 , 512 ), # the number of output channels for each UNet block ... down_block_types=( ... "DownBlock2D" , # a regular ResNet downsampling block ... "DownBlock2D" , ... "DownBlock2D" , ... "DownBlock2D" , ... "AttnDownBlock2D" , # a ResNet downsampling block with spatial self-attention ... "DownBlock2D" , ... ), ... up_block_types=( ... "UpBlock2D" , # a regular ResNet upsampling block ... "AttnUpBlock2D" , # a ResNet upsampling block with spatial self-attention ... "UpBlock2D" , ... "UpBlock2D" , ... "UpBlock2D" , ... "UpBlock2D" , ... ), ... ) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[ 0 ][ "images" ].unsqueeze( 0 ) >>> print ( "Input shape:" , sample_image.shape) Input shape: torch.Size([ 1 , 3 , 128 , 128 ]) >>> print ( "Output shape:" , model(sample_image, timestep= 0 ).sample.shape) Output shape: torch.Size([ 1 , 3 , 128 , 128 ]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule . Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch >>> from PIL import Image >>> from diffusers import DDPMScheduler >>> noise_scheduler = DDPMScheduler(num_train_timesteps= 1000 ) >>> noise = torch.randn(sample_image.shape) >>> timesteps = torch.LongTensor([ 50 ]) >>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) >>> Image.fromarray(((noisy_image.permute( 0 , 2 , 3 , 1 ) + 1.0 ) * 127.5 ). type (torch.uint8).numpy()[ 0 ]) The training objective of the model is to predict the noise added to the image. 
The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F >>> noise_pred = model(noisy_image, timesteps).sample >>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup >>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) >>> lr_scheduler = get_cosine_schedule_with_warmup( ... optimizer=optimizer, ... num_warmup_steps=config.lr_warmup_steps, ... num_training_steps=( len (train_dataloader) * config.num_epochs), ... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline >>> from diffusers.utils import make_image_grid >>> import os >>> def evaluate ( config, epoch, pipeline ): ... # Sample some images from random noise (this is the backward diffusion process). ... # The default pipeline output type is `List[PIL.Image]` ... images = pipeline( ... batch_size=config.eval_batch_size, ... generator=torch.Generator(device= 'cpu' ).manual_seed(config.seed), # Use a separate torch generator to avoid rewinding the random state of the main training loop ... ).images ... # Make a grid out of the images ... image_grid = make_image_grid(images, rows= 4 , cols= 4 ) ... # Save the images ... test_dir = os.path.join(config.output_dir, "samples" ) ... os.makedirs(test_dir, exist_ok= True ) ... image_grid.save( f" {test_dir} / {epoch:04d} .png" ) Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator >>> from huggingface_hub import create_repo, upload_folder >>> from tqdm.auto import tqdm >>> from pathlib import Path >>> import os >>> def train_loop ( config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler ): ... # Initialize accelerator and tensorboard logging ... accelerator = Accelerator( ... mixed_precision=config.mixed_precision, ... gradient_accumulation_steps=config.gradient_accumulation_steps, ... log_with= "tensorboard" , ... project_dir=os.path.join(config.output_dir, "logs" ), ... ) ... if accelerator.is_main_process: ... if config.output_dir is not None : ... os.makedirs(config.output_dir, exist_ok= True ) ... if config.push_to_hub: ... repo_id = create_repo( ... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok= True ... ).repo_id ... accelerator.init_trackers( "train_example" ) ... # Prepare everything ... # There is no specific order to remember, you just need to unpack the ... # objects in the same order you gave them to the prepare method. ... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( ... 
model, optimizer, train_dataloader, lr_scheduler ... ) ... global_step = 0 ... # Now you train the model ... for epoch in range (config.num_epochs): ... progress_bar = tqdm(total= len (train_dataloader), disable= not accelerator.is_local_main_process) ... progress_bar.set_description( f"Epoch {epoch} " ) ... for step, batch in enumerate (train_dataloader): ... clean_images = batch[ "images" ] ... # Sample noise to add to the images ... noise = torch.randn(clean_images.shape, device=clean_images.device) ... bs = clean_images.shape[ 0 ] ... # Sample a random timestep for each image ... timesteps = torch.randint( ... 0 , noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, ... dtype=torch.int64 ... ) ... # Add noise to the clean images according to the noise magnitude at each timestep ... # (this is the forward diffusion process) ... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) ... with accelerator.accumulate(model): ... # Predict the noise residual ... noise_pred = model(noisy_images, timesteps, return_dict= False )[ 0 ] ... loss = F.mse_loss(noise_pred, noise) ... accelerator.backward(loss) ... if accelerator.sync_gradients: ... accelerator.clip_grad_norm_(model.parameters(), 1.0 ) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update( 1 ) ... logs = { "loss" : loss.detach().item(), "lr" : lr_scheduler.get_last_lr()[ 0 ], "step" : global_step} ... progress_bar.set_postfix(**logs) ... accelerator.log(logs, step=global_step) ... global_step += 1 ... # After each epoch you optionally sample some demo images with evaluate() and save the model ... if accelerator.is_main_process: ... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) ... if (epoch + 1 ) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1 : ... evaluate(config, epoch, pipeline) ... if (epoch + 1 ) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1 : ... if config.push_to_hub: ... upload_folder( ... repo_id=repo_id, ... folder_path=config.output_dir, ... commit_message= f"Epoch {epoch} " , ... ignore_patterns=[ "step_*" , "epoch_*" ], ... ) ... else : ... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher >>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) >>> notebook_launcher(train_loop, args, num_processes= 1 ) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob >>> sample_images = sorted (glob.glob( f" {config.output_dir} /samples/*.png" )) >>> Image. open (sample_images[- 1 ]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion , an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth , a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. 
Guide to using LoRA, a memory-efficient technique for finetuning really large models faster.
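Before diving into those guides, it is worth confirming that the checkpoint you just pushed (or saved locally) loads back cleanly. The sketch below reloads the pipeline for inference; the repository id is a placeholder for the hub_model_id or output_dir configured earlier, and moving to CUDA assumes a GPU is available.

import torch
from diffusers import DDPMPipeline

# Placeholder repo id: use your own hub_model_id, or pass the local output_dir path instead.
pipeline = DDPMPipeline.from_pretrained("<your-username>/ddpm-butterflies-128")
pipeline.to("cuda")  # optional; assumes a CUDA device

images = pipeline(
    batch_size=4,
    generator=torch.Generator(device="cpu").manual_seed(0),  # reproducible sampling
).images
images[0].save("butterfly.png")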
Batch_mapping.txt
Batch mapping Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Batch mapping Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Batch mapping Combining the utility of Dataset.map() with batch mode is very powerful. It allows you to speed up processing, and freely control the size of the generated dataset. Need for speed The primary objective of batch mapping is to speed up processing. Often times, it is faster to work with batches of data instead of single examples. Naturally, batch mapping lends itself to tokenization. For example, the 🤗 Tokenizers library works faster with batches because it parallelizes the tokenization of all the examples in a batch. Input size != output size The ability to control the size of the generated dataset can be leveraged for many interesting use-cases. In the How-to map section, there are examples of using batch mapping to: Split long sentences into shorter chunks. Augment a dataset with additional tokens. It is helpful to understand how this works, so you can come up with your own ways to use batch mapping. At this point, you may be wondering how you can control the size of the generated dataset. The answer is: the mapped function does not have to return an output batch of the same size . In other words, your mapped function input can be a batch of size N and return a batch of size M . The output M can be greater than or less than N . 
This means you can concatenate your examples, divide them up, and even add more examples! However, remember that all values in the output dictionary must contain the same number of elements as the other fields in that dictionary. Otherwise, it is not possible to define the number of examples in the output returned by the mapped function. The number can vary between successive batches processed by the mapped function. For a single batch though, all values of the output dictionary should have the same length (i.e., the number of elements). For example, from a dataset of 1 column and 3 rows, if you use map to return a new column with twice as many rows, you will get an error: you end up with one column with 3 rows and one column with 6 rows, so the table will not be valid:

>>> from datasets import Dataset
>>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
>>> dataset.map(lambda batch: {"b": batch["a"] * 2}, batched=True)  # new column with 6 elements: [0, 1, 2, 0, 1, 2]
'ArrowInvalid: Column 1 named b expected length 3 but got length 6'

To make it valid, you have to drop one of the columns:

>>> from datasets import Dataset
>>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
>>> dataset_with_duplicates = dataset.map(lambda batch: {"b": batch["a"] * 2}, remove_columns=["a"], batched=True)
>>> len(dataset_with_duplicates)
6

The sketch below shows another batch-mapped function that returns more rows than it receives, this time by chunking text.
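Here, each input document is split into fixed-size word chunks, so an input batch of 2 rows yields 4 output rows. The column names and chunk size are illustrative.

from datasets import Dataset

dataset = Dataset.from_dict({"text": ["one two three four five", "six seven"]})

def split_into_chunks(batch, chunk_size=2):
    # Return more rows than we received: every chunk becomes its own example.
    chunks = []
    for text in batch["text"]:
        words = text.split()
        chunks.extend(" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size))
    return {"chunk": chunks}

chunked = dataset.map(split_into_chunks, batched=True, remove_columns=dataset.column_names)
print(len(dataset), len(chunked))  # 2 input rows -> 4 output rows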
Perform_SQL_operations.txt
Perform SQL operations Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Perform SQL operations Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Argilla Dask Datasets Distilabel DuckDB Authentication for private and gated datasets Query datasets Perform SQL operations Combine datasets and export Perform vector similarity search FiftyOne Pandas Polars Spark WebDataset Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Perform SQL operations Performing SQL operations with DuckDB opens up a world of possibilities for querying datasets efficiently. Let’s dive into some examples showcasing the power of DuckDB functions. For our demonstration, we’ll explore a fascinating dataset. The MMLU dataset is a multitask test containing multiple-choice questions spanning various knowledge domains. 
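The examples on this page are written as plain DuckDB SQL, but the same statements can be run from Python through the duckdb client if you prefer to follow along in a notebook. A minimal sketch, assuming the duckdb package is installed and a recent DuckDB version that resolves hf:// paths:

import duckdb

# Quick sanity check before querying further: count the rows across the test split files.
total = duckdb.sql(
    "SELECT COUNT(*) AS n FROM 'hf://datasets/cais/mmlu/all/test-*.parquet';"
).df()
print(total)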
To preview the dataset, let’s select a sample of 3 rows: Copied FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' USING SAMPLE 3; ┌──────────────────────┬──────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐ │ question │ subject │ choices │ answer │ │ varchar │ varchar │ varchar[] │ int64 │ ├──────────────────────┼──────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤ │ The model of light… │ conceptual_physics │ [wave model, particle model, Both of these, Neither of these] │ 1 │ │ A person who is lo… │ professional_psych… │ [his/her life scripts., his/her own feelings, attitudes, and beliefs., the emotional reactions and behaviors of the people he/she is interacting with.… │ 1 │ │ The thermic effect… │ nutrition │ [is substantially higher for carbohydrate than for protein, is accompanied by a slight decrease in body core temperature., is partly related to sympat… │ 2 │ └──────────────────────┴──────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘ This command retrieves a random sample of 3 rows from the dataset for us to examine. Let’s start by examining the schema of our dataset. The following table outlines the structure of our dataset: Copied DESCRIBE FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' USING SAMPLE 3; ┌─────────────┬─────────────┬─────────┬─────────┬─────────┬─────────┐ │ column_name │ column_type │ null │ key │ default │ extra │ │ varchar │ varchar │ varchar │ varchar │ varchar │ varchar │ ├─────────────┼─────────────┼─────────┼─────────┼─────────┼─────────┤ │ question │ VARCHAR │ YES │ │ │ │ │ subject │ VARCHAR │ YES │ │ │ │ │ choices │ VARCHAR[] │ YES │ │ │ │ │ answer │ BIGINT │ YES │ │ │ │ └─────────────┴─────────────┴─────────┴─────────┴─────────┴─────────┘ Next, let’s analyze if there are any duplicated records in our dataset: Copied SELECT *, COUNT(*) AS counts FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' GROUP BY ALL HAVING counts > 2; ┌──────────┬─────────┬───────────┬────────┬────────┐ │ question │ subject │ choices │ answer │ counts │ │ varchar │ varchar │ varchar[] │ int64 │ int64 │ ├──────────┴─────────┴───────────┴────────┴────────┤ │ 0 rows │ └──────────────────────────────────────────────────┘ Fortunately, our dataset doesn’t contain any duplicate records. 
Let’s see the proportion of questions based on the subject in a bar representation: Copied SELECT subject, COUNT(*) AS counts, BAR(COUNT(*), 0, (SELECT COUNT(*) FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' )) AS percentage FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' GROUP BY subject ORDER BY counts DESC; ┌──────────────────────────────┬────────┬────────────────────────────────────────────────────────────────────────────────┐ │ subject │ counts │ percentage │ │ varchar │ int64 │ varchar │ ├──────────────────────────────┼────────┼────────────────────────────────────────────────────────────────────────────────┤ │ professional_law │ 1534 │ ████████▋ │ │ moral_scenarios │ 895 │ █████ │ │ miscellaneous │ 783 │ ████▍ │ │ professional_psychology │ 612 │ ███▍ │ │ high_school_psychology │ 545 │ ███ │ │ high_school_macroeconomics │ 390 │ ██▏ │ │ elementary_mathematics │ 378 │ ██▏ │ │ moral_disputes │ 346 │ █▉ │ ├──────────────────────────────┴────────┴────────────────────────────────────────────────────────────────────────────────┤ │ 57 rows (8 shown) 3 columns │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ Now, let’s prepare a subset of the dataset containing questions related to nutrition and create a mapping of questions to correct answers. Notice that we have the column choices from which we can get the correct answer using the answer column as an index. Copied SELECT * FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' WHERE subject = 'nutrition' LIMIT 3; ┌──────────────────────┬───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┐ │ question │ subject │ choices │ answer │ │ varchar │ varchar │ varchar[] │ int64 │ ├──────────────────────┼───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┤ │ Which foods tend t… │ nutrition │ [Meat, Confectionary, Fruits and vegetables, Potatoes] │ 2 │ │ In which one of th… │ nutrition │ [If the incidence rate of the disease falls., If survival time with the disease increases., If recovery of the disease is faster., If the population in which the… │ 1 │ │ Which of the follo… │ nutrition │ [The flavonoid class comprises flavonoids and isoflavonoids., The digestibility and bioavailability of isoflavones in soya food products are not changed by proce… │ 0 │ └──────────────────────┴───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┘ Copied SELECT question, choices[answer] AS correct_answer FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' WHERE subject = 'nutrition' LIMIT 3; ┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────────────────────┐ │ question │ correct_answer │ │ varchar │ varchar │ ├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────────────────────┤ │ Which foods tend to be consumed in lower quantities in Wales and Scotland (as of 2020)?\n │ Confectionary │ │ In which one of the following circumstances will the prevalence of a disease in the 
population increase, all else being constant?\n │ If the incidence rate of the disease falls. │ │ Which of the following statements is correct?\n │ │ └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────────────────────┘ To ensure data cleanliness, let’s remove any newline characters at the end of the questions and filter out any empty answers: Copied SELECT regexp_replace(question, '\n' , '' ) AS question, choices[answer] AS correct_answer FROM 'hf://datasets/cais/mmlu/all/test-*.parquet' WHERE subject = 'nutrition' AND LENGTH(correct_answer) > 0 LIMIT 3; ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────────────────────┐ │ question │ correct_answer │ │ varchar │ varchar │ ├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────────────────────┤ │ Which foods tend to be consumed in lower quantities in Wales and Scotland (as of 2020)? │ Confectionary │ │ In which one of the following circumstances will the prevalence of a disease in the population increase, all else being constant? │ If the incidence rate of the disease falls. │ │ Which vitamin is a major lipid-soluble antioxidant in cell membranes? │ Vitamin D │ └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────────────────────┘ Finally, lets highlight some of the DuckDB functions used in this section: DESCRIBE , returns the table schema. USING SAMPLE , samples are used to randomly select a subset of a dataset. BAR , draws a band whose width is proportional to (x - min) and equal to width characters when x = max. Width defaults to 80. string[begin:end] , extracts a string using slice conventions. Missing begin or end arguments are interpreted as the beginning or end of the list respectively. Negative values are accepted. regexp_replace , if the string contains the regexp pattern, replaces the matching part with replacement. LENGTH , gets the number of characters in the string. There are plenty of useful functions available in DuckDB’s SQL functions overview . The best part is that you can use them directly on Hugging Face datasets. < > Update on GitHub ← Query datasets Combine datasets and export → Perform SQ L operations
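Once the subset looks clean, it can also be materialized locally, for example as a Parquet file with DuckDB's COPY statement. A minimal sketch run from Python; the output file name is an arbitrary choice:

import duckdb

# Write the cleaned nutrition question/answer pairs to a local Parquet file.
duckdb.sql(r"""
    COPY (
        SELECT regexp_replace(question, '\n', '') AS question,
               choices[answer] AS correct_answer
        FROM 'hf://datasets/cais/mmlu/all/test-*.parquet'
        WHERE subject = 'nutrition' AND LENGTH(choices[answer]) > 0
    ) TO 'mmlu_nutrition_qa.parquet' (FORMAT PARQUET);
""")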
Optimum.txt
Optimum
The Optimum library supports quantization for Intel, Furiosa, ONNX Runtime, GPTQ, and lower-level PyTorch quantization functions. Consider using Optimum for quantization if you’re using specific and optimized hardware like Intel CPUs, Furiosa NPUs or a model accelerator like ONNX Runtime.
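As a concrete illustration of the ONNX Runtime path, here is a minimal sketch of dynamic quantization with Optimum. The checkpoint, quantization preset, and output directory are illustrative assumptions; consult the Optimum documentation for the options that match your hardware and library version.

from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export an assumed checkpoint to ONNX, then quantize it dynamically for AVX512-VNNI CPUs.
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx_quantized", quantization_config=qconfig)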
Collection_of_Usage_Statistics.txt
Collection of Usage Statistics

Text Generation Inference collects anonymous usage statistics to help us improve the service. The collected data is used to improve TGI and to understand what causes failures. The data is collected transparently and any sensitive information is omitted. Data is sent twice, once on server startup and once when the server stops. Also, usage statistics are only enabled when TGI is running in Docker, to avoid collecting data when TGI runs directly on the host machine.

What data is collected

The code that collects the data is available here .
As of release 2.1.2 this is an example of the data collected: From the TGI configuration: Copied { "event_type" : "start" , "disable_grammar_support" : false , "max_batch_prefill_tokens" : 4096 , "max_batch_size" : null , "max_batch_total_tokens" : null , "max_best_of" : 2 , "max_client_batch_size" : 4 , "max_concurrent_requests" : 128 , "max_input_tokens" : 1024 , "max_stop_sequences" : 4 , "max_top_n_tokens" : 5 , "max_total_tokens" : 2048 , "max_waiting_tokens" : 20 , "model_config" : { "model_type" : "Bloom" } , "revision" : null , "tokenizer_class" : "BloomTokenizerFast" , "validation_workers" : 2 , "waiting_served_ratio" : 1.2 , "docker_label" : "latest" , "git_sha" : "cfc118704880453d29bcbe4fbbd91dda501cf5fe" , "nvidia_env" : { "name" : "NVIDIA A10G" , "pci_bus_id" : "00000000:00:1E.0" , "driver_version" : "535.183.01" , "pstate" : "P8" , "pcie_link_gen_max" : "4" , "pcie_link_gen_current" : "1" , "temperature_gpu" : "31" , "utilization_gpu" : "0 %" , "utilization_memory" : "0 %" , "memory_total" : "23028 MiB" , "memory_free" : "22515 MiB" , "memory_used" : "0 MiB" , "reset_status_reset_required" : "No" , "reset_status_drain_and_reset_recommended" : "No" , "compute_cap" : "8.6" , "ecc_errors_corrected_volatile_total" : "0" , "mig_mode_current" : "[N/A]" , "power_draw_instant" : "10.86 W" , "power_limit" : "300.00 W" } , "system_env" : { "cpu_count" : 16 , "cpu_type" : "AMD EPYC 7R32" , "total_memory" : 66681196544 , "architecture" : "x86_64" , "platform" : "linux-unix-x86_64" } } How to opt-out By passing the --usage-stats to the text-generation-launcher you can control how much usage statistics are being collected. --usage-stats=no-stack will not emit the stack traces from errors and the error types, but will continue to send start and stop events --usage-stats=off will completely disable everything < > Update on GitHub ← Internal Architecture Consuming TGI → Collection of Usage Statistics What data is collected How to opt-out
Context_aware_Prompt_Tuning__Advancing_In_Context_.txt
Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods CPT combines In-Context Learning (ICL), Prompt Tuning (PT), and adversarial optimization to improve few-shot learning by refining context embeddings. CPT updates the context tokens by optimizing both the context and the training examples, encapsulating them into a novel loss design that minimizes overfitting, enables more effective optimization, and drives significant improvements in classification tasks. The abstract from the paper is: Large Language Models (LLMs) can perform few-shot learning using either optimization-based approaches or In-Context Learning (ICL). Optimization-based methods often suffer from overfitting, as they require updating a large number of parameters with limited data. In contrast, ICL avoids overfitting but typically underperforms compared to optimization-based methods and is highly sensitive to the selection, order, and format of demonstration examples. To overcome these challenges, we introduce Context-aware Prompt Tuning (CPT), a method inspired by ICL, Prompt Tuning (PT), and adversarial attacks. CPT builds on the ICL strategy of concatenating examples before the input, extending it by incorporating PT-like learning to refine the context embedding through iterative optimization, extracting deeper insights from the training examples. Our approach carefully modifies specific context tokens, considering the unique structure of the examples within the context. 
In addition to updating the context with PT-like optimization, CPT draws inspiration from adversarial attacks, adjusting the input based on the labels present in the context while preserving the inherent value of the user-provided data. To ensure robustness and stability during optimization, we employ a projected gradient descent algorithm, constraining token embeddings to remain close to their original values and safeguarding the quality of the context. Our method has demonstrated superior accuracy across multiple classification tasks using various LLM models, outperforming existing baselines and effectively addressing the overfitting challenge in few-shot learning. Take a look at Example for a step-by-step guide on how to train a model with CPT. CPTConfig class peft. CPTConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False num_virtual_tokens : int = None token_dim : int = None num_transformer_submodules : typing.Optional[int] = None num_attention_heads : typing.Optional[int] = None num_layers : typing.Optional[int] = None cpt_token_ids : typing.Optional[list[int]] = None cpt_mask : typing.Optional[list[int]] = None cpt_tokens_type_mask : typing.Optional[list[int]] = None opt_weighted_loss_type : typing.Optional[typing.Literal['none', 'decay']] = 'none' opt_loss_decay_factor : typing.Optional[float] = 1.0 opt_projection_epsilon : typing.Optional[float] = 0.1 opt_projection_format_epsilon : typing.Optional[float] = 0.1 tokenizer_name_or_path : typing.Optional[str] = None ) CPT Configuration class extending PeftConfig for Context-aware Prompt Tuning (CPT). This class introduces additional parameters required for CPT, such as: Token type masks Prompt tuning initialization Loss weighting Projection settings For more details, see the paper: https://arxiv.org/abs/2410.17222 CPTEmbedding class peft. CPTEmbedding < source > ( config word_embeddings ) CPTEmbedding is a custom embedding layer designed for Context-aware Prompt Tuning (CPT) in PEFT. It initializes embeddings, applies prompt-specific projections, and computes loss using label masks. calculate_loss < source > ( base_model_output labels cpt_type_mask config ) → ModelOutput Parameters base_model_output (ModelOutput) — Output from the base model containing logits. labels (torch.Tensor) — Ground-truth labels for the input tokens. cpt_type_mask (torch.Tensor) — Token type mask used for filtering valid loss terms. config (Namespace) — Configuration object containing loss-related hyperparameters. Returns ModelOutput The base model output with computed loss. Computes the loss for CPT models with optional exponential decay. forward < source > ( indices ) → torch.Tensor Parameters indices (torch.Tensor) — Indices of the tokens to be embedded. Returns torch.Tensor Sum of prompt embeddings and delta embeddings. Computes the prompt embeddings and applies delta adjustments. get_projection < source > ( ) Applies epsilon-based projection to the delta embeddings to control their norm. set_updated_tokens < source > ( ) Sets up a backward hook to selectively update token gradients based on the CPT token type mask. < > Update on GitHub ← HRA Bone → Context-aware Prompt Tuning: Advancing In- Context Learning with Adversarial Methods CPT Config CPT Embedding
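To connect the configuration fields above to actual code, here is a rough sketch of building a CPTConfig from a tokenized few-shot context and wrapping a causal LM with it. The base model, template text, and mask values are illustrative assumptions only; the linked Example notebook shows the exact token-type conventions CPT expects during training.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import CPTConfig, get_peft_model

model_id = "bigscience/bloomz-560m"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative few-shot context; CPT refines the embeddings of these context tokens.
context = "Review: great movie! Sentiment: positive\nReview: boring plot. Sentiment: negative\n"
context_ids = tokenizer(context, add_special_tokens=False)["input_ids"]

config = CPTConfig(
    cpt_token_ids=context_ids,
    cpt_mask=[1] * len(context_ids),              # which context tokens participate (simplified)
    cpt_tokens_type_mask=[1] * len(context_ids),  # token-type ids (simplified here)
    opt_weighted_loss_type="decay",
    opt_loss_decay_factor=0.95,
    opt_projection_epsilon=0.1,
    opt_projection_format_epsilon=0.1,
    tokenizer_name_or_path=model_id,
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()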
Security.txt
Security

The Hugging Face Hub offers several security features to ensure that your code and data are secure. Beyond offering private repositories for models, datasets, and Spaces, the Hub supports access tokens, commit signatures, and malware scanning. Hugging Face is GDPR compliant. If a contract or specific data storage is something you’ll need, we recommend taking a look at our Expert Acceleration Program . Hugging Face can also offer Business Associate Addendums or GDPR data processing agreements through an Enterprise Plan . Hugging Face is also SOC2 Type 2 certified , meaning we provide security certification to our customers and actively monitor and patch any security weaknesses. For any other security questions, please feel free to send us an email at [email protected] .

Contents

User Access Tokens
Two-Factor Authentication (2FA)
Git over SSH
Signing commits with GPG
Single Sign-On (SSO)
Malware Scanning
Pickle Scanning
Secrets Scanning
Third-party scanner: Protect AI
Resource Groups