Server_infrastructure.txt | Server infrastructure

The dataset viewer has two main components that work together to return queries about a dataset instantly:

- a user-facing web API for exploring and returning information about a dataset
- a server that runs the queries ahead of time and caches them in a database

While most of the documentation is focused on the web API, the server is crucial because it performs all the time-consuming preprocessing and stores the results so the web API can retrieve and serve them to the user. This saves the user time: instead of generating the response every time it is requested, the dataset viewer can return the preprocessed results instantly from the cache.

There are three elements that keep the server running: the job queue, workers, and the cache.

Job queue

The job queue is a list of jobs stored in a Mongo database that should be completed by the workers. The jobs are practically identical to the endpoints the user uses; the only difference is that the server runs the jobs ahead of time, and the user gets the results when they use the endpoint. There are three jobs:

- /splits corresponds to the /splits endpoint. It refreshes a dataset and then returns that dataset's splits and subsets. For every split in the dataset, it'll create a new job.
- /first-rows corresponds to the /first-rows endpoint. It gets the first 100 rows and columns of a dataset split.
- /parquet corresponds to the /parquet endpoint. It downloads the whole dataset, converts it to Parquet and publishes the Parquet files to the Hub.

You might've noticed the /rows and /search endpoints don't have a job in the queue. The responses from these endpoints are generated on demand.

Workers

Workers are responsible for executing the jobs in the queue. They complete the actual preprocessing requests, such as getting a list of splits and subsets. The workers can be controlled by configurable environment variables, like the minimum or the maximum number of rows returned by a worker or the maximum number of jobs to start per dataset user or organization.
Take a look at the workers configuration for a complete list of the environment variables if you're interested in learning more.

Cache

Once the workers complete a job, the results are stored, or cached, in a Mongo database. When a user makes a request with an endpoint like /first-rows, the dataset viewer retrieves the preprocessed response from the cache and serves it to the user. This eliminates the time a user would've waited if the server hadn't already completed the job and stored the response. As a result, users can get their requested information about a dataset (even large ones) nearly instantaneously!
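To make the caching behaviour concrete, here is a minimal sketch (not part of the original page) of a client fetching a cached /first-rows response; the dataset, config and split values are only examples.

import requests

# Fetch the precomputed /first-rows response for a public dataset.
# The dataset/config/split values below are illustrative.
API_URL = "https://datasets-server.huggingface.co/first-rows"
params = {"dataset": "ibm/duorc", "config": "SelfRC", "split": "train"}

response = requests.get(API_URL, params=params)
response.raise_for_status()
payload = response.json()
print(len(payload["rows"]), "rows served from the cache")

Because the server has already run the corresponding job, this request is answered from the cache rather than by reprocessing the dataset.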
Interface__ZeroShotImageClassificationOutputValue_.txt | Interface: ZeroShotImageClassificationOutputValue

Properties

label

• label: string

Defined in inference/src/tasks/cv/zeroShotImageClassification.ts:23

score

• score: number

Defined in inference/src/tasks/cv/zeroShotImageClassification.ts:24
Deploy_models_to_Amazon_SageMaker.txt | Deploy models to Amazon SageMaker

Deploying a 🤗 Transformers model in SageMaker for inference is as easy as:

from sagemaker.huggingface import HuggingFaceModel

# create Hugging Face Model Class and deploy it as SageMaker endpoint
huggingface_model = HuggingFaceModel(...).deploy()

This guide shows you how to deploy models with zero code using the Inference Toolkit. The Inference Toolkit builds on top of the pipeline feature from 🤗 Transformers. Learn how to:

- Install and set up the Inference Toolkit.
- Deploy a 🤗 Transformers model trained in SageMaker.
- Deploy a 🤗 Transformers model from the Hugging Face Model Hub (https://huggingface.co/models).
- Run a Batch Transform Job using 🤗 Transformers and Amazon SageMaker.
- Create a custom inference module.

Installation and setup

Before deploying a 🤗 Transformers model to SageMaker, you need to sign up for an AWS account. If you don't have an AWS account yet, learn more here.

Once you have an AWS account, get started using one of the following:

- SageMaker Studio
- SageMaker notebook instance
- Local environment

To start training locally, you need to set up an appropriate IAM role.

Upgrade to the latest sagemaker version:

pip install sagemaker --upgrade

SageMaker environment

Set up your SageMaker environment as shown below:

import sagemaker

sess = sagemaker.Session()
role = sagemaker.get_execution_role()

Note: The execution role is only available when running a notebook within SageMaker. If you run get_execution_role in a notebook not on SageMaker, expect a region error.

Local environment

Set up your local environment as shown below:

import sagemaker
import boto3

iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='role-name-of-your-iam-role-with-right-permissions')['Role']['Arn']
sess = sagemaker.Session()

Deploy a 🤗 Transformers model trained in SageMaker

There are two ways to deploy your Hugging Face model trained in SageMaker:

- Deploy it after your training has finished.
- Deploy your saved model at a later time from S3 with the model_data.

📓 Open the deploy_transformer_model_from_s3.ipynb notebook for an example of how to deploy a model from S3 to SageMaker for inference.

Deploy after training

To deploy your model directly after training, ensure all required files are saved in your training script, including the tokenizer and the model.
If you use the Hugging Face Trainer, you can pass your tokenizer as an argument to the Trainer. It will be automatically saved when you call trainer.save_model().

from sagemaker.huggingface import HuggingFace

############ pseudo code start ############

# create Hugging Face Estimator for training
huggingface_estimator = HuggingFace(...)

# start the train job with our uploaded datasets as input
huggingface_estimator.fit(...)

############ pseudo code end ############

# deploy model to SageMaker Inference
predictor = huggingface_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# example request: you always need to define "inputs"
data = {
    "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}

# request
predictor.predict(data)

After you run your request, you can delete the endpoint as shown:

# delete endpoint
predictor.delete_endpoint()

Deploy with model_data

If you've already trained your model and want to deploy it at a later time, use the model_data argument to specify the location of your tokenizer and model weights.

from sagemaker.huggingface.model import HuggingFaceModel

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data="s3://models/my-bert-model/model.tar.gz",  # path to your trained SageMaker model
    role=role,                                            # IAM role with permissions to create an endpoint
    transformers_version="4.26",                          # Transformers version used
    pytorch_version="1.13",                               # PyTorch version used
    py_version="py39",                                    # Python version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# example request: you always need to define "inputs"
data = {
    "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}

# request
predictor.predict(data)

After you run your request, you can delete the endpoint again with:

# delete endpoint
predictor.delete_endpoint()

Create a model artifact for deployment

For later deployment, you can create a model.tar.gz file that contains all the required files, such as:

- pytorch_model.bin
- tf_model.h5
- tokenizer.json
- tokenizer_config.json

For example, your file should look like this:

model.tar.gz/
|- pytorch_model.bin
|- vocab.txt
|- tokenizer_config.json
|- config.json
|- special_tokens_map.json

Create your own model.tar.gz from a model on the 🤗 Hub:

1. Download a model:

git lfs install
git clone git@hf.co:{repository}

2. Create a tar file:

cd {repository}
tar zcvf model.tar.gz *

3. Upload model.tar.gz to S3:

aws s3 cp model.tar.gz <s3://{my-s3-path}>

Now you can provide the S3 URI to the model_data argument to deploy your model later.

Deploy a model from the 🤗 Hub

To deploy a model directly from the 🤗 Hub to SageMaker, define two environment variables when you create a HuggingFaceModel:

- HF_MODEL_ID defines the model ID, which is automatically loaded from huggingface.co/models when you create a SageMaker endpoint. Access 10,000+ models on the 🤗 Hub through this environment variable.
- HF_TASK defines the task for the 🤗 Transformers pipeline. A complete list of tasks can be found here.

⚠️ Pipelines are not optimized for parallelism (multi-threading) and tend to consume a lot of RAM. For example, on a GPU-based instance, the pipeline operates on a single vCPU.
When this vCPU becomes saturated with preprocessing the inference requests, it can create a bottleneck, preventing the GPU from being fully utilized for model inference. Learn more here.

from sagemaker.huggingface.model import HuggingFaceModel

# Hub model configuration <https://huggingface.co/models>
hub = {
    'HF_MODEL_ID': 'distilbert-base-uncased-distilled-squad',  # model_id from hf.co/models
    'HF_TASK': 'question-answering'                            # NLP task you want to use for predictions
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # IAM role with permissions to create an endpoint
    transformers_version="4.26",  # Transformers version used
    pytorch_version="1.13",       # PyTorch version used
    py_version="py39",            # Python version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# example request: you always need to define "inputs"
data = {
    "inputs": {
        "question": "What is used for inference?",
        "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
    }
}

# request
predictor.predict(data)

After you run your request, you can delete the endpoint again with:

# delete endpoint
predictor.delete_endpoint()

📓 Open the deploy_transformer_model_from_hf_hub.ipynb notebook for an example of how to deploy a model from the 🤗 Hub to SageMaker for inference.

Run batch transform with 🤗 Transformers and SageMaker

After training a model, you can use SageMaker batch transform to perform inference with the model. Batch transform accepts your inference data as an S3 URI, and SageMaker takes care of downloading the data, running the prediction, and uploading the results to S3. For more details about batch transform, take a look here.

⚠️ The Hugging Face Inference DLC currently only supports .jsonl for batch transform due to the complex structure of textual data.

Note: Make sure your inputs fit the max_length of the model during preprocessing.
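As a rough illustration of that note (this snippet is not part of the original guide; the model id and file name are only examples), you could truncate each input with the model's tokenizer before writing the .jsonl file:

import json
from transformers import AutoTokenizer

# Example only: pre-truncate raw texts to the model's maximum input length
# before writing the batch-transform input file.
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased-finetuned-sst-2-english")

texts = ["this movie is terrible", "this movie is amazing"]
with open("input.jsonl", "w") as f:
    for text in texts:
        # tokenize with truncation, then decode back to text that fits the model
        ids = tokenizer(text, truncation=True, max_length=tokenizer.model_max_length)["input_ids"]
        f.write(json.dumps({"inputs": tokenizer.decode(ids, skip_special_tokens=True)}) + "\n")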
If you trained a model using the Hugging Face Estimator, call the transformer() method to create a transform job for a model based on the training job (see here for more details):

batch_job = huggingface_estimator.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord'
)

batch_job.transform(
    data='s3://s3-uri-to-batch-data',
    content_type='application/json',
    split_type='Line'
)

If you want to run your batch transform job later or with a model from the 🤗 Hub, create a HuggingFaceModel instance and then call the transformer() method:

from sagemaker.huggingface.model import HuggingFaceModel

# Hub model configuration <https://huggingface.co/models>
hub = {
    'HF_MODEL_ID': 'distilbert/distilbert-base-uncased-finetuned-sst-2-english',
    'HF_TASK': 'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # IAM role with permissions to create an endpoint
    transformers_version="4.26",  # Transformers version used
    pytorch_version="1.13",       # PyTorch version used
    py_version="py39",            # Python version used
)

# create transformer to run a batch job
batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    strategy='SingleRecord'
)

# starts batch transform job and uses S3 data as input
batch_job.transform(
    data='s3://sagemaker-s3-demo-test/samples/input.jsonl',
    content_type='application/json',
    split_type='Line'
)

The input.jsonl looks like this:

{"inputs": "this movie is terrible"}
{"inputs": "this movie is amazing"}
{"inputs": "SageMaker is pretty cool"}
{"inputs": "SageMaker is pretty cool"}
{"inputs": "this movie is terrible"}
{"inputs": "this movie is amazing"}

📓 Open the sagemaker-notebook.ipynb notebook for an example of how to run a batch transform job for inference.

Deploy an LLM to SageMaker using TGI

If you are interested in using a high-performance serving container for LLMs, you can use the Hugging Face TGI container, which utilizes the Text Generation Inference library. A list of compatible models can be found here.

First, make sure that the latest version of the SageMaker SDK is installed:

pip install "sagemaker>=2.231.0"

Then, we import the SageMaker Python SDK and instantiate a sagemaker_session to find the current region and execution role.

import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
import time

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

Next we retrieve the LLM image URI. We use the helper function get_huggingface_llm_image_uri() to generate the appropriate image URI for Hugging Face Large Language Model (LLM) inference. The function takes a required parameter backend and several optional parameters. The backend specifies the type of backend to use for the model: "huggingface" refers to using the Hugging Face TGI backend.

image_uri = get_huggingface_llm_image_uri(
    backend="huggingface",
    region=region
)

Now that we have the image URI, the next step is to configure the model object. We specify a unique name, the image_uri for the managed TGI container, and the execution role for the endpoint.
Additionally, we specify a number of environment variables, including HF_MODEL_ID, which corresponds to the model from the Hugging Face Hub that will be deployed, and HF_TASK, which configures the inference task to be performed by the model.

You should also define SM_NUM_GPUS, which specifies the tensor parallelism degree of the model. Tensor parallelism can be used to split the model across multiple GPUs, which is necessary when working with LLMs that are too big for a single GPU. To learn more about tensor parallelism with inference, see our previous blog post. Set SM_NUM_GPUS to the number of available GPUs on your selected instance type. For example, the ml.g5.2xlarge instance used below has a single GPU, so we set SM_NUM_GPUS to 1; an instance with 4 GPUs, such as ml.g4dn.12xlarge, would use SM_NUM_GPUS set to 4.

Note that you can optionally reduce the memory and computational footprint of the model by setting the HF_MODEL_QUANTIZE environment variable to true, but this lower weight precision could affect the quality of the output for some models.

model_name = "llama-3-1-8b-instruct" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

hub = {
    'HF_MODEL_ID': 'meta-llama/Llama-3.1-8B-Instruct',
    'SM_NUM_GPUS': '1',
    'HUGGING_FACE_HUB_TOKEN': '<REPLACE WITH YOUR TOKEN>',
}

assert hub['HUGGING_FACE_HUB_TOKEN'] != '<REPLACE WITH YOUR TOKEN>', "You have to provide a token."

model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri
)

Next, we invoke the deploy method to deploy the model.

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name=model_name
)

Once the model is deployed, we can invoke it to generate text. We pass an input prompt and run the predict method to generate a text response from the LLM running in the TGI container.

input_data = {
    "inputs": "The diamondback terrapin was the first reptile to",
    "parameters": {
        "do_sample": True,
        "max_new_tokens": 100,
        "temperature": 0.7,
        "watermark": True
    }
}

predictor.predict(input_data)

We receive the following auto-generated text response:

[{'generated_text': 'The diamondback terrapin was the first reptile to make the list, followed by the American alligator, the American crocodile, and the American box turtle. The polecat, a ferret-like animal, and the skunk rounded out the list, both having gained their slots because they have proven to be particularly dangerous to humans.\n\nCalifornians also seemed to appreciate the new list, judging by the comments left after the election.\n\n“This is fantastic,” one commenter declared.\n\n“California is a very'}]

Once we are done experimenting, we delete the endpoint and the model resources.

predictor.delete_model()
predictor.delete_endpoint()

User defined code and modules

The Hugging Face Inference Toolkit allows the user to override the default methods of the HuggingFaceHandlerService. You will need to create a folder named code/ with an inference.py file in it. See here for more details on how to archive your model artifacts. For example:

model.tar.gz/
|- pytorch_model.bin
|- ....
|- code/
  |- inference.py
  |- requirements.txt

The inference.py file contains your custom inference module, and the requirements.txt file contains additional dependencies that should be added. The custom module can override the following methods:

- model_fn(model_dir) overrides the default method for loading a model. The return value model will be used in predict_fn for predictions.
  It receives the model_dir argument, which is the path to your unzipped model.tar.gz.
- transform_fn(model, data, content_type, accept_type) overrides the default transform function with your custom implementation. You will need to implement your own preprocess, predict and postprocess steps in the transform_fn. This method can't be combined with input_fn, predict_fn or output_fn mentioned below.
- input_fn(input_data, content_type) overrides the default method for preprocessing. The return value data will be used in predict_fn for predictions. The inputs are:
  - input_data is the raw body of your request.
  - content_type is the content type from the request header.
- predict_fn(processed_data, model) overrides the default method for predictions. The return value predictions will be used in output_fn. The input is processed_data, the result of the preprocessing step.
- output_fn(prediction, accept) overrides the default method for postprocessing. The return value result will be the response of your request (e.g. JSON). The inputs are:
  - predictions is the result from predict_fn.
  - accept is the accept type from the HTTP request, e.g. application/json.

Here is an example of a custom inference module with model_fn, input_fn, predict_fn, and output_fn:

from sagemaker_huggingface_inference_toolkit import decoder_encoder

def model_fn(model_dir):
    # implement custom code to load the model
    loaded_model = ...
    return loaded_model

def input_fn(input_data, content_type):
    # decode the input data (e.g. JSON string -> dict)
    data = decoder_encoder.decode(input_data, content_type)
    return data

def predict_fn(data, model):
    # call your custom model with the data
    predictions = model(data, ...)
    return predictions

def output_fn(prediction, accept):
    # convert the model output to the desired output format (e.g. dict -> JSON string)
    response = decoder_encoder.encode(prediction, accept)
    return response

Customize your inference module with only model_fn and transform_fn:

from sagemaker_huggingface_inference_toolkit import decoder_encoder

def model_fn(model_dir):
    # implement custom code to load the model
    loaded_model = ...
    return loaded_model

def transform_fn(model, input_data, content_type, accept):
    # decode the input data (e.g. JSON string -> dict)
    data = decoder_encoder.decode(input_data, content_type)
    # call your custom model with the data
    outputs = model(data, ...)
    # convert the model output to the desired output format (e.g. dict -> JSON string)
    response = decoder_encoder.encode(outputs, accept)
    return response
Train_Medusa.txt | Train Medusa

This tutorial will show you how to train a Medusa model on a dataset of your choice. Please check out the speculation documentation for more information on how Medusa works and speculation in general.

What are the benefits of training a Medusa model?

Training Medusa heads can greatly improve the speed of generation. Medusa adds extra "heads" to LLMs to predict multiple future tokens simultaneously. When augmenting a model with Medusa, the original model stays untouched, and only the new heads are fine-tuned during training.

One of the most important things is to have a good dataset (with data similar to what will be used in production), because Medusa has a much higher hit rate when the generation is in-domain. If you train Medusa on a dataset that is very different from the one you will use in production, the model will not be able to predict the future tokens accurately, and consequently the speedup will be minimal or non-existent.

Self-distillation (Generating data for training)

There are many methods for preparing data for training, but one of the easiest and most effective ways is to "self-distill" the data. This means that you can use the same model to generate the data that you will use to train the model. Essentially, you prompt the model with an input similar to what you will use in production, and the model generates the output. We'll use this output to help train the Medusa heads to predict the n+1, n+2, n+3, etc. tokens in the sequence.
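The tutorial below generates this data with the create_data.py script from the Medusa repository, but the underlying idea can be sketched in a few lines. In this sketch (not part of the original tutorial) the prompts, output file name and local endpoint are assumptions; it simply sends in-domain prompts to a running TGI instance and stores the generations.

import json
import requests

# Self-distillation sketch: collect (prompt, generation) pairs from the same
# model that will later be augmented with Medusa heads.
# Assumes a TGI instance is listening on localhost:8080, as started further below.
prompts = [
    "Explain what speculative decoding is.",
    "Summarize the benefits of Medusa heads in two sentences.",
]

records = []
for prompt in prompts:
    resp = requests.post(
        "http://localhost:8080/generate",
        json={"inputs": prompt, "parameters": {"max_new_tokens": 256}},
    )
    resp.raise_for_status()
    records.append({"prompt": prompt, "completion": resp.json()["generated_text"]})

with open("self_distilled_pairs.json", "w") as f:
    json.dump(records, f, indent=2)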
Training

The original implementation of Medusa is available at https://github.com/FasterDecoding/Medusa, and we'll follow a very similar process to train the model as described in the original repository.

Getting Started

There are two methods for training the model:

- torchrun, which is a wrapper around torch.distributed.launch
- a forked version of axolotl that supports Medusa

In this tutorial we'll use torchrun to train the model, as it is the most straightforward way, but similar steps can be followed to train the model using axolotl if you prefer.

Training with torchrun

mkdir medusa-training
cd medusa-training

pyenv install 3.10
pyenv local 3.10

uv venv -p 3.10
source .venv/bin/activate

Now let's clone the original Medusa repository and install the library.

git clone https://github.com/FasterDecoding/Medusa.git
cd Medusa
pip install -e .

Next we'll need some data to train on; we can use the ShareGPT_Vicuna_unfiltered dataset that is available on the Hugging Face Hub.

apt install git-lfs
git lfs install
git clone https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered

Currently our directory structure looks like this:

.
├── assets
├── CITATION.cff
├── create_data.py
├── data_generation
├── deepspeed.json
├── last_run_prepared
├── LICENSE
├── llm_judge
├── medusa
├── medusa_llm.egg-info
├── mistral.json
├── notebooks
├── pyproject.toml
├── README.md
├── ROADMAP.md
├── scripts
├── ShareGPT_Vicuna_unfiltered
│   ├── README.md
│   ├── ShareGPT_2023.05.04v0_Wasteland_Edition.json
│   └── ShareGPT_V4.3_unfiltered_cleaned_split.json
├── simple_gradio_interface.py
├── tiny-llama.json
└── vicuna_7b_qlora_stage1

Start Training

Now let's generate the data and start training the model. This process will take a while since we are generating data from the model.

First make sure you have an instance of TGI running with the model you want to use for self-distillation.

model=HuggingFaceH4/zephyr-7b-beta
volume=/home/ubuntu/.cache/huggingface/hub/

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model

Now we can generate the data using the create_data.py script.

python create_data.py \
    --input-filename ShareGPT_Vicuna_unfiltered/ShareGPT_V4.3_unfiltered_cleaned_split.json \
    --output-filename zephyr_self_distill.json

At this point the terminal shows the data-generation progress (the original tutorial includes a screenshot here, not reproduced). Note: the screenshot only uses the first 500 examples from the dataset to speed up the process; you should use a much larger dataset for training.

Now we can finally get to the fun part and start training the model! Using torchrun we can easily launch the Medusa training script with the zephyr_self_distill.json configuration file.

NOTE: If you just self-distilled, you may still have the model running; make sure to stop it before starting the training in order to allow all of the resources to be used for training.
WANDB_MODE=offline torchrun --nproc_per_node=4 medusa/train/train_legacy.py \
    --model_name_or_path HuggingFaceH4/zephyr-7b-beta \
    --data_path zephyr_self_distill.json \
    --bf16 True \
    --output_dir zephyr_out \
    --num_train_epochs 5 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "no" \
    --learning_rate 1e-3 \
    --weight_decay 0.0 \
    --warmup_ratio 0.1 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --lazy_preprocess True \
    --medusa_num_heads 3 \
    --medusa_num_layers 1 \
    --deepspeed deepspeed.json

If successful, you should see output similar to the one below:

wandb: Run history:
wandb: train/epoch ▁▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇███
wandb: train/global_step ▁▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇███
wandb: train/learning_rate ▅███▇▇▆▅▅▄▃▂▂▁▁▁
wandb: train/loss ██▆▄▄▃▃▂▂▃▁▁▂▁▁▁
wandb: train/medusa0_loss ▆▆▇▆▆▅▄▅▃▃▃▃▂▂▂▂▂▃▂▂▂▁▁▁▂▁▁▁▁▁█▁▁▁▂▁▁▁▁▁
wandb: train/medusa0_top1 ▁▁▁▁▁▁▁▁▃▂▃▃▄▄▄▃▄▃▄▄▅▅▆▅▆▆▇▅▇▇▄▇█▇▅▇█▆▇▇
wandb: train/medusa1_loss ▇▇█▇▇▆▅▅▃▄▃▃▃▃▃▃▃▃▃▃▂▁▂▂▂▁▁▂▁▁▇▁▁▁▂▁▁▁▁▁
wandb: train/medusa1_top1 ▁▁▁▁▁▁▁▁▃▂▃▃▃▄▄▃▃▂▃▃▅▅▆▄█▆▇▅▇▇▅█▇▇▅▇█▆▆▇
wandb: train/medusa2_loss ▃▃▄▄▄▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁█▁▁▁▂▁▁▁▁▁
wandb: train/medusa2_top1 ▁▁▁▂▁▁▁▁▂▂▃▃▃▄▄▃▃▂▃▃▅▆▅▄█▆▆▅▆▆▄█▇▇▄▇█▆▆▇
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: train/epoch 2.0
wandb: train/global_step 16
wandb: train/learning_rate 0.0
wandb: train/loss 14.8906
wandb: train/medusa0_loss 4.25
wandb: train/medusa0_top1 0.28809
wandb: train/medusa1_loss 4.8125
wandb: train/medusa1_top1 0.22727
wandb: train/medusa2_loss 5.5
wandb: train/medusa2_top1 0.17293
wandb: train/total_flos 0.0
wandb: train/train_loss 23.98242
wandb: train/train_runtime 396.9266
wandb: train/train_samples_per_second 2.519
wandb: train/train_steps_per_second 0.04

Last but most importantly, don't forget to push this model to the Hugging Face Hub so you can use it in your projects.

python -m medusa.hf_utils \
    --folder zephyr_out_medusa_mlp_zephyr-7b-beta_medusa_3_lr_0.001_layers_1 \
    --repo drbh/zephyr_medusa_demo

Woo, we've successfully trained a Medusa model and pushed it to the Hugging Face Hub! 🎉
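As a final, optional illustration (not part of the original tutorial): assuming TGI picks up the Medusa heads when its --model-id points at the pushed repository (see the speculation guide for the supported setup), the endpoint can then be queried like any other, for example with huggingface_hub.

from huggingface_hub import InferenceClient

# Sketch: query a local TGI instance restarted with
# --model-id drbh/zephyr_medusa_demo (the repository pushed above).
client = InferenceClient("http://localhost:8080")
output = client.text_generation(
    "What is speculative decoding and why does it speed up generation?",
    max_new_tokens=128,
)
print(output)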
Data_files_Configuration.txt | Data files Configuration

There are no constraints on how to structure dataset repositories. However, if you want the Dataset Viewer to show certain data files, or to separate your dataset into train/validation/test splits, you need to structure your dataset accordingly. Often it is as simple as naming your data files according to their split names, e.g. train.csv and test.csv.

What are splits and subsets?

Machine learning datasets typically have splits and may also have subsets. A dataset is generally made of splits (e.g. train and test) that are used during different stages of training and evaluating a model. A subset (also called a configuration) is a sub-dataset contained within a larger dataset. Subsets are especially common in multilingual speech datasets, where there may be a different subset for each language. If you're interested in learning more about splits and subsets, check out the Splits and subsets guide!

Automatic splits detection

Splits are automatically detected based on file and directory names. For example, this is a dataset with train, test, and validation splits:

my_dataset_repository/
├── README.md
├── train.csv
├── test.csv
└── validation.csv

To structure your dataset by naming your data files or directories according to their split names, see the File names and splits documentation and the companion collection of example datasets.
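For illustration (the repository id below is a placeholder, not from the original page), a repository laid out as above loads with one split per file:

from datasets import load_dataset

# Placeholder repository id; splits are inferred from train.csv / test.csv / validation.csv.
dataset = load_dataset("username/my_dataset_repository")
print(dataset)
# Expected: a DatasetDict with "train", "test" and "validation" splits.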
Manual splits and subsets configuration

You can choose the data files to show in the Dataset Viewer for your dataset using YAML. This is useful if you want to specify which file goes into which split manually. You can also define multiple subsets for your dataset, and pass dataset building parameters (e.g. the separator to use for CSV files).

Here is an example of a configuration defining a subset called "benchmark" with a test split:

configs:
- config_name: benchmark
  data_files:
  - split: test
    path: benchmark.csv

See the documentation on Manual configuration for more information, and take a look at the example datasets. A short loading sketch for this subset is given at the end of this page.

Supported file formats

See the File formats doc page to find the list of supported formats and recommendations for your dataset. If your dataset uses CSV or TSV files, you can find more information in the example datasets.

Image and Audio datasets

For image and audio classification datasets, you can also use directories to name the image and audio classes. And if your images/audio files have metadata (e.g. captions, bounding boxes, transcriptions, etc.), you can have metadata files next to them. We provide two guides that you can check out:

- How to create an image dataset (example datasets)
- How to create an audio dataset (example datasets)
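To make the "benchmark" YAML example above concrete, here is a small sketch (placeholder repository id, not from the original page) of how that subset and its test split would typically be loaded:

from datasets import load_dataset

# Placeholder repository id; "benchmark" is the config_name declared in the YAML example above.
benchmark_test = load_dataset("username/my_dataset_repository", "benchmark", split="test")
print(benchmark_test)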
Dataset_features.txt | Dataset features

Features defines the internal structure of a dataset. It is used to specify the underlying serialization format. What's more interesting to you, though, is that Features contains high-level information about everything from the column names and types to the ClassLabel. You can think of Features as the backbone of a dataset.

The Features format is simple: dict[column_name, column_type]. It is a dictionary of column name and column type pairs. The column type provides a wide range of options for describing the type of data you have.

Let's have a look at the features of the MRPC dataset from the GLUE benchmark:

>>> from datasets import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
>>> dataset.features
{'idx': Value(dtype='int32', id=None),
 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None),
 'sentence1': Value(dtype='string', id=None),
 'sentence2': Value(dtype='string', id=None)}

The Value feature tells 🤗 Datasets:

- The idx data type is int32.
- The sentence1 and sentence2 data types are string.

🤗 Datasets supports many other data types such as bool, float32 and binary, to name just a few. Refer to Value for a full list of supported data types.
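These column types can also be declared by hand; the following is a small sketch (not from the original page) that rebuilds the MRPC features shown above as an explicit Features object:

from datasets import ClassLabel, Features, Value

# Reconstruct the MRPC column types listed above.
features = Features(
    {
        "idx": Value("int32"),
        "label": ClassLabel(names=["not_equivalent", "equivalent"]),
        "sentence1": Value("string"),
        "sentence2": Value("string"),
    }
)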
The ClassLabel feature informs 🤗 Datasets that the label column contains two classes. The classes are labeled not_equivalent and equivalent. Labels are stored as integers in the dataset. When you retrieve the labels, ClassLabel.int2str() and ClassLabel.str2int() carry out the conversion from integer value to label name, and vice versa.

If your data type contains a list of objects, then you want to use the Sequence feature. Remember the SQuAD dataset?

>>> from datasets import load_dataset
>>> dataset = load_dataset('squad', split='train')
>>> dataset.features
{'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None),
 'context': Value(dtype='string', id=None),
 'id': Value(dtype='string', id=None),
 'question': Value(dtype='string', id=None),
 'title': Value(dtype='string', id=None)}

The answers field is constructed using the Sequence feature because it contains two subfields, text and answer_start, which are lists of string and int32, respectively. See the flatten section to learn how you can extract the nested subfields as their own independent columns.

The array feature type is useful for creating arrays of various sizes. You can create arrays with two dimensions using Array2D, and even arrays with five dimensions using Array5D.

>>> features = Features({'a': Array2D(shape=(1, 3), dtype='int32')})

The array type also allows the first dimension of the array to be dynamic. This is useful for handling sequences with variable lengths such as sentences, without having to pad or truncate the input to a uniform shape.

>>> features = Features({'a': Array3D(shape=(None, 5, 2), dtype='int32')})

Audio feature

Audio datasets have a column with type Audio, which contains three important fields:

- array: the decoded audio data represented as a 1-dimensional array.
- path: the path to the downloaded audio file.
- sampling_rate: the sampling rate of the audio data.

When you load an audio dataset and call the audio column, the Audio feature automatically decodes and resamples the audio file:

>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,  0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}

Index into an audio dataset using the row index first and then the audio column (dataset[0]["audio"]) to avoid decoding and resampling all the audio files in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset.
With decode=False, the Audio type simply gives you the path or the bytes of the audio file, without decoding it into an array:

>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train").cast_column("audio", Audio(decode=False))
>>> dataset[0]
{'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'},
 'english_transcription': 'I would like to set up a joint account with my partner',
 'intent_class': 11,
 'lang_id': 4,
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'transcription': 'I would like to set up a joint account with my partner'}

Image feature

Image datasets have a column with type Image, which loads PIL.Image objects from images stored as bytes.

When you load an image dataset and call the image column, the Image feature automatically decodes the image file:

>>> from datasets import load_dataset, Image
>>> dataset = load_dataset("beans", split="train")
>>> dataset[0]["image"]
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x125506CF8>

Index into an image dataset using the row index first and then the image column (dataset[0]["image"]) to avoid decoding all the image files in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset.

With decode=False, the Image type simply gives you the path or the bytes of the image file, without decoding it into a PIL.Image:

>>> dataset = load_dataset("beans", split="train").cast_column("image", Image(decode=False))
>>> dataset[0]["image"]
{'bytes': None, 'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/772e7c1fba622cff102b85dd74bcce46e8168634df4eaade7bedd3b8d91d3cd7/train/healthy/healthy_train.265.jpg'}

Depending on the dataset, you may get the path to the local downloaded image, or the content of the image as bytes if the dataset is not made of individual files.

You can also define a dataset of images from numpy arrays:

>>> ds = Dataset.from_dict({"i": [np.zeros(shape=(16, 16, 3), dtype=np.uint8)]}, features=Features({"i": Image()}))

In this case the numpy arrays are encoded into PNG (or TIFF if the precision of the pixel values is important). For multi-channel arrays like RGB or RGBA, only uint8 is supported. If you use a larger precision, you get a warning and the array is downcast to uint8. For grayscale images you can use the integer or float precision you want, as long as it is compatible with Pillow. A warning is shown if your image's integer or float precision is too high, in which case the array is downcast: an int64 array is downcast to int32, and a float64 array is downcast to float32.
Process_audio_data.txt | Process audio data

This guide shows specific methods for processing audio datasets. Learn how to:

- Resample the sampling rate.
- Use map() with audio datasets.

For a guide on how to process any type of dataset, take a look at the general process guide.

Cast

The cast_column() function is used to cast a column to another feature to be decoded. When you use this function with the Audio feature, you can resample the sampling rate:

>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

Audio files are decoded and resampled on-the-fly, so the next time you access an example, the audio file is resampled to 16kHz:

>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05,  2.1729663e-04,  2.2145823e-04, ...,  3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 16000}

Map

The map() function helps preprocess your entire dataset at once.
Depending on the type of model you're working with, you'll need to either load a feature extractor or a processor.

For pretrained speech recognition models, load a feature extractor and tokenizer and combine them in a processor:

>>> from transformers import AutoFeatureExtractor, Wav2Vec2CTCTokenizer, Wav2Vec2Processor
>>> model_checkpoint = "facebook/wav2vec2-large-xlsr-53"
>>> # after defining a vocab.json file you can instantiate a tokenizer object:
>>> tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)
>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

For fine-tuned speech recognition models, you only need to load a processor:

>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")

When you use map() with your preprocessing function, include the audio column to ensure you're actually resampling the audio data:

>>> def prepare_dataset(batch):
...     audio = batch["audio"]
...     batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
...     batch["input_length"] = len(batch["input_values"])
...     with processor.as_target_processor():
...         batch["labels"] = processor(batch["sentence"]).input_ids
...     return batch

>>> dataset = dataset.map(prepare_dataset, remove_columns=dataset.column_names)
Adding_support_for_new_architectures.txt | Adding support for new architectures

NOTE: ❗ This section does not apply to the inference of decoder models with autoregressive sampling integrated through transformers-neuronx. If you want to add support for these models, please open an issue on the Optimum Neuron GitHub repo, and ping maintainers for help.

You want to export and run a new model on AWS Inferentia or Trainium? Check the guideline, and submit a pull request to 🤗 Optimum Neuron's GitHub repo!

To support a new model architecture in the Optimum Neuron library, here are the steps to follow:

1. Implement a custom Neuron configuration.
2. Export and validate the model.
3. Contribute to the GitHub repo.

Implement a custom Neuron configuration

To support the export of a new model to a Neuron-compatible format, the first thing to do is to define a Neuron configuration, describing how to export the PyTorch model by specifying:

- The input names.
- The output names.
- The dummy inputs used to trace the model: the Neuron Compiler records the computational graph via tracing and works on the resulting TorchScript module.
- The compilation arguments used to control the trade-off between hardware efficiency (latency, throughput) and accuracy.

Depending on the choice of model and task, we represent the data above with configuration classes. Each configuration class is associated with a specific model architecture and follows the naming convention ArchitectureNameNeuronConfig. For instance, the configuration that specifies the Neuron export of BERT models is BertNeuronConfig.
Since many architectures share similar properties for their Neuron configuration, 🤗 Optimum adopts a 3-level class hierarchy: Abstract and generic base classes. These handle all the fundamental features, while being agnostic to the modality (text, image, audio, etc.). Middle-end classes. These are aware of the modality. Multiple config classes could exist for the same modality, depending on the inputs they support. They specify which input generators should be used for generating the dummy inputs, but remain model-agnostic. Model-specific classes like the BertNeuronConfig mentioned above. These are the ones actually used to export models. Example: Adding support for ESM models Here we take the support of ESM models as an example. Let’s create an EsmNeuronConfig class in optimum/exporters/neuron/model_configs.py . When an ESM model is interpreted as a text encoder, we can inherit from the middle-end class TextEncoderNeuronConfig . Since the modeling and configuration of ESM are almost the same as BERT’s when it is interpreted as an encoder, we can use the NormalizedConfigManager with model_type=bert to normalize the configuration and generate dummy inputs for tracing the model. As a last step, since optimum-neuron is an extension of optimum , we need to register the Neuron config that we create to the TasksManager with the register_in_tasks_manager decorator by specifying the model type and the supported tasks. Copied @register_in_tasks_manager( "esm" , *[ "feature-extraction" , "fill-mask" , "text-classification" , "token-classification" ] ) class EsmNeuronConfig ( TextEncoderNeuronConfig ): NORMALIZED_CONFIG_CLASS = NormalizedConfigManager.get_normalized_config_class( "bert" ) ATOL_FOR_VALIDATION = 1e-3 # absolute tolerance used when comparing the Neuron model against the model run on CPU @property def inputs ( self ) -> List [ str ]: return [ "input_ids" , "attention_mask" ] Export and validate the model With the Neuron configuration class that you implemented, now run a quick test to check that it works as expected: Export Copied optimum-cli export neuron --model facebook/esm2_t33_650M_UR50D --task text-classification --batch_size 1 --sequence_length 16 esm_neuron/ During the export, validate_model_outputs will be called to validate the outputs of your exported Neuron model by comparing them to the results of PyTorch on the CPU. You could also validate the model manually with: Copied from optimum.exporters.neuron import validate_model_outputs validate_model_outputs( neuron_config, base_model, neuron_model_path, neuron_named_outputs, neuron_config.ATOL_FOR_VALIDATION ) Inference (optional) Copied from transformers import AutoTokenizer from optimum.neuron import NeuronModelForSequenceClassification model = NeuronModelForSequenceClassification.from_pretrained( "esm_neuron/" ) tokenizer = AutoTokenizer.from_pretrained( "esm_neuron/" ) inputs = tokenizer( "Hello, my dog is cute" , return_tensors= "pt" ) logits = model(**inputs).logits Contribute to the GitHub repo We are almost all set. Now submit a pull request to make your work accessible to all community members! Open an issue in the Optimum Neuron GitHub repo to describe the new feature and make it visible to Optimum Neuron’s maintainers. Add the model to the exporter test in optimum-neuron/tests/exporters/exporters_utils.py and the inference test in optimum-neuron/tests/inference/inference_utils.py . Open a pull request! (Don’t forget to link it to the issue you opened, so that the maintainers can better track it and provide help when needed.)
We usually test smaller checkpoints to accelerate the CIs; you can find tiny models for testing under the Hugging Face Internal Testing Organization . You have made a new model accessible on Neuron for the community! Thanks for joining us in the endeavor of democratizing good machine learning 🤗.
CPU_inference.txt | CPU inference
With some optimizations, it is possible to efficiently run large model inference on a CPU. One of these optimization techniques involves compiling the PyTorch code into an intermediate format for high-performance environments like C++. The other technique fuses multiple operations into one kernel to reduce the overhead of running each operation separately. You’ll learn how to use BetterTransformer for faster inference, and how to convert your PyTorch code to TorchScript . If you’re using an Intel CPU, you can also use graph optimizations from Intel Extension for PyTorch to boost inference speed even more. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime or OpenVINO (if you’re using an Intel CPU). BetterTransformer BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are: fusion, which combines multiple sequential operations into a single “kernel” to reduce the number of computation steps; skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors. BetterTransformer also converts all attention operations to use the more memory-efficient scaled dot product attention . BetterTransformer is not supported for all models. Check this list to see if a model supports BetterTransformer. Before you start, make sure you have 🤗 Optimum installed . Enable BetterTransformer with the PreTrainedModel.to_bettertransformer() method: Copied from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "bigcode/starcoder" , torch_dtype= "auto" ) model = model.to_bettertransformer() TorchScript TorchScript is an intermediate PyTorch model representation that can be run in production environments where performance is important. You can train a model in PyTorch and then export it to TorchScript to free the model from Python performance constraints. PyTorch traces a model to return a ScriptFunction that is optimized with just-in-time compilation (JIT). Compared to the default eager mode, JIT mode in PyTorch typically yields better performance for inference using optimization techniques like operator fusion.
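To make the tracing step above concrete, here is a minimal sketch that exports a Transformers model to TorchScript for CPU inference; the DistilBERT checkpoint and the example sentence are arbitrary choices for illustration, not taken from this guide.

# Minimal TorchScript export sketch; the checkpoint and inputs are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# torchscript=True makes the model return tuples instead of ModelOutput objects, which jit.trace expects
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torchscript=True)
model.eval()

inputs = tokenizer("TorchScript keeps inference fast on CPU.", return_tensors="pt")
with torch.no_grad():
    traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))

torch.jit.save(traced, "traced_distilbert.pt")
loaded = torch.jit.load("traced_distilbert.pt")
logits = loaded(inputs["input_ids"], inputs["attention_mask"])[0]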
For a gentle introduction to TorchScript, see the Introduction to PyTorch TorchScript tutorial. With the Trainer class, you can enable JIT mode for CPU inference by setting the --jit_mode_eval flag: Copied python examples/pytorch/question-answering/run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ --jit_mode_eval For PyTorch >= 1.14.0, JIT mode can benefit any model for prediction and evaluation since the dict input is supported in jit.trace . For PyTorch < 1.14.0, JIT mode can benefit a model if its forward parameter order matches the tuple input order in jit.trace , such as a question-answering model. If the forward parameter order does not match the tuple input order in jit.trace , as for a text classification model, jit.trace will fail; the exception is caught so that evaluation falls back to the default eager mode, and a log message notifies users of the fallback. IPEX graph optimization Intel® Extension for PyTorch (IPEX) provides further optimizations in JIT mode for Intel CPUs, and we recommend combining it with TorchScript for even faster performance. The IPEX graph optimization fuses operations like Multi-head attention, Concat Linear, Linear + Add, Linear + Gelu, Add + LayerNorm, and more. To take advantage of these graph optimizations, make sure you have IPEX installed : Copied pip install intel_extension_for_pytorch Set the --use_ipex and --jit_mode_eval flags in the Trainer class to enable JIT mode with the graph optimizations: Copied python examples/pytorch/question-answering/run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ --use_ipex \ --jit_mode_eval 🤗 Optimum Learn more details about using ORT with 🤗 Optimum in the Optimum Inference with ONNX Runtime guide. This section only provides a brief and simple example. ONNX Runtime (ORT) is a model accelerator that runs inference on CPUs by default. ORT is supported by 🤗 Optimum, which can be used in 🤗 Transformers without making too many changes to your code. You only need to replace the 🤗 Transformers AutoClass with its equivalent ORTModel for the task you’re solving, and load a checkpoint in the ONNX format. For example, if you’re running inference on a question answering task, load the optimum/roberta-base-squad2 checkpoint which contains a model.onnx file: Copied from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForQuestionAnswering model = ORTModelForQuestionAnswering.from_pretrained( "optimum/roberta-base-squad2" ) tokenizer = AutoTokenizer.from_pretrained( "deepset/roberta-base-squad2" ) onnx_qa = pipeline( "question-answering" , model=model, tokenizer=tokenizer) question = "What's my name?" context = "My name is Philipp and I live in Nuremberg." pred = onnx_qa(question, context) If you have an Intel CPU, take a look at 🤗 Optimum Intel which supports a variety of compression techniques (quantization, pruning, knowledge distillation) and tools for converting models to the OpenVINO format for higher performance inference.
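To make the Optimum Intel pointer above more concrete, here is a minimal, hedged sketch of running a model with the OpenVINO backend; the text-classification task and the checkpoint are illustrative assumptions rather than an example from this guide.

# Illustrative Optimum Intel / OpenVINO sketch; task and checkpoint are assumptions.
# Requires: pip install optimum[openvino]
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly
model = OVModelForSequenceClassification.from_pretrained(checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO makes CPU inference faster."))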
ORPO_Trainer.txt | ORPO Trainer Overview Odds Ratio Preference Optimization (ORPO) was introduced in ORPO: Monolithic Preference Optimization without Reference Model by Jiwoo Hong , Noah Lee , and James Thorne . The abstract from the paper is the following: While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval_{2.0} (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-alpha (7B) and Mistral-ORPO-beta (7B). It studies the crucial role of SFT within the context of preference alignment. Using preference data, the method posits that a minor penalty for the disfavored generation, together with a strong adaptation signal to the chosen response via a simple log odds ratio term appended to the NLL loss, is sufficient for preference-aligned SFT.
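For readers who want the objective spelled out, the loss sketched below follows the paper's notation (TRL's beta hyperparameter plays the role of λ here):

\mathcal{L}_{\text{ORPO}} = \mathbb{E}_{(x, y_w, y_l)}\big[\mathcal{L}_{\text{SFT}} + \lambda \cdot \mathcal{L}_{\text{OR}}\big],
\qquad
\mathcal{L}_{\text{OR}} = -\log \sigma\!\left(\log \frac{\operatorname{odds}_\theta(y_w \mid x)}{\operatorname{odds}_\theta(y_l \mid x)}\right),
\qquad
\operatorname{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)},

where \mathcal{L}_{\text{SFT}} is the NLL loss over the chosen response, y_w and y_l are the chosen and rejected responses, and \sigma is the sigmoid function.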
Thus ORPO is a reference model-free preference optimization algorithm eliminating the necessity for an additional preference alignment phase thus saving compute and memory. The official code can be found in xfactlab/orpo . This post-training method was contributed by Kashif Rasul , Lewis Tunstall and Alvaro Bartolome . Quick start This example demonstrates how to train a model using the ORPO method. We use the Qwen 0.5B model as the base model. We use the preference data from the UltraFeedback dataset . You can view the data in the dataset here: Below is the script to train the model: Copied # train_orpo.py from datasets import load_dataset from trl import ORPOConfig, ORPOTrainer from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) train_dataset = load_dataset( "trl-lib/ultrafeedback_binarized" , split= "train" ) training_args = ORPOConfig(output_dir= "Qwen2-0.5B-ORPO" , logging_steps= 10 ) trainer = ORPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset) trainer.train() Execute the script using the following command: Copied accelerate launch train_orpo.py Distributed across 8 GPUs, the training takes approximately 30 minutes. You can verify the training progress by checking the reward graph. An increasing trend in the reward margin indicates that the model is improving and generating better responses over time. To see how the trained model performs, you can use the TRL Chat CLI . $ trl chat --model_name_or_path trl-lib/Qwen2-0.5B-ORPO <quentin_gallouedec>: What is the best programming language? <trl-lib/Qwen2-0.5B-ORPO>: It's challenging to determine the best programming language as no one language is perfect, as the complexity of a task and the type of project are significant factors. Some popular languages include Java, Python, JavaScript, and C++. If you have specific needs or requirements for a specific project, it's important to choose the language that best suits those needs. Here are some other factors to consider when choosing a programming language for a project: • Language proficiency: A good programming language is more likely to be easy to understand and use, and will allow developers to collaborate on projects more efficiently. • Ease of use: There are tools and libraries available to make programming more accessible, so developers should choose a language that can help them get started easier. • Code readability: A clear and concise codebase should be easy to read and understand, especially when working with large projects. • Tool and framework support: There are numerous libraries available for Python, Java, and JavaScript, along with tools like IDEs and static code analysis tools. • Accessibility: Some languages and tools have features that make them more accessible to developers with disabilities, such as support for screen readers. • Version control: As your projects grow and complexity increases, version control tools can be beneficial for tracking changes. Expected dataset type ORPO requires a preference dataset . The ORPOTrainer supports both conversational and standard dataset format. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset. Although the ORPOTrainer supports both explicit and implicit prompts, we recommend using explicit prompts. 
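As a quick illustration of the standard, explicit-prompt preference format mentioned above, a single record looks roughly like the following; the texts are made up for illustration.

# A made-up example of one standard preference record with an explicit prompt.
example = {
    "prompt": "What color is the sky?",
    "chosen": "The sky is blue on a clear day.",
    "rejected": "Green.",
}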
If provided with an implicit prompt dataset, the trainer will automatically extract the prompt from the "chosen" and "rejected" columns. For more information, refer to the preference style section. Example script We provide an example script to train a model using the ORPO method. The script is available in examples/scripts/orpo.py To test the ORPO script with the Qwen2 0.5B model on the UltraFeedback dataset , run the following command: Copied accelerate launch examples/scripts/orpo.py \ --model_name_or_path Qwen/Qwen2-0.5B-Instruct \ --dataset_name trl-lib/ultrafeedback_binarized \ --num_train_epochs 1 \ --logging_steps 25 \ --output_dir Qwen2-0.5B-ORPO Usage tips For Mixture of Experts Models: Enabling the auxiliary loss MOEs are the most efficient if the load is about equally distributed between experts. To ensure that we train MOEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss. This option is enabled by setting output_router_logits=True in the model config (e.g. MixtralConfig ). To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter router_aux_loss_coef=... (default: 0.001 ) in the model config. Logged metrics While training and evaluating we record the following reward metrics: rewards/chosen : the mean log probabilities of the policy model for the chosen responses scaled by beta rewards/rejected : the mean log probabilities of the policy model for the rejected responses scaled by beta rewards/accuracies : mean of how often the chosen rewards are > than the corresponding rejected rewards rewards/margins : the mean difference between the chosen and corresponding rejected rewards log_odds_chosen : the mean log odds ratio of the chosen responses over the rejected responses log_odds_ratio : the mean of the log(sigmoid(log_odds_chosen)) nll_loss : the mean negative log likelihood loss from the SFT part of the loss over chosen responses ORPOTrainer class trl. ORPOTrainer < source > ( model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, str, NoneType] = None args : typing.Optional[trl.trainer.orpo_config.ORPOConfig] = None data_collator : typing.Optional[transformers.data.data_collator.DataCollator] = None train_dataset : typing.Optional[datasets.arrow_dataset.Dataset] = None eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], NoneType] = None processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None model_init : typing.Optional[typing.Callable[[], transformers.modeling_utils.PreTrainedModel]] = None callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers : tuple = (None, None) preprocess_logits_for_metrics : typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None peft_config : typing.Optional[dict] = None compute_metrics : typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], dict]] = None ) Parameters model ( transformers.PreTrainedModel ) — The model to train, preferably an AutoModelForSequenceClassification . args ( ORPOConfig ) — The ORPO config arguments to use for training. data_collator ( transformers.DataCollator ) — The data collator to use for training. 
If None is specified, the default data collator ( DPODataCollatorWithPadding ) will be used, which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences. train_dataset ( datasets.Dataset ) — The dataset to use for training. eval_dataset ( datasets.Dataset ) — The dataset to use for evaluation. processing_class ( PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin , optional ) — Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model. model_init ( Callable[[], transformers.PreTrainedModel] ) — The model initializer to use for training. If None is specified, the default model initializer will be used. callbacks ( list[transformers.TrainerCallback] ) — The callbacks to use for training. optimizers ( tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] ) — The optimizer and scheduler to use for training. preprocess_logits_for_metrics ( Callable[[torch.Tensor, torch.Tensor], torch.Tensor] ) — The function to use to preprocess the logits before computing the metrics. peft_config ( dict , defaults to None ) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model. compute_metrics ( Callable[[EvalPrediction], dict] , optional ) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values. Initialize ORPOTrainer. build_tokenized_answer < source > ( prompt answer ) The Llama tokenizer does not satisfy enc(a + b) = enc(a) + enc(b) . It does ensure enc(a + b) = enc(a) + enc(a + b)[len(enc(a)):] . Reference: https://github.com/EleutherAI/lm-evaluation-harness/pull/531#issuecomment-1595586257 concatenated_forward < source > ( model : Module batch : dict ) Run the given model on the given batch of inputs, concatenating the chosen and rejected inputs together. We do this to avoid doing two forward passes, because it’s faster for FSDP. concatenated_inputs < source > ( batch : dict is_encoder_decoder : bool = False label_pad_token_id : int = -100 padding_value : int = 0 device : typing.Optional[torch.device] = None ) Parameters batch — A batch of data. Must contain the keys ‘chosen_input_ids’ and ‘rejected_input_ids’, which are tensors of shape (batch_size, sequence_length). is_encoder_decoder — Whether the model is an encoder-decoder model. label_pad_token_id — The label pad token id. padding_value — The padding value to use for the concatenated inputs_ids. device — The device for the concatenated inputs. Concatenate the chosen and rejected inputs into a single tensor. create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) — The name of the model. dataset_name ( str , optional , defaults to None ) — The name of the dataset used for training. tags ( str , list[str] or None , optional , defaults to None ) — Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer .
evaluation_loop < source > ( dataloader : DataLoader description : str prediction_loss_only : typing.Optional[bool] = None ignore_keys : typing.Optional[list[str]] = None metric_key_prefix : str = 'eval' ) Overriding built-in evaluation loop to store metrics for each batch. Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict() . Works both with or without labels. generate_from_model < source > ( model batch : dict ) Generate samples from the model and reference model for the given batch of inputs. get_batch_logps < source > ( logits : FloatTensor labels : LongTensor average_log_prob : bool = False label_pad_token_id : int = -100 is_encoder_decoder : bool = False ) Parameters logits — Logits of the model (unnormalized). Shape: (batch_size, sequence_length, vocab_size) labels — Labels for which to compute the log probabilities. Label tokens with a value of label_pad_token_id are ignored. Shape: (batch_size, sequence_length) average_log_prob — If True, return the average log probability per (non-masked) token. Otherwise, return the sum of the log probabilities of the (non-masked) tokens. label_pad_token_id — The label pad token id. is_encoder_decoder — Whether the model is an encoder-decoder model. Compute the log probabilities of the given labels under the given logits. get_batch_loss_metrics < source > ( model batch : dict train_eval : typing.Literal['train', 'eval'] = 'train' ) Compute the ORPO loss and other metrics for the given batch of inputs for train or test. log < source > ( logs : dict start_time : typing.Optional[float] = None ) Parameters logs ( dict[str, float] ) — The values to log. start_time ( float or None , optional , defaults to None ) — Start time of the training. Log logs on the various objects watching training, including stored metrics. odds_ratio_loss < source > ( policy_chosen_logps : FloatTensor policy_rejected_logps : FloatTensor ) → A tuple of three tensors Parameters policy_chosen_logps — Log probabilities of the policy model for the chosen responses. Shape: (batch_size,) policy_rejected_logps — Log probabilities of the policy model for the rejected responses. Shape: (batch_size,) Returns A tuple of three tensors (losses, chosen_rewards, rejected_rewards). The losses tensor contains the ORPO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively. The log odds ratio of the chosen responses over the rejected responses ratio for logging purposes. The log(sigmoid(log_odds_chosen)) for logging purposes. Compute ORPO’s odds ratio (OR) loss for a batch of policy and reference model log probabilities. tokenize_row < source > ( feature model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None ) Tokenize a single row from a ORPO specific dataset. At this stage, we don’t convert to PyTorch tensors yet; we just handle the truncation in case the prompt + chosen or prompt + rejected responses is/are too long. First we truncate the prompt; if we’re still too long, we truncate the chosen/rejected. We also create the labels for the chosen/rejected responses, which are of length equal to the sum of the length of the prompt and the chosen/rejected response, with label_pad_token_id for the prompt tokens. ORPOConfig class trl. 
ORPOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 1e-06 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 
'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool = False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False max_length : typing.Optional[int] = None max_prompt_length : typing.Optional[int] = None max_completion_length : typing.Optional[int] = None beta : float = 0.1 disable_dropout : bool = True label_pad_token_id : int = -100 padding_value : typing.Optional[int] = None truncation_mode : str = 'keep_end' generate_during_eval : bool = False is_encoder_decoder : typing.Optional[bool] = None model_init_kwargs : typing.Optional[dict[str, typing.Any]] = None dataset_num_proc : typing.Optional[int] = None ) Parameters learning_rate ( float , optional , defaults to 1e-6 ) — Initial learning rate for AdamW optimizer. The default value replaces that of TrainingArguments . max_length ( Optional[int] , optional , defaults to None ) — Maximum length of the sequences (prompt + completion) in the batch. This argument is required if you want to use the default data collator. max_prompt_length ( Optional[int] , optional , defaults to None ) — Maximum length of the prompt. This argument is required if you want to use the default data collator. max_completion_length ( Optional[int] , optional , defaults to None ) — Maximum length of the completion. This argument is required if you want to use the default data collator and your model is an encoder-decoder. beta ( float , optional , defaults to 0.1 ) — Parameter controlling the relative ratio loss weight in the ORPO loss. 
In the paper , it is denoted by λ. In the code , it is denoted by alpha . disable_dropout ( bool , optional , defaults to True ) — Whether to disable dropout in the model. label_pad_token_id ( int , optional , defaults to -100 ) — Label pad token id. This argument is required if you want to use the default data collator. padding_value ( Optional[int] , optional , defaults to None ) — Padding value to use. If None , the padding value of the tokenizer is used. truncation_mode ( str , optional , defaults to "keep_end" ) — Truncation mode to use when the prompt is too long. Possible values are "keep_end" or "keep_start" . This argument is required if you want to use the default data collator. generate_during_eval ( bool , optional , defaults to False ) — If True , generates and logs completions from the model to W&B during evaluation. is_encoder_decoder ( Optional[bool] , optional , defaults to None ) — When using the model_init argument (callable) to instantiate the model instead of the model argument, you need to specify if the model returned by the callable is an encoder-decoder model. model_init_kwargs ( Optional[dict[str, Any]] , optional , defaults to None ) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string. dataset_num_proc ( Optional[int] , optional , defaults to None ) — Number of processes to use for processing the dataset. Configuration class for the ORPOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
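To illustrate the HfArgumentParser remark above, here is a minimal sketch; the script name and the command-line values are made up for the example.

# train.py -- minimal sketch of exposing ORPOConfig on the command line; values are illustrative.
from transformers import HfArgumentParser
from trl import ORPOConfig

parser = HfArgumentParser(ORPOConfig)
(training_args,) = parser.parse_args_into_dataclasses()
print(training_args.output_dir, training_args.beta)

# Example invocation:
#   python train.py --output_dir Qwen2-0.5B-ORPO --beta 0.1 --learning_rate 1e-6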
Generate_images_with_Stable_Diffusion_models_on_AW.txt | Generate images with Stable Diffusion models on AWS Inferentia Stable Diffusion There is a notebook version of that tutorial here . 🤗 Optimum extends Diffusers to support inference on the second generation of Neuron devices (powering Trainium and Inferentia 2). It aims at preserving the ease of use of Diffusers on Neuron. To get started, make sure you have configured your inf2 / trn1 instance , and installed optimum: Copied pip install "optimum[neuronx, diffusers]" Compile Stable Diffusion To deploy models, you will need to compile them to TorchScript optimized for AWS Neuron. In the case of Stable Diffusion, there are four components which need to be exported to the .neuron format to boost the performance: Text encoder U-Net VAE encoder VAE decoder You can either compile and export a Stable Diffusion checkpoint via the CLI or the NeuronStableDiffusionPipeline class. Export via CLI Here is an example of exporting Stable Diffusion components with the Optimum CLI: Copied optimum-cli export neuron --model stabilityai/stable-diffusion-2-1-base \ --batch_size 1 \ --height 512 ` # height in pixels of generated image, eg. 512, 768` \ --width 512 ` # width in pixels of generated image, eg.
512, 768` \ --num_images_per_prompt 1 ` # number of images to generate per prompt, defaults to 1` \ --auto_cast matmul ` # cast only matrix multiplication operations` \ --auto_cast_type bf16 ` # cast operations from FP32 to BF16` \ sd_neuron/ We recommend using a inf2.8xlarge or a larger instance for the model compilation. You will also be able to compile the model with the Optimum CLI on a CPU-only instance (needs ~35 GB memory), and then run the pre-compiled model on inf2.xlarge to reduce the expenses. In this case, don’t forget to disable validation of inference by adding the --disable-validation argument. Export via Python API Here is an example of exporting stable diffusion components with NeuronStableDiffusionPipeline : To apply optimized compute of Unet’s attention score, please configure your environment variable with export NEURON_FUSE_SOFTMAX=1 . Besides, don’t hesitate to tweak the compilation configuration to find the best tradeoff between performance v.s accuracy in your use case. By default, we suggest casting FP32 matrix multiplication operations to BF16 which offers good performance with moderate sacrifice of the accuracy. Check out the guide from AWS Neuron documentation to better understand the options for your compilation. Copied >>> from optimum.neuron import NeuronStableDiffusionPipeline >>> model_id = "runwayml/stable-diffusion-v1-5" >>> compiler_args = { "auto_cast" : "matmul" , "auto_cast_type" : "bf16" } >>> input_shapes = { "batch_size" : 1 , "height" : 512 , "width" : 512 } >>> stable_diffusion = NeuronStableDiffusionPipeline.from_pretrained(model_id, export= True , **compiler_args, **input_shapes) # Save locally or upload to the HuggingFace Hub >>> save_directory = "sd_neuron/" >>> stable_diffusion.save_pretrained(save_directory) >>> stable_diffusion.push_to_hub( ... save_directory, repository_id= "my-neuron-repo" , use_auth_token= True ... ) Text-to-Image NeuronStableDiffusionPipeline class allows you to generate images from a text prompt on neuron devices similar to the experience with Diffusers . With pre-compiled Stable Diffusion models, now generate an image with a prompt on Neuron: Copied >>> from optimum.neuron import NeuronStableDiffusionPipeline >>> stable_diffusion = NeuronStableDiffusionPipeline.from_pretrained( "sd_neuron/" ) >>> prompt = "a photo of an astronaut riding a horse on mars" >>> image = stable_diffusion(prompt).images[ 0 ] Image-to-Image With the NeuronStableDiffusionImg2ImgPipeline class, you can generate a new image conditioned on a text prompt and an initial image. Copied import requests from PIL import Image from io import BytesIO from optimum.neuron import NeuronStableDiffusionImg2ImgPipeline # compile & save model_id = "nitrosocke/Ghibli-Diffusion" input_shapes = { "batch_size" : 1 , "height" : 512 , "width" : 512 } pipeline = NeuronStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export= True , **input_shapes) pipeline.save_pretrained( "sd_img2img/" ) url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) init_image = Image. open (BytesIO(response.content)).convert( "RGB" ) init_image = init_image.resize(( 512 , 512 )) prompt = "ghibli style, a fantasy landscape with snowcapped mountains, trees, lake with detailed reflection. 
sunlight and cloud in the sky, warm colors, 8K" image = pipeline(prompt=prompt, image=init_image, strength= 0.75 , guidance_scale= 7.5 ).images[ 0 ] image.save( "fantasy_landscape.png" ) image prompt output ghibli style, a fantasy landscape with snowcapped mountains, trees, lake with detailed reflection. warm colors, 8K Inpaint With the NeuronStableDiffusionInpaintPipeline class, you can edit specific parts of an image by providing a mask and a text prompt. Copied import requests from PIL import Image from io import BytesIO from optimum.neuron import NeuronStableDiffusionInpaintPipeline model_id = "runwayml/stable-diffusion-inpainting" input_shapes = { "batch_size" : 1 , "height" : 512 , "width" : 512 } pipeline = NeuronStableDiffusionInpaintPipeline.from_pretrained(model_id, export= True , **input_shapes) pipeline.save_pretrained( "sd_inpaint/" ) def download_image ( url ): response = requests.get(url) return Image. open (BytesIO(response.content)).convert( "RGB" ) img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" init_image = download_image(img_url).resize(( 512 , 512 )) mask_image = download_image(mask_url).resize(( 512 , 512 )) prompt = "Face of a yellow cat, high resolution, sitting on a park bench" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[ 0 ] image.save( "cat_on_bench.png" ) image mask_image prompt output Face of a yellow cat, high resolution, sitting on a park bench InstructPix2Pix With the NeuronStableDiffusionInstructPix2PixPipeline class, you can apply instruction-based image editing using both text guidance and image guidance. Copied import requests import PIL from io import BytesIO from optimum.neuron import NeuronStableDiffusionInstructPix2PixPipeline def download_image ( url ): response = requests.get(url) return PIL.Image. open (BytesIO(response.content)).convert( "RGB" ) model_id = "timbrooks/instruct-pix2pix" input_shapes = { "batch_size" : 1 , "height" : 512 , "width" : 512 } pipe = NeuronStableDiffusionInstructPix2PixPipeline.from_pretrained( model_id, export= True , dynamic_batch_size= True , **input_shapes, ) pipe.save_pretrained( "sd_ip2p/" ) img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" init_image = download_image(img_url).resize(( 512 , 512 )) prompt = "Add a beautiful sunset" image = pipe(prompt=prompt, image=init_image).images[ 0 ] image.save( "sunset_mountain.png" ) image prompt output Add a beautiful sunset Stable Diffusion XL There is a notebook version of that tutorial here . Stable Diffusion XL (SDXL) is a latent diffusion model for text-to-image. Compared to the previous versions of Stable Diffusion models, it improves the quality of generated images with a times larger UNet. Compile Stable Diffusion XL To deploy SDXL models, we will also start by compiling the models. We support the export of following components in the pipeline to boost the speed: Text encoder Second text encoder U-Net (a three times larger UNet than the one in Stable Diffusion pipeline) VAE encoder VAE decoder Export via CLI Here is an example of exporting SDXL components with Optimum CLI: Copied optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \ --batch_size 1 \ --height 1024 ` # height in pixels of generated image, eg. 
768, 1024` \ --width 1024 ` # width in pixels of generated image, eg. 768, 1024` \ --num_images_per_prompt 1 ` # number of images to generate per prompt, defaults to 1` \ --auto_cast matmul ` # cast only matrix multiplication operations` \ --auto_cast_type bf16 ` # cast operations from FP32 to BF16` \ sd_neuron_xl/ We recommend using a inf2.8xlarge or a larger instance for the model compilation. You will also be able to compile the model with Optimum CLI on a CPU-only instance (needs ~92 GB memory), and then run the pre-compiled models on inf2.xlarge to reduce the expenses. In this case, don’t forget to disable validation of inference by adding the --disable-validation argument. Export via Python API Here is an example of exporting stable diffusion components with NeuronStableDiffusionXLPipeline : Copied >>> from optimum.neuron import NeuronStableDiffusionXLPipeline >>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" >>> compiler_args = { "auto_cast" : "matmul" , "auto_cast_type" : "bf16" } >>> input_shapes = { "batch_size" : 1 , "height" : 1024 , "width" : 1024 } >>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained(model_id, export= True , **compiler_args, **input_shapes) # Save locally or upload to the HuggingFace Hub >>> save_directory = "sd_neuron_xl/" >>> stable_diffusion_xl.save_pretrained(save_directory) >>> stable_diffusion_xl.push_to_hub( ... save_directory, repository_id= "my-neuron-repo" , use_auth_token= True ... ) Text-to-Image With pre-compiled SDXL models, now generate an image with a text prompt on Neuron: Copied >>> from optimum.neuron import NeuronStableDiffusionXLPipeline >>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained( "sd_neuron_xl/" ) >>> prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" >>> image = stable_diffusion_xl(prompt).images[ 0 ] Image-to-Image With NeuronStableDiffusionXLImg2ImgPipeline , you can pass an initial image, and a text prompt to condition generated images: Copied from optimum.neuron import NeuronStableDiffusionXLImg2ImgPipeline from diffusers.utils import load_image prompt = "a dog running, lake, moat" url = "https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/sd_xl/castle_friedrich.png" init_image = load_image(url).convert( "RGB" ) pipe = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained( "sd_neuron_xl/" ) image = pipe(prompt=prompt, image=init_image).images[ 0 ] image prompt output a dog running, lake, moat Inpaint With NeuronStableDiffusionXLInpaintPipeline , pass the original image and a mask of what you want to replace in the original image. Then replace the masked area with content described in a prompt. 
Copied from optimum.neuron import NeuronStableDiffusionXLInpaintPipeline from diffusers.utils import load_image img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" mask_url = ( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" ) init_image = load_image(img_url).convert( "RGB" ) mask_image = load_image(mask_url).convert( "RGB" ) prompt = "A deep sea diver floating" pipe = NeuronStableDiffusionXLInpaintPipeline.from_pretrained( "sd_neuron_xl/" ) image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, strength= 0.85 , guidance_scale= 12.5 ).images[ 0 ] image mask_image prompt output A deep sea diver floating Refine Image Quality SDXL includes a refiner model to denoise low-noise stage images generated from the base model. There are two ways to use the refiner: use the base and refiner model together to produce a refined image. use the base model to produce an image, and subsequently use the refiner model to add more details to the image. Base + refiner model Copied from optimum.neuron import NeuronStableDiffusionXLPipeline, NeuronStableDiffusionXLImg2ImgPipeline prompt = "A majestic lion jumping from a big stone at night" base = NeuronStableDiffusionXLPipeline.from_pretrained( "sd_neuron_xl/" ) image = base( prompt=prompt, num_inference_steps= 40 , denoising_end= 0.8 , output_type= "latent" , ).images[ 0 ] del base # To avoid neuron device OOM refiner = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained( "sd_neuron_xl_refiner/" ) image = refiner( prompt=prompt, num_inference_steps= 40 , denoising_start= 0.8 , image=image, ).images[ 0 ] Base to refiner model Copied from optimum.neuron import NeuronStableDiffusionXLPipeline, NeuronStableDiffusionXLImg2ImgPipeline prompt = "A majestic lion jumping from a big stone at night" base = NeuronStableDiffusionXLPipeline.from_pretrained( "sd_neuron_xl/" ) image = base(prompt=prompt, output_type= "latent" ).images[ 0 ] del base # To avoid neuron device OOM refiner = NeuronStableDiffusionXLImg2ImgPipeline.from_pretrained( "sd_neuron_xl_refiner/" ) image = refiner(prompt=prompt, image=image[ None , :]).images[ 0 ] Base Image Refined Image To avoid Neuron device out of memory, it’s suggested to finish all base inference and release the device memory before running the refiner. Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao . LCMs enable inference with fewer steps on any pre-trained LDMs, including Stable Diffusion and SDXL. In optimum-neuron , you can: Use the class NeuronLatentConsistencyModelPipeline to compile and run inference of LCMs distilled from Stable Diffusion (SD) models. And continue to use the class NeuronStableDiffusionXLPipeline for LCMs distilled from SDXL models. 
Here are examples to compile the LCMs of Stable Diffusion ( SimianLuo/LCM_Dreamshaper_v7 ) and Stable Diffusion XL( latent-consistency/lcm-sdxl ), and then run inference on AWS Inferentia 2 : Compile LCM LCM of Stable Diffusion Copied from optimum.neuron import NeuronLatentConsistencyModelPipeline model_id = "SimianLuo/LCM_Dreamshaper_v7" num_images_per_prompt = 1 input_shapes = { "batch_size" : 1 , "height" : 768 , "width" : 768 , "num_images_per_prompt" : num_images_per_prompt} compiler_args = { "auto_cast" : "matmul" , "auto_cast_type" : "bf16" } stable_diffusion = NeuronLatentConsistencyModelPipeline.from_pretrained( model_id, export= True , **compiler_args, **input_shapes ) save_directory = "lcm_sd_neuron/" stable_diffusion.save_pretrained(save_directory) # Push to hub stable_diffusion.push_to_hub(save_directory, repository_id= "my-neuron-repo" , use_auth_token= True ) # Replace with your repo id, eg. "Jingya/LCM_Dreamshaper_v7_neuronx" LCM of Stable Diffusion XL Copied from optimum.neuron import NeuronStableDiffusionXLPipeline model_id = "stabilityai/stable-diffusion-xl-base-1.0" unet_id = "latent-consistency/lcm-sdxl" num_images_per_prompt = 1 input_shapes = { "batch_size" : 1 , "height" : 1024 , "width" : 1024 , "num_images_per_prompt" : num_images_per_prompt} compiler_args = { "auto_cast" : "matmul" , "auto_cast_type" : "bf16" } stable_diffusion = NeuronStableDiffusionXLPipeline.from_pretrained( model_id, unet_id=unet_id, export= True , **compiler_args, **input_shapes ) save_directory = "lcm_sdxl_neuron/" stable_diffusion.save_pretrained(save_directory) # Push to hub stable_diffusion.push_to_hub(save_directory, repository_id= "my-neuron-repo" , use_auth_token= True ) # Replace with your repo id, eg. "Jingya/lcm-sdxl-neuronx" Text-to-Image Now we can generate images from text prompts on Inf2 using the pre-compiled model: LCM of Stable Diffusion Copied from optimum.neuron import NeuronLatentConsistencyModelPipeline pipe = NeuronLatentConsistencyModelPipeline.from_pretrained( "Jingya/LCM_Dreamshaper_v7_neuronx" ) prompts = [ "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" ] * 2 images = pipe(prompt=prompts, num_inference_steps= 4 , guidance_scale= 8.0 ).images LCM of Stable Diffusion XL Copied from optimum.neuron import NeuronStableDiffusionXLPipeline pipe = NeuronStableDiffusionXLPipeline.from_pretrained( "Jingya/lcm-sdxl-neuronx" ) prompts = [ "a close-up picture of an old man standing in the rain" ] * 2 images = pipe(prompt=prompts, num_inference_steps= 4 , guidance_scale= 8.0 ).images Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable of running inference in as little as 1 step ( check 🤗diffusers for more details ). In optimum-neuron , you can: Use the class NeuronStableDiffusionXLPipeline to compile and run inference. Here we will compile the stabilityai/sdxl-turbo model with Optimum CLI. 
Compile SDXL Turbo Copied optimum-cli export neuron --model stabilityai/sdxl-turbo --batch_size 1 --height 512 --width 512 --auto_cast matmul --auto_cast_type bf16 sdxl_turbo_neuron/ Text-to-Image Now we can generate images from text prompts on Inf2 using the pre-compiled model: Copied from optimum.neuron import NeuronStableDiffusionXLPipeline pipe = NeuronStableDiffusionXLPipeline.from_pretrained( "sdxl_turbo_neuron/" , data_parallel_mode= "all" ) prompt = [ "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" ] * 2 images = pipe(prompt=prompt, guidance_scale= 0.0 , num_inference_steps= 1 ).images Inf2 instances contain one or more Neuron devices, and each Neuron device includes 2 NeuronCore-v2. By default, we load the whole pipeline onto both Neuron cores. This means that when the batch size is divisible by 2, you can fully leverage the compute power of both cores. Load adapters LoRA Low-Rank Adaptation is a fast way to adapt the style of the images generated by Stable Diffusion. In Optimum Neuron, we support using one or multiple LoRA adapters by fusing their parameters into the original parameters of the text encoder(s) and the unet during the compilation. Below is an example of compiling Stable Diffusion models with LoRA adapters of your choice and using the compiled artifacts to generate styled images: Copied from diffusers import LCMScheduler from optimum.neuron import NeuronStableDiffusionPipeline model_id = "Lykon/dreamshaper-7" adapter_id = "latent-consistency/lcm-lora-sdv1-5" input_shapes = { "batch_size" : 1 , "height" : 512 , "width" : 512 , "num_images_per_prompt" : 1 } compiler_args = { "auto_cast" : "matmul" , "auto_cast_type" : "bf16" } # Compile pipe = NeuronStableDiffusionPipeline.from_pretrained( model_id, export= True , inline_weights_to_neff= True , # caveat: performance drop if neff/weights separated, will be improved by a future Neuron sdk release. lora_model_ids=adapter_id, lora_weight_names= "pytorch_lora_weights.safetensors" , lora_adapter_names= "lcm" , **input_shapes, **compiler_args, ) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) # Save locally or upload to the HuggingFace Hub pipe.save_pretrained( "dreamshaper_7_lcm_lora_neuron/" ) # Inference prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" image = pipe(prompt, num_inference_steps= 4 , guidance_scale= 0 ).images[ 0 ] ControlNet ControlNet conditions the Stable Diffusion model with an additional input image. In Optimum Neuron, we support the compilation of one or multiple ControlNets along with the Stable Diffusion checkpoint. Then you can use the compiled artifacts to generate styled images. Compile ControlNet We can compile one or multiple ControlNets either via the Optimum CLI or programmatically via the NeuronStableDiffusionControlNetPipeline class by passing the controlnet_ids .
Export via the Optimum CLI Copied optimum-cli export neuron -m runwayml/stable-diffusion-v1-5 --batch_size 1 --height 512 --width 512 --controlnet_ids lllyasviel/sd-controlnet-canny --num_images_per_prompt 1 sd_neuron_controlnet/ Export via Python API Copied from optimum.neuron import NeuronStableDiffusionControlNetPipeline model_id = "runwayml/stable-diffusion-v1-5" controlnet_id = "lllyasviel/sd-controlnet-canny" # [Neuron] pipeline input_shapes = { "batch_size" : 1 , "height" : 512 , "width" : 512 , "num_images_per_prompt" : 1 } compiler_args = { "auto_cast" : "matmul" , "auto_cast_type" : "bf16" } pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained( model_id, controlnet_ids=controlnet_id, export= True , **input_shapes, **compiler_args, ) pipe.save_pretrained( "sd_neuron_controlnet" ) Text-to-Image For text-to-image, we can specify an additional conditioning input. Here is an example with a canny image, a white outline of an image on a black background. The ControlNet will use the canny image as a control to guide the model to generate an image with the same outline. Copied import cv2 import numpy as np from diffusers import UniPCMultistepScheduler from diffusers.utils import load_image, make_image_grid from PIL import Image from optimum.neuron import NeuronStableDiffusionControlNetPipeline # prepare canny image original_image = load_image( "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" ) image = np.array(original_image) low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) canny_image = Image.fromarray(image) # load pre-compiled neuron model pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained( "sd_neuron_controlnet" ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) # inference output = pipe( "the mona lisa" , image=canny_image).images[ 0 ] compare = make_image_grid([original_image, canny_image, output], rows= 1 , cols= 3 ) compare.save( "compare.png" ) MultiControlNet With Optimum Neuron, you can also compose multiple ControlNet conditionings from different image inputs: Compile multiple ControlNet for SD1.5 Copied optimum-cli export neuron --inline-weights-neff --model jyoung105/stable-diffusion-v1-5 --task stable-diffusion --auto_cast matmul --auto_cast_type bf16 --batch_size 1 --num_images_per_prompt 1 --controlnet_ids lllyasviel/control_v11p_sd15_openpose lllyasviel/control_v11f1p_sd15_depth --height 512 --width 512 sd15-512x512-bf16-openpose-depth Run SD1.5 with OpenPose and Depth conditionings: Copied import numpy as np import torch from PIL import Image from controlnet_aux import OpenposeDetector from transformers import pipeline from diffusers import UniPCMultistepScheduler from diffusers.utils import load_image from optimum.neuron import NeuronStableDiffusionControlNetPipeline # OpenPose+Depth ControlNet model_id = "sd15-512x512-bf16-openpose-depth" # Load ControlNet images # 1. openpose image = load_image( "https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/input.png" ) processor = OpenposeDetector.from_pretrained( 'lllyasviel/ControlNet' ) openpose_image = processor(image) # 2. 
depth image = load_image( "https://huggingface.co/lllyasviel/control_v11p_sd15_depth/resolve/main/images/input.png" ) depth_estimator = pipeline( 'depth-estimation' ) image = depth_estimator(image)[ 'depth' ] image = np.array(image) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) depth_image = Image.fromarray(image) images = [openpose_image.resize(( 512 , 512 )), depth_image.resize(( 512 , 512 ))] # 3. inference pipe = NeuronStableDiffusionControlNetPipeline.from_pretrained(model_id) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) prompt = "a giant in a fantasy landscape, best quality" negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" image = pipe(prompt=prompt, image=images).images[ 0 ] image.save( 'out.png' ) ControlNet with Stable Diffusion XL Compile Copied optimum-cli export neuron -m stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl --batch_size 1 --height 1024 --width 1024 --controlnet_ids diffusers/controlnet-canny-sdxl-1.0-small --num_images_per_prompt 1 sdxl_neuron_controlnet/ Text-to-Image Copied import cv2 import numpy as np from diffusers.utils import load_image from PIL import Image from optimum.neuron import NeuronStableDiffusionXLControlNetPipeline # Inputs prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" negative_prompt = "low quality, bad quality, sketches" image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" ) image = np.array(image) image = cv2.Canny(image, 100 , 200 ) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) image = Image.fromarray(image) controlnet_conditioning_scale = 0.5 # recommended for good generalization pipe = NeuronStableDiffusionXLControlNetPipeline.from_pretrained( "sdxl_neuron_controlnet" ) images = pipe( prompt, negative_prompt=negative_prompt, image=image, controlnet_conditioning_scale=controlnet_conditioning_scale, ).images images[ 0 ].save( "hug_lab.png" ) PixArt-α Compile Copied optimum-cli export neuron --model PixArt-alpha/PixArt-XL-2-512x512 --batch_size 1 --height 512 --width 512 --num_images_per_prompt 1 --torch_dtype bfloat16 --sequence_length 120 pixart_alpha_neuron_512/ Text-to-Image Copied from optimum.neuron import NeuronPixArtAlphaPipeline neuron_model = NeuronPixArtAlphaPipeline.from_pretrained( "pixart_alpha_neuron_512/" ) prompt = "Oppenheimer sits on the beach on a chair, watching a nuclear exposition with a huge mushroom cloud, 120mm." image = neuron_model(prompt=prompt).images[ 0 ] Are there any other stable diffusion features that you want us to support in 🤗 Optimum-neuron ? Please file an issue to Optimum-neuron Github repo or discuss with us on HuggingFace’s community forum , cheers 🤗 ! ← Sentence Transformers on AWS Inferentia Set up AWS Trainium instance → Generate images with Stable Diffusion models on AW S Inferentia Stable Diffusion Compile Stable Diffusion Text-to- Image Image-to- Image Inpaint Instruct Pix2 Pix Stable Diffusion XL Compile Stable Diffusion XL Text-to- Image Image-to- Image Inpaint Refine Image Quality Base + refiner model Base to refiner model Latent Consistency Models Compile LCM Text-to- Image Stable Diffusion X L Turbo Compile SDX L Turbo Text-to- Image Load adapters LoRA Control Net Compile Control Net Text-to- Image Multi Control Net Control Net with Stable Diffusion XL Compile Text-to- Image Pix Art-α Compile Text-to- Image |
X-LoRA.txt | X-LoRA Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation X-LoRA PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started X-LoRA Mixture of LoRA Experts ( X-LoRA ) is a PEFT method enabling sparse or dense mixture of LoRA experts based on a high granularity (token, layer, sequence) scalings matrix. This leverages frozen LoRA adapters and a frozen base model to drastically reduces the number of parameters that need to be fine-tuned. A unique aspect of X-LoRA is its versatility: it can be applied to any transformers base model with LoRA adapters. This means that, despite the mixture of experts strategy, no changes to the model code must be made. The below graphic demonstrates how the scalings change for different prompts for each token. This highlights the activation of different adapters as the generation progresses and the sequence creates new context. The abstract from the paper is: We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoRA model to draw upon different capabilities and create never-before-used deep layer-wise combinations to solve tasks. The design is inspired by the biological principles of universality and diversity, where neural network building blocks are reused in different hierarchical manifestations. Hence, the X-LoRA model can be easily implemented for any existing large language model (LLM) without a need for modifications of the underlying structure. We develop a tailored X-LoRA model that offers scientific capabilities including forward/inverse analysis tasks and enhanced reasoning capability, focused on biomaterial analysis, protein mechanics and design. 
The impact of this work include access to readily expandable and adaptable models with strong domain knowledge and the capability to integrate across areas of knowledge. Featuring experts in biology, mathematics, reasoning, bio-inspired materials, mechanics and materials, chemistry, protein biophysics, mechanics and quantum-mechanics based molecular properties, we conduct a series of physics-focused case studies. We examine knowledge recall, protein mechanics forward/inverse tasks, protein design, adversarial agentic modeling including ontological knowledge graph construction, as well as molecular design. The model is capable not only of making quantitative predictions of nanomechanical properties of proteins or quantum mechanical molecular properties, but also reasons over the results and correctly predicts likely mechanisms that explain distinct molecular behaviors. . Please cite X-LoRA as: Copied @article{ 10.1063 / 5.0203126 , author = { Buehler , Eric L . and Buehler , Markus J .}, title = "{X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design}" , journal = { APL Machine Learning }, volume = { 2 }, number = { 2 }, pages = { 026119 }, year = { 2024 }, month = { 05 }, abstract = "{We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoRA model to draw upon different capabilities and create never-before-used deep layer-wise combinations to solve tasks. The design is inspired by the biological principles of universality and diversity, where neural network building blocks are reused in different hierarchical manifestations. Hence, the X-LoRA model can be easily implemented for any existing large language model without a need for modifications of the underlying structure. We develop a tailored X-LoRA model that offers scientific capabilities, including forward/inverse analysis tasks and enhanced reasoning capability, focused on biomaterial analysis, protein mechanics, and design. The impact of this work includes access to readily expandable and adaptable models with strong domain knowledge and the capability to integrate across areas of knowledge. Featuring experts in biology, mathematics, reasoning, bio-inspired materials, mechanics and materials, chemistry, protein biophysics, mechanics, and quantum-mechanics based molecular properties, we conduct a series of physics-focused case studies. We examine knowledge recall, protein mechanics forward/inverse tasks, protein design, adversarial agentic modeling including ontological knowledge graph construction, and molecular design. The model is capable not only of making quantitative predictions of nanomechanical properties of proteins or quantum mechanical molecular properties but also reasoning over the results and correctly predicting likely mechanisms that explain distinct molecular behaviors.}" , issn = { 2770 -9019 }, doi = { 10.1063 / 5.0203126 }, url = {https://doi.org/ 10.1063 / 5.0203126 }, eprint = {https://pubs.aip.org/aip/aml/article-pdf/doi/ 10.1063 / 5.0203126 / 19964043 / 026119 \_1\_5 .0203126 .pdf}, } XLoraConfig class peft. 
XLoraConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False hidden_size : int = None adapters : dict[str, str] = None enable_softmax : bool = True enable_softmax_topk : bool = False layerwise_scalings : bool = False xlora_depth : int = 1 xlora_size : int = 2048 xlora_dropout_p : float = 0.2 use_trainable_adapters : bool = False softmax_temperature : float = 1.0 top_k_lora : Optional[int] = None scaling_pass_value : float = 0.0 global_scaling_weight : float = 1.0 ) Parameters hidden_size ( int ) — Hidden size of the base model. adapters ( dict ) — Mapping of adapter names to the LoRA adapter id, as per PeftModel.load_adapter. They will be automatically loaded , to use as LoRA experts. When using from_pretrained, pass the new adapters dict as a keyword argument. enable_softmax ( bool , optional , defaults to True ) — Enable softmax application for the X-LoRA classifier. enable_softmax_topk ( bool , optional , defaults to False ) — Enable softmax application for the top-k LoRA adapters. Mutually exclusive to enable_softmax and must only be set if top_k_lora is. softmax_temperature ( float , optional , defaults to 1.0) — Softmax temperature, lower yields sharper predictions layerwise_scalings ( bool , optional , defaults to False ) — If True, generate scalings for each LoRA adapter (each layer). If this is False, then scalings will be broadcasted, the same, to each layer. top_k_lora ( int , optional , defaults to None) — Sparsely select the top_k LoRA experts instead of the default dense method. xlora_depth ( int , optional , defaults to 1) — Depth of the X-LoRA classifier. xlora_size ( int , optional , defaults to 2048) — Hidden size of the X-LoRA classifier, irrelevant if xlora_depth=1 . xlora_dropout_p ( float , optional , defaults to 0.2) — Dropout probability of the X-LoRA classifier, irrelevant if xlora_depth=1 . use_trainable_adapters ( bool , optional , defaults to False) — Make the adapters trainable. scaling_pass_value ( float , optional , defaults to 0) — Scaling pass value. global_scaling_weight ( float , optional , defaults to 1) — Weight to multiply output of each LoRA adapter by. This is the configuration class to store the configuration of a XLoraModel . When the config is reloaded, the paths of the adapters field is disregarded in favor of the saved adapters. As such, only the keys matter during loading. XLoraModel class peft. XLoraModel < source > ( model : nn.Module config : Union[dict[str, XLoraConfig], XLoraConfig] adapter_name : str torch_device : Optional[str] = None ephemeral_gpu_offload : bool = False autocast_adapter_dtype : bool = True **kwargs ) → torch.nn.Module Parameters model ( torch.nn.Module ) — The model to be adapted. config ( XLoraConfig ) — The configuration of the Lora model. adapter_name ( str ) — The name of the adapter, does not affect the LoRA adapter names. Returns torch.nn.Module The X-LoRA model. Creates an X-LoRA (Mixture of LoRA experts), model from a pretrained transformers model. Currently, this X-LoRA implementation only works with models with a transformer architecture. The method is described in detail in https://arxiv.org/abs/2402.07148 . 
Example: Copied >>> import torch >>> from transformers import AutoModelForCausalLM, AutoConfig, BitsAndBytesConfig >>> from peft import XLoraConfig, get_peft_model, prepare_model_for_kbit_training >>> model_config = AutoConfig.from_pretrained( "mistralai/Mistral-7B-Instruct-v0.1" ) >>> config = XLoraConfig( ... task_type= "CAUSAL_LM" , ... hidden_size=model_config.hidden_size, ... xlora_depth= 4 , ... adapters={ ... "adapter_1" : "./path/to/the/checkpoint/" , ... "adapter_2" : "./path/to/the/checkpoint/" , ... "adapter_n" : "./path/to/the/checkpoint/" , ... }, ... ) >>> int8_config = BitsAndBytesConfig(load_in_8bit= True ) >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-Instruct-v0.1" , ... trust_remote_code= True , ... attn_implementation= "flash_attention_2" , ... device_map= "cuda:0" , ... torch_dtype=torch.bfloat16, ... quantization_config=int8_config, ... ) >>> model = prepare_model_for_kbit_training(model) >>> xlora_model = get_peft_model(model, config) clear_scalings_log < source > ( ) Clear the scalings log. disable_scalings_logging < source > ( ) Disable scalings logging, without clearing the log. enable_scalings_logging < source > ( ) Enable scalings logging. get_bucketed_scalings_log < source > ( ) Returns bucketed scalings, bucketed by seq_len. Each value consists of the positions (the first) and the associated tensors. The positions are paired with the associated tensors and give the position in the scaling log. get_global_scaling_weight < source > ( ) Get the global LoRA weight. get_latest_scalings < source > ( ) Returns the latest scalings prediction, or None if no scalings have been predicted. The tensor is of shape (batch_size, seq_len, n_layers, n_classes). get_scalings_log < source > ( ) Returns a shallow copy (copying only the list itself, not the tensors) of the list containing the scalings log. Editing the list does not change the underlying log. The tensors are of shape (batch_size, seq_len, n_layers, n_classes). The seq_len dim may vary with input dimension. set_global_scaling_weight < source > ( weight : float ) Set the global LoRA weight, a scalar to multiply the output of each LoRA adapter by. This is 1 by default. This is reflected in the config. set_scaling_pass_value < source > ( value : float | None ) Set the scaling pass value, the value to set the scalings to during the scaling pass. If the value is None, the scaling pass value will be 1/n where n is the number of adapters. set_topk_lora < source > ( value : Optional[int] ) Sparsely select the specified top_k LoRA experts instead of the default dense method. Set to None to use the dense method. This is reflected in the config.
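The scalings-related helpers listed above are only described in prose, so here is a minimal usage sketch continuing from the example above. It assumes xlora_model is the PEFT-wrapped model from that snippet and that a matching tokenizer is available; the prompt, device and generation length are placeholder choices, not values from the official documentation.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
inputs = tokenizer("How do proteins fold?", return_tensors="pt").to("cuda:0")

xlora_model.enable_scalings_logging()          # start recording per-token scalings
outputs = xlora_model.generate(**inputs, max_new_tokens=32)

latest = xlora_model.get_latest_scalings()     # shape (batch_size, seq_len, n_layers, n_classes), or None
log = xlora_model.get_bucketed_scalings_log()  # scalings bucketed by sequence length
xlora_model.disable_scalings_logging()
xlora_model.clear_scalings_log()

xlora_model.set_topk_lora(2)                   # switch to sparse top-2 expert selection; None restores dense mixing

These methods live on the XLoraModel; PEFT normally forwards unknown attributes from the wrapper returned by get_peft_model to the underlying model, but if that lookup fails in your setup, call them on xlora_model.base_model instead.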
Custom_Layers_and_Utilities.txt | Custom Layers and Utilities Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Custom Layers and Utilities Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Custom Layers and Utilities This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling. Most of those are only useful if you are studying the code of the models in the library. Pytorch custom modules class transformers. Conv1D < source > ( nf nx ) Parameters nf ( int ) — The number of output features. nx ( int ) — The number of input features. 1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2). Basically works like a linear layer but the weights are transposed. class transformers.modeling_utils. PoolerStartLogits < source > ( config : PretrainedConfig ) Parameters config ( PretrainedConfig ) — The config used by the model, will be used to grab the hidden_size of the model. Compute SQuAD start logits from sequence hidden states. forward < source > ( hidden_states : FloatTensor p_mask : typing.Optional[torch.FloatTensor] = None ) → torch.FloatTensor Parameters hidden_states ( torch.FloatTensor of shape (batch_size, seq_len, hidden_size) ) — The final hidden states of the model. p_mask ( torch.FloatTensor of shape (batch_size, seq_len) , optional ) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked. Returns torch.FloatTensor The start logits for SQuAD. class transformers.modeling_utils. PoolerEndLogits < source > ( config : PretrainedConfig ) Parameters config ( PretrainedConfig ) — The config used by the model, will be used to grab the hidden_size of the model and the layer_norm_eps to use. Compute SQuAD end logits from sequence hidden states. forward < source > ( hidden_states : FloatTensor start_states : typing.Optional[torch.FloatTensor] = None start_positions : typing.Optional[torch.LongTensor] = None p_mask : typing.Optional[torch.FloatTensor] = None ) → torch.FloatTensor Parameters hidden_states ( torch.FloatTensor of shape (batch_size, seq_len, hidden_size) ) — The final hidden states of the model. start_states ( torch.FloatTensor of shape (batch_size, seq_len, hidden_size) , optional ) — The hidden states of the first tokens for the labeled span. start_positions ( torch.LongTensor of shape (batch_size,) , optional ) — The position of the first token for the labeled span. 
p_mask ( torch.FloatTensor of shape (batch_size, seq_len) , optional ) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked. Returns torch.FloatTensor The end logits for SQuAD. One of start_states or start_positions should be not None . If both are set, start_positions overrides start_states . class transformers.modeling_utils. PoolerAnswerClass < source > ( config ) Parameters config ( PretrainedConfig ) — The config used by the model, will be used to grab the hidden_size of the model. Compute SQuAD 2.0 answer class from classification and start tokens hidden states. forward < source > ( hidden_states : FloatTensor start_states : typing.Optional[torch.FloatTensor] = None start_positions : typing.Optional[torch.LongTensor] = None cls_index : typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor Parameters hidden_states ( torch.FloatTensor of shape (batch_size, seq_len, hidden_size) ) — The final hidden states of the model. start_states ( torch.FloatTensor of shape (batch_size, seq_len, hidden_size) , optional ) — The hidden states of the first tokens for the labeled span. start_positions ( torch.LongTensor of shape (batch_size,) , optional ) — The position of the first token for the labeled span. cls_index ( torch.LongTensor of shape (batch_size,) , optional ) — Position of the CLS token for each sentence in the batch. If None , takes the last token. Returns torch.FloatTensor The SQuAD 2.0 answer class. One of start_states or start_positions should be not None . If both are set, start_positions overrides start_states . class transformers.modeling_utils. SquadHeadOutput < source > ( loss : typing.Optional[torch.FloatTensor] = None start_top_log_probs : typing.Optional[torch.FloatTensor] = None start_top_index : typing.Optional[torch.LongTensor] = None end_top_log_probs : typing.Optional[torch.FloatTensor] = None end_top_index : typing.Optional[torch.LongTensor] = None cls_logits : typing.Optional[torch.FloatTensor] = None ) Parameters loss ( torch.FloatTensor of shape (1,) , optional , returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses. start_top_log_probs ( torch.FloatTensor of shape (batch_size, config.start_n_top) , optional , returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search). start_top_index ( torch.LongTensor of shape (batch_size, config.start_n_top) , optional , returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search). end_top_log_probs ( torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top) , optional , returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search). end_top_index ( torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top) , optional , returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search). cls_logits ( torch.FloatTensor of shape (batch_size,) , optional , returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers. 
Base class for outputs of question answering models using a SQuADHead . class transformers.modeling_utils. SQuADHead < source > ( config ) Parameters config ( PretrainedConfig ) — The config used by the model, will be used to grab the hidden_size of the model and the layer_norm_eps to use. A SQuAD head inspired by XLNet. forward < source > ( hidden_states : FloatTensor start_positions : typing.Optional[torch.LongTensor] = None end_positions : typing.Optional[torch.LongTensor] = None cls_index : typing.Optional[torch.LongTensor] = None is_impossible : typing.Optional[torch.LongTensor] = None p_mask : typing.Optional[torch.FloatTensor] = None return_dict : bool = False ) → transformers.modeling_utils.SquadHeadOutput or tuple(torch.FloatTensor) Parameters hidden_states ( torch.FloatTensor of shape (batch_size, seq_len, hidden_size) ) — Final hidden states of the model on the sequence tokens. start_positions ( torch.LongTensor of shape (batch_size,) , optional ) — Positions of the first token for the labeled span. end_positions ( torch.LongTensor of shape (batch_size,) , optional ) — Positions of the last token for the labeled span. cls_index ( torch.LongTensor of shape (batch_size,) , optional ) — Position of the CLS token for each sentence in the batch. If None , takes the last token. is_impossible ( torch.LongTensor of shape (batch_size,) , optional ) — Whether the question has a possible answer in the paragraph or not. p_mask ( torch.FloatTensor of shape (batch_size, seq_len) , optional ) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked. return_dict ( bool , optional , defaults to False ) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.modeling_utils.SquadHeadOutput or tuple(torch.FloatTensor) A transformers.modeling_utils.SquadHeadOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False ) comprising various elements depending on the configuration ( <class 'transformers.configuration_utils.PretrainedConfig'> ) and inputs. loss ( torch.FloatTensor of shape (1,) , optional , returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification losses. start_top_log_probs ( torch.FloatTensor of shape (batch_size, config.start_n_top) , optional , returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search). start_top_index ( torch.LongTensor of shape (batch_size, config.start_n_top) , optional , returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search). end_top_log_probs ( torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top) , optional , returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search). end_top_index ( torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top) , optional , returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search). 
cls_logits ( torch.FloatTensor of shape (batch_size,) , optional , returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers. class transformers.modeling_utils. SequenceSummary < source > ( config : PretrainedConfig ) Parameters config ( PretrainedConfig ) — The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses): summary_type ( str ) — The method to use to make this summary. Accepted values are: "last" — Take the last token hidden state (like XLNet) "first" — Take the first token hidden state (like Bert) "mean" — Take the mean of all tokens hidden states "cls_index" — Supply a Tensor of classification token position (GPT/GPT-2) "attn" — Not implemented now, use multi-head attention summary_use_proj ( bool ) — Add a projection after the vector extraction. summary_proj_to_labels ( bool ) — If True , the projection outputs to config.num_labels classes (otherwise to config.hidden_size ). summary_activation ( Optional[str] ) — Set to "tanh" to add a tanh activation to the output, another string or None will add no activation. summary_first_dropout ( float ) — Optional dropout probability before the projection and activation. summary_last_dropout ( float )— Optional dropout probability after the projection and activation. Compute a single vector summary of a sequence hidden states. forward < source > ( hidden_states : FloatTensor cls_index : typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor Parameters hidden_states ( torch.FloatTensor of shape [batch_size, seq_len, hidden_size] ) — The hidden states of the last layer. cls_index ( torch.LongTensor of shape [batch_size] or [batch_size, ...] where … are optional leading dimensions of hidden_states , optional ) — Used if summary_type == "cls_index" and takes the last token of the sequence as classification token. Returns torch.FloatTensor The summary of the sequence hidden states. Compute a single vector summary of a sequence hidden states. PyTorch Helper Functions transformers.apply_chunking_to_forward < source > ( forward_fn : Callable[..., torch.Tensor] chunk_size : int chunk_dim : int *input_tensors ) → torch.Tensor Parameters forward_fn ( Callable[..., torch.Tensor] ) — The forward function of the model. chunk_size ( int ) — The chunk size of a chunked tensor: num_chunks = len(input_tensors[0]) / chunk_size . chunk_dim ( int ) — The dimension over which the input_tensors should be chunked. input_tensors ( Tuple[torch.Tensor] ) — The input tensors of forward_fn which will be chunked Returns torch.Tensor A tensor with the same shape as the forward_fn would have given if applied`. This function chunks the input_tensors into smaller input tensor parts of size chunk_size over the dimension chunk_dim . It then applies a layer forward_fn to each chunk independently to save memory. If the forward_fn is independent across the chunk_dim this function will yield the same result as directly applying forward_fn to input_tensors . 
Examples: Copied # rename the usual forward() fn to forward_chunk() def forward_chunk ( self, hidden_states ): hidden_states = self.decoder(hidden_states) return hidden_states # implement a chunked forward function def forward ( self, hidden_states ): return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states) transformers.pytorch_utils.find_pruneable_heads_and_indices < source > ( heads : List[int] n_heads : int head_size : int already_pruned_heads : Set[int] ) → Tuple[Set[int], torch.LongTensor] Parameters heads ( List[int] ) — List of the indices of heads to prune. n_heads ( int ) — The number of heads in the model. head_size ( int ) — The size of each head. already_pruned_heads ( Set[int] ) — A set of already pruned heads. Returns Tuple[Set[int], torch.LongTensor] A tuple with the indices of heads to prune taking already_pruned_heads into account and the indices of rows/columns to keep in the layer weight. Finds the heads and their indices taking already_pruned_heads into account. transformers.prune_layer < source > ( layer : Union[nn.Linear, Conv1D] index : torch.LongTensor dim : Optional[int] = None ) → torch.nn.Linear or Conv1D Parameters layer ( Union[torch.nn.Linear, Conv1D] ) — The layer to prune. index ( torch.LongTensor ) — The indices to keep in the layer. dim ( int , optional ) — The dimension on which to keep the indices. Returns torch.nn.Linear or Conv1D The pruned layer as a new layer with requires_grad=True . Prune a Conv1D or linear layer to keep only entries in index. Used to remove heads. transformers.pytorch_utils.prune_conv1d_layer < source > ( layer : Conv1D index : torch.LongTensor dim : int = 1 ) → Conv1D Parameters layer ( Conv1D ) — The layer to prune. index ( torch.LongTensor ) — The indices to keep in the layer. dim ( int , optional , defaults to 1) — The dimension on which to keep the indices. Returns Conv1D The pruned layer as a new layer with requires_grad=True . Prune a Conv1D layer to keep only entries in index. A Conv1D work as a Linear layer (see e.g. BERT) but the weights are transposed. Used to remove heads. transformers.pytorch_utils.prune_linear_layer < source > ( layer : nn.Linear index : torch.LongTensor dim : int = 0 ) → torch.nn.Linear Parameters layer ( torch.nn.Linear ) — The layer to prune. index ( torch.LongTensor ) — The indices to keep in the layer. dim ( int , optional , defaults to 0) — The dimension on which to keep the indices. Returns torch.nn.Linear The pruned layer as a new layer with requires_grad=True . Prune a linear layer to keep only entries in index. Used to remove heads. TensorFlow custom layers class transformers.modeling_tf_utils. TFConv1D < source > ( nf nx initializer_range = 0.02 **kwargs ) Parameters nf ( int ) — The number of output features. nx ( int ) — The number of input features. initializer_range ( float , optional , defaults to 0.02) — The standard deviation to use to initialize the weights. kwargs ( Dict[str, Any] , optional ) — Additional keyword arguments passed along to the __init__ of keras.layers.Layer . 1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2). Basically works like a linear layer but the weights are transposed. class transformers. TFSequenceSummary < source > ( config : PretrainedConfig initializer_range : float = 0.02 **kwargs ) Parameters config ( PretrainedConfig ) — The config used by the model. 
Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses): summary_type ( str ) — The method to use to make this summary. Accepted values are: "last" — Take the last token hidden state (like XLNet) "first" — Take the first token hidden state (like Bert) "mean" — Take the mean of all tokens hidden states "cls_index" — Supply a Tensor of classification token position (GPT/GPT-2) "attn" — Not implemented now, use multi-head attention summary_use_proj ( bool ) — Add a projection after the vector extraction. summary_proj_to_labels ( bool ) — If True , the projection outputs to config.num_labels classes (otherwise to config.hidden_size ). summary_activation ( Optional[str] ) — Set to "tanh" to add a tanh activation to the output, another string or None will add no activation. summary_first_dropout ( float ) — Optional dropout probability before the projection and activation. summary_last_dropout ( float )— Optional dropout probability after the projection and activation. initializer_range ( float , optional , defaults to 0.02) — The standard deviation to use to initialize the weights. kwargs ( Dict[str, Any] , optional ) — Additional keyword arguments passed along to the __init__ of keras.layers.Layer . Compute a single vector summary of a sequence hidden states. TensorFlow loss functions class transformers.modeling_tf_utils. TFCausalLanguageModelingLoss < source > ( ) Loss function suitable for causal language modeling (CLM), that is, the task of guessing the next token. Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. class transformers.modeling_tf_utils. TFMaskedLanguageModelingLoss < source > ( ) Loss function suitable for masked language modeling (MLM), that is, the task of guessing the masked tokens. Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. class transformers.modeling_tf_utils. TFMultipleChoiceLoss < source > ( ) Loss function suitable for multiple choice tasks. class transformers.modeling_tf_utils. TFQuestionAnsweringLoss < source > ( ) Loss function suitable for question answering. class transformers.modeling_tf_utils. TFSequenceClassificationLoss < source > ( ) Loss function suitable for sequence classification. class transformers.modeling_tf_utils. TFTokenClassificationLoss < source > ( ) Loss function suitable for token classification. Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. TensorFlow Helper Functions transformers.modeling_tf_utils.get_initializer < source > ( initializer_range : float = 0.02 ) → keras.initializers.TruncatedNormal Parameters initializer_range ( float , defaults to 0.02) — Standard deviation of the initializer range. Returns keras.initializers.TruncatedNormal The truncated normal initializer. Creates a keras.initializers.TruncatedNormal with the given range. transformers.modeling_tf_utils.keras_serializable < source > ( ) Parameters cls (a keras.layers.Layers subclass ) — Typically a TF.MainLayer class in this project, in general must accept a config argument to its initializer. Decorate a Keras Layer class to support Keras serialization. This is done by: Adding a transformers_config dict to the Keras config dictionary in get_config (called by Keras at serialization time. 
Wrapping __init__ to accept that transformers_config dict (passed by Keras at deserialization time) and convert it to a config object for the actual layer initializer. Registering the class as a custom object in Keras (if the TensorFlow version supports this), so that it does not need to be supplied in custom_objects in the call to keras.models.load_model . transformers.shape_list < source > ( tensor : typing.Union[tensorflow.python.framework.tensor.Tensor, numpy.ndarray] ) → List[int] Parameters tensor ( tf.Tensor or np.ndarray ) — The tensor we want the shape of. Returns List[int] The shape of the tensor as a list. Deals with dynamic shapes in TensorFlow cleanly.
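The PyTorch pruning helpers documented above have no usage example on this page, so here is an illustrative sketch of find_pruneable_heads_and_indices and prune_linear_layer working together on a stand-alone projection layer. The hidden size, head count, head size and the choice of heads to prune are toy values for the example, not taken from any particular model.

import torch.nn as nn
from transformers.pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer

n_heads, head_size = 12, 64
query = nn.Linear(768, n_heads * head_size)   # stand-in for a real attention projection

heads_to_prune = [0, 3]                       # heads we want to remove
already_pruned_heads = set()                  # nothing pruned yet

heads, index = find_pruneable_heads_and_indices(heads_to_prune, n_heads, head_size, already_pruned_heads)
query = prune_linear_layer(query, index, dim=0)  # keep only the rows belonging to the remaining heads

print(query.weight.shape)  # torch.Size([640, 768]): 10 heads * 64 dims left

Pruning the matching input dimension of an attention output projection would reuse the same index with dim=1.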
Megatron-LM_utilities.txt | Megatron-LM utilities Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Megatron-LM utilities Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Megatron-LM utilities MegatronLMPlugin class accelerate.utils. 
MegatronLMPlugin < source > ( tp_degree : int = None pp_degree : int = None num_micro_batches : int = None gradient_clipping : float = None sequence_parallelism : bool = None recompute_activations : bool = None use_distributed_optimizer : bool = None pipeline_model_parallel_split_rank : int = None num_layers_per_virtual_pipeline_stage : int = None is_train_batch_min : str = True train_iters : int = None train_samples : int = None weight_decay_incr_style : str = 'constant' start_weight_decay : float = None end_weight_decay : float = None lr_decay_style : str = 'linear' lr_decay_iters : int = None lr_decay_samples : int = None lr_warmup_iters : int = None lr_warmup_samples : int = None lr_warmup_fraction : float = None min_lr : float = 0 consumed_samples : typing.List[int] = None no_wd_decay_cond : typing.Optional[typing.Callable] = None scale_lr_cond : typing.Optional[typing.Callable] = None lr_mult : float = 1.0 megatron_dataset_flag : bool = False seq_length : int = None encoder_seq_length : int = None decoder_seq_length : int = None tensorboard_dir : str = None set_all_logging_options : bool = False eval_iters : int = 100 eval_interval : int = 1000 return_logits : bool = False custom_train_step_class : typing.Optional[typing.Any] = None custom_train_step_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None custom_model_provider_function : typing.Optional[typing.Callable] = None custom_prepare_model_function : typing.Optional[typing.Callable] = None custom_megatron_datasets_provider_function : typing.Optional[typing.Callable] = None custom_get_batch_function : typing.Optional[typing.Callable] = None custom_loss_function : typing.Optional[typing.Callable] = None other_megatron_args : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters tp_degree ( int , defaults to None ) — Tensor parallelism degree. pp_degree ( int , defaults to None ) — Pipeline parallelism degree. num_micro_batches ( int , defaults to None ) — Number of micro-batches. gradient_clipping ( float , defaults to None ) — Gradient clipping value based on global L2 Norm (0 to disable). sequence_parallelism ( bool , defaults to None ) — Enable sequence parallelism. recompute_activations ( bool , defaults to None ) — Enable selective activation recomputation. use_distributed_optimizr ( bool , defaults to None ) — Enable distributed optimizer. pipeline_model_parallel_split_rank ( int , defaults to None ) — Rank where encoder and decoder should be split. num_layers_per_virtual_pipeline_stage ( int , defaults to None ) — Number of layers per virtual pipeline stage. is_train_batch_min ( str , defaults to True ) — If both tran & eval dataloaders are specified, this will decide the micro_batch_size . train_iters ( int , defaults to None ) — Total number of samples to train over all training runs. Note that either train-iters or train-samples should be provided when using MegatronLMDummyScheduler . train_samples ( int , defaults to None ) — Total number of samples to train over all training runs. Note that either train-iters or train-samples should be provided when using MegatronLMDummyScheduler . weight_decay_incr_style ( str , defaults to 'constant' ) — Weight decay increment function. choices=[“constant”, “linear”, “cosine”]. start_weight_decay ( float , defaults to None ) — Initial weight decay coefficient for L2 regularization. end_weight_decay ( float , defaults to None ) — End of run weight decay coefficient for L2 regularization. lr_decay_style ( str , defaults to 'linear' ) — Learning rate decay function. 
choices=[‘constant’, ‘linear’, ‘cosine’]. lr_decay_iters ( int , defaults to None ) — Number of iterations for learning rate decay. If None , defaults to train_iters . lr_decay_samples ( int , defaults to None ) — Number of samples for learning rate decay. If None , defaults to train_samples . lr_warmup_iters ( int , defaults to None ) — Number of iterations to linearly warm up the learning rate over. lr_warmup_samples ( int , defaults to None ) — Number of samples to linearly warm up the learning rate over. lr_warmup_fraction ( float , defaults to None ) — Fraction of lr-warmup-(iters/samples) to linearly warm up the learning rate over. min_lr ( float , defaults to 0 ) — Minimum value for the learning rate. The scheduler clips values below this threshold. consumed_samples ( List , defaults to None ) — Number of samples consumed, in the same order as the dataloaders passed to the accelerator.prepare call. no_wd_decay_cond ( Optional , defaults to None ) — Condition to disable weight decay. scale_lr_cond ( Optional , defaults to None ) — Condition to scale the learning rate. lr_mult ( float , defaults to 1.0 ) — Learning rate multiplier. megatron_dataset_flag ( bool , defaults to False ) — Whether the format of the dataset follows the Megatron-LM Indexed/Cached/MemoryMapped format. seq_length ( int , defaults to None ) — Maximum sequence length to process. encoder_seq_length ( int , defaults to None ) — Maximum sequence length to process for the encoder. decoder_seq_length ( int , defaults to None ) — Maximum sequence length to process for the decoder. tensorboard_dir ( str , defaults to None ) — Path to save tensorboard logs. set_all_logging_options ( bool , defaults to False ) — Whether to set all logging options. eval_iters ( int , defaults to 100 ) — Number of iterations to run evaluation on the validation/test sets for. eval_interval ( int , defaults to 1000 ) — Interval between runs of evaluation on the validation set. return_logits ( bool , defaults to False ) — Whether to return logits from the model. custom_train_step_class ( Optional , defaults to None ) — Custom train step class. custom_train_step_kwargs ( Optional , defaults to None ) — Custom train step kwargs. custom_model_provider_function ( Optional , defaults to None ) — Custom model provider function. custom_prepare_model_function ( Optional , defaults to None ) — Custom prepare model function. custom_megatron_datasets_provider_function ( Optional , defaults to None ) — Custom Megatron train_valid_test datasets provider function. custom_get_batch_function ( Optional , defaults to None ) — Custom get batch function. custom_loss_function ( Optional , defaults to None ) — Custom loss function. other_megatron_args ( Optional , defaults to None ) — Other Megatron-LM arguments. Please refer to Megatron-LM. Plugin for Megatron-LM to enable tensor, pipeline, sequence and data parallelism, as well as selective activation recomputation and optimized fused kernels. MegatronLMDummyScheduler class accelerate.utils. MegatronLMDummyScheduler < source > ( optimizer total_num_steps = None warmup_num_steps = 0 **kwargs ) Parameters optimizer ( torch.optim.optimizer.Optimizer ) — The optimizer to wrap. total_num_steps (int) — Total number of steps. warmup_num_steps (int) — Number of steps for warmup. **kwargs (additional keyword arguments, optional ) — Other arguments. Dummy scheduler presents model parameters or param groups; it is primarily used to follow the conventional training loop when the scheduler config is specified in the DeepSpeed config file. MegatronLMDummyDataLoader class accelerate.utils.
MegatronLMDummyDataLoader < source > ( **dataset_kwargs ) Parameters **dataset_kwargs — Megatron data arguments. Dummy dataloader that holds the Megatron-LM dataset arguments; it is primarily used to follow the conventional training loop while the actual dataloaders are built by Megatron-LM. AbstractTrainStep class accelerate.utils. AbstractTrainStep < source > ( name ) Abstract class for batching, forward pass and loss handling. GPTTrainStep class accelerate.utils. GPTTrainStep < source > ( accelerator args ) Parameters args ( argparse.Namespace ) — Megatron-LM arguments. GPT train step class. BertTrainStep class accelerate.utils. BertTrainStep < source > ( accelerator args ) Parameters args ( argparse.Namespace ) — Megatron-LM arguments. BERT train step class. T5TrainStep class accelerate.utils. T5TrainStep < source > ( accelerator args ) Parameters args ( argparse.Namespace ) — Megatron-LM arguments. T5 train step class. avg_losses_across_data_parallel_group accelerate.utils.avg_losses_across_data_parallel_group < source > ( losses ) Parameters losses (List[Tensor]) — List of losses to average across the data parallel group. Averages losses across the data parallel group.
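To connect the classes above, here is a minimal, illustrative sketch of constructing the plugin and the dummy scheduler. It is not a full Megatron-LM training script: the model and optimizer are stand-ins, the parameter values are arbitrary, and only arguments documented above are used.

import torch
from accelerate.utils import MegatronLMPlugin, MegatronLMDummyScheduler

# Only parameters listed in the reference above are used; values are examples.
megatron_plugin = MegatronLMPlugin(
    seq_length=1024,       # maximum sequence length to process
    lr_warmup_iters=100,   # linear learning-rate warmup iterations
    min_lr=1e-6,           # the schedule will not decay the LR below this value
)

# Stand-in model and optimizer, just to have something to schedule.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# The dummy scheduler only records the step counts; the real schedule is driven
# by the Megatron-LM arguments carried by the plugin once the objects go through
# accelerator.prepare in a Megatron-LM-enabled run.
lr_scheduler = MegatronLMDummyScheduler(
    optimizer=optimizer, total_num_steps=1000, warmup_num_steps=100
)

In a real script these objects would then be handed to an Accelerator and to accelerator.prepare(), with the Megatron-LM backend enabled through accelerate config.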
Class__InferenceOutputError.txt | Class: InferenceOutputError Hierarchy TypeError ↳ InferenceOutputError Constructors constructor • new InferenceOutputError ( message ): InferenceOutputError Parameters Name Type message string Returns InferenceOutputError Overrides TypeError.constructor Defined in inference/src/lib/InferenceOutputError.ts:2 Properties cause • Optional cause : unknown Inherited from TypeError.cause Defined in doc-internal/node_modules/.pnpm/ [email protected] /node_modules/typescript/lib/lib.es2022.error.d.ts:24 message • message : string Inherited from TypeError.message Defined in doc-internal/node_modules/.pnpm/ [email protected] /node_modules/typescript/lib/lib.es5.d.ts:1077 name • name : string Inherited from TypeError.name Defined in doc-internal/node_modules/.pnpm/ [email protected] /node_modules/typescript/lib/lib.es5.d.ts:1076 stack • Optional stack : string Inherited from TypeError.stack Defined in doc-internal/node_modules/.pnpm/
[email protected] /node_modules/typescript/lib/lib.es5.d.ts:1078 prepareStackTrace ▪ Static Optional prepareStackTrace : ( err : Error , stackTraces : CallSite []) => any Optional override for formatting stack traces See https://v8.dev/docs/stack-trace-api#customizing-stack-traces Type declaration ▸ ( err , stackTraces ): any Parameters Name Type err Error stackTraces CallSite [] Returns any Inherited from TypeError.prepareStackTrace Defined in inference/node_modules/.pnpm/@ [email protected] /node_modules/@types/node/globals.d.ts:11 stackTraceLimit ▪ Static stackTraceLimit : number Inherited from TypeError.stackTraceLimit Defined in inference/node_modules/.pnpm/@ [email protected] /node_modules/@types/node/globals.d.ts:13 Methods captureStackTrace ▸ captureStackTrace ( targetObject , constructorOpt? ): void Create .stack property on a target object Parameters Name Type targetObject object constructorOpt? Function Returns void Inherited from TypeError.captureStackTrace Defined in inference/node_modules/.pnpm/@ [email protected] /node_modules/@types/node/globals.d.ts:4
Considerations_for_model_evaluation.txt | Considerations for model evaluation Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Evaluate documentation Considerations for model evaluation Evaluate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.4.0 v0.3.0 v0.2.3 v0.1.2 EN Get started 🤗 Evaluate Tutorials Installation A quick tour How-to guides Choosing the right metric Adding new evaluations Using the evaluator Using the evaluator with custom pipelines Creating an EvaluationSuite Using 🤗 Evaluate with other ML frameworks Transformers Keras and Tensorflow scikit-learn Conceptual guides Types of evaluations Considerations for model evaluation Reference Main classes Loading methods Saving methods Hub methods Evaluator classes Visualization methods Logging methods Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Considerations for model evaluation Developing an ML model is rarely a one-shot deal: it often involves multiple stages of defining the model architecture and tuning hyper-parameters before converging on a final set. Responsible model evaluation is a key part of this process, and 🤗 Evaluate is here to help! Here are some things to keep in mind when evaluating your model using the 🤗 Evaluate library: Properly splitting your data Good evaluation generally requires three splits of your dataset: train : this is used for training your model. validation : this is used for validating the model hyperparameters. test : this is used for evaluating your model. Many of the datasets on the 🤗 Hub are separated into 2 splits: train and validation ; others are split into 3 splits ( train , validation and test ) — make sure to use the right split for the right purpose! Some datasets on the 🤗 Hub are already separated into these three splits. However, there are also many that only have a train/validation or only train split. If the dataset you’re using doesn’t have a predefined train-test split, it is up to you to define which part of the dataset you want to use for training your model and which you want to use for hyperparameter tuning or final evaluation. Training and evaluating on the same split can misrepresent your results! If you overfit on your training data the evaluation results on that split will look great but the model will perform poorly on new data. Depending on the size of the dataset, you can keep anywhere from 10-30% for evaluation and the rest for training, while aiming to set up the test set to reflect the production data as close as possible. Check out this thread for a more in-depth discussion of dataset splitting! The impact of class imbalance While many academic datasets, such as the IMDb dataset of movie reviews, are perfectly balanced, most real-world datasets are not. 
In machine learning, a balanced dataset corresponds to a dataset where all labels are represented equally. In the case of the IMDb dataset, this means that there are as many positive as negative reviews in the dataset. In an imbalanced dataset this is not the case: in fraud detection, for example, there are usually many more non-fraud cases than fraud cases in the dataset. Having an imbalanced dataset can skew the results of your metrics. Imagine a dataset with 99 “non-fraud” cases and 1 “fraud” case. A simple model that always predicts “non-fraud” would yield 99% accuracy, which might sound good at first until you realize that you will never catch a fraud case (a short code sketch at the end of this page makes this concrete). Often, using more than one metric can help get a better idea of your model’s performance from different points of view. For instance, metrics like recall and precision can be used together, and the F1 score is actually the harmonic mean of the two. In cases where a dataset is balanced, accuracy can reflect overall model performance: accuracy = (number of correct predictions) / (total number of predictions). In cases where there is an imbalance, the F1 score can be a better representation of performance, given that it encompasses both precision and recall: F1 = 2 × (precision × recall) / (precision + recall). Using accuracy in an imbalanced setting is less ideal, since it is not sensitive to minority classes and will not faithfully reflect model performance on them. Offline vs. online model evaluation There are multiple ways to evaluate models, and an important distinction is offline versus online evaluation: Offline evaluation is done before deploying a model or using insights generated from a model, using static datasets and metrics. Online evaluation means evaluating how a model is performing after deployment and during its use in production. These two types of evaluation can use different metrics and measure different aspects of model performance. For example, offline evaluation can compare a model to other models based on their performance on common benchmarks, whereas online evaluation will evaluate aspects such as latency and accuracy of the model based on production data (for example, the number of user queries that it was able to address). Trade-offs in model evaluation When evaluating models in practice, there are often trade-offs that have to be made between different aspects of model performance: for instance, choosing a model that is slightly less accurate but has a faster inference time, compared to a high-accuracy model that has a higher memory footprint and requires access to more GPUs. Here are other aspects of model performance to consider during evaluation: Interpretability When evaluating models, interpretability (i.e. the ability to interpret results) can be very important, especially when deploying models in production. For instance, metrics such as exact match have a set range (between 0 and 1, or 0% and 100%) and are easily understandable to users: for a pair of strings, the exact match score is 1 if the two strings are exactly the same, and 0 otherwise. Other metrics, such as BLEU, are harder to interpret: while they also range between 0 and 1, they can vary greatly depending on which parameters are used to generate the scores, especially when different tokenization and normalization techniques are used (see the metric card for more information about BLEU limitations). This means that it is difficult to interpret a BLEU score without having more information about the procedure used for obtaining it.
Interpretability can be more or less important depending on the evaluation use case, but it is a useful aspect of model evaluation to keep in mind, since communicating and comparing model evaluations is an important part of responsible machine learning. Inference speed and memory footprint While recent years have seen increasingly large ML models achieve high performance on a large variety of tasks and benchmarks, deploying these multi-billion parameter models in practice can be a challenge in itself, and many organizations lack the resources for this. This is why considering the inference speed and memory footprint of models is important, especially when doing online model evaluation. Inference speed refers to the time that it takes for a model to make a prediction — this will vary depending on the hardware used and the way in which models are queried, e.g. in real time via an API or in batch jobs that run once a day. Memory footprint refers to the size of the model weights and how much hardware memory they occupy. If a model is too large to fit on a single GPU or CPU, then it has to be split over multiple ones, which can be more or less difficult depending on the model architecture and the deployment method. When doing online model evaluation, there is often a trade-off to be made between inference speed and accuracy or precision, whereas this is less the case for offline evaluation. Limitations and bias All models and all metrics have their limitations and biases, which depend on the way in which they were trained, the data that was used, and their intended uses. It is important to measure and communicate these limitations clearly to prevent misuse and unintended impacts, for instance via model cards which document the training and evaluation process. Measuring biases can be done by evaluating models on datasets such as Wino Bias or MD Gender Bias, and by doing Interactive Error Analysis to try to identify which subsets of the evaluation dataset a model performs poorly on. We are currently working on additional measurements that can be used to quantify different dimensions of bias in both models and datasets — stay tuned for more documentation on this topic!
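To make the class-imbalance discussion above concrete, here is a minimal sketch using the 🤗 Evaluate library. The toy labels simply reproduce the 99 non-fraud / 1 fraud example; it assumes evaluate is installed.

import evaluate

# Toy imbalanced labels: 99 "non-fraud" (0) examples and 1 "fraud" (1) example.
references = [0] * 99 + [1]
# A trivial model that always predicts the majority class.
predictions = [0] * 100

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

print(accuracy.compute(predictions=predictions, references=references))
# {'accuracy': 0.99}  -> looks excellent...
print(f1.compute(predictions=predictions, references=references))
# {'f1': 0.0}         -> ...but the minority ("fraud") class is never caught.

Reporting both numbers side by side is usually more informative than either one alone, which is why combining several metrics is recommended for imbalanced data.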
ChatUI_on_Spaces.txt | ChatUI on Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation ChatUI on Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Your first Docker Spaces Example Docker Spaces JupyterLab on Spaces Argilla on Spaces Livebook on Spaces Label Studio on Spaces Aim on Spaces Shiny on Spaces ZenML on Spaces ChatUI on Spaces Panel on Spaces Tabby on Spaces Giskard on Spaces Evidence on Spaces marimo on Spaces Langfuse on Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started ChatUI on Spaces Hugging Chat is an open-source interface enabling everyone to try open-source large language models such as Falcon, StarCoder, and BLOOM. Thanks to an official Docker template called ChatUI, you can deploy your own Hugging Chat based on a model of your choice with a few clicks using Hugging Face’s infrastructure. Deploy your own Chat UI To get started, simply head here . In the backend of this application, text-generation-inference is used for better optimized model inference. Since these models can’t run on CPUs, you can select the GPU depending on your choice of model. You should provide a MongoDB endpoint where your chats will be written. If you leave this section blank, your logs will be persisted to a database inside the Space. Note that Hugging Face does not have access to your chats. You can configure the name and the theme of the Space by providing the application name and application color parameters. Below this, you can select the Hugging Face Hub ID of the model you wish to serve. You can also change the generation hyperparameters in the dictionary below in JSON format. Note : If you’d like to deploy a model with gated access or a model in a private repository, you can simply provide HF_TOKEN in repository secrets. 
You need to set its value to an access token, which you can get from here. Once the creation is complete, you will see “Building” on your Space. Once built, you can try your own HuggingChat. Start chatting! Read more: HF Docker Spaces, the chat-ui GitHub repository, and the text-generation-inference GitHub repository.
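If you prefer to script this last step rather than use the web form, the huggingface_hub client can set the HF_TOKEN secret on an existing ChatUI Space. This is only a sketch: the Space id is a placeholder, and it assumes huggingface_hub is installed and that your token has write access to the Space.

from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # a User Access Token with write permission
# Placeholder Space id -- replace with your own ChatUI Space.
api.add_space_secret(
    repo_id="your-username/your-chatui-space",
    key="HF_TOKEN",
    value="hf_xxx",  # the token ChatUI will use to access gated or private models
)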
Building_a_Next.js_application.txt | Building a Next.js application Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation Building a Next.js application Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Building a Next.js application In this tutorial, we’ll build a simple Next.js application that performs sentiment analysis using Transformers.js! Since Transformers.js can run in the browser or in Node.js, you can choose whether you want to perform inference client-side or server-side (we’ll show you how to do both). In either case, we will be developing with the new App Router paradigm. The final product will look something like this: Useful links: Demo site: client-side or server-side Source code: client-side or server-side Prerequisites Node.js version 18+ npm version 9+ Client-side inference Step 1: Initialise the project Start by creating a new Next.js application using create-next-app : Copied npx create-next-app@latest On installation, you’ll see various prompts. For this demo, we’ll be selecting those shown below in bold: √ What is your project named? ... next √ Would you like to use TypeScript? ... No / Yes √ Would you like to use ESLint? ... No / Yes √ Would you like to use Tailwind CSS? ... No / Yes √ Would you like to use `src/` directory? ... No / Yes √ Would you like to use App Router? (recommended) ... No / Yes √ Would you like to customize the default import alias? ... No / Yes Step 2: Install and configure Transformers.js You can install Transformers.js from NPM with the following command: Copied npm i @huggingface/transformers We also need to update the next.config.js file to ignore node-specific modules when bundling for the browser: Copied /** @type { import('next').NextConfig } */ const nextConfig = { // (Optional) Export as a static site // See https://nextjs.org/docs/pages/building-your-application/deploying/static-exports#configuration output : 'export' , // Feel free to modify/remove this option // Override the default webpack configuration webpack : ( config ) => { // See https://webpack.js.org/configuration/resolve/#resolvealias config. resolve . alias = { ...config. resolve . alias , "sharp$" : false , "onnxruntime-node$" : false , } return config; }, } module . 
exports = nextConfig Next, we’ll create a new Web Worker script where we’ll place all ML-related code. This is to ensure that the main thread is not blocked while the model is loading and performing inference. For this application, we’ll be using Xenova/distilbert-base-uncased-finetuned-sst-2-english , a ~67M parameter model finetuned on the Stanford Sentiment Treebank dataset. Add the following code to ./src/app/worker.js : Copied import { pipeline, env } from "@huggingface/transformers" ; // Skip local model check env. allowLocalModels = false ; // Use the Singleton pattern to enable lazy construction of the pipeline. class PipelineSingleton { static task = 'text-classification' ; static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english' ; static instance = null ; static async getInstance ( progress_callback = null ) { if ( this . instance === null ) { this . instance = pipeline ( this . task , this . model , { progress_callback }); } return this . instance ; } } // Listen for messages from the main thread self. addEventListener ( 'message' , async (event) => { // Retrieve the classification pipeline. When called for the first time, // this will load the pipeline and save it for future use. let classifier = await PipelineSingleton . getInstance ( x => { // We also add a progress callback to the pipeline so that we can // track model loading. self. postMessage (x); }); // Actually perform the classification let output = await classifier (event. data . text ); // Send the output back to the main thread self. postMessage ({ status : 'complete' , output : output, }); }); Step 3: Design the user interface We’ll now modify the default ./src/app/page.js file so that it connects to our worker thread. Since we’ll only be performing in-browser inference, we can opt-in to Client components using the 'use client' directive . Copied 'use client' import { useState, useEffect, useRef, useCallback } from 'react' export default function Home ( ) { /* TODO: Add state variables */ // Create a reference to the worker object. const worker = useRef ( null ); // We use the `useEffect` hook to set up the worker as soon as the `App` component is mounted. useEffect ( () => { if (!worker. current ) { // Create the worker if it does not yet exist. worker. current = new Worker ( new URL ( './worker.js' , import . meta . url ), { type : 'module' }); } // Create a callback function for messages from the worker thread. const onMessageReceived = ( e ) => { /* TODO: See below */ }; // Attach the callback function as an event listener. worker. current . addEventListener ( 'message' , onMessageReceived); // Define a cleanup function for when the component is unmounted. return () => worker. current . removeEventListener ( 'message' , onMessageReceived); }); const classify = useCallback ( ( text ) => { if (worker. current ) { worker. current . postMessage ({ text }); } }, []); return ( /* TODO: See below */ ) } Initialise the following state variables at the beginning of the Home component: Copied // Keep track of the classification result and the model loading status. const [result, setResult] = useState ( null ); const [ready, setReady] = useState ( null ); and fill in the onMessageReceived function to update these variables when the worker thread sends a message: Copied const onMessageReceived = ( e ) => { switch (e. data . status ) { case 'initiate' : setReady ( false ); break ; case 'ready' : setReady ( true ); break ; case 'complete' : setResult (e. data . 
output [ 0 ]) break ; } }; Finally, we can add a simple UI to the Home component, consisting of an input textbox and a preformatted text element to display the classification result: Copied <main className= "flex min-h-screen flex-col items-center justify-center p-12" > < h1 className = "text-5xl font-bold mb-2 text-center" > Transformers.js </ h1 > < h2 className = "text-2xl mb-4 text-center" > Next.js template </ h2 > < input className = "w-full max-w-xs p-2 border border-gray-300 rounded mb-4" type = "text" placeholder = "Enter text here" onInput = {e => { classify(e.target.value); }} /> {ready !== null && ( < pre className = "bg-gray-100 p-2 rounded" > { (!ready || !result) ? 'Loading...' : JSON.stringify(result, null, 2) } </ pre > )} </main> You can now run your application using the following command: Copied npm run dev Visit the URL shown in the terminal (e.g., http://localhost:3000/ ) to see your application in action! (Optional) Step 4: Build and deploy To build your application, simply run: Copied npm run build This will bundle your application and output the static files to the out folder. For this demo, we will deploy our application as a static Hugging Face Space , but you can deploy it anywhere you like! If you haven’t already, you can create a free Hugging Face account here . Visit https://huggingface.co/new-space and fill in the form. Remember to select “Static” as the space type. Click the “Create space” button at the bottom of the page. Go to “Files” → “Add file” → “Upload files”. Drag the files from the out folder into the upload box and click “Upload”. After they have uploaded, scroll down to the button and click “Commit changes to main”. That’s it! Your application should now be live at https://huggingface.co/spaces/<your-username>/<your-space-name> ! Server-side inference While there are many different ways to perform server-side inference, the simplest (which we will discuss in this tutorial) is using the new Route Handlers feature. Step 1: Initialise the project Start by creating a new Next.js application using create-next-app : Copied npx create-next-app@latest On installation, you’ll see various prompts. For this demo, we’ll be selecting those shown below in bold: √ What is your project named? ... next √ Would you like to use TypeScript? ... No / Yes √ Would you like to use ESLint? ... No / Yes √ Would you like to use Tailwind CSS? ... No / Yes √ Would you like to use `src/` directory? ... No / Yes √ Would you like to use App Router? (recommended) ... No / Yes √ Would you like to customize the default import alias? ... No / Yes Step 2: Install and configure Transformers.js You can install Transformers.js from NPM with the following command: Copied npm i @huggingface/transformers We also need to update the next.config.js file to prevent Webpack from bundling certain packages: Copied /** @type { import('next').NextConfig } */ const nextConfig = { // (Optional) Export as a standalone site // See https://nextjs.org/docs/pages/api-reference/next-config-js/output#automatically-copying-traced-files output : 'standalone' , // Feel free to modify/remove this option // Indicate that these packages should not be bundled by webpack experimental : { serverComponentsExternalPackages : [ 'sharp' , 'onnxruntime-node' ], }, }; module . exports = nextConfig Next, let’s set up our Route Handler. We can do this by creating two files in a new ./src/app/classify/ directory: pipeline.js - to handle the construction of our pipeline. 
Copied import { pipeline } from "@huggingface/transformers" ; // Use the Singleton pattern to enable lazy construction of the pipeline. // NOTE: We wrap the class in a function to prevent code duplication (see below). const P = ( ) => class PipelineSingleton { static task = 'text-classification' ; static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english' ; static instance = null ; static async getInstance ( progress_callback = null ) { if ( this . instance === null ) { this . instance = pipeline ( this . task , this . model , { progress_callback }); } return this . instance ; } } let PipelineSingleton ; if (process. env . NODE_ENV !== 'production' ) { // When running in development mode, attach the pipeline to the // global object so that it's preserved between hot reloads. // For more information, see https://vercel.com/guides/nextjs-prisma-postgres if (! global . PipelineSingleton ) { global . PipelineSingleton = P (); } PipelineSingleton = global . PipelineSingleton ; } else { PipelineSingleton = P (); } export default PipelineSingleton ; route.js - to process requests made to the /classify route. Copied import { NextResponse } from 'next/server' import PipelineSingleton from './pipeline.js' ; export async function GET ( request ) { const text = request. nextUrl . searchParams . get ( 'text' ); if (!text) { return NextResponse . json ({ error : 'Missing text parameter' , }, { status : 400 }); } // Get the classification pipeline. When called for the first time, // this will load the pipeline and cache it for future use. const classifier = await PipelineSingleton . getInstance (); // Actually perform the classification const result = await classifier (text); return NextResponse . json (result); } Step 3: Design the user interface We’ll now modify the default ./src/app/page.js file to make requests to our newly-created Route Handler. Copied 'use client' import { useState } from 'react' export default function Home ( ) { // Keep track of the classification result and the model loading status. const [result, setResult] = useState ( null ); const [ready, setReady] = useState ( null ); const classify = async ( text ) => { if (!text) return ; if (ready === null ) setReady ( false ); // Make a request to the /classify route on the server. const result = await fetch ( `/classify?text= ${ encodeURIComponent (text)} ` ); // If this is the first time we've made a request, set the ready flag. if (!ready) setReady ( true ); const json = await result. json (); setResult (json); }; return ( < main className = "flex min-h-screen flex-col items-center justify-center p-12" > < h1 className = "text-5xl font-bold mb-2 text-center" > Transformers.js </ h1 > < h2 className = "text-2xl mb-4 text-center" > Next.js template (server-side) </ h2 > < input type = "text" className = "w-full max-w-xs p-2 border border-gray-300 rounded mb-4" placeholder = "Enter text here" onInput = {e => { classify(e.target.value); }} /> {ready !== null && ( < pre className = "bg-gray-100 p-2 rounded" > { (!ready || !result) ? 'Loading...' : JSON.stringify(result, null, 2)} </ pre > )} </ main > ) } You can now run your application using the following command: Copied npm run dev Visit the URL shown in the terminal (e.g., http://localhost:3000/ ) to see your application in action! (Optional) Step 4: Build and deploy For this demo, we will build and deploy our application to Hugging Face Spaces . If you haven’t already, you can create a free Hugging Face account here . 
Create a new Dockerfile in your project’s root folder. You can use our example Dockerfile as a template. Visit https://huggingface.co/new-space and fill in the form. Remember to select “Docker” as the space type (you can choose the “Blank” Docker template). Click the “Create space” button at the bottom of the page. Go to “Files” → “Add file” → “Upload files”. Drag the files from your project folder (excluding node_modules and .next , if present) into the upload box and click “Upload”. After they have uploaded, scroll down to the button and click “Commit changes to main”. Add the following lines to the top of your README.md : Copied --- title: Next Server Example App emoji: 🔥 colorFrom: yellow colorTo: red sdk: docker pinned: false app_port: 3000 --- That’s it! Your application should now be live at https://huggingface.co/spaces/<your-username>/<your-space-name> ! < > Update on GitHub ← Building a React Application Building a Browser Extension → Building a Next.js application Prerequisites Client-side inference Step 1: Initialise the project Step 2: Install and configure Transformers.js Step 3: Design the user interface ( Optional) Step 4: Build and deploy Server-side inference Step 1: Initialise the project Step 2: Install and configure Transformers.js Step 3: Design the user interface ( Optional) Step 4: Build and deploy |
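As an alternative to dragging files into the web UI in the deployment steps above, the project files can also be pushed to a Space programmatically with the huggingface_hub Python client. This is a sketch only: the Space id is a placeholder, and it assumes huggingface_hub is installed and that the Space already exists.

from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # a token with write access
# Placeholder Space id -- replace with your own Docker (or Static) Space.
api.upload_folder(
    folder_path=".",                     # your project folder, or "out" for the static build
    repo_id="your-username/your-space",
    repo_type="space",
    ignore_patterns=["node_modules/*", ".next/*"],  # mirrors the manual upload instructions above
)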
Load.txt | Load Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Load Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Load Your data can be stored in various places; they can be on your local machine’s disk, in a Github repository, and in in-memory data structures like Python dictionaries and Pandas DataFrames. Wherever a dataset is stored, 🤗 Datasets can help you load it. This guide will show you how to load a dataset from: The Hugging Face Hub Local files In-memory data Offline A specific slice of a split Local loading script (legacy) For more details specific to loading other dataset modalities, take a look at the load audio dataset guide , the load image dataset guide , the load video dataset guide , or the load text dataset guide . Hugging Face Hub Datasets are loaded from a dataset loading script that downloads and generates the dataset. However, you can also load a dataset from any dataset repository on the Hub without a loading script! Begin by creating a dataset repository and upload your data files. Now you can use the load_dataset() function to load the dataset. For example, try loading the files from this demo repository by providing the repository namespace and dataset name. 
This dataset repository contains CSV files, and the code below loads the dataset from the CSV files: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "lhoestq/demo1" ) Some datasets may have more than one version based on Git tags, branches, or commits. Use the revision parameter to specify the dataset version you want to load: Copied >>> dataset = load_dataset( ... "lhoestq/custom_squad" , ... revision= "main" # tag name, or branch name, or commit hash ... ) Refer to the Upload a dataset to the Hub tutorial for more details on how to create a dataset repository on the Hub, and how to upload your data files. A dataset without a loading script by default loads all the data into the train split. Use the data_files parameter to map data files to splits like train , validation and test : Copied >>> data_files = { "train" : "train.csv" , "test" : "test.csv" } >>> dataset = load_dataset( "namespace/your_dataset_name" , data_files=data_files) If you don’t specify which data files to use, load_dataset() will return all the data files. This can take a long time if you load a large dataset like C4, which is approximately 13TB of data. You can also load a specific subset of the files with the data_files or data_dir parameter. These parameters can accept a relative path which resolves to the base path corresponding to where the dataset is loaded from. Copied >>> from datasets import load_dataset # load files that match the grep pattern >>> c4_subset = load_dataset( "allenai/c4" , data_files= "en/c4-train.0000*-of-01024.json.gz" ) # load dataset from the en directory on the Hub >>> c4_subset = load_dataset( "allenai/c4" , data_dir= "en" ) The split parameter can also map a data file to a specific split: Copied >>> data_files = { "validation" : "en/c4-validation.*.json.gz" } >>> c4_validation = load_dataset( "allenai/c4" , data_files=data_files, split= "validation" ) Local and remote files Datasets can be loaded from local files stored on your computer and from remote files. The datasets are most likely stored as a csv , json , txt or parquet file. The load_dataset() function can load each of these file types. CSV 🤗 Datasets can read a dataset made up of one or several CSV files (in this case, pass your CSV files as a list): Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "csv" , data_files= "my_file.csv" ) For more details, check out the how to load tabular datasets from CSV files guide. JSON JSON files are loaded directly with load_dataset() as shown below: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "json" , data_files= "my_file.json" ) JSON files have diverse formats, but we think the most efficient format is to have multiple JSON objects; each line represents an individual row of data. 
For example: Copied { "a" : 1 , "b" : 2.0 , "c" : "foo" , "d" : false } { "a" : 4 , "b" : -5.5 , "c" : null , "d" : true } Another JSON format you may encounter is a nested field, in which case you’ll need to specify the field argument as shown in the following: Copied { "version" : "0.1.0" , "data" : [{ "a" : 1 , "b" : 2.0 , "c" : "foo" , "d" : false}, { "a" : 4 , "b" : - 5.5 , "c" : null, "d" : true}] } >>> from datasets import load_dataset >>> dataset = load_dataset( "json" , data_files= "my_file.json" , field= "data" ) To load remote JSON files via HTTP, pass the URLs instead: Copied >>> base_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/" >>> dataset = load_dataset( "json" , data_files={ "train" : base_url + "train-v1.1.json" , "validation" : base_url + "dev-v1.1.json" }, field= "data" ) While these are the most common JSON formats, you’ll see other datasets that are formatted differently. 🤗 Datasets recognizes these other formats and will fallback accordingly on the Python JSON loading methods to handle them. Parquet Parquet files are stored in a columnar format, unlike row-based files like a CSV. Large datasets may be stored in a Parquet file because it is more efficient and faster at returning your query. To load a Parquet file: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "parquet" , data_files={ 'train' : 'train.parquet' , 'test' : 'test.parquet' }) To load remote Parquet files via HTTP, pass the URLs instead: Copied >>> base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" >>> data_files = { "train" : base_url + "wikipedia-train.parquet" } >>> wiki = load_dataset( "parquet" , data_files=data_files, split= "train" ) Arrow Arrow files are stored in an in-memory columnar format, unlike row-based formats like CSV and uncompressed formats like Parquet. To load an Arrow file: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "arrow" , data_files={ 'train' : 'train.arrow' , 'test' : 'test.arrow' }) To load remote Arrow files via HTTP, pass the URLs instead: Copied >>> base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" >>> data_files = { "train" : base_url + "wikipedia-train.arrow" } >>> wiki = load_dataset( "arrow" , data_files=data_files, split= "train" ) Arrow is the file format used by 🤗 Datasets under the hood, therefore you can load a local Arrow file using Dataset.from_file() directly: Copied >>> from datasets import Dataset >>> dataset = Dataset.from_file( "data.arrow" ) Unlike load_dataset() , Dataset.from_file() memory maps the Arrow file without preparing the dataset in the cache, saving you disk space. The cache directory to store intermediate processing results will be the Arrow file directory in that case. For now only the Arrow streaming format is supported. The Arrow IPC file format (also known as Feather V2) is not supported. SQL Read database contents with from_sql() by specifying the URI to connect to your database. You can read both table names and queries: Copied >>> from datasets import Dataset # load entire table >>> dataset = Dataset.from_sql( "data_table_name" , con= "sqlite:///sqlite_file.db" ) # load from query >>> dataset = Dataset.from_sql( "SELECT text FROM table WHERE length(text) > 100 LIMIT 10" , con= "sqlite:///sqlite_file.db" ) For more details, check out the how to load tabular datasets from SQL databases guide. 
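As a self-contained illustration of from_sql(), the sketch below first creates a tiny SQLite table with Python’s standard library and then loads it back as a Dataset. The table and file names are made up for the example, and loading from a URI string requires sqlalchemy to be installed.

import sqlite3
from datasets import Dataset

# Create a small demo table (example file name).
con = sqlite3.connect("demo.db")
con.execute("CREATE TABLE IF NOT EXISTS reviews (text TEXT, label INTEGER)")
con.executemany(
    "INSERT INTO reviews VALUES (?, ?)",
    [("great movie", 1), ("terrible plot", 0)],
)
con.commit()
con.close()

# Load the whole table back as a Dataset.
ds = Dataset.from_sql("reviews", con="sqlite:///demo.db")
print(ds[0])  # {'text': 'great movie', 'label': 1}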
WebDataset The WebDataset format is based on TAR archives and is suitable for big image datasets. Because of their size, WebDatasets are generally loaded in streaming mode (using streaming=True ). You can load a WebDataset like this: Copied >>> from datasets import load_dataset >>> >>> path = "path/to/train/*.tar" >>> dataset = load_dataset( "webdataset" , data_files={ "train" : path}, split= "train" , streaming= True ) To load remote WebDatasets via HTTP, pass the URLs instead: Copied >>> from datasets import load_dataset >>> >>> base_url = "https://huggingface.co/datasets/lhoestq/small-publaynet-wds/resolve/main/publaynet-train-{i:06d}.tar" >>> urls = [base_url. format (i=i) for i in range ( 4 )] >>> dataset = load_dataset( "webdataset" , data_files={ "train" : urls}, split= "train" , streaming= True ) Multiprocessing When a dataset is made of several files (that we call “shards”), it is possible to significantly speed up the dataset downloading and preparation step. You can choose how many processes you’d like to use to prepare a dataset in parallel using num_proc . In this case, each process is given a subset of shards to prepare: Copied from datasets import load_dataset imagenet = load_dataset( "imagenet-1k" , num_proc= 8 ) ml_librispeech_spanish = load_dataset( "facebook/multilingual_librispeech" , "spanish" , num_proc= 8 ) In-memory data 🤗 Datasets will also allow you to create a Dataset directly from in-memory data structures like Python dictionaries and Pandas DataFrames. Python dictionary Load Python dictionaries with from_dict() : Copied >>> from datasets import Dataset >>> my_dict = { "a" : [ 1 , 2 , 3 ]} >>> dataset = Dataset.from_dict(my_dict) Python list of dictionaries Load a list of Python dictionaries with from_list() : Copied >>> from datasets import Dataset >>> my_list = [{ "a" : 1 }, { "a" : 2 }, { "a" : 3 }] >>> dataset = Dataset.from_list(my_list) Python generator Create a dataset from a Python generator with from_generator() : Copied >>> from datasets import Dataset >>> def my_gen (): ... for i in range ( 1 , 4 ): ... yield { "a" : i} ... >>> dataset = Dataset.from_generator(my_gen) This approach supports loading data larger than available memory. You can also define a sharded dataset by passing lists to gen_kwargs : Copied >>> def gen ( shards ): ... for shard in shards: ... with open (shard) as f: ... for line in f: ... yield { "line" : line} ... >>> shards = [ f"data {i} .txt" for i in range ( 32 )] >>> ds = IterableDataset.from_generator(gen, gen_kwargs={ "shards" : shards}) >>> ds = ds.shuffle(seed= 42 , buffer_size= 10_000 ) # shuffles the shards order + uses a shuffle buffer >>> from torch.utils.data import DataLoader >>> dataloader = DataLoader(ds.with_format( "torch" ), num_workers= 4 ) # give each worker a subset of 32/4=8 shards Pandas DataFrame Load Pandas DataFrames with from_pandas() : Copied >>> from datasets import Dataset >>> import pandas as pd >>> df = pd.DataFrame({ "a" : [ 1 , 2 , 3 ]}) >>> dataset = Dataset.from_pandas(df) For more details, check out the how to load tabular datasets from Pandas DataFrames guide. Offline Even if you don’t have an internet connection, it is still possible to load a dataset. As long as you’ve downloaded a dataset from the Hub repository before, it should be cached. This means you can reload the dataset from the cache and use it offline. If you know you won’t have internet access, you can run 🤗 Datasets in full offline mode. 
This saves time because instead of waiting for the Dataset builder download to time out, 🤗 Datasets will look directly in the cache. Set the environment variable HF_HUB_OFFLINE to 1 to enable full offline mode. Slice splits You can also choose only to load specific slices of a split. There are two options for slicing a split: using strings or the ReadInstruction API. Strings are more compact and readable for simple cases, while ReadInstruction is easier to use with variable slicing parameters. Concatenate a train and test split by: Copied String API ReadInstruction >>> train_test_ds = datasets.load_dataset( "bookcorpus" , split= "train+test" ) Select specific rows of the train split: Copied String API ReadInstruction >>> train_10_20_ds = datasets.load_dataset( "bookcorpus" , split= "train[10:20]" ) Or select a percentage of a split with: Copied String API ReadInstruction >>> train_10pct_ds = datasets.load_dataset( "bookcorpus" , split= "train[:10%]" ) Select a combination of percentages from each split: Copied String API ReadInstruction >>> train_10_80pct_ds = datasets.load_dataset( "bookcorpus" , split= "train[:10%]+train[-80%:]" ) Finally, you can even create cross-validated splits. The example below creates 10-fold cross-validated splits. Each validation dataset is a 10% chunk, and the training dataset makes up the remaining complementary 90% chunk: Copied String API ReadInstruction >>> val_ds = datasets.load_dataset( "bookcorpus" , split=[ f"train[ {k} %: {k+ 10 } %]" for k in range ( 0 , 100 , 10 )]) >>> train_ds = datasets.load_dataset( "bookcorpus" , split=[ f"train[: {k} %]+train[ {k+ 10 } %:]" for k in range ( 0 , 100 , 10 )]) Percent slicing and rounding The default behavior is to round the boundaries to the nearest integer for datasets where the requested slice boundaries do not divide evenly by 100. As shown below, some slices may contain more examples than others. For instance, if the following train split includes 999 records, then: Copied # 19 records, from 500 (included) to 519 (excluded). >>> train_50_52_ds = datasets.load_dataset( "bookcorpus" , split= "train[50%:52%]" ) # 20 records, from 519 (included) to 539 (excluded). >>> train_52_54_ds = datasets.load_dataset( "bookcorpus" , split= "train[52%:54%]" ) If you want equal sized splits, use pct1_dropremainder rounding instead. This treats the specified percentage boundaries as multiples of 1%. Copied # 18 records, from 450 (included) to 468 (excluded). >>> train_50_52pct1_ds = datasets.load_dataset( "bookcorpus" , split=datasets.ReadInstruction( "train" , from_= 50 , to= 52 , unit= "%" , rounding= "pct1_dropremainder" )) # 18 records, from 468 (included) to 486 (excluded). >>> train_52_54pct1_ds = datasets.load_dataset( "bookcorpus" , split=datasets.ReadInstruction( "train" ,from_= 52 , to= 54 , unit= "%" , rounding= "pct1_dropremainder" )) # Or equivalently: >>> train_50_52pct1_ds = datasets.load_dataset( "bookcorpus" , split= "train[50%:52%](pct1_dropremainder)" ) >>> train_52_54pct1_ds = datasets.load_dataset( "bookcorpus" , split= "train[52%:54%](pct1_dropremainder)" ) pct1_dropremainder rounding may truncate the last examples in a dataset if the number of examples in your dataset don’t divide evenly by 100. Troubleshooting Sometimes, you may get unexpected results when you load a dataset. Two of the most common issues you may encounter are manually downloading a dataset and specifying features of a dataset. 
Manual download Certain datasets require you to manually download the dataset files due to licensing incompatibility or if the files are hidden behind a login page. This causes load_dataset() to throw an AssertionError . But 🤗 Datasets provides detailed instructions for downloading the missing files. After you’ve downloaded the files, use the data_dir argument to specify the path to the files you just downloaded. For example, if you try to download a configuration from the MATINF dataset: Copied >>> dataset = load_dataset( "matinf" , "summarization" ) Downloading and preparing dataset matinf/summarization (download: Unknown size, generated: 246.89 MiB, post-processed: Unknown size, total: 246.89 MiB) to /root/.cache/huggingface/datasets/matinf/summarization/ 1.0 .0 /82eee5e71c3ceaf20d909bca36ff237452b4e4ab195d3be7ee1c78b53e6f540e... AssertionError: The dataset matinf with config summarization requires manual data. Please follow the manual download instructions: To use MATINF you have to download it manually. Please fill this google form (https://forms.gle/nkH4LVE4iNQeDzsc9). You will receive a download link and a password once you complete the form. Please extract all files in one folder and load the dataset with : *datasets.load_dataset( 'matinf' , data_dir= 'path/to/folder/folder_name' )*. Manual data can be loaded with `datasets.load_dataset(matinf, data_dir= '<path/to/manual/data>' ) If you’ve already downloaded a dataset from the Hub with a loading script to your computer, then you need to pass an absolute path to the data_dir or data_files parameter to load that dataset. Otherwise, if you pass a relative path, load_dataset() will load the directory from the repository on the Hub instead of the local directory. Specify features When you create a dataset from local files, the Features are automatically inferred by Apache Arrow . However, the dataset’s features may not always align with your expectations, or you may want to define the features yourself. The following example shows how you can add custom labels with the ClassLabel feature. Start by defining your own labels with the Features class: Copied >>> class_names = [ "sadness" , "joy" , "love" , "anger" , "fear" , "surprise" ] >>> emotion_features = Features({ 'text' : Value( 'string' ), 'label' : ClassLabel(names=class_names)}) Next, specify the features parameter in load_dataset() with the features you just created: Copied >>> dataset = load_dataset( 'csv' , data_files=file_dict, delimiter= ';' , column_names=[ 'text' , 'label' ], features=emotion_features) Now when you look at your dataset features, you can see it uses the custom labels you defined: Copied >>> dataset[ 'train' ].features { 'text' : Value(dtype= 'string' , id = None ), 'label' : ClassLabel(names=[ 'sadness' , 'joy' , 'love' , 'anger' , 'fear' , 'surprise' ], id = None )} (Legacy) Local loading script You may have a 🤗 Datasets loading script locally on your computer. In this case, load the dataset by passing one of the following paths to load_dataset() : The local path to the loading script file. The local path to the directory containing the loading script file (only if the script file has the same name as the directory). 
Pass trust_remote_code=True to allow 🤗 Datasets to execute the loading script: >>> dataset = load_dataset( "path/to/local/loading_script/loading_script.py" , split= "train" , trust_remote_code= True ) >>> dataset = load_dataset( "path/to/local/loading_script" , split= "train" , trust_remote_code= True ) # equivalent because the file has the same name as the directory
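Circling back to the Specify features section above, the following sketch is a self-contained variant you can run as-is: it writes a tiny semicolon-separated CSV first so the data_files mapping is concrete. The file name and label names are made up for the example.

import csv
from datasets import load_dataset, Features, Value, ClassLabel

# Write a tiny ';'-separated CSV (example file name).
with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter=";")
    writer.writerows([["i loved this film", 1], ["what a waste of time", 0]])

# Map integer labels 0/1 to example class names.
features = Features({"text": Value("string"), "label": ClassLabel(names=["negative", "positive"])})

dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv"},
    delimiter=";",
    column_names=["text", "label"],
    features=features,
)
print(dataset["train"].features)
# {'text': Value(dtype='string', id=None), 'label': ClassLabel(names=['negative', 'positive'], id=None)}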
IP-Adapter.txt | IP-Adapter IP-Adapter is an image prompt adapter that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. Furthermore, this adapter can be reused with other models finetuned from the same base model and it can be combined with other adapters like ControlNet . The key idea behind IP-Adapter is the decoupled cross-attention mechanism which adds a separate cross-attention layer just for image features instead of using the same cross-attention layer for both text and image features. This allows the model to learn more image-specific features.
Learn how to load an IP-Adapter in the Load adapters guide, and make sure you check out the IP-Adapter Plus section which requires manually loading the image encoder.

This guide will walk you through using IP-Adapter for various tasks and use cases.

General tasks

Let's take a look at how to use IP-Adapter's image prompting capabilities with the StableDiffusionXLPipeline for tasks like text-to-image, image-to-image, and inpainting. We also encourage you to try out other pipelines such as Stable Diffusion, LCM-LoRA, ControlNet, T2I-Adapter, or AnimateDiff!

In all the following examples, you'll see the set_ip_adapter_scale() method. This method controls the amount of text or image conditioning to apply to the model. A value of 1.0 means the model is only conditioned on the image prompt. Lowering this value encourages the model to produce more diverse images, but they may not be as aligned with the image prompt. Typically, a value of 0.5 achieves a good balance between the two prompt types and produces good results.

In the examples below, try adding low_cpu_mem_usage=True to the load_ip_adapter() method to speed up the loading time.

Text-to-image

Crafting the precise text prompt to generate the image you want can be difficult because it may not always capture what you'd like to express. Adding an image alongside the text prompt helps the model better understand what it should generate and can lead to more accurate results.

Load a Stable Diffusion XL (SDXL) model and insert an IP-Adapter into the model with the load_ip_adapter() method. Use the subfolder parameter to load the SDXL model weights.

from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipeline.set_ip_adapter_scale(0.6)

Create a text prompt and load an image prompt before passing them to the pipeline to generate an image.

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images
images[0]

(Figure: the IP-Adapter input image and the generated image.)

Configure parameters

There are a couple of IP-Adapter parameters that are useful to know about and can help you with your image generation tasks. These parameters can make your workflow more efficient or give you more control over image generation.

Image embeddings

IP-Adapter enabled pipelines provide the ip_adapter_image_embeds parameter to accept precomputed image embeddings. This is particularly useful in scenarios where you need to run the IP-Adapter pipeline multiple times because you have more than one image. For example, multi IP-Adapter is a specific use case where you provide multiple styling images to generate a specific image in a specific style. Loading and encoding multiple images each time you use the pipeline would be inefficient.
Instead, you can precompute and save the image embeddings to disk (which can save a lot of space if you're using high-quality images) and load them when you need them. This parameter also gives you the flexibility to load embeddings from other sources. For example, ComfyUI image embeddings for IP-Adapters are compatible with Diffusers and should work out-of-the-box!

Call the prepare_ip_adapter_image_embeds() method to encode and generate the image embeddings. Then you can save them to disk with torch.save. If you're using IP-Adapter with ip_adapter_image_embeds instead of ip_adapter_image, you can set load_ip_adapter(image_encoder_folder=None, ...) because you don't need to load an image encoder to generate the image embeddings.

image_embeds = pipeline.prepare_ip_adapter_image_embeds(
    ip_adapter_image=image,
    ip_adapter_image_embeds=None,
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

torch.save(image_embeds, "image_embeds.ipadpt")

Now load the image embeddings by passing them to the ip_adapter_image_embeds parameter.

image_embeds = torch.load("image_embeds.ipadpt")
images = pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image_embeds=image_embeds,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images

IP-Adapter masking

Binary masks specify which portion of the output image should be assigned to an IP-Adapter. This is useful for composing more than one IP-Adapter image. For each input IP-Adapter image, you must provide a binary mask.

To start, preprocess the input IP-Adapter images with IPAdapterMaskProcessor.preprocess() to generate their masks. For optimal results, provide the output height and width to IPAdapterMaskProcessor.preprocess(). This ensures masks with different aspect ratios are appropriately stretched. If the input masks already match the aspect ratio of the generated image, you don't have to set the height and width.

from diffusers.image_processor import IPAdapterMaskProcessor

mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")

output_height = 1024
output_width = 1024

processor = IPAdapterMaskProcessor()
masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)

(Figure: the two binary masks, mask one and mask two.)

When there is more than one input IP-Adapter image, load them as a list and provide the IP-Adapter scale list. Each of the input IP-Adapter images here corresponds to one of the masks generated above.
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"])
pipeline.set_ip_adapter_scale([[0.7, 0.7]])  # one scale for each image-mask pair

face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png")

ip_images = [[face_image1, face_image2]]

masks = [masks.reshape(1, masks.shape[0], masks.shape[2], masks.shape[3])]

(Figure: IP-Adapter image one and IP-Adapter image two.)

Now pass the preprocessed masks to cross_attention_kwargs in the pipeline call.

generator = torch.Generator(device="cpu").manual_seed(0)
num_images = 1

image = pipeline(
    prompt="2 girls",
    ip_adapter_image=ip_images,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20,
    num_images_per_prompt=num_images,
    generator=generator,
    cross_attention_kwargs={"ip_adapter_masks": masks}
).images[0]
image

(Figure: generation with IP-Adapter masking applied versus no IP-Adapter masking applied.)

Specific use cases

IP-Adapter's image prompting and compatibility with other adapters and models make it a versatile tool for a variety of use cases. This section covers some of the more popular applications of IP-Adapter, and we can't wait to see what you come up with!

Face model

Generating accurate faces is challenging because they are complex and nuanced. Diffusers supports two IP-Adapter checkpoints specifically trained to generate faces from the h94/IP-Adapter repository:

ip-adapter-full-face_sd15.safetensors is conditioned with images of cropped faces and removed backgrounds
ip-adapter-plus-face_sd15.safetensors uses patch embeddings and is conditioned with images of cropped faces

Additionally, Diffusers supports all IP-Adapter checkpoints trained with face embeddings extracted by insightface face models. Supported models are from the h94/IP-Adapter-FaceID repository.

For face models, use the h94/IP-Adapter checkpoint. It is also recommended to use DDIMScheduler or EulerDiscreteScheduler for face models.

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin")

pipeline.set_ip_adapter_scale(0.5)

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png")
generator = torch.Generator(device="cpu").manual_seed(26)

image = pipeline(
    prompt="A photo of Einstein as a chef, wearing an apron, cooking in a French restaurant",
    ip_adapter_image=image,
    negative_prompt="lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=100,
    generator=generator,
).images[0]
image

(Figure: the IP-Adapter input image and the generated image.)

To use IP-Adapter FaceID models, first extract face embeddings with insightface. Then pass the list of tensors to the pipeline as ip_adapter_image_embeds.
import torch
import cv2
import numpy as np
from diffusers import StableDiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image
from insightface.app import FaceAnalysis

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter("h94/IP-Adapter-FaceID", subfolder=None, weight_name="ip-adapter-faceid_sd15.bin", image_encoder_folder=None)
pipeline.set_ip_adapter_scale(0.6)

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")

ref_images_embeds = []
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
image = torch.from_numpy(faces[0].normed_embedding)
ref_images_embeds.append(image.unsqueeze(0))
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda")

generator = torch.Generator(device="cpu").manual_seed(42)

images = pipeline(
    prompt="A photo of a girl",
    ip_adapter_image_embeds=[id_embeds],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20,
    num_images_per_prompt=1,
    generator=generator
).images

Both IP-Adapter FaceID Plus and Plus v2 models require CLIP image embeddings. You can prepare face embeddings as shown previously, and then extract and pass the CLIP embeddings to the hidden image projection layers.

from insightface.utils import face_align

ref_images_embeds = []
ip_adapter_images = []
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
ip_adapter_images.append(face_align.norm_crop(image, landmark=faces[0].kps, image_size=224))
image = torch.from_numpy(faces[0].normed_embedding)
ref_images_embeds.append(image.unsqueeze(0))
ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)
neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)
id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device="cuda")

clip_embeds = pipeline.prepare_ip_adapter_image_embeds(
    [ip_adapter_images], None, torch.device("cuda"), num_images, True)[0]

pipeline.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
pipeline.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False  # True if Plus v2

Multi IP-Adapter

More than one IP-Adapter can be used at the same time to generate specific images in more diverse styles. For example, you can use IP-Adapter-Face to generate consistent faces and characters, and IP-Adapter Plus to generate those faces in a specific style.

Read the IP-Adapter Plus section to learn why you need to manually load the image encoder. Load the image encoder with CLIPVisionModelWithProjection.
import torch
from diffusers import AutoPipelineForText2Image, DDIMScheduler
from transformers import CLIPVisionModelWithProjection
from diffusers.utils import load_image

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)

Next, you'll load a base model, scheduler, and the IP-Adapters. The IP-Adapters to use are passed as a list to the weight_name parameter:

ip-adapter-plus_sdxl_vit-h uses patch embeddings and a ViT-H image encoder
ip-adapter-plus-face_sdxl_vit-h has the same architecture, but it is conditioned with images of cropped faces

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    image_encoder=image_encoder,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"]
)
pipeline.set_ip_adapter_scale([0.7, 0.3])
pipeline.enable_model_cpu_offload()

Load an image prompt and a folder containing images of a certain style you want to use.

face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png")
style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy"
style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)]

(Figure: the IP-Adapter face image and the style images.)

Pass the image prompt and style images as a list to the ip_adapter_image parameter, and run the pipeline!

generator = torch.Generator(device="cpu").manual_seed(0)

image = pipeline(
    prompt="wonderwoman",
    ip_adapter_image=[style_images, face_image],
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    num_images_per_prompt=1,
    generator=generator,
).images[0]
image

Instant generation

Latent Consistency Models (LCM) are diffusion models that can generate images in as little as 4 steps, compared to other diffusion models like SDXL that typically require far more steps. This is why image generation with an LCM feels "instantaneous". IP-Adapters can be plugged into an LCM-LoRA model to instantly generate images with an image prompt.

The IP-Adapter weights need to be loaded first; then you can use load_lora_weights() to load the LoRA style and weight you want to apply to your image.

from diffusers import DiffusionPipeline, LCMScheduler
import torch
from diffusers.utils import load_image

model_id = "sd-dreambooth-library/herge-style"
lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5"

pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)

pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.load_lora_weights(lcm_lora_id)
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()

Try using a lower IP-Adapter scale to condition image generation more on the herge_style checkpoint, and remember to use the special token herge_style in your prompt to trigger and apply the style.
pipeline.set_ip_adapter_scale(0.4)

prompt = "herge_style woman in armor, best quality, high quality"
generator = torch.Generator(device="cpu").manual_seed(0)

ip_adapter_image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png")
image = pipeline(
    prompt=prompt,
    ip_adapter_image=ip_adapter_image,
    num_inference_steps=4,
    guidance_scale=1,
).images[0]
image

Structural control

To control image generation to an even greater degree, you can combine IP-Adapter with a model like ControlNet. A ControlNet is also an adapter that can be inserted into a diffusion model to allow for conditioning on an additional control image. The control image can be a depth map, an edge map, a pose estimation, and more.

Load a ControlNetModel checkpoint conditioned on depth maps, insert it into a diffusion model, and load the IP-Adapter.

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch
from diffusers.utils import load_image

controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth"
controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16)

pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16)
pipeline.to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")

Now load the IP-Adapter image and depth map.

ip_adapter_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png")
depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png")

(Figure: the IP-Adapter image and the depth map.)

Pass the depth map and IP-Adapter image to the pipeline to generate an image.

generator = torch.Generator(device="cpu").manual_seed(33)
image = pipeline(
    prompt="best quality, high quality",
    image=depth_map,
    ip_adapter_image=ip_adapter_image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    generator=generator,
).images[0]
image

Style & layout control

InstantStyle is a plug-and-play method on top of IP-Adapter that disentangles style and layout from the image prompt to control image generation. This way, you can generate images that follow only the style or layout of the image prompt, with significantly improved diversity. It works by activating IP-Adapters only in specific parts of the model.

By default, IP-Adapters are inserted in all layers of the model. Use the set_ip_adapter_scale() method with a dictionary to assign scales to the IP-Adapter at different layers.

from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")

scale = {
    "down": {"block_2": [0.0, 1.0]},
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)

This activates the IP-Adapter at the second layer in the model's down-part block 2 and up-part block 0. The former is the layer where IP-Adapter injects layout information, and the latter injects style.
Inserting the IP-Adapter in only these two layers lets you generate images that follow both the style and layout of the image prompt, but with content more aligned to the text prompt.

style_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg")

generator = torch.Generator(device="cpu").manual_seed(26)
image = pipeline(
    prompt="a cat, masterpiece, best quality, high quality",
    ip_adapter_image=style_image,
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    guidance_scale=5,
    num_inference_steps=30,
    generator=generator,
).images[0]
image

(Figure: the IP-Adapter style image and the generated image.)

In contrast, inserting the IP-Adapter in all layers will often generate images that focus too heavily on the image prompt and diminish diversity. Activate the IP-Adapter only in the style layer and then call the pipeline again.

scale = {
    "up": {"block_0": [0.0, 1.0, 0.0]},
}
pipeline.set_ip_adapter_scale(scale)

generator = torch.Generator(device="cpu").manual_seed(26)
image = pipeline(
    prompt="a cat, masterpiece, best quality, high quality",
    ip_adapter_image=style_image,
    negative_prompt="text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
    guidance_scale=5,
    num_inference_steps=30,
    generator=generator,
).images[0]
image

(Figure: the result with IP-Adapter only in the style layer versus IP-Adapter in all layers.)

Note that you don't have to specify all layers in the dictionary. Layers not included in the dictionary are set to a scale of 0, which disables the IP-Adapter for them by default.
Models_Download_Stats.txt | Models Download Stats

How are downloads counted for models?

Counting the number of downloads for models is not a trivial task, as a single model repository might contain multiple files, including multiple model weight files (e.g., with sharded models) and different formats depending on the library (GGUF, PyTorch, TensorFlow, etc.). To avoid double counting downloads (e.g., counting a single download of a model as multiple downloads), the Hub uses a set of query files that are employed for download counting. No information is sent from the user, and no additional calls are made for this. The count is done server-side as the Hub serves files for downloads.

Every HTTP request to these files, including GET and HEAD, will be counted as a download. By default, when no library is specified, the Hub uses config.json as the default query file. Otherwise, the query file depends on each library, and the Hub might examine files such as pytorch_model.bin or adapter_config.json.

Which are the query files for different libraries?

By default, the Hub looks at config.json, config.yaml, hyperparams.yaml, and meta.yaml. Some libraries override these defaults by specifying their own filter (specifying countDownloads). The code that defines these overrides is open-source. For example, for the nemo library, all files with the .nemo extension are used to count downloads.

Can I add my query files for my library?

Yes, you can open a Pull Request here. Here is a minimal example adding download metrics for VFIMamba. Check out the integration guide for more details.
How are GGUF files handled?

GGUF files are self-contained and are not tied to a single library, so all of them are counted for downloads. This will double count downloads in the case where a user clones a whole repository, but most users and interfaces download a single GGUF file for a given repo.

How is diffusers handled?

The diffusers library is an edge case and has its filter configured in the internal codebase. The filter ensures repos tagged as diffusers count both files loaded via the library as well as through UIs that require users to manually download the top-level safetensors.

filter: [
  {
    bool: {
      /// Include documents that match at least one of the following rules
      should: [
        /// Downloaded from diffusers lib
        { term: { path: "model_index.json" } },
        /// Direct downloads (LoRa, Auto1111 and others)
        /// Filter out nested safetensors and pickle weights to avoid double counting downloads from the diffusers lib
        { regexp: { path: "[^/]*\\.safetensors" } },
        { regexp: { path: "[^/]*\\.ckpt" } },
        { regexp: { path: "[^/]*\\.bin" } },
      ],
      minimum_should_match: 1,
    },
  },
]
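As a closing note, the counts produced by these rules are exposed through the Hub API, so you can read them programmatically. The snippet below is a small, hedged sketch using the huggingface_hub Python library; the repository id is only an example.

from huggingface_hub import HfApi

api = HfApi()
# ModelInfo carries the download count computed from the query-file rules above.
info = api.model_info("distilbert/distilbert-base-uncased")
print(info.downloads)  # download count reported by the Hub (a recent rolling window)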
Jupyter_Notebooks_on_the_Hugging_Face_Hub.txt | Jupyter Notebooks on the Hugging Face Hub

Jupyter notebooks are a very popular format for sharing code and data analysis for machine learning and data science. They are interactive documents that can contain code, visualizations, and text.

Rendering Jupyter notebooks on the Hub

Under the hood, Jupyter Notebook files (usually shared with a .ipynb extension) are JSON files. While viewing these files directly is possible, it's not a format intended to be read by humans. The Hub has rendering support for notebooks hosted on the Hub. This means that notebooks are displayed in a human-readable format. Notebooks will be rendered when included in any type of repository on the Hub. This includes models, datasets, and Spaces.

Launch in Google Colab

Google Colab is a free Jupyter Notebook environment that requires no setup and runs entirely in the cloud. It's a great way to run Jupyter Notebooks without having to install anything on your local machine. Notebooks hosted on the Hub are automatically given a "launch in Google Colab" button. This allows you to open the notebook in Colab with a single click.
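Getting a notebook rendered on the Hub is simply a matter of committing the .ipynb file to a repository. Below is a minimal, hedged sketch with the huggingface_hub Python library; the repository id and file names are placeholders.

from huggingface_hub import HfApi

api = HfApi()
# Upload a local notebook to an existing repo; the Hub will render it automatically.
api.upload_file(
    path_or_fileobj="analysis.ipynb",
    path_in_repo="notebooks/analysis.ipynb",
    repo_id="username/my-dataset",
    repo_type="dataset",
)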
Speculation.txt | Speculation

Speculative decoding, assisted generation, Medusa, and others are a few different names for the same idea: generate tokens before the large model actually runs, and then only check whether those tokens were valid. You are doing more computation on your LLM, but if the guesses are correct you produce 1, 2, 3, etc. tokens in a single LLM pass. Since LLMs are usually memory bound (and not compute bound), provided your guesses are correct often enough, this gives 2-3x faster inference (it can be much more for code-oriented tasks, for instance). You can check a more detailed explanation.

Text Generation Inference supports 2 main speculative methods:

Medusa
N-gram

Medusa

Medusa is a simple method to create many tokens in a single pass using fine-tuned LM heads in addition to your existing models. You can check a few existing fine-tunes for popular models:

text-generation-inference/gemma-7b-it-medusa
text-generation-inference/Mixtral-8x7B-Instruct-v0.1-medusa
text-generation-inference/Mistral-7B-Instruct-v0.2-medusa

In order to create your own Medusa heads for your own fine-tune, you should check out the original Medusa repo. Read more in Train Medusa.

In order to use Medusa models in TGI, simply point to a Medusa-enabled model, and everything will load automatically.

N-gram

If you don't have a Medusa model, or don't have the resources to fine-tune one, you can try to use n-gram. N-gram works by trying to find matching tokens in the previous sequence and using those as speculation for generating new tokens.
For example, if the tokens "np.mean" appear multiple times in the sequence, the model can speculate that the next continuation of the tokens "np." is probably also "mean". This is an extremely simple method, which works best for code or highly repetitive text. It might not be beneficial if the speculation misses too often.

In order to enable n-gram speculation, simply pass --speculate 2 in your flags. Details about the flag can be found in the reference documentation.
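To illustrate the n-gram idea, here is a toy sketch of the matching strategy (not TGI's actual implementation): look for the most recent earlier occurrence of the current suffix and propose the tokens that followed it as the draft.

def ngram_speculate(tokens, n=2, k=2):
    """Propose up to k draft tokens by matching the last n tokens earlier in the sequence."""
    if len(tokens) < n:
        return []
    suffix = tokens[-n:]
    # Search backwards for a previous occurrence of the suffix.
    for start in range(len(tokens) - n - 1, -1, -1):
        if tokens[start:start + n] == suffix:
            # Speculate that the same continuation will follow again.
            return tokens[start + n:start + n + k]
    return []

# "np . mean ( x ) ... np ." -> speculate that "mean" comes next
print(ngram_speculate(["np", ".", "mean", "(", "x", ")", "np", "."], n=2, k=1))  # ['mean']

The large model then verifies the drafted tokens in a single forward pass and keeps only the prefix that matches its own predictions.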
Accelerated_PyTorch_Training_on_Mac.txt | Accelerated PyTorch Training on Mac

With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new "mps" device. This maps computational graphs and primitives onto the MPS Graph framework and tuned kernels provided by MPS. For more information, please refer to the official documents Introducing Accelerated PyTorch Training on Mac and MPS BACKEND.

Benefits of Training and Inference using Apple Silicon Chips

Enables users to train larger networks or batch sizes locally
Reduces data retrieval latency and provides the GPU with direct access to the full memory store due to the unified memory architecture, improving end-to-end performance
Reduces costs associated with cloud-based development or the need for additional local GPUs

Pre-requisites: to install torch with MPS support, please follow this nice Medium article: GPU-Acceleration Comes to PyTorch on M1 Macs.

How it works out of the box

It is enabled by default on macOS machines with MPS-enabled Apple silicon GPUs. To disable it, pass the --cpu flag to the accelerate launch command or answer the corresponding question in the accelerate config questionnaire.

You can directly run the following script to test it out on MPS-enabled Apple silicon machines:

accelerate launch /examples/cv_example.py --data_dir images

A few caveats to be aware of

The distributed setups gloo and nccl do not work with the mps device. This means that currently only a single GPU of the mps device type can be used.

Finally, please remember that Accelerate only integrates the MPS backend, so if you have any problems or questions about MPS backend usage, please file an issue with the PyTorch GitHub.
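As a quick sanity check before launching a script on Apple silicon, you can confirm from Python that PyTorch was built with MPS support and that the backend is available. This is a minimal sketch using PyTorch's public API:

import torch

# Both checks should return True on an Apple silicon Mac with a recent PyTorch build.
print(torch.backends.mps.is_built())      # PyTorch compiled with MPS support
print(torch.backends.mps.is_available())  # MPS device usable on this machine

if torch.backends.mps.is_available():
    x = torch.ones(3, device="mps")
    print(x * 2)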
Interface__CommitDeletedEntry.txt | Interface: CommitDeletedEntry

Properties

operation

• operation: "delete"

Defined in hub/src/lib/commit.ts:31

path

• path: string

Defined in hub/src/lib/commit.ts:32
Export_to_ONNX.txt | Export to ONNX
Deploying 🤗 Transformers models in production environments often requires, or can benefit from, exporting the models into a serialized format that can be loaded and executed on specialized runtimes and hardware.

🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats such as ONNX and TFLite through its exporters module. 🤗 Optimum also provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.

This guide demonstrates how you can export 🤗 Transformers models to ONNX with 🤗 Optimum; for the guide on exporting models to TFLite, please refer to the Export to TFLite page.

Export to ONNX

ONNX (Open Neural Network eXchange) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an intermediate representation) which represents the flow of data through the neural network.

By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).

Once exported to ONNX format, a model can be:

optimized for inference via techniques such as graph optimization and quantization.
run with ONNX Runtime via ORTModelForXXX classes, which follow the same AutoModel API as the one you are used to in 🤗 Transformers.
run with optimized inference pipelines, which have the same API as the pipeline() function in 🤗 Transformers.

🤗 Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures. For the list of ready-made configurations, please refer to the 🤗 Optimum documentation.

There are two ways to export a 🤗 Transformers model to ONNX; here we show both:

export with 🤗 Optimum via the CLI.
export with 🤗 Optimum with optimum.onnxruntime.
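To see what the computational graph of standardized operators looks like in practice, you can open an exported file with the onnx Python package and list its nodes. This is a small, hedged illustration that assumes you already have a model.onnx file on disk, for example one produced by the export commands shown in the next sections.

import onnx

# Load the exported model and walk its graph of standardized operators.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # sanity-check the graph structure

for node in model.graph.node[:10]:
    print(node.op_type, node.name)  # e.g. MatMul, Add, Softmax, ...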
Exporting a 🤗 Transformers model to ONNX with CLI

To export a 🤗 Transformers model to ONNX, first install an extra dependency:

pip install optimum[exporters]

To check out all available arguments, refer to the 🤗 Optimum docs, or view the help in the command line:

optimum-cli export onnx --help

To export a model's checkpoint from the 🤗 Hub, for example, distilbert/distilbert-base-uncased-distilled-squad, run the following command:

optimum-cli export onnx --model distilbert/distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/

You should see logs indicating progress and showing where the resulting model.onnx is saved, like this:

Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx...
-[✓] ONNX model output names match reference model (start_logits, end_logits)
- Validating ONNX Model output "start_logits":
  -[✓] (2, 16) matches (2, 16)
  -[✓] all values close (atol: 0.0001)
- Validating ONNX Model output "end_logits":
  -[✓] (2, 16) matches (2, 16)
  -[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx

The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (local_path). When using the CLI, pass the local_path to the model argument instead of the checkpoint name on the 🤗 Hub and provide the --task argument. You can review the list of supported tasks in the 🤗 Optimum documentation. If the task argument is not provided, it will default to the model architecture without any task-specific head.

optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/

The resulting model.onnx file can then be run on one of the many accelerators that support the ONNX standard. For example, we can load and run the model with ONNX Runtime as follows:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)

The process is identical for TensorFlow checkpoints on the Hub.
For instance, here's how you would export a pure TensorFlow checkpoint from the Keras organization:

optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/

Exporting a 🤗 Transformers model to ONNX with optimum.onnxruntime

As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so:

>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> model_checkpoint = "distilbert_base_uncased_squad"
>>> save_directory = "onnx/"

>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)

Exporting a model for an unsupported architecture

If you wish to contribute by adding support for a model that cannot currently be exported, you should first check if it is supported in optimum.exporters.onnx, and if it is not, contribute to 🤗 Optimum directly.

Exporting a model with transformers.onnx

transformers.onnx is no longer maintained; please export models with 🤗 Optimum as described above. This section will be removed in future versions.

To export a 🤗 Transformers model to ONNX with transformers.onnx, install the extra dependencies:

pip install transformers[onnx]

Use the transformers.onnx package as a Python module to export a checkpoint using a ready-made configuration:

python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/

This exports an ONNX graph of the checkpoint defined by the --model argument. Pass any checkpoint on the 🤗 Hub or one that's stored locally. The resulting model.onnx file can then be run on one of the many accelerators that support the ONNX standard. For example, load and run the model with ONNX Runtime as follows:

>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))

The required output names (like ["last_hidden_state"]) can be obtained by taking a look at the ONNX configuration of each model. For example, for DistilBERT we have:

>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
["last_hidden_state"]

The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so:

python -m transformers.onnx --model=keras-io/transformers-qa onnx/

To export a model that's stored locally, save the model's weights and tokenizer files in the same directory (e.g.
local-pt-checkpoint), then export it to ONNX by pointing the --model argument of the transformers.onnx package to the desired directory:

python -m transformers.onnx --model=local-pt-checkpoint onnx/
Flax_API.txt | Flax API

safetensors.flax.load_file(filename: Union[str, os.PathLike]) → Dict[str, Array]

Parameters:
filename (str, or os.PathLike) — The name of the file which contains the tensors

Returns: Dict[str, Array] — dictionary that contains name as key, value as Array

Loads a safetensors file into flax format.

Example:

from safetensors.flax import load_file

file_path = "./my_folder/bert.safetensors"
loaded = load_file(file_path)

safetensors.flax.load(data: bytes) → Dict[str, Array]

Parameters:
data (bytes) — The content of a safetensors file

Returns: Dict[str, Array] — dictionary that contains name as key, value as Array on cpu

Loads a safetensors file into flax format from pure bytes.

Example:

from safetensors.flax import load

file_path = "./my_folder/bert.safetensors"
with open(file_path, "rb") as f:
    data = f.read()
loaded = load(data)

safetensors.flax.save_file(tensors: Dict[str, jax.Array], filename: Union[str, os.PathLike], metadata: Optional[Dict[str, str]] = None) → None

Parameters:
tensors (Dict[str, Array]) — The incoming tensors. Tensors need to be contiguous and dense.
filename (str, or os.PathLike) — The filename we're saving into.
metadata (Dict[str, str], optional, defaults to None) — Optional text-only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading.

Returns: None

Saves a dictionary of tensors into a file in safetensors format.

Example:

from safetensors.flax import save_file
from jax import numpy as jnp

tensors = {"embedding": jnp.zeros((512, 1024)), "attention": jnp.zeros((256, 256))}
save_file(tensors, "model.safetensors")

safetensors.flax.save(tensors: Dict[str, jax.Array], metadata: Optional[Dict[str, str]] = None) → bytes

Parameters:
tensors (Dict[str, Array]) — The incoming tensors. Tensors need to be contiguous and dense.
metadata (Dict[str, str], optional, defaults to None) — Optional text-only metadata you might want to save in your header.
For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading.
Returns: bytes — the raw bytes representing the format.
Saves a dictionary of tensors into raw bytes in safetensors format.
Example:
from safetensors.flax import save
from jax import numpy as jnp
tensors = {"embedding": jnp.zeros((512, 1024)), "attention": jnp.zeros((256, 256))}
byte_data = save(tensors) |
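As a hedged, self-contained sketch that ties the four functions above together (the tensor names and the model.safetensors path are illustrative, not prescribed by the reference):

# Round-trip sketch for the safetensors.flax API documented above.
from jax import numpy as jnp
from safetensors.flax import load, load_file, save, save_file

tensors = {"embedding": jnp.zeros((512, 1024)), "attention": jnp.zeros((256, 256))}

# In-memory: serialize to bytes, then parse the bytes back into arrays.
raw = save(tensors)
restored = load(raw)
assert restored["embedding"].shape == (512, 1024)

# On disk: write a file with optional header metadata, then read it back.
save_file(tensors, "model.safetensors", metadata={"framework": "flax"})
from_disk = load_file("model.safetensors")
assert set(from_disk) == set(tensors)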
Community_Projects.txt | Community Projects

Welcome to Community Projects. This space is dedicated to showcasing the incredible work and innovative applications created by our vibrant community using the diffusers library.
This section aims to:
Highlight diverse and inspiring projects built with diffusers
Foster knowledge sharing within our community
Provide real-world examples of how diffusers can be leveraged
Happy exploring, and thank you for being part of the Diffusers community!
Project Name: Description
dream-textures: Stable Diffusion built in to Blender
HiDiffusion: Increases the resolution and speed of your diffusion model by only adding a single line of code
IC-Light: A project to manipulate the illumination of images
InstantID: Zero-shot identity-preserving generation in seconds
IOPaint: Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.
Kohya: Gradio GUI for Kohya's Stable Diffusion trainers
MagicAnimate: Temporally consistent human image animation using a diffusion model
OOTDiffusion: Outfitting-fusion-based latent diffusion for controllable virtual try-on
SD.Next: Advanced implementation of Stable Diffusion and other diffusion-based generative image models
stable-dreamfusion: Text-to-3D, image-to-3D, and mesh export with NeRF + diffusion
StoryDiffusion: Creates a magic story by generating consistent images and videos
StreamDiffusion: A pipeline-level solution for real-time interactive generation
Stable Diffusion Server: A server configured for inpainting/generation/img2img with one Stable Diffusion model |
xFormers.txt | xFormers

We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.
Install xFormers from pip:
pip install xformers
The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from the source.
After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section.
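A minimal hedged sketch of what that looks like in practice; the checkpoint name is an assumption, and any diffusers pipeline with attention blocks works the same way:

# Sketch: enable xFormers memory-efficient attention on a loaded pipeline.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical checkpoint for illustration
    torch_dtype=torch.float16,
).to("cuda")

# Route attention through xFormers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("an astronaut riding a horse").images[0]

# The optimization can be switched off again if it causes issues.
pipe.disable_xformers_memory_efficient_attention()

Enable it once, right after loading the pipeline and before running inference or training.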
According to this issue, xFormers v0.0.16 cannot be used for training (fine-tuning or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. |
PyTorch_training_on_Apple_silicon.txt | PyTorch training on Apple silicon
Previously, training models on a Mac was limited to the CPU only. With the release of PyTorch v1.12, you can take advantage of training models with Apple's silicon GPUs for significantly faster performance and training. This is powered in PyTorch by integrating Apple's Metal Performance Shaders (MPS) as a backend. The MPS backend implements PyTorch operations as custom Metal shaders and places these modules on an mps device.

Some PyTorch operations are not implemented in MPS yet and will throw an error. To avoid this, you should set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU kernels instead (you'll still see a UserWarning). If you run into any other errors, please open an issue in the PyTorch repository, because the Trainer only integrates the MPS backend.

With the mps device set, you can:
train larger networks or batch sizes locally
reduce data retrieval latency because the GPU's unified memory architecture allows direct access to the full memory store
reduce costs because you don't need to train on cloud-based GPUs or add additional local GPUs

Get started by making sure you have PyTorch installed. MPS acceleration is supported on macOS 12.3+.

pip install torch torchvision torchaudio

TrainingArguments uses the mps device by default if it's available, which means you don't need to explicitly set the device (a minimal availability check is sketched below). For example, you can run the run_glue.py script with the MPS backend automatically enabled without making any changes:

export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ \
  --overwrite_output_dir

Backends for distributed setups like gloo and nccl are not supported by the mps device, which means you can only train on a single GPU with the MPS backend.

You can learn more about the MPS backend in the Introducing Accelerated PyTorch Training on Mac blog post. |
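For readers who want a quick check before launching the script above, here is a small hedged sketch (the output_dir path is a placeholder); it only confirms that the mps device is usable and that TrainingArguments picks it up automatically:

# Sketch: confirm the MPS backend is available and selected by default.
import torch
from transformers import TrainingArguments

if torch.backends.mps.is_available():
    x = torch.ones(3, device="mps")
    print(x.device)  # mps:0
else:
    print("MPS not available; training will run on the CPU instead.")

args = TrainingArguments(output_dir="/tmp/mps-test")  # placeholder directory
print(args.device)  # the mps device is chosen automatically when available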
Generation.txt | Generation
Each framework has a generate method for text generation implemented in its respective GenerationMixin class:
PyTorch generate() is implemented in GenerationMixin.
TensorFlow generate() is implemented in TFGenerationMixin.
Flax/JAX generate() is implemented in FlaxGenerationMixin.
Regardless of your framework of choice, you can parameterize the generate method with a GenerationConfig class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method.
To learn how to inspect a model's generation configuration, what the defaults are, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the text generation strategies guide. The guide also explains how to use related features, like token streaming.

GenerationConfig
class transformers.GenerationConfig(**kwargs)

Parameters that control the length of the output:
max_length (int, optional, defaults to 20) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. Its effect is overridden by max_new_tokens, if also set.
max_new_tokens (int, optional) — The maximum number of tokens to generate, ignoring the number of tokens in the prompt.
min_length (int, optional, defaults to 0) — The minimum length of the sequence to be generated. Corresponds to the length of the input prompt + min_new_tokens. Its effect is overridden by min_new_tokens, if also set.
min_new_tokens (int, optional) — The minimum number of tokens to generate, ignoring the number of tokens in the prompt.
early_stopping (bool or str, optional, defaults to False) — Controls the stopping condition for beam-based methods, like beam search. It accepts the following values: True, where the generation stops as soon as there are num_beams complete candidates; False, where a heuristic is applied and the generation stops when it is very unlikely to find better candidates; "never", where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm).
max_time (float, optional) — The maximum amount of time you allow the computation to run for in seconds.
generation will still finish the current pass after allocated time has been passed. stop_strings ( str or List[str] , optional ) — A string or a list of strings that should terminate generation if the model outputs them. Parameters that control the generation strategy used do_sample ( bool , optional , defaults to False ) — Whether or not to use sampling ; use greedy decoding otherwise. num_beams ( int , optional , defaults to 1) — Number of beams for beam search. 1 means no beam search. num_beam_groups ( int , optional , defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. this paper for more details. penalty_alpha ( float , optional ) — The values balance the model confidence and the degeneration penalty in contrastive search decoding. dola_layers ( str or List[int] , optional ) — The layers to use for DoLa decoding. If None , DoLa decoding is not used. If a string, it must be one of “low” or “high”, which means using the lower part or higher part of the model layers, respectively. “low” means the first half of the layers up to the first 20 layers, and “high” means the last half of the layers up to the last 20 layers. If a list of integers, it must contain the indices of the layers to use for candidate premature layers in DoLa. The 0-th layer is the word embedding layer of the model. Set to 'low' to improve long-answer reasoning tasks, 'high' to improve short-answer tasks. Check the documentation or the paper for more details. Parameters that control the cache use_cache ( bool , optional , defaults to True ) — Whether or not the model should use the past last key/values attentions (if applicable to the model) to speed up decoding. cache_implementation ( str , optional , default to None ) — Name of the cache class that will be instantiated in generate , for faster decoding. Possible values are: "static" : StaticCache "offloaded_static" : OffloadedStaticCache "sliding_window" : SlidingWindowCache "hybrid" : HybridCache "mamba" : MambaCache "quantized" : QuantizedCache We support other cache types, but they must be manually instantiated and passed to generate through the past_key_values argument. See our cache documentation for further information. cache_config ( CacheConfig or dict , optional , default to None ) — Arguments used in the key-value cache class can be passed in cache_config . Can be passed as a Dict and it will be converted to its repsective CacheConfig internally. Otherwise can be passed as a CacheConfig class matching the indicated cache_implementation . return_legacy_cache ( bool , optional , default to True ) — Whether to return the legacy or new format of the cache when DynamicCache is used by default. Parameters for manipulation of the model output logits temperature ( float , optional , defaults to 1.0) — The value used to module the next token probabilities. This value is set in a model’s generation_config.json file. If it isn’t set, the default value is 1.0 top_k ( int , optional , defaults to 50) — The number of highest probability vocabulary tokens to keep for top-k-filtering. This value is set in a model’s generation_config.json file. If it isn’t set, the default value is 50. top_p ( float , optional , defaults to 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. This value is set in a model’s generation_config.json file. 
If it isn’t set, the default value is 1.0 min_p ( float , optional ) — Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0 and 1. Typical values are in the 0.01-0.2 range, comparably selective as setting top_p in the 0.99-0.8 range (use the opposite of normal top_p values). typical_p ( float , optional , defaults to 1.0) — Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details. epsilon_cutoff ( float , optional , defaults to 0.0) — If set to float strictly between 0 and 1, only tokens with a conditional probability greater than epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details. eta_cutoff ( float , optional , defaults to 0.0) — Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits))) . The latter term is intuitively the expected next token probability, scaled by sqrt(eta_cutoff) . In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details. diversity_penalty ( float , optional , defaults to 0.0) — This value is subtracted from a beam’s score if it generates a token same as any beam from other group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled. repetition_penalty ( float , optional , defaults to 1.0) — The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details. encoder_repetition_penalty ( float , optional , defaults to 1.0) — The paramater for encoder_repetition_penalty. An exponential penalty on sequences that are not in the original input. 1.0 means no penalty. length_penalty ( float , optional , defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences. no_repeat_ngram_size ( int , optional , defaults to 0) — If set to int > 0, all ngrams of that size can only occur once. bad_words_ids ( List[List[int]] , optional ) — List of list of token ids that are not allowed to be generated. Check NoBadWordsLogitsProcessor for further documentation and examples. force_words_ids ( List[List[int]] or List[List[List[int]]] , optional ) — List of token ids that must be generated. If given a List[List[int]] , this is treated as a simple list of words that must be included, the opposite to bad_words_ids . If given List[List[List[int]]] , this triggers a disjunctive constraint , where one can allow different forms of each word. 
renormalize_logits ( bool , optional , defaults to False ) — Whether to renormalize the logits after applying all the logits processors (including the custom ones). It’s highly recommended to set this flag to True as the search algorithms suppose the score logits are normalized but some logit processors break the normalization. constraints ( List[Constraint] , optional ) — Custom constraints that can be added to the generation to ensure that the output will contain the use of certain tokens as defined by Constraint objects, in the most sensible way possible. forced_bos_token_id ( int , optional , defaults to model.config.forced_bos_token_id ) — The id of the token to force as the first generated token after the decoder_start_token_id . Useful for multilingual models like mBART where the first generated token needs to be the target language token. forced_eos_token_id ( int or List[int] , *optional*, defaults to model.config.forced_eos_token_id ) -- The id of the token to force as the last generated token when max_length` is reached. Optionally, use a list to set multiple end-of-sequence tokens. remove_invalid_values ( bool , optional , defaults to model.config.remove_invalid_values ) — Whether to remove possible nan and inf outputs of the model to prevent the generation method to crash. Note that using remove_invalid_values can slow down generation. exponential_decay_length_penalty ( tuple(int, float) , optional ) — This Tuple adds an exponentially increasing length penalty, after a certain amount of tokens have been generated. The tuple shall consist of: (start_index, decay_factor) where start_index indicates where penalty starts and decay_factor represents the factor of exponential decay suppress_tokens ( List[int] , optional ) — A list of tokens that will be suppressed at generation. The SupressTokens logit processor will set their log probs to -inf so that they are not sampled. begin_suppress_tokens ( List[int] , optional ) — A list of tokens that will be suppressed at the beginning of the generation. The SupressBeginTokens logit processor will set their log probs to -inf so that they are not sampled. forced_decoder_ids ( List[List[int]] , optional ) — A list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. For example, [[1, 123]] means the second generated token will always be a token of index 123. sequence_bias ( Dict[Tuple[int], float] , optional )) — Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. Check SequenceBiasLogitsProcessor for further documentation and examples. token_healing ( bool , optional , defaults to False ) — Heal tail tokens of prompts by replacing them with their appropriate extensions. This enhances the quality of completions for prompts affected by greedy tokenization bias. guidance_scale ( float , optional ) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale > 1 . Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. low_memory ( bool , optional ) — Switch to sequential beam search and sequential topk for contrastive search to reduce peak memory. Used with beam search and contrastive search. 
watermarking_config ( BaseWatermarkingConfig or dict , optional ) — Arguments used to watermark the model outputs by adding a small bias to randomly selected set of “green” tokens. See the docs of SynthIDTextWatermarkingConfig and WatermarkingConfig for more details. If passed as Dict , it will be converted to a WatermarkingConfig internally. Parameters that define the output variables of generate num_return_sequences ( int , optional , defaults to 1) — The number of independently computed returned sequences for each element in the batch. output_attentions ( bool , optional , defaults to False ) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states ( bool , optional , defaults to False ) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores ( bool , optional , defaults to False ) — Whether or not to return the prediction scores. See scores under returned tensors for more details. output_logits ( bool , optional ) — Whether or not to return the unprocessed prediction logit scores. See logits under returned tensors for more details. return_dict_in_generate ( bool , optional , defaults to False ) — Whether or not to return a ModelOutput , as opposed to returning exclusively the generated sequence. This flag must be set to True to return the generation cache (when use_cache is True ) or optional outputs (see flags starting with output_ ) Special tokens that can be used at generation time pad_token_id ( int , optional ) — The id of the padding token. bos_token_id ( int , optional ) — The id of the beginning-of-sequence token. eos_token_id ( Union[int, List[int]] , optional ) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. Generation parameters exclusive to encoder-decoder models encoder_no_repeat_ngram_size ( int , optional , defaults to 0) — If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids . decoder_start_token_id ( int or List[int] , optional ) — If an encoder-decoder model starts decoding with a different token than bos , the id of that token or a list of length batch_size . Indicating a list enables different start ids for each element in the batch (e.g. multilingual models with different target languages in one batch) Generation parameters exclusive to assistant generation is_assistant ( bool , optional , defaults to False ) — Whether the model is an assistant (draft) model. num_assistant_tokens ( int , optional , defaults to 20) — Defines the number of speculative tokens that shall be generated by the assistant model before being checked by the target model at each iteration. Higher values for num_assistant_tokens make the generation more speculative : If the assistant model is performant larger speed-ups can be reached, if the assistant model requires lots of corrections, lower speed-ups are reached. num_assistant_tokens_schedule ( str , optional , defaults to "constant" ) — Defines the schedule at which max assistant tokens shall be changed during inference. "heuristic" : When all speculative tokens are correct, increase num_assistant_tokens by 2 else reduce by 1. num_assistant_tokens value is persistent over multiple generation calls with the same assistant model. "heuristic_transient" : Same as "heuristic" but num_assistant_tokens is reset to its initial value after each generation call. 
"constant" : num_assistant_tokens stays unchanged during generation assistant_confidence_threshold ( float , optional , defaults to 0.4) — The confidence threshold for the assistant model. If the assistant model’s confidence in its prediction for the current token is lower than this threshold, the assistant model stops the current token generation iteration, even if the number of speculative tokens (defined by num_assistant_tokens ) is not yet reached. The assistant’s confidence threshold is adjusted throughout the speculative iterations to reduce the number of unnecessary draft and target forward passes, biased towards avoiding false negatives. assistant_confidence_threshold value is persistent over multiple generation calls with the same assistant model. It is an unsupervised version of the dynamic speculation lookahead from Dynamic Speculation Lookahead Accelerates Speculative Decoding of Large Language Models https://arxiv.org/abs/2405.04304 . prompt_lookup_num_tokens ( int , optional ) — The number of tokens to be output as candidate tokens. max_matching_ngram_size ( int , optional ) — The maximum ngram size to be considered for matching in the prompt. Default to 2 if not provided. assistant_early_exit( int , optional ) — If set to a positive integer, early exit of the model will be used as an assistant. Can only be used with models that support early exit (i.e. models where logits from intermediate layers can be interpreted by the LM head). assistant_lookbehind( int , optional , defaults to 10) — If set to a positive integer, the re-encodeing process will additionally consider the last assistant_lookbehind assistant tokens to correctly align tokens. Can only be used with different tokenizers in speculative decoding. See this blog for more details. target_lookbehind( int , optional , defaults to 10) — If set to a positive integer, the re-encodeing process will additionally consider the last target_lookbehind target tokens to correctly align tokens. Can only be used with different tokenizers in speculative decoding. See this blog for more details. Parameters related to performances and compilation compile_config (CompileConfig, optional ) — If using a static cache, this controls how generate will compile the forward pass for performance gains. Wild card generation_kwargs — Additional generation kwargs will be forwarded to the generate function of the model. Kwargs that are not present in generate ’s signature will be used in the model forward pass. Class that holds a configuration for a generation task. A generate call supports the following generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models: greedy decoding if num_beams=1 and do_sample=False contrastive search if penalty_alpha>0. and top_k>1 multinomial sampling if num_beams=1 and do_sample=True beam-search decoding if num_beams>1 and do_sample=False beam-search multinomial sampling if num_beams>1 and do_sample=True diverse beam-search decoding if num_beams>1 and num_beam_groups>1 constrained beam-search decoding if constraints!=None or force_words_ids!=None assisted decoding if assistant_model or prompt_lookup_num_tokens is passed to .generate() dola decoding if dola_layers is passed to .generate() To learn more about decoding strategies refer to the text generation strategies guide . A large number of these flags control the logits or the stopping criteria of the generation. 
Make sure you check the generate-related classes for a full description of the possible manipulations, as well as examples of their usage. from_pretrained < source > ( pretrained_model_name : typing.Union[str, os.PathLike] config_file_name : typing.Union[str, os.PathLike, NoneType] = None cache_dir : typing.Union[str, os.PathLike, NoneType] = None force_download : bool = False local_files_only : bool = False token : typing.Union[bool, str, NoneType] = None revision : str = 'main' **kwargs ) → GenerationConfig Parameters pretrained_model_name ( str or os.PathLike ) — This can be either: a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/ . config_file_name ( str or os.PathLike , optional , defaults to "generation_config.json" ) — Name of the generation configuration JSON file to be loaded from pretrained_model_name . cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force to (re-)download the configuration files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token ( str or bool , optional ) — The token to use as HTTP bearer authorization for remote files. If True , or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>" . return_unused_kwargs ( bool , optional , defaults to False ) — If False , then this function returns just the final configuration object. If True , then this functions returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored. subfolder ( str , optional , defaults to "" ) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here. kwargs ( Dict[str, Any] , optional ) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter. Returns GenerationConfig The configuration object instantiated from this pretrained model. Instantiate a GenerationConfig from a generation configuration file. Examples: Copied >>> from transformers import GenerationConfig >>> # Download configuration from huggingface.co and cache. >>> generation_config = GenerationConfig.from_pretrained( "openai-community/gpt2" ) >>> # E.g. 
config was saved using *save_pretrained('./test/saved_model/')* >>> generation_config.save_pretrained( "./test/saved_model/" ) >>> generation_config = GenerationConfig.from_pretrained( "./test/saved_model/" ) >>> # You can also specify configuration names to your generation configuration file >>> generation_config.save_pretrained( "./test/saved_model/" , config_file_name= "my_configuration.json" ) >>> generation_config = GenerationConfig.from_pretrained( "./test/saved_model/" , "my_configuration.json" ) >>> # If you'd like to try a minor variation to an existing configuration, you can also pass generation >>> # arguments to `.from_pretrained()`. Be mindful that typos and unused arguments will be ignored >>> generation_config, unused_kwargs = GenerationConfig.from_pretrained( ... "openai-community/gpt2" , top_k= 1 , foo= False , do_sample= True , return_unused_kwargs= True ... ) >>> generation_config.top_k 1 >>> unused_kwargs { 'foo' : False } from_model_config < source > ( model_config : PretrainedConfig ) → GenerationConfig Parameters model_config ( PretrainedConfig ) — The model config that will be used to instantiate the generation config. Returns GenerationConfig The configuration object instantiated from those parameters. Instantiates a GenerationConfig from a PretrainedConfig . This function is useful to convert legacy PretrainedConfig objects, which may contain generation parameters, into a stand-alone GenerationConfig . save_pretrained < source > ( save_directory : typing.Union[str, os.PathLike] config_file_name : typing.Union[str, os.PathLike, NoneType] = None push_to_hub : bool = False **kwargs ) Parameters save_directory ( str or os.PathLike ) — Directory where the configuration JSON file will be saved (will be created if it does not exist). config_file_name ( str or os.PathLike , optional , defaults to "generation_config.json" ) — Name of the generation configuration JSON file to be saved in save_directory . push_to_hub ( bool , optional , defaults to False ) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). kwargs ( Dict[str, Any] , optional ) — Additional key word arguments passed along to the push_to_hub() method. Save a generation configuration object to the directory save_directory , so that it can be re-loaded using the from_pretrained() class method. update < source > ( **kwargs ) → Dict[str, Any] Parameters kwargs ( Dict[str, Any] ) — Dictionary of attributes to tentatively update this class. Returns Dict[str, Any] Dictionary containing all the key-value pairs that were not used to update the instance. Updates attributes of this class instance with attributes from kwargs if they match existing attributes, returning all the unused kwargs. validate < source > ( is_init = False ) Parameters is_init ( bool , optional , defaults to False ) — Whether the validation is performed during the initialization of the instance. Validates the values of the attributes of the GenerationConfig instance. Raises exceptions in the presence of parameterization that can be detected as incorrect from the configuration instance alone. Note that some parameters not validated here are best validated at generate runtime, as they may depend on other inputs and/or the model, such as parameters related to the generation length. 
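Putting these methods together, a hedged end-to-end sketch might look like the following; the parameter values, the ./my_generation_config directory, and hypothetical_flag are illustrative choices, not defaults taken from this reference:

# Sketch: build, validate, tweak, and persist a GenerationConfig.
from transformers import GenerationConfig

config = GenerationConfig(max_new_tokens=64, do_sample=True, top_k=50, temperature=0.7)
config.validate()  # raises if the flag combination is detectably inconsistent

# update() only applies kwargs that match existing attributes and returns the rest.
unused = config.update(top_p=0.9, hypothetical_flag=True)
print(unused)  # {'hypothetical_flag': True}

# Persist the configuration and reload it later (a Hub repo id also works here).
config.save_pretrained("./my_generation_config")
reloaded = GenerationConfig.from_pretrained("./my_generation_config")
print(reloaded.max_new_tokens)  # 64

from_model_config() covers the legacy path where generation defaults still live on a model's PretrainedConfig rather than in a stand-alone generation configuration.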
get_generation_mode < source > ( assistant_model : typing.Optional[ForwardRef('PreTrainedModel')] = None ) → GenerationMode Parameters assistant_model ( PreTrainedModel , optional ) — The assistant model to be used for assisted generation. If set, the generation mode will be assisted generation. Returns GenerationMode The generation mode triggered by the instance. Returns the generation mode triggered by the GenerationConfig instance. GenerationMixin class transformers. GenerationMixin < source > ( ) A class containing all functions for auto-regressive text generation, to be used as a mixin in PreTrainedModel . The class exposes generate() , which can be used for: greedy decoding if num_beams=1 and do_sample=False contrastive search if penalty_alpha>0 and top_k>1 multinomial sampling if num_beams=1 and do_sample=True beam-search decoding if num_beams>1 and do_sample=False beam-search multinomial sampling if num_beams>1 and do_sample=True diverse beam-search decoding if num_beams>1 and num_beam_groups>1 constrained beam-search decoding if constraints!=None or force_words_ids!=None assisted decoding if assistant_model or prompt_lookup_num_tokens is passed to .generate() To learn more about decoding strategies refer to the text generation strategies guide . generate < source > ( inputs : typing.Optional[torch.Tensor] = None generation_config : typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None logits_processor : typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria : typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None prefix_allowed_tokens_fn : typing.Optional[typing.Callable[[int, torch.Tensor], typing.List[int]]] = None synced_gpus : typing.Optional[bool] = None assistant_model : typing.Optional[ForwardRef('PreTrainedModel')] = None streamer : typing.Optional[ForwardRef('BaseStreamer')] = None negative_prompt_ids : typing.Optional[torch.Tensor] = None negative_prompt_attention_mask : typing.Optional[torch.Tensor] = None **kwargs ) → ModelOutput or torch.LongTensor Parameters inputs ( torch.Tensor of varying shape depending on the modality, optional ) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids . For encoder-decoder models inputs can represent any of input_ids , input_values , input_features , or pixel_values . generation_config ( GenerationConfig , optional ) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig ’s default values, whose documentation should be checked to parameterize generation. logits_processor ( LogitsProcessorList , optional ) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. 
stopping_criteria ( StoppingCriteriaList , optional ) — Custom stopping criteria that complements the default stopping criteria built from arguments and a generation config. If a stopping criteria is passed that is already created with the arguments or a generation config an error is thrown. If your stopping criteria depends on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate . This feature is intended for advanced users. prefix_allowed_tokens_fn ( Callable[[int, torch.Tensor], List[int]] , optional ) — If provided, this function constraints the beam search to allowed tokens only at each step. If not provided no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids . It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens inputs_ids . This argument is useful for constrained generation conditioned on the prefix, as described in Autoregressive Entity Retrieval . synced_gpus ( bool , optional ) — Whether to continue running the while loop until max_length. Unless overridden, this flag will be set to True if using FullyShardedDataParallel or DeepSpeed ZeRO Stage 3 with multiple GPUs to avoid deadlocking if one GPU finishes generating before other GPUs. Otherwise, defaults to False . assistant_model ( PreTrainedModel , optional ) — An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you’re calling generate from. As such, the assistant model should be much smaller. streamer ( BaseStreamer , optional ) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing. negative_prompt_ids ( torch.LongTensor of shape (batch_size, sequence_length) , optional ) — The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions. negative_prompt_attention_mask ( torch.LongTensor of shape (batch_size, sequence_length) , optional ) — Attention_mask for negative_prompt_ids . kwargs ( Dict[str, Any] , optional ) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_ . Returns ModelOutput or torch.LongTensor A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True ) or a torch.LongTensor . If the model is not an encoder-decoder model ( model.config.is_encoder_decoder=False ), the possible ModelOutput types are: GenerateDecoderOnlyOutput , GenerateBeamDecoderOnlyOutput If the model is an encoder-decoder model ( model.config.is_encoder_decoder=True ), the possible ModelOutput types are: GenerateEncoderDecoderOutput , GenerateBeamEncoderDecoderOutput Generates sequences of token ids for models with a language modeling head. Most generation-controlling parameters are set in generation_config which, if not passed, will be set to the model’s default generation configuration. 
You can override any generation_config by passing the corresponding parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True) . For an overview of generation strategies and code examples, check out the following guide . compute_transition_scores < source > ( sequences : Tensor scores : typing.Tuple[torch.Tensor] beam_indices : typing.Optional[torch.Tensor] = None normalize_logits : bool = False ) → torch.Tensor Parameters sequences ( torch.LongTensor ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(torch.FloatTensor) ) — Transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size) . beam_indices ( torch.LongTensor , optional ) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length) . Only required if a num_beams>1 at generate-time. normalize_logits ( bool , optional , defaults to False ) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized). Returns torch.Tensor A torch.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits) Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quicky obtain the scores of the selected tokens at generation time. Examples: Copied >>> from transformers import GPT2Tokenizer, AutoModelForCausalLM >>> import numpy as np >>> tokenizer = GPT2Tokenizer.from_pretrained( "gpt2" ) >>> model = AutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer.pad_token_id = tokenizer.eos_token_id >>> inputs = tokenizer([ "Today is" ], return_tensors= "pt" ) >>> # Example 1: Print the scores for each token generated with Greedy Search >>> outputs = model.generate(**inputs, max_new_tokens= 5 , return_dict_in_generate= True , output_scores= True ) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, normalize_logits= True ... ) >>> # input_length is the length of the input prompt for decoder-only models, like the GPT family, and 1 for >>> # encoder-decoder models, like BART or T5. >>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[ 1 ] >>> generated_tokens = outputs.sequences[:, input_length:] >>> for tok, score in zip (generated_tokens[ 0 ], transition_scores[ 0 ]): ... # | token | token string | log probability | probability ... print ( f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy(): .3 f} | {np.exp(score.numpy()): .2 %} " ) | 262 | the | - 1.414 | 24.33 % | 1110 | day | - 2.609 | 7.36 % | 618 | when | - 2.010 | 13.40 % | 356 | we | - 1.859 | 15.58 % | 460 | can | - 2.508 | 8.14 % >>> # Example 2: Reconstruct the sequence scores from Beam Search >>> outputs = model.generate( ... **inputs, ... max_new_tokens= 5 , ... num_beams= 4 , ... num_return_sequences= 4 , ... return_dict_in_generate= True , ... output_scores= True , ... ) >>> transition_scores = model.compute_transition_scores( ... 
outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits= False ... ) >>> # If you sum the generated tokens' scores and apply the length penalty, you'll get the sequence scores. >>> # Tip 1: recomputing the scores is only guaranteed to match with `normalize_logits=False`. Depending on the >>> # use case, you might want to recompute it with `normalize_logits=True`. >>> # Tip 2: the output length does NOT include the input length >>> output_length = np. sum (transition_scores.numpy() < 0 , axis= 1 ) >>> length_penalty = model.generation_config.length_penalty >>> reconstructed_scores = transition_scores. sum (axis= 1 ) / (output_length**length_penalty) >>> print (np.allclose(outputs.sequences_scores, reconstructed_scores)) True TFGenerationMixin class transformers. TFGenerationMixin < source > ( ) A class containing all of the functions supporting generation, to be used as a mixin in TFPreTrainedModel . The class exposes generate() , which can be used for: greedy decoding by calling greedy_search() if num_beams=1 and do_sample=False contrastive search by calling contrastive_search() if penalty_alpha>0 and top_k>1 multinomial sampling by calling sample() if num_beams=1 and do_sample=True beam-search decoding by calling beam_search() if num_beams>1 You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide . generate < source > ( inputs : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None generation_config : typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None logits_processor : typing.Optional[transformers.generation.tf_logits_process.TFLogitsProcessorList] = None seed = None **kwargs ) → ModelOutput or tf.Tensor Parameters inputs ( tf.Tensor of varying shape depending on the modality, optional ) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should of in the format of input_ids . For encoder-decoder models inputs can represent any of input_ids , input_values , input_features , or pixel_values . generation_config ( ~generation.GenerationConfig , optional ) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which had the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig ’s default values, whose documentation should be checked to parameterize generation. logits_processor ( LogitsProcessorList , optional ) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. seed ( List[int] , optional ) — Random seed to control sampling, containing two integers, used when do_sample is True . See the seed argument from stateless functions in tf.random . 
kwargs ( Dict[str, Any] , optional ) — Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_ . Returns ModelOutput or tf.Tensor A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True ) or a tf.Tensor . If the model is not an encoder-decoder model ( model.config.is_encoder_decoder=False ), the possible ModelOutput types are: TFGreedySearchDecoderOnlyOutput , TFSampleDecoderOnlyOutput , TFBeamSearchDecoderOnlyOutput , TFBeamSampleDecoderOnlyOutput If the model is an encoder-decoder model ( model.config.is_encoder_decoder=True ), the possible ModelOutput types are: TFGreedySearchEncoderDecoderOutput , TFSampleEncoderDecoderOutput , TFBeamSearchEncoderDecoderOutput , TFBeamSampleEncoderDecoderOutput Generates sequences of token ids for models with a language modeling head. Most generation-controlling parameters are set in generation_config which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config by passing the corresponding parameters to generate, e.g. .generate(inputs, num_beams=4, do_sample=True) . For an overview of generation strategies and code examples, check out the following guide . compute_transition_scores < source > ( sequences : Tensor scores : typing.Tuple[tensorflow.python.framework.tensor.Tensor] beam_indices : typing.Optional[tensorflow.python.framework.tensor.Tensor] = None normalize_logits : bool = False ) → tf.Tensor Parameters sequences ( tf.Tensor ) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id . scores ( tuple(tf.Tensor) ) — Transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size) . beam_indices ( tf.Tensor , optional ) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) . Only required if a num_beams>1 at generate-time. normalize_logits ( bool , optional , defaults to False ) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized). Returns tf.Tensor A tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits) Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quicky obtain the scores of the selected tokens at generation time. 
Examples: Copied >>> from transformers import GPT2Tokenizer, TFAutoModelForCausalLM >>> import numpy as np >>> tokenizer = GPT2Tokenizer.from_pretrained( "openai-community/gpt2" ) >>> model = TFAutoModelForCausalLM.from_pretrained( "openai-community/gpt2" ) >>> tokenizer.pad_token_id = tokenizer.eos_token_id >>> inputs = tokenizer([ "Today is" ], return_tensors= "tf" ) >>> # Example 1: Print the scores for each token generated with Greedy Search >>> outputs = model.generate(**inputs, max_new_tokens= 5 , return_dict_in_generate= True , output_scores= True ) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, normalize_logits= True ... ) >>> # input_length is the length of the input prompt for decoder-only models, like the GPT family, and 1 for >>> # encoder-decoder models, like BART or T5. >>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[ 1 ] >>> generated_tokens = outputs.sequences[:, input_length:] >>> for tok, score in zip (generated_tokens[ 0 ], transition_scores[ 0 ]): ... # | token | token string | logits | probability ... print ( f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy(): .3 f} | {np.exp(score.numpy()): .2 %} " ) | 262 | the | - 1.414 | 24.33 % | 1110 | day | - 2.609 | 7.36 % | 618 | when | - 2.010 | 13.40 % | 356 | we | - 1.859 | 15.58 % | 460 | can | - 2.508 | 8.14 % >>> # Example 2: Reconstruct the sequence scores from Beam Search >>> outputs = model.generate( ... **inputs, ... max_new_tokens= 5 , ... num_beams= 4 , ... num_return_sequences= 4 , ... return_dict_in_generate= True , ... output_scores= True , ... ) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits= False ... ) >>> # If you sum the generated tokens' scores and apply the length penalty, you'll get the sequence scores. >>> # Tip: recomputing the scores is only guaranteed to match with `normalize_logits=False`. Depending on the >>> # use case, you might want to recompute it with `normalize_logits=True`. >>> output_length = np. sum (transition_scores.numpy() < 0 , axis= 1 ) >>> length_penalty = model.generation_config.length_penalty >>> reconstructed_scores = np. sum (transition_scores, axis= 1 ) / (output_length**length_penalty) >>> print (np.allclose(outputs.sequences_scores, reconstructed_scores)) True FlaxGenerationMixin class transformers. FlaxGenerationMixin < source > ( ) A class containing all functions for auto-regressive text generation, to be used as a mixin in FlaxPreTrainedModel . The class exposes generate() , which can be used for: greedy decoding by calling _greedy_search() if num_beams=1 and do_sample=False multinomial sampling by calling _sample() if num_beams=1 and do_sample=True beam-search decoding by calling _beam_search() if num_beams>1 and do_sample=False You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide . 
generate < source > ( input_ids : Array generation_config : typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None prng_key : typing.Optional[jax.Array] = None trace : bool = True params : typing.Optional[typing.Dict[str, jax.Array]] = None logits_processor : typing.Optional[transformers.generation.flax_logits_process.FlaxLogitsProcessorList] = None **kwargs ) Parameters input_ids ( jnp.ndarray of shape (batch_size, sequence_length) ) — The sequence used as a prompt for the generation. generation_config ( ~generation.GenerationConfig , optional ) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig 's default values, whose documentation should be checked to parameterize generation. trace ( bool , optional , defaults to True ) — Whether to trace generation. Setting trace=False should only be used for debugging and will lead to a considerably slower runtime. params ( Dict[str, jnp.ndarray] , optional ) — Optionally the model parameters can be passed. Can be useful for parallelized generation. logits_processor ( FlaxLogitsProcessorList , optional ) — Custom logits processors that complement the default logits processors built from arguments and the generation config. If a logits processor that was already created from the arguments or a generation config is passed, an error is thrown. This feature is intended for advanced users. kwargs ( Dict[str, Any] , optional ) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder-specific kwargs should not be prefixed and decoder-specific kwargs should be prefixed with decoder_ . Generates sequences of token ids for models with a language modeling head.
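Unlike the PyTorch and TensorFlow mixins above, no usage example is shown for Flax, so here is a minimal, hedged sketch. The checkpoint is an illustrative choice that is assumed to ship Flax weights, and jax and flax must be installed.

from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = FlaxAutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer(["Today is"], return_tensors="np")

# Greedy decoding (num_beams=1, do_sample=False) dispatches to _greedy_search().
outputs = model.generate(inputs["input_ids"], max_new_tokens=5, do_sample=False)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))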
Interface__TextGenerationStreamDetails.txt
Interface: TextGenerationStreamDetails
Properties
• Optional best_of_sequences : TextGenerationStreamBestOfSequence[] Additional sequences when using the best_of parameter. Defined in inference/src/tasks/nlp/textGenerationStream.ts:66
• finish_reason : TextGenerationStreamFinishReason Generation finish reason. Defined in inference/src/tasks/nlp/textGenerationStream.ts:56
• generated_tokens : number Number of generated tokens. Defined in inference/src/tasks/nlp/textGenerationStream.ts:58
• prefill : TextGenerationStreamPrefillToken[] Prompt tokens. Defined in inference/src/tasks/nlp/textGenerationStream.ts:62
• Optional seed : number Sampling seed if sampling was activated. Defined in inference/src/tasks/nlp/textGenerationStream.ts:60
• tokens : TextGenerationStreamToken[] Defined in inference/src/tasks/nlp/textGenerationStream.ts:64
Interface__TextGenerationStreamPrefillToken.txt
Interface: TextGenerationStreamPrefillToken
Properties
• id : number Token ID from the model tokenizer. Defined in inference/src/tasks/nlp/textGenerationStream.ts:21
• Optional logprob : number Logprob. Optional since the logprob of the first token cannot be computed. Defined in inference/src/tasks/nlp/textGenerationStream.ts:28
• text : string Token text. Defined in inference/src/tasks/nlp/textGenerationStream.ts:23
Image_Classification.txt
Image classification
Image classification datasets are used to train a model to classify an entire image. There are a wide variety of applications enabled by these datasets such as identifying endangered wildlife species or screening for disease in medical images. This guide will show you how to apply transformations to an image classification dataset.
Before you start, make sure you have up-to-date versions of albumentations and cv2 installed:
pip install -U albumentations opencv-python
This guide uses the Beans dataset for identifying the type of bean plant disease based on an image of its leaf. Load the dataset and take a look at an example:
>>> from datasets import load_dataset
>>> dataset = load_dataset("beans")
>>> dataset["train"][10]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x7F8D2F4D7A10>,
 'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/angular_leaf_spot/angular_leaf_spot_train.204.jpg',
 'labels': 0}
The dataset has three fields:
image : a PIL image object.
image_file_path : the path to the image file.
labels : the label or category of the image.
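Because labels is stored as a class index, it can help to map it back to a human-readable name. A small sketch that continues from the example above and assumes the labels column is a ClassLabel feature (as it is for Beans):
>>> labels = dataset["train"].features["labels"]
>>> labels.names
['angular_leaf_spot', 'bean_rust', 'healthy']
>>> labels.int2str(dataset["train"][10]["labels"])
'angular_leaf_spot'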
Next, check out an image:
Now apply some augmentations with albumentations . You'll randomly crop the image, flip it horizontally, and adjust its brightness.
>>> import cv2
>>> import albumentations
>>> import numpy as np

>>> transform = albumentations.Compose([
...     albumentations.RandomCrop(width=256, height=256),
...     albumentations.HorizontalFlip(p=0.5),
...     albumentations.RandomBrightnessContrast(p=0.2),
... ])
Create a function to apply the transformation to the images:
>>> def transforms(examples):
...     examples["pixel_values"] = [
...         transform(image=np.array(image))["image"] for image in examples["image"]
...     ]
...     return examples
Use the set_transform() function to apply the transformation on-the-fly to batches of the dataset to consume less disk space:
>>> dataset.set_transform(transforms)
You can verify the transformation worked by indexing into the pixel_values of the first example:
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> img = dataset["train"][0]["pixel_values"]
>>> plt.imshow(img)
Now that you know how to process a dataset for image classification, learn how to train an image classification model and use it for inference.
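To feed the on-the-fly transformed examples to a PyTorch DataLoader, you still need to stack the NumPy pixel_values into channels-first tensors. Here is a minimal, hypothetical collate sketch, assuming torch is installed and continuing from the dataset and transform defined above:
>>> import torch
>>> from torch.utils.data import DataLoader

>>> def collate_fn(examples):
...     # Stack HWC uint8 arrays into a (batch, channels, height, width) tensor
...     pixel_values = torch.stack(
...         [torch.tensor(example["pixel_values"]).permute(2, 0, 1) for example in examples]
...     )
...     labels = torch.tensor([example["labels"] for example in examples])
...     return {"pixel_values": pixel_values, "labels": labels}

>>> dataloader = DataLoader(dataset["train"], collate_fn=collate_fn, batch_size=4)
>>> next(iter(dataloader))["pixel_values"].shape
torch.Size([4, 3, 256, 256])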
Online_DPO_Trainer.txt
Online DPO Trainer
Overview
Online DPO was proposed in Direct Language Model Alignment from Online AI Feedback by Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, and Mathieu Blondel. The abstract from the paper is the following:
Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.
This post-training method was contributed by Michael Noukhovitch , Shengyi Costa Huang , Quentin Gallouédec , and Edward Beeching .
Quick start
This example demonstrates how to train a model using the online DPO method. We use the Qwen 0.5B model as the base model and PairRMJudge as a judge. We use the prompts from the UltraFeedback dataset .
You can view the prompts in the dataset here: Below is the script to train the model: Copied # train_online_dpo.py from datasets import load_dataset from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) tokenizer = AutoTokenizer.from_pretrained( "Qwen/Qwen2-0.5B-Instruct" ) judge = PairRMJudge() train_dataset = load_dataset( "trl-lib/ultrafeedback-prompt" , split= "train" ) training_args = OnlineDPOConfig(output_dir= "Qwen2-0.5B-OnlineDPO" , logging_steps= 10 ) trainer = OnlineDPOTrainer( model=model, judge=judge, args=training_args, processing_class=tokenizer, train_dataset=train_dataset ) trainer.train() Execute the script using the following command: Copied accelerate launch train_online_dpo.py Distributed across 8 GPUs, the training takes approximately 1 hour. You can verify the training progress by checking the reward graph. An increasing trend in both the reward for rejected and chosen completions indicates that the model is improving and generating better responses over time. To see how the trained model performs, you can use the TRL Chat CLI . $ trl chat --model_name_or_path trl-lib/Qwen2-0.5B-OnlineDPO <quentin_gallouedec>: What is the best programming language? <trl-lib/Qwen2-0.5B-OnlineDPO>: The best programming language depends on your specific needs and priorities. Some people prefer imperative programming languages (like Haskell or Lisp), while others prefer functional programming languages (like Scala or Python). It's important to consider your work style, programming environment, and project requirements when choosing a programming language. Expected dataset type Online DPO only requires a prompt-only dataset (unlike offline DPO, that expects preference dataset ). The OnlineDPOTrainer supports both conversational and standard dataset format. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset. Usage tips Use a reward model Instead of a judge, you can chose to use a reward model — see Reward Bench for a leaderboard of public models you can use. Below is a code example showing how to replace a judge with the trl-lib/Qwen2-0.5B-Reward model: Copied - from trl import PairRMJudge + from transformers import AutoModelForSequenceClassification - judge = PairRMJudge() + reward_model = AutoModelForSequenceClassification.from_pretrained("trl-lib/Qwen2-0.5B-Reward", num_labels=1) + reward_tokenizer = AutoTokenizer.from_pretrained("trl-lib/Qwen2-0.5B-Reward") trainer = OnlineDPOTrainer( ... - judge=judge, + reward_model=reward_model, + reward_processing_class=reward_tokenizer, ... ) Encourage EOS token generation When using a reward model, we may want the model to generate completions within a given length. During training, the model will generate completions up to the maximum length specified in the max_new_tokens argument of OnlineDPOConfig . If you want to penalize the model for not generating an EOS token before reaching the maximum length, you can use the missing_eos_penalty argument of OnlineDPOConfig : Copied training_args = OnlineDPOConfig(..., max_new_tokens= 128 , missing_eos_penalty= 1.0 ) Logging Completions To better understand your model’s behavior during training, you can log sample completions periodically using the LogCompletionsCallback . 
Copied trainer = OnlineDPOTrainer(..., eval_dataset=eval_dataset) completions_callback = LogCompletionsCallback(trainer, num_prompts= 8 ) trainer.add_callback(completions_callback) This callback logs the model’s generated completions directly to Weights & Biases. Example script We provide an example script to train a model using the online DPO method. The script is available in examples/scripts/dpo_online.py To test the online DPO script with the Qwen2.5 0.5B model on the UltraFeedback dataset , run the following command: Copied python examples/scripts/dpo_online.py \ --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \ --judge pair_rm \ --dataset_name trl-lib/ultrafeedback-prompt \ --learning_rate 5.0e-7 \ --logging_steps 25 \ --output_dir Qwen2.5-0.5B-Online-DPO-PairRM \ --warmup_ratio 0.1 \ --push_to_hub Logged metrics The logged metrics are as follows. Here is an example tracked run at Weights and Biases objective/kl : The mean Kullback-Leibler (KL) divergence between the current model and reference model. objective/entropy : The mean entropy of the model, indicating the randomness of the actions chosen by the model. objective/non_score_reward : The mean reward from non-score-related sources, basically beta * kl.sum(1) , where beta is the KL penalty coefficient and kl is the per-token KL divergence. objective/rlhf_reward : The mean RLHF reward, which is scores - non_score_reward . The rlhf_reward is the ultimate objective of online DPO training. If training works as intended, this metric should keep going up. objective/scores : The mean scores returned by the reward model. objective/scores_margin : The mean score margin (according to the external reward model) between the chosen and rejected completions. rewards/chosen : The mean reward (according to online DPO’s implicit reward model)of the chosen completions. rewards/rejected : The mean reward (according to online DPO’s implicit reward model) of the rejected completions. rewards/accuracies : The accuracies of the online DPO’s implicit reward model. rewards/margins : The mean reward margin (according to online DPO’s implicit reward model) between the chosen and rejected completions. logps/chosen : The mean log probabilities of the chosen completions. logps/rejected : The mean log probabilities of the rejected completions. val/contain_eos_token : The fraction of completions which contain an EOS token. beta : The parameter that controls the weight of the loss term representing the deviation from the reference model. Typically fixed, but can be made dynamic by passing a list to OnlineDPOConfig . Benchmark experiments To validate the online DPO implementation works, we ran experiments with the Pythia 1B, 2.8B, and 6.9B models on a single node of 8 x H100s. Here are the commands we used to run the experiments. We take the SFT / RM models directly from The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization . 
Copied # 1B Online DPO experiment accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml \ examples/scripts/dpo_online.py \ --model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft \ --reward_model_path trl-lib/pythia-1b-deduped-tldr-rm \ --dataset_name trl-lib/tldr \ --learning_rate 5.0e-7 \ --output_dir pythia-1b-deduped-tldr-online-dpo \ --beta 0.1 \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 2 \ --num_train_epochs 3 \ --max_new_tokens 53 \ --warmup_ratio 0.1 \ --missing_eos_penalty 1.0 \ --logging_steps 20 \ --save_steps 0.1 \ --push_to_hub # 2.8B Online DPO experiment accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \ examples/scripts/dpo_online.py \ --model_name_or_path trl-lib/pythia-2.8b-deduped-tldr-sft \ --reward_model_path trl-lib/pythia-2.8b-deduped-tldr-rm \ --dataset_name trl-lib/tldr \ --learning_rate 5.0e-7 \ --output_dir pythia-2.8b-deduped-tldr-online-dpo \ --beta 0.1 \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 2 \ --num_train_epochs 3 \ --max_new_tokens 53 \ --warmup_ratio 0.1 \ --missing_eos_penalty 1.0 \ --bf16 \ --logging_steps 20 \ --save_steps 0.1 \ --push_to_hub # 6.9B Online DPO experiment accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \ examples/scripts/dpo_online.py \ --model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-sft \ --reward_model_path trl-lib/pythia-6.9b-deduped-tldr-rm \ --dataset_name trl-lib/tldr \ --learning_rate 5.0e-7 \ --output_dir pythia-6.9b-deduped-tldr-online-dpo \ --beta 0.1 \ --per_device_train_batch_size 4 \ --gradient_accumulation_steps 4 \ --num_train_epochs 3 \ --max_new_tokens 53 \ --warmup_ratio 0.1 \ --missing_eos_penalty 1.0 \ --bf16 \ --gradient_checkpointing \ --logging_steps 20 \ --save_steps 0.1 \ --push_to_hub Checkpoints and experiment tracking are available at: 🤗 Model checkpoints 🐝 Tracked experiment To evaluate, we use vLLM to load the checkpoints and GPT-4o mini as a judge model to evaluate the generated TL;DR against the reference TL;DR. For more information on how to use judges, see Judges . Copied $ python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 33.00% python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-sft --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 41.50% python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-1b-deduped-tldr-online-dpo --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 62.60% python examples/scripts/evals/judge_tldr.py --model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-online-dpo --judge_model gpt-4o-mini --num_examples 1000 Model win rate: 74.20% We can then plot the RLHF scaling chart. Copied import matplotlib.pyplot as plt results = { "SFT" : { 1.0e9 : 0.21 , 2.8e9 : 0.27 , 6.9e9 : 0.316 }, "online-dpo" : { 1.0e9 : 0.542 , 2.8e9 : 0.746 , 6.9e9 : 0.796 }, "offline-dpo" : { 1.0e9 : 0.422 , 2.8e9 : 0.517 , 6.9e9 : 0.701 }, } plt.plot(results[ "SFT" ].keys(), results[ "SFT" ].values(), label= "SFT" , marker= "o" ) plt.plot(results[ "online-dpo" ].keys(), results[ "online-dpo" ].values(), label= "Online-dpo with RM judge" , marker= "o" ) plt.plot(results[ "offline-dpo" ].keys(), results[ "offline-dpo" ].values(), label= "Offline-dpo" , marker= "o" ) plt.axhline(y= 0.5 , color= "black" , linestyle= "-." 
, label= "Human reference summary" ) plt.xscale( "log" ) plt.xlabel( "Model size" ) plt.ylabel( "Win rate against reference summaries\n(according to GPT-4-0613)" ) plt.title( "DPO scaling by model size" ) plt.legend() plt.xlim( 5e8 , 1.2e10 ) plt.xticks([ 1e9 , 3e9 , 1e10 ], [ "1B" , "3B" , "10B" ]) plt.grid( True , which= "both" , ls= "--" , c= "0.7" ) plt.tight_layout() plt.show() The online DPO checkpoint gets increasingly more win rate as we scale up the model sizes. This is a good sign that the online DPO implementation is working as intended. OnlineDPOTrainer class trl. OnlineDPOTrainer < source > ( model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] ref_model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None reward_model : typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module, NoneType] = None judge : typing.Optional[trl.trainer.judges.BasePairwiseJudge] = None args : typing.Optional[trl.trainer.online_dpo_config.OnlineDPOConfig] = None data_collator : typing.Optional[transformers.data.data_collator.DataCollator] = None train_dataset : typing.Union[datasets.arrow_dataset.Dataset, torch.utils.data.dataset.IterableDataset, ForwardRef('datasets.Dataset'), NoneType] = None eval_dataset : typing.Union[datasets.arrow_dataset.Dataset, dict[str, datasets.arrow_dataset.Dataset], ForwardRef('datasets.Dataset'), NoneType] = None processing_class : typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.image_processing_utils.BaseImageProcessor, transformers.feature_extraction_utils.FeatureExtractionMixin, transformers.processing_utils.ProcessorMixin, NoneType] = None reward_processing_class : typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None peft_config : typing.Optional[dict] = None compute_metrics : typing.Optional[typing.Callable[[transformers.trainer_utils.EvalPrediction], dict]] = None callbacks : typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers : tuple = (None, None) preprocess_logits_for_metrics : typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None ) Parameters model ( transformers.PreTrainedModel or torch.nn.Module ) — The model to train, preferably an AutoModelForCausalLM . ref_model ( transformers.PreTrainedModel or torch.nn.Module or None ) — The reference model to use for training. If None is specified, the reference model will be created from the model. reward_model ( transformers.PreTrainedModel or torch.nn.Module or None ) — The reward model to score completions with, preferably an AutoModelForSequenceClassification . judge ( BasePairwiseJudge ) — The judge to use for pairwise comparison of model completions. args ( OnlineDPOConfig ) — The online DPO config arguments to use for training. data_collator ( transformers.DataCollator ) — The data collator to use for training. If None is specified, the default data collator ( DPODataCollatorWithPadding ) will be used which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences. train_dataset ( datasets.Dataset ) — The dataset to use for training. eval_dataset ( datasets.Dataset ) — The dataset to use for evaluation. processing_class ( PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin , optional ) — Processing class used to process the data. 
If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model. peft_config ( dict ) — The peft config to use for training. compute_metrics ( Callable[[EvalPrediction], dict] , optional ) — The function to use to compute the metrics. Must take a EvalPrediction and return a dictionary string to metric values. callbacks ( list[transformers.TrainerCallback] ) — The callbacks to use for training. optimizers ( tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] ) — The optimizer and scheduler to use for training. preprocess_logits_for_metrics ( Callable[[torch.Tensor, torch.Tensor], torch.Tensor] ) — The function to use to preprocess the logits before computing the metrics. Initialize OnlineDPOTrainer. create_model_card < source > ( model_name : typing.Optional[str] = None dataset_name : typing.Optional[str] = None tags : typing.Union[str, list[str], NoneType] = None ) Parameters model_name ( str , optional , defaults to None ) — The name of the model. dataset_name ( str , optional , defaults to None ) — The name of the dataset used for training. tags ( str , list[str] or None , optional , defaults to None ) — Tags to be associated with the model card. Creates a draft of a model card using the information available to the Trainer . tokenize_row < source > ( feature is_encoder_decoder : bool tokenizer : PreTrainedTokenizerBase ) Tokenize a single row from a DPO specific dataset. OnlineDPOConfig class trl. OnlineDPOConfig < source > ( output_dir : str overwrite_output_dir : bool = False do_train : bool = False do_eval : bool = False do_predict : bool = False eval_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only : bool = False per_device_train_batch_size : int = 8 per_device_eval_batch_size : int = 8 per_gpu_train_batch_size : typing.Optional[int] = None per_gpu_eval_batch_size : typing.Optional[int] = None gradient_accumulation_steps : int = 1 eval_accumulation_steps : typing.Optional[int] = None eval_delay : typing.Optional[float] = 0 torch_empty_cache_steps : typing.Optional[int] = None learning_rate : float = 5e-07 weight_decay : float = 0.0 adam_beta1 : float = 0.9 adam_beta2 : float = 0.999 adam_epsilon : float = 1e-08 max_grad_norm : float = 1.0 num_train_epochs : float = 3.0 max_steps : int = -1 lr_scheduler_type : typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs : typing.Union[dict, str, NoneType] = <factory> warmup_ratio : float = 0.0 warmup_steps : int = 0 log_level : typing.Optional[str] = 'passive' log_level_replica : typing.Optional[str] = 'warning' log_on_each_node : bool = True logging_dir : typing.Optional[str] = None logging_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step : bool = False logging_steps : float = 500 logging_nan_inf_filter : bool = True save_strategy : typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps : float = 500 save_total_limit : typing.Optional[int] = None save_safetensors : typing.Optional[bool] = True save_on_each_node : bool = False save_only_model : bool = False restore_callback_states_from_checkpoint : bool = False no_cuda : bool = False use_cpu : bool = False use_mps_device : bool = False seed : int = 42 data_seed : typing.Optional[int] = None jit_mode_eval : bool = False use_ipex : bool = False bf16 : bool = False fp16 : bool = False 
fp16_opt_level : str = 'O1' half_precision_backend : str = 'auto' bf16_full_eval : bool = False fp16_full_eval : bool = False tf32 : typing.Optional[bool] = None local_rank : int = -1 ddp_backend : typing.Optional[str] = None tpu_num_cores : typing.Optional[int] = None tpu_metrics_debug : bool = False debug : typing.Union[str, typing.List[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last : bool = False eval_steps : typing.Optional[float] = None dataloader_num_workers : int = 0 dataloader_prefetch_factor : typing.Optional[int] = None past_index : int = -1 run_name : typing.Optional[str] = None disable_tqdm : typing.Optional[bool] = None remove_unused_columns : typing.Optional[bool] = True label_names : typing.Optional[typing.List[str]] = None load_best_model_at_end : typing.Optional[bool] = False metric_for_best_model : typing.Optional[str] = None greater_is_better : typing.Optional[bool] = None ignore_data_skip : bool = False fsdp : typing.Union[typing.List[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params : int = 0 fsdp_config : typing.Union[dict, str, NoneType] = None fsdp_transformer_layer_cls_to_wrap : typing.Optional[str] = None accelerator_config : typing.Union[dict, str, NoneType] = None deepspeed : typing.Union[dict, str, NoneType] = None label_smoothing_factor : float = 0.0 optim : typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch' optim_args : typing.Optional[str] = None adafactor : bool = False group_by_length : bool = False length_column_name : typing.Optional[str] = 'length' report_to : typing.Union[NoneType, str, typing.List[str]] = None ddp_find_unused_parameters : typing.Optional[bool] = None ddp_bucket_cap_mb : typing.Optional[int] = None ddp_broadcast_buffers : typing.Optional[bool] = None dataloader_pin_memory : bool = True dataloader_persistent_workers : bool = False skip_memory_metrics : bool = True use_legacy_prediction_loop : bool = False push_to_hub : bool = False resume_from_checkpoint : typing.Optional[str] = None hub_model_id : typing.Optional[str] = None hub_strategy : typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token : typing.Optional[str] = None hub_private_repo : typing.Optional[bool] = None hub_always_push : bool = False gradient_checkpointing : bool = False gradient_checkpointing_kwargs : typing.Union[dict, str, NoneType] = None include_inputs_for_metrics : bool = False include_for_metrics : typing.List[str] = <factory> eval_do_concat_batches : bool = True fp16_backend : str = 'auto' evaluation_strategy : typing.Union[transformers.trainer_utils.IntervalStrategy, str] = None push_to_hub_model_id : typing.Optional[str] = None push_to_hub_organization : typing.Optional[str] = None push_to_hub_token : typing.Optional[str] = None mp_parameters : str = '' auto_find_batch_size : bool = False full_determinism : bool = False torchdynamo : typing.Optional[str] = None ray_scope : typing.Optional[str] = 'last' ddp_timeout : typing.Optional[int] = 1800 torch_compile : bool = False torch_compile_backend : typing.Optional[str] = None torch_compile_mode : typing.Optional[str] = None dispatch_batches : typing.Optional[bool] = None split_batches : typing.Optional[bool] = None include_tokens_per_second : typing.Optional[bool] = False include_num_input_tokens_seen : typing.Optional[bool] = False neftune_noise_alpha : typing.Optional[float] = None optim_target_modules : typing.Union[NoneType, str, typing.List[str]] = None batch_eval_metrics : bool = False eval_on_start : bool 
= False use_liger_kernel : typing.Optional[bool] = False eval_use_gather_object : typing.Optional[bool] = False average_tokens_across_devices : typing.Optional[bool] = False reward_model_path : typing.Optional[str] = None judge : typing.Optional[str] = None max_new_tokens : int = 64 temperature : float = 0.9 missing_eos_penalty : typing.Optional[float] = None beta : list = <factory> loss_type : typing.Literal['sigmoid', 'ipo'] = 'sigmoid' dataset_num_proc : typing.Optional[int] = None disable_dropout : bool = True ) Parameters learning_rate ( float , optional , defaults to 5e-7 ) — Initial learning rate for AdamW optimizer. The default value replaces that of TrainingArguments . reward_model_path ( Optional[str] , optional , defaults to None ) — Path to the reward model. Either judge or reward_model_path must be set, but not both. judge ( Optional[str] , optional , defaults to None ) — Name of the judge to use. Either judge or reward_model_path must be set, but not both. max_new_tokens ( int , optional , defaults to 64 ) — Maximum number of tokens to generate per completion. temperature ( float , optional , defaults to 0.9 ) — Temperature for sampling. The higher the temperature, the more random the completions. missing_eos_penalty ( Optional[float] , optional , defaults to None ) — Penalty applied to the score when the model fails to generate an EOS token. This is useful to encourage to generate completions shorter than the maximum length ( max_new_tokens ). The penalty must be a positive value. beta ( float or list[float] , optional , defaults to 0.1 ) — Parameter controlling the deviation from the reference model. Higher β means less deviation from the reference model. For the IPO loss ( loss_type="ipo" ), β is the regularization parameter denoted by τ in the paper . If a list of floats is provided then the β is selected for each new epoch and the last β is used for the rest of the epochs. loss_type ( str , optional , defaults to "sigmoid" ) — Type of loss to use. Possible values are: "sigmoid" : sigmoid loss from the original DPO paper. "ipo" : IPO loss from the IPO paper. dataset_num_proc ( Optional[int] , optional , defaults to None ) — Number of processes to use for processing the dataset. disable_dropout ( bool , optional , defaults to True ) — Whether to disable dropout in the model. Configuration class for the OnlineDPOTrainer . Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. < > Update on GitHub ← DPO GKD → Online DP O Trainer Overview Quick start Expected dataset type Usage tips Use a reward model Encourage EO S token generation Logging Completions Example script Logged metrics Benchmark experiments OnlineDPO Trainer OnlineDPO Config |
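The HfArgumentParser note above is easy to make concrete. Here is a minimal, hypothetical sketch of turning OnlineDPOConfig into command-line arguments; the script name and flag values are illustrative only:

# parse_online_dpo_config.py
from transformers import HfArgumentParser
from trl import OnlineDPOConfig

# Expose every OnlineDPOConfig field as an argparse flag and parse the CLI.
parser = HfArgumentParser(OnlineDPOConfig)
(training_args,) = parser.parse_args_into_dataclasses()
print(training_args.output_dir, training_args.max_new_tokens, training_args.missing_eos_penalty)

Running e.g. python parse_online_dpo_config.py --output_dir Qwen2-0.5B-OnlineDPO --max_new_tokens 64 --missing_eos_penalty 1.0 populates the dataclass the same way the example scripts above do.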
Pricing.txt
Pricing
Easily deploy machine learning models on dedicated infrastructure with 🤗 Inference Endpoints. When you create an Endpoint, you can select the instance type to deploy and scale your model according to an hourly rate. 🤗 Inference Endpoints is accessible to Hugging Face accounts with an active subscription and credit card on file. At the end of the subscription period, the user or organization account will be charged for the compute resources used while Endpoints are initializing and while successfully deployed Endpoints (ready to serve) are in a running state. You can find the hourly pricing for all available instances for 🤗 Inference Endpoints, and examples of how costs are calculated, below. While the prices are shown by the hour, the actual cost is calculated by the minute.
CPU Instances
The table below shows currently available CPU instances and their hourly pricing. If the instance type cannot be selected in the application, you need to request a quota to use it.
Provider Instance Type Instance Size Hourly rate vCPUs Memory Architecture aws intel-icl x1 $0.032 1 2 GB Intel Ice Lake (soon to be fully deprecated) aws intel-icl x2 $0.064 2 4 GB Intel Ice Lake (soon to be fully deprecated) aws intel-icl x4 $0.128 4 8 GB Intel Ice Lake (soon to be fully deprecated) aws intel-icl x8 $0.256 8 16 GB Intel Ice Lake (soon to be fully deprecated) aws intel-spr x1 $0.033 1 2 GB Intel Sapphire Rapids aws intel-spr x2 $0.067 2 4 GB Intel Sapphire Rapids aws intel-spr x4 $0.134 4 8 GB Intel Sapphire Rapids aws intel-spr x8 $0.268 8 16 GB Intel Sapphire Rapids aws intel-spr x16 $0.536 16 32 GB Intel Sapphire Rapids azure intel-xeon x1 $0.060 1 2 GB Intel Xeon azure intel-xeon x2 $0.120 2 4 GB Intel Xeon azure intel-xeon x4 $0.240 4 8 GB Intel Xeon azure intel-xeon x8 $0.480 8 16 GB Intel Xeon gcp intel-spr x1 $0.050 1 2 GB Intel Sapphire Rapids gcp intel-spr x2 $0.100 2 4 GB Intel Sapphire Rapids gcp intel-spr x4 $0.200 4 8 GB Intel Sapphire Rapids gcp intel-spr x8 $0.400 8 16 GB Intel Sapphire Rapids GPU Instances The table below shows currently available GPU instances and their hourly pricing. If the instance type cannot be selected in the application, you need to request a quota to use it. Provider Instance Type Instance Size Hourly rate GPUs Memory Architecture aws nvidia-a10g x1 $1 1 24 GB NVIDIA A10G aws nvidia-t4 x1 $0.5 1 14 GB NVIDIA T4 aws nvidia-t4 x4 $3 4 56 GB NVIDIA T4 aws nvidia-l4 x1 $0.8 1 24 GB NVIDIA L4 aws nvidia-l4 x4 $3.8 4 96 GB NVIDIA L4 aws nvidia-a100 x1 $4 1 80 GB NVIDIA A100 aws nvidia-a10g x4 $5 4 96 GB NVIDIA A10G aws nvidia-l40s x1 $1.8 1 48 GB NVIDIA L40S aws nvidia-l40s x4 $8.3 4 192 GB NVIDIA L40S aws nvidia-l40s x8 $23.5 8 384 GB NVIDIA L40S aws nvidia-a100 x2 $8 2 160 GB NVIDIA A100 aws nvidia-a100 x4 $16 4 320 GB NVIDIA A100 aws nvidia-a100 x8 $32 8 640 GB NVIDIA A100 gcp nvidia-t4 x1 $0.5 1 16 GB NVIDIA T4 gcp nvidia-l4 x1 $0.7 1 24 GB NVIDIA L4 gcp nvidia-l4 x4 $3.8 4 96 GB NVIDIA L4 gcp nvidia-a100 x1 $3.6 1 80 GB NVIDIA A100 gcp nvidia-a100 x2 $7.2 2 160 GB NVIDIA A100 gcp nvidia-a100 x4 $14.4 4 320 GB NVIDIA A100 gcp nvidia-a100 x8 $28.8 8 640 GB NVIDIA A100 gcp nvidia-h100 x1 $10 1 80 GB NVIDIA H100 gcp nvidia-h100 x2 $20 2 160 GB NVIDIA H100 gcp nvidia-h100 x4 $40 4 320 GB NVIDIA H100 gcp nvidia-h100 x8 $80 8 640 GB NVIDIA H100 Accelerator Instances The table below shows currently available custom Accelerators instances and their hourly pricing. If the instance type cannot be selected in the application, you need to request a quota to use it. Provider Instance Type Instance Size Hourly rate Accelerators Accelerator Memory RAM Architecture aws inf2 x1 $0.75 1 32 GB 14.5 GB AWS Inferentia2 aws inf2 x12 $12 12 384 GB 760 GB AWS Inferentia2 gcp tpu 1x1 $1.2 1 16 GB 44 GB Google TPU v5e gcp tpu 2x2 $4.75 4 64 GB 186 GB Google TPU v5e gcp tpu 2x4 $9.5 8 128 GB 380 GB Google TPU v5e Pricing examples The following example pricing scenarios demonstrate how costs are calculated. You can find the hourly rate for all instance types and sizes in the tables above. 
Use the following formula to calculate the costs (a short Python version of this formula is sketched at the end of this page):
instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas))
Basic Endpoint
AWS CPU intel-spr x2 (2x vCPUs 4GB RAM)
Autoscaling (minimum 1 replica, maximum 1 replica)
hourly cost
instance hourly rate * (hours * # min replica) = hourly cost
$0.067/hr * (1 hr * 1 replica) = $0.067/hr
monthly cost
instance hourly rate * (hours * # min replica) = monthly cost
$0.067/hr * (730 hr * 1 replica) = $48.91/month
Advanced Endpoint
AWS GPU small (1x GPU 14GB RAM)
Autoscaling (minimum 1 replica, maximum 3 replicas); every hour a spike in traffic scales the Endpoint from 1 to 3 replicas for 15 minutes
hourly cost
instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas)) = hourly cost
$0.5/hr * ((1 hr * 1 replica) + (0.25 hr * 2 replicas)) = $0.75/hr
monthly cost
instance hourly rate * ((hours * # min replica) + (scale-up hrs * # additional replicas)) = monthly cost
$0.5/hr * ((730 hr * 1 replica) + (182.5 hr * 2 replicas)) = $547.5/month
Quotas
Listed available quota can now be seen in the Inference dashboard at https://ui.endpoints.huggingface.co under "Quotas Used". The number displayed references the number of instances used / available instance quota. Paused Endpoints do not count against "used" quota. Scaled-to-zero Endpoints are counted as "used" quota; simply pause the scaled-to-zero Endpoint if you would like to free up this quota. Please contact us if you'd like to increase quota allocations. PRO users and Enterprise Hub organizations will have access to higher quota amounts when requested.
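The cost formula above is easy to sanity-check in a few lines of Python. This is a hypothetical helper, not an official tool; the rates are simply read off the tables above:

def endpoint_cost(hourly_rate, hours, min_replicas=1, scale_up_hours=0.0, additional_replicas=0):
    # instance hourly rate * ((hours * # min replicas) + (scale-up hrs * # additional replicas))
    return hourly_rate * ((hours * min_replicas) + (scale_up_hours * additional_replicas))

# Basic Endpoint: aws intel-spr x2 at $0.067/hr, one replica, ~730 hours in a month
print(round(endpoint_cost(0.067, 730), 2))              # 48.91

# Advanced Endpoint: aws nvidia-t4 x1 at $0.5/hr, 15 min/hr spike to 3 replicas for a month
print(round(endpoint_cost(0.5, 730, 1, 182.5, 2), 2))   # 547.5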
Polars.txt | Polars Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation Polars Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Polars Polars is a fast DataFrame library written in Rust with Arrow as its foundation. 💡 Learn more about how to get the dataset URLs in the List Parquet files guide. Let’s start by grabbing the URLs to the train split of the tasksource/blog_authorship_corpus dataset from the dataset viewer API: Copied import requests r = requests.get( "https://datasets-server.huggingface.co/parquet?dataset=tasksource/blog_authorship_corpus" ) j = r.json() urls = [f[ 'url' ] for f in j[ 'parquet_files' ] if f[ 'split' ] == 'train' ] urls [ 'https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet' , 'https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet' ] To read from a single Parquet file, use the read_parquet function to read it into a DataFrame and then execute your query: Copied import polars as pl df = ( pl.read_parquet( "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet" ) .group_by( "sign" ) .agg( [ pl.count(), pl.col( "text" ). str .len_chars().mean().alias( "avg_blog_length" ) ] ) .sort( "avg_blog_length" , descending= True ) .limit( 5 ) ) print (df) shape: ( 5 , 3 ) ┌───────────┬───────┬─────────────────┐ │ sign ┆ count ┆ avg_blog_length │ │ --- ┆ --- ┆ --- │ │ str ┆ u32 ┆ f64 │ ╞═══════════╪═══════╪═════════════════╡ │ Cancer ┆ 38956 ┆ 1206.521203 │ │ Leo ┆ 35487 ┆ 1180.067377 │ │ Aquarius ┆ 32723 ┆ 1152.113682 │ │ Virgo ┆ 36189 ┆ 1117.198209 │ │ Capricorn ┆ 31825 ┆ 1102.397361 │ └───────────┴───────┴─────────────────┘ To read multiple Parquet files - for example, if the dataset is sharded - you’ll need to use the concat function to concatenate the files into a single DataFrame: Copied import polars as pl df = ( pl.concat([pl.read_parquet(url) for url in urls]) .group_by( "sign" ) .agg( [ pl.count(), pl.col( "text" ). 
str.len_chars().mean().alias("avg_blog_length") ] ) .sort("avg_blog_length", descending=True) .limit(5) ) print(df)
shape: (5, 3)
┌──────────┬───────┬─────────────────┐
│ sign     ┆ count ┆ avg_blog_length │
│ ---      ┆ ---   ┆ ---             │
│ str      ┆ u32   ┆ f64             │
╞══════════╪═══════╪═════════════════╡
│ Aquarius ┆ 49687 ┆ 1191.417212     │
│ Leo      ┆ 53811 ┆ 1183.878222     │
│ Cancer   ┆ 65048 ┆ 1158.969161     │
│ Gemini   ┆ 51985 ┆ 1156.069308     │
│ Virgo    ┆ 60399 ┆ 1140.958443     │
└──────────┴───────┴─────────────────┘

Lazy API

Polars offers a lazy API that is more performant and memory-efficient for large Parquet files. The LazyFrame API keeps track of what you want to do, and it'll only execute the entire query when you're ready. This way, the lazy API doesn't load everything into RAM beforehand, and it allows you to work with datasets larger than your available RAM. To lazily read a Parquet file, use the scan_parquet function instead. Then, execute the entire query with the collect function:

import polars as pl

q = (
    pl.scan_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
|
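If the dataset is sharded across several Parquet files, the same lazy approach also works across shards. A minimal sketch, assuming the urls list from the request at the top of this page is still in scope:

import polars as pl

# Lazily scan every shard and concatenate the LazyFrames; nothing is read
# from the network until .collect() is called.
q = (
    pl.concat([pl.scan_parquet(url) for url in urls])
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()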
Access_and_view_Metrics.txt | Access and view Metrics

Hugging Face Inference Endpoints provides access to the metrics and analytics of your Endpoints through the UI, in the “Analytics” tab of each Endpoint's detailed overview.

Access Metrics via API

The Hugging Face Inference Endpoints API exposes a route to access the metrics of your Endpoints. You can use this route to send customized Prometheus queries to your Endpoints.

Pending requests - new!

Pending requests are requests that don't have a response status yet. We now display analytics on requests that have been sent to your model but are still being processed. |
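The sketch below is illustrative only: the exact metrics route and its parameters are defined in the API Reference (Swagger), so treat the URL and query parameter here as placeholder assumptions rather than documented values. The general pattern is an authenticated GET request carrying your Prometheus query:

import os
import requests

# Placeholder route and parameters; consult the API Reference (Swagger) for the
# exact metrics path exposed for your Endpoint.
API_URL = "https://api.endpoints.huggingface.cloud/v2/endpoint/<namespace>/<endpoint-name>/metrics"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
params = {"query": "<your-prometheus-query>"}  # customized Prometheus query (illustrative)

response = requests.get(API_URL, headers=headers, params=params)
print(response.status_code, response.json())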
Loading_methods.txt | Loading methods Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Evaluate documentation Loading methods Evaluate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.4.0 v0.3.0 v0.2.3 v0.1.2 EN Get started 🤗 Evaluate Tutorials Installation A quick tour How-to guides Choosing the right metric Adding new evaluations Using the evaluator Using the evaluator with custom pipelines Creating an EvaluationSuite Using 🤗 Evaluate with other ML frameworks Transformers Keras and Tensorflow scikit-learn Conceptual guides Types of evaluations Considerations for model evaluation Reference Main classes Loading methods Saving methods Hub methods Evaluator classes Visualization methods Logging methods Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Loading methods Methods for listing and loading evaluation modules: List evaluate.list_evaluation_modules < source > ( module_type = None include_community = True with_details = False ) Parameters module_type ( str , optional , default None ) — Type of evaluation modules to list. Has to be one of 'metric' , 'comparison' , or 'measurement' . If None , all types are listed. include_community ( bool , optional , default True ) — Include community modules in the list. with_details ( bool , optional , default False ) — Return the full details on the metrics instead of only the ID. List all evaluation modules available on the Hugging Face Hub. Load evaluate.load < source > ( path : str config_name : typing.Optional[str] = None module_type : typing.Optional[str] = None process_id : int = 0 num_process : int = 1 cache_dir : typing.Optional[str] = None experiment_id : typing.Optional[str] = None keep_in_memory : bool = False download_config : typing.Optional[evaluate.utils.file_utils.DownloadConfig] = None download_mode : typing.Optional[datasets.download.download_manager.DownloadMode] = None revision : typing.Union[str, datasets.utils.version.Version, NoneType] = None **init_kwargs ) Parameters path ( str ) — path to the evaluation processing script with the evaluation builder. Can be either: a local path to processing script or the directory containing the script (if the script has the same name as the directory), e.g. './metrics/rouge' or './metrics/rouge/rouge.py' a evaluation module identifier on the HuggingFace evaluate repo e.g. 'rouge' or 'bleu' that are in either 'metrics/' , 'comparisons/' , or 'measurements/' depending on the provided module_type . config_name ( str , optional) — selecting a configuration for the metric (e.g. the GLUE metric has a configuration for each subset) module_type ( str , default 'metric' ) — type of evaluation module, can be one of 'metric' , 'comparison' , or 'measurement' . 
process_id (int, optional) — for distributed evaluation: id of the process.
num_process (int, optional) — for distributed evaluation: total number of processes.
cache_dir (Optional str) — path to store the temporary predictions and references (defaults to ~/.cache/huggingface/evaluate/).
experiment_id (str) — a specific experiment id. This is used if several distributed evaluations share the same file system. This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).
keep_in_memory (bool) — whether to store the temporary results in memory (defaults to False).
download_config (Optional evaluate.DownloadConfig) — specific download configuration parameters.
download_mode (DownloadMode, default REUSE_DATASET_IF_EXISTS) — download/generate mode.
revision (Optional Union[str, evaluate.Version]) — if specified, the module will be loaded from the datasets repository at this version. By default it is set to the local version of the lib. Specifying a version that is different from your local version of the lib might cause compatibility issues.

Load an evaluate.EvaluationModule. |
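For context, a short usage sketch of the two methods above, using the standard accuracy metric that ships with 🤗 Evaluate:

import evaluate

# List the IDs of the available metric modules
metrics = evaluate.list_evaluation_modules(module_type="metric", with_details=False)
print(metrics[:5])

# Load a metric and compute it on a toy example
accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75}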
Image_Processor.txt |
Image Processor

An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and NumPy tensors. It may also include model-specific post-processing such as converting logits to segmentation masks.

Fast image processors are available for a few models and more will be added in the future. They are based on the torchvision library and provide a significant speed-up, especially when processing on GPU. They have the same API as the base image processors and can be used as drop-in replacements. To use a fast image processor, you need to install the torchvision library, and set the use_fast argument to True when instantiating the image processor:

from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)

Note that use_fast will be set to True by default in a future release. When using a fast image processor, you can also set the device argument to specify the device on which the processing should be done. By default, the processing is done on the same device as the inputs if the inputs are tensors, or on the CPU otherwise.

from torchvision.io import read_image
from transformers import DetrImageProcessorFast

images = read_image("image.jpg")
processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50")
images_processed = processor(images, return_tensors="pt", device="cuda")

Here are some speed comparisons between the base and fast image processors for the DETR and RT-DETR models, and how they impact overall inference time. These benchmarks were run on an AWS EC2 g5.2xlarge instance, utilizing an NVIDIA A10G Tensor Core GPU.

ImageProcessingMixin

class transformers.ImageProcessingMixin < source > ( **kwargs )

This is an image processor mixin used to provide saving/loading functionality for sequential and image feature extractors.
from_pretrained < source > ( pretrained_model_name_or_path : typing.Union[str, os.PathLike] cache_dir : typing.Union[str, os.PathLike, NoneType] = None force_download : bool = False local_files_only : bool = False token : typing.Union[str, bool, NoneType] = None revision : str = 'main' **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — This can be either: a string, the model id of a pretrained image_processor hosted inside a model repo on huggingface.co. a path to a directory containing a image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/ . a path or url to a saved image processor JSON file , e.g., ./my_model_directory/preprocessor_config.json . cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force to (re-)download the image processor files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token ( str or bool , optional ) — The token to use as HTTP bearer authorization for remote files. If True , or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. Instantiate a type of ImageProcessingMixin from an image processor. Examples: Copied # We can't instantiate directly the base class *ImageProcessingMixin* so let's show the examples on a # derived class: *CLIPImageProcessor* image_processor = CLIPImageProcessor.from_pretrained( "openai/clip-vit-base-patch32" ) # Download image_processing_config from huggingface.co and cache. image_processor = CLIPImageProcessor.from_pretrained( "./test/saved_model/" ) # E.g. image processor (or model) was saved using *save_pretrained('./test/saved_model/')* image_processor = CLIPImageProcessor.from_pretrained( "./test/saved_model/preprocessor_config.json" ) image_processor = CLIPImageProcessor.from_pretrained( "openai/clip-vit-base-patch32" , do_normalize= False , foo= False ) assert image_processor.do_normalize is False image_processor, unused_kwargs = CLIPImageProcessor.from_pretrained( "openai/clip-vit-base-patch32" , do_normalize= False , foo= False , return_unused_kwargs= True ) assert image_processor.do_normalize is False assert unused_kwargs == { "foo" : False } save_pretrained < source > ( save_directory : typing.Union[str, os.PathLike] push_to_hub : bool = False **kwargs ) Parameters save_directory ( str or os.PathLike ) — Directory where the image processor JSON file will be saved (will be created if it does not exist). push_to_hub ( bool , optional , defaults to False ) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). 
kwargs ( Dict[str, Any] , optional ) — Additional key word arguments passed along to the push_to_hub() method. Save an image processor object to the directory save_directory , so that it can be re-loaded using the from_pretrained() class method. BatchFeature class transformers. BatchFeature < source > ( data : typing.Optional[typing.Dict[str, typing.Any]] = None tensor_type : typing.Union[NoneType, str, transformers.utils.generic.TensorType] = None ) Parameters data ( dict , optional ) — Dictionary of lists/arrays/tensors returned by the call /pad methods (‘input_values’, ‘attention_mask’, etc.). tensor_type ( Union[None, str, TensorType] , optional ) — You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at initialization. Holds the output of the pad() and feature extractor specific __call__ methods. This class is derived from a python dictionary and can be used as a dictionary. convert_to_tensors < source > ( tensor_type : typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None ) Parameters tensor_type ( str or TensorType , optional ) — The type of tensors to use. If str , should be one of the values of the enum TensorType . If None , no modification is done. Convert the inner content to tensors. to < source > ( *args **kwargs ) → BatchFeature Parameters args ( Tuple ) — Will be passed to the to(...) function of the tensors. kwargs ( Dict , optional ) — Will be passed to the to(...) function of the tensors. To enable asynchronous data transfer, set the non_blocking flag in kwargs (defaults to False ). Returns BatchFeature The same instance after modification. Send all values to device by calling v.to(*args, **kwargs) (PyTorch only). This should support casting in different dtypes and sending the BatchFeature to a different device . BaseImageProcessor class transformers. BaseImageProcessor < source > ( **kwargs ) center_crop < source > ( image : ndarray size : typing.Dict[str, int] data_format : typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format : typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs ) Parameters image ( np.ndarray ) — Image to center crop. size ( Dict[str, int] ) — Size of the output image. data_format ( str or ChannelDimension , optional ) — The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of: "channels_first" or ChannelDimension.FIRST : image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST : image in (height, width, num_channels) format. input_data_format ( ChannelDimension or str , optional ) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST : image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST : image in (height, width, num_channels) format. Center crop an image to (size["height"], size["width"]) . If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped. 
normalize < source > ( image : ndarray mean : typing.Union[float, typing.Iterable[float]] std : typing.Union[float, typing.Iterable[float]] data_format : typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format : typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs ) → np.ndarray Parameters image ( np.ndarray ) — Image to normalize. mean ( float or Iterable[float] ) — Image mean to use for normalization. std ( float or Iterable[float] ) — Image standard deviation to use for normalization. data_format ( str or ChannelDimension , optional ) — The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of: "channels_first" or ChannelDimension.FIRST : image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST : image in (height, width, num_channels) format. input_data_format ( ChannelDimension or str , optional ) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST : image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST : image in (height, width, num_channels) format. Returns np.ndarray The normalized image. Normalize an image. image = (image - image_mean) / image_std. rescale < source > ( image : ndarray scale : float data_format : typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None input_data_format : typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs ) → np.ndarray Parameters image ( np.ndarray ) — Image to rescale. scale ( float ) — The scaling factor to rescale pixel values by. data_format ( str or ChannelDimension , optional ) — The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. Can be one of: "channels_first" or ChannelDimension.FIRST : image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST : image in (height, width, num_channels) format. input_data_format ( ChannelDimension or str , optional ) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST : image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST : image in (height, width, num_channels) format. Returns np.ndarray The rescaled image. Rescale an image by a scale factor. image = image * scale. BaseImageProcessorFast class transformers. BaseImageProcessorFast < source > ( **kwargs ) < > Update on GitHub ← Feature Extractor ALBERT → Image Processor Image Processing Mixin Batch Feature Base Image Processor Base Image Processor Fast |
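To tie the reference above together, here is a short illustrative sketch (the image path is a placeholder, and moving the batch to CUDA assumes a GPU is available): the processor call returns a BatchFeature whose tensors can be sent to the model's device with to().

from PIL import Image
from transformers import AutoImageProcessor

image = Image.open("image.jpg")  # placeholder path

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
inputs = processor(images=image, return_tensors="pt")  # returns a BatchFeature (dict-like)

print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 800, 1066]); exact size depends on the image
inputs = inputs.to("cuda")           # BatchFeature.to() moves the tensors (PyTorch only)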
Secrets_Scanning.txt | Secrets Scanning

It is important to manage your secrets (env variables) properly. The most common way people expose their secrets to the outside world is by hard-coding them directly in their code files, which makes it possible for a malicious user to use your secrets and the services those secrets have access to. For example, this is what a compromised app.py file might look like:

import numpy as np
import scipy as sp

api_key = "sw-xyz1234567891213"

def call_inference(prompt: str) -> str:
    result = call_api(prompt, api_key)
    return result

To prevent this issue, we run TruffleHog on each push you make. TruffleHog scans for hard-coded secrets, and we will send you an email upon detection. You'll only receive emails for verified secrets, which are the ones that have been confirmed to work for authentication against their respective providers. Note, however, that unverified secrets are not necessarily harmless or invalid: verification can fail due to technical reasons, such as in the case of a network error. TruffleHog can verify secrets that work across multiple services; it is not restricted to Hugging Face tokens. You can opt out of these email notifications from your settings. |
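A minimal sketch of the recommended alternative, reading the key from an environment variable (for example a Space or repository secret) instead of hard-coding it; the variable name API_KEY is just an example:

import os

# The secret is injected at runtime (e.g. via a Space secret or your shell),
# so it never appears in the committed source code.
api_key = os.environ.get("API_KEY")
if api_key is None:
    raise RuntimeError("API_KEY environment variable is not set")

def call_inference(prompt: str) -> str:
    result = call_api(prompt, api_key)  # call_api as in the example above
    return result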
Experiment_Trackers.txt | Experiment trackers

There are a large number of experiment tracking APIs available; however, getting them all to work in a multi-processing environment can often be complex.
Accelerate provides a general tracking API that can be used to log useful items during your script through Accelerator.log() Integrated Trackers Currently Accelerate supports seven trackers out-of-the-box: TensorBoard WandB CometML Aim MLFlow ClearML DVCLive To use any of them, pass in the selected type(s) to the log_with parameter in Accelerate : Copied from accelerate import Accelerator from accelerate.utils import LoggerType accelerator = Accelerator(log_with= "all" ) # For all available trackers in the environment accelerator = Accelerator(log_with= "wandb" ) accelerator = Accelerator(log_with=[ "wandb" , LoggerType.TENSORBOARD]) At the start of your experiment Accelerator.init_trackers() should be used to setup your project, and potentially add any experiment hyperparameters to be logged: Copied hps = { "num_iterations" : 5 , "learning_rate" : 1e-2 } accelerator.init_trackers( "my_project" , config=hps) When you are ready to log any data, Accelerator.log() should be used. A step can also be passed in to correlate the data with a particular step in the training loop. Copied accelerator.log({ "train_loss" : 1.12 , "valid_loss" : 0.8 }, step= 1 ) Once you’ve finished training, make sure to run Accelerator.end_training() so that all the trackers can run their finish functionalities if they have any. Copied accelerator.end_training() A full example is below: Copied from accelerate import Accelerator accelerator = Accelerator(log_with= "all" ) config = { "num_iterations" : 5 , "learning_rate" : 1e-2 , "loss_function" : str (my_loss_function), } accelerator.init_trackers( "example_project" , config=config) my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader) device = accelerator.device my_model.to(device) for iteration in range (config[ "num_iterations" ]): for step, batch in enumerate (my_training_dataloader): my_optimizer.zero_grad() inputs, targets = batch inputs = inputs.to(device) targets = targets.to(device) outputs = my_model(inputs) loss = my_loss_function(outputs, targets) accelerator.backward(loss) my_optimizer.step() accelerator.log({ "training_loss" : loss}, step=step) accelerator.end_training() If a tracker requires a directory to save data to, such as TensorBoard , then pass the directory path to project_dir . The project_dir parameter is useful when there are other configurations to be combined with in the ProjectConfiguration data class. For example, you can save the TensorBoard data to project_dir and everything else can be logged in the logging_dir parameter of [ ~utils.ProjectConfiguration : Copied accelerator = Accelerator(log_with= "tensorboard" , project_dir= "." ) # use with ProjectConfiguration config = ProjectConfiguration(project_dir= "." , logging_dir= "another/directory" ) accelerator = Accelerator(log_with= "tensorboard" , project_config=config) Implementing Custom Trackers To implement a new tracker to be used in Accelerator , a new one can be made through implementing the GeneralTracker class. Every tracker must implement three functions and have three properties: __init__ : Should store a run_name and initialize the tracker API of the integrated library. If a tracker stores their data locally (such as TensorBoard), a logging_dir parameter can be added. 
store_init_configuration : Should take in a values dictionary and store them as a one-time experiment configuration log : Should take in a values dictionary and a step , and should log them to the run name ( str ): A unique string name for the tracker, such as "wandb" for the wandb tracker. This will be used for interacting with this tracker specifically requires_logging_directory ( bool ): Whether a logging_dir is needed for this particular tracker and if it uses one. tracker : This should be implemented as a @property function Should return the internal tracking mechanism the library uses, such as the run object for wandb . Each method should also utilize the state.PartialState class if the logger should only be executed on the main process for instance. A brief example can be seen below with an integration with Weights and Biases, containing only the relevant information and logging just on the main process: Copied from accelerate.tracking import GeneralTracker, on_main_process from typing import Optional import wandb class MyCustomTracker ( GeneralTracker ): name = "wandb" requires_logging_directory = False @on_main_process def __init__ ( self, run_name: str ): self.run_name = run_name run = wandb.init(self.run_name) @property def tracker ( self ): return self.run.run @on_main_process def store_init_configuration ( self, values: dict ): wandb.config(values) @on_main_process def log ( self, values: dict , step: Optional [ int ] = None ): wandb.log(values, step=step) When you are ready to build your Accelerator object, pass in an instance of your tracker to Accelerator.log_with to have it automatically be used with the API: Copied tracker = MyCustomTracker( "some_run_name" ) accelerator = Accelerator(log_with=tracker) These also can be mixed with existing trackers, including with "all" : Copied tracker = MyCustomTracker( "some_run_name" ) accelerator = Accelerator(log_with=[tracker, "all" ]) Accessing the internal tracker If some custom interactions with a tracker might be wanted directly, you can quickly access one using the Accelerator.get_tracker() method. Just pass in the string corresponding to a tracker’s .name attribute and it will return that tracker on the main process. This example shows doing so with wandb: Copied wandb_tracker = accelerator.get_tracker( "wandb" ) From there you can interact with wandb ’s run object like normal: Copied wandb_tracker.log_artifact(some_artifact_to_log) Trackers built in Accelerate will automatically execute on the correct process, so if a tracker is only meant to be ran on the main process it will do so automatically. If you want to truly remove Accelerate’s wrapping entirely, you can achieve the same outcome with: Copied wandb_tracker = accelerator.get_tracker( "wandb" , unwrap= True ) if accelerator.is_main_process: wandb_tracker.log_artifact(some_artifact_to_log) When a wrapper cannot work If a library has an API that does not follow a strict .log with an overall dictionary such as Neptune.AI, logging can be done manually under an if accelerator.is_main_process statement: Copied from accelerate import Accelerator + import neptune accelerator = Accelerator() + run = neptune.init_run(...) 
  my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, my_optimizer, my_training_dataloader)
  device = accelerator.device
  my_model.to(device)

  total_loss = 0
  for iteration in range(config["num_iterations"]):
      for batch in my_training_dataloader:
          my_optimizer.zero_grad()
          inputs, targets = batch
          inputs = inputs.to(device)
          targets = targets.to(device)
          outputs = my_model(inputs)
          loss = my_loss_function(outputs, targets)
          total_loss += loss
          accelerator.backward(loss)
          my_optimizer.step()
+         if accelerator.is_main_process:
+             run["logs/training/batch/loss"].log(loss)
|
THE_LANDSCAPE_OF_ML_DOCUMENTATION_TOOLS.txt | THE LANDSCAPE OF ML DOCUMENTATION TOOLS

The development of the model cards framework in 2018 was inspired by the major documentation framework efforts of Data Statements for Natural Language Processing (Bender & Friedman, 2018) and Datasheets for Datasets (Gebru et al., 2018). Since model cards were proposed, a number of other tools have been proposed for documenting and evaluating various aspects of the machine learning development cycle. These tools, including model cards and related documentation efforts proposed prior to model cards, can be contextualised with regard to their focus (e.g., on which part of the ML system lifecycle does the tool focus?) and their intended audiences (e.g., who is the tool designed for?). In Figures 1-2 below, we summarise several prominent documentation tools along these dimensions, provide contextual descriptions of each tool, and link to examples.
We broadly classify the documentation tools as belong to the following groups: Data-focused , including documentation tools focused on datasets used in the machine learning system lifecycle Models-and-methods-focused , including documentation tools focused on machine learning models and methods; and Systems-focused , including documentation tools focused on ML systems, including models, methods, datasets, APIs, and non AI/ML components that interact with each other as part of an ML system These groupings are not mutually exclusive; they do include overlapping aspects of the ML system lifecycle. For example, system cards focus on documenting ML systems that may include multiple models and datasets, and thus might include content that overlaps with data-focused or model-focused documentation tools. The tools described are a non-exhaustive list of documentation tools for the ML system lifecycle. In general, we included tools that were: Focused on documentation of some (or multiple) aspects of the ML system lifecycle Included the release of a template intended for repeated use, adoption, and adaption Summary of ML Documentation Tools Figure 1 Stage of ML System Lifecycle Tool Brief Description Examples DATA Datasheets (Gebru et al., 2018) “We recommend that every dataset be accompanied with a datasheet documenting its motivation, creation, composition, intended uses, distribution, maintenance, and other information.” See, for example, Ivy Lee’s repo with examples DATA Data Statements (Bender & Friedman, 2018)(Bender et al., 2021) “A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.” See Data Statements for NLP Workshop DATA Dataset Nutrition Labels (Holland et al., 2018) “The Dataset Nutrition Label…is a diagnostic framework that lowers the barrier to standardized data analysis by providing a distilled yet comprehensive overview of dataset “ingredients” before AI model development.” See The Data Nutrition Label DATA Data Cards for NLP (McMillan-Major et al., 2021) “We present two case studies of creating documentation templates and guides in natural language processing (NLP): the Hugging Face (HF) dataset hub[^1] and the benchmark for Generation and its Evaluation and Metrics (GEM). We use the term data card to refer to documentation for datasets in both cases. See (McMillan-Major et al., 2021) DATA Dataset Development Lifecycle Documentation Framework (Hutchinson et al., 2021) “We introduce a rigorous framework for dataset development transparency that supports decision-making and accountability. The framework uses the cyclical, infrastructural and engineering nature of dataset development to draw on best practices from the software development lifecycle.” See (Hutchinson et al., 2021) , Appendix A for templates DATA Data Cards (Pushkarna et al., 2021) “Data Cards are structured summaries of essential facts about various aspects of ML datasets needed by stakeholders across a dataset’s lifecycle for responsible AI development. 
These summaries provide explanations of processes and rationales that shape the data and consequently the models.” See the Data Cards Playbook github DATA CrowdWorkSheets (Díaz et al., 2022) “We introduce a novel framework, CrowdWorkSheets, for dataset developers to facilitate transparent documentation of key decisions points at various stages of the data annotation pipeline: task formulation, selection of annotators, plat- form and infrastructure choices, dataset analysis and evaluation, and dataset release and maintenance.” See (Díaz et al., 2022) MODELS AND METHODS Model Cards Mitchell et al. (2018) “Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions…that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.” See https://huggingface.co/models , the Model Card Guidebook , and Model Card Examples MODELS AND METHODS Value Cards Shen et al. (2021) “We present Value Cards, a deliberation-driven toolkit for bringing computer science students and practitioners the awareness of the social impacts of machine learning-based decision making systems….Value Cards encourages the investigations and debates towards different ML performance metrics and their potential trade-offs.” See Shen et al. (2021) , Section 3.3 MODELS AND METHODS Method Cards Adkins et al. (2022) “We propose method cards to guide ML engineers through the process of model development…The information comprises both prescriptive and descriptive elements, putting the main focus on ensuring that ML engineers are able to use these methods properly.” See Adkins et al. (2022) , Appendix A MODELS AND METHODS Consumer Labels for ML Models Seifert et al. (2019) “We propose to issue consumer labels for trained and published ML models. These labels primarily target machine learning lay persons, such as the operators of an ML system, the executors of decisions, and the decision subjects themselves” See Seifert et al. (2019) SYSTEMS Factsheets Arnold et al. (2019) “A FactSheet will contain sections on all relevant attributes of an AI service, such as intended use, performance, safety, and security. Performance will include appropriate accuracy or risk measures along with timing information.” See IBM’s AI Factsheets 360 and Hind et al., (2020) SYSTEMS System Cards Procope et al. (2022) “System Cards aims to increase the transparency of ML systems by providing stakeholders with an overview of different components of an ML system, how these components interact, and how different pieces of data and protected information are used by the system.” See Meta’s Instagram Feed Ranking System Card SYSTEMS Reward Reports for RL Gilbert et al. (2022) “We sketch a framework for documenting deployed learning systems, which we call Reward Reports…We outline Reward Reports as living documents that track updates to design choices and assumptions behind what a particular automated system is optimizing for. They are intended to track dynamic phenomena arising from system deployment, rather than merely static properties of models or data.” See https://rewardreports.github.io SYSTEMS Robustness Gym Goel et al. 
(2021) “We identify challenges with evaluating NLP systems and propose a solution in the form of Robustness Gym (RG), a simple and extensible evaluation toolkit that unifies 4 standard evaluation paradigms: subpopulations, transformations, evaluation sets, and adversarial attacks.” See https://github.com/robustness-gym/robustness-gym SYSTEMS ABOUT ML Raji and Yang, (2019) “ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles) is a multi-year, multi-stakeholder initiative led by PAI. This initiative aims to bring together a diverse range of perspectives to develop, test, and implement machine learning system documentation practices at scale.” See ABOUT ML’s resources library DATA-FOCUSED DOCUMENTATION TOOLS Several proposed documentation tools focus on datasets used in the ML system lifecycle, including to train, develop, validate, finetune, and evaluate machine learning models as part of continuous cycles. These tools generally focus on the many aspects of the data lifecycle (perhaps for a particular dataset, group of datasets, or more broadly), including how the data was assembled, collected, annotated and how it should be used. Extending the concept of datasheets in the electronics industry, Gebru et al. (2018) propose datasheets for datasets to document details related to a dataset’s creation, potential uses, and associated concerns. Bender and Friedman (2018) propose data statements for natural language processing. Bender, Friedman and McMillan-Major (2021) update the original data statements framework and provide resources including a guide for writing data statements and translating between the first version of the schema and the newer version[^2]. Holland et al. (2018) propose data nutrition labels, akin to nutrition facts for foodstuffs and nutrition labels for privacy disclosures, as a tool for analyzing and making decisions about datasets. The Data Nutrition Label team released an updated design of and interface for the label in 2020 ( Chmielinski et al., 2020) ). McMillan-Major et al. (2021) describe the development process and resulting templates for data cards for NLP in the form of data cards on the Hugging Face Hub[^3] and data cards for datasets that are part of the NLP benchmark for Generation and its Evaluation Metrics (GEM) environment[^4]. Hutchinson et al. (2021) describe the need for comprehensive dataset documentation, and drawing on software development practices, provide templates for documenting several aspects of the dataset development lifecycle (for the purposes of Tables 1 and 2, we refer to their framework as the Dataset Development Lifecycle Documentation Framework ). Pushkarna et al. (2021) propose the data cards as part of the data card playbook , a human-centered documentation tool focused on datasets used in industry and research. MODEL-AND-METHOD-FOCUSED DOCUMENTATION TOOLS Another set of documentation tools can be thought of as focusing on machine learning models and machine learning methods. These include: Mitchell et al. (2018) propose model cards for model reporting to accompany trained ML models and document issues related to evaluation, use, and other issues Shen et al. (2021) propose value cards for teaching students and practitioners about values related to ML models Seifert et al. (2019) propose consumer labels for ML models to help non-experts using or affected by the model understand key issues related to the model. Adkins et al. 
(2022) analyse aspects of descriptive documentation tools – which they consider to include model cards and data sheets – and argue for increased prescriptive tools for ML engineers. They propose method cards, focused on ML methods, and design primarily with technical stakeholders like model developers and reviewers in mind. They envision the relationship between model cards and method cards, in part, by stating: “The sections and prompts we propose…[in the method card template] focus on ML methods that are sufficient to produce a proper ML model with defined input, output, and task. Examples for these are object detection methods such as Single-shot Detectors and language modelling methods such as Generative Pre-trained Transformers (GPT). It is possible to create Model Cards for the models created using these methods .” They also state “While Model Cards and FactSheets put main focus on documenting existing models, Method Cards focus more on the underlying methodical and algorithmic choices that need to be considered when creating and training these models. As a rough analogy, if Model Cards and FactSheets provide nutritional information about cooked meals, Method Cards provide the recipes .” SYSTEM-FOCUSED DOCUMENTATION TOOLS Rather than focusing on particular models, datasets, or methods, system-focused documentation tools look at how models interact with each other, with datasets, methods, and with other ML components to form ML systems. Procope et al. (2022) propose system cards to document and explain AI systems – potentially including multiple ML models, AI tools, and non-AI technologies – that work together to accomplish tasks. Arnold et al. (2019) extend the idea of declarations of conformity for consumer products to AI services, proposing FactSheets to document aspects of “AI services” which are typically accessed through APIs and may be composed of multiple different ML models. Hind et al. (2020) share reflections on building factsheets. Gilbert et al. (2022) propose Reward Reports for Reinforcement Learning systems, recognizing the dynamic nature of ML systems and the need for documentation efforts to incorporate considerations of post-deployment performance, especially for reinforcement learning systems. Goel et al. (2021) develop Robustness Gym , an evaluation toolkit for testing several aspects of deep neural networks in real-world systems, allowing for comparison across evaluation paradigms. Through the ABOUT ML project ( Raji and Yang, 2019 ), the Partnership on AI is coordinating efforts across groups of stakeholders in the machine learning community to develop comprehensive, scalable documentation tools for ML systems. THE EVOLUTION OF MODEL CARDS Since the proposal for model cards by Mitchell et al. in 2018, model cards have been adopted and adapted by various organisations, including by major technology companies and startups developing and hosting machine learning models[^5], researchers describing new techniques[^6], and government stakeholders evaluating models for various projects[^7]. Model cards also appear as part of AI Ethics educational toolkits, and numerous organisations and developers have created implementations for automating or semi-automating the creation of model cards. Appendix A provides a set of examples of model cards for various types of ML models created by different organisations (including model cards for large language models), model card generation tools, and model card educational tools. 
MODEL CARDS ON THE HUGGING FACE HUB

Since 2018, new platforms and mediums for hosting and sharing model cards have also emerged. For example, particularly relevant to this project, Hugging Face hosts model cards on the Hugging Face Hub as README files in the repositories associated with ML models. As a result, model cards figure as a prominent form of documentation for users of models on the Hugging Face Hub. As part of our analysis of model cards, we developed and proposed model cards for several dozen ML models on the Hugging Face Hub, using the Hub’s Pull Request (PR) and Discussion features to gather feedback on model cards, verify information included in model cards, and publish model cards for models on the Hugging Face Hub. At the time of writing of this guidebook, all of Hugging Face’s models on the Hugging Face Hub have an associated model card on the Hub[^8].

The high number of models uploaded to the Hugging Face Hub (101,041 models at the time of writing) enabled us to explore the content within model cards on the Hub. We began by analysing the model cards of language models in order to identify patterns (e.g. repeated sections and subsections), with the aim of answering initial questions such as: 1) How many of these models have model cards? 2) What percent of downloads had an associated model card? From our analysis of all the models on the Hub, we noticed that most downloads come from the top 200 models. With a continued focus on large language models, ordered by most downloaded and starting only with models that have model cards, we noted the most recurring sections within their respective model cards. While some headings may differ between models, we grouped the components/themes of each section within each model card and then mapped them to the most recurring section headings (mostly found in the top 200 downloaded models, and with the aid/guidance of the BLOOM model card).

Check out the User Studies and see the Appendix.

[^1]: For each tool, descriptions are excerpted from the linked paper listed in the second column.
[^2]: See https://techpolicylab.uw.edu/data-statements/ .
[^3]: See https://techpolicylab.uw.edu/data-statements/ .
[^4]: See https://techpolicylab.uw.edu/data-statements/ .
[^5]: See, e.g., the Hugging Face Hub, Google Cloud’s Model Cards https://modelcards.withgoogle.com/about .
[^6]: See Appendix A.
[^7]: See GSA / US Census Bureau Collaboration on Model Card Generator.
[^8]: By “Hugging Face models,” we mean models shared by Hugging Face, not another organisation, on the Hub. Formally, these are models without a ‘/’ in their model ID.

Please cite as: Ozoani, Ezi and Gerchick, Marissa and Mitchell, Margaret. Model Card Guidebook. Hugging Face, 2022. https://huggingface.co/docs/hub/en/model-card-guidebook
Dataset_formats_and_types.txt

Dataset formats and types

This guide provides an overview of the dataset formats and types supported by each trainer in TRL.

Overview of the dataset formats and types

The format of a dataset refers to how the data is structured, typically categorized as either standard or conversational. The type is associated with the specific task the dataset is designed for, such as prompt-only or preference. Each type is characterized by its columns, which vary according to the task, as shown in the table.
Type \ Format Standard Conversational Language modeling {"text": "The sky is blue."} {"messages": [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}]} Prompt-only {"prompt": "The sky is"} {"prompt": [{"role": "user", "content": "What color is the sky?"}]} Prompt-completion {"prompt": "The sky is", "completion": " blue."} {"prompt": [{"role": "user", "content": "What color is the sky?"}], "completion": [{"role": "assistant", "content": "It is blue."}]} Preference {"prompt": "The sky is", "chosen": " blue.", "rejected": " green."} or, with implicit prompt: {"chosen": "The sky is blue.", "rejected": "The sky is green."} {"prompt": [{"role": "user", "content": "What color is the sky?"}], "chosen": [{"role": "assistant", "content": "It is blue."}], "rejected": [{"role": "assistant", "content": "It is green."}]} or, with implicit prompt: {"chosen": [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}], "rejected": [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is green."}]} Unpaired preference {"prompt": "The sky is", "completion": " blue.", "label": True} {"prompt": [{"role": "user", "content": "What color is the sky?"}], "completion": [{"role": "assistant", "content": "It is green."}], "label": False} Stepwise supervision {"prompt": "Which number is larger, 9.8 or 9.11?", "completions": ["The fractional part of 9.8 is 0.8.", "The fractional part of 9.11 is 0.11.", "0.11 is greater than 0.8.", "Hence, 9.11 > 9.8."], "labels": [True, True, False, False]} Formats Standard The standard dataset format typically consists of plain text strings. The columns in the dataset vary depending on the task. This is the format expected by TRL trainers. Below are examples of standard dataset formats for different tasks: Copied # Language modeling language_modeling_example = { "text" : "The sky is blue." } # Preference preference_example = { "prompt" : "The sky is" , "chosen" : " blue." , "rejected" : " green." } # Unpaired preference unpaired_preference_example = { "prompt" : "The sky is" , "completion" : " blue." , "label" : True } Conversational Conversational datasets are used for tasks involving dialogues or chat interactions between users and assistants. Unlike standard dataset formats, these contain sequences of messages where each message has a role (e.g., "user" or "assistant" ) and content (the message text). Copied messages = [ { "role" : "user" , "content" : "Hello, how are you?" }, { "role" : "assistant" , "content" : "I'm doing great. How can I help you today?" }, { "role" : "user" , "content" : "I'd like to show off how chat templating works!" }, ] Just like standard datasets, the columns in conversational datasets vary depending on the task. Below are examples of conversational dataset formats for different tasks: Copied # Prompt-completion prompt_completion_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], "completion" : [{ "role" : "assistant" , "content" : "It is blue." }]} # Preference preference_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], "chosen" : [{ "role" : "assistant" , "content" : "It is blue." }], "rejected" : [{ "role" : "assistant" , "content" : "It is green." }], } Conversational datasets are useful for training chat models, but must be converted into a standard format before being used with TRL trainers. 
This is typically done using chat templates specific to the model being used. For more information, refer to the Working with conversational datasets in TRL section. Types Language modeling A language modeling dataset consists of a column "text" (or "messages" for conversational datasets) containing a full sequence of text. Copied # Standard format language_modeling_example = { "text" : "The sky is blue." } # Conversational format language_modeling_example = { "messages" : [ { "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is blue." } ]} Prompt-only In a prompt-only dataset, only the initial prompt (the question or partial sentence) is provided under the key "prompt" . The training typically involves generating the completion based on this prompt, where the model learns to continue or complete the given input. Copied # Standard format prompt_only_example = { "prompt" : "The sky is" } # Conversational format prompt_only_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }]} While both the prompt-only and language modeling types are similar, they differ in how the input is handled. In the prompt-only type, the prompt represents a partial input that expects the model to complete or continue, while in the language modeling type, the input is treated as a complete sentence or sequence. These two types are processed differently by TRL. Below is an example showing the difference in the output of the apply_chat_template function for each type: Copied from transformers import AutoTokenizer from trl import apply_chat_template tokenizer = AutoTokenizer.from_pretrained( "microsoft/Phi-3-mini-128k-instruct" ) # Example for prompt-only type prompt_only_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }]} apply_chat_template(prompt_only_example, tokenizer) # Output: {'prompt': '<|user|>\nWhat color is the sky?<|end|>\n<|assistant|>\n'} # Example for language modeling type lm_example = { "messages" : [{ "role" : "user" , "content" : "What color is the sky?" }]} apply_chat_template(lm_example, tokenizer) # Output: {'text': '<|user|>\nWhat color is the sky?<|end|>\n<|endoftext|>'} The prompt-only output includes a '<|assistant|>\n' , indicating the beginning of the assistant’s turn and expecting the model to generate a completion. In contrast, the language modeling output treats the input as a complete sequence and terminates it with '<|endoftext|>' , signaling the end of the text and not expecting any additional content. Prompt-completion A prompt-completion dataset includes a "prompt" and a "completion" . Copied # Standard format prompt_completion_example = { "prompt" : "The sky is" , "completion" : " blue." } # Conversational format prompt_completion_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], "completion" : [{ "role" : "assistant" , "content" : "It is blue." }]} Preference A preference dataset is used for tasks where the model is trained to choose between two or more possible completions to the same prompt. This dataset includes a "prompt" , a "chosen" completion, and a "rejected" completion. The model is trained to select the "chosen" response over the "rejected" response. Some dataset may not include the "prompt" column, in which case the prompt is implicit and directly included in the "chosen" and "rejected" completions. We recommend using explicit prompts whenever possible. 
Copied # Standard format ## Explicit prompt (recommended) preference_example = { "prompt" : "The sky is" , "chosen" : " blue." , "rejected" : " green." } # Implicit prompt preference_example = { "chosen" : "The sky is blue." , "rejected" : "The sky is green." } # Conversational format ## Explicit prompt (recommended) preference_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], "chosen" : [{ "role" : "assistant" , "content" : "It is blue." }], "rejected" : [{ "role" : "assistant" , "content" : "It is green." }]} ## Implicit prompt preference_example = { "chosen" : [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is blue." }], "rejected" : [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is green." }]} Some preference datasets can be found with the tag dpo on Hugging Face Hub . You can also explore the librarian-bots’ DPO Collections to identify preference datasets. Unpaired preference An unpaired preference dataset is similar to a preference dataset but instead of having "chosen" and "rejected" completions for the same prompt, it includes a single "completion" and a "label" indicating whether the completion is preferred or not. Copied # Standard format unpaired_preference_example = { "prompt" : "The sky is" , "completion" : " blue." , "label" : True } # Conversational format unpaired_preference_example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], "completion" : [{ "role" : "assistant" , "content" : "It is blue." }], "label" : True } Stepwise supervision A stepwise (or process) supervision dataset is similar to an unpaired preference dataset but includes multiple steps of completions, each with its own label. This structure is useful for tasks that need detailed, step-by-step labeling, such as reasoning tasks. By evaluating each step separately and providing targeted labels, this approach helps identify precisely where the reasoning is correct and where errors occur, allowing for targeted feedback on each part of the reasoning process. Copied stepwise_example = { "prompt" : "Which number is larger, 9.8 or 9.11?" , "completions" : [ "The fractional part of 9.8 is 0.8, while the fractional part of 9.11 is 0.11." , "Since 0.11 is greater than 0.8, the number 9.11 is larger than 9.8." ], "labels" : [ True , False ] } Which dataset type to use? Choosing the right dataset type depends on the task you are working on and the specific requirements of the TRL trainer you are using. Below is a brief overview of the dataset types supported by each TRL trainer. Trainer Expected dataset type BCOTrainer Unpaired preference CPOTrainer Preference (explicit prompt recommended) DPOTrainer Preference (explicit prompt recommended) GKDTrainer Prompt-completion IterativeSFTTrainer Unpaired preference KTOTrainer Unpaired preference or Preference (explicit prompt recommended) NashMDTrainer Prompt-only OnlineDPOTrainer Prompt-only ORPOTrainer Preference (explicit prompt recommended) PPOTrainer Tokenized language modeling PRMTrainer Stepwise supervision RewardTrainer Preference (implicit prompt recommended) SFTTrainer Language modeling XPOTrainer Prompt-only TRL trainers only support standard dataset formats, for now . If you have a conversational dataset, you must first convert it into a standard format. For more information on how to work with conversational datasets, refer to the Working with conversational datasets in TRL section. 
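To make the mapping in the table above concrete, here is a small illustrative helper (not part of TRL; the function name and the heuristics are assumptions made for this sketch) that guesses a dataset's type from the columns of a single example:

def infer_dataset_type(example: dict) -> str:
    """Rough heuristic mapping of column names to TRL dataset types (illustrative only)."""
    keys = set(example.keys())
    if {"completions", "labels"} <= keys:
        return "stepwise supervision"
    if {"chosen", "rejected"} <= keys:
        return "preference" if "prompt" in keys else "preference (implicit prompt)"
    if {"completion", "label"} <= keys:
        return "unpaired preference"
    if {"prompt", "completion"} <= keys:
        return "prompt-completion"
    if keys == {"prompt"}:
        return "prompt-only"
    if "text" in keys or "messages" in keys:
        return "language modeling"
    return "unknown"

print(infer_dataset_type({"prompt": "The sky is", "chosen": " blue.", "rejected": " green."}))
# preference

Such a check can be handy when picking a trainer from the table, but the trainer's documentation remains the authoritative reference for which columns it expects.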
Working with conversational datasets in TRL Conversational datasets are increasingly common, especially for training chat models. However, some TRL trainers don’t support conversational datasets in their raw format. (For more information, see issue #2071 .) These datasets must first be converted into a standard format. Fortunately, TRL offers tools to easily handle this conversion, which are detailed below. Converting a conversational dataset into a standard dataset To convert a conversational dataset into a standard dataset, you need to apply a chat template to the dataset. A chat template is a predefined structure that typically includes placeholders for user and assistant messages. This template is provided by the tokenizer of the model you use. For detailed instructions on using chat templating, refer to the Chat templating section in the transformers documentation . In TRL, the method you apply to convert the dataset will vary depending on the task. Fortunately, TRL provides a helper function called apply_chat_template() to simplify this process. Here’s an example of how to use it: Copied from transformers import AutoTokenizer from trl import apply_chat_template tokenizer = AutoTokenizer.from_pretrained( "microsoft/Phi-3-mini-128k-instruct" ) example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], "completion" : [{ "role" : "assistant" , "content" : "It is blue." }] } apply_chat_template(example, tokenizer) # Output: # {'prompt': '<|user|>\nWhat color is the sky?<|end|>\n<|assistant|>\n', 'completion': 'It is blue.<|end|>\n<|endoftext|>'} Alternatively, you can use the map method to apply the template across an entire dataset: Copied from datasets import Dataset from trl import apply_chat_template dataset_dict = { "prompt" : [[{ "role" : "user" , "content" : "What color is the sky?" }], [{ "role" : "user" , "content" : "Where is the sun?" }]], "completion" : [[{ "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "assistant" , "content" : "In the sky." }]] } dataset = Dataset.from_dict(dataset_dict) dataset = dataset. map (apply_chat_template, fn_kwargs={ "tokenizer" : tokenizer}) # Output: # {'prompt': ['<|user|>\nWhat color is the sky?<|end|>\n<|assistant|>\n', # '<|user|>\nWhere is the sun?<|end|>\n<|assistant|>\n'], # 'completion': ['It is blue.<|end|>\n<|endoftext|>', 'In the sky.<|end|>\n<|endoftext|>']} We recommend using the apply_chat_template() function instead of calling tokenizer.apply_chat_template directly. Handling chat templates for non-language modeling datasets can be tricky and may result in errors, such as mistakenly placing a system prompt in the middle conversation. For additional examples, see #1930 (comment) . The apply_chat_template() is designed to handle these intricacies and ensure the correct application of chat templates for various tasks. It’s important to note that chat templates are model-specific. For example, if you use the chat template from meta-llama/Meta-Llama-3.1-8B-Instruct with the above example, you get a different output: Copied apply_chat_template(example, AutoTokenizer.from_pretrained( "meta-llama/Meta-Llama-3.1-8B-Instruct" )) # Output: # {'prompt': '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat color is the sky?<|im_end|>\n<|im_start|>assistant\n', # 'completion': 'It is blue.<|im_end|>\n'} Always use the chat template associated with the model you’re working with. Using the wrong template can lead to inaccurate or unexpected results. 
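One practical sanity check before converting a dataset: confirm that the tokenizer you load actually ships a chat template (base, non-instruct checkpoints often do not define one). A minimal sketch reusing the model from the examples above; the chat_template attribute is part of the transformers tokenizer API:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

# A tokenizer without a chat template cannot be used to convert conversational data.
if tokenizer.chat_template is None:
    raise ValueError("This tokenizer has no chat template; use the instruct/chat variant of the model.")

print(tokenizer.chat_template[:200])  # inspect the beginning of the template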
Using any dataset with TRL: preprocessing and conversion Many datasets come in formats tailored to specific tasks, which might not be directly compatible with TRL. To use such datasets with TRL, you may need to preprocess and convert them into the required format. To make this easier, we provide a set of example scripts that cover common dataset conversions. Example: UltraFeedback dataset Let’s take the UltraFeedback dataset as an example. Here’s a preview of the dataset: As shown above, the dataset format does not match the expected structure. It’s not in a conversational format, the column names differ, and the results pertain to different models (e.g., Bard, GPT-4) and aspects (e.g., “helpfulness”, “honesty”). By using the provided conversion script examples/datasets/ultrafeedback.py , you can transform this dataset into an unpaired preference type, and push it to the Hub: Copied python examples/datasets/ultrafeedback.py --push_to_hub --repo_id trl-lib/ultrafeedback-gpt-3.5-turbo-helpfulness Once converted, the dataset will look like this: Now, you can use this dataset with TRL! By adapting the provided scripts or creating your own, you can convert any dataset into a format compatible with TRL. Utilities for converting dataset types This section provides example code to help you convert between different dataset types. While some conversions can be performed after applying the chat template (i.e., in the standard format), we recommend performing the conversion before applying the chat template to ensure it works consistently. For simplicity, some of the examples below do not follow this recommendation and use the standard format. However, the conversions can be applied directly to the conversational format without modification. From \ To Language modeling Prompt-completion Prompt-only Preference with implicit prompt Preference Unpaired preference Stepwise supervision Language modeling N/A N/A N/A N/A N/A N/A N/A Prompt-completion 🔗 N/A 🔗 N/A N/A N/A N/A Prompt-only N/A N/A N/A N/A N/A N/A N/A Preference with implicit prompt 🔗 🔗 🔗 N/A 🔗 🔗 N/A Preference 🔗 🔗 🔗 🔗 N/A 🔗 N/A Unpaired preference 🔗 🔗 🔗 N/A N/A N/A N/A Stepwise supervision 🔗 🔗 🔗 N/A N/A 🔗 N/A From prompt-completion to language modeling dataset To convert a prompt-completion dataset into a language modeling dataset, concatenate the prompt and the completion. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" ], "completion" : [ " blue." , " in the sky." ], }) def concat_prompt_completion ( example ): return { "text" : example[ "prompt" ] + example[ "completion" ]} dataset = dataset. map (concat_prompt_completion, remove_columns=[ "prompt" , "completion" ]) Copied >>> dataset[ 0 ] { 'text' : 'The sky is blue.' } From prompt-completion to prompt-only dataset To convert a prompt-completion dataset into a prompt-only dataset, remove the completion. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" ], "completion" : [ " blue." , " in the sky." ], }) dataset = dataset.remove_columns( "completion" ) Copied >>> dataset[ 0 ] { 'prompt' : 'The sky is' } From preference with implicit prompt to language modeling dataset To convert a preference with implicit prompt dataset into a language modeling dataset, remove the rejected, and rename the column "chosen" to "text" . Copied from datasets import Dataset dataset = Dataset.from_dict({ "chosen" : [ "The sky is blue." , "The sun is in the sky." ], "rejected" : [ "The sky is green." 
, "The sun is in the sea." ], }) dataset = dataset.rename_column( "chosen" , "text" ).remove_columns( "rejected" ) Copied >>> dataset[ 0 ] { 'text' : 'The sky is blue.' } From preference with implicit prompt to prompt-completion dataset To convert a preference dataset with implicit prompt into a prompt-completion dataset, extract the prompt with extract_prompt() , remove the rejected, and rename the column "chosen" to "completion" . Copied from datasets import Dataset from trl import extract_prompt dataset = Dataset.from_dict({ "chosen" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sky." }], ], "rejected" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is green." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sea." }], ], }) dataset = dataset. map (extract_prompt).remove_columns( "rejected" ).rename_column( "chosen" , "completion" ) Copied >>> dataset[ 0 ] { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }], 'completion' : [{ 'role' : 'assistant' , 'content' : 'It is blue.' }]} From preference with implicit prompt to prompt-only dataset To convert a preference dataset with implicit prompt into a prompt-only dataset, extract the prompt with extract_prompt() , and remove the rejected and the chosen. Copied from datasets import Dataset from trl import extract_prompt dataset = Dataset.from_dict({ "chosen" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sky." }], ], "rejected" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is green." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sea." }], ], }) dataset = dataset. map (extract_prompt).remove_columns([ "chosen" , "rejected" ]) Copied >>> dataset[ 0 ] { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }]} From implicit to explicit prompt preference dataset To convert a preference dataset with implicit prompt into a preference dataset with explicit prompt, extract the prompt with extract_prompt() . Copied from datasets import Dataset from trl import extract_prompt dataset = Dataset.from_dict({ "chosen" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sky." }], ], "rejected" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is green." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sea." }], ], }) dataset = dataset. map (extract_prompt) Copied >>> dataset[ 0 ] { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }], 'chosen' : [{ 'role' : 'assistant' , 'content' : 'It is blue.' }], 'rejected' : [{ 'role' : 'assistant' , 'content' : 'It is green.' 
}]} From preference with implicit prompt to unpaired preference dataset To convert a preference dataset with implicit prompt into an unpaired preference dataset, extract the prompt with extract_prompt() , and unpair the dataset with unpair_preference_dataset() . Copied from datasets import Dataset from trl import extract_prompt, unpair_preference_dataset dataset = Dataset.from_dict({ "chosen" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sky." }], ], "rejected" : [ [{ "role" : "user" , "content" : "What color is the sky?" }, { "role" : "assistant" , "content" : "It is green." }], [{ "role" : "user" , "content" : "Where is the sun?" }, { "role" : "assistant" , "content" : "In the sea." }], ], }) dataset = dataset. map (extract_prompt) dataset = unpair_preference_dataset(dataset) Copied >>> dataset[ 0 ] { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }], 'completion' : [{ 'role' : 'assistant' , 'content' : 'It is blue.' }], 'label' : True } From preference to language modeling dataset To convert a preference dataset into a language modeling dataset, remove the rejected, concatenate the prompt and the chosen into the "text" column. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" ], "chosen" : [ " blue." , " in the sky." ], "rejected" : [ " green." , " in the sea." ], }) def concat_prompt_chosen ( example ): return { "text" : example[ "prompt" ] + example[ "chosen" ]} dataset = dataset. map (concat_prompt_chosen, remove_columns=[ "prompt" , "chosen" , "rejected" ]) Copied >>> dataset[ 0 ] { 'text' : 'The sky is blue.' } From preference to prompt-completion dataset To convert a preference dataset into a prompt-completion dataset, remove the rejected, and rename the column "chosen" to "completion" . Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" ], "chosen" : [ " blue." , " in the sky." ], "rejected" : [ " green." , " in the sea." ], }) dataset = dataset.remove_columns( "rejected" ).rename_column( "chosen" , "completion" ) Copied >>> dataset[ 0 ] { 'prompt' : 'The sky is' , 'completion' : ' blue.' } From preference to prompt-only dataset To convert a preference dataset into a prompt-only dataset, remove the rejected and the chosen. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" ], "chosen" : [ " blue." , " in the sky." ], "rejected" : [ " green." , " in the sea." ], }) dataset = dataset.remove_columns([ "chosen" , "rejected" ]) Copied >>> dataset[ 0 ] { 'prompt' : 'The sky is' } From explicit to implicit prompt preference dataset To convert a preference dataset with explicit prompt into a preference dataset with implicit prompt, concatenate the prompt to both chosen and rejected, and remove the prompt. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ [{ "role" : "user" , "content" : "What color is the sky?" }], [{ "role" : "user" , "content" : "Where is the sun?" }], ], "chosen" : [ [{ "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "assistant" , "content" : "In the sky." }], ], "rejected" : [ [{ "role" : "assistant" , "content" : "It is green." }], [{ "role" : "assistant" , "content" : "In the sea." 
}], ], }) def concat_prompt_to_completions ( example ): return { "chosen" : example[ "prompt" ] + example[ "chosen" ], "rejected" : example[ "prompt" ] + example[ "rejected" ]} dataset = dataset. map (concat_prompt_to_completions, remove_columns= "prompt" ) Copied >>> dataset[ 0 ] { 'chosen' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }, { 'role' : 'assistant' , 'content' : 'It is blue.' }], 'rejected' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }, { 'role' : 'assistant' , 'content' : 'It is green.' }]} From preference to unpaired preference dataset To convert dataset into an unpaired preference dataset, unpair the dataset with unpair_preference_dataset() . Copied from datasets import Dataset from trl import unpair_preference_dataset dataset = Dataset.from_dict({ "prompt" : [ [{ "role" : "user" , "content" : "What color is the sky?" }], [{ "role" : "user" , "content" : "Where is the sun?" }], ], "chosen" : [ [{ "role" : "assistant" , "content" : "It is blue." }], [{ "role" : "assistant" , "content" : "In the sky." }], ], "rejected" : [ [{ "role" : "assistant" , "content" : "It is green." }], [{ "role" : "assistant" , "content" : "In the sea." }], ], }) dataset = unpair_preference_dataset(dataset) Copied >>> dataset[ 0 ] { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }], 'completion' : [{ 'role' : 'assistant' , 'content' : 'It is blue.' }], 'label' : True } From unpaired preference to language modeling dataset To convert an unpaired preference dataset into a language modeling dataset, concatenate the prompt and the completion into the "text" column, and remove the prompt, completion and label columns. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" , "The sky is" , "The sun is" ], "completion" : [ " blue." , " in the sky." , " green." , " in the sea." ], "label" : [ True , True , False , False ], }) def concatenate_prompt_completion ( example ): return { "text" : example[ "prompt" ] + example[ "completion" ]} dataset = dataset. map (concatenate_prompt_completion).remove_columns([ "prompt" , "completion" , "label" ]) Copied >>> dataset[ 0 ] { 'text' : 'The sky is blue.' } From unpaired preference to prompt-completion dataset To convert an unpaired preference dataset into a prompt-completion dataset, remove the label columns. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" , "The sky is" , "The sun is" ], "completion" : [ " blue." , " in the sky." , " green." , " in the sea." ], "label" : [ True , True , False , False ], }) dataset = dataset.remove_columns([ "label" ]) Copied >>> dataset[ 0 ] { 'prompt' : 'The sky is' , 'completion' : ' blue.' } From unpaired preference to prompt-only dataset To convert an unpaired preference dataset into a prompt-only dataset, remove the completion and the label columns. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "The sky is" , "The sun is" , "The sky is" , "The sun is" ], "completion" : [ " blue." , " in the sky." , " green." , " in the sea." ], "label" : [ True , True , False , False ], }) dataset = dataset.remove_columns([ "completion" , "label" ]) Copied >>> dataset[ 0 ] { 'prompt' : 'The sky is' } From stepwise supervision to language modeling dataset To convert a stepwise supervision dataset into a language modeling dataset, concatenate the prompt and the completions into the "text" column. 
Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "Blue light" , "Water" ], "completions" : [[ " scatters more in the atmosphere," , " so the sky is green." ], [ " forms a less dense structure in ice," , " which causes it to expand when it freezes." ]], "labels" : [[ True , False ], [ True , True ]], }) def concatenate_prompt_completions ( example ): completion = "" .join(example[ "completions" ]) return { "text" : example[ "prompt" ] + completion} dataset = dataset. map (concatenate_prompt_completions, remove_columns=[ "prompt" , "completions" , "labels" ]) Copied >>> dataset[ 0 ] { 'text' : 'Blue light scatters more in the atmosphere, so the sky is green.' } From stepwise supervision to prompt completion dataset To convert a stepwise supervision dataset into a prompt-completion dataset, join the completions and remove the labels. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "Blue light" , "Water" ], "completions" : [[ " scatters more in the atmosphere," , " so the sky is green." ], [ " forms a less dense structure in ice," , " which causes it to expand when it freezes." ]], "labels" : [[ True , False ], [ True , True ]], }) def join_completions ( example ): completion = "" .join(example[ "completions" ]) return { "completion" : completion} dataset = dataset. map (join_completions, remove_columns=[ "completions" , "labels" ]) Copied >>> dataset[ 0 ] { 'prompt' : 'Blue light' , 'completion' : ' scatters more in the atmosphere, so the sky is green.' } From stepwise supervision to prompt only dataset To convert a stepwise supervision dataset into a prompt-only dataset, remove the completions and the labels. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "Blue light" , "Water" ], "completions" : [[ " scatters more in the atmosphere," , " so the sky is green." ], [ " forms a less dense structure in ice," , " which causes it to expand when it freezes." ]], "labels" : [[ True , False ], [ True , True ]], }) dataset = dataset.remove_columns([ "completions" , "labels" ]) Copied >>> dataset[ 0 ] { 'prompt' : 'Blue light' } From stepwise supervision to unpaired preference dataset To convert a stepwise supervision dataset into an unpaired preference dataset, join the completions and merge the labels. The method for merging the labels depends on the specific task. In this example, we use the logical AND operation. This means that if the step labels indicate the correctness of individual steps, the resulting label will reflect the correctness of the entire sequence. Copied from datasets import Dataset dataset = Dataset.from_dict({ "prompt" : [ "Blue light" , "Water" ], "completions" : [[ " scatters more in the atmosphere," , " so the sky is green." ], [ " forms a less dense structure in ice," , " which causes it to expand when it freezes." ]], "labels" : [[ True , False ], [ True , True ]], }) def merge_completions_and_labels ( example ): return { "prompt" : example[ "prompt" ], "completion" : "" .join(example[ "completions" ]), "label" : all (example[ "labels" ])} dataset = dataset. map (merge_completions_and_labels, remove_columns=[ "completions" , "labels" ]) Copied >>> dataset[ 0 ] { 'prompt' : 'Blue light' , 'completion' : ' scatters more in the atmosphere, so the sky is green.' , 'label' : False } Vision datasets Some trainers also support fine-tuning vision-language models (VLMs) using image-text pairs. 
In this scenario, it’s recommended to use a conversational format, as each model handles image placeholders in text differently. A conversational vision dataset differs from a standard conversational dataset in two key ways:
The dataset must contain the key images with the image data.
The "content" field in messages must be a list of dictionaries, where each dictionary specifies the type of data: "image" or "text".

Example:

# Textual dataset:
"content": "What color is the sky?"

# Vision dataset:
"content": [
    {"type": "image"},
    {"type": "text", "text": "What color is the sky in the image?"}
]

An example of a conversational vision dataset is openbmb/RLAIF-V-Dataset; you can explore its training data directly in the dataset viewer on the Hub.
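As an illustrative sketch of the structure described above (the in-memory Pillow image only stands in for real image data, and exact expectations, such as whether every message must use the typed-content form, depend on the trainer and the model's processor):

from PIL import Image

# A plain blue image stands in for real image data in this sketch.
image = Image.new("RGB", (64, 64), color=(0, 120, 255))

vision_example = {
    "images": [image],
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What color is the sky in the image?"},
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "It is blue."}],
        },
    ],
}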
Serverless_Inference_API.txt

Serverless Inference API

Instant Access to thousands of ML Models for Fast Prototyping

Explore the most popular models for text, image, speech, and more — all with a simple API request. Build, test, and experiment without worrying about infrastructure or setup.

Why use the Inference API?

The Serverless Inference API offers a fast and free way to explore thousands of models for a variety of tasks. Whether you’re prototyping a new application or experimenting with ML capabilities, this API gives you instant access to high-performing models across multiple domains:
Text Generation: generate and experiment with high-quality responses from large language models, including tool-calling prompts.
Image Generation: easily create customized images, including LoRAs for your own styles.
Document Embeddings: build search and retrieval systems with SOTA embeddings.
Classical AI Tasks: ready-to-use models for text classification, image classification, speech recognition, and more.

⚡ Fast and Free to Get Started: The Inference API is free with higher rate limits for PRO users. For production needs, explore Inference Endpoints for dedicated resources, autoscaling, advanced security features, and more.

Key Benefits

🚀 Instant Prototyping: Access powerful models without setup.
🎯 Diverse Use Cases: One API for text, image, and beyond.
🔧 Developer-Friendly: Simple requests, fast responses.

Main Features

Leverage over 800,000 models from different open-source libraries (transformers, sentence transformers, adapter transformers, diffusers, timm, etc.).
Use models for a variety of tasks, including text generation, image generation, document embeddings, NER, summarization, image classification, and more.
Accelerate your prototyping by using GPU-powered models.
Run very large models that are challenging to deploy in production.
Production-grade platform without the hassle: built-in automatic scaling, load balancing and caching.
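As a quick illustration of the "simple API request" mentioned above, here is a minimal sketch of calling the serverless API for a text-generation model; the model id and token are placeholders, and the exact parameters accepted by each task are covered in the API Reference:

import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"  # any text-generation model id
headers = {"Authorization": "Bearer hf_xxx"}  # replace with your User Access Token

payload = {"inputs": "The answer to the universe is"}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())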
Contents

The documentation is organized into two sections:
Getting Started: learn the basics of how to use the Inference API.
API Reference: dive into task-specific settings and parameters.

Inference Playground

If you want to get started quickly with Chat Completion models, use the Inference Playground to test and compare models against your prompts.

Serious about AI in your organisation? Build faster with the Hugging Face Enterprise Hub.
JupyterLab_on_Spaces.txt

JupyterLab on Spaces

JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data. It is a great tool for data science and machine learning, and it is widely used by the community. With Hugging Face Spaces, you can deploy your own JupyterLab instance and use it for development directly from the Hugging Face website.

⚡️ Deploy a JupyterLab instance on Spaces

You can deploy JupyterLab on Spaces with just a few clicks, starting from the JupyterLab Docker template when creating a new Space. Spaces requires you to define:
An Owner: either your personal account or an organization you’re a part of.
A Space name: the name of the Space within the account where you’re creating the Space.
The Visibility: private if you want the Space to be visible only to you or your organization, or public if you want it to be visible to other users.
The Hardware: the hardware you want to use for your JupyterLab instance. This goes from CPUs to H100s.

You can optionally configure a JUPYTER_TOKEN password to protect your JupyterLab workspace. When unspecified, it defaults to huggingface. We strongly recommend setting this up if your Space is public or if the Space is in an organization.
Storage in Hugging Face Spaces is ephemeral, and the data you store in the default configuration can be lost in a reboot or reset of the Space. We recommend saving your work to a remote location or using persistent storage for your data.

Setting up persistent storage

To set up persistent storage on the Space, go to the Settings page of your Space and choose one of the options: small, medium, or large. Once persistent storage is set up, it is mounted at /data inside the JupyterLab Space. A short example of saving files to /data is shown at the end of this page.

Read more

HF Docker Spaces

If you have any feedback or change requests, please don’t hesitate to reach out to the owners on the Feedback Discussion.

Acknowledgments

This template was created by camenduru and nateraw, with contributions from osanseviero and azzr.
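For instance, here is a minimal sketch of writing work from a notebook into the persistent volume (assuming persistent storage has been enabled, so that /data exists and survives restarts; the directory and file names are placeholders):

from pathlib import Path

data_dir = Path("/data/experiments")  # persisted across restarts when storage is enabled
data_dir.mkdir(parents=True, exist_ok=True)

(data_dir / "notes.txt").write_text("Checkpointed from JupyterLab on Spaces.\n")
print(sorted(p.name for p in data_dir.iterdir()))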
Start_Here!.txt

Start Here!

Use the interactive tool on the documentation page to get started with learning about a particular feature of Accelerate and how to utilize it! It will provide you with a code diff, an explanation of what is going on, and some useful links to explore more within the documentation. Most code examples start from the following Python code before integrating Accelerate in some way:

for batch in dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss.backward()
    optimizer.step()
    scheduler.step()
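For reference, here is a minimal sketch of how that same loop typically looks once Accelerate is integrated (assuming model, optimizer, dataloader, scheduler, and loss_function are already defined); see the Quicktour and the "Add Accelerate to your code" tutorial for the full walkthrough:

from accelerate import Accelerator

accelerator = Accelerator()
# prepare() wraps the objects for the current setup and handles device placement,
# so the manual .to(device) calls are no longer needed.
model, optimizer, dataloader, scheduler = accelerator.prepare(
    model, optimizer, dataloader, scheduler
)

for batch in dataloader:
    optimizer.zero_grad()
    inputs, targets = batch
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
    scheduler.step()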
Access_control_in_organizations.txt

Access control in organizations

You can set up Single Sign-On (SSO) to map access control rules from your organization’s Identity Provider. Advanced and more fine-grained access control can be achieved with Resource Groups. The Resource Group feature is part of the Enterprise Hub.

Members of organizations can have four different roles: read, contributor, write, or admin:
read: read-only access to the Organization’s repos and metadata/settings (e.g., the Organization’s profile, members list, API token, etc.).
contributor: additional write rights to the subset of the Organization’s repos that were created by the user. I.e., users can create repos and then modify only those repos. This is similar to the write role, but scoped to repos created by the user.
write: write rights to all the Organization’s repos. Users can create, delete, or rename any repo in the Organization namespace. A user can also edit and delete files from the browser editor and push content with git.
admin: in addition to write rights on repos, admin members can update the Organization’s profile, refresh the Organization’s API token, and manage Organization members.

As an organization admin, go to the Members section of the org settings to manage roles for users.

Viewing members’ email address

This feature is part of the Enterprise Hub. You may be able to view the email addresses of members of your organization.
The visibility of the email addresses depends on the organization’s SSO configuration or verified organization status. If you verify a domain for your organization, you can view members’ email addresses for the verified domain. If SSO is configured for your organization, you can view the email address for each of your organization members by setting Matching email domains in the SSO configuration.

Managing Access Tokens with access to my organization

See Tokens Management.
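If you manage tokens programmatically, here is a small sketch of inspecting which organizations a User Access Token belongs to using huggingface_hub; the per-organization fields returned by whoami (such as role information) are not guaranteed here and should be checked against the Hub API documentation:

from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # placeholder User Access Token
info = api.whoami()

print("user:", info["name"])
for org in info.get("orgs", []):
    # Each entry is a dict describing one organization the token's user belongs to.
    print("org:", org)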
Interface__ModelEntry.txt

Interface: ModelEntry

Properties

downloads: number (defined in hub/src/lib/list-models.ts:49)
gated: false | "auto" | "manual" (defined in hub/src/lib/list-models.ts:46)
id: string (defined in hub/src/lib/list-models.ts:43)
likes: number (defined in hub/src/lib/list-models.ts:48)
name: string (defined in hub/src/lib/list-models.ts:44)
private: boolean (defined in hub/src/lib/list-models.ts:45)
task (optional): "other" | "text-classification" | "token-classification" | "table-question-answering" | "question-answering" | "zero-shot-classification" | "translation" | "summarization" | "feature-extraction" | "text-generation" | "text2text-generation" | "fill-mask" | "sentence-similarity" | "text-to-speech" | "text-to-audio" | "automatic-speech-recognition" | "audio-to-audio" | "audio-classification" | "audio-text-to-text" | "voice-activity-detection" | "depth-estimation" | "image-classification" | "object-detection" | "image-segmentation" | "text-to-image" | "image-to-text" | "image-to-image" | "image-to-video" | "unconditional-image-generation" | "video-classification" | "reinforcement-learning" | "robotics" | "tabular-classification" | "tabular-regression" | "tabular-to-text" | "table-to-text" | "multiple-choice" | "text-retrieval" | "time-series-forecasting" | "text-to-video" | "image-text-to-text" | "visual-question-answering" | "document-question-answering" | "zero-shot-image-classification" | "graph-ml" | "mask-generation" | "zero-shot-object-detection" | "text-to-3d" | "image-to-3d" | "image-feature-extraction" | "video-text-to-text" | "keypoint-detection" | "any-to-any" (defined in hub/src/lib/list-models.ts:47)
updatedAt: Date (defined in hub/src/lib/list-models.ts:50)
Spaces_Settings.txt | Spaces Settings

You can configure your Space's appearance and other settings inside the YAML block at the top of the README.md file at the root of the repository. For example, if you want to create a Space with Gradio named Demo Space with a yellow-to-orange gradient thumbnail:

---
title: Demo Space
emoji: 🤗
colorFrom: yellow
colorTo: orange
sdk: gradio
app_file: app.py
pinned: false
---

For additional settings, refer to the Reference section.
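If you prefer to set this up programmatically, here is a minimal sketch using huggingface_hub to create a Gradio Space and push a README.md carrying the YAML block above; the username, Space name, and the fact that you create the repo from a script (rather than through the website) are assumptions for illustration.

from huggingface_hub import HfApi

api = HfApi()

# Create the Space repository; the SDK is chosen at creation time ("your-username/demo-space" is a placeholder).
repo_id = "your-username/demo-space"
api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="gradio", exist_ok=True)

# README.md carrying the YAML settings block from the example above.
readme = """---
title: Demo Space
emoji: 🤗
colorFrom: yellow
colorTo: orange
sdk: gradio
app_file: app.py
pinned: false
---
"""
api.upload_file(
    path_or_fileobj=readme.encode(),
    path_in_repo="README.md",
    repo_id=repo_id,
    repo_type="space",
) |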
Pipelines_for_inference.txt | Pipelines for inference
The pipeline() makes it simple to use any model from the Hub for inference on any language, computer vision, speech, and multimodal task. Even if you don't have experience with a specific modality or aren't familiar with the underlying code behind the models, you can still use them for inference with the pipeline()! This tutorial will teach you to:

Use a pipeline() for inference.
Use a specific tokenizer or model.
Use a pipeline() for audio, vision, and multimodal tasks.

Take a look at the pipeline() documentation for a complete list of supported tasks and available parameters.

Pipeline usage

While each task has an associated pipeline(), it is simpler to use the general pipeline() abstraction, which contains all the task-specific pipelines. The pipeline() automatically loads a default model and a preprocessing class capable of inference for your task. Let's take the example of using the pipeline() for automatic speech recognition (ASR), or speech-to-text.

Start by creating a pipeline() and specify the inference task:

>>> from transformers import pipeline
>>> transcriber = pipeline(task="automatic-speech-recognition")

Pass your input to the pipeline(). In the case of speech recognition, this is an audio input file:

>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'}

Not the result you had in mind? Check out some of the most downloaded automatic speech recognition models on the Hub to see if you can get a better transcription. Let's try the Whisper large-v2 model from OpenAI. Whisper was released two years after Wav2Vec2 and was trained on close to 10x more data. As such, it beats Wav2Vec2 on most downstream benchmarks. It also has the added benefit of predicting punctuation and casing, neither of which is possible with Wav2Vec2. Let's give it a try here to see how it performs. Set torch_dtype="auto" to automatically load the most memory-efficient data type the weights are stored in.
>>> transcriber = pipeline(model="openai/whisper-large-v2", torch_dtype="auto")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}

Now this result looks more accurate! For a deep-dive comparison of Wav2Vec2 vs. Whisper, refer to the Audio Transformers Course. We really encourage you to check out the Hub for models in different languages, models specialized in your field, and more. You can check out and compare model results directly from your browser on the Hub to see if a model fits or handles corner cases better than other ones. And if you don't find a model for your use case, you can always start training your own!

If you have several inputs, you can pass your input as a list:

transcriber(
    [
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
    ]
)

Pipelines are great for experimentation, as switching from one model to another is trivial; however, there are some ways to optimize them for larger workloads than experimentation. See the following guides that dive into iterating over whole datasets or using pipelines in a webserver:

Using pipelines on a dataset
Using pipelines for a webserver

Parameters

pipeline() supports many parameters; some are task specific, and some are general to all pipelines. In general, you can specify parameters anywhere you want:

transcriber = pipeline(model="openai/whisper-large-v2", my_parameter=1)

out = transcriber(...)  # This will use `my_parameter=1`.
out = transcriber(..., my_parameter=2)  # This will override and use `my_parameter=2`.
out = transcriber(...)  # This will go back to using `my_parameter=1`.

Let's check out three important ones:

Device

If you use device=n, the pipeline automatically puts the model on the specified device. This works regardless of whether you are using PyTorch or TensorFlow.

transcriber = pipeline(model="openai/whisper-large-v2", device=0)

If the model is too large for a single GPU and you are using PyTorch, you can set torch_dtype='float16' to enable FP16 precision inference. Usually this would not cause significant performance drops, but make sure you evaluate it on your models!

Alternatively, you can set device_map="auto" to automatically determine how to load and store the model weights. Using the device_map argument requires the 🤗 Accelerate package:

pip install --upgrade accelerate

The following code automatically loads and stores model weights across devices:

transcriber = pipeline(model="openai/whisper-large-v2", device_map="auto")

Note that if device_map="auto" is passed, there is no need to add a device argument when instantiating your pipeline, as you may encounter some unexpected behavior!

Batch size

By default, pipelines will not batch inference, for reasons explained in detail here. The reason is that batching is not necessarily faster, and can actually be quite a bit slower in some cases.
But if it works in your use case, you can use:

transcriber = pipeline(model="openai/whisper-large-v2", device=0, batch_size=2)
audio_filenames = [f"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/{i}.flac" for i in range(1, 5)]
texts = transcriber(audio_filenames)

This runs the pipeline on the 4 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you. The output should always match what you would have received without batching. It is only meant as a way to help you get more speed out of a pipeline.

Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this chunk batching for you.

Task specific parameters

All tasks provide task specific parameters which allow for additional flexibility and options to help you get your job done. For instance, the transformers.AutomaticSpeechRecognitionPipeline.__call__() method has a return_timestamps parameter which sounds promising for subtitling videos:

>>> transcriber = pipeline(model="openai/whisper-large-v2", return_timestamps=True)
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.', 'chunks': [{'timestamp': (0.0, 11.88), 'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its'}, {'timestamp': (11.88, 12.38), 'text': ' creed.'}]}

As you can see, the model inferred the text and also output when the various sentences were pronounced.

There are many parameters available for each task, so check out each task's API reference to see what you can tinker with! For instance, the AutomaticSpeechRecognitionPipeline has a chunk_length_s parameter which is helpful for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically cannot handle on its own:

>>> transcriber = pipeline(model="openai/whisper-large-v2", chunk_length_s=30)
>>> transcriber("https://huggingface.co/datasets/reach-vb/random-audios/resolve/main/ted_60.wav")
{'text': " So in college, I was a government major, which means I had to write a lot of papers. Now, when a normal student writes a paper, they might spread the work out a little like this. So, you know. You get started maybe a little slowly, but you get enough done in the first week that with some heavier days later on, everything gets done and things stay civil. And I would want to do that like that. That would be the plan. I would have it all ready to go, but then actually the paper would come along, and then I would kind of do this. And that would happen every single paper. But then came my 90-page senior thesis, a paper you're supposed to spend a year on. I knew for a paper like that, my normal workflow was not an option, it was way too big a project. So I planned things out and I decided I kind of had to go something like this. This is how the year would go. So I'd start off light and I'd bump it up"}

If you can't find a parameter that would really help you out, feel free to request it!

Using pipelines on a dataset

The pipeline can also run inference on a large dataset.
The easiest way we recommend doing this is by using an iterator:

def data():
    for i in range(1000):
        yield f"My example {i}"

pipe = pipeline(model="openai-community/gpt2", device=0)
generated_characters = 0
for out in pipe(data()):
    generated_characters += len(out[0]["generated_text"])

The iterator data() yields each example, and the pipeline automatically recognizes that the input is iterable and will start fetching the data while it continues to process it on the GPU (this uses DataLoader under the hood). This is important because you don't have to allocate memory for the whole dataset and you can feed the GPU as fast as possible. Since batching could speed things up, it may be useful to try tuning the batch_size parameter here.

The simplest way to iterate over a dataset is to just load one from 🤗 Datasets:

# KeyDataset is a util that will just output the item we're interested in.
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)

Using pipelines for a webserver

Creating an inference engine is a complex topic which deserves its own page; see the dedicated guide on using pipelines for a webserver.

Vision pipeline

Using a pipeline() for vision tasks is practically identical. Specify your task and pass your image to the classifier. The image can be a link, a local path or a base64-encoded image. For example, what species of cat is shown below?

>>> from transformers import pipeline
>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]

Text pipeline

Using a pipeline() for NLP tasks is practically identical.

>>> from transformers import pipeline
>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}

Multimodal pipeline

The pipeline() supports more than one modality. For example, a visual question answering (VQA) task combines text and image. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image. For example, if you use this invoice image:

>>> from transformers import pipeline
>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> output = vqa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
>>> output[0]["score"] = round(output[0]["score"], 3)
>>> output
[{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}]

To run the example above you need to have pytesseract installed in addition to 🤗 Transformers:

sudo apt install -y tesseract-ocr
pip install pytesseract

Using pipeline on large models with 🤗 accelerate

You can easily run pipeline on large models using 🤗 accelerate! First make sure you have installed accelerate with pip install accelerate. Then load your model using device_map="auto". We will use facebook/opt-1.3b for our example.

# pip install accelerate
import torch
from transformers import pipeline

pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto")
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)

You can also pass 8-bit loaded models if you install bitsandbytes and add the argument load_in_8bit=True:

# pip install accelerate bitsandbytes
import torch
from transformers import pipeline

pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)

Note that you can replace the checkpoint with any Hugging Face model that supports large model loading, such as BLOOM.

Creating web demos from pipelines with gradio

Pipelines are automatically supported in Gradio, a library that makes creating beautiful and user-friendly machine learning apps on the web a breeze. First, make sure you have Gradio installed:

pip install gradio

Then, you can create a web demo around an image classification pipeline (or any other pipeline) in a single line of code by calling Gradio's Interface.from_pipeline function to launch the pipeline. This creates an intuitive drag-and-drop interface in your browser:

from transformers import pipeline
import gradio as gr

pipe = pipeline("image-classification", model="google/vit-base-patch16-224")

gr.Interface.from_pipeline(pipe).launch()

By default, the web demo runs on a local server. If you'd like to share it with others, you can generate a temporary public link by setting share=True in launch(). You can also host your demo on Hugging Face Spaces for a permanent link.
Using_Keras_at_Hugging_Face.txt | Using Keras at Hugging Face

Keras is an open-source, multi-backend deep learning framework with support for JAX, TensorFlow, and PyTorch. You can find more details about it on keras.io.

Exploring Keras in the Hub

You can list Keras models on the Hub by filtering by library name on the models page. Keras models on the Hub come with useful features when uploaded directly from the Keras library:

A generated model card with a description, a plot of the model, and more.
A download count to monitor the popularity of a model.
A code snippet to quickly get started with the model.

Using existing models

Keras is deeply integrated with the Hugging Face Hub. This means you can load and save models on the Hub directly from the library. To do that, you need to install a recent version of Keras and huggingface_hub. The huggingface_hub library is a lightweight Python client used by Keras to interact with the Hub.

pip install -U keras huggingface_hub

Once you have the library installed, you just need to use the regular keras.saving.load_model method, passing a Hugging Face path as the argument. An HF path is a repo_id prefixed by hf://, e.g. "hf://keras-io/weather-prediction".
Read more about load_model in the Keras documentation.

import keras

model = keras.saving.load_model("hf://Wauplin/mnist_example")

If you want to see how to load a specific model, you can click Use this model on the model page to get a working code snippet!

Sharing your models

Similarly to load_model, you can save and share a Keras model on the Hub using model.save() with an HF path:

model = ...
model.save("hf://your-username/your-model-name")

If the repository does not exist on the Hub, it will be created for you. The uploaded model contains a model card, a plot of the model, the metadata.json and config.json files, and a model.weights.h5 file containing the model weights.

By default, the repository will contain a minimal model card. Check out the Model Card guide to learn more about model cards and how to complete them. You can also programmatically update model cards using huggingface_hub.ModelCard (see guide, and the sketch at the end of this page).

You might already be familiar with .keras files. In fact, a .keras file is simply a zip file containing the .json and model.weights.h5 files. When pushed to the Hub, the model is saved as an unzipped folder so that you can navigate through the files. Note that if you manually upload a .keras file to a model repository on the Hub, the repository will automatically be tagged as keras, but you won't be able to load it using keras.saving.load_model.

Additional resources

Keras Developer Guides.
Keras examples.
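To illustrate the ModelCard mention above, here is a minimal sketch of updating a model card programmatically; the repository name is the placeholder used earlier, and the metadata values and appended text are purely illustrative.

from huggingface_hub import ModelCard

# Load the card generated for the uploaded model ("your-username/your-model-name" is a placeholder).
card = ModelCard.load("your-username/your-model-name")

# Edit the structured metadata and the free-form text, then push the card back to the repository.
card.data.library_name = "keras"
card.data.tags = ["image-classification"]
card.text += "\n## Training notes\nTrained for 10 epochs on MNIST."
card.push_to_hub("your-username/your-model-name") |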
Interface__WhoAmIUser.txt | Interface: WhoAmIUser

Properties

• avatarUrl: string (defined in hub/src/lib/who-am-i.ts:18)
• canPay: boolean (defined in hub/src/lib/who-am-i.ts:17)
• email: string (defined in hub/src/lib/who-am-i.ts:11)
• emailVerified: boolean (defined in hub/src/lib/who-am-i.ts:12)
• fullname: string (defined in hub/src/lib/who-am-i.ts:16)
• id: string. Unique ID persistent across renames (defined in hub/src/lib/who-am-i.ts:9)
• isPro: boolean (defined in hub/src/lib/who-am-i.ts:13)
• name: string (defined in hub/src/lib/who-am-i.ts:15)
• orgs: WhoAmIOrg[] (defined in hub/src/lib/who-am-i.ts:14)
• periodEnd: null | number. Unix timestamp in seconds (defined in hub/src/lib/who-am-i.ts:22)
• type: "user" (defined in hub/src/lib/who-am-i.ts:10) |
Class__HubApiError.txt | Class: HubApiError

Error thrown when an API call to the Hugging Face Hub fails.

Hierarchy: Error ↳ HubApiError

Constructors

• new HubApiError(url, statusCode, requestId?, message?): HubApiError
  Parameters: url: string, statusCode: number, requestId?: string, message?: string
  Returns HubApiError. Overrides Error.constructor (defined in hub/src/error.ts:40)

Properties

• cause (optional): unknown. Inherited from Error.cause (defined in typescript/lib/lib.es2022.error.d.ts:24)
• data (optional): JsonObject (defined in hub/src/error.ts:38)
• message: string. Inherited from Error.message (defined in typescript/lib/lib.es5.d.ts:1077)
• name: string. Inherited from Error.name (defined in typescript/lib/lib.es5.d.ts:1076)
• requestId (optional): string (defined in hub/src/error.ts:37)
• stack (optional): string. Inherited from Error.stack (defined in typescript/lib/lib.es5.d.ts:1078)
• statusCode: number (defined in hub/src/error.ts:35)
• url: string (defined in hub/src/error.ts:36)
• prepareStackTrace (static, optional): (err: Error, stackTraces: CallSite[]) => any. Optional override for formatting stack traces; see https://v8.dev/docs/stack-trace-api#customizing-stack-traces. Inherited from Error.prepareStackTrace (defined in @types/node/globals.d.ts:28)
• stackTraceLimit (static): number. Inherited from Error.stackTraceLimit (defined in @types/node/globals.d.ts:30)

Methods

• captureStackTrace(targetObject: object, constructorOpt?: Function): void. Create .stack property on a target object. Inherited from Error.captureStackTrace (defined in @types/node/globals.d.ts:21) |
HIGGS.txt | HIGGS
HIGGS is a zero-shot quantization algorithm that combines Hadamard preprocessing with MSE-optimal quantization grids to achieve lower quantization error and SOTA performance. You can find more information in the paper arxiv.org/abs/2411.17525. Runtime support for HIGGS is implemented through FLUTE and its library.

Quantization Example

from transformers import AutoModelForCausalLM, AutoTokenizer, HiggsConfig

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=HiggsConfig(bits=4),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
tokenizer.decode(model.generate(
    **tokenizer("Hi,", return_tensors="pt").to(model.device),
    temperature=0.5,
    top_p=0.80,
)[0])

Pre-quantized models

Some pre-quantized models can be found in the official collection on the Hugging Face Hub.

Current Limitations

Architectures

Currently, FLUTE, and HIGGS by extension, only support Llama 3.1 and 3.0 models with 8B, 70B, and 405B parameters, as well as Gemma-2 9B and 27B. We're working on supporting more diverse models, and eventually arbitrary models, by modifying the FLUTE compilation procedure.

torch.compile

HIGGS is fully compatible with torch.compile. Compiling model.forward as described here gives the following speedups on an RTX 4090 for Llama-3.1-8B-Instruct (forward passes/sec); a compilation sketch follows at the end of this page:

Batch size | BF16 (with torch.compile) | HIGGS 4bit (no torch.compile) | HIGGS 4bit (with torch.compile)
1          | 59                        | 41                            | 124
4          | 57                        | 42                            | 123
16         | 56                        | 41                            | 120

Quantized training

Currently, HIGGS doesn't support quantized training (and backward passes in general). We're working on adding support for it.
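To make the torch.compile point concrete, here is a minimal sketch that compiles the forward pass of the HIGGS-quantized model from the quantization example above; the compile mode, prompt, and generation length are illustrative choices, not part of the original example.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HiggsConfig

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    quantization_config=HiggsConfig(bits=4),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

# Compile only the forward pass; the first few calls are slower while compilation runs.
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tokenizer("Hi,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0])) |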
LLM_inference_optimization.txt | LLM inference optimization
Large language models (LLMs) have pushed text generation applications, such as chat and code completion models, to the next level by producing text that displays a high level of understanding and fluency. But what makes LLMs so powerful (namely, their size) also presents challenges for inference.

Basic inference is slow because LLMs have to be called repeatedly to generate the next token. The input sequence grows as generation progresses, so it takes the LLM longer and longer to process. LLMs also have billions of parameters, making it a challenge to store and handle all those weights in memory.

This guide will show you how to use the optimization techniques available in Transformers to accelerate LLM inference.

Hugging Face also provides Text Generation Inference (TGI), a library dedicated to deploying and serving highly optimized LLMs for inference. It includes deployment-oriented optimization features not included in Transformers, such as continuous batching for increasing throughput and tensor parallelism for multi-GPU inference.

Static kv-cache and torch.compile

During decoding, an LLM computes the key-value (kv) values for each input token, and since it is autoregressive, it computes the same kv values each time because the generated output becomes part of the input. This is not very efficient because you're recomputing the same kv values each time.

To optimize this, you can use a kv-cache to store the past keys and values instead of recomputing them each time. However, since the kv-cache grows with each generation step and is dynamic, it prevents you from taking advantage of torch.compile, a powerful optimization tool that fuses PyTorch code into fast and optimized kernels. We have an entire guide dedicated to kv-caches here.

The static kv-cache solves this issue by pre-allocating the kv-cache size to a maximum value, which allows you to combine it with torch.compile for up to a 4x speedup. Your speedup may vary depending on the model size (larger models have a smaller speedup) and hardware.

Currently, only Llama and a few other models support static kv-cache and torch.compile. Check this issue for a live model compatibility list.
There are three flavors of static kv-cache usage, depending on the complexity of your task:

Basic usage: simply set a flag in generation_config (recommended);
Advanced usage: handle a cache object for multi-turn generation or a custom generation loop;
Advanced usage: compile the entire generate function into a single graph, if having a single graph is relevant for you.

The basic usage, the recommended flavor, is shown below. Regardless of the strategy used with torch.compile, you can avoid shape-related recompilations if you left-pad your LLM inputs to a limited set of values. The pad_to_multiple_of tokenizer flag is your friend!

For this example, let's use the Gemma model. All we need to do is to:

Access the model's generation_config attribute and set the cache_implementation to "static";
Call torch.compile on the model to compile the forward pass with the static kv-cache.

And that's it!

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", torch_dtype="auto", device_map="auto")

model.generation_config.cache_implementation = "static"

model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type)

outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']

Under the hood, generate will attempt to reuse the same cache object, removing the need for re-compilation at each call. Avoiding re-compilation is critical to get the most out of torch.compile, and you should be aware of the following:

If the batch size changes or the maximum output length increases between calls, the cache will have to be reinitialized, triggering a new compilation;
The first couple of calls of the compiled function are slower, as the function is being compiled.

For a more advanced usage of the static cache, such as multi-turn conversations, we recommend instantiating and manipulating the cache object outside generate().

Speculative decoding

For a more in-depth explanation, take a look at the Assisted Generation: a new direction toward low-latency text generation blog post!

Another issue with autoregression is that the model weights are loaded for every forward pass, once per generated token. This is slow and cumbersome for LLMs, which have billions of parameters. Speculative decoding alleviates this slowdown by using a second, smaller and faster assistant model to generate candidate tokens that are verified by the larger LLM in a single forward pass. If the verified tokens are correct, the LLM essentially gets them for "free" without having to generate them itself. There is no degradation in accuracy because the verification forward pass ensures the same outputs are generated as if the LLM had generated them on its own.

To get the largest speedup, the assistant model should be a lot smaller than the LLM so that it can generate tokens quickly.
The assistant and LLM model must also share the same tokenizer to avoid re-encoding and decoding tokens. Speculative decoding is only supported for the greedy search and sampling decoding strategies, and it also doesn't support batched inputs.

Enable speculative decoding by loading an assistant model and passing it to the generate() method.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from accelerate.test_utils.testing import get_backend

device, _, _ = get_backend()  # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Einstein's theory of relativity states that the speed of light is constant. "]

Prompt lookup decoding

Prompt lookup decoding is a variant of speculative decoding that is also compatible with greedy search and sampling. Prompt lookup works especially well for input-grounded tasks, such as summarization, where there are often overlapping words between the prompt and output. These overlapping n-grams are used as the LLM candidate tokens.

To enable prompt lookup decoding, specify the number of tokens that should be overlapping in the prompt_lookup_num_tokens parameter. Then you can pass this parameter to the generate() method.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from accelerate.test_utils.testing import get_backend

device, _, _ = get_backend()  # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The second law of thermodynamics states that entropy increases with temperature. ']

Attention optimizations

A known issue with transformer models is that the self-attention mechanism grows quadratically in compute and memory with the number of input tokens. This limitation is only magnified in LLMs, which handle much longer sequences. To address this, try FlashAttention-2 or PyTorch's scaled dot product attention (SDPA), which are more memory-efficient attention implementations and can accelerate inference.

FlashAttention-2

FlashAttention and FlashAttention-2 break up the attention computation into smaller chunks and reduce the number of intermediate read/write operations to GPU memory to speed up inference. FlashAttention-2 improves on the original FlashAttention algorithm by also parallelizing over the sequence length dimension and better partitioning work on the hardware to reduce synchronization and communication overhead.

To use FlashAttention-2, set attn_implementation="flash_attention_2" in the from_pretrained() method.
Copied from transformers import AutoModelForCausalLM, BitsAndBytesConfig quant_config = BitsAndBytesConfig(load_in_8bit= True ) model = AutoModelForCausalLM.from_pretrained( "google/gemma-2b" , quantization_config=quant_config, torch_dtype=torch.bfloat16, attn_implementation= "flash_attention_2" , ) Fine-Tuning with torch.compile and Padding-Free Data Collation In addition to optimizing inference, you can also enhance the training efficiency of large language models by leveraging torch.compile during fine-tuning and using a padding-free data collator. This approach can significantly speed up training and reduce computational overhead. Here’s how you can fine-tune a Llama model using SFTTrainer from the TRL library, with torch_compile enabled and a padding-free data collator: Copied #################### IMPORTS ################### import math import datasets import dataclasses from transformers import ( AutoModelForCausalLM, AutoTokenizer, TrainingArguments ) from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM #################### MODEL LOADING WITH FLASH ATTENTION ################### model_name = "meta-llama/Llama-3.2-1B" model = AutoModelForCausalLM.from_pretrained( model_name, attn_implementation= "flash_attention_2" # Enables FlashAttention -2 ) tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast= True ) #################### DATA PREPROCESSING (PADDING-FREE) ################### response_template = "\n### Label:" response_template_ids = tokenizer.encode( response_template, add_special_tokens= False )[ 2 :] # Exclude special tokens data_collator = DataCollatorForCompletionOnlyLM( response_template_ids=response_template_ids, tokenizer=tokenizer, ignore_index= -100 , padding_free= True # Enables padding-free collation ) def format_dataset(example): return { "output" : example[ "output" ] + tokenizer.eos_token } data_files = { "train" : "path/to/dataset" } # Replace with your dataset path json_dataset = datasets.load_dataset( "json" , data_files=data_files) formatted_train_dataset = json_dataset[ "train" ].map(format_dataset) ################# TRAINING CONFIGURATION ############################ train_args = TrainingArguments( num_train_epochs= 5 , per_device_train_batch_size= 4 , per_device_eval_batch_size= 4 , gradient_accumulation_steps= 4 , learning_rate= 1e-5 , weight_decay= 0.0 , warmup_ratio= 0.03 , lr_scheduler_type= "cosine" , logging_steps= 1 , include_tokens_per_second= True , save_strategy= "epoch" , output_dir= "output" , torch_compile= True , # Enables torch.compile torch_compile_backend= "inductor" , torch_compile_mode= "default" ) # Convert TrainingArguments to SFTConfig transformer_train_arg_fields = [x.name for x in dataclasses.fields(SFTConfig)] transformer_kwargs = { k: v for k, v in train_args.to_dict().items() if k in transformer_train_arg_fields } training_args = SFTConfig(**transformer_kwargs) ####################### FINE-TUNING ##################### trainer = SFTTrainer( model=model, tokenizer=tokenizer, train_dataset=formatted_train_dataset, data_collator=data_collator, dataset_text_field= "output" , args=training_args, ) trainer.train() PyTorch scaled dot product attention Scaled dot product attention (SDPA) is automatically enabled in PyTorch 2.0 and it supports FlashAttention, xFormers, and PyTorch’s C++ implementation. SDPA chooses the most performant attention algorithm if you’re using a CUDA backend. For other backends, SDPA defaults to the PyTorch C++ implementation. 
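Although SDPA is picked up automatically in PyTorch 2.0+, you can also request it explicitly when loading a model so that no other attention implementation is silently selected. A minimal sketch (reusing the same checkpoint as the examples above):
from transformers import AutoModelForCausalLM
import torch

# Explicitly request PyTorch's scaled dot product attention implementation
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
)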
SDPA supports FlashAttention-2 as long as you have the latest PyTorch version installed. Use the torch.nn.attention.sdpa_kernel context manager to explicitly enable or disable any of the four attention algorithms. For example, use SDPBackend.FLASH_ATTENTION to enable FlashAttention.
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = tokenizer("The theory of special relativity states ", return_tensors="pt").to(model.device)

with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    outputs = model.generate(**inputs)
Quantization
Quantization reduces the size of the LLM weights by storing them in a lower precision. This translates to lower memory usage and makes loading LLMs for inference more accessible if you’re constrained by your GPU’s memory. If you aren’t limited by your GPU, you don’t necessarily need to quantize your model because it can incur a small latency cost (except for AWQ and fused AWQ modules) due to the extra step required to quantize and dequantize the weights. There are many quantization libraries (see the Quantization guide for more details) available, such as Quanto, AQLM, VPTQ, AWQ, and AutoGPTQ. Feel free to try them out and see which one works best for your use case. We also recommend reading the Overview of natively supported quantization schemes in 🤗 Transformers blog post which compares AutoGPTQ and bitsandbytes.
Use the Model Memory Calculator to estimate and compare how much memory is required to load a model. For example, try estimating how much memory it costs to load Mistral-7B-v0.1. To load Mistral-7B-v0.1 in half-precision, set the torch_dtype parameter in the from_pretrained() method to torch.bfloat16. This requires 13.74GB of memory.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
To load a quantized model (8-bit or 4-bit) for inference, try bitsandbytes and set the load_in_4bit or load_in_8bit parameters to True. Loading the model in 8-bits only requires 6.87 GB of memory.
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=quant_config,
    device_map="auto",
) |
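If you want to push memory savings further, the same bitsandbytes integration also supports 4-bit loading, as mentioned above. The snippet below is a minimal sketch; BitsAndBytesConfig's load_in_4bit, bnb_4bit_quant_type and bnb_4bit_compute_dtype arguments control the 4-bit scheme, and the exact memory footprint will depend on your setup.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute for the matrix multiplications
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=quant_config,
    device_map="auto",
)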
Logging.txt | Logging
As reinforcement learning algorithms are historically challenging to debug, it’s important to pay careful attention to logging. By default, the TRL PPOTrainer saves a lot of relevant information to wandb or tensorboard. Upon initialization, pass one of these two options to the PPOConfig :
training_args = PPOConfig(..., report_to="wandb")  # or "tensorboard"
If you want to log with tensorboard, add the kwarg project_kwargs={"logging_dir": PATH_TO_LOGS} to the PPOConfig. PPO Logging Here’s a brief explanation for the logged metrics provided in the data: Key metrics to monitor. We want to maximize the reward, maintain a low KL divergence, and maximize entropy: env/reward_mean : The average reward obtained from the environment. Alias ppo/mean_scores , which is used to specifically monitor the reward model. env/reward_std : The standard deviation of the reward obtained from the environment. Alias ppo/std_scores , which is used to specifically monitor the reward model. env/reward_dist : The histogram distribution of the reward obtained from the environment. objective/kl : The mean Kullback-Leibler (KL) divergence between the old and new policies. It measures how much the new policy deviates from the old policy. The KL divergence is used to compute the KL penalty in the objective function. objective/kl_dist : The histogram distribution of the objective/kl . objective/kl_coef : The coefficient for Kullback-Leibler (KL) divergence in the objective function. ppo/mean_non_score_reward : The KL penalty calculated by objective/kl * objective/kl_coef as the total reward for optimization to prevent the new policy from deviating too far from the old policy. objective/entropy : The entropy of the model’s policy, calculated by -logprobs.sum(-1).mean() . High entropy means the model’s actions are more random, which can be beneficial for exploration. Training stats: ppo/learning_rate : The learning rate for the PPO algorithm.
ppo/policy/entropy : The entropy of the model’s policy, calculated by pd = torch.nn.functional.softmax(logits, dim=-1); entropy = torch.logsumexp(logits, dim=-1) - torch.sum(pd * logits, dim=-1) . It measures the randomness of the policy. ppo/policy/clipfrac : The fraction of probability ratios (old policy / new policy) that fell outside the clipping range in the PPO objective. This can be used to monitor the optimization process. ppo/policy/approxkl : The approximate KL divergence between the old and new policies, measured by 0.5 * masked_mean((logprobs - old_logprobs) ** 2, mask) , corresponding to the k2 estimator in http://joschu.net/blog/kl-approx.html ppo/policy/policykl : Similar to ppo/policy/approxkl , but measured by masked_mean(old_logprobs - logprobs, mask) , corresponding to the k1 estimator in http://joschu.net/blog/kl-approx.html ppo/policy/ratio : The histogram distribution of the ratio between the new and old policies, used to compute the PPO objective. ppo/policy/advantages_mean : The average of the GAE (Generalized Advantage Estimation) advantage estimates. The advantage function measures how much better an action is compared to the average action at a state. ppo/policy/advantages : The histogram distribution of ppo/policy/advantages_mean . ppo/returns/mean : The mean of the TD(λ) returns, calculated by returns = advantage + values , another indicator of model performance. See https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/ for more details. ppo/returns/var : The variance of the TD(λ) returns, calculated by returns = advantage + values , another indicator of model performance. ppo/val/mean : The mean of the values, used to monitor the value function’s performance. ppo/val/var : The variance of the values, used to monitor the value function’s performance. ppo/val/var_explained : The explained variance for the value function, used to monitor the value function’s performance. ppo/val/clipfrac : The fraction of the value function’s predicted values that are clipped. ppo/val/vpred : The predicted values from the value function. ppo/val/error : The mean squared error between the ppo/val/vpred and returns, used to monitor the value function’s performance. ppo/loss/policy : The policy loss for the Proximal Policy Optimization (PPO) algorithm. ppo/loss/value : The loss for the value function in the PPO algorithm. This value quantifies how well the function estimates the expected future rewards. ppo/loss/total : The total loss for the PPO algorithm. It is the sum of the policy loss and the value function loss. Stats on queries, responses, and logprobs: tokens/queries_len_mean : The average length of the queries tokens. tokens/queries_len_std : The standard deviation of the length of the queries tokens. tokens/queries_dist : The histogram distribution of the length of the queries tokens. tokens/responses_len_mean : The average length of the responses tokens. tokens/responses_len_std : The standard deviation of the length of the responses tokens. tokens/responses_dist : The histogram distribution of the length of the responses tokens. (Costa: inconsistent naming, should be tokens/responses_len_dist ) objective/logprobs : The histogram distribution of the log probabilities of the actions taken by the model. objective/ref_logprobs : The histogram distribution of the log probabilities of the actions taken by the reference model. 
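To make the two KL estimators quoted above (ppo/policy/approxkl and ppo/policy/policykl) concrete, here is a small self-contained sketch, not TRL's internal code, that computes them from per-token log-probabilities using the k2 and k1 formulas listed in this glossary:
import torch

def masked_mean(values, mask):
    # Mean over the positions where mask == 1 (e.g. response tokens only)
    return (values * mask).sum() / mask.sum()

# Hypothetical per-token log-probs under the new and old policies
logprobs = torch.tensor([-1.20, -0.70, -2.30, -0.51])
old_logprobs = torch.tensor([-1.10, -0.75, -2.10, -0.60])
mask = torch.ones_like(logprobs)

approxkl = 0.5 * masked_mean((logprobs - old_logprobs) ** 2, mask)  # k2 estimator (ppo/policy/approxkl)
policykl = masked_mean(old_logprobs - logprobs, mask)               # k1 estimator (ppo/policy/policykl)
print(approxkl.item(), policykl.item())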
Crucial values During training, many values are logged, here are the most important ones: env/reward_mean , env/reward_std , env/reward_dist : the properties of the reward distribution from the “environment” / reward model ppo/mean_non_score_reward : The mean negated KL penalty during training (shows the delta between the reference model and the new policy over the batch in the step) Here are some parameters that are useful to monitor for stability (when these diverge or collapse to 0, try tuning variables): ppo/loss/value : it will spike / NaN when not going well. ppo/policy/ratio : ratio being 1 is a baseline value, meaning that the probability of sampling a token is the same under the new and old policy. If the ratio is too high like 200, it means the probability of sampling a token is 200 times higher under the new policy than the old policy. This is a sign that the new policy is too different from the old policy, which will likely cause overoptimization and collapse training later on. ppo/policy/clipfrac and ppo/policy/approxkl : if ratio is too high, the ratio is going to get clipped, resulting in high clipfrac and high approxkl as well. objective/kl : it should stay positive so that the policy is not too far away from the reference policy. objective/kl_coef : The target coefficient with AdaptiveKLController . Often increases before numerical instabilities. |
Uploading_models.txt | Uploading models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Uploading models Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Uploading models To upload models to the Hub, you’ll need to create an account at Hugging Face . Models on the Hub are Git-based repositories , which give you versioning, branches, discoverability and sharing features, integration with dozens of libraries, and more! You have control over what you want to upload to your repository, which could include checkpoints, configs, and any other files. You can link repositories with an individual user, such as osanseviero/fashion_brands_patterns , or with an organization, such as facebook/bart-large-xsum . Organizations can collect models related to a company, community, or library! If you choose an organization, the model will be featured on the organization’s page, and every member of the organization will have the ability to contribute to the repository. You can create a new organization here . NOTE: Models do NOT need to be compatible with the Transformers/Diffusers libraries to get download metrics. Any custom model is supported. Read more below! There are several ways to upload models for them to be nicely integrated into the Hub and get download metrics , described below. In case your model is designed for a library that has built-in support , you can use the methods provided by the library. Custom models that use trust_remote_code=True can also leverage these methods. In case your model is a custom PyTorch model, one can leverage the PyTorchModelHubMixin class as it allows to add from_pretrained , push_to_hub to any nn.Module class, just like models in the Transformers, Diffusers and Timm libraries. 
In addition to programmatic uploads, you can always use the web interface or the git command line . Once your model is uploaded, we suggest adding a Model Card to your repo to document your model and make it more discoverable. Example [repository](https://huggingface.co/LiheYoung/depth_anything_vitl14) that leverages [PyTorchModelHubMixin](#upload-a-pytorch-model-using-huggingfacehub). Downloads are shown on the right. Using the web interface To create a brand new model repository, visit huggingface.co/new . Then follow these steps: In the “Files and versions” tab, select “Add File” and specify “Upload File”: From there, select a file from your computer to upload and leave a helpful commit message to know what you are uploading: Afterwards, click Commit changes to upload your model to the Hub! Inspect files and history You can check your repository with all the recently added files! The UI allows you to explore the model files and commits and to see the diff introduced by each commit: Add metadata You can add metadata to your model card. You can specify: the type of task this model is for, enabling widgets and the Inference API. the used library ( transformers , spaCy , etc.) the language the dataset metrics license a lot more! Read more about model tags here . Add TensorBoard traces Any repository that contains TensorBoard traces (filenames that contain tfevents ) is categorized with the TensorBoard tag . As a convention, we suggest that you save traces under the runs/ subfolder. The “Training metrics” tab then makes it easy to review charts of the logged variables, like the loss or the accuracy. Models trained with 🤗 Transformers will generate TensorBoard traces by default if tensorboard is installed. Upload from a library with built-in support First check if your model is from a library that has built-in support to push to/load from the Hub, like Transformers, Diffusers, Timm, Asteroid, etc.: https://huggingface.co/docs/hub/models-libraries . Below we’ll show how easy this is for a library like Transformers: Copied from transformers import BertConfig, BertModel config = BertConfig() model = BertModel(config) model.push_to_hub( "nielsr/my-awesome-bert-model" ) # reload model = BertModel.from_pretrained( "nielsr/my-awesome-bert-model" ) Some libraries, like Transformers, support loading code from the Hub . This is a way to make your model work with Transformers using the trust_remote_code=True flag. You may want to consider this option instead of a full-fledged library integration. Upload a PyTorch model using huggingface_hub In case your model is a (custom) PyTorch model, you can leverage the PyTorchModelHubMixin class available in the huggingface_hub Python library. It is a minimal class which adds from_pretrained and push_to_hub capabilities to any nn.Module , along with download metrics. 
Here is how to use it (assuming you have run pip install huggingface_hub ): Copied import torch import torch.nn as nn from huggingface_hub import PyTorchModelHubMixin class MyModel ( nn.Module, PyTorchModelHubMixin, # optionally, you can add metadata which gets pushed to the model card repo_url= "your-repo-url" , pipeline_tag= "text-to-image" , license= "mit" , ): def __init__ ( self, num_channels: int , hidden_size: int , num_classes: int ): super ().__init__() self.param = nn.Parameter(torch.rand(num_channels, hidden_size)) self.linear = nn.Linear(hidden_size, num_classes) def forward ( self, x ): return self.linear(x + self.param) # create model config = { "num_channels" : 3 , "hidden_size" : 32 , "num_classes" : 10 } model = MyModel(**config) # save locally model.save_pretrained( "my-awesome-model" ) # push to the hub model.push_to_hub( "your-hf-username/my-awesome-model" ) # reload model = MyModel.from_pretrained( "your-hf-username/my-awesome-model" ) As you can see, the only requirement is that your model inherits from PyTorchModelHubMixin . All instance attributes will be automatically serialized to a config.json file. Note that the init method can only take arguments which are JSON serializable. Python dataclasses are supported. This comes with automated download metrics, meaning that you’ll be able to see how many times the model is downloaded, the same way they are available for models integrated natively in the Transformers, Diffusers or Timm libraries. With this mixin class, each separate checkpoint is stored on the Hub in a single repository consisting of 2 files: a pytorch_model.bin or model.safetensors file containing the weights a config.json file which is a serialized version of the model configuration. This class is used for counting download metrics: everytime a user calls from_pretrained to load a config.json , the count goes up by one. See this guide regarding automated download metrics. It’s recommended to add a model card to each checkpoint so that people can read what the model is about, have a link to the paper, etc. Visit the huggingface_hub’s documentation to learn more. Alternatively, one can also simply programmatically upload files or folders to the hub: https://huggingface.co/docs/huggingface_hub/guides/upload . Using Git Finally, since model repos are just Git repositories, you can also use Git to push your model files to the Hub. Follow the guide on Getting Started with Repositories to learn about using the git CLI to commit and push your models. < > Update on GitHub ← Gated Models Downloading Models → Uploading models Using the web interface Upload from a library with built-in support Upload a Py Torch model using huggingface_hub Using Git |
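If you prefer to stay in Python rather than use the git CLI, the huggingface_hub client mentioned above can also push individual files or whole folders programmatically. A minimal sketch (the repository id and folder path are placeholders to replace with your own):
from huggingface_hub import HfApi

api = HfApi()
# Create the repository if it does not exist yet (no-op otherwise)
api.create_repo(repo_id="your-hf-username/my-awesome-model", exist_ok=True)
# Upload a local folder of checkpoints and configs to the model repo
api.upload_folder(
    folder_path="path/to/local/checkpoint",
    repo_id="your-hf-username/my-awesome-model",
    repo_type="model",
)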
Interface__ImageClassificationOutputValue.txt | Interface: ImageClassificationOutputValue
Properties
label • label : string The label for the class (model specific) Defined in inference/src/tasks/cv/imageClassification.ts:16
score • score : number A float that represents how likely it is that the image file belongs to this class. Defined in inference/src/tasks/cv/imageClassification.ts:20 |
Text_to_Image.txt | Text to Image Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Text to Image api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Text to Image Generate an image based on a given text prompt. For more details about the text-to-image task, check out its dedicated page ! You will find examples and related materials. Recommended models black-forest-labs/FLUX.1-dev : One of the most powerful image generation models that can generate realistic outputs. stabilityai/stable-diffusion-3-medium-diffusers : A powerful text-to-image model. Explore all available models and find the one that suits you best here . Using the API Python JavaScript cURL Using huggingface_hub : Copied from huggingface_hub import InferenceClient client = InferenceClient( "black-forest-labs/FLUX.1-dev" , token= "hf_***" ) # output is a PIL.Image object image = client.text_to_image( "Astronaut riding a horse" ) Using requests : Copied import requests API_URL = "https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-dev" headers = { "Authorization" : "Bearer hf_***" } def query ( payload ): response = requests.post(API_URL, headers=headers, json=payload) return response.content image_bytes = query({ "inputs" : "Astronaut riding a horse" , }) # You can access the image with PIL.Image for example import io from PIL import Image image = Image. open (io.BytesIO(image_bytes)) To use the Python client, see huggingface_hub ’s package reference . API specification Request Payload inputs* string The input text data (sometimes called “prompt”) parameters object guidance_scale number A higher guidance scale value encourages the model to generate images closely linked to the text prompt, but values too high may cause saturation and other artifacts. negative_prompt string[] One or several prompt to guide what NOT to include in image generation. num_inference_steps integer The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. target_size object The size in pixel of the output image width* integer height* integer scheduler string Override the scheduler with a compatible one. seed integer Seed for the random number generator. 
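To illustrate how these request parameters are passed in practice, here is a sketch extending the requests example above; the parameter values are arbitrary placeholders and the token is a dummy:
import io
import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-dev"
headers = {"Authorization": "Bearer hf_***"}

payload = {
    "inputs": "Astronaut riding a horse",
    "parameters": {
        "guidance_scale": 3.5,         # how closely to follow the prompt
        "negative_prompt": ["blurry", "low quality"],
        "num_inference_steps": 30,     # more steps: higher quality, slower inference
        "seed": 42,                    # reproducible generations
    },
}
response = requests.post(API_URL, headers=headers, json=payload)
image = Image.open(io.BytesIO(response.content))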
Some options can be configured by passing headers to the Inference API. Here are the available headers:
Headers
authorization (string): Authentication header in the form 'Bearer hf_****' where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
x-use-cache (boolean, default to true): There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here.
x-wait-for-model (boolean, default to false): If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here.
For more information about Inference API headers, check out the parameters guide.
Response
Body image (unknown): The generated image returned as raw bytes in the payload. |
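To show how these headers are combined with a request in practice, here is a minimal sketch that disables the cache and waits for a cold model instead of failing with a 503 (header values are sent as strings, and the token is a placeholder):
import requests

API_URL = "https://api-inference.huggingface.co/models/black-forest-labs/FLUX.1-dev"
headers = {
    "Authorization": "Bearer hf_***",
    "x-use-cache": "false",        # force a fresh generation instead of a cached result
    "x-wait-for-model": "true",    # wait for the model to load rather than returning 503
}
response = requests.post(API_URL, headers=headers, json={"inputs": "Astronaut riding a horse"})
image_bytes = response.content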
Polytropon.txt | Polytropon Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Polytropon PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Polytropon Polytropon is a multitask model with a number of different LoRA adapters in it’s “inventory”. The model learns the correct combination of adapters from the inventory with a routing function to choose the best subset of modules for a specific task. PEFT also supports Multi-Head Adapter Routing (MHR) for Polytropon which builds on and improves the routing function by combining the adapter heads more granularly. The adapter heads are separated into disjoint blocks and a different routing function is learned for each one, allowing for more expressivity. Combining Modular Skills in Multitask Learning Multi-Head Adapter Routing for Cross-Task Generalization The abstract from the paper is: A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks. In this work, we assume that each task is associated with a subset of latent discrete skills from a (potentially small) inventory. In turn, skills correspond to parameter-efficient (sparse / low-rank) model parameterisations. By jointly learning these and a task-skill allocation matrix, the network for each task is instantiated as the average of the parameters of active skills. To favour non-trivial soft partitions of skills across tasks, we experiment with a series of inductive biases, such as an Indian Buffet Process prior and a two-speed learning rate. We evaluate our latent-skill model on two main settings: 1) multitask reinforcement learning for grounded instruction following on 8 levels of the BabyAI platform; and 2) few-shot adaptation of pre-trained text-to-text generative models on CrossFit, a benchmark comprising 160 NLP tasks. 
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning, compared to baselines with fully shared, task-specific, or conditionally generated parameters where knowledge is entangled across tasks. In addition, we show how discrete skills help interpretability, as they yield an explicit hierarchy of tasks. PolyConfig class peft. PolyConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False r : int = 8 target_modules : Optional[Union[list[str], str]] = None exclude_modules : Optional[Union[list[str], str]] = None modules_to_save : Optional[list[str]] = None init_weights : bool = True poly_type : Literal['poly'] = 'poly' n_tasks : int = 1 n_skills : int = 4 n_splits : int = 1 ) Parameters r ( int ) — Attention dimension of each Lora in Poly. target_modules ( Union[List[str],str] ) — The names of the modules to apply Poly to. exclude_modules ( Optional[Union[List[str], str]] ) — The names of the modules to not apply the adapter. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. modules_to_save ( List[str] ) — List of modules apart from Poly layers to be set as trainable and saved in the final checkpoint. init_weights (bool) — Whether to perform initialization of Poly weights. poly_type ( Literal["poly"] ) — The variant of the Poly module to use. Currently, only “poly” is supported. n_tasks ( int ) — The number of tasks in a multitasking scenario. n_skills ( int ) — The number of skills (LoRA) in each Poly layer. n_splits ( int ) — The number of splits within each LoRA of a Poly layer. A value greater than 1 indicates the use of Multi-Head Routing (MHR). This is the configuration class to store the configuration of a PolyModel . Polytropon (Poly) Multi-Head Routing (MHR) PolyModel class peft. PolyModel < source > ( model config adapter_name ) < > Update on GitHub ← BOFT P-tuning → Polytropon Poly Config Poly Model |
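To make the PolyConfig described above concrete, here is a minimal sketch of wrapping a base model with Poly via get_peft_model. The base checkpoint and the hyperparameter values are arbitrary placeholders; also note that, as far as we understand the Poly design, training batches are expected to carry a task_ids tensor so the router knows which task each example belongs to.
from transformers import AutoModelForSeq2SeqLM
from peft import PolyConfig, get_peft_model, TaskType

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

poly_config = PolyConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,          # rank of each LoRA in the inventory
    n_tasks=8,    # number of tasks in the multitask mixture
    n_skills=4,   # number of LoRA "skills" per Poly layer
    n_splits=4,   # a value > 1 enables Multi-Head Routing (MHR)
)

model = get_peft_model(base_model, poly_config)
model.print_trainable_parameters()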
TGI_v3_overview.txt | TGI v3 overview
Summary
Performance leap: TGI processes 3x more tokens, 13x faster than vLLM on long prompts. Zero config!
3x more tokens. By reducing our memory footprint, we’re able to ingest many more tokens and more dynamically than before. A single L4 (24GB) can handle 30k tokens on llama 3.1-8B, while vLLM gets barely 10k. A lot of work went into reducing the footprint of the runtime, and its effects are best seen in smaller, constrained environments.
13x faster. On long prompts (200k+ tokens) conversation replies take 27.5s in vLLM, while they take only 2s in TGI. How so? We keep the initial conversation around, so when a new reply comes in, we can answer almost instantly. The overhead of the lookup is ~5us. Thanks @Daniël de Kok for the beast data structure.
Zero config. That’s it. Remove all the flags you are using and you’re likely to get the best performance. By evaluating the hardware and model, TGI carefully selects automatic values to give the best performance. In production, we don’t have any flags anymore in our deployments. We kept all existing flags around; they may come in handy in niche scenarios.
Benchmarks
Methodology
To ensure accurate and reliable results, we employed a robust benchmarking protocol that addresses common pitfalls in performance evaluation. Specifically: Consistent Code : We used the same codebase to run against different engines, ensuring that any performance differences are attributable to the LLM engine itself, rather than variations in the testing framework.
Request-Based Measurement : Instead of measuring Requests Per Second (RPS) by sending as many requests as possible, we opted for a more consistent approach, sending a fixed number of requests and measuring the time it takes for the server to complete all of them. This method avoids boundary effects and provides a more accurate representation of performance. Realistic Combinations : We selected realistic combinations of LLMs and hardware configurations so we used 8xH100 for a 70B, not a 8B, which would be a waste of money. Realistic scenarios We benchmarked engines with prefix caching on, so we are reporting the results of the 2nd run, not the first one. During the first run of a benchmark, every request is new, so prefix caching is not working, masking the real world benefits of using it. Note: Boundary effect is when the benchmarks are flaky because their results depend on fine details of the engine being benchmarked. For instance, a system ingesting a constant 10RPS, but receiving in the benchmark a single final request at -0.1s before the end of the benchmark, and that single request takes a full 10s to process. Then a benchmark taking 30s would measure 7.5RPS instead of the expected 10, because that single query isn’t being parallelized with others. Another very slightly slower engine would receive that request at +0.1s which would get discarded by the benchmark and therefore measure the slower system as being faster. For more details on benchmarking in general we recommend the documentation of k6: https://grafana.com/docs/k6/latest/ . Scenarios We selected a handful of scenarios to simplify the picture, they seem to accurately reflect a larger trend. Small scenario : This scenario consists of the first 200 requests from the orca datasets being prompted to the model. The 200 requests total 8k tokens together and are representative of conversation starters. Prefix caching has very limited impact in that scenario and we feel it’s a relatively balanced benchmark for simple use cases. Long scenario : This scenario consists of 20 requests totalling 200k prompt tokens which are essentially asking for summaries of large chunks for text. In practical scenarios this is really useful when you are feeding large chunks of code, large chunks of business data or documents repeatedly and ask simple questions about them (summarization, classification, or where to find some data). This scenario is the one closest to what a lot of professional use cases seem to be doing by including a lot of information in the prompt itself. Those very long conversations are the ones that benefit the most for our recent changes since we are enable ever larger prompts and ever faster caching. Hardware L4 : This is a single L4 (24GB) which represents small or even home compute capabilities. We tested meta-llama/Meta-Llama-3.1-8B-Instruct on it. 4xL4 : This is a more beefy deployment usually used for either very large requests deployments for 8B models (the ones under test) or it can also easily handle all 30GB models. For this benchmark we tested meta-llama/Meta-Llama-3.1-8B-Instruct 8xH100 This is one of the beefiest deployments possible. We tested meta-llama/Meta-Llama-3.1-70B-Instruct as it’s the most representative models of this size. Llama 3.3 wasn’t released at the time of benchmarking (it’s the exact same model so it doesn’t make any difference). 
Replicating the results
The commands to run the benchmarks are as follows:
Prepare the datasets:
cd text-generation-inference/load_tests
make prepare_orca
python long.py
Launch the engine:
TGI: text-generation-launcher --model-id $MODEL_ID --num-shard $N --port 8000 (or docker variant)
vLLM: vllm serve $MODEL_ID --tensor-parallel $N --enable-prefix-caching (or docker variant)
Start scenario:
Small: MODEL_ID=$MODEL_ID HOST=localhost:8000 k6 run load_tests/common.js
Long: MODEL_ID=$MODEL_ID HOST=localhost:8000 k6 run load_tests/long.js
Results
Our benchmarking results show significant performance gains, with a 13x speedup over vLLM with prefix caching, and up to 30x speedup without prefix caching. These results are consistent with our production data and demonstrate the effectiveness of our optimized LLM architecture.
Raw results
2nd run | TGI v3 (time in s) | vLLM (s) | Amount of req
Llama 3.1 8b Small test - L4 - 8B | 17.5 | 19.9 | 200
Llama 3.1 8b Long test* - L4 - 8B | 53 | 57 | 10
Llama 3.1 8b Small test - 4xL4 - 8B | 4.8 | 6 | 200
Llama 3.1 8b Long test - 4xL4 - 8B | 3.2 | 12.5 | 20
Llama 3.1 70b Small test - 8XH100 - 70B | 6.2 | 7.4 | 200
Llama 3.1 70b Long test - 8H100 - 70B | 2 | 27.5 | 20
1st run | TGI (s) | vLLM (s) | Amount of req
Llama 3.1 8b Small test - L4 | 19.9 | 19.9 | 200
Llama 3.1 8b Long test (10) - L4 | 49.8 | 55 | 10
Llama 3.1 8b Small test - 4xL4 | 13 | 12.6 | 200
Llama 3.1 8b Long test - 4xL4 | 47 | 50.3 | 20
Llama 3.1 70b Small test - 8XH100 | 7.5 | 7.6 | 200
Llama 3.1 70b Long test - 8H100 | 12.1 | 28.3 | 20
Caveats and Limitations
While our results are promising, there are some caveats to consider:
Constrained kv-cache : If a deployment lacks kv-cache space, that means that many queries will require the same slots of kv-cache, leading to contention in the kv-cache. You can limit that effect by limiting --max-total-tokens to reduce individual queries impact. You can also use more GPUs or larger GPUs in order to increase the size of the kv-cache.
Replication : In scenarios where multiple replicas are behind a single endpoint, there's no reason for every query from a particular user to hit the same replica, therefore the cache will not be present, meaning no speed benefit. You can use sticky sessions load balancing to force every user to send their requests on the same replica. Do not apply this blindly, it's possible this may not be necessary at all.
Technical Insights
Our performance gains can be attributed to several key factors:
New Kernels : Our custom kernels, including flashinfer and flashdecoding , offer improved performance at large prompt lengths and enable more efficient scheduling.
Prefix Caching : Our optimized prefix caching structure allows for fast query matching, even for long prompts. The overhead is roughly 6us.
Chunking Code : Our chunking code enables finer control over compute resources, ensuring optimal performance and reduced VRAM usage.
Kernel Optimizations : We've implemented various other kernel optimizations, including better kernel selection. Notably we've implemented several small kernels involved in the queries bookkeeping which are particularly efficient on small models. Every kernel launch has an overhead of several milliseconds so fusing them together increases a lot performance when this bookkeeping is important relative to the raw model calculations. This happens typically on oversized compute for a particular model and particularly small models.
VRAM efficiency : In the realm of very large requests (100k+ tokens) there are a lot of places which start becoming big memory consumers.
We’ve hunted the biggest ones and found ways to reduce, reuse, or delete them. The biggest culprit is probably the logits calculation. Logits for llama 3.1-8b take 25.6GB (= 100k tokens × 128k vocabulary × 2 bytes for f16), which is more than the full model itself (16GB). The thing is that in general we do not need the logits for every prompt token, so we simply removed them and stopped exposing them to users by default. We think this is ok since they are mostly used by researchers. You can enable your deployments to have them again by using the --enable-prefill-logprobs flag, but you will experience a reduced maximum prompt size in tokens.
Future Directions
While we’ve made significant progress, there are still opportunities for improvement: Special models : All LLMs come with the aforementioned improvements. Some specific sets of features might not (some quantizations, speculation or VLMs for instance are harder to optimize for with the same level of detail). KV-Cache Long-Term Retention : Addressing KV-cache long-term retention is a challenge. There are several solutions envisioned, such as shared KV-cache solutions (like redis or memcached) or innovative storage approaches. It is an area of ongoing research for us. Multimodal models : We are also investigating quite a lot of other kinds of models, like audio-to-audio, image/video generation, and other hybrids, where we see a lot of potential for applying the same principles we’ve applied in TGI to maximize performance. By sharing our benchmarking methodology, results, and technical insights, we aim to contribute to the ongoing development of more efficient and effective LLMs. |
Image_to_Image.txt | Image to Image Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Image to Image api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Image to Image Image-to-image is the task of transforming a source image to match the characteristics of a target image or a target image domain. Example applications: Transferring the style of an image to another image Colorizing a black and white image Increasing the resolution of an image For more details about the image-to-image task, check out its dedicated page ! You will find examples and related materials. Recommended models timbrooks/instruct-pix2pix : A model that takes an image and an instruction to edit the image. Explore all available models and find the one that suits you best here . Using the API No snippet available for this task. API specification Request Payload inputs* string The input image data as a base64-encoded string. If no parameters are provided, you can also provide the image data as a raw bytes payload. parameters object guidance_scale number For diffusion models. A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. negative_prompt string[] One or several prompt to guide what NOT to include in image generation. num_inference_steps integer For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. target_size object The size in pixel of the output image. width* integer height* integer Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). 
However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here . x-wait-for-model boolean, default to false If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here . For more information about Inference API headers, check out the parameters guide . Response Body image unknown The output image returned as raw bytes in the payload. < > Update on GitHub ← Image Segmentation Image-Text to Text → Image to Image Recommended models Using the API AP I specification Request Response |
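Although the page above notes that no code snippet is available for the image-to-image task, a rough sketch of calling it from Python could look like the following. This assumes that huggingface_hub's InferenceClient.image_to_image helper and the timbrooks/instruct-pix2pix model behave as described in this document; treat it as illustrative rather than an official example. The local file path and the token are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient("timbrooks/instruct-pix2pix", token="hf_***")

# "cat.png" is a placeholder path to a local source image; the prompt carries the edit instruction
edited = client.image_to_image("cat.png", prompt="turn the cat into a tiger")
edited.save("tiger.png")  # the client returns a PIL.Image object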
Billing.txt | Billing Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Billing Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Billing At Hugging Face, we build a collaboration platform for the ML community (i.e., the Hub) and monetize by providing advanced features and simple access to compute for AI. Any feedback or support request related to billing is welcome at [email protected] Enterprise Hub subscriptions We offer advanced security and compliance features for organizations through our Enterprise Hub subscription, including Single Sign-On , Advanced Access Control for repositories, control over your data location, higher storage capacity for private repositories, and more. The Enterprise Hub is billed like a typical subscription. It renews automatically, but you can choose to cancel it at any time in the organization’s billing settings. You can pay for the Enterprise Hub subscription with a credit card or your AWS account. Upon renewal, the number of seats in your Enterprise Hub subscription will be updated to match the number of members of your organization. Private repository storage above the included storage will be billed along with your subscription renewal. PRO subscription The PRO subscription unlocks additional features for users, including: Higher free tier for the Serverless Inference API and when consuming ZeroGPU Spaces Higher storage capacity for private repositories Ability to create ZeroGPU Spaces and use Dev Mode Ability to write Social Posts and Community Blogs Leverage the Dataset Viewer on private datasets View the full list of benefits at https://huggingface.co/subscribe/pro Similarly to the Enterprise Hub subscription, PRO subscriptions are billed like a typical subscription. 
The subscription renews automatically for you. You can choose to cancel the subscription at any time in your billing settings: https://huggingface.co/settings/billing You can only pay for the PRO subscription with a credit card. The subscription is billed separately from any pay-as-you-go compute usage. Private repository storage above the included storage will be billed along with your subscription renewal. Note: PRO benefits are also included in the Enterprise Hub subscription. Pay-as-you-go private storage Above the included 1TB (or 1TB per seat) of private storage in PRO and Enterprise Hub, private storage is invoiced at $25/TB/month, in 1TB increments. It is billed with the renewal invoices of your PRO or Enterprise Hub subscription. Compute Services on the Hub We also directly provide compute services with Spaces , Inference Endpoints and the Serverless Inference API . While most of our compute services have a comprehensive free tier, users and organizations can pay to access more powerful hardware accelerators. The billing for our compute services is usage-based, meaning you only pay for what you use. You can monitor your usage at any time from your billing dashboard, located in your user’s or organization’s settings menu. Compute services usage is billed separately from PRO and Enterprise Hub subscriptions (and any private storage). Invoices for compute services are issued at the beginning of each month. Available payment methods Hugging Face uses Stripe to securely process your payment information. Credit cards are the only payment method supported for Hugging Face compute services. You can add a credit card to your account from your billing settings. Billing thresholds & Invoicing When using a credit card as a payment method, you’ll be billed for your Hugging Face compute usage each time the accrued usage goes above a billing threshold for your user or organization. On the 1st of every month, Hugging Face issues an invoice for usage accrued during the prior month. Any usage that has yet to be charged will be charged at that time. For example, if your billing threshold is set at $100.00 and you incur $254.00 of usage during a given month, your credit card will be charged a total of three times during the month: Once for usage between $0 and $100: $100 Once for usage between $100 and $200: $100 Once at the end of the month for the remaining $54: $54 Note: this will be detailed in your monthly invoice. You can view invoices and receipts for the last 3 months in your billing dashboard. Cloud providers partnerships We partner with cloud providers like AWS , Azure , and Google Cloud to make it easy for customers to use Hugging Face directly in their cloud of choice. These solutions and their usage are billed directly by the cloud provider. Ultimately, we want people to have great options for using Hugging Face wherever they build ML-powered products. You also have the option to link your Hugging Face organization to your AWS account via AWS Marketplace . Hugging Face compute service usage will then be included in your AWS bill. Read more in our blog post . Support FAQ Q. Why did I get charged $10 when I added my credit card? When will I get this back? A. This amount is not charged and the hold should clear within a few business days. If you have more questions about the status of the hold, you can contact your bank for more information. Q. My card was declined after adding it to my account. What’s up? A.
Please ensure the card supports 3D-Secure authentication and is properly configured for recurring online payments. We do not yet support credit cards issued in India, as we’re working on making our system compliant with the latest RBI directives. Until we add support for Indian credit cards, you can: Link an organization account to an AWS account in order to access pay-as-you-go features (Endpoints, Spaces, AutoTrain): Hugging Face Platform on the AWS Marketplace: Pay with your AWS Account Use a credit card issued in another country Q. When am I going to get my invoice for pay-as-you-go services? A. We bill in arrears and issue invoices for the prior month’s usage, typically on the first of the month. So if you incurred billing usage in January, you’ll see the final payment processed and the invoice issued on February 1st. Q. Why did you charge me multiple times during the month? A. If you’re a new HF account using our premium pay-as-you-go services, we’ll process a few billing threshold payments. Don’t worry, you’ll get an invoice for the total usage incurred for the month at the end of the billing period, and it will include these processed threshold payments. For more information see https://huggingface.co/docs/hub/billing#billing-thresholds—invoicing . Q. I need copies of my past invoices, where can I find these? A. You can access up to the previous 3 months from the current month in your billing settings: https://huggingface.co/settings/billing . Click on the “End-of-period Invoice” link under that month’s “Payments & Invoices” and you’ll be able to download the invoice and the receipt. As an example, if it’s currently January, you’ll be able to access the previous months’ invoices: December, November, and October. You can also check your email, as we’ll send a copy of the invoice / receipt to the email address on the account. Q. I need to update my credit card in my account. What should I do? A. Head to https://huggingface.co/settings/billing/payment and update your payment method at any time. Q. Oh no! My payment failed, what do I do to avoid a service interruption? A. You can pay your bill with another payment method by clicking on the “pay online” link in the unpaid invoice. Click on the “End-of-period Invoice” link under that month’s “Payments & Invoices” and you’ll be able to pay online. You can also update your credit card at https://huggingface.co/settings/billing/payment . Subscriptions Q. I need to pause my PRO subscription for a bit, where can I do this? A. You can cancel your subscription at any time here: https://huggingface.co/settings/billing/subscription . Drop us a line at [email protected] with your feedback. Q. My org has an Enterprise Hub subscription and I need to update the number of seats. How can I do this? A. The number of seats will automatically be adjusted at the time of the subscription renewal to reflect any increase in the number of members in the organization during the previous period. There’s no need to update the subscribed number of seats during the month or year, as it’s a flat-fee subscription. |
Create_a_dataset_card.txt | Create a dataset card Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Create a dataset card Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Create a dataset card Each dataset should have a dataset card to promote responsible usage and inform users of any potential biases within the dataset. This idea was inspired by the Model Cards proposed by Mitchell, 2018 . Dataset cards help users understand a dataset’s contents, the context for using the dataset, how it was created, and any other considerations a user should be aware of. Creating a dataset card is easy and can be done in just a few steps: Go to your dataset repository on the Hub and click on Create Dataset Card to create a new README.md file in your repository. Use the Metadata UI to select the tags that describe your dataset. You can add a license, language, pretty_name, the task_categories, size_categories, and any other tags that you think are relevant. These tags help users discover and find your dataset on the Hub. For a complete, but not required, set of tag options you can also look at the Dataset Card specifications . This’ll have a few more tag options like multilinguality and language_creators which are useful but not absolutely necessary. Click on the Import dataset card template link to automatically create a template with all the relevant fields to complete. 
Fill out the template sections to the best of your ability. Take a look at the Dataset Card Creation Guide for more detailed information about what to include in each section of the card. For fields you are unable to complete, you can write [More Information Needed] . Once you’re done, commit the changes to the README.md file and you’ll see the completed dataset card on your repository. YAML also allows you to customize the way your dataset is loaded by defining splits and/or configurations without the need to write any code. Feel free to take a look at the SNLI , CNN/DailyMail , and Allociné dataset cards as examples to help you get started. |
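As an illustration of the YAML metadata mentioned above, here is a hedged sketch of the block that sits at the top of a dataset repository's README.md. The tag values and file paths are placeholders and should be adapted to your dataset and repository layout.

---
license: cc-by-4.0
language:
- en
pretty_name: Example Reviews
task_categories:
- text-classification
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---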
ControlNet.txt | ControlNet Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation ControlNet Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. 
You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel ’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab #!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid from PIL import Image import cv2 import numpy as np original_image = load_image( "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" ) image = np.array(original_image) low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline . Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler import torch controlnet = ControlNetModel.from_pretrained( "lllyasviel/sd-controlnet-canny" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionControlNetPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , controlnet=controlnet, torch_dtype=torch.float16, use_safetensors= True ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( "the mona lisa" , image=canny_image ).images[ 0 ] make_image_grid([original_image, canny_image, output], rows= 1 , cols= 3 ) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. 
This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch import numpy as np from transformers import pipeline from diffusers.utils import load_image, make_image_grid image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" ) def get_depth_map ( image, depth_estimator ): image = depth_estimator(image)[ "depth" ] image = np.array(image) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) detected_map = torch.from_numpy(image). float () / 255.0 depth_map = detected_map.permute( 2 , 0 , 1 ) return depth_map depth_estimator = pipeline( "depth-estimation" ) depth_map = get_depth_map(image, depth_estimator).unsqueeze( 0 ).half().to( "cuda" ) Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline . Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler import torch controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11f1p_sd15_depth" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , controlnet=controlnet, torch_dtype=torch.float16, use_safetensors= True ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( "lego batman and robin" , image=image, control_image=depth_map, ).images[ 0 ] make_image_grid([image, output], rows= 1 , cols= 2 ) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" ) init_image = init_image.resize(( 512 , 512 )) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" ) mask_image = mask_image.resize(( 512 , 512 )) make_image_grid([init_image, mask_image], rows= 1 , cols= 2 ) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. 
Copied import numpy as np import torch def make_inpaint_condition ( image, image_mask ): image = np.array(image.convert( "RGB" )).astype(np.float32) / 255.0 image_mask = np.array(image_mask.convert( "L" )).astype(np.float32) / 255.0 assert image.shape[ 0 : 1 ] == image_mask.shape[ 0 : 1 ] image[image_mask > 0.5 ] = - 1.0 # set as masked pixel image = np.expand_dims(image, 0 ).transpose( 0 , 3 , 1 , 2 ) image = torch.from_numpy(image) return image control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline . Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11p_sd15_inpaint" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , controlnet=controlnet, torch_dtype=torch.float16, use_safetensors= True ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( "corgi face with large ears, detailed, pixar, animated, disney" , num_inference_steps= 20 , eta= 1.0 , image=init_image, mask_image=mask_image, control_image=control_image, ).images[ 0 ] make_image_grid([init_image, mask_image, output], rows= 1 , cols= 3 ) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do its best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel from diffusers.utils import load_image, make_image_grid import numpy as np import torch from PIL import Image import cv2 controlnet = ControlNetModel.from_pretrained( "lllyasviel/sd-controlnet-canny" , use_safetensors= True ) pipe = StableDiffusionControlNetPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , controlnet=controlnet, use_safetensors= True ).to( "cuda" ) original_image = load_image( "https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png" ) image = np.array(original_image) low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) canny_image = Image.fromarray(image) image = pipe( "" , image=canny_image, guess_mode= True , guidance_scale= 3.0 ).images[ 0 ] make_image_grid([original_image, canny_image, image], rows= 1 , cols= 3 ) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization ! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL from diffusers.utils import load_image, make_image_grid from PIL import Image import cv2 import numpy as np import torch original_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" ) image = np.array(original_image) low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) canny_image = Image.fromarray(image) make_image_grid([original_image, canny_image], rows= 1 , cols= 2 ) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline . You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( "diffusers/controlnet-canny-sdxl-1.0" , torch_dtype=torch.float16, use_safetensors= True ) vae = AutoencoderKL.from_pretrained( "madebyollin/sdxl-vae-fp16-fix" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors= True ) pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! 
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" negative_prompt = 'low quality, bad quality, sketches' image = pipe( prompt, negative_prompt=negative_prompt, image=canny_image, controlnet_conditioning_scale= 0.5 , ).images[ 0 ] make_image_grid([original_image, canny_image, image], rows= 1 , cols= 3 ) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True : Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL from diffusers.utils import load_image, make_image_grid import numpy as np import torch import cv2 from PIL import Image prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" negative_prompt = "low quality, bad quality, sketches" original_image = load_image( "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" ) controlnet = ControlNetModel.from_pretrained( "diffusers/controlnet-canny-sdxl-1.0" , torch_dtype=torch.float16, use_safetensors= True ) vae = AutoencoderKL.from_pretrained( "madebyollin/sdxl-vae-fp16-fix" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors= True ) pipe.enable_model_cpu_offload() image = np.array(original_image) image = cv2.Canny(image, 100 , 200 ) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) canny_image = Image.fromarray(image) image = pipe( prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale= 0.5 , image=canny_image, guess_mode= True , ).images[ 0 ] make_image_grid([original_image, canny_image, image], rows= 1 , cols= 3 ) You can use a refiner model with StableDiffusionXLControlNetPipeline to improve image quality, just like you can with a regular StableDiffusionXLPipeline . See the Refine image quality section to learn how to use the refiner model. Make sure to use StableDiffusionXLControlNetPipeline and pass image and controlnet_conditioning_scale . Copied base = StableDiffusionXLControlNetPipeline(...) image = base( prompt=prompt, controlnet_conditioning_scale= 0.5 , image=canny_image, num_inference_steps= 40 , denoising_end= 0.8 , output_type= "latent" , ).images # rest exactly as with StableDiffusionXLPipeline MultiControlNet Replace the SDXL model with a model like stable-diffusion-v1-5/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet . To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. 
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid from PIL import Image import numpy as np import cv2 original_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" ) image = np.array(original_image) low_threshold = 100 high_threshold = 200 image = cv2.Canny(image, low_threshold, high_threshold) # zero out middle columns of image where pose will be overlaid zero_start = image.shape[ 1 ] // 4 zero_end = zero_start + image.shape[ 1 ] // 2 image[:, zero_start:zero_end] = 0 image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) canny_image = Image.fromarray(image) make_image_grid([original_image, canny_image], rows= 1 , cols= 2 ) original image canny image For human pose estimation, install controlnet_aux : Copied # uncomment to install the necessary library in Colab #!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector openpose = OpenposeDetector.from_pretrained( "lllyasviel/ControlNet" ) original_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" ) openpose_image = openpose(original_image) make_image_grid([original_image, openpose_image], rows= 1 , cols= 2 ) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline . Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler import torch controlnets = [ ControlNetModel.from_pretrained( "thibaud/controlnet-openpose-sdxl-1.0" , torch_dtype=torch.float16 ), ControlNetModel.from_pretrained( "diffusers/controlnet-canny-sdxl-1.0" , torch_dtype=torch.float16, use_safetensors= True ), ] vae = AutoencoderKL.from_pretrained( "madebyollin/sdxl-vae-fp16-fix" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors= True ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" generator = torch.manual_seed( 1 ) images = [openpose_image.resize(( 1024 , 1024 )), canny_image.resize(( 1024 , 1024 ))] images = pipe( prompt, image=images, num_inference_steps= 25 , generator=generator, negative_prompt=negative_prompt, num_images_per_prompt= 3 , controlnet_conditioning_scale=[ 1.0 , 0.8 ], ).images make_image_grid([original_image, canny_image, openpose_image, images[ 0 ].resize(( 512 , 512 )), images[ 1 ].resize(( 512 , 512 )), images[ 2 ].resize(( 512 , 512 ))], rows= 2 , cols= 3 ) < > Update on GitHub ← PAG T2I-Adapter → Control Net Text-to-image Image-to-image Inpainting Guess mode Control Net with Stable Diffusion XL Multi Control Net |
Quick_Tour.txt | Quick tour This quick tour is intended for developers who are ready to dive into the code and see examples of how to integrate 🤗 Optimum into their model training and inference workflows. Accelerated inference OpenVINO To load a model and run inference with OpenVINO Runtime, you can just replace your AutoModelForXxx class with the corresponding OVModelForXxx class. If you want to load a PyTorch checkpoint, set export=True to convert your model to the OpenVINO IR (Intermediate Representation). Copied - from transformers import AutoModelForSequenceClassification + from optimum.intel.openvino import OVModelForSequenceClassification from transformers import AutoTokenizer, pipeline model_id = "distilbert-base-uncased-finetuned-sst-2-english" # Download a tokenizer and model from the Hub and convert to OpenVINO format tokenizer = AutoTokenizer.from_pretrained(model_id) - model = AutoModelForSequenceClassification.from_pretrained(model_id) + model = OVModelForSequenceClassification.from_pretrained(model_id, export=True) # Run inference! classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) results = classifier("He's a dreadful magician.") You can find more examples in the documentation and in the examples . ONNX Runtime To accelerate inference with ONNX Runtime, 🤗 Optimum uses configuration objects to define parameters for graph optimization and quantization. These objects are then used to instantiate dedicated optimizers and quantizers . Before applying quantization or optimization, we first need to load our model. To load a model and run inference with ONNX Runtime, you can just replace the canonical Transformers AutoModelForXxx class with the corresponding ORTModelForXxx class. If you want to load from a PyTorch checkpoint, set export=True to export your model to the ONNX format.
Copied >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english" >>> save_directory = "tmp/onnx/" >>> # Load a model from transformers and export it to ONNX >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export= True ) >>> # Save the ONNX model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) Let’s see now how we can apply dynamic quantization with ONNX Runtime: Copied >>> from optimum.onnxruntime.configuration import AutoQuantizationConfig >>> from optimum.onnxruntime import ORTQuantizer >>> # Define the quantization methodology >>> qconfig = AutoQuantizationConfig.arm64(is_static= False , per_channel= False ) >>> quantizer = ORTQuantizer.from_pretrained(ort_model) >>> # Apply dynamic quantization on the model >>> quantizer.quantize(save_dir=save_directory, quantization_config=qconfig) In this example, we’ve quantized a model from the Hugging Face Hub, in the same manner we can quantize a model hosted locally by providing the path to the directory containing the model weights. The result from applying the quantize() method is a model_quantized.onnx file that can be used to run inference. Here’s an example of how to load an ONNX Runtime model and generate predictions with it: Copied >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import pipeline, AutoTokenizer >>> model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name= "model_quantized.onnx" ) >>> tokenizer = AutoTokenizer.from_pretrained(save_directory) >>> classifier = pipeline( "text-classification" , model=model, tokenizer=tokenizer) >>> results = classifier( "I love burritos!" ) You can find more examples in the documentation and in the examples . Accelerated training Habana To train transformers on Habana’s Gaudi processors, 🤗 Optimum provides a GaudiTrainer that is very similar to the 🤗 Transformers Trainer . Here is a simple example: Copied - from transformers import Trainer, TrainingArguments + from optimum.habana import GaudiTrainer, GaudiTrainingArguments # Download a pretrained model from the Hub model = AutoModelForXxx.from_pretrained("bert-base-uncased") # Define the training arguments - training_args = TrainingArguments( + training_args = GaudiTrainingArguments( output_dir="path/to/save/folder/", + use_habana=True, + use_lazy_mode=True, + gaudi_config_name="Habana/bert-base-uncased", ... ) # Initialize the trainer - trainer = Trainer( + trainer = GaudiTrainer( model=model, args=training_args, train_dataset=train_dataset, ... ) # Use Habana Gaudi processor for training! trainer.train() You can find more examples in the documentation and in the examples . ONNX Runtime To train transformers with ONNX Runtime’s acceleration features, 🤗 Optimum provides a ORTTrainer that is very similar to the 🤗 Transformers Trainer . Here is a simple example: Copied - from transformers import Trainer, TrainingArguments + from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments # Download a pretrained model from the Hub model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") # Define the training arguments - training_args = TrainingArguments( + training_args = ORTTrainingArguments( output_dir="path/to/save/folder/", optim="adamw_ort_fused", ... 
) # Create an ONNX Runtime Trainer - trainer = Trainer( + trainer = ORTTrainer( model=model, args=training_args, train_dataset=train_dataset, + feature="text-classification", # The model type to export to ONNX ... ) # Use ONNX Runtime for training! trainer.train() You can find more examples in the documentation and in the examples . Out of the box ONNX export The Optimum library handles the ONNX export of Transformers and Diffusers models out of the box! Exporting a model to ONNX is as simple as: Copied optimum-cli export onnx --model gpt2 gpt2_onnx/ Check out the help for more options: Copied optimum-cli export onnx --help Check out the documentation for more. PyTorch’s BetterTransformer support BetterTransformer is a free-lunch, PyTorch-native optimization that yields a 1.25x to 4x speedup for inference of Transformer-based models. It has been marked as stable in PyTorch 1.13 . We integrated BetterTransformer with the most-used models from the 🤗 Transformers library, and using the integration is as simple as: Copied >>> from optimum.bettertransformer import BetterTransformer >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained( "distilbert-base-uncased-finetuned-sst-2-english" ) >>> model = BetterTransformer.transform(model) Check out the documentation for more details, and the blog post on PyTorch’s Medium to find out more about the integration! torch.fx integration Optimum integrates with torch.fx , providing several graph transformations as one-liners. We aim to support better management of quantization through torch.fx , both for quantization-aware training (QAT) and post-training quantization (PTQ). Check out the documentation and reference for more! |
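To make the torch.fx integration above more concrete, here is a rough sketch rather than a definitive recipe. It assumes the compose helper and the MergeLinears and ChangeTrueDivToMulByInverse transformations are available in your installed version of Optimum; the checkpoint and input names are just examples.

from transformers import AutoModelForSequenceClassification
from transformers.utils.fx import symbolic_trace
from optimum.fx.optimization import ChangeTrueDivToMulByInverse, MergeLinears, compose

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Trace the model into a torch.fx.GraphModule
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask", "token_type_ids"])

# Compose several graph transformations and apply them in a single call
transformation = compose(MergeLinears(), ChangeTrueDivToMulByInverse())
optimized = transformation(traced)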
Deploying_a_llama.cpp_Container.txt | Deploying a llama.cpp Container Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Inference Endpoints (dedicated) documentation Deploying a llama.cpp Container Inference Endpoints (dedicated) 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Overview 🤗 Inference Endpoints Security & Compliance Supported Tasks API Reference (Swagger) Autoscaling Pricing Help & Support FAQ Guides Access the solution (UI) Create your first Endpoint Send Requests to Endpoints Update your Endpoint Advanced Setup (Instance Types, Auto Scaling, Versioning) Create a Private Endpoint with AWS PrivateLink Add custom Dependencies Create custom Inference Handler Use a custom Container Image Access and read Logs Access and view Metrics Change Organization or Account Pause and Resume your Endpoint Deploying a llama.cpp Container Others Inference Endpoints Version Serialization & Deserialization for Requests Inference Endpoints Container Types Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Deploying a llama.cpp Container You can deploy any llama.cpp compatible GGUF on the Hugging Face Endpoints. When you create an endpoint with a GGUF model, a llama.cpp container is automatically selected using the latest image built from the master branch of the llama.cpp repository. Upon successful deployment, a server with an OpenAI-compatible endpoint becomes available. Llama.cpp supports multiple endpoints like /tokenize , /health , /embedding and many more. For a comprehensive list of available endpoints, please refer to the API documentation . Deployment Steps To deploy an endpoint with a llama.cpp container, follow these steps: Create a new endpoint and select a repository containing a GGUF model. The llama.cpp container will be automatically selected. Choose the desired GGUF file, noting that memory requirements will vary depending on the selected file. For example, an F16 model requires more memory than a Q4_K_M model. Select your desired hardware configuration. Optionally, you can customize the container’s configuration settings like Max Tokens , Number of Concurrent Requests . For more information on those, please refer to the Configurations section below. Click the Create Endpoint button to complete the deployment. Alternatively, you can follow the video tutorial below for a step-by-step guide on deploying an endpoint with a llama.cpp container: Configurations The llama.cpp container offers several configuration options that can be adjusted. After deployment, you can modify these settings by accessing the Settings tab on the endpoint details page. Basic Configurations Max Tokens (per Request) : The maximum number of tokens that can be sent in a single request. Max Concurrent Requests : The maximum number of concurrent requests allowed for this deployment. 
Increasing this limit requires additional memory allocation. For instance, setting this value to 4 requests with a maximum of 1024 tokens per request requires memory capacity for 4096 tokens in total. Advanced Configurations In addition to the basic configurations, you can also modify specific settings by setting environment variables. A list of available environment variables can be found in the API documentation . Please note that the following environment variables are reserved by the system and cannot be modified: LLAMA_ARG_MODEL LLAMA_ARG_HTTP_THREADS LLAMA_ARG_N_GPU_LAYERS LLAMA_ARG_EMBEDDINGS LLAMA_ARG_HOST LLAMA_ARG_PORT LLAMA_ARG_NO_MMAP LLAMA_ARG_CTX_SIZE LLAMA_ARG_N_PARALLEL LLAMA_ARG_ENDPOINT_METRICS Troubleshooting If the deployment fails, check the log output for error messages. You can access the logs by clicking on the Logs tab on the endpoint details page. To learn more, refer to the Logs documentation. Malloc failed: out of memory If you see this error message in the log: Copied ggml_backend_cuda_buffer_type_alloc_buffer: allocating 67200.00 MiB on device 0 : cuda Malloc failed: out of memory llama_kv_cache_init: failed to allocate buffer for kv cache llama_new_context_with_model: llama_kv_cache_init() failed for self-attention cache ... it means the selected hardware configuration does not have enough memory to accommodate the selected GGUF model. You can try to: Lower the maximum number of tokens per request Lower the number of concurrent requests Select a smaller GGUF model Select a larger hardware configuration Workload evicted, storage limit exceeded This error message indicates that the selected hardware configuration has too little storage to hold the selected GGUF model. Try selecting a smaller model or a larger hardware configuration. Other problems For other problems, please refer to the llama.cpp issues page . If you open a new issue, please also include the full log output in your bug report. |
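As mentioned in the deployment steps above, the container exposes an OpenAI-compatible server, so a deployed endpoint can be queried with any OpenAI-style client. The snippet below is a hedged sketch using the openai Python package; the endpoint URL, access token, and model label are placeholders to replace with your own values.

from openai import OpenAI

# Placeholders: copy the Endpoint URL from the endpoint details page and use a
# Hugging Face access token that is allowed to call the endpoint.
client = OpenAI(
    base_url="https://<your-endpoint>.endpoints.huggingface.cloud/v1",
    api_key="hf_xxx",
)

response = client.chat.completions.create(
    model="gguf",  # the endpoint serves a single GGUF model, so this label is only informational
    messages=[{"role": "user", "content": "Briefly explain what a GGUF file is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)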
Interface__ImageToTextOutput.txt | Interface: ImageToTextOutput Properties generated_text • generated_text : string The generated caption Defined in inference/src/tasks/cv/imageToText.ts:16 |
LoKr.txt | LoKr Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation LoKr PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started LoKr Low-Rank Kronecker Product ( LoKr ), is a LoRA-variant method that approximates the large weight matrix with two low-rank matrices and combines them with the Kronecker product. LoKr also provides an optional third low-rank matrix to provide better control during fine-tuning. LoKrConfig class peft. LoKrConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False rank_pattern : Optional[dict] = <factory> alpha_pattern : Optional[dict] = <factory> r : int = 8 alpha : int = 8 rank_dropout : float = 0.0 module_dropout : float = 0.0 use_effective_conv2d : bool = False decompose_both : bool = False decompose_factor : int = -1 rank_dropout_scale : bool = False target_modules : Optional[Union[list[str], str]] = None exclude_modules : Optional[Union[list[str], str]] = None init_weights : Union[bool, Literal['lycoris']] = True layers_to_transform : Optional[Union[list[int], int]] = None layers_pattern : Optional[Union[list[str], str]] = None modules_to_save : Optional[list[str]] = None ) Parameters r ( int ) — LoKr rank. alpha ( int ) — The alpha parameter for LoKr scaling. rank_dropout ( float ) — The dropout probability for rank dimension during training. module_dropout ( float ) — The dropout probability for disabling LoKr modules during training. use_effective_conv2d ( bool ) — Use parameter effective decomposition for Conv2d with ksize > 1 (“Proposition 3” from FedPara paper). decompose_both ( bool ) — Perform rank decomposition of left kronecker product matrix. 
decompose_factor ( int ) — Kronecker product decomposition factor. rank_dropout_scale (‘bool) — Whether to scale the rank dropout while training, defaults to False . target_modules ( Optional[Union[List[str], str]] ) — The names of the modules to apply the adapter to. If this is specified, only the modules with the specified names will be replaced. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. If this is specified as ‘all-linear’, then all linear/Conv1D modules are chosen, excluding the output layer. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised — in this case, you should specify the target modules manually. exclude_modules ( Optional[Union[List[str], str]] ) — The names of the modules to not apply the adapter. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. init_weights ( bool ) — Whether to perform initialization of adapter weights. This defaults to True . Use “lycoris” to initialize weights in the style of the LYCORIS repository. Passing False is discouraged. layers_to_transform ( Union[List[int], int] ) — The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer indices that are specified in this list. If a single integer is passed, it will apply the transformations on the layer at this index. layers_pattern ( Optional[Union[List[str], str]] ) — The layer pattern name, used only if layers_to_transform is different from None . This should target the nn.ModuleList of the model, which is often called 'layers' or 'h' . rank_pattern ( dict ) — The mapping from layer names or regexp expression to ranks which are different from the default rank specified by r . alpha_pattern ( dict ) — The mapping from layer names or regexp expression to alphas which are different from the default alpha specified by alpha . modules_to_save ( Optional[List[str]] ) — List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint. Configuration class of LoKrModel . LoKrModel class peft. LoKrModel < source > ( model config adapter_name low_cpu_mem_usage : bool = False ) → torch.nn.Module Parameters model ( torch.nn.Module ) — The model to which the adapter tuner layers will be attached. config ( LoKrConfig ) — The configuration of the LoKr model. adapter_name ( str ) — The name of the adapter, defaults to "default" . low_cpu_mem_usage ( bool , optional , defaults to False ) — Create empty adapter weights on meta device. Useful to speed up the loading process. Returns torch.nn.Module The LoKr model. Creates Low-Rank Kronecker Product model from a pretrained model. The original method is partially described in https://arxiv.org/abs/2108.06098 and in https://arxiv.org/abs/2309.14859 Current implementation heavily borrows from https://github.com/KohakuBlueleaf/LyCORIS/blob/eb460098187f752a5d66406d3affade6f0a07ece/lycoris/modules/lokr.py Example: Copied >>> from diffusers import StableDiffusionPipeline >>> from peft import LoKrModel, LoKrConfig >>> config_te = LoKrConfig( ... r= 8 , ... lora_alpha= 32 , ... target_modules=[ "k_proj" , "q_proj" , "v_proj" , "out_proj" , "fc1" , "fc2" ], ... rank_dropout= 0.0 , ... 
module_dropout= 0.0 , ... init_weights= True , ... ) >>> config_unet = LoKrConfig( ... r= 8 , ... lora_alpha= 32 , ... target_modules=[ ... "proj_in" , ... "proj_out" , ... "to_k" , ... "to_q" , ... "to_v" , ... "to_out.0" , ... "ff.net.0.proj" , ... "ff.net.2" , ... ], ... rank_dropout= 0.0 , ... module_dropout= 0.0 , ... init_weights= True , ... use_effective_conv2d= True , ... ) >>> model = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5" ) >>> model.text_encoder = LoKrModel(model.text_encoder, config_te, "default" ) >>> model.unet = LoKrModel(model.unet, config_unet, "default" ) Attributes : model ( ~torch.nn.Module ) — The model to be adapted. peft_config ( LoKrConfig ): The configuration of the LoKr model.
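To make the configuration options above concrete, here is a minimal usage sketch that wraps a small causal language model with a LoKr adapter via get_peft_model. The base model name ("facebook/opt-125m") and the chosen target_modules are illustrative assumptions, not values prescribed by the reference above.

```python
# Minimal sketch: attach a LoKr adapter to a small causal LM with get_peft_model.
# The model name and target_modules below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoKrConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = LoKrConfig(
    task_type="CAUSAL_LM",                 # prepare the PEFT model for causal language modeling
    r=8,                                   # LoKr rank
    alpha=8,                               # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections in OPT-style models
    module_dropout=0.1,                    # randomly disable LoKr modules during training
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()         # only the LoKr factors should be trainable
```

Any transformers model with linear projection layers could be substituted here; the base weights stay frozen and only the LoKr factors are trained.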
How_to_configure_SAML_SSO_with_Okta.txt | How to configure SAML SSO with Okta

In this guide, we will use Okta as the SSO provider, with the Security Assertion Markup Language (SAML) protocol as our preferred identity protocol. We currently support SP-initiated and IdP-initiated authentication. User provisioning is not yet supported. This feature is part of the Enterprise Hub.

Step 1: Create a new application in your Identity Provider

Open a new tab/window in your browser and sign in to your Okta account. Navigate to “Admin/Applications” and click the “Create App Integration” button. Then choose a “SAML 2.0” application and click “Create”.

Step 2: Configure your application on Okta

Open a new tab/window in your browser and navigate to the SSO section of your organization’s settings. Select the SAML protocol. Copy the “Assertion Consumer Service URL” from the organization’s settings on Hugging Face, and paste it in the “Single sign-on URL” field on Okta. The URL looks like this: https://huggingface.co/organizations/[organizationIdentifier]/saml/consume
On Okta, set the following settings:

Set Audience URI (SP Entity Id) to match the “SP Entity ID” value on Hugging Face.
Set Name ID format to EmailAddress.
Under “Show Advanced Settings”, verify that Response and Assertion Signature are set to “Signed”.

Save your new application.

Step 3: Finalize configuration on Hugging Face

In your Okta application, under “Sign On/Settings/More details”, find the following fields:

Sign-on URL
Public certificate
SP Entity ID

You will need them to finalize the SSO setup on Hugging Face. In the SSO section of your organization’s settings, copy-paste these values from Okta:

Sign-on URL
SP Entity ID
Public certificate

The public certificate must have the following format:

-----BEGIN CERTIFICATE-----
{certificate}
-----END CERTIFICATE-----

You can now click on “Update and Test SAML configuration” to save the settings. You should be redirected to your SSO provider (IdP) login prompt. Once logged in, you’ll be redirected to your organization’s settings page. A green check mark near the SAML selector confirms that the test was successful.

Step 4: Enable SSO in your organization

Now that Single Sign-On is configured and tested, you can enable it for members of your organization by clicking on the “Enable” button. Once enabled, members of your organization must complete the SSO authentication flow described in the How does it work? section.
Building_a_React_application.txt | Building a React application

In this tutorial, we’ll be building a simple React application that performs multilingual translation using Transformers.js! The final product will look something like this:

Useful links: Demo site, Source code

Prerequisites

Node.js version 18+
npm version 9+

Step 1: Initialise the project

For this tutorial, we will use Vite to initialise our project. Vite is a build tool that allows us to quickly set up a React application with minimal configuration. Run the following command in your terminal:

npm create vite@latest react-translator -- --template react

If prompted to install create-vite, type y and press Enter. Next, enter the project directory and install the necessary development dependencies:

cd react-translator
npm install

To test that our application is working, we can run the following command:

npm run dev

Visiting the URL shown in the terminal (e.g., http://localhost:5173/) should show the default “React + Vite” landing page. You can stop the development server by pressing Ctrl + C in the terminal.

Step 2: Install and configure Transformers.js

Now we get to the fun part: adding machine learning to our application! First, install Transformers.js from NPM with the following command:

npm install @huggingface/transformers

For this application, we will use the Xenova/nllb-200-distilled-600M model, which can perform multilingual translation among 200 languages. Before we start, there are two things we need to take note of:

ML inference can be quite computationally intensive, so it’s better to load and run the models in a separate thread from the main (UI) thread.
Since the model is quite large (>1 GB), we don’t want to download it until the user clicks the “Translate” button.

We can achieve both of these goals by using a Web Worker and some React hooks. Create a file called worker.js in the src directory. This script will do all the heavy lifting for us, including loading and running the translation pipeline.
To ensure the model is only loaded once, we will create the MyTranslationPipeline class which use the singleton pattern to lazily create a single instance of the pipeline when getInstance is first called, and use this pipeline for all subsequent calls: Copied import { pipeline } from '@huggingface/transformers' ; class MyTranslationPipeline { static task = 'translation' ; static model = 'Xenova/nllb-200-distilled-600M' ; static instance = null ; static async getInstance ( progress_callback = null ) { if ( this . instance === null ) { this . instance = pipeline ( this . task , this . model , { progress_callback }); } return this . instance ; } } Modify App.jsx in the src directory. This file is automatically created when initializing our React project, and will contain some boilerplate code. Inside the App function, let’s create the web worker and store a reference to it using the useRef hook: Copied // Remember to import the relevant hooks import { useEffect, useRef, useState } from 'react' function App ( ) { // Create a reference to the worker object. const worker = useRef ( null ); // We use the `useEffect` hook to setup the worker as soon as the `App` component is mounted. useEffect ( () => { if (!worker. current ) { // Create the worker if it does not yet exist. worker. current = new Worker ( new URL ( './worker.js' , import . meta . url ), { type : 'module' }); } // Create a callback function for messages from the worker thread. const onMessageReceived = ( e ) => { // TODO: Will fill in later }; // Attach the callback function as an event listener. worker. current . addEventListener ( 'message' , onMessageReceived); // Define a cleanup function for when the component is unmounted. return () => worker. current . removeEventListener ( 'message' , onMessageReceived); }); return ( // TODO: Rest of our app goes here... ) } export default App Step 3: Design the user interface We recommend starting the development server again with npm run dev (if not already running) so that you can see your changes in real-time. First, let’s define our components. Create a folder called components in the src directory, and create the following files: LanguageSelector.jsx : This component will allow the user to select the input and output languages. Check out the full list of languages here . Copied const LANGUAGES = { "Acehnese (Arabic script)" : "ace_Arab" , "Acehnese (Latin script)" : "ace_Latn" , "Afrikaans" : "afr_Latn" , ... "Zulu" : "zul_Latn" , } export default function LanguageSelector ( { type, onChange, defaultLanguage } ) { return ( < div className = 'language-selector' > < label > {type}: </ label > < select onChange = {onChange} defaultValue = {defaultLanguage} > {Object.entries(LANGUAGES).map(([key, value]) => { return < option key = {key} value = {value} > {key} </ option > })} </ select > </ div > ) } Progress.jsx : This component will display the progress for downloading each model file. Copied export default function Progress ( { text, percentage } ) { percentage = percentage ?? 
0 ; return ( < div className = "progress-container" > < div className = 'progress-bar' style = {{ ' width ' : `${ percentage }%` }}> {text} ({`${percentage.toFixed(2)}%`}) </ div > </ div > ); } We can now use these components in App.jsx by adding these imports to the top of the file: Copied import LanguageSelector from './components/LanguageSelector' ; import Progress from './components/Progress' ; Let’s also add some state variables to keep track of a few things in our application, like model loading, languages, input text, and output text. Add the following code to the beginning of the App function in src/App.jsx : Copied function App ( ) { // Model loading const [ready, setReady] = useState ( null ); const [disabled, setDisabled] = useState ( false ); const [progressItems, setProgressItems] = useState ([]); // Inputs and outputs const [input, setInput] = useState ( 'I love walking my dog.' ); const [sourceLanguage, setSourceLanguage] = useState ( 'eng_Latn' ); const [targetLanguage, setTargetLanguage] = useState ( 'fra_Latn' ); const [output, setOutput] = useState ( '' ); // rest of the code... } Next, we can add our custom components to the main App component. We will also add two textarea elements for input and output text, and a button to trigger the translation. Modify the return statement to look like this: Copied return ( <> < h1 > Transformers.js </ h1 > < h2 > ML-powered multilingual translation in React! </ h2 > < div className = 'container' > < div className = 'language-container' > < LanguageSelector type = { " Source "} defaultLanguage = { " eng_Latn "} onChange = {x => setSourceLanguage(x.target.value)} /> < LanguageSelector type = { " Target "} defaultLanguage = { " fra_Latn "} onChange = {x => setTargetLanguage(x.target.value)} /> </ div > < div className = 'textbox-container' > < textarea value = {input} rows = {3} onChange = {e => setInput(e.target.value)}> </ textarea > < textarea value = {output} rows = {3} readOnly > </ textarea > </ div > </ div > < button disabled = {disabled} onClick = {translate} > Translate </ button > < div className = 'progress-bars-container' > {ready === false && ( < label > Loading models... (only run once) </ label > )} {progressItems.map(data => ( < div key = {data.file} > < Progress text = {data.file} percentage = {data.progress} /> </ div > ))} </ div > </> ) Don’t worry about the translate function for now. We will define it in the next section. Finally, we can add some CSS to make our app look a little nicer. 
Modify the following files in the src directory: index.css : View code Copied :root { font-family : Inter, system-ui, Avenir, Helvetica, Arial, sans-serif; line-height : 1.5 ; font-weight : 400 ; color : #213547 ; background-color : #ffffff ; font-synthesis : none; text-rendering : optimizeLegibility; -webkit- font-smoothing : antialiased; -moz-osx- font-smoothing : grayscale; -webkit-text-size-adjust: 100% ; } body { margin : 0 ; display : flex; place-items: center; min-width : 320px ; min-height : 100vh ; } h1 { font-size : 3.2em ; line-height : 1 ; } h1 , h2 { margin : 8px ; } select { padding : 0.3em ; cursor : pointer; } textarea { padding : 0.6em ; } button { padding : 0.6em 1.2em ; cursor : pointer; font-weight : 500 ; } button [disabled] { cursor : not-allowed; } select, textarea , button { border-radius : 8px ; border : 1px solid transparent; font-size : 1em ; font-family : inherit; background-color : #f9f9f9 ; transition : border-color 0.25s ; } select :hover , textarea :hover , button :not ( [disabled] ) :hover { border-color : #646cff ; } select :focus , select :focus -visible, textarea :focus , textarea :focus -visible, button :focus , button :focus -visible { outline : 4px auto -webkit-focus-ring-color; } App.css View code Copied #root { max-width : 1280px ; margin : 0 auto; padding : 2rem ; text-align : center; } .language-container { display : flex; gap : 20px ; } .textbox-container { display : flex; justify-content : center; gap : 20px ; width : 800px ; } .textbox-container > textarea , .language-selector { width : 50% ; } .language-selector >select { width : 150px ; } .progress-container { position : relative; font-size : 14px ; color : white; background-color : #e9ecef ; border : solid 1px ; border-radius : 8px ; text-align : left; overflow : hidden; } .progress-bar { padding : 0 4px ; z-index : 0 ; top : 0 ; width : 1% ; height : 100% ; overflow : hidden; background-color : #007bff ; white-space : nowrap; } .progress-text { z-index : 2 ; } .selector-container { display : flex; gap : 20px ; } .progress-bars-container { padding : 8px ; height : 140px ; } .container { margin : 25px ; display : flex; flex-direction : column; gap : 10px ; } Step 4: Connecting everything together Now that we have a basic user interface set up, we can finally connect everything together. First, let’s define the translate function, which will be called when the user clicks the Translate button. This sends a message (containing the input text, source language, and target language) to the worker thread for processing. We will also disable the button so the user doesn’t click it multiple times. Add the following code just before the return statement in the App function: Copied const translate = ( ) => { setDisabled ( true ); worker. current . postMessage ({ text : input, src_lang : sourceLanguage, tgt_lang : targetLanguage, }); } Now, let’s add an event listener in src/worker.js to listen for messages from the main thread. We will send back messages (e.g., for model loading progress and text streaming) to the main thread with self.postMessage . Copied // Listen for messages from the main thread self. addEventListener ( 'message' , async (event) => { // Retrieve the translation pipeline. When called for the first time, // this will load the pipeline and save it for future use. let translator = await MyTranslationPipeline . getInstance ( x => { // We also add a progress callback to the pipeline so that we can // track model loading. self. 
postMessage (x); }); // Actually perform the translation let output = await translator (event. data . text , { tgt_lang : event. data . tgt_lang , src_lang : event. data . src_lang , // Allows for partial output callback_function : x => { self. postMessage ({ status : 'update' , output : translator. tokenizer . decode (x[ 0 ]. output_token_ids , { skip_special_tokens : true }) }); } }); // Send the output back to the main thread self. postMessage ({ status : 'complete' , output : output, }); }); Finally, let’s fill in our onMessageReceived function, which will update the application state in response to messages from the worker thread. Add the following code inside the useEffect hook we defined earlier: Copied const onMessageReceived = ( e ) => { switch (e. data . status ) { case 'initiate' : // Model file start load: add a new progress item to the list. setReady ( false ); setProgressItems ( prev => [...prev, e. data ]); break ; case 'progress' : // Model file progress: update one of the progress items. setProgressItems ( prev => prev. map ( item => { if (item. file === e. data . file ) { return { ...item, progress : e. data . progress } } return item; }) ); break ; case 'done' : // Model file loaded: remove the progress item from the list. setProgressItems ( prev => prev. filter ( item => item. file !== e. data . file ) ); break ; case 'ready' : // Pipeline ready: the worker is ready to accept messages. setReady ( true ); break ; case 'update' : // Generation update: update the output text. setOutput (e. data . output ); break ; case 'complete' : // Generation complete: re-enable the "Translate" button setDisabled ( false ); break ; } }; You can now run the application with npm run dev and perform multilingual translation directly in your browser! (Optional) Step 5: Build and deploy To build your application, simply run npm run build . This will bundle your application and output the static files to the dist folder. For this demo, we will deploy our application as a static Hugging Face Space , but you can deploy it anywhere you like! If you haven’t already, you can create a free Hugging Face account here . Visit https://huggingface.co/new-space and fill in the form. Remember to select “Static” as the space type. Go to “Files” → “Add file” → “Upload files”. Drag the index.html file and public/ folder from the dist folder into the upload box and click “Upload”. After they have uploaded, scroll down to the button and click “Commit changes to main”. That’s it! Your application should now be live at https://huggingface.co/spaces/<your-username>/<your-space-name> ! < > Update on GitHub ← Building a Vanilla JS Application Building a Next.js Application → Building a React application Prerequisites Step 1: Initialise the project Step 2: Install and configure Transformers.js Step 3: Design the user interface Step 4: Connecting everything together ( Optional) Step 5: Build and deploy |
Training_on_TPU_with_TensorFlow.txt | Training on TPU with TensorFlow
If you don’t need long explanations and just want TPU code samples to get started with, check out our TPU example notebook!

What is a TPU?

A TPU is a Tensor Processing Unit. They are hardware designed by Google, which are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google’s cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.

Because all TensorFlow models in 🤗 Transformers are Keras models, most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we’ll make sure to flag them up when we get to them.

What kinds of TPU are available?

New users are often very confused by the range of TPUs, and the different ways to access them. The first key distinction to understand is the difference between TPU Nodes and TPU VMs. When you use a TPU Node, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. When you use a TPU on Google Colab, you are accessing it in the TPU Node style.

Using TPU Nodes can have some quite unexpected behaviour for people who aren’t used to them! In particular, because the TPU is located on a physically different system to the machine you’re running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine’s internal storage will totally fail! Instead, data must be stored in Google Cloud Storage where your data pipeline can still access it, even when the pipeline is running on the remote TPU node. If you can fit all your data in memory as np.ndarray or tf.Tensor, then you can fit() on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage.
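As a rough sketch of the in-memory path just described (the toy model and data shapes are made up for illustration), nothing below touches local files, so it avoids the TPU Node data-pipeline limitation. Note that to actually place the model on the TPU you would still wrap model creation in a TPUStrategy scope, as covered later in this guide:

```python
import numpy as np
import tensorflow as tf

# Toy data that fits comfortably in host memory (shapes are illustrative).
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024,))

# A small Keras model; because the arrays live in memory, no file-based
# tf.data pipeline is involved, so nothing needs to be in Google Cloud Storage.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x, y, batch_size=128, epochs=1)
```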
🤗Specific Hugging Face Tip🤗: The methods Dataset.to_tf_dataset() and its higher-level wrapper model.prepare_tf_dataset() , which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a tf.data.Dataset it is not a “pure” tf.data pipeline and uses tf.numpy_function or Dataset.from_generator() to stream data from the underlying HuggingFace Dataset . This HuggingFace Dataset is backed by data that is on a local disc and which the remote TPU Node will not be able to read. The second way to access a TPU is via a TPU VM. When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs! This is an opinionated document, so here’s our opinion: Avoid using TPU Node if possible. It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in future - Google’s latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a “legacy” access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which uses TPU Node - so we’ll try to explain how to handle it if you have to! Check the TPU example notebook for code samples that explain this in more detail. What sizes of TPU are available? A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in pods that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a pod slice. When you access a free TPU via Colab, you generally get a single v2-8 TPU. I keep hearing about this XLA thing. What’s XLA, and how does it relate to TPUs? XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument jit_compile=True to model.compile() . If you don’t get any errors and performance is good, that’s a great sign that you’re ready to move to TPU! Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don’t have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to. XLA compiled code is usually faster - so even if you’re not planning to run on TPU, adding jit_compile=True can improve your performance. Be sure to note the caveats below about XLA compatibility, though! Tip born of painful experience: Although using jit_compile=True is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU! How do I make my model XLA compatible? In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don’t work in XLA. We’ve distilled them into three core rules below: 🤗Specific HuggingFace Tip🤗: We’ve put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. 
Our models and loss functions generally obey rule #1 and #2 by default, so you can skip over them if you’re using transformers models. Don’t forget about these rules when writing your own models and loss functions, though! XLA Rule #1: Your code cannot have “data-dependent conditionals” What that means is that any if statement cannot depend on values inside a tf.Tensor . For example, this code block cannot be compiled with XLA! Copied if tf.reduce_sum(tensor) > 10 : tensor = tensor / 2.0 This might seem very restrictive at first, but most neural net code doesn’t need to do this. You can often get around this restriction by using tf.cond (see the documentation here ) or by removing the conditional and finding a clever math trick with indicator variables instead, like so: Copied sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10 , tf.float32) tensor = tensor / ( 1.0 + sum_over_10) This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems! XLA Rule #2: Your code cannot have “data-dependent shapes” What this means is that the shape of all of the tf.Tensor objects in your code cannot depend on their values. For example, the function tf.unique cannot be compiled with XLA, because it returns a tensor containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input Tensor was, and so XLA refuses to handle it! In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use label masking , setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses boolean indexing : Copied label_mask = labels >= 0 masked_outputs = outputs[label_mask] masked_labels = labels[label_mask] loss = compute_loss(masked_outputs, masked_labels) mean_loss = torch.mean(loss) This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? Because the shape of masked_outputs and masked_labels depends on how many positions are masked - that makes it a data-dependent shape. However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes. Copied label_mask = tf.cast(labels >= 0 , tf.float32) loss = compute_loss(outputs, labels) loss = loss * label_mask # Set negative label positions to 0 mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask) Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a tf.bool to tf.float32 and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA! XLA Rule #3: XLA will need to recompile your model for every different input shape it sees This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. 
In other modalities, static shapes are more common and this rule is much less of a problem. How can you get around rule #3? The key is padding - if you pad all your inputs to the same length, and then use an attention_mask, you can get the same results as you’d get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting of endless padding tokens, which will waste a lot of compute and memory!

There isn’t a perfect solution to this problem. However, you can try some tricks. One very useful trick is to pad batches of samples up to a multiple of a number like 32 or 64 tokens. This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations!

🤗Specific HuggingFace Tip🤗: Our tokenizers and data collators have methods that can help you here. You can use padding="max_length" or padding="longest" when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a pad_to_multiple_of argument that you can use to reduce the number of unique input shapes you see!

How do I actually train my model on TPU?

Once your training is XLA-compatible and (if you’re using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a TPUStrategy scope. Take a look at our TPU example notebook to see this in action!

Summary

There was a lot in here, so let’s summarize with a quick checklist you can follow when you want to get your model ready for TPU training:

Make sure your code follows the three rules of XLA
Compile your model with jit_compile=True on CPU/GPU and confirm that you can train it with XLA
Either load your dataset into memory or use a TPU-compatible dataset loading approach (see notebook)
Migrate your code either to Colab (with accelerator set to “TPU”) or a TPU VM on Google Cloud
Add TPU initializer code (see notebook)
Create your TPUStrategy and make sure dataset loading and model creation are inside the strategy.scope() (see notebook)
Don’t forget to take jit_compile=True out again when you move to TPU! 🙏🙏🙏🥺🥺🥺
Call model.fit()
You did it!
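For reference, the “Add TPU initializer code” and TPUStrategy steps in the checklist usually look something like the following sketch using standard TensorFlow APIs; create_model and train_dataset are placeholders for your own Keras model builder and TPU-compatible dataset:

```python
import tensorflow as tf

# Connect to the TPU and initialize it. On Colab the resolver usually finds the
# TPU from the environment; arguments can be passed explicitly if needed.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Model (and optimizer) creation must happen inside the strategy scope so that
# the variables are placed on the TPU replicas.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = create_model()  # placeholder: build your Keras model here
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Note: no jit_compile=True here; XLA compilation happens implicitly on TPU.

model.fit(train_dataset, epochs=3)  # train_dataset: a TPU-compatible tf.data.Dataset
```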
Tuners.txt | Tuners

A tuner (or adapter) is a module that can be plugged into a torch.nn.Module. BaseTuner is the base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. BaseTunerLayer is a base class for adapter layers. It offers methods and attributes for managing adapters, such as activating and disabling adapters.

BaseTuner

class peft.tuners.tuners_utils.BaseTuner < source > ( model peft_config : Union[PeftConfig, dict[str, PeftConfig]] adapter_name : str low_cpu_mem_usage : bool = False )

Parameters model ( torch.nn.Module ) — The model to which the adapter tuner layers will be attached. forward ( Callable ) — The forward method of the model. peft_config ( Union[ PeftConfig , dict[str, PeftConfig]] ) — The adapter configuration object; it should be a dictionary of str to PeftConfig objects. One can also pass a PeftConfig object, in which case a new adapter will be created with the default name adapter, or create a new dictionary with a key adapter_name and a value of that peft config. config ( dict[str, Any] ) — The model configuration object; it should be a dictionary of str to Any objects. targeted_module_names ( list[str] ) — The list of module names that were actually adapted. Can be useful to inspect if you want to quickly double-check that the config.target_modules were specified correctly.

A base tuner model that provides the common methods and attributes for all tuners that are injectable into a torch.nn.Module.

For adding a new Tuner class, one needs to overwrite the following methods:

_prepare_adapter_config : A private method to eventually prepare the adapter config, for example in case the field target_modules is missing.
_create_and_replace : A private method to create and replace the target module with the adapter module. _check_target_module_exists : A private helper method to check if the passed module’s key name matches any of the target modules in the adapter_config. The easiest way to get started is to check what is done in the peft.tuners.lora.LoraModel class.

disable_adapter_layers < source > ( )
Disable all adapters in-place.

enable_adapter_layers < source > ( )
Enable all adapters in-place.

get_model_config < source > ( model : nn.Module )
Parameters model ( nn.Module ) — Model to get the config from. default ( dict|None , optional ) — What to return if the model does not have a config attribute.
This method gets the config from a model in dictionary form. If the model has no config attribute, this method returns a default config.

inject_adapter < source > ( model : nn.Module adapter_name : str autocast_adapter_dtype : bool = True low_cpu_mem_usage : bool = False )
Parameters model ( nn.Module ) — The model to be tuned. adapter_name ( str ) — The adapter name. autocast_adapter_dtype ( bool , optional ) — Whether to autocast the adapter dtype. Defaults to True . low_cpu_mem_usage ( bool , optional , defaults to False ) — Create empty adapter weights on meta device. Useful to speed up the loading process.
Creates adapter layers and replaces the target modules with the adapter layers. This method is called under the hood by peft.mapping.get_peft_model if a non-prompt tuning adapter class is passed. The corresponding PEFT config is directly retrieved from the peft_config attribute of the BaseTuner class.

merge_adapter < source > ( adapter_names : Optional[list[str]] = None )
Parameters safe_merge ( bool , optional ) — If True , the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to False . adapter_names ( list[str] , optional ) — The list of adapter names that should be merged. If None , all active adapters will be merged. Defaults to None .
This method merges the adapter layers into the base model. Merging adapters can lead to a speed up of the forward pass. A copy of the adapter weights is still kept in memory, which is required to unmerge the adapters. In order to merge the adapter weights without keeping them in memory, please call merge_and_unload .

unmerge_adapter < source > ( )
This method unmerges all merged adapter layers from the base model.

BaseTunerLayer

class peft.tuners.tuners_utils.BaseTunerLayer < source > ( )
Parameters is_pluggable ( bool , optional ) — Whether the adapter layer can be plugged into any pytorch module. active_adapters ( Union[List[str], str] , optional ) — The name of the active adapter.
A tuner layer mixin that provides the common methods and attributes for all tuners.

delete_adapter < source > ( adapter_name : str )
Parameters adapter_name ( str ) — The name of the adapter to delete.
Delete an adapter from the layer. This should be called on all adapter layers, or else we will get an inconsistent state. This method will also set a new active adapter if the deleted adapter was an active adapter. It is important that the new adapter is chosen in a deterministic way, so that the same adapter is chosen on all layers.
enable_adapters < source > ( enabled : bool )
Parameters enabled ( bool ) — True to enable adapters, False to disable adapters.
Toggle the enabling and disabling of adapters. Takes care of setting the requires_grad flag for the adapter weights.

get_base_layer < source > ( )
(Recursively) get the base_layer. This is necessary for the case that the tuner layer wraps another tuner layer.

set_adapter < source > ( adapter_names : str | list[str] )
Parameters adapter_names ( str or List[str] ) — Name of the adapter(s) to be activated.
Set the active adapter(s). Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, use the following code.

>>> for name, param in model_peft.named_parameters():
...     if ...:  # some check on name (ex. if 'lora' in name)
...         param.requires_grad = False
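As a usage sketch of the methods above (assuming peft_model is an existing PeftModel, for example one created with get_peft_model and a LoRA or LoKr config), adapters can be toggled at the tuner level and inspected at the layer level like this:

```python
from peft.tuners.tuners_utils import BaseTunerLayer

# `peft_model` is assumed to already exist (e.g. from get_peft_model).
tuner = peft_model.base_model       # the BaseTuner instance, e.g. a LoraModel

tuner.disable_adapter_layers()      # turn every adapter off in-place
tuner.enable_adapter_layers()       # ... and back on again

# Layer-level inspection: every injected adapter layer is a BaseTunerLayer mixin.
for name, module in peft_model.named_modules():
    if isinstance(module, BaseTunerLayer):
        print(name, module.active_adapters)  # names of the currently active adapters
```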
Spark.txt | Spark

Spark enables real-time, large-scale data processing in a distributed environment. In particular, you can use huggingface_hub to access Hugging Face dataset repositories in PySpark.

Installation

To be able to read and write to Hugging Face URLs (e.g. hf://datasets/username/dataset/data.parquet), you need to install the huggingface_hub library:

pip install huggingface_hub

You also need to install pyarrow to read/write Parquet / JSON / CSV / etc. files using the filesystem API provided by huggingface_hub:

pip install pyarrow

Authentication

You need to authenticate to Hugging Face to read private/gated dataset repositories or to write to your dataset repositories. You can use the CLI, for example:

huggingface-cli login

It’s also possible to provide your Hugging Face token with the HF_TOKEN environment variable or by passing the storage_options parameter to the helper functions below:

storage_options = {"token": "hf_xxx"}

For more details about authentication, check out this guide.

Read

PySpark doesn’t have official support for Hugging Face paths, so we provide a helper function to read datasets in a distributed manner.
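Before wiring a reader into PySpark (the helper is defined next), it can be useful to sanity-check that the filesystem API can actually see the files you want to read. This is a small sketch; the repository path is a placeholder:

```python
from huggingface_hub import HfFileSystem

# Pass token="hf_xxx" here if you are not logged in via `huggingface-cli login`.
fs = HfFileSystem()

# List the Parquet files of a dataset repository (placeholder path).
paths = fs.glob("hf://datasets/username/dataset/*.parquet")
print(paths[:5])
```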
For example you can read Parquet files from Hugging Face in an optimized way using PyArrow by defining this read_parquet helper function: Copied from functools import partial from typing import Iterator, Optional , Union import pyarrow as pa import pyarrow.parquet as pq from huggingface_hub import HfFileSystem from pyspark.sql.dataframe import DataFrame from pyspark.sql.pandas.types import from_arrow_schema def _read ( iterator: Iterator[pa.RecordBatch], columns: Optional [ list [ str ]], filters: Optional [ Union [ list [ tuple ], list [ list [ tuple ]]]], **kwargs ) -> Iterator[pa.RecordBatch]: for batch in iterator: paths = batch[ 0 ].to_pylist() ds = pq.ParquetDataset(paths, **kwargs) yield from ds._dataset.to_batches(columns=columns, filter =pq.filters_to_expression(filters) if filters else None ) def read_parquet ( path: str , columns: Optional [ list [ str ]] = None , filters: Optional [ Union [ list [ tuple ], list [ list [ tuple ]]]] = None , **kwargs, ) -> DataFrame: """ Loads Parquet files from Hugging Face using PyArrow, returning a PySPark `DataFrame`. It reads Parquet files in a distributed manner. Access private or gated repositories using `huggingface-cli login` or passing a token using the `storage_options` argument: `storage_options={"token": "hf_xxx"}` Parameters ---------- path : str Path to the file. Prefix with a protocol like `hf://` to read from Hugging Face. You can read from multiple files if you pass a globstring. columns : list, default None If not None, only these columns will be read from the file. filters : List[Tuple] or List[List[Tuple]], default None To filter out data. Filter syntax: [[(column, op, val), ...],...] where op is [==, =, >, >=, <, <=, !=, in, not in] The innermost tuples are transposed into a set of filters applied through an `AND` operation. The outer list combines these sets of filters through an `OR` operation. A single list of tuples can also be used, meaning that no `OR` operation between set of filters is to be conducted. **kwargs Any additional kwargs are passed to pyarrow.parquet.ParquetDataset. Returns ------- DataFrame DataFrame based on parquet file. Examples -------- >>> path = "hf://datasets/username/dataset/data.parquet" >>> pd.DataFrame({"foo": range(5), "bar": range(5, 10)}).to_parquet(path) >>> read_parquet(path).show() +---+---+ |foo|bar| +---+---+ | 0| 5| | 1| 6| | 2| 7| | 3| 8| | 4| 9| +---+---+ >>> read_parquet(path, columns=["bar"]).show() +---+ |bar| +---+ | 5| | 6| | 7| | 8| | 9| +---+ >>> sel = [("foo", ">", 2)] >>> read_parquet(path, filters=sel).show() +---+---+ |foo|bar| +---+---+ | 3| 8| | 4| 9| +---+---+ """ filesystem: HfFileSystem = kwargs.pop( "filesystem" ) if "filesystem" in kwargs else HfFileSystem(**kwargs.pop( "storage_options" , {})) paths = filesystem.glob(path) if not paths: raise FileNotFoundError( f"Counldn't find any file at {path} " ) rdd = spark.sparkContext.parallelize([{ "path" : path} for path in paths], len (paths)) df = spark.createDataFrame(rdd) arrow_schema = pq.read_schema(filesystem. open (paths[ 0 ])) schema = pa.schema([field for field in arrow_schema if (columns is None or field.name in columns)], metadata=arrow_schema.metadata) return df.mapInArrow( partial(_read, columns=columns, filters=filters, filesystem=filesystem, schema=arrow_schema, **kwargs), from_arrow_schema(schema), ) Here is how we can use this on the BAAI/Infinity-Instruct dataset. It is a gated repository, users have to accept the terms of use before accessing it. 
We use the read_parquet function to read data from the dataset, compute the number of dialogue per language and filter the dataset. After logging-in to access the gated repository, we can run: Copied >>> from pyspark.sql import SparkSession >>> spark = SparkSession.builder.appName( "demo" ).getOrCreate() >>> df = read_parquet( "hf://datasets/BAAI/Infinity-Instruct/7M/*.parquet" ) >>> df.show() +---+----------------------------+-----+----------+--------------------+ | id | conversations|label|langdetect| source| +---+----------------------------+-----+----------+--------------------+ | 0 | [{human, def exti ...| | en| code_exercises| | 1 | [{human, See the ...| | en| flan| | 2 | [{human, This is ...| | en| flan| | 3 | [{human, If you d...| | en| flan| | 4 | [{human, In a Uni...| | en| flan| | 5 | [{human, Read the...| | en| flan| | 6 | [{human, You are ...| | en| code_bagel| | 7 | [{human, I want y...| | en| Subjective| | 8 | [{human, Given th...| | en| flan| | 9 |[{human, 因果联系原则是法...| | zh-cn| Subjective| | 10 | [{human, Provide ...| | en|self-oss-instruct...| | 11 | [{human, The univ...| | en| flan| | 12 | [{human, Q: I am ...| | en| flan| | 13 | [{human, What is ...| | en| OpenHermes- 2.5 | | 14 | [{human, In react...| | en| flan| | 15 | [{human, Write Py...| | en| code_exercises| | 16 | [{human, Find the...| | en| MetaMath| | 17 | [{human, Three of...| | en| MetaMath| | 18 | [{human, Chandra ...| | en| MetaMath| | 19 |[{human, 用经济学知识分析...| | zh-cn| Subjective| +---+----------------------------+-----+----------+--------------------+ To compute the number of dialogues per language we run this code. The columns argument is useful to only load the data we need, since PySpark doesn’t enable predicate push-down in this case. There is also a filters argument to only load data with values within a certain range. 
Copied >>> df_langdetect_only = read_parquet( "hf://datasets/BAAI/Infinity-Instruct/7M/*.parquet" , columns=[ "langdetect" ]) >>> df_langdetect_only.groupBy( "langdetect" ).count().show() +----------+-------+ |langdetect| count| +----------+-------+ | en| 6697793 | | zh-cn| 751313 | +----------+-------+ To filter the dataset and only keep dialogues in Chinese: Copied >>> criteria = [( "langdetect" , "=" , "zh-cn" )] >>> df_chinese_only = read_parquet( "hf://datasets/BAAI/Infinity-Instruct/7M/*.parquet" , filters=criteria) >>> df_chinese_only.show() +---+----------------------------+-----+----------+----------+ | id | conversations|label|langdetect| source| +---+----------------------------+-----+----------+----------+ | 9 |[{human, 因果联系原则是法...| | zh-cn|Subjective| | 19 |[{human, 用经济学知识分析...| | zh-cn|Subjective| | 38 | [{human, 某个考试共有A、...| | zh-cn|Subjective| | 39 |[{human, 撰写一篇关于斐波...| | zh-cn|Subjective| | 57 |[{human, 总结世界历史上的...| | zh-cn|Subjective| | 61 |[{human, 生成一则广告词。...| | zh-cn|Subjective| | 66 |[{human, 描述一个有效的团...| | zh-cn|Subjective| | 94 |[{human, 如果比利和蒂芙尼...| | zh-cn|Subjective| | 102 |[{human, 生成一句英文名言...| | zh-cn|Subjective| | 106 |[{human, 写一封感谢信,感...| | zh-cn|Subjective| | 118 | [{human, 生成一个故事。}...| | zh-cn|Subjective| | 174 |[{human, 高胆固醇水平的后...| | zh-cn|Subjective| | 180 |[{human, 基于以下角色信息...| | zh-cn|Subjective| | 192 |[{human, 请写一篇文章,概...| | zh-cn|Subjective| | 221 |[{human, 以诗歌形式表达对...| | zh-cn|Subjective| | 228 |[{human, 根据给定的指令,...| | zh-cn|Subjective| | 236 |[{human, 打开一个新的生成...| | zh-cn|Subjective| | 260 |[{human, 生成一个有关未来...| | zh-cn|Subjective| | 268 |[{human, 如果有一定数量的...| | zh-cn|Subjective| | 273 | [{human, 题目:小明有 5 个...| | zh-cn|Subjective| +---+----------------------------+-----+----------+----------+ Run SQL queries Once you have your PySpark Dataframe ready, you can run SQL queries using spark.sql : Copied >>> from pyspark.sql import SparkSession >>> spark = SparkSession.builder.appName( "demo" ).getOrCreate() >>> df = read_parquet( "hf://datasets/BAAI/Infinity-Instruct/7M/*.parquet" , columns=[ "source" ]) >>> spark.sql( "SELECT source, count(*) AS total FROM {df} GROUP BY source ORDER BY total DESC" , df=df).show() +--------------------+-------+ | source| total| +--------------------+-------+ | flan| 2435840 | | Subjective| 1342427 | | OpenHermes- 2.5 | 855478 | | MetaMath| 690138 | | code_exercises| 590958 | |Orca-math-word-pr...| 398168 | | code_bagel| 386649 | | MathInstruct| 329254 | |python-code-datas...| 88632 | |instructional_cod...| 82920 | | CodeFeedback| 79513 | |self-oss-instruct...| 50467 | |Evol-Instruct-Cod...| 43354 | |CodeExercise-Pyth...| 27159 | |code_instructions...| 23130 | | Code-Instruct-700k| 10860 | |Glaive-code-assis...| 9281 | |python_code_instr...| 2581 | |Python-Code-23k-S...| 2297 | +--------------------+-------+ Write We also provide a helper function to write datasets in a distributed manner to a Hugging Face repository. You can write a PySpark Dataframe to Hugging Face using this write_parquet helper function based on the huggingface_hub API. 
In particular, it uses the preupload_lfs_files utility to upload the Parquet files in parallel in a distributed manner, and only commits the files once they're all uploaded:

Copied
import math
import pickle
import tempfile
from functools import partial
from typing import Iterator, Optional

import pyarrow as pa
import pyarrow.parquet as pq
from huggingface_hub import CommitOperationAdd, HfFileSystem
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.pandas.types import from_arrow_schema, to_arrow_schema


def _preupload(iterator: Iterator[pa.RecordBatch], path: str, schema: pa.Schema, filesystem: HfFileSystem, row_group_size: Optional[int] = None, **kwargs) -> Iterator[pa.RecordBatch]:
    resolved_path = filesystem.resolve_path(path)
    with tempfile.NamedTemporaryFile(suffix=".parquet") as temp_file:
        with pq.ParquetWriter(temp_file.name, schema=schema, **kwargs) as writer:
            for batch in iterator:
                writer.write_batch(batch, row_group_size=row_group_size)
        addition = CommitOperationAdd(path_in_repo=temp_file.name, path_or_fileobj=temp_file.name)
        filesystem._api.preupload_lfs_files(repo_id=resolved_path.repo_id, additions=[addition], repo_type=resolved_path.repo_type, revision=resolved_path.revision)
        yield pa.record_batch({"addition": [pickle.dumps(addition)]}, schema=pa.schema({"addition": pa.binary()}))


def _commit(iterator: Iterator[pa.RecordBatch], path: str, filesystem: HfFileSystem, max_operations_per_commit=50) -> Iterator[pa.RecordBatch]:
    resolved_path = filesystem.resolve_path(path)
    additions: list[CommitOperationAdd] = [pickle.loads(addition) for addition in pa.Table.from_batches(iterator, schema=pa.schema({"addition": pa.binary()}))[0].to_pylist()]
    num_commits = math.ceil(len(additions) / max_operations_per_commit)
    for shard_idx, addition in enumerate(additions):
        addition.path_in_repo = resolved_path.path_in_repo.replace("{shard_idx:05d}", f"{shard_idx:05d}")
    for i in range(0, num_commits):
        operations = additions[i * max_operations_per_commit : (i + 1) * max_operations_per_commit]
        commit_message = "Upload using PySpark" + (f" (part {i:05d}-of-{num_commits:05d})" if num_commits > 1 else "")
        filesystem._api.create_commit(repo_id=resolved_path.repo_id, repo_type=resolved_path.repo_type, revision=resolved_path.revision, operations=operations, commit_message=commit_message)
        yield pa.record_batch({"path": [addition.path_in_repo for addition in operations]}, schema=pa.schema({"path": pa.string()}))


def write_parquet(df: DataFrame, path: str, **kwargs) -> None:
    """
    Write Parquet files to Hugging Face using PyArrow.

    It uploads Parquet files in a distributed manner in two steps:

    1. Preupload the Parquet files in parallel in a distributed manner
    2. Commit the preuploaded files

    Authenticate using `huggingface-cli login` or by passing a token using the
    `storage_options` argument: `storage_options={"token": "hf_xxx"}`

    Parameters
    ----------
    path : str
        Path of the file or directory. Prefix with a protocol like `hf://` to
        write to Hugging Face. It writes Parquet files in the form
        "part-xxxxx.parquet", or to a single file if `path` ends with ".parquet".
    **kwargs
        Any additional kwargs are passed to pyarrow.parquet.ParquetWriter.

    Returns
    -------
    None
    Examples
    --------
    >>> import pandas as pd
    >>> df = spark.createDataFrame(pd.DataFrame({"foo": range(5), "bar": range(5, 10)}))
    >>> # Save to one file
    >>> write_parquet(df, "hf://datasets/username/dataset/data.parquet")
    >>> # OR save to a directory (possibly in many files)
    >>> write_parquet(df, "hf://datasets/username/dataset")
    """
    filesystem: HfFileSystem = kwargs.pop("filesystem", HfFileSystem(**kwargs.pop("storage_options", {})))
    if path.endswith(".parquet") or path.endswith(".pq"):
        df = df.coalesce(1)
    else:
        path += "/part-{shard_idx:05d}.parquet"
    df.mapInArrow(
        partial(_preupload, path=path, schema=to_arrow_schema(df.schema), filesystem=filesystem, **kwargs),
        from_arrow_schema(pa.schema({"addition": pa.binary()})),
    ).repartition(1).mapInArrow(
        partial(_commit, path=path, filesystem=filesystem),
        from_arrow_schema(pa.schema({"path": pa.string()})),
    ).collect()

Here is how we can use this function to write the filtered version of the BAAI/Infinity-Instruct dataset back to Hugging Face. First, you need to create a dataset repository, e.g. username/Infinity-Instruct-Chinese-Only (you can set it to private if you want). Then, make sure you are authenticated, and you can run:

Copied
>>> write_parquet(df_chinese_only, "hf://datasets/username/Infinity-Instruct-Chinese-Only")
tmph9jwu9py.parquet: 100%|██████████| 50.5M/50.5M [00:03<00:00, 14.6MB/s]
tmp0oqt99nc.parquet: 100%|██████████| 50.8M/50.8M [00:02<00:00, 17.9MB/s]
tmpgnizkwqp.parquet: 100%|██████████| 50.5M/50.5M [00:02<00:00, 19.6MB/s]
tmpanm04k4n.parquet: 100%|██████████| 51.4M/51.4M [00:02<00:00, 22.9MB/s]
tmp14uy9oqb.parquet: 100%|██████████| 50.4M/50.4M [00:02<00:00, 23.0MB/s]
tmpcp8t_qdl.parquet: 100%|██████████| 50.4M/50.4M [00:02<00:00, 23.5MB/s]
tmpjui5mns8.parquet: 100%|██████████| 50.3M/50.3M [00:02<00:00, 24.1MB/s]
tmpydqh6od1.parquet: 100%|██████████| 50.9M/50.9M [00:02<00:00, 23.8MB/s]
tmp52f2t8tu.parquet: 100%|██████████| 50.5M/50.5M [00:02<00:00, 23.7MB/s]
tmpg7egv3ye.parquet: 100%|██████████| 50.1M/50.1M [00:06<00:00, 7.68MB/s]
tmp2s0fq2hm.parquet: 100%|██████████| 50.8M/50.8M [00:02<00:00, 18.1MB/s]
tmpmj97ab30.parquet: 100%|██████████| 71.3M/71.3M [00:02<00:00, 23.9MB/s]

Run in JupyterLab on Hugging Face Spaces

You can duplicate the Spark on HF JupyterLab Space to get a notebook with PySpark and those helper functions pre-installed. Click on "Duplicate Space", choose a name for your Space, select your hardware, and you are ready to go.
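Whether you run in the duplicated Space or locally, a quick sanity check after the upload can confirm that the Parquet shards landed where you expect. The snippet below is not part of the helpers above; it is a small sketch using huggingface_hub's HfFileSystem, and username/Infinity-Instruct-Chinese-Only is the same placeholder repository name used earlier:

Copied
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()  # assumes you are already authenticated (huggingface-cli login)
>>> for f in fs.glob("datasets/username/Infinity-Instruct-Chinese-Only/*.parquet"):
...     print(f, fs.size(f))  # one "part-xxxxx.parquet" entry per uploaded shard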
FourierFT__Discrete_Fourier_Transformation_Fine_Tu.txt

FourierFT: Discrete Fourier Transformation Fine-Tuning

FourierFT is a parameter-efficient fine-tuning technique that leverages the Discrete Fourier Transform to compress the model's tunable weights. This method outperforms LoRA on the GLUE benchmark and common ViT classification tasks with far fewer parameters.

FourierFT currently has the following constraints:
Only nn.Linear layers are supported.
Quantized layers are not supported.
If these constraints don't work for your use case, consider other methods instead.

The abstract from the paper is:
Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices A and B to represent the weight change, i.e., Delta W=BA. Despite LoRA's progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by enjoying the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats Delta W as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we implement the inverse discrete Fourier transform to recover Delta W. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification.
For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA’s 33.5M. FourierFTConfig class peft. FourierFTConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False n_frequency : int = 1000 scaling : float = 150.0 random_loc_seed : Optional[int] = 777 fan_in_fan_out : bool = False target_modules : Optional[Union[list[str], str]] = None exclude_modules : Optional[Union[list[str], str]] = None bias : str = 'none' modules_to_save : Optional[list[str]] = None layers_to_transform : Optional[Union[list[int], int]] = None layers_pattern : Optional[Union[list[str], str]] = None n_frequency_pattern : Optional[dict] = <factory> init_weights : bool = False ) Parameters n_frequency ( int ) — Num of learnable frequencies for the Discrete Fourier Transform. ‘n_frequency’ is an integer that is greater than 0 and less than or equal to d^2 (assuming the weight W has dimensions of d by d). Additionally, it is the number of trainable parameters required to update each delta W weight. ‘n_frequency’ will affect the performance and efficiency for PEFT. Specifically, it has little impact on training speed, but higher values of it (typically) result in larger GPU memory costs and better accuracy. With the same target_modules , the number of parameters of LoRA is (2 d r/n_frequency) times that of FourierFT. The following examples of settings regarding ‘n_frequency’ can be used as reference for users. For NLU tasks with the RoBERTa-large model, adopting ‘n_frequency’: 1000 can almost achieve similar results as ‘r’: 8 in LoRA. At this time, the number of parameters of LoRA is about 16 times that of FourierFT. For image classification tasks with Vit-large models, adopting ‘n_frequency’: 3000 can almost achieve similar results as ‘r’: 16 in LoRA, where the number of parameters of LoRA is about 11 times that of FourierFT. scaling ( float ) — The scaling value for the delta W matrix. This is an important hyperparameter used for scaling, similar to the ‘lora_alpha’ parameter in the LoRA method. ‘scaling’ can be determined during the hyperparameter search process. However, if users want to skip this process, one can refer to the settings in the following scenarios. This parameter can be set to 100.0 or 150.0 for both RoBERTa-base and RoBERTa-large models across all NLU (GLUE) tasks. This parameter can be set to 300.0 for both LLaMA family models for all instruction tuning. This parameter can be set to 300.0 for both ViT-base and ViT-large models across all image classification tasks. random_loc_seed ( int ) — Seed for the random location of the frequencies, i.e., the spectral entry matrix. target_modules ( Union[list[str],str] ) — List of module names or regex expression of the module names to replace with FourierFT. For example, [‘q’, ‘v’] or ‘. decoder. (SelfAttention|EncDecAttention).*(q|v)$‘. Only linear layers are supported. exclude_modules ( Optional[Union[List[str], str]] ) — The names of the modules to not apply the adapter. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. 
fan_in_fan_out ( bool ) — Set this to True if the layer to replace stores weight like (fan_in, fan_out). bias ( str ) — Bias type for FourierFT. Can be ‘none’, ‘all’ or ‘fourier_only’. modules_to_save ( list[str] ) — List of modules apart from FourierFT layers to be set as trainable and saved in the final checkpoint. For example, in Sequence Classification or Token Classification tasks, the final layer classifier/score are randomly initialized and as such need to be trainable and saved. layers_to_transform ( Union[list[int],int] ) — The layer indexes to transform, is this argument is specified, PEFT will transform only the layers indexes that are specified inside this list. If a single integer is passed, PEFT will transform only the layer at this index. layers_pattern ( Optional[Union[List[str], str]] ) — The layer pattern name, used only if layers_to_transform is different to None and if the layer pattern is not in the common layers pattern. This should target the nn.ModuleList of the model, which is often called 'layers' or 'h' . n_frequency_pattern ( dict ) — The mapping from layer names or regexp expression to n_frequency which are different from the default specified. For example, {model.decoder.layers.0.encoder_attn.k_proj: 1000 }. init_weights ( bool ) — The initialization of the Fourier weights. Set this to False if the spectrum are initialized to a standard normal distribution. Set this to True if the spectrum are initialized to zeros. This is the configuration class to store the configuration of a FourierFTModel . FourierFTModel class peft. FourierFTModel < source > ( model config adapter_name low_cpu_mem_usage : bool = False ) → torch.nn.Module Parameters model ( torch.nn.Module ) — The model to be adapted. config ( FourierFTConfig ) — The configuration of the FourierFT model. adapter_name ( str ) — The name of the adapter, defaults to "default" . low_cpu_mem_usage ( bool , optional , defaults to False ) — Create empty adapter weights on meta device. Useful to speed up the loading process. Returns torch.nn.Module The FourierFT model. Creates FourierFT model from a pretrained transformers model. The method is described in detail in https://arxiv.org/abs/2405.03003 . Attributes : model ( PreTrainedModel ) — The model to be adapted. peft_config ( FourierFTConfig ): The configuration of the Fourier model. delete_adapter < source > ( adapter_name : str ) Parameters adapter_name (str) — Name of the adapter to be deleted. Deletes an existing adapter. disable_adapter_layers < source > ( ) Disable all adapters. When disabling all adapters, the model output corresponds to the output of the base model. enable_adapter_layers < source > ( ) Enable all adapters. Call this if you have previously disabled all adapters and want to re-enable them. merge_and_unload < source > ( progressbar : bool = False safe_merge : bool = False adapter_names : Optional[list[str]] = None ) Parameters progressbar ( bool ) — whether to show a progressbar indicating the unload and merge process safe_merge ( bool ) — whether to activate the safe merging check to check if there is any potential Nan in the adapter weights adapter_names ( List[str] , optional ) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None . This method merges the Fourier layers into the base model. This is needed if someone wants to use the base model as a standalone model. 
set_adapter < source > ( adapter_name : str | list[str] ) Parameters adapter_name ( str or list[str] ) — Name of the adapter(s) to be activated. Set the active adapter(s). unload < source > ( ) Gets back the base model by removing all the Fourier modules without merging. This gives back the original base model. < > Update on GitHub ← VeRA VB-LoRA → FourierF T: Discrete Fourier Transformation Fine- Tuning FourierFT Config FourierFT Model |
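To show how the two classes documented above fit together, here is a minimal, illustrative sketch using get_peft_model. The base model, target modules, and hyperparameter values are assumptions guided by the parameter descriptions above (n_frequency=1000 and scaling=150.0 are the reference settings mentioned for RoBERTa-large on NLU tasks); this is not an official recipe:

Copied
# Illustrative sketch: wrap a sequence classification model with FourierFT.
# Model choice, target_modules and hyperparameters are assumptions, not a recipe.
from transformers import AutoModelForSequenceClassification
from peft import FourierFTConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

config = FourierFTConfig(
    task_type="SEQ_CLS",
    target_modules=["query", "value"],  # nn.Linear layers only
    n_frequency=1000,                   # learnable spectral coefficients per adapted layer
    scaling=150.0,                      # plays a role similar to lora_alpha in LoRA
    modules_to_save=["classifier"],     # keep the randomly initialized head trainable
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()

From here the wrapped model trains like any other PeftModel, and merge_and_unload() folds the learned spectral updates back into the base weights when you need a standalone model.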
Interface__UserInfo.txt

Interface: UserInfo

Properties

canPay
• Optional canPay: boolean
Hugging Face field. Whether the user has a payment method set up. Needs "read-billing" scope.
Defined in hub/src/lib/oauth-handle-redirect.ts:45

email
• Optional email: string
OpenID Connect field, available if scope "email" was granted.
Defined in hub/src/lib/oauth-handle-redirect.ts:24

email_verified
• Optional email_verified: boolean
OpenID Connect field, available if scope "email" was granted.
Defined in hub/src/lib/oauth-handle-redirect.ts:20

isPro
• isPro: boolean
Hugging Face field. Whether the user is a pro user.
Defined in hub/src/lib/oauth-handle-redirect.ts:41

name
• name: string
OpenID Connect field. The user's full name.
Defined in hub/src/lib/oauth-handle-redirect.ts:12

orgs
• Optional orgs: { canPay?: boolean; isEnterprise: boolean; missingMFA?: boolean; name: string; pendingSSO?: boolean; picture: string; preferred_username: string; roleInOrg?: string; sub: string }[]
Hugging Face field. The user's orgs.
Defined in hub/src/lib/oauth-handle-redirect.ts:49

picture
• picture: string
OpenID Connect field. The user's profile picture URL.
Defined in hub/src/lib/oauth-handle-redirect.ts:28

preferred_username
• preferred_username: string
OpenID Connect field. The user's username.
Defined in hub/src/lib/oauth-handle-redirect.ts:16

profile
• profile: string
OpenID Connect field. The user's profile URL.
Defined in hub/src/lib/oauth-handle-redirect.ts:32

sub
• sub: string
OpenID Connect field. Unique identifier for the user, even in case of rename.
Defined in hub/src/lib/oauth-handle-redirect.ts:8

website
• Optional website: string
OpenID Connect field. The user's website URL.
Defined in hub/src/lib/oauth-handle-redirect.ts:36
Unconditional_image_generation.txt

Unconditional image generation

Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image.
Copied
from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = generator().images[0]
image

Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images.

The output image is a PIL.Image object that can be saved:

Copied
image.save("generated_image.png")

You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it'll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality.

Copied
image = generator(num_inference_steps=100).images[0]
image
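If you want reproducible outputs or more than one butterfly at a time, a seeded torch.Generator and the batch_size argument can be passed to the pipeline call. This is a small sketch rather than part of the guide above; the seed and batch size are arbitrary, and the pipeline variable is renamed here to avoid confusion with the torch generator:

Copied
# Illustrative sketch: seeded, batched unconditional generation.
# The seed (42) and batch_size (4) are arbitrary choices.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")

seed_generator = torch.Generator(device="cuda").manual_seed(42)  # fixes the initial noise
images = pipeline(batch_size=4, generator=seed_generator, num_inference_steps=100).images

for i, image in enumerate(images):
    image.save(f"butterfly_{i}.png")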
Data.txt | Data Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up timm documentation Data timm 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.0.14 v0.9.16 EN Get started Home Quickstart Installation Changelog Tutorials Using Pretrained Models as Feature Extractors Training With The Official Training Script Share and Load Models from the 🤗 Hugging Face Hub Model Pages Reference Models Data Optimizers Learning Rate Schedulers Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Data timm.data.create_dataset < source > ( name : str root : typing.Optional[str] = None split : str = 'validation' search_split : bool = True class_map : dict = None load_bytes : bool = False is_training : bool = False download : bool = False batch_size : int = 1 num_samples : typing.Optional[int] = None seed : int = 42 repeats : int = 0 input_img_mode : str = 'RGB' trust_remote_code : bool = False **kwargs ) Parameters name — Dataset name, empty is okay for folder based datasets root — Root folder of dataset (All) split — Dataset split (All) search_split — Search for split specific child fold from root so one can specify imagenet/ instead of /imagenet/val , etc on cmd line / config. (Folder, Torch) class_map — Specify class -> index mapping via text file or dict (Folder) load_bytes — Load data, return images as undecoded bytes (Folder) download — Download dataset if not present and supported (HFIDS, TFDS, Torch) is_training — Create dataset in train mode, this is different from the split. For Iterable / TDFS it enables shuffle, ignored for other datasets. (TFDS, WDS, HFIDS) batch_size — Batch size hint for iterable datasets (TFDS, WDS, HFIDS) seed — Seed for iterable datasets (TFDS, WDS, HFIDS) repeats — Dataset repeats per iteration i.e. epoch (TFDS, WDS, HFIDS) input_img_mode — Input image color conversion mode e.g. 
‘RGB’, ‘L’ (folder, TFDS, WDS, HFDS, HFIDS) trust_remote_code — Trust remote code in Hugging Face Datasets if True (HFDS, HFIDS) * *kwargs — Other args to pass through to underlying Dataset and/or Reader classes Dataset factory method In parentheses after each arg are the type of dataset supported for each arg, one of: Folder - default, timm folder (or tar) based ImageDataset Torch - torchvision based datasets HFDS - Hugging Face Datasets HFIDS - Hugging Face Datasets Iterable (streaming mode, with IterableDataset) TFDS - Tensorflow-datasets wrapper in IterabeDataset interface via IterableImageDataset WDS - Webdataset All - any of the above timm.data.create_loader < source > ( dataset : typing.Union[timm.data.dataset.ImageDataset, timm.data.dataset.IterableImageDataset] input_size : typing.Union[int, typing.Tuple[int, int], typing.Tuple[int, int, int]] batch_size : int is_training : bool = False no_aug : bool = False re_prob : float = 0.0 re_mode : str = 'const' re_count : int = 1 re_split : bool = False train_crop_mode : typing.Optional[str] = None scale : typing.Optional[typing.Tuple[float, float]] = None ratio : typing.Optional[typing.Tuple[float, float]] = None hflip : float = 0.5 vflip : float = 0.0 color_jitter : float = 0.4 color_jitter_prob : typing.Optional[float] = None grayscale_prob : float = 0.0 gaussian_blur_prob : float = 0.0 auto_augment : typing.Optional[str] = None num_aug_repeats : int = 0 num_aug_splits : int = 0 interpolation : str = 'bilinear' mean : typing.Tuple[float, ...] = (0.485, 0.456, 0.406) std : typing.Tuple[float, ...] = (0.229, 0.224, 0.225) num_workers : int = 1 distributed : bool = False crop_pct : typing.Optional[float] = None crop_mode : typing.Optional[str] = None crop_border_pixels : typing.Optional[int] = None collate_fn : typing.Optional[typing.Callable] = None pin_memory : bool = False fp16 : bool = False img_dtype : dtype = torch.float32 device : device = device(type='cuda') use_prefetcher : bool = True use_multi_epochs_loader : bool = False persistent_workers : bool = True worker_seeding : str = 'all' tf_preprocessing : bool = False ) Parameters dataset — The image dataset to load. input_size — Target input size (channels, height, width) tuple or size scalar. batch_size — Number of samples in a batch. is_training — Return training (random) transforms. no_aug — Disable augmentation for training (useful for debug). re_prob — Random erasing probability. re_mode — Random erasing fill mode. re_count — Number of random erasing regions. re_split — Control split of random erasing across batch size. scale — Random resize scale range (crop area, < 1.0 => zoom in). ratio — Random aspect ratio range (crop ratio for RRC, ratio adjustment factor for RKR). hflip — Horizontal flip probability. vflip — Vertical flip probability. color_jitter — Random color jitter component factors (brightness, contrast, saturation, hue). Scalar is applied as (scalar,) * 3 (no hue). color_jitter_prob — Apply color jitter with this probability if not None (for SimlCLR-like aug grayscale_prob — Probability of converting image to grayscale (for SimCLR-like aug). gaussian_blur_prob — Probability of applying gaussian blur (for SimCLR-like aug). auto_augment — Auto augment configuration string (see auto_augment.py). num_aug_repeats — Enable special sampler to repeat same augmentation across distributed GPUs. num_aug_splits — Enable mode where augmentations can be split across the batch. interpolation — Image interpolation mode. mean — Image normalization mean. 
std — Image normalization standard deviation. num_workers — Num worker processes per DataLoader. distributed — Enable dataloading for distributed training. crop_pct — Inference crop percentage (output size / resize size). crop_mode — Inference crop mode. One of [‘squash’, ‘border’, ‘center’]. Defaults to ‘center’ when None. crop_border_pixels — Inference crop border of specified # pixels around edge of original image. collate_fn — Override default collate_fn. pin_memory — Pin memory for device transfer. fp16 — Deprecated argument for half-precision input dtype. Use img_dtype. img_dtype — Data type for input image. device — Device to transfer inputs and targets to. use_prefetcher — Use efficient pre-fetcher to load samples onto device. use_multi_epochs_loader — persistent_workers — Enable persistent worker processes. worker_seeding — Control worker random seeding at init. tf_preprocessing — Use TF 1.0 inference preprocessing for testing model ports. timm.data.create_transform < source > ( input_size : typing.Union[int, typing.Tuple[int, int], typing.Tuple[int, int, int]] = 224 is_training : bool = False no_aug : bool = False train_crop_mode : typing.Optional[str] = None scale : typing.Optional[typing.Tuple[float, float]] = None ratio : typing.Optional[typing.Tuple[float, float]] = None hflip : float = 0.5 vflip : float = 0.0 color_jitter : typing.Union[float, typing.Tuple[float, ...]] = 0.4 color_jitter_prob : typing.Optional[float] = None grayscale_prob : float = 0.0 gaussian_blur_prob : float = 0.0 auto_augment : typing.Optional[str] = None interpolation : str = 'bilinear' mean : typing.Tuple[float, ...] = (0.485, 0.456, 0.406) std : typing.Tuple[float, ...] = (0.229, 0.224, 0.225) re_prob : float = 0.0 re_mode : str = 'const' re_count : int = 1 re_num_splits : int = 0 crop_pct : typing.Optional[float] = None crop_mode : typing.Optional[str] = None crop_border_pixels : typing.Optional[int] = None tf_preprocessing : bool = False use_prefetcher : bool = False normalize : bool = True separate : bool = False ) Parameters input_size — Target input size (channels, height, width) tuple or size scalar. is_training — Return training (random) transforms. no_aug — Disable augmentation for training (useful for debug). train_crop_mode — Training random crop mode (‘rrc’, ‘rkrc’, ‘rkrr’). scale — Random resize scale range (crop area, < 1.0 => zoom in). ratio — Random aspect ratio range (crop ratio for RRC, ratio adjustment factor for RKR). hflip — Horizontal flip probability. vflip — Vertical flip probability. color_jitter — Random color jitter component factors (brightness, contrast, saturation, hue). Scalar is applied as (scalar,) * 3 (no hue). color_jitter_prob — Apply color jitter with this probability if not None (for SimlCLR-like aug). grayscale_prob — Probability of converting image to grayscale (for SimCLR-like aug). gaussian_blur_prob — Probability of applying gaussian blur (for SimCLR-like aug). auto_augment — Auto augment configuration string (see auto_augment.py). interpolation — Image interpolation mode. mean — Image normalization mean. std — Image normalization standard deviation. re_prob — Random erasing probability. re_mode — Random erasing fill mode. re_count — Number of random erasing regions. re_num_splits — Control split of random erasing across batch size. crop_pct — Inference crop percentage (output size / resize size). crop_mode — Inference crop mode. One of [‘squash’, ‘border’, ‘center’]. Defaults to ‘center’ when None. 
crop_border_pixels — Inference crop border of specified # pixels around edge of original image. tf_preprocessing — Use TF 1.0 inference preprocessing for testing model ports use_prefetcher — Pre-fetcher enabled. Do not convert image to tensor or normalize. normalize — Normalization tensor output w/ provided mean/std (if prefetcher not used). separate — Output transforms in 3-stage tuple. timm.data.resolve_data_config < source > ( args = None pretrained_cfg = None model = None use_test_size = False verbose = False ) < > Update on GitHub ← Models Optimizers → Data |
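As a quick illustration of how these factory functions are typically chained together (the model name and the local dataset path below are assumptions, not values from this page):

Copied
# Illustrative sketch: resolve a model's preprocessing config, build a folder-based
# validation dataset, and create a loader for it. "path/to/imagenet" is a placeholder.
import timm
from timm.data import create_dataset, create_loader, create_transform, resolve_data_config

model = timm.create_model("resnet50.a1_in1k", pretrained=True)

data_config = resolve_data_config(model=model)   # input_size, mean, std, crop_pct, ...
transform = create_transform(**data_config)       # single-image eval transform, if needed

dataset = create_dataset(name="", root="path/to/imagenet", split="validation")

loader = create_loader(
    dataset,
    input_size=data_config["input_size"],
    batch_size=32,
    is_training=False,
    interpolation=data_config["interpolation"],
    mean=data_config["mean"],
    std=data_config["std"],
    crop_pct=data_config["crop_pct"],
    use_prefetcher=False,   # the default prefetcher expects a CUDA device
    num_workers=2,
)

for images, targets in loader:
    print(images.shape, targets.shape)  # e.g. torch.Size([32, 3, 224, 224]) torch.Size([32])
    break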
Model_memory_estimator.txt

Model memory estimator

One very difficult aspect when exploring potential models to use on your machine is knowing just how big of a model will fit into memory with your current graphics card (such as loading the model onto CUDA). To help alleviate this, Accelerate has a CLI interface through accelerate estimate-memory. This tutorial will help walk you through using it, what to expect, and at the end link to the interactive demo hosted on the Hub which will even let you post those results directly on the model repo!

Currently we support searching for models that can be used in timm and transformers.

This API will load the model into memory on the meta device, so we are not actually downloading and loading the full weights of the model into memory, nor do we need to. As a result it's perfectly fine to measure 8 billion parameter models (or more), without having to worry about if your CPU can handle it!

Gradio Demos

Below are a few gradio demos related to what was described above.
The first is the official Hugging Face memory estimation Space, which uses Accelerate directly.

A community member has taken the idea and expanded it further, allowing you to filter models directly and see if you can run a particular LLM given GPU constraints and LoRA configurations. To play with it, see here for more details.

The Command

When using accelerate estimate-memory, you need to pass in the name of the model you want to use, potentially the framework that model utilizes (if it can't be found automatically), and the data types you want the model to be loaded in with.

For example, here is how we can calculate the memory footprint for bert-base-cased:

Copied
accelerate estimate-memory bert-base-cased

This will download the config.json for bert-base-cased, load the model on the meta device, and report back how much space it will use:

Memory Usage for loading bert-base-cased:
dtype   | Largest Layer | Total Size | Training using Adam
float32 | 84.95 MB      | 413.18 MB  | 1.61 GB
float16 | 42.47 MB      | 206.59 MB  | 826.36 MB
int8    | 21.24 MB      | 103.29 MB  | 413.18 MB
int4    | 10.62 MB      | 51.65 MB   | 206.59 MB

By default it will return all the supported dtypes (int4 through float32), but if you are interested in specific ones, these can be filtered.

Specific libraries

If the source library cannot be determined automatically (like it could in the case of bert-base-cased), a library name can be passed in.

Copied
accelerate estimate-memory HuggingFaceM4/idefics-80b-instruct --library_name transformers

Memory Usage for loading HuggingFaceM4/idefics-80b-instruct:
dtype   | Largest Layer | Total Size | Training using Adam
float32 | 3.02 GB       | 297.12 GB  | 1.16 TB
float16 | 1.51 GB       | 148.56 GB  | 594.24 GB
int8    | 772.52 MB     | 74.28 GB   | 297.12 GB
int4    | 386.26 MB     | 37.14 GB   | 148.56 GB

Copied
accelerate estimate-memory timm/resnet50.a1_in1k --library_name timm

Memory Usage for loading timm/resnet50.a1_in1k:
dtype   | Largest Layer | Total Size | Training using Adam
float32 | 9.0 MB        | 97.7 MB    | 390.78 MB
float16 | 4.5 MB        | 48.85 MB   | 195.39 MB
int8    | 2.25 MB       | 24.42 MB   | 97.7 MB
int4    | 1.12 MB       | 12.21 MB   | 48.85 MB

Specific dtypes

As mentioned earlier, while we return int4 through float32 by default, any dtype can be used from float32, float16, int8, and int4. To do so, pass them in after specifying --dtypes:

Copied
accelerate estimate-memory bert-base-cased --dtypes float32 float16

Memory Usage for loading bert-base-cased:
dtype   | Largest Layer | Total Size | Training using Adam
float32 | 84.95 MB      | 413.18 MB  | 1.61 GB
float16 | 42.47 MB      | 206.59 MB  | 826.36 MB

Caveats with this calculator

This calculator will tell you how much memory is needed to purely load the model in, not to perform inference. This calculation is accurate within a few % of the actual value, so it is a very good view of just how much memory it will take. For instance, loading bert-base-cased actually takes 413.68 MB when loaded on CUDA in full precision, and the calculator estimates 413.18 MB. When performing inference you can expect to add up to an additional 20% as found by EleutherAI. We'll be conducting research into finding a more accurate estimate of these values, and will update this calculator once done.
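To make the numbers above a bit more tangible, the back-of-the-envelope arithmetic behind them can be written out in a few lines. This is an illustration of the rough math (bytes per parameter times parameter count, with training under Adam costing roughly 4x the loaded size for weights, gradients, and two optimizer states), not the exact code the estimate-memory command runs; the ~108M parameter count for bert-base-cased is an assumption:

Copied
# Illustrative arithmetic only, not Accelerate's implementation.
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

def estimate_sizes(num_parameters: int) -> dict:
    sizes = {}
    for dtype, nbytes in BYTES_PER_PARAM.items():
        total_mb = num_parameters * nbytes / 1024**2
        sizes[dtype] = {
            "total_size_mb": total_mb,
            # ~4x for training with Adam: weights + gradients + two optimizer states
            "training_with_adam_mb": 4 * total_mb,
        }
    return sizes

# bert-base-cased has roughly 108 million parameters (assumed here)
for dtype, est in estimate_sizes(108_000_000).items():
    print(f"{dtype:>8}: load ~{est['total_size_mb']:.0f} MB, train (Adam) ~{est['training_with_adam_mb']:.0f} MB")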
Image_Classification_&_Regression.txt | Image Classification & Regression Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AutoTrain documentation Image Classification & Regression AutoTrain 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.8.24 v0.7.129 v0.6.48 v0.5.2 EN Getting Started 🤗 AutoTrain How much does it cost? Get help and support Frequently Asked Questions Quickstart Train on Spaces Python SDK Train Locally Config File Tasks LLM Finetuning Text Classification/Regression Extractive QA Sentence Transformer Image Classification / Regression Object Detection Seq2Seq Token Classification Tabular Miscellaneous Understanding Column Mapping AutoTrain API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.8.24 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Image Classification & Regression Image classification is a form of supervised learning where a model is trained to identify and categorize objects within images. AutoTrain simplifies the process, enabling you to train a state-of-the-art image classification model by simply uploading labeled example images. Image regression/scoring is a form of supervised learning where a model is trained to predict a score or value for an image. AutoTrain simplifies the process, enabling you to train a state-of-the-art image scoring model by simply uploading labeled example images. Preparing your data To ensure your image classification model trains effectively, follow these guidelines for preparing your data: Organizing Images For Image Classification Prepare a zip file containing your categorized images. Each category should have its own subfolder named after the class it represents. For example, to differentiate between ‘cats’ and ‘dogs’, your zip file structure should resemble the following: Copied cats_and_dogs .zip ├── cats │ ├── cat. 1 .jpg │ ├── cat. 2 .jpg │ ├── cat. 3 .jpg │ └── ... └── dogs ├── dog. 1 .jpg ├── dog. 2 .jpg ├── dog. 3 .jpg └── ... You can also use a dataset from the Hugging Face Hub. Example dataset from Hugging Face Hub: truepositive/hotdog_nothotdog . Organizing Images for Image Regression/Scoring Prepare a zip file containing your images and metadata.jsonl. Copied Archive .zip ├── 0001 .png ├── 0002 .png ├── 0003 .png ├── . ├── . ├── . └── metadata.jsonl Example for metadata.jsonl : Copied { "file_name" : "0001.png" , "target" : 0.5 } { "file_name" : "0002.png" , "target" : 0.7 } { "file_name" : "0003.png" , "target" : 0.3 } Please note that metadata.jsonl should contain the file_name and the target value for each image. You can also use a dataset from the Hugging Face Hub. Example dataset from Hugging Face Hub: abhishek/img-quality-full . 
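If your scores start out in a Python dict or a spreadsheet, a few lines of code can produce metadata.jsonl in exactly this format. The folder name and the scores below are made-up placeholders:

Copied
# Hypothetical helper: build metadata.jsonl for image regression/scoring.
# "Archive" and the scores dict are placeholders for your own images and targets.
import json
from pathlib import Path

image_dir = Path("Archive")  # the folder you will zip and upload to AutoTrain
image_dir.mkdir(exist_ok=True)

scores = {"0001.png": 0.5, "0002.png": 0.7, "0003.png": 0.3}

with open(image_dir / "metadata.jsonl", "w") as f:
    for file_name, target in scores.items():
        f.write(json.dumps({"file_name": file_name, "target": target}) + "\n")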
Image Requirements Format: Ensure all images are in JPEG, JPG, or PNG format. Quantity: Include at least 5 images per class to provide the model with sufficient examples for learning. Exclusivity: The zip file should exclusively contain folders named after the classes, and these folders should only contain relevant images. No additional files or nested folders should be included. Additional Tips Uniformity: While not required, having images of similar sizes and resolutions can help improve model performance. Variability: Include a variety of images for each class to encompass the range of appearances and contexts the model might encounter in real-world scenarios. Some points to keep in mind: The zip file should contain multiple folders (the classes), each folder should contain images of a single class. The name of the folder should be the name of the class. The images must be jpeg, jpg or png. There should be at least 5 images per class. There must not be any other files in the zip file. There must not be any other folders inside the zip folder. When train.zip is decompressed, it creates two folders: cats and dogs. these are the two categories for classification. The images for both categories are in their respective folders. You can have as many categories as you want. Column Mapping For image classification, if you are using a zip dataset format, the column mapping should be default and should not be changed. Copied data: . . . column_mapping: image_column: image target_column: label For image regression, the column mapping must be as follows: Copied data: . . . column_mapping: image_column: image target_column: target For image regression, metadata.jsonl should contain the file_name and the target value for each image. If you are using a dataset from the Hugging Face Hub, you should set appropriate column mappings based on the dataset. Training Local Training To train the model locally, create a configuration file (config.yaml) with the following content: Copied task: image_classification base_model: google/vit-base-patch16-224 project_name: autotrain-cats-vs-dogs-finetuned log: tensorboard backend: local data: path: cats_vs_dogs train_split: train valid_split: null column_mapping: image_column: image target_column: label params: epochs: 2 batch_size: 4 lr: 2e-5 optimizer: adamw_torch scheduler: linear gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true Here, we are using cats_and_dogs dataset from Hugging Face Hub. The model is trained for 2 epochs with a batch size of 4 and a learning rate of 2e-5 . We are using the adamw_torch optimizer and the linear scheduler. We are also using mixed precision training with a gradient accumulation of 1. In order to use a local dataset, you can change the data section to: Copied data: path: data/ train_split: train # this folder inside data/ will be used for training, it contains the images in subfolders. valid_split: valid # this folder inside data/ will be used for validation, it contains the images in subfolders. can also be null. 
column_mapping: image_column: image target_column: label Similarly, for image regression, you can use the following configuration file: Copied task: image_regression base_model: microsoft/resnet-50 project_name: autotrain-img-quality-resnet50 log: tensorboard backend: local data: path: abhishek/img-quality-full train_split: train valid_split: null column_mapping: image_column: image target_column: target params: epochs: 10 batch_size: 8 lr: 2e-3 optimizer: adamw_torch scheduler: cosine gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true To train the model, run the following command: Copied $ autotrain --config config.yaml This will start the training process and save the model to the Hugging Face Hub after training is complete. In case you dont want to save the model to the hub, you can set push_to_hub to false in the configuration file. Training on Hugging Face Spaces To train the model on Hugging Face Spaces, create a training space as described in Quickstart section. An example UI for training an image scoring model on Hugging Face Spaces is shown below: In this example, we are training an image scoring model using the microsoft/resnet-50 model on the abhishek/img-quality-full dataset. We are training the model for 3 epochs with a batch size of 8 and a learning rate of 5e-5 . We are using the adamw_torch optimizer and the linear scheduler. We are also using mixed precision training with a gradient accumulation of 1. Note how the column mapping has now been changed and target points to quality_mos column in the dataset. To train the model, click on the Start Training button. This will start the training process and save the model to the Hugging Face Hub after training is complete. Parameters Image Classification Parameters class autotrain.trainers.image_classification.params. ImageClassificationParams < source > ( data_path : str = None model : str = 'google/vit-base-patch16-224' username : typing.Optional[str] = None lr : float = 5e-05 epochs : int = 3 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' image_column : str = 'image' target_column : str = 'target' log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Pre-trained model name or path. Default is “google/vit-base-patch16-224”. username (Optional[str]) — Hugging Face account username. lr (float) — Learning rate for the optimizer. Default is 5e-5. epochs (int) — Number of epochs for training. Default is 3. batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Warmup ratio for learning rate scheduler. Default is 0.1. gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1. optimizer (str) — Optimizer type. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler type. Default is “linear”. weight_decay (float) — Weight decay for the optimizer. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. 
Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Automatically find optimal batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). save_total_limit (int) — Maximum number of checkpoints to keep. Default is 1. token (Optional[str]) — Hugging Face Hub token for authentication. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy during training. Default is “epoch”. image_column (str) — Column name for images in the dataset. Default is “image”. target_column (str) — Column name for target labels in the dataset. Default is “target”. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping. Default is 0.01. ImageClassificationParams is a configuration class for image classification training parameters. Image Regression Parameters class autotrain.trainers.image_regression.params. ImageRegressionParams < source > ( data_path : str = None model : str = 'google/vit-base-patch16-224' username : typing.Optional[str] = None lr : float = 5e-05 epochs : int = 3 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' image_column : str = 'image' target_column : str = 'target' log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Name of the model to use. Default is “google/vit-base-patch16-224”. username (Optional[str]) — Hugging Face Username. lr (float) — Learning rate. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. batch_size (int) — Training batch size. Default is 8. warmup_ratio (float) — Warmup proportion. Default is 0.1. gradient_accumulation (int) — Gradient accumulation steps. Default is 1. optimizer (str) — Optimizer to use. Default is “adamw_torch”. scheduler (str) — Scheduler to use. Default is “linear”. weight_decay (float) — Weight decay. Default is 0.0. max_grad_norm (float) — Max gradient norm. Default is 1.0. seed (int) — Random seed. Default is 42. train_split (str) — Train split name. Default is “train”. valid_split (Optional[str]) — Validation split name. logging_steps (int) — Logging steps. Default is -1. project_name (str) — Output directory name. Default is “project-name”. auto_find_batch_size (bool) — Whether to auto find batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision type (fp16, bf16, or None). save_total_limit (int) — Save total limit. Default is 1. 
token (Optional[str]) — Hub Token. push_to_hub (bool) — Whether to push to hub. Default is False. eval_strategy (str) — Evaluation strategy. Default is “epoch”. image_column (str) — Image column name. Default is “image”. target_column (str) — Target column name. Default is “target”. log (str) — Logging using experiment tracking. Default is “none”. early_stopping_patience (int) — Early stopping patience. Default is 5. early_stopping_threshold (float) — Early stopping threshold. Default is 0.01. ImageRegressionParams is a configuration class for image regression training parameters. < > Update on GitHub ← Sentence Transformer Object Detection → Image Classification & Regression Preparing your data Organizing Images For Image Classification Organizing Images for Image Regression/ Scoring Image Requirements Column Mapping Training Local Training Training on Hugging Face Spaces Parameters Image Classification Parameters Image Regression Parameters |
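The same parameters can also be put together programmatically with the class documented above. The values below simply mirror the earlier cats-vs-dogs config.yaml and are illustrative; the usual entry point remains autotrain --config config.yaml:

Copied
# Illustrative only: building an ImageClassificationParams object with values that
# mirror the config.yaml example above. data_path is a placeholder.
from autotrain.trainers.image_classification.params import ImageClassificationParams

params = ImageClassificationParams(
    data_path="cats_vs_dogs",
    model="google/vit-base-patch16-224",
    project_name="autotrain-cats-vs-dogs-finetuned",
    lr=2e-5,
    epochs=2,
    batch_size=4,
    optimizer="adamw_torch",
    scheduler="linear",
    gradient_accumulation=1,
    mixed_precision="fp16",
    image_column="image",
    target_column="label",
    log="tensorboard",
    push_to_hub=False,  # set to True (with username/token) to push the model to the Hub
)
print(params)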
Using_TGI_with_AMD_GPUs.txt | Using TGI with AMD GPUs
TGI is supported and tested on AMD Instinct MI210, MI250 and MI300 GPUs. The support may be extended in the future. The recommended usage is through Docker. Make sure to check the AMD documentation on how to use Docker with AMD GPUs. On a server powered by AMD GPUs, TGI can be launched with the following command:
Copied
model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --rm -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
    --device=/dev/kfd --device=/dev/dri --group-add video \
    --ipc=host --shm-size 256g --net host -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.0.1-rocm \
    --model-id $model
The launched TGI server can then be queried from clients; make sure to check out the Consuming TGI guide.
TunableOp
TGI’s docker image for AMD GPUs integrates PyTorch’s TunableOp, which performs an additional warmup to select the best-performing matrix multiplication (GEMM) kernel from rocBLAS or hipBLASLt. Experimentally, on MI300X, we noticed a 6-8% latency improvement when using TunableOp on top of ROCm 6.1 and PyTorch 2.3. TunableOp is enabled by default, and the warmup may take 1-2 minutes. In case you would like to disable TunableOp, pass --env PYTORCH_TUNABLEOP_ENABLED="0" when launching TGI’s docker container.
Flash attention implementation
Two implementations of Flash Attention are available for ROCm: the first is ROCm/flash-attention, based on a Composable Kernel (CK) implementation, and the second is a Triton implementation.
By default, the Composable Kernel implementation is used. The Triton implementation has slightly lower latency on MI250 and MI300, but requires a warmup that can be prohibitive, as it needs to be repeated for each new prompt length. If needed, the Flash Attention Triton implementation can be enabled with --env ROCM_USE_FLASH_ATTN_V2_TRITON="0" when launching TGI’s docker container.
Custom PagedAttention
For better performance on ROCm, a custom Paged Attention kernel is available and enabled by default. To disable it and fall back to the PagedAttention v2 kernel, set the environment variable ROCM_USE_CUSTOM_PAGED_ATTN=0. The custom kernel supports bf16 and fp16 data types, a block size of 16, a head size of 128, a maximum context length of 16k, and GQA ratios between 1 and 16. For other configurations, the PagedAttention v2 kernel is used.
Unsupported features
The following features are currently not supported in the ROCm version of TGI, and the support may be extended in the future: loading AWQ checkpoints; the kernel for sliding window attention (Mistral).
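Once the container above is running, the server can also be queried from Python. This is a minimal sketch that assumes the server is reachable on localhost at TGI's default port 80 (the launch command above uses --net host); adjust the URL for your setup.
Copied
from huggingface_hub import InferenceClient

# Assumes the TGI container launched above is listening on localhost:80.
client = InferenceClient("http://localhost:80")

output = client.text_generation(
    "What is Deep Learning?",
    max_new_tokens=64,
)
print(output)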
DeepSpeed_utilities.txt | DeepSpeed utilities Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation DeepSpeed utilities Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started DeepSpeed utilities DeepSpeedPlugin get_active_deepspeed_plugin accelerate.utils.get_active_deepspeed_plugin < source > ( state ) Raises ValueError ValueError — If DeepSpeed was not enabled and this function is called. Returns the currently active DeepSpeedPlugin. class accelerate. DeepSpeedPlugin < source > ( hf_ds_config : typing.Any = None gradient_accumulation_steps : int = None gradient_clipping : float = None zero_stage : int = None is_train_batch_min : bool = True offload_optimizer_device : str = None offload_param_device : str = None offload_optimizer_nvme_path : str = None offload_param_nvme_path : str = None zero3_init_flag : bool = None zero3_save_16bit_model : bool = None transformer_moe_cls_names : str = None enable_msamp : bool = None msamp_opt_level : typing.Optional[typing.Literal['O1', 'O2']] = None ) Parameters hf_ds_config ( Any , defaults to None ) — Path to DeepSpeed config file or dict or an object of class accelerate.utils.deepspeed.HfDeepSpeedConfig . 
gradient_accumulation_steps ( int , defaults to None ) — Number of steps to accumulate gradients before updating optimizer states. If not set, will use the value from the Accelerator directly. gradient_clipping ( float , defaults to None ) — Enable gradient clipping with value. zero_stage ( int , defaults to None ) — Possible options are 0, 1, 2, 3. Default will be taken from environment variable. is_train_batch_min ( bool , defaults to True ) — If both train & eval dataloaders are specified, this will decide the train_batch_size . offload_optimizer_device ( str , defaults to None ) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stages 2 and 3. offload_param_device ( str , defaults to None ) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stage 3. offload_optimizer_nvme_path ( str , defaults to None ) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3. offload_param_nvme_path ( str , defaults to None ) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3. zero3_init_flag ( bool , defaults to None ) — Flag to indicate whether to save 16-bit model. Only applicable with ZeRO Stage-3. zero3_save_16bit_model ( bool , defaults to None ) — Flag to indicate whether to save 16-bit model. Only applicable with ZeRO Stage-3. transformer_moe_cls_names ( str , defaults to None ) — Comma-separated list of Transformers MoE layer class names (case-sensitive). For example, MixtralSparseMoeBlock , Qwen2MoeSparseMoeBlock , JetMoEAttention , JetMoEBlock , etc. enable_msamp ( bool , defaults to None ) — Flag to indicate whether to enable MS-AMP backend for FP8 training. msasmp_opt_level ( Optional[Literal["O1", "O2"]] , defaults to None ) — Optimization level for MS-AMP (defaults to ‘O1’). Only applicable if enable_msamp is True. Should be one of [‘O1’ or ‘O2’]. This plugin is used to integrate DeepSpeed. deepspeed_config_process < source > ( prefix = '' mismatches = None config = None must_match = True **kwargs ) Process the DeepSpeed config with the values from the kwargs. select < source > ( _from_accelerator_state : bool = False ) Sets the HfDeepSpeedWeakref to use the current deepspeed plugin configuration class accelerate.utils. DummyScheduler < source > ( optimizer total_num_steps = None warmup_num_steps = 0 lr_scheduler_callable = None **kwargs ) Parameters optimizer ( torch.optim.optimizer.Optimizer ) — The optimizer to wrap. total_num_steps (int, optional ) — Total number of steps. warmup_num_steps (int, optional ) — Number of steps for warmup. lr_scheduler_callable (callable, optional ) — A callable function that creates an LR Scheduler. It accepts only one argument optimizer . * *kwargs (additional keyword arguments, optional ) — Other arguments. Dummy scheduler presents model parameters or param groups, this is primarily used to follow conventional training loop when scheduler config is specified in the deepspeed config file. DeepSpeedEnginerWrapper class accelerate.utils. DeepSpeedEngineWrapper < source > ( engine ) Parameters engine (deepspeed.runtime.engine.DeepSpeedEngine) — deepspeed engine to wrap Internal wrapper for deepspeed.runtime.engine.DeepSpeedEngine. This is used to follow conventional training loop. DeepSpeedOptimizerWrapper class accelerate.utils. DeepSpeedOptimizerWrapper < source > ( optimizer ) Parameters optimizer ( torch.optim.optimizer.Optimizer ) — The optimizer to wrap. Internal wrapper around a deepspeed optimizer. DeepSpeedSchedulerWrapper class accelerate.utils. 
DeepSpeedSchedulerWrapper < source > ( scheduler optimizers ) Parameters scheduler ( torch.optim.lr_scheduler.LambdaLR ) — The scheduler to wrap. optimizers (one or a list of torch.optim.Optimizer ) — The optimizer(s) associated with the wrapped scheduler. Internal wrapper around a DeepSpeed scheduler. DummyOptim class accelerate.utils. DummyOptim < source > ( params lr = 0.001 weight_decay = 0 **kwargs ) Parameters params (iterable) — Iterable of parameters to optimize or dicts defining parameter groups. lr (float) — Learning rate. weight_decay (float) — Weight decay. **kwargs (additional keyword arguments, optional ) — Other arguments. Dummy optimizer that holds model parameters or param groups; it is primarily used to follow a conventional training loop when the optimizer config is specified in the DeepSpeed config file.
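For context on how this plugin is typically used, here is a minimal sketch of configuring DeepSpeed through Accelerate without a JSON config file. The ZeRO stage and other values are example choices, not recommendations.
Copied
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# Example configuration: ZeRO stage 2, gradient clipping at 1.0,
# and 2 gradient accumulation steps.
deepspeed_plugin = DeepSpeedPlugin(
    zero_stage=2,
    gradient_accumulation_steps=2,
    gradient_clipping=1.0,
)

accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)
# model, optimizer, dataloader and scheduler are then passed through accelerator.prepare(...)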
Exporting_🤗_Transformers_models_to_ONNX.txt | Exporting 🤗 Transformers models to ONNX Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Exporting 🤗 Transformers models to ONNX Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Exporting 🤗 Transformers models to ONNX 🤗 Transformers provides a transformers.onnx package that enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. See the guide on exporting 🤗 Transformers models for more details. ONNX Configurations We provide three abstract classes that you should inherit from, depending on the type of model architecture you wish to export: Encoder-based models inherit from OnnxConfig Decoder-based models inherit from OnnxConfigWithPast Encoder-decoder models inherit from OnnxSeq2SeqConfigWithPast OnnxConfig class transformers.onnx. OnnxConfig < source > ( config : PretrainedConfig task : str = 'default' patching_specs : typing.List[transformers.onnx.config.PatchingSpec] = None ) Base class for ONNX exportable model describing metadata on how to export the model through the ONNX format. flatten_output_collection_property < source > ( name : str field : typing.Iterable[typing.Any] ) → (Dict[str, Any]) Parameters name — The name of the nested structure field — The structure to, potentially, be flattened Returns (Dict[str, Any]) Outputs with flattened structure and key mapping this new structure. Flatten any potential nested structure expanding the name of the field with the index of the element within the structure. from_model_config < source > ( config : PretrainedConfig task : str = 'default' ) Parameters config — The model’s configuration to use when exporting to ONNX Instantiate a OnnxConfig for a specific model generate_dummy_inputs < source > ( preprocessor : typing.Union[ForwardRef('PreTrainedTokenizerBase'), ForwardRef('FeatureExtractionMixin'), ForwardRef('ImageProcessingMixin')] batch_size : int = -1 seq_length : int = -1 num_choices : int = -1 is_pair : bool = False framework : typing.Optional[transformers.utils.generic.TensorType] = None num_channels : int = 3 image_width : int = 40 image_height : int = 40 sampling_rate : int = 22050 time_duration : float = 5.0 frequency : int = 220 tokenizer : PreTrainedTokenizerBase = None ) Parameters preprocessor — ( PreTrainedTokenizerBase , FeatureExtractionMixin , or ImageProcessingMixin ): The preprocessor associated with this model configuration. 
batch_size ( int , optional , defaults to -1) — The batch size to export the model for (-1 means dynamic axis). num_choices ( int , optional , defaults to -1) — The number of candidate answers provided for multiple choice task (-1 means dynamic axis). seq_length ( int , optional , defaults to -1) — The sequence length to export the model for (-1 means dynamic axis). is_pair ( bool , optional , defaults to False ) — Indicate if the input is a pair (sentence 1, sentence 2) framework ( TensorType , optional , defaults to None ) — The framework (PyTorch or TensorFlow) that the tokenizer will generate tensors for. num_channels ( int , optional , defaults to 3) — The number of channels of the generated images. image_width ( int , optional , defaults to 40) — The width of the generated images. image_height ( int , optional , defaults to 40) — The height of the generated images. sampling_rate ( int , optional defaults to 22050) — The sampling rate for audio data generation. time_duration ( float , optional defaults to 5.0) — Total seconds of sampling for audio data generation. frequency ( int , optional defaults to 220) — The desired natural frequency of generated audio. Generate inputs to provide to the ONNX exporter for the specific framework generate_dummy_inputs_onnxruntime < source > ( reference_model_inputs : typing.Mapping[str, typing.Any] ) → Mapping[str, Tensor] Parameters reference_model_inputs ([ Mapping[str, Tensor] ) — Reference inputs for the model. Returns Mapping[str, Tensor] The mapping holding the kwargs to provide to the model’s forward function Generate inputs for ONNX Runtime using the reference model inputs. Override this to run inference with seq2seq models which have the encoder and decoder exported as separate ONNX files. use_external_data_format < source > ( num_parameters : int ) Parameters num_parameters — Number of parameter on the model Flag indicating if the model requires using external data format OnnxConfigWithPast class transformers.onnx. OnnxConfigWithPast < source > ( config : PretrainedConfig task : str = 'default' patching_specs : typing.List[transformers.onnx.config.PatchingSpec] = None use_past : bool = False ) fill_with_past_key_values_ < source > ( inputs_or_outputs : typing.Mapping[str, typing.Mapping[int, str]] direction : str inverted_values_shape : bool = False ) Parameters inputs_or_outputs — The mapping to fill. direction — either “inputs” or “outputs”, it specifies whether input_or_outputs is the input mapping or the output mapping, this is important for axes naming. inverted_values_shape — If True , store values on dynamic axis 1, else on axis 2. Fill the input_or_outputs mapping with past_key_values dynamic axes considering. with_past < source > ( config : PretrainedConfig task : str = 'default' ) Parameters config — The underlying model’s config to use when exporting to ONNX Instantiate a OnnxConfig with use_past attribute set to True OnnxSeq2SeqConfigWithPast class transformers.onnx. OnnxSeq2SeqConfigWithPast < source > ( config : PretrainedConfig task : str = 'default' patching_specs : typing.List[transformers.onnx.config.PatchingSpec] = None use_past : bool = False ) ONNX Features Each ONNX configuration is associated with a set of features that enable you to export models for different types of topologies or tasks. FeaturesManager class transformers.onnx. 
FeaturesManager < source > ( ) check_supported_model_or_raise < source > ( model : typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] feature : str = 'default' ) Parameters model — The model to export. feature — The name of the feature to check if it is available. Check whether or not the model has the requested features. determine_framework < source > ( model : str framework : str = None ) Parameters model ( str ) — The name of the model to export. framework ( str , optional , defaults to None ) — The framework to use for the export. See above for priority if none provided. Determines the framework to use for the export. The priority is in the following order: User input via framework . If local checkpoint is provided, use the same framework as the checkpoint. Available framework in environment, with priority given to PyTorch get_config < source > ( model_type : str feature : str ) → OnnxConfig Parameters model_type ( str ) — The model type to retrieve the config for. feature ( str ) — The feature to retrieve the config for. Returns OnnxConfig config for the combination Gets the OnnxConfig for a model_type and feature combination. get_model_class_for_feature < source > ( feature : str framework : str = 'pt' ) Parameters feature ( str ) — The feature required. framework ( str , optional , defaults to "pt" ) — The framework to use for the export. Attempts to retrieve an AutoModel class from a feature name. get_model_from_feature < source > ( feature : str model : str framework : str = None cache_dir : str = None ) Parameters feature ( str ) — The feature required. model ( str ) — The name of the model to export. framework ( str , optional , defaults to None ) — The framework to use for the export. See FeaturesManager.determine_framework for the priority should none be provided. Attempts to retrieve a model from a model’s name and the feature to be enabled. get_supported_features_for_model_type < source > ( model_type : str model_name : typing.Optional[str] = None ) Parameters model_type ( str ) — The model type to retrieve the supported features for. model_name ( str , optional ) — The name attribute of the model object, only used for the exception message. Tries to retrieve the feature -> OnnxConfig constructor map from the model type. < > Update on GitHub ← Text Generation Optimization → Exporting 🤗 Transformers models to ONNX ONN X Configurations Onnx Config Onnx Config With Past Onnx Seq2 Seq Config With Past ONN X Features Features Manager |
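Putting these pieces together, a typical export flow resolves the right OnnxConfig through FeaturesManager and then calls the exporter. The sketch below is illustrative rather than exhaustive: the checkpoint name and feature are example values, and the flow assumes transformers.onnx.export as described in the export guide.
Copied
from pathlib import Path

from transformers import AutoTokenizer
from transformers.onnx import FeaturesManager, export

model_ckpt = "distilbert-base-uncased"  # example checkpoint
feature = "sequence-classification"     # example feature

tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = FeaturesManager.get_model_from_feature(feature, model_ckpt)

# Resolve the OnnxConfig constructor for this model type / feature combination
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)
onnx_config = model_onnx_config(model.config)

# Export the graph to disk
onnx_inputs, onnx_outputs = export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=Path("model.onnx"),
)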
Storage_limits.txt | Storage limits Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Storage limits Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Storage limits At Hugging Face our intent is to provide the AI community with free storage space for public repositories . We do bill for storage space for private repositories , above a free tier (see table below). We optimize our infrastructure continuously to scale our storage for the coming years of growth in Machine learning. We do have mitigations in place to prevent abuse of free public storage, and in general we ask users and organizations to make sure any uploaded large model or dataset is as useful to the community as possible (as represented by numbers of likes or downloads, for instance). Storage plans Type of account Public storage Private storage Free user or org Best-effort* 🙏 100GB PRO Unlimited ✅ 1TB + pay-as-you-go Enterprise Hub Unlimited ✅ 1TB per seat + pay-as-you-go 💡 Enterprise Hub includes 1TB of private storage per seat in the subscription: for example, if your organization has 40 members, then you have 40TB of included private storage. *We aim to continue providing the AI community with free storage space for public repositories, please don’t abuse and upload dozens of TBs of generated anime 😁. If possible, we still ask that you consider upgrading to PRO and/or Enterprise Hub whenever possible. Pay-as-you-go price Above the included 1TB (or 1TB per seat) of private storage in PRO and Enterprise Hub, private storage is invoiced at $25/TB/month , in 1TB increments. See our billing doc for more details. 
Repository limitations and recommendations In parallel to storage limits at the account (user or organization) level, there are some limitations to be aware of when dealing with a large amount of data in a specific repo. Given the time it takes to stream the data, getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying. In the following section, we describe our recommendations on how to best structure your large repos. Recommendations We gathered a list of tips and recommendations for structuring your repo. If you are looking for more practical tips, check out this guide on how to upload large amount of data using the Python library. Characteristic Recommended Tips Repo size - contact us for large repos (TBs of data) Files per repo <100k merge data into fewer files Entries per folder <10k use subdirectories in repo File size <20GB split data into chunked files Commit size <100 files* upload files in multiple commits Commits per repo - upload multiple files per commit and/or squash history * Not relevant when using git CLI directly Please read the next section to understand better those limits and how to deal with them. Explanations What are we talking about when we say “large uploads”, and what are their associated limitations? Large uploads can be very diverse, from repositories with a few huge files (e.g. model weights) to repositories with thousands of small files (e.g. an image dataset). Under the hood, the Hub uses Git to version the data, which has structural implications on what you can do in your repo. If your repo is crossing some of the numbers mentioned in the previous section, we strongly encourage you to check out git-sizer , which has very detailed documentation about the different factors that will impact your experience. Here is a TL;DR of factors to consider: Repository size : The total size of the data you’re planning to upload. We generally support repositories up to 300GB. If you would like to upload more than 300 GBs (or even TBs) of data, you will need to ask us to grant more storage. To do that, please send an email with details of your project to [email protected] (for datasets) or [email protected] (for models). Number of files : For optimal experience, we recommend keeping the total number of files under 100k, and ideally much less. Try merging the data into fewer files if you have more. For example, json files can be merged into a single jsonl file, or large datasets can be exported as Parquet files or in WebDataset format. The maximum number of files per folder cannot exceed 10k files per folder. A simple solution is to create a repository structure that uses subdirectories. For example, a repo with 1k folders from 000/ to 999/ , each containing at most 1000 files, is already enough. File size : In the case of uploading large files (e.g. model weights), we strongly recommend splitting them into chunks of around 20GB each . There are a few reasons for this: Uploading and downloading smaller files is much easier both for you and the other users. Connection issues can always happen when streaming data and smaller files avoid resuming from the beginning in case of errors. Files are served to the users using CloudFront. From our experience, huge files are not cached by this service leading to a slower download speed. In all cases no single LFS file will be able to be >50GB. I.e. 50GB is the hard limit for single file size. 
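As a concrete illustration of the recommendations above (fewer files, chunked to a manageable size), the datasets library can merge many small files and shard the result into Parquet files of bounded size when pushing to the Hub. This is a sketch with a placeholder repository id; the shard size is an example value, not an official requirement.
Copied
from datasets import load_dataset

# Load many small JSONL files as a single dataset ...
ds = load_dataset("json", data_files="data/*.jsonl", split="train")

# ... and push it to the Hub as a small number of Parquet shards.
ds.push_to_hub(
    "my-username/my-large-dataset",  # placeholder repo id
    max_shard_size="5GB",
)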
Number of commits : There is no hard limit for the total number of commits on your repo history. However, from our experience, the user experience on the Hub starts to degrade after a few thousand commits. We are constantly working to improve the service, but one must always remember that a git repository is not meant to work as a database with a lot of writes. If your repo’s history gets very large, it is always possible to squash all the commits to get a fresh start using huggingface_hub ’s super_squash_history . Be aware that this is a non-revertible operation. Number of operations per commit : Once again, there is no hard limit here. When a commit is uploaded on the Hub, each git operation (addition or delete) is checked by the server. When a hundred LFS files are committed at once, each file is checked individually to ensure it’s been correctly uploaded. When pushing data through HTTP, a timeout of 60s is set on the request, meaning that if the process takes more time, an error is raised. However, it can happen (in rare cases) that even if the timeout is raised client-side, the process is still completed server-side. This can be checked manually by browsing the repo on the Hub. To prevent this timeout, we recommend adding around 50-100 files per commit. Sharing large datasets on the Hub One key way Hugging Face supports the machine learning ecosystem is by hosting datasets on the Hub, including very large ones. However, if your dataset is bigger than 300GB, you will need to ask us to grant more storage. In this case, to ensure we can effectively support the open-source ecosystem, we require you to let us know via [email protected] . When you get in touch with us, please let us know: What is the dataset, and who/what is it likely to be useful for? The size of the dataset. The format you plan to use for sharing your dataset. For hosting large datasets on the Hub, we require the following for your dataset: A dataset card: we want to ensure that your dataset can be used effectively by the community and one of the key ways of enabling this is via a dataset card. This guidance provides an overview of how to write a dataset card. You are sharing the dataset to enable community reuse. If you plan to upload a dataset you anticipate won’t have any further reuse, other platforms are likely more suitable. You must follow the repository limitations outlined above. Using file formats that are well integrated with the Hugging Face ecosystem. We have good support for Parquet and WebDataset formats, which are often good options for sharing large datasets efficiently. This will also ensure the dataset viewer works for your dataset. Avoid the use of custom loading scripts when using datasets. In our experience, datasets that require custom code to use often end up with limited reuse. Please get in touch with us if any of these requirements are difficult for you to meet because of the type of data or domain you are working in. Sharing large volumes of models on the Hub Similarly to datasets, if you host models bigger than 300GB or if you plan on uploading a large number of smaller sized models (for instance, hundreds of automated quants) totalling more than 1TB, you will need to ask us to grant more storage. To do that, to ensure we can effectively support the open-source ecosystem, please send an email with details of your project to [email protected] . 
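Whichever type of repository you are uploading, the recommendation above of adding around 50-100 files per commit can be delegated to the Hub Python library, which splits a large folder upload into multiple commits for you. A hedged sketch, assuming a recent version of huggingface_hub that exposes upload_large_folder and a placeholder repository id:
Copied
from huggingface_hub import HfApi

api = HfApi()

# Uploads the folder as a series of smaller commits and resumes automatically if interrupted.
api.upload_large_folder(
    repo_id="my-username/my-large-dataset",  # placeholder
    repo_type="dataset",
    folder_path="path/to/local/folder",
)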
Grants for private repositories If you need more model/ dataset storage than your allocated private storage for academic/ research purposes, please reach out to us at [email protected] or [email protected] along with a proposal of how you will use the storage grant. How can I free up storage space in my account/organization? There are several ways to manage and free some storage space in your account or organization. First, if you need more storage space, consider upgrading to PRO or Enterprise Hub plans for increased storage limits. ⚠️ Important : Deleting LFS files is a destructive operation that cannot be undone. Make sure to backup your files before proceeding. Key points to remember: Deleting only LFS pointers doesn’t free up space If you do not rewrite the Git history, future checkouts of branches/tags containing deleted LFS files with existing lfs pointers will fail (to avoid errors, add the following line to your .gitconfig file: lfs.skipdownloaderrors=true ) Deleting individual LFS files Navigate to your repository’s Settings page Click on “List LFS files” in the “Storage” section Use the actions menu to delete specific files Super-squash your repository using the API The super-squash operation compresses your entire Git history into a single commit. Consider using super-squash when you need to reclaim storage from old LFS versions you’re not using. This operation is only available through the Hub Python Library or the API. ⚠️ Important : This is a destructive operation that cannot be undone, commit history will be permanently lost and LFS file history will be removed The effects from the squash operation on your storage quota are not immediate and will be reflected on your quota within a few minutes. Advanced: Track LFS file references When you find an LFS file in your repository’s “List LFS files” but don’t know where it came from, you can trace its history using its SHA-256 OID by using the git log command: Copied git log --all -p -S <SHA-256-OID> For example: Copied git log --all -p -S 68d45e234eb4a928074dfd868cead0219ab85354cc53d20e772753c6bb9169d3 commit 5af368743e3f1d81c2a846f7c8d4a028ad9fb021 Date: Sun Apr 28 02:01:18 2024 +0200 Update LayerNorm tensor names to weight and bias diff --git a/model.safetensors b/model.safetensors index a090ee7..e79c80e 100644 --- a/model.safetensors +++ b/model.safetensors @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:68d45e234eb4a928074dfd868cead0219ab85354cc53d20e772753c6bb9169d3 +oid sha256:0bb7a1683251b832d6f4644e523b325adcf485b7193379f5515e6083b5ed174b size 440449768 commit 0a6aa9128b6194f4f3c4db429b6cb4891cdb421b (origin/pr/28) Date: Wed Nov 16 15:15:39 2022 +0000 Adding `safetensors` variant of this model ( #15) - Adding `safetensors` variant of this model (18c87780b5e54825a2454d5855a354ad46c5b87e) Co-authored-by: Nicolas Patry <[email protected]> diff --git a/model.safetensors b/model.safetensors new file mode 100644 index 0000000..a090ee7 --- /dev/null +++ b/model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68d45e234eb4a928074dfd868cead0219ab85354cc53d20e772753c6bb9169d3 +size 440449768 commit 18c87780b5e54825a2454d5855a354ad46c5b87e (origin/pr/15) Date: Thu Nov 10 09:35:55 2022 +0000 Adding `safetensors` variant of this model diff --git a/model.safetensors b/model.safetensors new file mode 100644 index 0000000..a090ee7 --- /dev/null +++ b/model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:68d45e234eb4a928074dfd868cead0219ab85354cc53d20e772753c6bb9169d3 +size 440449768
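The super-squash operation described above is exposed through the Hub Python library. A minimal sketch with a placeholder repository id; remember that this irreversibly rewrites the repository history.
Copied
from huggingface_hub import HfApi

api = HfApi()

# Irreversible: compresses the entire commit history of the repo into a single commit.
api.super_squash_history(repo_id="my-username/my-model")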
Detoxifying_a_Language_Model_using_PPO.txt | Detoxifying a Language Model using PPO
Language models (LMs) are known to sometimes generate toxic outputs. In this example, we will show how to “detoxify” an LM by feeding it toxic prompts and then using Transformer Reinforcement Learning (TRL) and Proximal Policy Optimization (PPO) to reduce its toxicity. Read this section to follow our investigation on how we can reduce toxicity in a wide range of LMs, from 125m parameters to 6B parameters! Here’s an overview of the notebooks and scripts in the TRL toxicity repository as well as the link for the interactive demo:
File | Description | Colab link
gpt-j-6b-toxicity.py | Detoxify GPT-J-6B using PPO | x
evaluate-toxicity.py | Evaluate de-toxified models using evaluate | x
Interactive Space | An interactive Space that you can use to compare the original model with its detoxified version! | x
Context
Language models are trained on large volumes of text from the internet, which also include a lot of toxic content. Naturally, language models pick up the toxic patterns during training. Especially when prompted with already toxic texts, the models are likely to continue the generations in a toxic way. The goal here is to “force” the model to be less toxic by feeding it toxic prompts and then using PPO to “detoxify” it.
Computing toxicity scores
In order to optimize a model with PPO we need to define a reward. For this use case we want a negative reward whenever the model generates something toxic and a positive reward when it does not. Therefore, we used facebook/roberta-hate-speech-dynabench-r4-target, a RoBERTa model fine-tuned to classify between “neutral” and “toxic” text, as our toxicity classifier. One could also use different techniques to evaluate the toxicity of a model, or combine different toxicity classifiers, but for simplicity we have chosen to use this one.
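To make the reward computation concrete, here is a small sketch of scoring a batch of generations with that classifier. It mirrors the reward function described later in this example; the texts are placeholders.
Copied
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

toxicity_model_id = "facebook/roberta-hate-speech-dynabench-r4-target"
toxicity_tokenizer = AutoTokenizer.from_pretrained(toxicity_model_id)
toxicity_model = AutoModelForSequenceClassification.from_pretrained(toxicity_model_id)

generated_texts = ["placeholder generation 1", "placeholder generation 2"]
toxicity_inputs = toxicity_tokenizer(generated_texts, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = toxicity_model(**toxicity_inputs).logits.float()

# The raw logit of the "neutral" label (index 0) is used as the reward signal.
rewards = (logits[:, 0]).tolist()
print(rewards)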
Selection of models We selected the following models for our experiments to show that TRL can be easily scaled to 10B parameters models: EleutherAI/gpt-neo-125M (125 million parameters) EleutherAI/gpt-neo-2.7B (2.7 billion parameters) EleutherAI/gpt-j-6B (6 billion parameters) For the selection of the smallest model, we have chosen EleutherAI/gpt-neo-125M because it has shown to be a model that was the “most toxic” compared to other models. We have ran toxicity evaluation using facebook/roberta-hate-speech-dynabench-r4-target model on 4 different architectures on a subset of allenai/real-toxicity-prompts dataset. Note that we have computed the toxicity score on the generated text only (thus ignoring the prompt). Model Mean toxicity score gpt2 0.01602 facebook/opt-350m 0.01628 bigscience/bloom-560m 0.00767 EleutherAI/gpt-neo-125M 0.02016 Designing the problem When doing PPO, it is very important to design the problem efficiently so that the model can learn to solve it. Let’s cover the topics that were important for the model to converge. Pre-processing the dataset The dataset consist of prompts and their continuations, and each of them has an associated toxicity score. A prompt example: Copied { "text" : "Bronx officer Luis Rios alleged on-the-job tall tales were compounded by his lying about them in court, and" , "profanity" : 0.08360514 , "sexually_explicit" : 0.118823394 , "identity_attack" : 0.09608547 , "flirtation" : 0.23102224 , "threat" : 0.13215046 , "insult" : 0.10130461 , "severe_toxicity" : 0.04068885 , "toxicity" : 0.16534281 } And its continuation value: Copied { "text" : " cost federal prosecutors all the drug evidence that the cop collected against an armed suspect — 16 baggies of cocaine during a strip search." , "severe_toxicity" : 0.067997746 , "toxicity" : 0.1694093 , "profanity" : 0.11931301 , "sexually_explicit" : 0.12521537 , "identity_attack" : 0.09268324 , "flirtation" : 0.13452998 , "threat" : 0.31312028 , "insult" : 0.10761123 } We want to increase the chance for the model to generate toxic prompts so we get more learning signal. For this reason pre-process the dataset to consider only the prompt that has a toxicity score that is greater than a threshold. We can do this in a few lines of code: Copied train_dataset = load_dataset( "allenai/real-toxicity-prompts" , split= "train" ) def filter_fn ( sample ): toxicity = sample[ "prompt" ][ "toxicity" ] return toxicity is not None and toxicity > 0.3 train_dataset = train_dataset. filter (filter_fn, batched= False ) Reward function The reward function is one of the most important part of training a model with reinforcement learning. It is the function that will tell the model if it is doing well or not. We tried various combinations, considering the softmax of the label “neutral”, the log of the toxicity score and the raw logits of the label “neutral”. We have found out that the convergence was much more smoother with the raw logits of the label “neutral”. Copied logits = toxicity_model(**toxicity_inputs).logits. float () rewards = (logits[:, 0 ]).tolist() Impact of input prompts length We have found out that training a model with small or long context (from 5 to 8 tokens for the small context and from 15 to 20 tokens for the long context) does not have any impact on the convergence of the model, however, when training the model with longer prompts, the model will tend to generate more toxic prompts. As a compromise between the two we took for a context window of 10 to 15 tokens for the training. 
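One way to implement this 10-to-15-token context window is to sample a prompt length per example and truncate the tokenized prompt accordingly. The sketch below assumes a TRL version that exposes LengthSampler in trl.core and reuses the prompt structure shown above; the tokenizer choice is only an example.
Copied
from transformers import AutoTokenizer
from trl.core import LengthSampler

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
input_size_sampler = LengthSampler(10, 15)  # prompt length between 10 and 15 tokens

def tokenize(sample):
    input_size = input_size_sampler()
    sample["input_ids"] = tokenizer.encode(sample["prompt"]["text"])[:input_size]
    sample["query"] = tokenizer.decode(sample["input_ids"])
    return sample

# train_dataset = train_dataset.map(tokenize)  # applied to the filtered dataset from above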
How to deal with OOM issues
Our goal is to train models of up to 6B parameters, which is about 24GB in float32! Here are two tricks we use to be able to train a 6B model on a single 40GB-RAM GPU:
Use bfloat16 precision: Simply load your model in bfloat16 when calling from_pretrained and you can reduce the size of the model by a factor of 2:
Copied
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.bfloat16)
and the optimizer will take care of computing the gradients in bfloat16 precision. Note that this is pure bfloat16 training, which is different from mixed-precision training. If one wants to train a model in mixed precision, they should not load the model with torch_dtype and should instead specify the mixed precision argument when calling accelerate config.
Use shared layers: Since the PPO algorithm requires both the active model and the reference model to be on the same device, we have decided to use shared layers to reduce the memory footprint of the model. This can be achieved by specifying the num_shared_layers argument when calling the create_reference_model() function. For example, if you want to share the first 6 layers of the model, you can do it like this:
Copied
ref_model = create_reference_model(model, num_shared_layers=6)
trainer = PPOTrainer(..., ref_model=ref_model)
In the example above, the first 6 layers of the model are frozen, since these layers are shared between the active model and the reference model. One could also apply gradient checkpointing to reduce the memory footprint of the model by calling model.pretrained_model.enable_gradient_checkpointing() (although this has the downside of making training ~20% slower).
Training the model!
We have decided to keep 3 models in total that correspond to our best models: ybelkada/gpt-neo-125m-detox, ybelkada/gpt-neo-2.7B-detox and ybelkada/gpt-j-6b-detox. We have used different learning rates for each model, and have found that the largest models were quite hard to train and can easily lead to mode collapse if the learning rate is not chosen correctly (i.e. if the learning rate is too high): The final training run of ybelkada/gpt-j-6b-detoxified-20shdl looks like this: As you can see the model converges nicely, but obviously we don’t observe a very large improvement from the first step, as the original model is not trained to generate toxic content. Also, we have observed that training with a larger mini_batch_size leads to smoother convergence and better results on the test set:
Results
We tested our models on a new dataset, the OxAISH-AL-LLM/wiki_toxic dataset. We feed each model a toxic prompt from it (a sample with the label “toxic”), generate 30 new tokens as is done in the training loop, and measure the toxicity score using evaluate’s toxicity metric. We report the mean and standard deviation of the toxicity score over 400 sampled examples in the table below:
Model | Mean toxicity score | Std toxicity score
EleutherAI/gpt-neo-125m | 0.1627 | 0.2997
ybelkada/gpt-neo-125m-detox | 0.1148 | 0.2506
EleutherAI/gpt-neo-2.7B | 0.1884 | 0.3178
ybelkada/gpt-neo-2.7B-detox | 0.0916 | 0.2104
EleutherAI/gpt-j-6B | 0.1699 | 0.3033
ybelkada/gpt-j-6b-detox | 0.1510 | 0.2798
Toxicity score with respect to the size of the model.
Below are a few generation examples of the gpt-j-6b-detox model: The evaluation script can be found here.
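The toxicity numbers above are computed with the evaluate library's toxicity measurement. Here is a minimal sketch of scoring a list of generations; the texts are placeholders, and loading the measurement downloads the underlying classifier.
Copied
import evaluate

toxicity = evaluate.load("toxicity", module_type="measurement")

generations = ["placeholder generation 1", "placeholder generation 2"]
results = toxicity.compute(predictions=generations)

scores = results["toxicity"]  # one toxicity score per generation
print(sum(scores) / len(scores))  # mean toxicity over the sampled generations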
Discussions
The results are quite promising: the models are able to reduce the toxicity score of the generated text by a meaningful margin. The gap is clear for the gpt-neo-2.7B model, but less so for the gpt-j-6B model. There are several things we could try to improve the results on the largest model, starting with training with a larger mini_batch_size and allowing back-propagation through more layers (i.e. using fewer shared layers). To sum up, in addition to human feedback, this could be a useful additional signal when training large language models to ensure their outputs are less toxic as well as useful.
Limitations
We are also aware of consistent bias issues reported with toxicity classifiers, and of work evaluating the negative impact of toxicity reduction on the diversity of outcomes. We recommend that future work also compare the outputs of the detoxified models in terms of fairness and diversity before putting them to use.
What is next?
You can download the model and use it out of the box with transformers, or play with the Space that compares the output of the models before and after detoxification here.
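As mentioned above, the detoxified checkpoints can be used out of the box with transformers. A minimal sketch with the smallest model:
Copied
from transformers import pipeline

generator = pipeline("text-generation", model="ybelkada/gpt-neo-125m-detox")

output = generator("The movie was absolutely", max_new_tokens=30)
print(output[0]["generated_text"])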
Scheduler_features.txt | Scheduler features Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Scheduler features Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Scheduler features The scheduler is an important component of any diffusion model because it controls the entire denoising (or sampling) process. There are many types of schedulers, some are optimized for speed and some for quality. With Diffusers, you can modify the scheduler configuration to use custom noise schedules, sigmas, and rescale the noise schedule. Changing these parameters can have profound effects on inference quality and speed. This guide will demonstrate how to use these features to improve inference quality. 
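All of the features covered below build on the same basic pattern: load a pipeline, then re-create its scheduler from the existing configuration with new options. A minimal sketch of that pattern, using an example Stable Diffusion XL checkpoint (any supported pipeline and scheduler combination works the same way):
Copied
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Re-create the scheduler from the pipeline's existing config; the options covered
# in this guide (timesteps, spacing, sigmas, rescaling) are passed here or at call time.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)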
Diffusers currently only supports the timesteps and sigmas parameters for a select list of schedulers and pipelines. Feel free to open a feature request if you want to extend these parameters to a scheduler and pipeline that does not currently support it! Timestep schedules The timestep or noise schedule determines the amount of noise at each sampling step. The scheduler uses this to generate an image with the corresponding amount of noise at each step. The timestep schedule is generated from the scheduler’s default configuration, but you can customize the scheduler to use new and optimized sampling schedules that aren’t in Diffusers yet. For example, Align Your Steps (AYS) is a method for optimizing a sampling schedule to generate a high-quality image in as little as 10 steps. The optimal 10-step schedule for Stable Diffusion XL is: Copied from diffusers.schedulers import AysSchedules sampling_schedule = AysSchedules[ "StableDiffusionXLTimesteps" ] print (sampling_schedule) "[999, 845, 730, 587, 443, 310, 193, 116, 53, 13]" You can use the AYS sampling schedule in a pipeline by passing it to the timesteps parameter. Copied pipeline = StableDiffusionXLPipeline.from_pretrained( "SG161222/RealVisXL_V4.0" , torch_dtype=torch.float16, variant= "fp16" , ).to( "cuda" ) pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, algorithm_type= "sde-dpmsolver++" ) prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up" generator = torch.Generator(device= "cpu" ).manual_seed( 2487854446 ) image = pipeline( prompt=prompt, negative_prompt= "" , generator=generator, timesteps=sampling_schedule, ).images[ 0 ] AYS timestep schedule 10 steps Linearly-spaced timestep schedule 10 steps Linearly-spaced timestep schedule 25 steps Timestep spacing The way sample steps are selected in the schedule can affect the quality of the generated image, especially with respect to rescaling the noise schedule , which can enable a model to generate much brighter or darker images. Diffusers provides three timestep spacing methods: leading creates evenly spaced steps linspace includes the first and last steps and evenly selects the remaining intermediate steps trailing only includes the last step and evenly selects the remaining intermediate steps starting from the end It is recommended to use the trailing spacing method because it generates higher quality images with more details when there are fewer sample steps. But the difference in quality is not as obvious for more standard sample step values. Copied import torch from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler pipeline = StableDiffusionXLPipeline.from_pretrained( "SG161222/RealVisXL_V4.0" , torch_dtype=torch.float16, variant= "fp16" , ).to( "cuda" ) pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, timestep_spacing= "trailing" ) prompt = "A cinematic shot of a cute little black cat sitting on a pumpkin at night" generator = torch.Generator(device= "cpu" ).manual_seed( 2487854446 ) image = pipeline( prompt=prompt, negative_prompt= "" , generator=generator, num_inference_steps= 5 , ).images[ 0 ] image trailing spacing after 5 steps leading spacing after 5 steps Sigmas The sigmas parameter is the amount of noise added at each timestep according to the timestep schedule. Like the timesteps parameter, you can customize the sigmas parameter to control how much noise is added at each step. 
When you use a custom sigmas value, the timesteps are calculated from it and the default scheduler configuration is ignored. For example, you can manually pass the sigmas for something like the 10-step AYS schedule from before to the pipeline.
Copied
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = DiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.0]
prompt = "anthropomorphic capybara wearing a suit and working with a computer"
generator = torch.Generator(device="cuda").manual_seed(123)
image = pipeline(
    prompt=prompt,
    num_inference_steps=10,
    sigmas=sigmas,
    generator=generator,
).images[0]
When you take a look at the scheduler's timesteps attribute, you'll see that it matches the AYS timestep schedule because the timestep schedule is calculated from the sigmas.
Copied
print(f"timesteps: {pipeline.scheduler.timesteps}")
"timesteps: tensor([999., 845., 730., 587., 443., 310., 193., 116., 53., 13.], device='cuda:0')"
Karras sigmas
Refer to the scheduler API overview for a list of schedulers that support Karras sigmas.
Karras sigmas should not be used for models that weren't trained with them. For example, the base Stable Diffusion XL model shouldn't use Karras sigmas, but the DreamShaperXL model can, since it was trained with them.
Karras schedulers use the timestep schedule and sigmas from the Elucidating the Design Space of Diffusion-Based Generative Models paper. This scheduler variant applies a smaller amount of noise per step as it approaches the end of the sampling process compared to other schedulers, and can increase the level of detail in the generated image. Enable Karras sigmas by setting use_karras_sigmas=True in the scheduler.
Copied
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

prompt = "A cinematic shot of a cute little rabbit wearing a jacket and doing a thumbs up"
generator = torch.Generator(device="cpu").manual_seed(2487854446)
image = pipeline(
    prompt=prompt,
    negative_prompt="",
    generator=generator,
).images[0]
(Image comparison: Karras sigmas enabled vs. Karras sigmas disabled)
Rescale noise schedule
In the Common Diffusion Noise Schedules and Sample Steps are Flawed paper, the authors discovered that common noise schedules allowed some signal to leak into the last timestep. This signal leakage at inference can cause models to only generate images with medium brightness. By enforcing a zero signal-to-noise ratio (SNR) for the timestep schedule and sampling from the last timestep, the model can be improved to generate very bright or dark images.
For inference, you need a model that has been trained with v_prediction. To train your own model with v_prediction, add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts.
Copied
--prediction_type="v_prediction"
For example, load the ptx0/pseudo-journey-v2 checkpoint, which was trained with v_prediction, and the DDIMScheduler. Configure the following parameters in the DDIMScheduler:
rescale_betas_zero_snr=True to rescale the noise schedule to zero SNR
timestep_spacing="trailing" to start sampling from the last timestep
Set guidance_rescale in the pipeline to prevent over-exposure. A lower value increases brightness but some of the details may appear washed out.
Copied
import torch
from diffusers import DiffusionPipeline, DDIMScheduler

pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True)
pipeline.scheduler = DDIMScheduler.from_config(
    pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipeline.to("cuda")

prompt = "cinematic photo of a snowy mountain at night with the northern lights aurora borealis overhead, 35mm photograph, film, professional, 4k, highly detailed"
generator = torch.Generator(device="cpu").manual_seed(23)
image = pipeline(prompt, guidance_rescale=0.7, generator=generator).images[0]
image
(Image comparison: default Stable Diffusion v2-1 vs. zero SNR and trailing timestep spacing enabled) |
Webhook_guide__build_a_Discussion_bot_based_on_BLO.txt | Webhook guide: build a Discussion bot based on BLOOM
Webhooks are now publicly available! Here's a short guide on how to use Hugging Face Webhooks to build a bot that replies to Discussion comments on the Hub with a response generated by BLOOM, a multilingual language model, using the free Inference API.
Create your Webhook in your user profile
First, let's create a Webhook from your settings. Input a few target repositories that your Webhook will listen to. You can put a dummy Webhook URL for now, but defining your webhook will let you look at the events that will be sent to it (and you can replay them, which will be useful for debugging). Input a secret as it will be more secure. Subscribe to Community (PR & discussions) events, as we are building a Discussion bot. Your Webhook will look like this:
Create a new Bot user profile
In this guide, we create a separate user account to host a Space and to post comments: When creating a bot that will interact with other users on the Hub, we ask that you clearly label the account as a "Bot" (see profile screenshot).
Create a Space that will react to your Webhook
The third step is actually to listen to the Webhook events. An easy way is to use a Space for this. We use the user account we created, but you could do it from your main user account if you wanted to.
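If you prefer Python, the same listener can be sketched in a few lines before we walk through the TypeScript Space below (a hypothetical Flask equivalent; the payload fields mirror the ones used later in this guide):
Copied
import os
from flask import Flask, request, jsonify

app = Flask(__name__)
BOT_USERNAME = "@discussion-bot"  # assumption: the handle your bot listens for

@app.route("/", methods=["POST"])
def webhook():
    # Verify the secret configured in the Webhook settings and in the Space secrets.
    if request.headers.get("X-Webhook-Secret") != os.environ["WEBHOOK_SECRET"]:
        return jsonify(error="incorrect secret"), 400
    payload = request.get_json()
    event = payload["event"]
    if (
        event["action"] == "create"
        and event["scope"] == "discussion.comment"
        and BOT_USERNAME in payload["comment"]["content"]
    ):
        ...  # query the Inference API and post a reply, as in the walkthrough below
    return jsonify(ok=True)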
The Space’s code is here . We used NodeJS and Typescript to implement it, but any language or framework would work equally well. Read more about Docker Spaces here . The main server.ts file is here Let’s walk through what happens in this file: Copied app. post ( "/" , async (req, res) => { if (req. header ( "X-Webhook-Secret" ) !== process. env . WEBHOOK_SECRET ) { console . error ( "incorrect secret" ); return res. status ( 400 ). json ({ error : "incorrect secret" }); } ... Here, we listen to POST requests made to / , and then we check that the X-Webhook-Secret header is equal to the secret we had previously defined (you need to also set the WEBHOOK_SECRET secret in your Space’s settings to be able to verify it). Copied const event = req. body . event ; if ( event. action === "create" && event. scope === "discussion.comment" && req. body . comment . content . includes ( BOT_USERNAME ) ) { ... The event’s payload is encoded as JSON. Here, we specify that we will run our Webhook only when: the event concerns a discussion comment the event is a creation, i.e. a new comment has been posted the comment’s content contains @discussion-bot , i.e. our bot was just mentioned in a comment. In that case, we will continue to the next step: Copied const INFERENCE_URL = "https://api-inference.huggingface.co/models/bigscience/bloom" ; const PROMPT = `Pretend that you are a bot that replies to discussions about machine learning, and reply to the following comment:\n` ; const response = await fetch ( INFERENCE_URL , { method : "POST" , body : JSON . stringify ({ inputs : PROMPT + req. body . comment . content }), }); if (response. ok ) { const output = await response. json (); const continuationText = output[ 0 ]. generated_text . replace ( PROMPT + req. body . comment . content , "" ); ... This is the coolest part: we call the Inference API for the BLOOM model, prompting it with PROMPT , and we get the continuation text, i.e., the part generated by the model. Finally, we will post it as a reply in the same discussion thread: Copied const commentUrl = req. body . discussion . url . api + "/comment" ; const commentApiResponse = await fetch (commentUrl, { method : "POST" , headers : { Authorization : `Bearer ${process.env.HF_TOKEN} ` , "Content-Type" : "application/json" , }, body : JSON . stringify ({ comment : continuationText }), }); const apiOutput = await commentApiResponse. json (); Configure your Webhook to send events to your Space Last but not least, you’ll need to configure your Webhook to send POST requests to your Space. Let’s first grab our Space’s “direct URL” from the contextual menu. Click on “Embed this Space” and copy the “Direct URL”. Update your webhook to send requests to that URL: Result < > Update on GitHub ← How-to: Automatic fine-tuning with Auto-Train How-to: Create automatic metadata quality reports → Webhook guide: build a Discussion bot based on BLOOM Create your Webhook in your user profile Create a new Bot user profile Create a Space that will react to your Webhook Configure your Webhook to send events to your Space Result |
Torch_API.txt | Torch API Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Safetensors documentation Torch API Safetensors 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.5.0-rc.0 v0.3.2 v0.2.9 EN Getting started 🤗 Safetensors Speed Comparison Tensor Sharing in Pytorch Metadata Parsing Convert weights to safetensors API Torch API Tensorflow API PaddlePaddle API Flax API Numpy API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.5.0-rc.0 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Torch API safetensors.torch.load_file < source > ( filename : typing.Union[str, os.PathLike] device : typing.Union[str, int] = 'cpu' ) → Dict[str, torch.Tensor] Parameters filename ( str , or os.PathLike ) — The name of the file which contains the tensors device ( Union[str, int] , optional , defaults to cpu ) — The device where the tensors need to be located after load. available options are all regular torch device locations. Returns Dict[str, torch.Tensor] dictionary that contains name as key, value as torch.Tensor Loads a safetensors file into torch format. Example: Copied from safetensors.torch import load_file file_path = "./my_folder/bert.safetensors" loaded = load_file(file_path) safetensors.torch.load < source > ( data : bytes ) → Dict[str, torch.Tensor] Parameters data ( bytes ) — The content of a safetensors file Returns Dict[str, torch.Tensor] dictionary that contains name as key, value as torch.Tensor on cpu Loads a safetensors file into torch format from pure bytes. Example: Copied from safetensors.torch import load file_path = "./my_folder/bert.safetensors" with open (file_path, "rb" ) as f: data = f.read() loaded = load(data) safetensors.torch.save_file < source > ( tensors : typing.Dict[str, torch.Tensor] filename : typing.Union[str, os.PathLike] metadata : typing.Optional[typing.Dict[str, str]] = None ) → None Parameters tensors ( Dict[str, torch.Tensor] ) — The incoming tensors. Tensors need to be contiguous and dense. filename ( str , or os.PathLike )) — The filename we’re saving into. metadata ( Dict[str, str] , optional , defaults to None ) — Optional text only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns None Saves a dictionary of tensors into raw bytes in safetensors format. 
Example: Copied from safetensors.torch import save_file import torch tensors = { "embedding" : torch.zeros(( 512 , 1024 )), "attention" : torch.zeros(( 256 , 256 ))} save_file(tensors, "model.safetensors" ) safetensors.torch.save < source > ( tensors : typing.Dict[str, torch.Tensor] metadata : typing.Optional[typing.Dict[str, str]] = None ) → bytes Parameters tensors ( Dict[str, torch.Tensor] ) — The incoming tensors. Tensors need to be contiguous and dense. metadata ( Dict[str, str] , optional , defaults to None ) — Optional text only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns bytes The raw bytes representing the format Saves a dictionary of tensors into raw bytes in safetensors format. Example: Copied from safetensors.torch import save import torch tensors = { "embedding" : torch.zeros(( 512 , 1024 )), "attention" : torch.zeros(( 256 , 256 ))} byte_data = save(tensors) safetensors.torch.load_model < source > ( model : Module filename : typing.Union[str, os.PathLike] strict : bool = True device : typing.Union[str, int] = 'cpu' ) → `(missing, unexpected) Parameters model ( torch.nn.Module ) — The model to load onto. filename ( str , or os.PathLike ) — The filename location to load the file from. strict ( bool , optional , defaults to True) — Whether to fail if you’re missing keys or having unexpected ones. When false, the function simply returns missing and unexpected names. device ( Union[str, int] , optional , defaults to cpu ) — The device where the tensors need to be located after load. available options are all regular torch device locations. Returns `(missing, unexpected) (List[str], List[str]) missing are names in the model which were not modified during loading unexpected` are names that are on the file, but weren’t used during the load. Loads a given filename onto a torch model. This method exists specifically to avoid tensor sharing issues which are not allowed in safetensors . More information on tensor sharing safetensors.torch.save_model < source > ( model : Module filename : str metadata : typing.Optional[typing.Dict[str, str]] = None force_contiguous : bool = True ) Parameters model ( torch.nn.Module ) — The model to save on disk. filename ( str ) — The filename location to save the file metadata ( Dict[str, str] , optional ) — Extra information to save along with the file. Some metadata will be added for each dropped tensors. This information will not be enough to recover the entire shared structure but might help understanding things force_contiguous ( boolean , optional , defaults to True) — Forcing the state_dict to be saved as contiguous tensors. This has no effect on the correctness of the model, but it could potentially change performance if the layout of the tensor was chosen specifically for that reason. Saves a given torch model to specified filename. This method exists specifically to avoid tensor sharing issues which are not allowed in safetensors . More information on tensor sharing < > Update on GitHub ← Convert weights to safetensors Tensorflow API → Torch API |
Model_safety..txt | Model safety.
PyTorch uses pickle by default, which means that for quite a long while every model using that format could potentially execute unintended code simply by being loaded. There is a big red warning on Python's page for pickle, but for quite a while this was ignored by the community. Now that AI/ML is used much more ubiquitously, we need to switch away from this format.
Hugging Face is leading the effort here by creating a new format that contains pure data (safetensors) and moving all the libraries to use it by default, slowly but surely. The move is intentionally slow in order to keep the impact of breaking changes on users as small as possible.
TGI 2.0
Since the release of TGI 2.0, we take the opportunity of this major version increase to break backward compatibility for these pytorch models (since they are a huge security risk for anyone deploying them).
From now on, TGI will not automatically convert pickle files without the --trust-remote-code flag or TRUST_REMOTE_CODE=true in the environment variables. This flag is already used for community-defined inference code, and is therefore quite representative of the level of confidence you are giving the model providers.
If you want to use a model that uses pickle, but you still do not want to trust the authors entirely, we recommend converting it on our Space made for that purpose: https://huggingface.co/spaces/safetensors/convert
This Space will create a PR on the original model, which you can use directly regardless of its merge status with the original authors. Just use
Copied
docker run ....
--revision refs/pr/#ID
# Or use REVISION=refs/pr/#ID in the environment
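If you are unsure whether a conversion is even needed, one quick check (a sketch using huggingface_hub, not part of TGI itself; the repo id is just an example) is to list the repository files and look for safetensors weights:
Copied
from huggingface_hub import list_repo_files

repo_id = "gpt2"
files = list_repo_files(repo_id)
if any(f.endswith(".safetensors") for f in files):
    print(f"{repo_id} already ships safetensors weights, no conversion needed")
else:
    print(f"{repo_id} only ships pickle weights, consider opening a conversion PR")
|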
@huggingface_gguf.txt | @huggingface/gguf Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation @huggingface/gguf Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started @huggingface/gguf A GGUF parser that works on remotely hosted files. Spec Spec: https://github.com/ggerganov/ggml/blob/master/docs/gguf.md Reference implementation (Python): https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/gguf_reader.py Install Copied npm install @huggingface/gguf Usage Basic usage Copied import { GGMLQuantizationType , gguf } from "@huggingface/gguf" ; // remote GGUF file from https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF const URL_LLAMA = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/191239b/llama-2-7b-chat.Q2_K.gguf" ; const { metadata, tensorInfos } = await gguf ( URL_LLAMA ); console . log (metadata); // { // version: 2, // tensor_count: 291n, // kv_count: 19n, // "general.architecture": "llama", // "general.file_type": 10, // "general.name": "LLaMA v2", // ... // } console . 
log (tensorInfos); // [ // { // name: "token_embd.weight", // shape: [4096n, 32000n], // dtype: GGMLQuantizationType.Q2_K, // }, // ... , // { // name: "output_norm.weight", // shape: [4096n], // dtype: GGMLQuantizationType.F32, // } // ] Reading a local file Copied // Reading a local file. (Not supported on browser) const { metadata, tensorInfos } = await gguf ( './my_model.gguf' , { allowLocalFile : true }, ); Strictly typed By default, known fields in metadata are typed. This includes various fields found in llama.cpp , whisper.cpp and ggml . Copied const { metadata, tensorInfos } = await gguf ( URL_MODEL ); // Type check for model architecture at runtime if (metadata[ "general.architecture" ] === "llama" ) { // "llama.attention.head_count" is a valid key for llama architecture, this is typed as a number console . log (model[ "llama.attention.head_count" ]); // "mamba.ssm.conv_kernel" is an invalid key, because it requires model architecture to be mamba console . log (model[ "mamba.ssm.conv_kernel" ]); // error } Disable strictly typed Because GGUF format can be used to store tensors, we can technically use it for other usages. For example, storing control vectors , lora weights , etc. In case you want to use your own GGUF metadata structure, you can disable strict typing by casting the parse output to GGUFParseOutput<{ strict: false }> : Copied const { metadata, tensorInfos }: GGUFParseOutput <{ strict : false }> = await gguf ( URL_LLAMA ); Hugging Face Hub The Hub supports all file formats and has built-in features for GGUF format. Find more information at: http://hf.co/docs/hub/gguf . Acknowledgements & Inspirations https://github.com/hyparam/hyllama by @platypii (MIT license) https://github.com/ahoylabs/gguf.js by @biw @dkogut1996 @spencekim (MIT license) 🔥❤️ < > Update on GitHub ← Use Space mini_header in your app @huggingface/gguf Spec Install Usage Basic usage Reading a local file Strictly typed Disable strictly typed Hugging Face Hub Acknowledgements & Inspirations |
Learning_Tools_(Experimental_🧪).txt | Learning Tools (Experimental 🧪)
Using Large Language Models (LLMs) with tools has been a popular topic recently, with awesome works such as ToolFormer and ToolBench. In TRL, we provide a simple example of how to teach an LLM to use tools with reinforcement learning.
Here's an overview of the scripts in the trl repository:
calculator.py: Script to train an LLM to use a calculator with reinforcement learning.
triviaqa.py: Script to train an LLM to use a wiki tool to answer questions.
python_interpreter.py: Script to train an LLM to use a Python interpreter to solve math puzzles.
Note that the scripts above rely heavily on the TextEnvironment API, which is still under active development. The API may change in the future. Please see TextEnvironment for the related docs.
Learning to Use a Calculator
The rough idea is as follows:
Load a tool such as ybelkada/simple-calculator that parses a text calculation like "14 + 34" and returns the calculated number:
Copied
from transformers import AutoTokenizer, load_tool

tool = load_tool("ybelkada/simple-calculator")
tool_fn = lambda text: str(round(float(tool(text)), 2))  # rounding to 2 decimal places
Define a reward function that returns a positive reward if the tool returns the correct answer. In the script we create a dummy reward function like reward_fn = lambda x: 1, but we override the rewards directly later.
Create a prompt on how to use the tools
Copied
# system prompt
prompt = """\
What is 13.1-3?
<request><SimpleCalculatorTool>13.1-3<call>10.1<response>
Result=10.1<submit>
What is 4*3?
<request><SimpleCalculatorTool>4*3<call>12<response>
Result=12<submit>
What is 12.1+1?
<request><SimpleCalculatorTool>12.1+1<call>13.1<response>
Result=13.1<submit>
What is 12.1-20?
<request><SimpleCalculatorTool>12.1-20<call>-7.9<response> Result=-7.9<submit>""" Create a trl.TextEnvironment with the model Copied env = TextEnvironment( model, tokenizer, { "SimpleCalculatorTool" : tool_fn}, reward_fn, prompt, generation_kwargs=generation_kwargs, ) Then generate some data such as tasks = ["\n\nWhat is 13.1-3?", "\n\nWhat is 4*3?"] and run the environment with queries, responses, masks, rewards, histories = env.run(tasks) . The environment will look for the <call> token in the prompt and append the tool output to the response; it will also return the mask associated with the response. You can further use the histories to visualize the interaction between the model and the tool; histories[0].show_text() will show the text with color-coded tool output and histories[0].show_tokens(tokenizer) will show visualize the tokens. Finally, we can train the model with train_stats = ppo_trainer.step(queries, responses, rewards, masks) . The trainer will use the mask to ignore the tool output when computing the loss, make sure to pass that argument to step . Experiment results We trained a model with the above script for 10 random seeds. You can reproduce the run with the following command. Feel free to remove the --slurm-* arguments if you don’t have access to a slurm cluster. Copied WANDB_TAGS= "calculator_final" python benchmark/benchmark .py \ --command "python examples/research_projects/tools/calculator.py" \ --num-seeds 10 \ --start-seed 1 \ --workers 10 \ --slurm-gpus-per-task 1 \ --slurm-ntasks 1 \ --slurm-total-cpus 8 \ --slurm-template-path benchmark/trl.slurm_template We can then use openrlbenchmark which generates the following plot. Copied # pip install openrlbenchmark==0.2.1a5 python -m openrlbenchmark.rlops_multi_metrics \ --filters '?we=openrlbenchmark&wpn=trl&xaxis=_step&ceik=trl_ppo_trainer_config.value.tracker_project_name&cen=trl_ppo_trainer_config.value.log_with&metrics=env/reward_mean&metrics=objective/kl' \ 'wandb?tag=calculator_final&cl=calculator_mask' \ --env-ids trl \ --check-empty-runs \ --pc.ncols 2 \ --pc.ncols-legend 1 \ --output-filename static / 0compare \ --scan-history As we can see, while 1-2 experiments crashed for some reason, most of the runs obtained near perfect proficiency in the calculator task. (Early Experiments 🧪): learning to use a wiki tool for question answering In the ToolFormer paper, it shows an interesting use case that utilizes a Wikipedia Search tool to help answer questions. In this section, we attempt to perform similar experiments but uses RL instead to teach the model to use a wiki tool on the TriviaQA dataset. Note that many settings are different so the results are not directly comparable. Building a search index Since ToolFormer did not open source, we needed to first replicate the search index. It is mentioned in their paper that the authors built the search index using a BM25 retriever that indexes the Wikipedia dump from KILT Fortunately, pyserini already implements the BM25 retriever and provides a prebuilt index for the KILT Wikipedia dump. We can use the following code to search the index. 
Copied from pyserini.search.lucene import LuceneSearcher import json searcher = LuceneSearcher.from_prebuilt_index( 'wikipedia-kilt-doc' ) def search ( query ): hits = searcher.search(query, k= 1 ) hit = hits[ 0 ] contents = json.loads(hit.raw)[ 'contents' ] return contents print (search( "tennis racket" )) Copied Racket (sports equipment) A racket or racquet is a sports implement consisting of a handled frame with an open hoop across which a network of strings or catgut is stretched tightly. It is used for striking a ball or shuttlecock in games such as squash, tennis, racquetball, and badminton. Collectively, these games are known as racket sports. Racket design and manufacturing has changed considerably over the centuries. The frame of rackets for all sports was traditionally made of solid wood (later laminated wood) and the strings of animal intestine known as catgut. The traditional racket size was limited by the strength and weight of the wooden frame which had to be strong enough to hold the strings and stiff enough to hit the ball or shuttle. Manufacturers started adding non-wood laminates to wood rackets to improve stiffness. Non-wood rackets were made first of steel, then of aluminum, and then carbon fiber composites. Wood is still used for real tennis, rackets, and xare. Most rackets are now made of composite materials including carbon fiber or fiberglass, metals such as titanium alloys, or ceramics. ... We then basically deployed this snippet as a Hugging Face space here , so that we can use the space as a transformers.Tool later. Experiment settings We use the following settings: use the bigcode/starcoderbase model as the base model use the pyserini-wikipedia-kilt-doc space as the wiki tool and only uses the first paragrahs of the search result, allowing the TextEnvironment to obtain at most max_tool_reponse=400 response tokens from the tool. test if the response contain the answer string, if so, give a reward of 1, otherwise, give a reward of 0. notice this is a simplified evaluation criteria. In ToolFormer , the authors checks if the first 20 words of the response contain the correct answer. used the following prompt that demonstrates the usage of the wiki tool. Copied prompt = """\ Answer the following question: Q: In which branch of the arts is Patricia Neary famous? A: Ballets A2: <request><Wiki>Patricia Neary<call>Patricia Neary (born October 27, 1942) is an American ballerina, choreographer and ballet director, who has been particularly active in Switzerland. She has also been a highly successful ambassador for the Balanchine Trust, bringing George Balanchine's ballets to 60 cities around the globe.<response> Result=Ballets<submit> Q: Who won Super Bowl XX? A: Chicago Bears A2: <request><Wiki>Super Bowl XX<call>Super Bowl XX was an American football game between the National Football Conference (NFC) champion Chicago Bears and the American Football Conference (AFC) champion New England Patriots to decide the National Football League (NFL) champion for the 1985 season. The Bears defeated the Patriots by the score of 46–10, capturing their first NFL championship (and Chicago's first overall sports victory) since 1963, three years prior to the birth of the Super Bowl. Super Bowl XX was played on January 26, 1986 at the Louisiana Superdome in New Orleans.<response> Result=Chicago Bears<submit> Q: """ Result and Discussion Our experiments show that the agent can learn to use the wiki tool to answer questions. 
The learning curves would go up mostly, but one of the experiment did crash. Wandb report is here for further inspection. Note that the correct rate of the trained model is on the low end, which could be due to the following reasons: incorrect searches: When given the question "What is Bruce Willis' real first name?" if the model searches for Bruce Willis , our wiki tool returns “Patrick Poivey (born 18 February 1948) is a French actor. He is especially known for his voice: he is the French dub voice of Bruce Willis since 1988. But a correct search should be Walter Bruce Willis (born March 19, 1955) is an American former actor. He achieved fame with a leading role on the comedy-drama series Moonlighting (1985–1989) and appeared in over a hundred films, gaining recognition as an action hero after his portrayal of John McClane in the Die Hard franchise (1988–2013) and other roles.[1][2]” unnecessarily long response : The wiki tool by default sometimes output very long sequences. E.g., when the wiki tool searches for “Brown Act” Our wiki tool returns “The Ralph M. Brown Act, located at California Government Code 54950 “et seq.”, is an act of the California State Legislature, authored by Assemblymember Ralph M. Brown and passed in 1953, that guarantees the public’s right to attend and participate in meetings of local legislative bodies.” ToolFormer ’s wiki tool returns “The Ralph M. Brown Act is an act of the California State Legislature that guarantees the public’s right to attend and participate in meetings of local legislative bodies.” which is more succinct. (Early Experiments 🧪): solving math puzzles with python interpreter In this section, we attempt to teach the model to use a python interpreter to solve math puzzles. The rough idea is to give the agent a prompt like the following: Copied prompt = """\ Example of using a Python API to solve math questions. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? <request><PythonInterpreter> def solution(): money_initial = 23 bagels = 5 bagel_cost = 3 money_spent = bagels * bagel_cost money_left = money_initial - money_spent result = money_left return result print(solution()) <call>72<response> Result = 72 <submit> Q: """ Training experiment can be found at https://wandb.ai/lvwerra/trl-gsm8k/runs/a5odv01y < > Update on GitHub ← Training StackLlama Multi Adapter RLHF → Learning Tools ( Experimental 🧪) Learning to Use a Calculator Experiment results ( Early Experiments 🧪): learning to use a wiki tool for question answering Building a search index Experiment settings Result and Discussion ( Early Experiments 🧪): solving math puzzles with python interpreter |
PEFT_integrations.txt | PEFT integrations
PEFT's practical benefits extend to other Hugging Face libraries like Diffusers and Transformers. One of the main benefits of PEFT is that an adapter file generated by a PEFT method is a lot smaller than the original model, which makes it super easy to manage and use multiple adapters. You can use one pretrained base model for multiple tasks by simply loading a new adapter finetuned for the task you're solving. Or you can combine multiple adapters with a text-to-image diffusion model to create new effects.
This tutorial will show you how PEFT can help you manage adapters in Diffusers and Transformers.
Diffusers
Diffusers is a generative AI library for creating images and videos from text or images with diffusion models. LoRA is an especially popular training method for diffusion models because you can very quickly train and share diffusion models to generate images in new styles. To make it easier to use and try multiple LoRA models, Diffusers uses the PEFT library to help manage different adapters for inference.
For example, load a base model and then load the artificialguybr/3DRedmond-V1 adapter for inference with the load_lora_weights method. The adapter_name argument in the loading method is enabled by PEFT and allows you to set a name for the adapter so it is easier to reference.
Copied import torch from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16 ).to( "cuda" ) pipeline.load_lora_weights( "peft-internal-testing/artificialguybr__3DRedmond-V1" , weight_name= "3DRedmond-3DRenderStyle-3DRenderAF.safetensors" , adapter_name= "3d" ) image = pipeline( "sushi rolls shaped like kawaii cat faces" ).images[ 0 ] image Now let’s try another cool LoRA model, ostris/super-cereal-sdxl-lora . All you need to do is load and name this new adapter with adapter_name , and use the set_adapters method to set it as the currently active adapter. Copied pipeline.load_lora_weights( "ostris/super-cereal-sdxl-lora" , weight_name= "cereal_box_sdxl_v1.safetensors" , adapter_name= "cereal" ) pipeline.set_adapters( "cereal" ) image = pipeline( "sushi rolls shaped like kawaii cat faces" ).images[ 0 ] image Finally, you can call the disable_lora method to restore the base model. Copied pipeline.disable_lora() Learn more about how PEFT supports Diffusers in the Inference with PEFT tutorial. Transformers 🤗 Transformers is a collection of pretrained models for all types of tasks in all modalities. You can load these models for training or inference. Many of the models are large language models (LLMs), so it makes sense to integrate PEFT with Transformers to manage and train adapters. Load a base pretrained model to train. Copied from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m" ) Next, add an adapter configuration to specify how to adapt the model parameters. Call the add_adapter() method to add the configuration to the base model. Copied from peft import LoraConfig peft_config = LoraConfig( lora_alpha= 16 , lora_dropout= 0.1 , r= 64 , bias= "none" , task_type= "CAUSAL_LM" ) model.add_adapter(peft_config) Now you can train the model with Transformer’s Trainer class or whichever training framework you prefer. To use the newly trained model for inference, the AutoModel class uses PEFT on the backend to load the adapter weights and configuration file into a base pretrained model. Copied from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "peft-internal-testing/opt-350m-lora" ) Alternatively, you can use transformers Pipelines to load the model for conveniently running inference: Copied from transformers import pipeline model = pipeline( "text-generation" , "peft-internal-testing/opt-350m-lora" ) print (model( "Hello World" )) If you’re interested in comparing or using more than one adapter, you can call the add_adapter() method to add the adapter configuration to the base model. The only requirement is the adapter type must be the same (you can’t mix a LoRA and LoHa adapter). Copied from transformers import AutoModelForCausalLM from peft import LoraConfig model = AutoModelForCausalLM.from_pretrained( "facebook/opt-350m" ) model.add_adapter(lora_config_1, adapter_name= "adapter_1" ) Call add_adapter() again to attach a new adapter to the base model. Copied model.add_adapter(lora_config_2, adapter_name= "adapter_2" ) Then you can use set_adapter() to set the currently active adapter. Copied model.set_adapter( "adapter_1" ) output = model.generate(**inputs) print (tokenizer.decode(output_disabled[ 0 ], skip_special_tokens= True )) To disable the adapter, call the disable_adapters method. Copied model.disable_adapters() The enable_adapters can be used to enable the adapters again. 
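Putting these calls together, here is a short sketch (reusing the model and inputs from above, and assuming both adapters have already been added) of switching between adapters and the base model at inference time:
Copied
model.set_adapter("adapter_2")   # make adapter_2 the active adapter
adapter_output = model.generate(**inputs)

model.disable_adapters()         # temporarily fall back to the plain base model
base_output = model.generate(**inputs)

model.enable_adapters()          # turn the adapter layers back on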
If you're curious, check out the Load and train adapters with PEFT tutorial to learn more. |
Models_Frequently_Asked_Questions.txt | Models Frequently Asked Questions
How can I see what dataset was used to train the model?
It's up to the person who uploaded the model to include the training information! A user can specify the dataset used for training a model. If the datasets used for the model are on the Hub, the uploader may have included them in the model card's metadata. In that case, the datasets would be linked with a handy card on the right side of the model page:
How can I see an example of the model in action?
Models can have inference widgets that let you try out the model in the browser! Inference widgets are easy to configure, and there are many different options at your disposal. Visit the Widgets documentation to learn more.
The Hugging Face Hub is also home to Spaces, which are interactive demos used to showcase models. If a model has any Spaces associated with it, you'll find them linked on the model page like so:
Spaces are a great way to show off a model you've made or explore new ways to use existing models! Visit the Spaces documentation to learn how to make your own.
How do I upload an update / new version of the model?
Releasing an update to a model that you've already published can be done by pushing a new commit to your model's repo. To do this, go through the same process that you followed to upload your initial model.
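For example, a minimal sketch of pushing an updated checkpoint as a new commit with huggingface_hub (the repo id and local folder path are placeholders):
Copied
from huggingface_hub import upload_folder

upload_folder(
    repo_id="your-username/your-model",     # existing model repo to update
    folder_path="./path/to/updated-model",  # local folder with the new files
    commit_message="Update model weights",
)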
Your previous model versions will remain in the repository's commit history, so you can still download previous model versions from a specific git commit or tag, or revert to previous versions if needed.
What if I have a different checkpoint of the model trained on a different dataset?
By convention, each model repo should contain a single checkpoint. You should upload any new checkpoints trained on different datasets to the Hub in a new model repo. You can link the models together by using a tag specified in the tags key in your model card's metadata, by using Collections to group distinct related repositories together, or by linking to them in the model cards. The akiyamasho/AnimeBackgroundGAN-Shinkai model, for example, references other checkpoints in the model card under "Other pre-trained model versions".
Can I link my model to a paper on arXiv?
If the model card includes a link to a paper on arXiv, the Hugging Face Hub will extract the arXiv ID and include it in the model tags with the format arxiv:<PAPER ID>. Clicking on the tag will let you:
Visit the paper page
Filter for other models on the Hub that cite the same paper.
Read more about paper pages here. |
Access_🤗_Inference_Endpoints.txt | Access 🤗 Inference Endpoints
To access the Inference Endpoints web application, you or your organization need to add a valid payment method to your Hugging Face account. You can check your billing settings (https://huggingface.co/settings/billing) if you're unsure whether you have an active payment method.
There are two pricing plans:
Inference Endpoints pricing is based on your hourly compute, and billed monthly. This can be as low as $0.032 per CPU core/hr and $0.5 per GPU/hr depending on your needs.
There is also an Enterprise plan for Inference Endpoints which offers dedicated support, 24/7 SLAs, and uptime guarantees. Pricing for Enterprise is custom and based on volume commitments and annual contracts; contact us for a quote if interested!
After you've added a valid payment method to your account, access the Inference Endpoints web application and start deploying! 🥳 |
Speed_Comparison.txt | Speed Comparison Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Safetensors documentation Speed Comparison Safetensors 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.5.0-rc.0 v0.3.2 v0.2.9 EN Getting started 🤗 Safetensors Speed Comparison Tensor Sharing in Pytorch Metadata Parsing Convert weights to safetensors API Torch API Tensorflow API PaddlePaddle API Flax API Numpy API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.5.0-rc.0 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Speed Comparison Safetensors is really fast. Let’s compare it against PyTorch by loading gpt2 weights. To run the GPU benchmark , make sure your machine has GPU or you have selected GPU runtime if you are using Google Colab. Before you begin, make sure you have all the necessary libraries installed: Copied pip install safetensors huggingface_hub torch Let’s start by importing all the packages that will be used: Copied >>> import os >>> import datetime >>> from huggingface_hub import hf_hub_download >>> from safetensors.torch import load_file >>> import torch Download safetensors & torch weights for gpt2: Copied >>> sf_filename = hf_hub_download( "gpt2" , filename= "model.safetensors" ) >>> pt_filename = hf_hub_download( "gpt2" , filename= "pytorch_model.bin" ) CPU benchmark Copied >>> start_st = datetime.datetime.now() >>> weights = load_file(sf_filename, device= "cpu" ) >>> load_time_st = datetime.datetime.now() - start_st >>> print ( f"Loaded safetensors {load_time_st} " ) >>> start_pt = datetime.datetime.now() >>> weights = torch.load(pt_filename, map_location= "cpu" ) >>> load_time_pt = datetime.datetime.now() - start_pt >>> print ( f"Loaded pytorch {load_time_pt} " ) >>> print ( f"on CPU, safetensors is faster than pytorch by: {load_time_pt/load_time_st: .1 f} X" ) Loaded safetensors 0 : 00 : 00.004015 Loaded pytorch 0 : 00 : 00.307460 on CPU, safetensors is faster than pytorch by: 76.6 X This speedup is due to the fact that this library avoids unnecessary copies by mapping the file directly. It is actually possible to do on pure pytorch . 
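For instance, recent PyTorch releases (2.1+) expose the same memory-mapping idea directly in torch.load; a quick sketch reusing pt_filename from above (this assumes the checkpoint was saved with the default zipfile serialization, and the exact speedup is machine-dependent):
Copied
# Memory-map the pickle file instead of reading it into memory up front.
weights = torch.load(pt_filename, map_location="cpu", mmap=True, weights_only=True)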
The currently shown speedup was measured on:

OS: Ubuntu 18.04.6 LTS
CPU: Intel(R) Xeon(R) CPU @ 2.00GHz

GPU benchmark

>>> # This is required because this feature hasn't been fully verified yet, but
>>> # it's been tested on many different environments
>>> os.environ["SAFETENSORS_FAST_GPU"] = "1"

>>> # Keep CUDA startup out of the measurement
>>> torch.zeros((2, 2)).cuda()

>>> start_st = datetime.datetime.now()
>>> weights = load_file(sf_filename, device="cuda:0")
>>> load_time_st = datetime.datetime.now() - start_st
>>> print(f"Loaded safetensors {load_time_st}")

>>> start_pt = datetime.datetime.now()
>>> weights = torch.load(pt_filename, map_location="cuda:0")
>>> load_time_pt = datetime.datetime.now() - start_pt
>>> print(f"Loaded pytorch {load_time_pt}")

>>> print(f"on GPU, safetensors is faster than pytorch by: {load_time_pt/load_time_st:.1f}X")
Loaded safetensors 0:00:00.165206
Loaded pytorch 0:00:00.353889
on GPU, safetensors is faster than pytorch by: 2.1X

The speedup works because this library is able to skip unnecessary CPU allocations. Unfortunately, as far as we know, it cannot be replicated in pure PyTorch. The library works by memory-mapping the file, creating an empty tensor with PyTorch, and calling cudaMemcpy directly to move the tensor onto the GPU.

The currently shown speedup was measured on:

OS: Ubuntu 18.04.6 LTS
GPU: Tesla T4
Driver Version: 460.32.03
CUDA Version: 11.2
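Regarding the note in the CPU benchmark that a copy-free load "is actually possible to do in pure PyTorch": one modern way to approximate it (not necessarily the exact approach the authors had in mind) is memory-mapped loading via torch.load. This sketch assumes PyTorch 2.1 or newer and a checkpoint saved with PyTorch's zipfile serialization format.

# Memory-map the pickle-based checkpoint instead of copying it into RAM first.
# weights_only=True limits unpickling to tensors and simple containers, which
# is safer when loading files you did not create yourself.
weights = torch.load(pt_filename, map_location="cpu", mmap=True, weights_only=True)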
Managing_organizations.txt | Managing organizations

Creating an organization

Visit the New Organization form to create an organization.

Managing members

New members can be added to an organization by visiting the Organization settings and clicking on the Members tab. There, you'll be able to generate an invite link, add members individually, or send out email invitations in bulk. If the "Allow requests to join from the organization page" setting is enabled, you'll also be able to approve or reject any pending requests on the Members page. You can also revoke a user's membership or change their role on this page.

Organization domain name

Under the Account tab in the Organization settings, you can set an Organization domain name. Specifying a domain name will allow any user with a matching email address on the Hugging Face Hub to join your organization.
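The membership workflow above is handled in the web UI, but it can be handy to check which organizations an account belongs to from a script. The snippet below is a small, hedged sketch using huggingface_hub: it assumes you are logged in (via huggingface-cli login or the HF_TOKEN environment variable) and that the whoami() payload exposes an "orgs" list, which may differ across API versions.

from huggingface_hub import HfApi

api = HfApi()
user_info = api.whoami()  # account details for the authenticated token

# Print the organizations this account is a member of, if the field is present.
for org in user_info.get("orgs", []):
    print(org.get("name"))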
torch.compile.txt | torch.compile

In PEFT, torch.compile works for some but not all features. The reason it won't always work is that PEFT is highly dynamic in certain places (loading and switching between multiple adapters, for instance), which can cause trouble for torch.compile. In other places, torch.compile may work, but won't be as fast as expected because of graph breaks.

If you don't see an error, it doesn't necessarily mean that torch.compile worked correctly. It might give you an output, but the output is incorrect. This guide describes what works with torch.compile and what doesn't. Unless indicated otherwise, the default torch.compile settings were used.

Training and inference with torch.compile

These features work with torch.compile. Everything listed below was tested with a causal LM (a minimal sketch of this basic case is shown at the end of this page):

Training with Trainer from 🤗 transformers
Training with a custom PyTorch loop
Inference
Generation

The following adapters were tested successfully:

AdaLoRA
BOFT
IA³
Layer Norm Tuning
LoHa
LoRA
LoRA + DoRA
OFT
VeRA
HRA

The following adapters don't work correctly for training or inference when using torch.compile:

LoKr
LoRA targeting embedding layers

Advanced PEFT features with torch.compile

Below are some of the more advanced PEFT features that work. They were all tested with LoRA:

modules_to_save (i.e. config = LoraConfig(..., modules_to_save=...))
Merging adapters (one or multiple)
Merging multiple adapters into one adapter (i.e. calling model.add_weighted_adapter(...))

Generally, we can expect that if a feature works correctly with LoRA and is also supported by other adapter types, it should also work for that adapter type.

The more advanced PEFT features below don't work in conjunction with torch.compile.
Tests were run with LoRA:

Using PEFT adapters with quantization (bitsandbytes)
Inference with multiple adapters
Unloading (i.e. calling model.merge_and_unload())
Disabling adapters (i.e. using with model.disable_adapter())
Mixed adapter batches (i.e. calling model(batch, adapter_names=["__base__", "default", "other", ...]))

Test cases

All the use cases listed above are tested inside of peft/tests/test_torch_compile.py. If you want to check in more detail how we tested a certain feature, please go to that file and check the test that corresponds to your use case. If you have another use case where you know that torch.compile does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.
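To make the lists above concrete, here is the minimal sketch referenced earlier for the basic supported case: compiling a LoRA-adapted causal LM for inference with default torch.compile settings. It is an illustration rather than the exact setup used in peft/tests/test_torch_compile.py; the model id and LoRA hyperparameters are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "gpt2"  # placeholder; any causal LM should behave similarly
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with a LoRA adapter (one of the adapter types listed as
# working with torch.compile above).
lora_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)
model = get_peft_model(base_model, lora_config)

# Compile with default settings, as in this guide, then run inference.
model = torch.compile(model)
inputs = tokenizer("torch.compile with PEFT", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)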
Load_a_dataset_from_the_Hub.txt | Load a dataset from the Hub

Finding high-quality datasets that are reproducible and accessible can be difficult. One of 🤗 Datasets' main goals is to provide a simple way to load a dataset of any format or type. The easiest way to get started is to discover an existing dataset on the Hugging Face Hub - a community-driven collection of datasets for tasks in NLP, computer vision, and audio - and use 🤗 Datasets to download and generate the dataset.

This tutorial uses the rotten_tomatoes and MInDS-14 datasets, but feel free to load any dataset you want and follow along. Head over to the Hub now and find a dataset for your task!

Load a dataset

Before you take the time to download a dataset, it's often helpful to quickly get some general information about it. A dataset's information is stored inside DatasetInfo and can include details such as the dataset description, features, and dataset size.
Use the load_dataset_builder() function to load a dataset builder and inspect a dataset's attributes without committing to downloading it:

>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder("rotten_tomatoes")

# Inspect dataset description
>>> ds_builder.info.description
Movie Review Dataset. This is a dataset of containing 5,331 positive and 5,331 negative processed sentences from Rotten Tomatoes movie reviews. This data was first used in Bo Pang and Lillian Lee, ``Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales.'', Proceedings of the ACL, 2005.

# Inspect dataset features
>>> ds_builder.info.features
{'label': ClassLabel(names=['neg', 'pos'], id=None), 'text': Value(dtype='string', id=None)}

If you're happy with the dataset, then load it with load_dataset():

>>> from datasets import load_dataset
>>> dataset = load_dataset("rotten_tomatoes", split="train")

Splits

A split is a specific subset of a dataset like train and test. List a dataset's split names with the get_dataset_split_names() function:

>>> from datasets import get_dataset_split_names
>>> get_dataset_split_names("rotten_tomatoes")
['train', 'validation', 'test']

Then you can load a specific split with the split parameter. Loading a dataset split returns a Dataset object:

>>> from datasets import load_dataset
>>> dataset = load_dataset("rotten_tomatoes", split="train")
>>> dataset
Dataset({
    features: ['text', 'label'],
    num_rows: 8530
})

If you don't specify a split, 🤗 Datasets returns a DatasetDict object instead:

>>> from datasets import load_dataset
>>> dataset = load_dataset("rotten_tomatoes")
DatasetDict({
    train: Dataset({
        features: ['text', 'label'],
        num_rows: 8530
    })
    validation: Dataset({
        features: ['text', 'label'],
        num_rows: 1066
    })
    test: Dataset({
        features: ['text', 'label'],
        num_rows: 1066
    })
})

Configurations

Some datasets contain several sub-datasets. For example, the MInDS-14 dataset has several sub-datasets, each one containing audio data in a different language. These sub-datasets are known as configurations or subsets, and you must explicitly select one when loading the dataset. If you don't provide a configuration name, 🤗 Datasets will raise a ValueError and remind you to choose a configuration.

Use the get_dataset_config_names() function to retrieve a list of all the possible configurations available for your dataset:

>>> from datasets import get_dataset_config_names
>>> configs = get_dataset_config_names("PolyAI/minds14")
>>> print(configs)
['cs-CZ', 'de-DE', 'en-AU', 'en-GB', 'en-US', 'es-ES', 'fr-FR', 'it-IT', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-PT', 'ru-RU', 'zh-CN', 'all']

Then load the configuration you want:

>>> from datasets import load_dataset
>>> mindsFR = load_dataset("PolyAI/minds14", "fr-FR", split="train")

Remote code

Certain dataset repositories contain a loading script with the Python code used to generate the dataset. All files and code uploaded to the Hub are scanned for malware (refer to the Hub security documentation for more information), but you should still review the dataset loading scripts and authors to avoid executing malicious code on your machine.
You should set trust_remote_code=True to use a dataset with a loading script, or you will get an error:

>>> from datasets import get_dataset_config_names, get_dataset_split_names, load_dataset
>>> c4 = load_dataset("c4", "en", split="train", trust_remote_code=True)
>>> get_dataset_config_names("c4", trust_remote_code=True)
['en', 'realnewslike', 'en.noblocklist', 'en.noclean']
>>> get_dataset_split_names("c4", "en", trust_remote_code=True)
['train', 'validation']

For security reasons, 🤗 Datasets does not allow running dataset loading scripts by default, and you have to pass trust_remote_code=True to load datasets that require running a dataset script.
Create_a_dataset.txt | Create a dataset

Sometimes, you may need to create a dataset if you're working with your own data. Creating a dataset with 🤗 Datasets confers all the advantages of the library to your dataset: fast loading and processing, streaming enormous datasets, memory-mapping, and more. You can easily and rapidly create a dataset with 🤗 Datasets' low-code approaches, reducing the time it takes to start training a model. In many cases, it is as easy as dragging and dropping your data files into a dataset repository on the Hub.

In this tutorial, you'll learn how to use 🤗 Datasets low-code methods for creating all types of datasets:

Folder-based builders for quickly creating an image or audio dataset
from_ methods for creating datasets from local files

File-based builders

🤗 Datasets supports many common formats such as csv, json/jsonl, parquet, and txt. For example, it can read a dataset made up of one or several CSV files (to load several files at once, pass them as a list to data_files):

>>> from datasets import load_dataset
>>> dataset = load_dataset("csv", data_files="my_file.csv")

To get the list of supported formats and code examples, follow this guide here.

Folder-based builders

There are two folder-based builders, ImageFolder and AudioFolder.
These are low-code methods for quickly creating an image or speech and audio dataset with several thousand examples. They are great for rapidly prototyping computer vision and speech models before scaling to a larger dataset. Folder-based builders take your data and automatically generate the dataset's features, splits, and labels. Under the hood:

ImageFolder uses the Image feature to decode an image file. Many image extension formats are supported, such as jpg and png; you can check the complete list of supported image extensions.
AudioFolder uses the Audio feature to decode an audio file. Audio extensions such as wav and mp3 are supported; you can check the complete list of supported audio extensions.

The dataset splits are generated from the repository structure, and the label names are automatically inferred from the directory name. For example, if your image dataset (it is the same for an audio dataset) is stored like this:

pokemon/train/grass/bulbasaur.png
pokemon/train/fire/charmander.png
pokemon/train/water/squirtle.png
pokemon/test/grass/ivysaur.png
pokemon/test/fire/charmeleon.png
pokemon/test/water/wartortle.png

Then the folder-based builder generates examples where each one is a dictionary containing the decoded image (or audio) and a label inferred from the directory name (grass, fire, or water in this case).

Create the image dataset by specifying imagefolder in load_dataset():

>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")

An audio dataset is created in the same way, except you specify audiofolder in load_dataset() instead:

>>> from datasets import load_dataset
>>> dataset = load_dataset("audiofolder", data_dir="/path/to/folder")

Any additional information about your dataset, such as text captions or transcriptions, can be included with a metadata.csv file in the folder containing your dataset. The metadata file needs to have a file_name column that links the image or audio file to its corresponding metadata:

file_name, text
bulbasaur.png, There is a plant seed on its back right from the day this Pokémon is born.
charmander.png, It has a preference for hot things.
squirtle.png, When it retracts its long neck into its shell, it squirts out water with vigorous force.

To learn more about each of these folder-based builders, check out the ImageFolder or AudioFolder guides.

From Python dictionaries

You can also create a dataset from data in Python dictionaries. There are two ways you can create a dataset using the from_ methods:

The from_generator() method is the most memory-efficient way to create a dataset from a generator due to a generator's iterative behavior. This is especially useful when you're working with a really large dataset that may not fit in memory, since the dataset is generated on disk progressively and then memory-mapped.

>>> from datasets import Dataset
>>> def gen():
...     yield {"pokemon": "bulbasaur", "type": "grass"}
...     yield {"pokemon": "squirtle", "type": "water"}
>>> ds = Dataset.from_generator(gen)
>>> ds[0]
{"pokemon": "bulbasaur", "type": "grass"}

A generator-based IterableDataset needs to be iterated over with a for loop, for example:

>>> from datasets import IterableDataset
>>> ds = IterableDataset.from_generator(gen)
>>> for example in ds:
...     print(example)
{"pokemon": "bulbasaur", "type": "grass"}
{"pokemon": "squirtle", "type": "water"}

The from_dict() method is a straightforward way to create a dataset from a dictionary:

>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"pokemon": ["bulbasaur", "squirtle"], "type": ["grass", "water"]})
>>> ds[0]
{"pokemon": "bulbasaur", "type": "grass"}

To create an image or audio dataset, chain the cast_column() method with from_dict() and specify the column and feature type. For example, to create an audio dataset:

>>> audio_dataset = Dataset.from_dict({"audio": ["path/to/audio_1", ..., "path/to/audio_n"]}).cast_column("audio", Audio())

Now that you know how to create a dataset, consider sharing it on the Hub so the community can also benefit from your work! Go on to the next section to learn how to share your dataset.
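Sharing is covered in detail in the next section; as a quick preview, the one-liner below assumes you are already logged in with a write token and uses a hypothetical repository id.

# Upload the dataset created above to the Hub.
# "your-username/pokemon-types" is a placeholder repository id.
ds.push_to_hub("your-username/pokemon-types")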