🤗_Hugging_Face_Agents.js.txt
🤗 Hugging Face Agents.js

A way to call Hugging Face models and Inference Endpoints from natural language, using an LLM.

Install

```
pnpm add @huggingface/agents

npm add @huggingface/agents

yarn add @huggingface/agents
```

Deno

```ts
// esm.sh
import { HfAgent } from "https://esm.sh/@huggingface/agents";
// or npm:
import { HfAgent } from "npm:@huggingface/agents";
```

Usage

Agents.js leverages LLMs hosted as Inference Endpoints on HF, so you need to create an account and generate an access token.

```ts
import { HfAgent } from "@huggingface/agents";

const agent = new HfAgent("hf_...");

const code = await agent.generateCode("Draw a picture of a cat, wearing a top hat.");
console.log(code); // always good to check the generated code before running it
const outputs = await agent.evaluateCode(code);
console.log(outputs);
```

Choose your LLM

You can also use your own LLM, by calling one of the LLMFrom* functions.

From the Hub

You can specify any valid model on the Hub, as long as it has an API.

```ts
import { HfAgent, LLMFromHub } from "@huggingface/agents";

const agent = new HfAgent(
  "hf_...",
  LLMFromHub("hf_...", "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5")
);
```

From your own endpoints

You can also specify your own endpoint, as long as it implements the same API, for example using Text Generation Inference and Inference Endpoints.

```ts
import { HfAgent, LLMFromEndpoint } from "@huggingface/agents";

const agent = new HfAgent("hf_...", LLMFromEndpoint("hf_...", "http://..."));
```

Custom LLM

An LLM in this context is defined as any async function that takes a string input and returns a string. For example, if you wanted to use the OpenAI API you could do so like this:

```ts
import { HfAgent } from "@huggingface/agents";
import { Configuration, OpenAIApi } from "openai";

const api = new OpenAIApi(new Configuration({ apiKey: "sk-..." }));

const llmOpenAI = async (prompt: string): Promise<string> => {
  return (
    (
      await api.createCompletion({
        model: "text-davinci-003",
        prompt: prompt,
        max_tokens: 1000,
      })
    ).data.choices[0].text ?? ""
  );
};

const agent = new HfAgent("hf_...", llmOpenAI);

// do anything you want with the agent here
```

Tools

By default, agents ship with four tools: textToImage, textToSpeech, imageToText, and speechToText. You can easily expand the list of tools by creating new tools and passing them at initialization.

```ts
import { HfAgent, defaultTools, LLMFromHub } from "@huggingface/agents";
import type { Tool } from "@huggingface/agents/src/types";

// define the tool
const uppercaseTool: Tool = {
  name: "uppercase",
  description: "uppercase the input string and return it",
  examples: [
    {
      prompt: "uppercase the string: hello world",
      code: `const output = uppercase("hello world")`,
      tools: ["uppercase"],
    },
  ],
  call: async (input) => {
    const data = await input;
    if (typeof data !== "string") {
      throw new Error("Input must be a string");
    }
    return data.toUpperCase();
  },
};

// pass it to the agent
const agent = new HfAgent(
  process.env.HF_TOKEN,
  LLMFromHub("hf_...", "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"),
  [uppercaseTool, ...defaultTools]
);
```

Dependencies

@huggingface/inference: required to call the inference endpoints themselves.
Scripts_Utilities.txt
Scripts Utilities

ScriptArguments

class trl.ScriptArguments

( dataset_name: str, dataset_config: Optional[str] = None, dataset_train_split: str = 'train', dataset_test_split: str = 'test', gradient_checkpointing_use_reentrant: bool = False, ignore_bias_buffers: bool = False )

Parameters:
- dataset_name (str): Dataset name.
- dataset_config (str or None, optional, defaults to None): Dataset configuration name. Corresponds to the name argument of the load_dataset function.
- dataset_train_split (str, optional, defaults to "train"): Dataset split to use for training.
- dataset_test_split (str, optional, defaults to "test"): Dataset split to use for evaluation.
- gradient_checkpointing_use_reentrant (bool, optional, defaults to False): Whether to apply use_reentrant for gradient checkpointing.
- ignore_bias_buffers (bool, optional, defaults to False): Debug argument for distributed training. Fixes DDP issues with LM bias/mask buffers (invalid scalar type, inplace operation). See https://github.com/huggingface/transformers/issues/22482#issuecomment-1595790992.

Arguments common to all scripts.

TrlParser

class trl.TrlParser

( dataclass_types: Union[DataClassType, Iterable[DataClassType], None] = None, ignore_extra_args: Optional[bool] = None, **kwargs )

Parameters:
- dataclass_types (Union[DataClassType, Iterable[DataClassType]] or None, optional, defaults to None): Dataclass types to use for argument parsing.
- **kwargs: Additional keyword arguments passed to the transformers.HfArgumentParser constructor.

A subclass of transformers.HfArgumentParser designed for parsing command-line arguments with dataclass-backed configurations, while also supporting configuration file loading and environment variable management.
Examples:

```yaml
# config.yaml
env:
    VAR1: value1
arg1: 23
```

```python
# main.py
import os
from dataclasses import dataclass

from trl import TrlParser


@dataclass
class MyArguments:
    arg1: int
    arg2: str = "alpha"


parser = TrlParser(dataclass_types=[MyArguments])
training_args = parser.parse_args_and_config()

print(training_args, os.environ.get("VAR1"))
```

```shell
$ python main.py --config config.yaml
(MyArguments(arg1=23, arg2='alpha'),) value1

$ python main.py --arg1 5 --arg2 beta
(MyArguments(arg1=5, arg2='beta'),) None
```

parse_args_and_config

( args: Optional[Iterable[str]] = None, return_remaining_strings: bool = False )

Parse command-line args and config file into instances of the specified dataclass types. This method wraps transformers.HfArgumentParser.parse_args_into_dataclasses and also parses the config file specified with the --config flag. The config file (in YAML format) provides argument values that replace the default values in the dataclasses. Command-line arguments can override values set by the config file. The method also sets any environment variables specified in the env field of the config file.

parse_args_into_dataclasses

( args = None, return_remaining_strings = False, look_for_args_file = True, args_filename = None, args_file_flag = None )

Parameters:
- args: List of strings to parse. The default is taken from sys.argv (same as argparse.ArgumentParser).
- return_remaining_strings: If true, also return a list of remaining argument strings.
- look_for_args_file: If true, will look for a ".args" file with the same base name as the entry-point script for this process, and will append its potential content to the command-line args.
- args_filename: If not None, will use this file instead of the ".args" file specified in the previous argument.
- args_file_flag: If not None, will look for a file in the command-line args specified with this flag. The flag can be specified multiple times and precedence is determined by the order (last one wins).

Returns: a tuple consisting of the dataclass instances in the same order as they were passed to the initializer, an additional namespace for more (non-dataclass-backed) arguments added to the parser after initialization, and the potential list of remaining argument strings (same as argparse.ArgumentParser.parse_known_args).

Parse command-line args into instances of the specified dataclass types. This relies on argparse's ArgumentParser.parse_known_args. See the doc at: docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.parse_args

set_defaults_with_config

( **kwargs )

Overrides the parser's default values with those provided via keyword arguments. Any argument with an updated default will also be marked as not required if it was previously required. Returns a list of strings that were not consumed by the parser.
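set_defaults_with_config is not covered by the examples above. The following is a minimal sketch of how it might be combined with ScriptArguments, based on the behavior described above; the dataset name is just a placeholder.

```python
# Minimal sketch: give a required argument a default so it can be omitted on
# the command line. The dataset name below is a placeholder.
from trl import ScriptArguments, TrlParser

parser = TrlParser(dataclass_types=[ScriptArguments])

# dataset_name is normally required; after this call it has a default value
# and is no longer mandatory on the command line.
parser.set_defaults_with_config(dataset_name="trl-lib/tldr")

(script_args,) = parser.parse_args_and_config()
print(script_args.dataset_name)
```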
Create_custom_Inference_Handler.txt
Create custom Inference Handler

Hugging Face Endpoints supports all of the Transformers and Sentence-Transformers tasks and can support custom tasks, including custom pre- & post-processing. The customization can be done through a handler.py file in your model repository on the Hugging Face Hub.

The handler.py needs to implement the EndpointHandler class with an __init__ and a __call__ method. If you want to use custom dependencies, e.g. optimum, the dependencies must be listed in a requirements.txt, as described in "Add custom Dependencies".

Custom Handler Examples

There are already several public examples on the Hugging Face Hub from which you can take inspiration or which you can use directly. The repositories are tagged with endpoints-template and can be found under this link. Included examples are for:

- Optimum and ONNX Runtime
- Image Embeddings with BLIP
- TrOCR for OCR Detection
- Optimized Sentence Transformers with Optimum
- Pyannote Speaker diarization
- LayoutLM
- Flair NER
- GPT-J 6B Single GPU
- Donut Document understanding
- SetFit classifier

Tutorial

Before creating a Custom Handler, you need a Hugging Face Model repository with your model weights and an Access Token with WRITE access to the repository. To find, create and manage Access Tokens, click here.

If you want to write a Custom Handler for an existing model from the community, you can use the repo_duplicator to create a repository fork. The code can also be found in this Notebook.

You can also search for already existing Custom Handlers here: https://huggingface.co/models?other=endpoints-template

1. Set up Development Environment

The easiest way to develop your custom handler is to set up a local development environment where you can implement, test, and iterate, and then deploy it as an Inference Endpoint.
The first step is to install all required development dependencies (needed to create the custom handler, not needed for inference):

```bash
# install git-lfs to interact with the repository
sudo apt-get update
sudo apt-get install git-lfs
# install transformers (not needed since it is installed by default in the container)
pip install transformers[sklearn,sentencepiece,audio,vision]
```

After we have installed our libraries, we will clone our repository to our development environment. We will use philschmid/distilbert-base-uncased-emotion during the tutorial.

```bash
git lfs install
git clone https://huggingface.co/philschmid/distilbert-base-uncased-emotion
```

To be able to push the model repository later, you need to log in to your HF account. This can be done with the huggingface-cli. Note: make sure to configure git credentials as well.

```bash
# setup cli with token
huggingface-cli login
git config --global credential.helper store
```

2. Create EndpointHandler

After we have set up our environment, we can start creating the custom handler. The custom handler is a Python class (EndpointHandler) inside a handler.py file in our repository. The EndpointHandler needs to implement an __init__ and a __call__ method.

- The __init__ method will be called when starting the Endpoint and will receive one argument, a string with the path to your model weights. This allows you to load your model correctly.
- The __call__ method will be called on every request and receives the request body as a Python dictionary. It will always contain the inputs key.

The first step is to create our handler.py in the local clone of our repository.

```bash
!cd distilbert-base-uncased-emotion && touch handler.py
```

In there, you define your EndpointHandler class with the __init__ and __call__ method.

```python
from typing import Dict, List, Any

class EndpointHandler():
    def __init__(self, path=""):
        # Preload all the elements you are going to need at inference.
        # pseudo:
        # self.model = load_model(path)
        pass

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        data args:
            inputs (:obj: `str` | `PIL.Image` | `np.array`)
            kwargs
        Return:
            A :obj:`list` | `dict`: will be serialized and returned
        """
        # pseudo:
        # return self.model(data["inputs"])
        pass
```

3. Customize EndpointHandler

Now, you can add all of the custom logic you want to use during initialization or inference to your Custom Endpoint. You can already find multiple Custom Handlers on the Hub if you need some inspiration. In our example, we will add a custom condition based on additional payload information.

The model we are using in the tutorial is fine-tuned to detect emotions. We will add an additional payload field for the date, and use an external package to check whether that date is a holiday, so that when the input date is a holiday the model returns "happy", since everyone is happy when there are holidays 🌴🎉😆

First, we need to create a new requirements.txt, add our holiday detection package, and make sure we have it installed in our development environment as well.

```bash
!echo "holidays" >> requirements.txt
!pip install -r requirements.txt
```

Next, we have to adjust our handler.py and EndpointHandler to match our condition.
```python
from typing import Dict, List, Any
from transformers import pipeline
import holidays

class EndpointHandler():
    def __init__(self, path=""):
        self.pipeline = pipeline("text-classification", model=path)
        self.holidays = holidays.US()

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        data args:
            inputs (:obj: `str`)
            date (:obj: `str`)
        Return:
            A :obj:`list` | `dict`: will be serialized and returned
        """
        # get inputs
        inputs = data.pop("inputs", data)
        date = data.pop("date", None)

        # check if a date was passed and if it is a holiday
        if date is not None and date in self.holidays:
            return [{"label": "happy", "score": 1}]

        # run normal prediction
        prediction = self.pipeline(inputs)
        return prediction
```

4. Test EndpointHandler

To test our EndpointHandler, we can simply import, initialize, and test it. We only need to prepare a sample payload.

```python
from handler import EndpointHandler

# init handler
my_handler = EndpointHandler(path=".")

# prepare sample payloads
non_holiday_payload = {"inputs": "I am quite excited how this will turn out", "date": "2022-08-08"}
holiday_payload = {"inputs": "Today is a tough day", "date": "2022-07-04"}

# test the handler
non_holiday_pred = my_handler(non_holiday_payload)
holiday_pred = my_handler(holiday_payload)

# show results
print("non_holiday_pred", non_holiday_pred)
print("holiday_pred", holiday_pred)

# non_holiday_pred [{'label': 'joy', 'score': 0.9985942244529724}]
# holiday_pred [{'label': 'happy', 'score': 1}]
```

It works! 🎉

Note: If you are using a notebook, you might have to restart your kernel when you make changes to handler.py, since it is not automatically re-imported.

5. Push the Custom Handler to your repository

After you have successfully tested your handler locally, you can push it to your repository with basic git commands.

```bash
# add all our new files
!git add *
# commit our files
!git commit -m "add custom handler"
# push the files to the hub
!git push
```

Now, you should see your handler.py and requirements.txt in your repository under the "Files and versions" tab.

6. Deploy your Custom Handler as an Inference Endpoint

The last step is to deploy your Custom Handler as an Inference Endpoint. You can deploy your Custom Handler like a regular Inference Endpoint: add your repository, select your cloud and region, your instance and security settings, and deploy.

When creating your Endpoint, the Inference Endpoint service will check for an available and valid handler.py and will use it for serving requests, no matter which "Task" you select.

Note: In your Inference Endpoints dashboard, the Task for this Endpoint should now be set to Custom.
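Once the Endpoint is running, requests carry the same JSON body that EndpointHandler.__call__ receives. The following is a minimal sketch of calling the deployed Endpoint; the endpoint URL is a placeholder and HF_TOKEN is assumed to be set in your environment.

```python
# Minimal sketch of a request to the deployed Endpoint; the URL is a placeholder.
import os
import requests

ENDPOINT_URL = "https://your-endpoint-name.endpoints.huggingface.cloud"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}", "Content-Type": "application/json"}

# The JSON body is handed to EndpointHandler.__call__ as a dictionary.
payload = {"inputs": "I am quite excited how this will turn out", "date": "2022-07-04"}

response = requests.post(ENDPOINT_URL, headers=headers, json=payload)
print(response.json())  # expected on a holiday: [{"label": "happy", "score": 1}]
```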
Guidance.txt
Guidance

What is Guidance?

Guidance is a feature that allows users to constrain the generation of a large language model with a specified grammar. This feature is particularly useful when you want to generate text that follows a specific structure, uses a specific set of words, or produces output in a specific format. A prominent example is a JSON grammar, where the model is forced to output valid JSON.

How is it used?

Guidance can be implemented in many ways and the community is always finding new ways to use it. Technically, guidance can be used to generate:

- a specific JSON object
- a function signature
- typed output, like a list of integers

However, these use cases can span a wide range of applications, such as:

- extracting structured data from unstructured text
- summarizing text into a specific format
- limiting output to specific classes of words (acting as an LLM-powered classifier)
- generating the input to specific APIs or services
- providing reliable and consistent output for downstream tasks
- extracting data from multimodal inputs

How does it work?

Diving into the details, guidance is enabled by including a grammar with a generation request; the grammar is compiled and used to modify the chosen tokens. This process can be broken down into the following steps:

- A request is sent to the backend; it is processed and placed in a batch. Processing includes compiling the grammar into a finite state machine and a grammar state.
- The model does a forward pass over the batch. This returns probabilities for each token in the vocabulary for each request in the batch.
- The process of choosing one of those tokens is called sampling: the model samples from the distribution of probabilities to choose the next token. In TGI, all of the steps before sampling are called the processor. Grammars are applied as a processor that masks out tokens that are not allowed by the grammar.
- The grammar mask is applied and the model samples from the remaining tokens. Once a token is chosen, we update the grammar state with the new token, to prepare it for the next pass.

How to use Guidance?

There are two main ways to use guidance; you can either use the /generate endpoint with a grammar or use the /chat/completion endpoint with tools. Under the hood, tools are a special case of grammars that allows the model to choose one or none of the provided tools.

Please refer to using guidance for more examples and details on how to use guidance in Python, JavaScript, and cURL.

Getting the most out of guidance

Depending on how you are using guidance, you may want to make use of different features. Here are some tips to get the most out of guidance:

- If you are using /generate with a grammar, it is recommended to include the grammar in the prompt, prefixed by something like "Please use the following JSON schema to generate the output:". This will help the model understand the context of the grammar and generate the output accordingly.
- If you are getting a response with many repeated tokens, use the frequency_penalty or repetition_penalty parameters to reduce the number of repeated tokens in the output.
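As a minimal sketch of the /generate route described above (assuming a TGI server listening locally on port 8080 and a TGI version that supports the grammar parameter), a JSON-constrained request could look like this:

```python
# Minimal sketch of a grammar-constrained /generate request; the server URL and
# schema are illustrative assumptions.
import json
import requests

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

payload = {
    # As recommended above, repeat the schema in the prompt.
    "inputs": "Please use the following JSON schema to generate the output: "
    + json.dumps(schema)
    + "\nExtract the person mentioned in: 'Ana is 23 years old.'",
    "parameters": {
        "grammar": {"type": "json", "value": schema},
        "max_new_tokens": 100,
    },
}

response = requests.post("http://localhost:8080/generate", json=payload, timeout=60)
print(response.json()["generated_text"])  # constrained to valid JSON matching the schema
```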
Image-Text_to_Text.txt
Image-Text to Text

Image-text-to-text models take in an image and a text prompt and output text. These models are also called vision-language models, or VLMs. The difference from image-to-text models is that these models take an additional text input, which does not restrict the model to certain use cases like image captioning; they may also be trained to accept a conversation as input.

For more details about the image-text-to-text task, check out its dedicated page! You will find examples and related materials.

Recommended models

- meta-llama/Llama-3.2-11B-Vision-Instruct: Powerful vision-language model with great visual understanding and reasoning capabilities.
- Qwen/Qwen2-VL-7B-Instruct: Strong image-text-to-text model.

Explore all available models and find the one that suits you best here.

Using the API

Using huggingface_hub:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

# Chat messages can mix an image and a text prompt;
# the image URL below is a placeholder, replace it with your own.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            {"type": "text", "text": "Can you please let us know more details about this image?"},
        ],
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
```

Using openai:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key="hf_***",
)

# Same message format as above; the image URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            {"type": "text", "text": "Can you please let us know more details about this image?"},
        ],
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
```

To use the Python client, see huggingface_hub's package reference.

API specification

For the API specification of conversational image-text-to-text models, please refer to the Chat Completion API documentation.
Storage_Regions_on_the_Hub.txt
Storage Regions on the Hub

This feature is part of the Enterprise Hub.

Regions allow you to specify where your organization's models, datasets and Spaces are stored. For non-Enterprise Hub users, repositories are always stored in the US.

This offers two key benefits:

- Regulatory and legal compliance
- Performance (faster download/upload speeds and lower latency)

Currently available regions:

- US 🇺🇸
- EU 🇪🇺
- Coming soon: Asia-Pacific 🌏

Getting started with Storage Regions

Organizations subscribed to Enterprise Hub can access the Regions settings page to manage their repositories' storage locations. This page displays:

- An audit of your organization's repository locations
- Options to select where new repositories will be stored

Repository Tag

Any repository (model or dataset) stored in a non-default location displays its Region as a tag, allowing organization members to quickly identify repository locations.

Regulatory and legal compliance

Regulated industries often require data storage in specific regions. For EU companies, you can use the Hub for ML development in a GDPR-compliant manner, with datasets, models and inference endpoints stored in EU data centers.

Performance

Storing models and datasets closer to your team and infrastructure significantly improves performance for both uploads and downloads. This impact is substantial given the typically large size of model weights and dataset files. For example, European users storing repositories in the EU region can expect approximately 4-5x faster upload and download speeds compared to US storage.

Spaces

Both a Space's storage and runtime use the chosen region. Available hardware configurations vary by region, and some features may not be available in all regions, such as persistent storage associated with a Space.
Evidence_on_Spaces.txt
Evidence on Spaces

Evidence is an open-source framework for building data-driven applications, reports, and dashboards using SQL and Markdown. With Evidence, you can quickly create decision-support tools, reports, and interactive dashboards without relying on traditional drag-and-drop business intelligence (BI) platforms.

Evidence enables you to:

- Write reports and dashboards directly in Markdown with SQL-backed components.
- Integrate data from multiple sources, including SQL databases and APIs.
- Use templated pages to automatically generate multiple pages based on a single template.
- Deploy reports seamlessly to various hosting solutions.

Visit Evidence's documentation for guides, examples, and best practices for using Evidence to create data products.

Deploy Evidence on Spaces

You can deploy Evidence on Hugging Face Spaces with just a few clicks. Once created, the Space will display a Building status. Refresh the page if the status doesn't automatically update to Running. Your Evidence app will automatically be deployed on Hugging Face Spaces.

Editing your Evidence app from the CLI

To edit your app, clone the Space and edit the files locally.

```bash
git clone https://huggingface.co/spaces/your-username/your-space-name
cd your-space-name
npm install
npm run sources
npm run dev
```

You can then modify pages/index.md to change the content of your app.

Editing your Evidence app from VS Code

The easiest way to develop with Evidence is using the VS Code Extension:

1. Install the extension from the VS Code Marketplace.
2. Open the Command Palette (Ctrl/Cmd + Shift + P) and enter Evidence: Copy Existing Project.
3. Paste the URL of the Hugging Face Spaces Evidence app you'd like to copy (e.g. https://huggingface.co/spaces/your-username/your-space-name) and press Enter.
4. Select the folder you'd like to clone the project to and press Enter.
5. Press Start Evidence in the bottom status bar.

Check out the docs for alternative install methods, GitHub Codespaces, and using Evidence alongside dbt.

Learning More

- Docs
- GitHub
- Slack Community
- Evidence Home Page
Dataset_viewer.txt
Dataset viewer

Each dataset page includes a table with the contents of the dataset, arranged by pages of 100 rows. You can navigate between pages using the buttons at the bottom of the table.

Inspect data distributions

At the top of the columns you can see graphs representing the distribution of their data. This gives you a quick insight into how balanced your classes are, the range and distribution of numerical data and text lengths, and what portion of the column data is missing.

Filter by value

If you click on a bar of a histogram from a numerical column, the dataset viewer will filter the data and show only the rows with values that fall in the selected range. Similarly, if you select one class from a categorical column, it will show only the rows from the selected category.

Search a word in the dataset

You can search for a word in the dataset by typing it in the search bar at the top of the table. The search is case-insensitive and will match any row containing the word. The text is searched in string columns, even if the values are nested in a dictionary or a list.

Run SQL queries on the dataset

You can run SQL queries on the dataset in the browser using the SQL Console. This feature also leverages our auto-conversion to Parquet. For more information see our guide on SQL Console.

Share a specific row

You can share a specific row by clicking on it, and then copying the URL in the address bar of your browser.
For example, https://huggingface.co/datasets/nyu-mll/glue/viewer/mrpc/test?p=2&row=241 will open the dataset viewer on the MRPC dataset, on the test split, and on the 241st row.

Large scale datasets

The Dataset Viewer supports large scale datasets, but depending on the data format it may only show the first 5GB of the dataset:

- For Parquet datasets: the Dataset Viewer shows the full dataset, but sorting, filtering and search are only enabled on the first 5GB.
- For datasets >5GB in other formats (e.g. WebDataset or JSON Lines): the Dataset Viewer only shows the first 5GB, and sorting, filtering and search are enabled on these first 5GB. In this case, an informational message lets you know that the Viewer is partial. This should be a large enough sample to represent the full dataset accurately; let us know if you need a bigger sample.

Access the parquet files

To power the dataset viewer, the first 5GB of every dataset are auto-converted to the Parquet format (unless it was already a Parquet dataset). In the dataset viewer (for example, see GLUE), you can click on "Auto-converted to Parquet" to access the Parquet files. Please refer to the dataset viewer docs to learn how to query the dataset parquet files with libraries such as Polars, Pandas or DuckDB.

Parquet is a columnar storage format optimized for querying and processing large datasets. Parquet is a popular choice for big data processing and analytics and is widely used for data processing and machine learning. You can learn more about the advantages associated with this format in the documentation.

Conversion bot

When you create a new dataset, the parquet-converter bot notifies you once it converts the dataset to Parquet. The discussion it opens in the repository provides details about the Parquet format and links to the Parquet files.

Programmatic access

You can also access the list of Parquet files programmatically using the Hub API; for example, the endpoint https://huggingface.co/api/datasets/nyu-mll/glue/parquet lists the parquet files of the nyu-mll/glue dataset. We also have specific documentation about the Dataset Viewer API, which you can call directly. That API lets you access the contents, metadata and basic statistics of all Hugging Face Hub datasets, and powers the Dataset viewer frontend (see the sketch at the end of this page).

Dataset preview

For the biggest datasets, the page shows a preview of the first 100 rows instead of a full-featured viewer. This restriction only applies to datasets over 5GB that are not natively in Parquet format or that have not been auto-converted to Parquet.

Embed the Dataset Viewer in a webpage

You can embed the Dataset Viewer in your own webpage using an iframe. The URL to use is https://huggingface.co/datasets/<namespace>/<dataset-name>/embed/viewer, where <namespace> is the owner of the dataset and <dataset-name> is the name of the dataset. You can also pass other parameters like the subset, split, filter, search or selected row. For more information see our guide on How to embed the Dataset Viewer in a webpage.

Configure the Dataset Viewer

To have a properly working Dataset Viewer for your dataset, make sure your dataset is in a supported format and structure. There is also an option to configure your dataset using YAML. For private datasets, the Dataset Viewer is enabled for PRO users and Enterprise Hub organizations. For more information see our guide on How to configure the Dataset Viewer.
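As a minimal sketch of the programmatic access described above (endpoint shapes taken from the Hub API route mentioned in this page and the separately documented Dataset Viewer API; the dataset, config and split values are just examples):

```python
# Minimal sketch of programmatic access; nyu-mll/glue, mrpc and test are example values.
import requests

# List the auto-converted Parquet files of a dataset (Hub API route shown above).
parquet_files = requests.get("https://huggingface.co/api/datasets/nyu-mll/glue/parquet").json()
print(parquet_files)

# Fetch a page of rows through the Dataset Viewer API.
rows = requests.get(
    "https://datasets-server.huggingface.co/rows",
    params={"dataset": "nyu-mll/glue", "config": "mrpc", "split": "test", "offset": 0, "length": 10},
).json()
print(rows["rows"][0])
```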
Combine_datasets_and_export.txt
Combine datasets and export

In this section, we'll demonstrate how to combine two datasets and export the result. The first dataset is in CSV format, and the second dataset is in Parquet format.

Let's start by examining our datasets. The first will be TheFusion21/PokemonCards:

```sql
FROM 'hf://datasets/TheFusion21/PokemonCards/train.csv' LIMIT 3;
```

```
┌─────────┬─────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────┬───────┬─────────────────┐
│   id    │      image_url      │                                                                caption                                                                │    name    │  hp   │    set_name     │
│ varchar │       varchar       │                                                                varchar                                                                │  varchar   │ int64 │     varchar     │
├─────────┼─────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────┼───────┼─────────────────┤
│ pl3-1   │ https://images.pok… │ A Basic, SP Pokemon Card of type Darkness with the title Absol G and 70 HP of rarity Rare Holo from the set Supreme Victors. It has …  │ Absol G    │    70 │ Supreme Victors │
│ ex12-1  │ https://images.pok… │ A Stage 1 Pokemon Card of type Colorless with the title Aerodactyl and 70 HP of rarity Rare Holo evolved from Mysterious Fossil from … │ Aerodactyl │    70 │ Legend Maker    │
│ xy5-1   │ https://images.pok… │ A Basic Pokemon Card of type Grass with the title Weedle and 50 HP of rarity Common from the set Primal Clash and the flavor text: It… │ Weedle     │    50 │ Primal Clash    │
└─────────┴─────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────┴───────┴─────────────────┘
```

And the second one will be wanghaofan/pokemon-wiki-captions:

```sql
FROM 'hf://datasets/wanghaofan/pokemon-wiki-captions/data/*.parquet' LIMIT 3;
```

```
┌──────────────────────┬───────────┬──────────┬──────────────────────────────────────────────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────┐
│        image         │  name_en  │ name_zh  │                           text_en                            │                                           text_zh                                           │
│ struct(bytes blob,…  │  varchar  │ varchar  │                           varchar                            │                                           varchar                                           │
├──────────────────────┼───────────┼──────────┼──────────────────────────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────┤
│ {'bytes': \x89PNG\…  │ abomasnow │ 暴雪王   │ Grass attributes,Blizzard King standing on two feet, with …  │ 草属性,双脚站立的暴雪王,全身白色的绒毛,淡紫色的眼睛,几缕长条装的毛皮盖着它的嘴巴             │
│ {'bytes': \x89PNG\…  │ abra      │ 凯西     │ Super power attributes, the whole body is yellow, the head … │ 超能力属性,通体黄色,头部外形类似狐狸,尖尖鼻子,手和脚上都有三个指头,长尾巴末端带着一个褐色圆环 │
│ {'bytes': \x89PNG\…  │ absol     │ 阿勃梭鲁 │ Evil attribute, with white hair, blue-gray part without ha…  │ 恶属性,有白色毛发,没毛发的部分是蓝灰色,头右边类似弓的角,红色眼睛                             │
└──────────────────────┴───────────┴──────────┴──────────────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────┘
```

Now, let's try to combine these two datasets by joining on the name column:

```sql
SELECT a.image_url,
       a.caption AS card_caption,
       a.name,
       a.hp,
       b.text_en AS wiki_caption
FROM 'hf://datasets/TheFusion21/PokemonCards/train.csv' a
JOIN 'hf://datasets/wanghaofan/pokemon-wiki-captions/data/*.parquet' b
  ON LOWER(a.name) = b.name_en
LIMIT 3;
```

```
┌──────────────────────┬──────────────────────┬────────────┬───────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│      image_url       │     card_caption     │    name    │  hp   │                                                                 wiki_caption                                                                │
│       varchar        │       varchar        │  varchar   │ int64 │                                                                    varchar                                                                  │
├──────────────────────┼──────────────────────┼────────────┼───────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ https://images.pok…  │ A Stage 1 Pokemon …  │ Aerodactyl │    70 │ A Pokémon with rock attributes, gray body, blue pupils, purple inner wings, two sharp claws on the wings, jagged teeth, and an arrow-like … │
│ https://images.pok…  │ A Basic Pokemon Ca…  │ Weedle     │    50 │ Insect-like, caterpillar-like in appearance, with a khaki-yellow body, seven pairs of pink gastropods, a pink nose, a sharp poisonous need… │
│ https://images.pok…  │ A Basic Pokemon Ca…  │ Caterpie   │    50 │ Insect attributes, caterpillar appearance, green back, white abdomen, Y-shaped red antennae on the head, yellow spindle-shaped tail, two p… │
└──────────────────────┴──────────────────────┴────────────┴───────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

We can export the result to a Parquet file using the COPY command:

```sql
COPY (
    SELECT a.image_url,
           a.caption AS card_caption,
           a.name,
           a.hp,
           b.text_en AS wiki_caption
    FROM 'hf://datasets/TheFusion21/PokemonCards/train.csv' a
    JOIN 'hf://datasets/wanghaofan/pokemon-wiki-captions/data/*.parquet' b
      ON LOWER(a.name) = b.name_en
) TO 'output.parquet' (FORMAT PARQUET);
```

Let's validate the new Parquet file:

```sql
SELECT COUNT(*) FROM 'output.parquet';
```

```
┌──────────────┐
│ count_star() │
│    int64     │
├──────────────┤
│         9460 │
└──────────────┘
```

You can also export to CSV, Excel and JSON formats.

Finally, let's push the resulting dataset to the Hub. You can use the Hub UI, the huggingface_hub client library and more to upload your Parquet file; see more information here (a minimal upload sketch follows below).

And that's it! You've successfully combined two datasets, exported the result, and uploaded it to the Hugging Face Hub.
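As one concrete illustration of the upload step above, here is a sketch using the huggingface_hub client; the repository id is a placeholder and the target dataset repo is created if it does not exist.

```python
# Sketch of uploading output.parquet with huggingface_hub; the repo_id is a placeholder.
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in, e.g. via `huggingface-cli login`
api.create_repo(repo_id="your-username/pokemon-cards-joined", repo_type="dataset", exist_ok=True)
api.upload_file(
    path_or_fileobj="output.parquet",
    path_in_repo="data/output.parquet",
    repo_id="your-username/pokemon-cards-joined",
    repo_type="dataset",
)
```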
Custom_Python_Spaces.txt
Custom Python Spaces

Spaces now support arbitrary Dockerfiles, so you can host any Python app directly using Docker Spaces.

While not an official workflow, you are able to run your own Python + interface stack in Spaces by selecting Gradio as your SDK and serving a frontend on port 7860. See the templates for examples.

Spaces are served in iframes, which by default restrict links from opening in the parent page. The simplest solution is to open them in a new window:

```html
<a href="https://hf.space" rel="noopener" target="_blank">Spaces</a>
```

Usually, the height of Spaces is automatically adjusted when using the Gradio library interface. However, if you provide your own frontend in the Gradio SDK and the content height is larger than the viewport, you'll need to add an iFrame Resizer script, so the content is scrollable in the iframe:

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.2/iframeResizer.contentWindow.min.js"></script>
```

As an example, here is the same Space with and without the script:

- https://huggingface.co/spaces/ronvolutional/http-server
- https://huggingface.co/spaces/ronvolutional/iframe-test
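To make "serving a frontend on port 7860" concrete, here is a minimal sketch of a custom Python app; FastAPI and uvicorn are an arbitrary choice for illustration, not a requirement of Spaces, and would need to be listed in your requirements.txt.

```python
# app.py: minimal sketch. FastAPI/uvicorn are illustrative choices; any web
# framework that binds to port 7860 works the same way on Spaces.
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
import uvicorn

app = FastAPI()


@app.get("/", response_class=HTMLResponse)
def index():
    # Serve your own frontend instead of the Gradio UI.
    return "<h1>Hello from a custom Python Space</h1>"


if __name__ == "__main__":
    # Spaces route external traffic to port 7860.
    uvicorn.run(app, host="0.0.0.0", port=7860)
```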
GPTQ.txt
GPTQ Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation GPTQ Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started GPTQ Try GPTQ quantization with PEFT in this notebook and learn more about its details in this blog post ! The AutoGPTQ library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently to find a version of the weights that minimizes the error. These weights are quantized to int4, but they're restored to fp16 on the fly during inference. This can reduce memory usage by about 4x because the int4 weights are dequantized in a fused kernel rather than in the GPU's global memory, and you can also expect a speedup in inference because a lower bitwidth takes less time to communicate. Before you begin, make sure the following libraries are installed: Copied pip install auto-gptq pip install --upgrade accelerate optimum transformers To quantize a model (currently only supported for text models), you need to create a GPTQConfig class and set the number of bits to quantize to, a dataset to calibrate the weights for quantization, and a tokenizer to prepare the dataset. Copied from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig model_id = "facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_id) gptq_config = GPTQConfig(bits= 4 , dataset= "c4" , tokenizer=tokenizer) You could also pass your own dataset as a list of strings, but it is highly recommended to use the same dataset from the GPTQ paper. Copied dataset = [ "auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm." ] gptq_config = GPTQConfig(bits= 4 , dataset=dataset, tokenizer=tokenizer) Load a model to quantize and pass the gptq_config to the from_pretrained() method. Set device_map="auto" to automatically offload the model to a CPU to help fit the model in memory, and allow the model modules to be moved between the CPU and GPU for quantization. Copied quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map= "auto" , quantization_config=gptq_config) Note that disk offloading is not supported, so you may still run out of memory if the calibration dataset is too large.
If this is the case, try passing the max_memory parameter to allocate the amount of memory to use on your device (GPU and CPU): Copied quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map= "auto" , max_memory={ 0 : "30GiB" , 1 : "46GiB" , "cpu" : "30GiB" }, quantization_config=gptq_config) Depending on your hardware, it can take some time to quantize a model from scratch. It can take ~5 minutes to quantize the facebook/opt-350m model on a free-tier Google Colab GPU, but it’ll take ~4 hours to quantize a 175B parameter model on a NVIDIA A100. Before you quantize a model, it is a good idea to check the Hub if a GPTQ-quantized version of the model already exists. Once your model is quantized, you can push the model and tokenizer to the Hub where it can be easily shared and accessed. Use the push_to_hub() method to save the GPTQConfig : Copied quantized_model.push_to_hub( "opt-125m-gptq" ) tokenizer.push_to_hub( "opt-125m-gptq" ) You could also save your quantized model locally with the save_pretrained() method. If the model was quantized with the device_map parameter, make sure to move the entire model to a GPU or CPU before saving it. For example, to save the model on a CPU: Copied quantized_model.save_pretrained( "opt-125m-gptq" ) tokenizer.save_pretrained( "opt-125m-gptq" ) # if quantized with device_map set quantized_model.to( "cpu" ) quantized_model.save_pretrained( "opt-125m-gptq" ) Reload a quantized model with the from_pretrained() method, and set device_map="auto" to automatically distribute the model on all available GPUs to load the model faster without using more memory than needed. Copied from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "{your_username}/opt-125m-gptq" , device_map= "auto" ) ExLlama ExLlama is a Python/C++/CUDA implementation of the Llama model that is designed for faster inference with 4-bit GPTQ weights (check out these benchmarks ). The ExLlama kernel is activated by default when you create a GPTQConfig object. To boost inference speed even further, use the ExLlamaV2 kernels by configuring the exllama_config parameter: Copied import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits= 4 , exllama_config={ "version" : 2 }) model = AutoModelForCausalLM.from_pretrained( "{your_username}/opt-125m-gptq" , device_map= "auto" , quantization_config=gptq_config) Only 4-bit models are supported, and we recommend deactivating the ExLlama kernels if you’re finetuning a quantized model with PEFT. The ExLlama kernels are only supported when the entire model is on the GPU. If you’re doing inference on a CPU with AutoGPTQ (version > 0.4.2), then you’ll need to disable the ExLlama kernel. This overwrites the attributes related to the ExLlama kernels in the quantization config of the config.json file. Copied import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits= 4 , use_exllama= False ) model = AutoModelForCausalLM.from_pretrained( "{your_username}/opt-125m-gptq" , device_map= "cpu" , quantization_config=gptq_config) < > Update on GitHub ← bitsandbytes AWQ → GPTQ Ex Llama
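Once reloaded, the quantized model behaves like any other Transformers model. The snippet below is a minimal generation sketch; the repository name and prompt are placeholders. Copied
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "{your_username}/opt-125m-gptq"  # placeholder repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("GPTQ quantization is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))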
Choosing_a_metric_for_your_task.txt
Choosing a metric for your task Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Evaluate documentation Choosing a metric for your task Evaluate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.4.0 v0.3.0 v0.2.3 v0.1.2 EN Get started 🤗 Evaluate Tutorials Installation A quick tour How-to guides Choosing the right metric Adding new evaluations Using the evaluator Using the evaluator with custom pipelines Creating an EvaluationSuite Using 🤗 Evaluate with other ML frameworks Transformers Keras and Tensorflow scikit-learn Conceptual guides Types of evaluations Considerations for model evaluation Reference Main classes Loading methods Saving methods Hub methods Evaluator classes Visualization methods Logging methods Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Choosing a metric for your task So you’ve trained your model and want to see how well it’s doing on a dataset of your choice. Where do you start? There is no “one size fits all” approach to choosing an evaluation metric, but some good guidelines to keep in mind are: Categories of metrics There are 3 high-level categories of metrics: Generic metrics , which can be applied to a variety of situations and datasets, such as precision and accuracy. Task-specific metrics , which are limited to a given task, such as Machine Translation (often evaluated using metrics BLEU or ROUGE ) or Named Entity Recognition (often evaluated with seqeval ). Dataset-specific metrics , which aim to measure model performance on specific benchmarks: for instance, the GLUE benchmark has a dedicated evaluation metric . Let’s look at each of these three cases: Generic metrics Many of the metrics used in the Machine Learning community are quite generic and can be applied in a variety of tasks and datasets. This is the case for metrics like accuracy and precision , which can be used for evaluating labeled (supervised) datasets, as well as perplexity , which can be used for evaluating different kinds of (unsupervised) generative tasks. To see the input structure of a given metric, you can look at its metric card. For example, in the case of precision , the format is: Copied >>> precision_metric = evaluate.load( "precision" ) >>> results = precision_metric.compute(references=[ 0 , 1 ], predictions=[ 0 , 1 ]) >>> print (results) {'precision': 1.0} Task-specific metrics Popular ML tasks like Machine Translation and Named Entity Recognition have specific metrics that can be used to compare models. For example, a series of different metrics have been proposed for text generation, ranging from BLEU and its derivatives such as GoogleBLEU and GLEU , but also ROUGE , MAUVE , etc. You can find the right metric for your task by: Looking at the Task pages to see what metrics can be used for evaluating models for a given task. 
Checking out leaderboards on sites like Papers With Code (you can search by task and by dataset). Reading the metric cards for the relevant metrics and seeing which ones are a good fit for your use case. For example, see the BLEU metric card or the SQuAD metric card . Looking at papers and blog posts published on the topic and seeing what metrics they report. This can change over time, so try to pick papers from the last couple of years! Dataset-specific metrics Some datasets have specific metrics associated with them; this is especially the case for popular benchmarks like GLUE and SQuAD . 💡 GLUE is actually a collection of different subsets on different tasks, so first you need to choose the one that corresponds to the NLI task, such as mnli, which is described as a “crowdsourced collection of sentence pairs with textual entailment annotations”. If you are evaluating your model on a benchmark dataset like the ones mentioned above, you can use its dedicated evaluation metric. Make sure you respect the format that it requires. For example, to evaluate your model on the SQuAD dataset, you need to feed the question and context into your model and return the prediction_text , which should be compared with the references (based on matching the id of the question): Copied >>> from evaluate import load >>> squad_metric = load( "squad" ) >>> predictions = [{ 'prediction_text' : '1976' , 'id' : '56e10a3be3433e1400422b22' }] >>> references = [{ 'answers' : { 'answer_start' : [ 97 ], 'text' : [ '1976' ]}, 'id' : '56e10a3be3433e1400422b22' }] >>> results = squad_metric.compute(predictions=predictions, references=references) >>> results {'exact_match': 100.0, 'f1': 100.0} You can find examples of dataset structures by consulting the “Dataset Preview” function or the dataset card for a given dataset, and you can see how to use its dedicated evaluation function based on the metric card. ← A quick tour Adding new evaluations → Choosing a metric for your task Categories of metrics Generic metrics Task-specific metrics Dataset-specific metrics
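To make the task-specific case above concrete, here is a small sketch of computing BLEU with 🤗 Evaluate; the predictions and references are toy examples. Copied
import evaluate

bleu = evaluate.load("bleu")

# Toy example: each prediction is compared against a list of reference texts.
predictions = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]

results = bleu.compute(predictions=predictions, references=references)
print(results["bleu"])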
Sharing_and_Loading_Models_From_the_Hugging_Face_H.txt
Sharing and Loading Models From the Hugging Face Hub Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up timm documentation Sharing and Loading Models From the Hugging Face Hub timm 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.0.14 v0.9.16 EN Get started Home Quickstart Installation Changelog Tutorials Using Pretrained Models as Feature Extractors Training With The Official Training Script Share and Load Models from the 🤗 Hugging Face Hub Model Pages Reference Models Data Optimizers Learning Rate Schedulers Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Sharing and Loading Models From the Hugging Face Hub The timm library has a built-in integration with the Hugging Face Hub, making it easy to share and load models from the 🤗 Hub. In this short guide, we’ll see how to: Share a timm model on the Hub How to load that model back from the Hub Authenticating First, you’ll need to make sure you have the huggingface_hub package installed. Copied pip install huggingface_hub Then, you’ll need to authenticate yourself. You can do this by running the following command: Copied huggingface-cli login Or, if you’re using a notebook, you can use the notebook_login helper: Copied >>> from huggingface_hub import notebook_login >>> notebook_login() Sharing a Model Copied >>> import timm >>> model = timm.create_model( 'resnet18' , pretrained= True , num_classes= 4 ) Here is where you would normally train or fine-tune the model. We’ll skip that for the sake of this tutorial. Let’s pretend we’ve now fine-tuned the model. The next step would be to push it to the Hub! We can do this with the timm.models.hub.push_to_hf_hub function. Copied >>> model_cfg = dict (label_names=[ 'a' , 'b' , 'c' , 'd' ]) >>> timm.models.push_to_hf_hub(model, 'resnet18-random' , model_config=model_cfg) Running the above would push the model to <your-username>/resnet18-random on the Hub. You can now share this model with your friends, or use it in your own code! Loading a Model Loading a model from the Hub is as simple as calling timm.create_model with the pretrained argument set to the name of the model you want to load. In this case, we’ll use nateraw/resnet18-random , which is the model we just pushed to the Hub. Copied >>> model_reloaded = timm.create_model( 'hf_hub:nateraw/resnet18-random' , pretrained= True ) < > Update on GitHub ← Training With The Official Training Script Model Summaries → Sharing and Loading Models From the Hugging Face Hub Authenticating Sharing a Model Loading a Model
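To sanity-check the reloaded model, a common pattern is to build the preprocessing transform from the model's pretrained config and run a forward pass; the sketch below assumes you have a local image file named example.jpg. Copied
import timm
import torch
from PIL import Image

model = timm.create_model("hf_hub:nateraw/resnet18-random", pretrained=True)
model.eval()

# Build the eval transform that matches the model's pretrained data config.
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

img = Image.open("example.jpg").convert("RGB")  # hypothetical local image
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.shape)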
Data_types.txt
Data types Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation Data types Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Data types Datasets supported by the dataset viewer have a tabular format, meaning a data point is represented in a row and its features are contained in columns. Using the /first-rows endpoint allows you to preview the first 100 rows of a dataset and information about each feature. Within the features key, you’ll notice it returns a _type field. This value describes the data type of the column, and it is also known as a dataset’s Features . There are several different data Features for representing different data formats such as Audio and Image for speech and image data respectively. Knowing a dataset feature gives you a better understanding of the data type you’re working with, and how you can preprocess it. For example, the /first-rows endpoint for the Rotten Tomatoes dataset returns the following: Copied { "dataset" : "cornell-movie-review-data/rotten_tomatoes" , "config" : "default" , "split" : "train" , "features" : [ { "feature_idx" : 0 , "name" : "text" , "type" : { "dtype" : "string" , "id" : null , "_type" : "Value" } } , { "feature_idx" : 1 , "name" : "label" , "type" : { "num_classes" : 2 , "names" : [ "neg" , "pos" ] , "id" : null , "_type" : "ClassLabel" } } ] , ... } This dataset has two columns, text and label : The text column has a type of Value . The Value type is extremely versatile and represents scalar values such as strings, integers, dates, and even timestamp values. The label column has a type of ClassLabel . The ClassLabel type represents the number of classes in a dataset and their label names. Naturally, this means you’ll frequently see ClassLabel used in classification datasets. For a complete list of available data types, take a look at the Features documentation. < > Update on GitHub ← Splits and subsets Server infrastructure → Data types
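You can inspect these feature types programmatically by calling the /first-rows endpoint yourself; a minimal sketch with the requests library, using the same dataset as above: Copied
import requests

API_URL = "https://datasets-server.huggingface.co/first-rows"
params = {
    "dataset": "cornell-movie-review-data/rotten_tomatoes",
    "config": "default",
    "split": "train",
}
data = requests.get(API_URL, params=params).json()

# Print each column name together with its feature _type.
for feature in data["features"]:
    print(feature["name"], "->", feature["type"]["_type"])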
Supported_architectures.txt
Supported architectures Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AWS Trainium & Inferentia documentation Supported architectures AWS Trainium & Inferentia 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Optimum Neuron 🤗 Optimum Neuron Installation Quickstart Optimum Containers Training Tutorials Notebooks Fine-tune BERT for Text Classification on AWS Trainium Fine-tune Llama 3 8B on AWS Trainium Fine-tune Llama 3 8B on with LoRA and the SFTTrainer Inference Tutorials Notebooks Create your own chatbot with llama-2-13B on AWS Inferentia Sentence Transformers on AWS Inferentia Generate images with Stable Diffusion models on AWS Inferentia How-To Guides Set up AWS Trainium instance Training and Deployment using Amazon Sagemaker Neuron model cache Fine-tune Transformers with AWS Trainium Distributed Training Export a model to Inferentia Inference pipelines with AWS Neuron NeuronX Text-generation-inference for AWS inferentia2 Benchmarks Mistral Small on AWS Inferentia2 Llama-3.1 8B on AWS Inferentia2 Contribute Add support for a new model architecture Reference Neuron Trainer Neuron Distributed Supported Architectures Neuron Exporter Neuron Models Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Supported architectures Transformers Architecture Task ALBERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification AST feature-extraction, audio-classification BERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification BLOOM text-generation Beit feature-extraction, image-classification CamemBERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification ConvBERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification ConvNext feature-extraction, image-classification ConvNextV2 feature-extraction, image-classification CvT feature-extraction, image-classification DeBERTa (INF2 only) feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification DeBERTa-v2 (INF2 only) feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification Deit feature-extraction, image-classification DistilBERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification DonutSwin feature-extraction Dpt feature-extraction ELECTRA feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification ESM feature-extraction, fill-mask, text-classification, token-classification FlauBERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification GPT2 text-generation 
Hubert feature-extraction, automatic-speech-recognition, audio-classification Levit feature-extraction, image-classification Llama, Llama 2, Llama 3 text-generation Mistral text-generation Mixtral text-generation MobileBERT feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification MobileNetV2 feature-extraction, image-classification, semantic-segmentation MobileViT feature-extraction, image-classification, semantic-segmentation MPNet feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification OPT text-generation Phi feature-extraction, text-classification, token-classification RoBERTa feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification RoFormer feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification Swin feature-extraction, image-classification T5 text2text-generation UniSpeech feature-extraction, automatic-speech-recognition, audio-classification UniSpeech-SAT feature-extraction, automatic-speech-recognition, audio-classification, audio-frame-classification, audio-xvector ViT feature-extraction, image-classification Wav2Vec2 feature-extraction, automatic-speech-recognition, audio-classification, audio-frame-classification, audio-xvector WavLM feature-extraction, automatic-speech-recognition, audio-classification, audio-frame-classification, audio-xvector XLM feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification XLM-RoBERTa feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, token-classification Yolos feature-extraction, object-detection Diffusers Architecture Task Stable Diffusion text-to-image, image-to-image, inpaint Stable Diffusion XL Base text-to-image, image-to-image, inpaint Stable Diffusion XL Refiner image-to-image, inpaint SDXL Turbo text-to-image, image-to-image, inpaint LCM text-to-image PixArt-α text-to-image PixArt-Σ text-to-image Sentence Transformers Architecture Task Transformer feature-extraction, sentence-similarity CLIP feature-extraction, zero-shot-image-classification More details for checking supported tasks here . More architectures coming soon, stay tuned! 🚀 ← Neuron Distributed Neuron Exporter → Supported architectures Transformers Diffusers Sentence Transformers
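As a rough sketch of how one of the supported Transformers architectures can be compiled for Neuron, the example below uses the optimum-neuron Python API with on-the-fly export; the static input shapes (batch_size, sequence_length) are assumptions based on the export guide, so check the Neuron Exporter reference for the authoritative options. Copied
# Sketch only: assumes optimum-neuron is installed on a Trainium/Inferentia instance.
from optimum.neuron import NeuronModelForSequenceClassification

model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,           # compile the checkpoint for Neuron on the fly
    batch_size=1,          # static shapes are required for compilation
    sequence_length=128,
)
model.save_pretrained("distilbert_neuron/")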
File_formats.txt
File formats Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation File formats Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Argilla Dask Datasets Distilabel DuckDB FiftyOne Pandas Polars Authentication for private and gated datasets Supported file formats Performing data transformations Performance optimizations Spark WebDataset Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started File formats Polars supports the following file formats when reading from Hugging Face: Parquet CSV JSON Lines The examples below show the default settings only. Use the links above to view all available parameters in the API reference guide. Parquet Parquet is the preferred file format as it stores the schema with type information within the file. This avoids any ambiguity with parsing and speeds up reading. To read a Parquet file in Polars, use the read_parquet function: Copied pl.read_parquet( "hf://datasets/roneneldan/TinyStories/data/train-00000-of-00004-2d5a1467fff1081b.parquet" ) CSV The read_csv function can be used to read a CSV file: Copied pl.read_csv( "hf://datasets/lhoestq/demo1/data/train.csv" ) JSON Polars supports reading new line delimited JSON — also known as json lines — with the read_ndjson function: Copied pl.read_ndjson( "hf://datasets/proj-persona/PersonaHub/persona.jsonl" ) < > Update on GitHub ← Authentication for private and gated datasets Performing data transformations → File formats
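For datasets that are split into multiple Parquet shards, Polars can also read a glob pattern over the hf:// path; the sketch below assumes the usual train-*.parquet shard layout of the dataset used above. Copied
import polars as pl

# Read all train shards at once (the glob pattern is an assumption about the file layout).
df = pl.read_parquet("hf://datasets/roneneldan/TinyStories/data/train-*.parquet")
print(df.shape)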
Analytics.txt
Analytics Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Analytics Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Single Sign-On (SSO) Audit Logs Storage Regions Dataset viewer for Private datasets Resource Groups (Access Control) Advanced Compute Options Advanced Security Tokens Management Analytics Network Security Gating Group Collections Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Analytics This feature is part of the Enterprise Hub . Analytics Dashboard Track all your repository activity with a detailed downloads overview that shows total downloads for all your Models and Datasets. Toggle between “All Time” and “Last Month” views to gain insights into your repository’s downloads over different periods. Per-repo breakdown Explore the metrics of individual repositories with the per-repository drill-down table. Utilize the built-in search feature to quickly locate specific repositories. Each row also features a time-series graph that illustrates the trend of downloads over time. Export Analytics as CSV Download a comprehensive CSV file containing analytics for all your repositories, including model and dataset download activity. Response Structure The CSV file is made of daily download records for each of your model and dataset. Copied repoType ,repoName,total,timestamp,downloads model ,huggingface/CodeBERTa-small-v1, 4362460 , 2021 - 01 - 22 T00: 00 : 00 . 000 Z, 4 model ,huggingface/CodeBERTa-small-v1, 4362460 , 2021 - 01 - 23 T00: 00 : 00 . 000 Z, 7 model ,huggingface/CodeBERTa-small-v1, 4362460 , 2021 - 01 - 24 T00: 00 : 00 . 000 Z, 2 dataset ,huggingface/documentation-images, 2167284 , 2021 - 11 - 27 T00: 00 : 00 . 000 Z, 3 dataset ,huggingface/documentation-images, 2167284 , 2021 - 11 - 28 T00: 00 : 00 . 
000 Z, 18 dataset ,huggingface/documentation-images, 2167284 , 2021 - 11 - 29 T00: 00 : 00 . 000 Z, 7 Repository Object Structure Each record in the CSV contains: repoType : The type of repository (e.g., “model”, “dataset”) repoName : Full repository name including organization (e.g., “huggingface/documentation-images”) total : Cumulative number of downloads for this repository timestamp : ISO 8601 formatted date (UTC) downloads : Number of downloads for that day Records are ordered chronologically and provide a daily granular view of download activity for each repository. < > Update on GitHub ← Tokens Management Network Security → Analytics Analytics Dashboard Per-repo breakdown Export Analytics as CSV Response Structure Repository Object Structure
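Because the export is a plain CSV, it is easy to analyze locally; the sketch below assumes the file was saved as analytics.csv and sums daily downloads per repository with pandas. Copied
import pandas as pd

# Hypothetical filename for the exported analytics CSV.
df = pd.read_csv("analytics.csv", parse_dates=["timestamp"])

# Total downloads per repository over the exported period.
per_repo = (
    df.groupby(["repoType", "repoName"])["downloads"]
    .sum()
    .sort_values(ascending=False)
)
print(per_repo.head(10))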
Signing_commits_with_GPG.txt
Signing commits with GPG Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Signing commits with GPG Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security User Access Tokens Two-Factor Authentication Git over SSH Signing Commits with GPG Single Sign-On (SSO) Advanced Access Control (Resource Groups) Malware Scanning Pickle Scanning Secrets Scanning Protect AI Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Signing commits with GPG git has an authentication layer to control who can push commits to a repo, but it does not authenticate the actual commit authors. In other words, you can commit changes as Elon Musk <[email protected]> , push them to your preferred git host (for instance github.com), and your commit will link to Elon’s GitHub profile. (Try it! But don’t blame us if Elon gets mad at you for impersonating him.) The reasons we implemented GPG signing were: To provide finer-grained security, especially as more and more Enterprise users rely on the Hub. To provide ML benchmarks backed by a cryptographically-secure source. See Ale Segala’s How (and why) to sign git commits for more context. You can prove a commit was authored by you with GNU Privacy Guard (GPG) and a key server. GPG is a cryptographic tool used to verify the authenticity of a message’s origin. We’ll explain how to set this up on Hugging Face below. The Pro Git book is, as usual, a good resource about commit signing: Pro Git: Signing your work . Setting up signed commits verification You will need to install GPG on your system in order to execute the following commands. It’s included by default in most Linux distributions. On Windows, it is included in Git Bash (which comes with git for Windows). You can sign your commits locally using GPG . 
Then configure your profile to mark these commits as verified on the Hub, so other people can be confident that they come from a trusted source. For a more in-depth explanation of how git and GPG interact, please visit the git documentation on the subject. Commits can have the following signing statuses: Verified: the commit is signed and the signature is verified. Unverified: the commit is signed but the signature could not be verified. No signing status: the commit is not signed. For a commit to be marked as verified, you need to upload the public key used to sign it to your Hugging Face account. Use the gpg --list-secret-keys command to list the GPG keys for which you have both a public and private key. A private key is required for signing commits or tags. If you don't have a GPG key pair or you don't want to use the existing keys to sign your commits, go to Generating a new GPG key . Otherwise, go straight to Adding a GPG key to your account . Generating a new GPG key To generate a GPG key, run the following: Copied gpg --gen-key GPG will then guide you through the process of creating a GPG key pair. Make sure you specify an email address for this key, and that the email address matches the one you specified in your Hugging Face account . Adding a GPG key to your account First, select or generate a GPG key on your computer. Make sure the email address of the key matches the one in your Hugging Face account and that the email of your account is verified. Export the public part of the selected key: Copied gpg --armor --export <YOUR KEY ID> Then visit your profile settings page and click on Add GPG Key . Copy & paste the output of the gpg --export command in the text area and click on Add Key . Congratulations! 🎉 You've just added a GPG key to your account! Configure git to sign your commits with GPG The last step is to configure git to sign your commits: Copied git config user.signingkey <Your GPG Key ID> git config user.email <Your email on hf.co> Then add the -S flag to your git commit commands to sign your commits! Copied git commit -S -m "My first signed commit" Once pushed to the Hub, you should see the commit with a "Verified" badge. To sign all commits by default in any local repository on your computer, you can run git config --global commit.gpgsign true . < > Update on GitHub ← Git over SSH Single Sign-On (SSO) → Signing commits with GPG Setting up signed commits verification Generating a new GPG key Adding a GPG key to your account Configure git to sign your commits with GPG
Optimizations.txt
Optimizations Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Optimizations Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Argilla Dask Datasets Distilabel DuckDB FiftyOne Pandas Polars Authentication for private and gated datasets Supported file formats Performing data transformations Performance optimizations Spark WebDataset Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Optimizations We briefly touched upon the difference between lazy and eager evaluation. On this page we will show how the lazy API can be used to get huge performance benefits. Lazy vs Eager Polars supports two modes of operation: lazy and eager. In the eager API the query is executed immediately while in the lazy API the query is only evaluated once it’s ‘needed’. Deferring the execution to the last minute can have significant performance advantages and is why the lazy API is preferred in most non-interactive cases. Example We will be using the example from the previous page to show the performance benefits of using the lazy API. The code below will compute the number of uploads from archive.org . Eager Copied import polars as pl import datetime df = pl.read_csv( "hf://datasets/commoncrawl/statistics/tlds.csv" , try_parse_dates= True ) df = df.select( "suffix" , "crawl" , "date" , "tld" , "pages" , "domains" ) df = df. filter ( (pl.col( "date" ) >= datetime.date( 2020 , 1 , 1 )) | pl.col( "crawl" ). str .contains( "CC" ) ) df = df.with_columns( (pl.col( "pages" ) / pl.col( "domains" )).alias( "pages_per_domain" ) ) df = df.group_by( "tld" , "date" ).agg( pl.col( "pages" ). sum (), pl.col( "domains" ). 
sum (), ) df = df.group_by( "tld" ).agg( pl.col( "date" ).unique().count().alias( "number_of_scrapes" ), pl.col( "domains" ).mean().alias( "avg_number_of_domains" ), pl.col( "pages" ).sort_by( "date" ).pct_change().mean().alias( "avg_page_growth_rate" ), ).sort( "avg_number_of_domains" , descending= True ).head( 10 ) Lazy Copied import polars as pl import datetime lf = ( pl.scan_csv( "hf://datasets/commoncrawl/statistics/tlds.csv" , try_parse_dates= True ) . filter ( (pl.col( "date" ) >= datetime.date( 2020 , 1 , 1 )) | pl.col( "crawl" ). str .contains( "CC" ) ).with_columns( (pl.col( "pages" ) / pl.col( "domains" )).alias( "pages_per_domain" ) ).group_by( "tld" , "date" ).agg( pl.col( "pages" ). sum (), pl.col( "domains" ). sum (), ).group_by( "tld" ).agg( pl.col( "date" ).unique().count().alias( "number_of_scrapes" ), pl.col( "domains" ).mean().alias( "avg_number_of_domains" ), pl.col( "pages" ).sort_by( "date" ).pct_change().mean().alias( "avg_page_growth_rate" ), ).sort( "avg_number_of_domains" , descending= True ).head( 10 ) ) df = lf.collect() Timings Running both queries leads to following run times on a regular laptop with a household internet connection: Eager: 1.96 seconds Lazy: 410 milliseconds The lazy query is ~5 times faster than the eager one. The reason for this is the query optimizer: if we delay collect -ing our dataset until the end, Polars will be able to reason about which columns and rows are required and apply filters as early as possible when reading the data. For file formats such as Parquet that contain metadata (e.g. min, max in a certain group of rows) the difference can even be bigger as Polars can skip entire row groups based on the filters and the metadata without sending the data over the wire. < > Update on GitHub ← Performing data transformations Spark → Optimizations Lazy vs Eager Example Eager Lazy Timings
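If you want to see what the query optimizer does with the lazy query above, you can print the optimized plan before collecting; a minimal sketch reusing the lf LazyFrame from the lazy example: Copied
# Show the optimized logical plan; projection and predicate pushdown
# should appear close to the CSV scan node.
print(lf.explain())

# Only then materialize the result.
df = lf.collect()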
cuDF.txt
cuDF Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation cuDF Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started cuDF cuDF is a Python GPU DataFrame library. To read from a single Parquet file, use the read_parquet function to read it into a DataFrame: Copied import cudf df = ( cudf.read_parquet( "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet" ) .groupby( 'sign' )[ 'text' ] .apply( lambda x: x. str . len ().mean()) .sort_values(ascending= False ) .head( 5 ) ) To read multiple Parquet files - for example, if the dataset is sharded - you’ll need to use dask-cudf : Copied import dask import dask.dataframe as dd dask.config. set ({ "dataframe.backend" : "cudf" }) df = ( dd.read_parquet( "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/*.parquet" ) ) < > Update on GitHub ← ClickHouse DuckDB → cuDF
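Note that the dask-cudf dataframe above is lazy, so nothing is read until a computation is triggered; below is a minimal sketch (assuming the same df as in the second snippet) that computes the overall mean text length across all shards. Copied
# The dask-cudf dataframe is lazy; .compute() triggers the actual GPU work.
mean_text_length = df["text"].str.len().mean().compute()
print(mean_text_length)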
transformers.txt
transformers Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation transformers Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started transformers Entry point for the Transformers.js library. Only the exports from this file are available to the end user, and are grouped as follows: Pipelines Environment variables Models Tokenizers Processors < > Update on GitHub ← Server-side Audio Processing in Node.js Pipelines → transformers
Amazon_SageMaker.txt
Amazon SageMaker Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Amazon SageMaker Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Amazon SageMaker Hugging Face and Amazon introduced new Hugging Face Deep Learning Containers (DLCs) to make it easier than ever to train Hugging Face Transformer models in Amazon SageMaker . Getting Started Setup & Installation Before you can run your Accelerate scripts on Amazon SageMaker you need to sign up for an AWS account. If you do not have an AWS account yet learn more here . After you have your AWS Account you need to install the sagemaker sdk for Accelerate with: Copied pip install "accelerate[sagemaker]" --upgrade Accelerate currently uses the DLCs, with transformers , datasets and tokenizers pre-installed. Accelerate is not in the DLC yet (will soon be added!) so to use it within Amazon SageMaker you need to create a requirements.txt in the same directory where your training script is located and add it as dependency: Copied accelerate You should also add any other dependencies you have to this requirements.txt . 
Configure Accelerate You can configure the launch configuration for Amazon SageMaker the same as you do for non SageMaker training jobs with the Accelerate CLI: Copied accelerate config # In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 1 Accelerate will go through a questionnaire about your Amazon SageMaker setup and create a config file you can edit. Accelerate is not saving any of your credentials. Prepare a Accelerate fine-tuning script The training script is very similar to a training script you might run outside of SageMaker, but to save your model after training you need to specify either /opt/ml/model or use os.environ["SM_MODEL_DIR"] as your save directory. After training, artifacts in this directory are uploaded to S3: Copied - torch.save('/opt/ml/model`) + accelerator.save('/opt/ml/model') SageMaker doesn’t support argparse actions. If you want to use, for example, boolean hyperparameters, you need to specify type as bool in your script and provide an explicit True or False value for this hyperparameter. [REF] . Launch Training You can launch your training with Accelerate CLI with: Copied accelerate launch path_to_script.py --args_to_the_script This will launch your training script using your configuration. The only thing you have to do is provide all the arguments needed by your training script as named arguments. Examples If you run one of the example scripts, don’t forget to add accelerator.save('/opt/ml/model') to it. Copied accelerate launch ./examples/sagemaker_example.py Outputs: Copied Configuring Amazon SageMaker environment Converting Arguments to Hyperparameters Creating Estimator 2021 -04 -08 11 : 56 : 50 Starting - Starting the training job ... 2021 -04 -08 11 : 57 : 13 Starting - Launching requested ML instancesProfilerReport -1617883008 : InProgress ... ... ... 2021 -04 -08 11 : 58 : 54 Starting - Preparing the instances for training ... ... ... 2021 -04 -08 12 : 00 : 24 Downloading - Downloading input data 2021 -04 -08 12 : 00 : 24 Training - Downloading the training image ... ... ... ... ... ... 2021 -04 -08 12 : 03 : 39 Training - Training image download completed. Training in progress.. ... ... .. epoch 0 : { 'accuracy' : 0.7598039215686274 , 'f1' : 0.8178438661710037 } epoch 1 : { 'accuracy' : 0.8357843137254902 , 'f1' : 0.882249560632689 } epoch 2 : { 'accuracy' : 0.8406862745098039 , 'f1' : 0.8869565217391304 } ... ... .. 2021 -04 -08 12 : 05 : 40 Uploading - Uploading generated training model 2021 -04 -08 12 : 05 : 40 Completed - Training job completed Training seconds: 331 Billable seconds: 331 You can find your model data at: s3: //your-bucket/accelerate-sagemaker-1-2021-04-08-11-56-47-108/output/model.tar.gz Advanced Features Distributed Training: Data Parallelism Set up the accelerate config by running accelerate config and answer the SageMaker questions and set it up. To use SageMaker DDP, select it when asked What is the distributed mode? ([0] No distributed training, [1] data parallelism): . Example config below: Copied base_job_name: accelerate-sagemaker-1 compute_environment: AMAZON_SAGEMAKER distributed_type: DATA_PARALLEL ec2_instance_type: ml.p3.16xlarge iam_role_name: xxxxx image_uri: null mixed_precision: fp16 num_machines: 1 profile: xxxxx py_version: py10 pytorch_version: 2.5 .0 region: us-east-1 transformers_version: 4.17 .0 use_cpu: false Distributed Training: Model Parallelism currently in development, will be supported soon. 
Python packages and dependencies Accelerate currently uses the DLCs, with transformers , datasets and tokenizers pre-installed. If you want to use different/other Python packages you can do this by adding them to the requirements.txt . These packages will be installed before your training script is started. Local Training: SageMaker Local mode The local mode in the SageMaker SDK allows you to run your training script locally inside the HuggingFace DLC (Deep Learning container) or using your custom container image. This is useful for debugging and testing your training script inside the final container environment. Local mode uses Docker compose ( Note: Docker Compose V2 is not supported yet ). The SDK will handle the authentication against ECR to pull the DLC to your local environment. You can emulate CPU (single and multi-instance) and GPU (single instance) SageMaker training jobs. To use local mode, you need to set your ec2_instance_type to local . Copied ec2_instance_type: local Advanced configuration The configuration allows you to override parameters for the Estimator . These settings have to be applied in the config file and are not part of accelerate config . You can control many additional aspects of the training job, e.g. use Spot instances, enable network isolation and many more. Copied additional_args: # enable network isolation to restrict internet access for containers enable_network_isolation: True You can find all available configuration here . Use Spot Instances You can use Spot Instances e.g. using (see Advanced configuration ): Copied additional_args: use_spot_instances: True max_wait: 86400 Note: Spot Instances are subject to be terminated and training to be continued from a checkpoint. This is not handled in Accelerate out of the box. Contact us if you would like this feature. Remote scripts: Use scripts located on Github undecided if feature is needed. Contact us if you would like this feature. < > Update on GitHub ← Megatron-LM Apple M1 GPUs → Amazon Sage Maker Getting Started Setup & Installation Configure Accelerate Prepare a Accelerate fine-tuning script Launch Training Advanced Features Distributed Training: Data Parallelism Distributed Training: Model Parallelism Python packages and dependencies Local Training: Sage Maker Local mode Advanced configuration Use Spot Instances Remote scripts: Use scripts located on Github
Tasks.txt
Tasks Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Tasks Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Integrate a library with the Hub Tasks GGUF DDUF Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Tasks What’s a task? Tasks, or pipeline types, describe the “shape” of each model’s API (inputs and outputs) and are used to determine which Inference API and widget we want to display for any given model. This classification is relatively coarse-grained (you can always add more fine-grained task names in your model tags), so you should rarely have to create a new task . If you want to add support for a new task, this document explains the required steps. Overview Having a new task integrated into the Hub means that: Users can search for all models – and datasets – of a given task. The Inference API supports the task. Users can try out models directly with the widget. 🏆 Note that you don’t need to implement all the steps by yourself. Adding a new task is a community effort, and multiple people can contribute. 🧑‍🤝‍🧑 To begin the process, open a new issue in the huggingface_hub repository. Please use the “Adding a new task” template. ⚠️Before doing any coding, it’s suggested to go over this document. ⚠️ The first step is to upload a model for your proposed task. Once you have a model in the Hub for the new task, the next step is to enable it in the Inference API. There are three types of support that you can choose from: 🤗 using a transformers model 🐳 using a model from an officially supported library 🖨️ using a model with custom inference code. This experimental option has downsides, so we recommend using one of the other approaches. 
Finally, you can add a couple of UI elements, such as the task icon and the widget, that complete the integration in the Hub. 📷 Some steps are orthogonal; you don’t need to do them in order. You don’t need the Inference API to add the icon. This means that, even if there isn’t full integration yet, users can still search for models of a given task. Adding new tasks to the Hub Using Hugging Face transformers library If your model is a transformers -based model, there is a 1:1 mapping between the Inference API task and a pipeline class. Here are some example PRs from the transformers library: Adding ImageClassificationPipeline Adding AudioClassificationPipeline Once the pipeline is submitted and deployed, you should be able to use the Inference API for your model. Using Community Inference API with a supported library The Hub also supports over 10 open-source libraries in the Community Inference API . Adding a new task is relatively straightforward and requires 2 PRs: PR 1: Add the new task to the API validation . This code ensures that the inference input is valid for a given task. Some PR examples: Add text-to-image Add audio-classification Add tabular-classification PR 2: Add the new task to a library docker image. You should also add a template to docker_images/common/app/pipelines to facilitate integrating the task in other libraries. Here is an example PR: Add text-classification to spaCy Adding Community Inference API for a quick prototype My model is not supported by any library. Am I doomed? 😱 We recommend using Hugging Face Spaces for these use cases. UI elements The Hub allows users to filter models by a given task. To do this, you need to add the task to several places. You’ll also get to pick an icon for the task! Add the task type to Types.ts In huggingface.js/packages/tasks/src/pipelines.ts , you need to do a couple of things Add the type to PIPELINE_DATA . Note that pipeline types are sorted into different categories (NLP, Audio, Computer Vision, and others). You will also need to fill minor changes in huggingface.js/packages/tasks/src/tasks/index.ts Choose an icon You can add an icon in the lib/Icons directory. We usually choose carbon icons from https://icones.js.org/collection/carbon . Also add the icon to PipelineIcon . Widget Once the task is in production, what could be more exciting than implementing some way for users to play directly with the models in their browser? 🤩 You can find all the widgets here . If you would be interested in contributing with a widget, you can look at the implementation of all the widgets. < > Update on GitHub ← Integrate a library with the Hub GGUF → Tasks What’s a task? Overview Adding new tasks to the Hub Using Hugging Face transformers library Using Community Inference AP I with a supported library Adding Community Inference AP I for a quick prototype U I elements Widget
Interface__TextGenerationInput.txt
Interface: TextGenerationInput

Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts .

Indexable
▪ [property: string]: unknown

Properties

inputs • inputs : string
Defined in tasks/dist/commonjs/tasks/text-generation/inference.d.ts:14

parameters • Optional parameters : TextGenerationInputGenerateParameters
Defined in tasks/dist/commonjs/tasks/text-generation/inference.d.ts:15

stream • Optional stream : boolean
Defined in tasks/dist/commonjs/tasks/text-generation/inference.d.ts:16
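Because this interface mirrors the JSON body accepted by TGI-backed text-generation endpoints, the same shape can be posted over plain HTTP from any language. Below is a hedged Python sketch; the endpoint URL and hf_... token are placeholders, and whether the stream flag is honored depends on the endpoint you call.

Copied
import requests

API_URL = "https://api-inference.huggingface.co/models/<your-model>"  # placeholder endpoint
headers = {"Authorization": "Bearer hf_..."}  # your access token

payload = {
    "inputs": "The answer to the universe is",                  # required `inputs`
    "parameters": {"max_new_tokens": 20, "temperature": 0.7},   # optional generation parameters
    "stream": False,                                             # optional `stream` flag
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())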
Using_BERTopic_at_Hugging_Face.txt
Using BERTopic at Hugging Face Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using BERTopic at Hugging Face Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Adapters AllenNLP BERTopic Asteroid Diffusers ESPnet fastai Flair Keras TF-Keras (legacy) ML-Agents mlx-image MLX OpenCLIP PaddleNLP peft RL-Baselines3-Zoo Sample Factory Sentence Transformers SetFit spaCy SpanMarker SpeechBrain Stable-Baselines3 Stanza TensorBoard timm Transformers Transformers.js Unity Sentis Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using BERTopic at Hugging Face BERTopic is a topic modeling framework that leverages 🤗 transformers and c-TF-IDF to create dense clusters allowing for easily interpretable topics whilst keeping important words in the topic descriptions. BERTopic supports all kinds of topic modeling techniques: Guided Supervised Semi-supervised Manual Multi-topic distributions Hierarchical Class-based Dynamic Online/Incremental Multimodal Multi-aspect Text Generation/LLM Zero-shot (new!) Merge Models (new!) Seed Words (new!) Exploring BERTopic on the Hub You can find BERTopic models by filtering at the left of the models page . BERTopic models hosted on the Hub have a model card with useful information about the models. Thanks to BERTopic Hugging Face Hub integration, you can load BERTopic models with a few lines of code. You can also deploy these models using Inference Endpoints . Installation To get started, you can follow the BERTopic installation guide . 
You can also use the following one-line install through pip: Copied pip install bertopic Using Existing Models All BERTopic models can easily be loaded from the Hub: Copied from bertopic import BERTopic topic_model = BERTopic.load( "MaartenGr/BERTopic_Wikipedia" ) Once loaded, you can use BERTopic’s features to predict the topics for new instances: Copied topic, prob = topic_model.transform( "This is an incredible movie!" ) topic_model.topic_labels_[topic] Which gives us the following topic: Copied 64_rating_rated_cinematography_film Sharing Models When you have created a BERTopic model, you can easily share it with others through the Hugging Face Hub. To do so, we can make use of the push_to_hf_hub function that allows us to directly push the model to the Hugging Face Hub: Copied from bertopic import BERTopic # Train model topic_model = BERTopic().fit(my_docs) # Push to HuggingFace Hub topic_model.push_to_hf_hub( repo_id= "MaartenGr/BERTopic_ArXiv" , save_ctfidf= True ) Note that the saved model does not include the dimensionality reduction and clustering algorithms. Those are removed since they are only necessary to train the model and find relevant topics. Inference is done through a straightforward cosine similarity between the topic and document embeddings. This not only speeds up the model but allows us to have a tiny BERTopic model that we can work with. Additional Resources BERTopic repository BERTopic docs BERTopic models in the Hub < > Update on GitHub ← AllenNLP Asteroid → Using BER Topic at Hugging Face Exploring BER Topic on the Hub Installation Using Existing Models Sharing Models Additional Resources
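Putting the pieces above together, here is a short, hedged sketch that loads a Hub-hosted model, assigns topics to a small batch of documents, and inspects the topic table; it assumes network access to the Hub and that the public MaartenGr/BERTopic_Wikipedia model used earlier is available.

Copied
from bertopic import BERTopic

topic_model = BERTopic.load("MaartenGr/BERTopic_Wikipedia")

docs = ["This is an incredible movie!", "The soundtrack was recorded live in Berlin."]
topics, probs = topic_model.transform(docs)  # batch inference via embedding similarity

print(topic_model.get_topic_info().head())             # overview of the learned topics
print([topic_model.topic_labels_[t] for t in topics])  # human-readable label per document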
IA3.txt
IA3 Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation IA3 PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started IA3 Infused Adapter by Inhibiting and Amplifying Inner Activations, or IA3 , is a method that adds three learned vectors to rescale the keys and values of the self-attention and encoder-decoder attention layers, and the intermediate activation of the position-wise feed-forward network. The abstract from the paper is: Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available . IA3Config class peft. 
IA3Config < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False target_modules : Optional[Union[list[str], str]] = None exclude_modules : Optional[Union[list[str], str]] = None feedforward_modules : Optional[Union[list[str], str]] = None fan_in_fan_out : bool = False modules_to_save : Optional[list[str]] = None init_ia3_weights : bool = True ) Parameters target_modules ( Optional[Union[List[str], str]] ) — The names of the modules to apply the adapter to. If this is specified, only the modules with the specified names will be replaced. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. If this is specified as ‘all-linear’, then all linear/Conv1D modules are chosen, excluding the output layer. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised — in this case, you should specify the target modules manually. exclude_modules ( Optional[Union[List[str], str]] ) — The names of the modules to not apply the adapter. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. feedforward_modules ( Optional[Union[List[str], str]] ) — The names of the modules to be treated as feedforward modules, as in the original paper. These modules will have (IA)³ vectors multiplied to the input, instead of the output. feedforward_modules must be a name or a subset of names present in target_modules . fan_in_fan_out ( bool ) — Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses Conv1D which stores weights like (fan_in, fan_out) and hence this should be set to True . modules_to_save ( Optional[List[str]] ) — List of modules apart from (IA)³ layers to be set as trainable and saved in the final checkpoint. init_ia3_weights ( bool ) — Whether to initialize the vectors in the (IA)³ layers, defaults to True . Setting this to False is discouraged. This is the configuration class to store the configuration of a IA3Model . IA3Model class peft. IA3Model < source > ( model config adapter_name low_cpu_mem_usage : bool = False ) → torch.nn.Module Parameters model ( PreTrainedModel ) — The model to be adapted. config ( IA3Config ) — The configuration of the (IA)^3 model. adapter_name ( str ) — The name of the adapter, defaults to "default" . low_cpu_mem_usage ( bool , optional , defaults to False ) — Create empty adapter weights on meta device. Useful to speed up the loading process. Returns torch.nn.Module The (IA)^3 model. Creates a Infused Adapter by Inhibiting and Amplifying Inner Activations ((IA)^3) model from a pretrained transformers model. The method is described in detail in https://arxiv.org/abs/2205.05638 Example: Copied >>> from transformers import AutoModelForSeq2SeqLM, ia3Config >>> from peft import IA3Model, IA3Config >>> config = IA3Config( ... peft_type= "IA3" , ... task_type= "SEQ_2_SEQ_LM" , ... target_modules=[ "k" , "v" , "w0" ], ... feedforward_modules=[ "w0" ], ... 
) >>> model = AutoModelForSeq2SeqLM.from_pretrained( "t5-base" ) >>> ia3_model = IA3Model(config, model) Attributes : model ( PreTrainedModel ) — The model to be adapted. peft_config ( ia3Config ): The configuration of the (IA)^3 model. add_weighted_adapter < source > ( adapters : list[str] weights : list[float] adapter_name : str ) Parameters adapters ( list ) — List of adapter names to be merged. weights ( list ) — List of weights for each adapter. adapter_name ( str ) — Name of the new adapter. This method adds a new adapter by merging the given adapters with the given weights. delete_adapter < source > ( adapter_name : str ) Parameters adapter_name (str) — Name of the adapter to be deleted. Deletes an existing adapter. disable_adapter_layers < source > ( ) Disable all adapters. When disabling all adapters, the model output corresponds to the output of the base model. enable_adapter_layers < source > ( ) Enable all adapters. Call this if you have previously disabled all adapters and want to re-enable them. merge_and_unload < source > ( safe_merge : bool = False adapter_names : Optional[list[str]] = None ) Parameters safe_merge ( bool ) — whether to activate the safe merging check to check if there is any potential Nan in the adapter weights adapter_names ( List[str] , optional ) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None . This method merges the IA³ layers into the base model. This is needed if someone wants to use the base model as a standalone model. Example: Copied >>> from transformers import AutoModelForCausalLM >>> from peft import PeftModel >>> base_model = AutoModelForCausalLM.from_pretrained( "tiiuae/falcon-40b" ) >>> peft_model_id = "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample" >>> model = PeftModel.from_pretrained(base_model, peft_model_id) >>> merged_model = model.merge_and_unload() set_adapter < source > ( adapter_name : str | list[str] ) Parameters adapter_name ( str or list[str] ) — Name of the adapter(s) to be activated. Set the active adapter(s). Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, use the following code. Copied >>> for name, param in model_peft.named_parameters(): ... if ...: # some check on name (ex. if 'lora' in name) ... param.requires_grad = False unload < source > ( ) Gets back the base model by removing all the IA³ modules without merging. This gives back the original base model. < > Update on GitHub ← AdaLoRA Llama-Adapter → I A3 I A3 Config I A3 Model
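For a more typical fine-tuning entry point than instantiating IA3Model directly, here is a hedged sketch using get_peft_model; it assumes a T5 checkpoint whose architecture PEFT recognizes, so the default target and feedforward modules are selected automatically.

Copied
from transformers import AutoModelForSeq2SeqLM
from peft import IA3Config, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
# For known architectures, PEFT fills in target_modules/feedforward_modules itself
config = IA3Config(task_type=TaskType.SEQ_2_SEQ_LM)
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()  # only the (IA)^3 vectors are trainable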
Interface__ImageSegmentationOutputValue.txt
Interface: ImageSegmentationOutputValue

Properties

label • label : string
The label for the class (model specific) of a segment.
Defined in inference/src/tasks/cv/imageSegmentation.ts:16

mask • mask : string
A str (base64 str of a single channel black-and-white img) representing the mask of a segment.
Defined in inference/src/tasks/cv/imageSegmentation.ts:20

score • score : number
A float that represents how likely it is that the detected object belongs to the given class.
Defined in inference/src/tasks/cv/imageSegmentation.ts:24
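To illustrate the shape, here is a hedged Python sketch that builds a fake response in this format and decodes the mask field; it assumes the mask is a base64-encoded image file (for example a PNG), which is how hosted image-segmentation endpoints typically return it.

Copied
import base64
import io

from PIL import Image

# Build a fake single-segment response so the sketch is self-contained:
# a 4x4 single-channel mask, PNG-encoded and then base64-encoded.
buffer = io.BytesIO()
Image.new("L", (4, 4), color=255).save(buffer, format="PNG")
segments = [{"label": "cat", "score": 0.97, "mask": base64.b64encode(buffer.getvalue()).decode()}]

# Decode each segment's mask back into an image and report its label/score.
for segment in segments:
    mask = Image.open(io.BytesIO(base64.b64decode(segment["mask"])))
    print(segment["label"], round(segment["score"], 2), mask.size)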
Interface__RepoId.txt
Interface: RepoId

Properties

name • name : string
Defined in hub/src/types/public.ts:6

type • type : RepoType
Defined in hub/src/types/public.ts:7
Load_audio_data.txt
Load audio data Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Load audio data Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Load audio data You can load an audio dataset using the Audio feature that automatically decodes and resamples the audio files when you access the examples. Audio decoding is based on the soundfile python package, which uses the libsndfile C library under the hood. Installation To work with audio datasets, you need to have the audio dependencies installed. Check out the installation guide to learn how to install it. Local files You can load your own dataset using the paths to your audio files. Use the cast_column() function to take a column of audio file paths, and cast it to the Audio feature: Copied >>> audio_dataset = Dataset.from_dict({ "audio" : [ "path/to/audio_1" , "path/to/audio_2" , ..., "path/to/audio_n" ]}).cast_column( "audio" , Audio()) >>> audio_dataset[ 0 ][ "audio" ] { 'array' : array([ 0. , 0.00024414 , - 0.00024414 , ..., - 0.00024414 , 0. , 0. ], dtype=float32), 'path' : 'path/to/audio_1' , 'sampling_rate' : 16000 } AudioFolder You can also load a dataset with an AudioFolder dataset builder. It does not require writing a custom dataloader, making it useful for quickly creating and loading audio datasets with several thousand audio files. AudioFolder with metadata To link your audio files with metadata information, make sure your dataset has a metadata.csv file. 
Your dataset structure might look like: Copied folder /train/m etadata.csv folder /train/ first_audio_file.mp3 folder /train/ second_audio_file.mp3 folder /train/ third_audio_file.mp3 Your metadata.csv file must have a file_name column which links audio files with their metadata. An example metadata.csv file might look like: Copied file_name,transcription first_audio_file.mp3,znowu się duch z ciałem zrośnie w młodocianej wstaniesz wiosnie i możesz skutkiem tych leków umierać wstawać wiek wieków dalej tam były przestrogi jak siekać głowę jak nogi second_audio_file.mp3,już u źwierzyńca podwojów król zasiada przy nim książęta i panowie rada a gdzie wzniosły krążył ganek rycerze obok kochanek król skinął palcem zaczęto igrzysko third_audio_file.mp3,pewnie kędyś w obłędzie ubite minęły szlaki zaczekajmy dzień jaki poślemy szukać wszędzie dziś jutro pewnie będzie posłali wszędzie sługi czekali dzień i drugi gdy nic nie doczekali z płaczem chcą jechać dali AudioFolder will load audio data and create a transcription column containing texts from metadata.csv : Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "audiofolder" , data_dir= "/path/to/folder" ) >>> # OR by specifying the list of files >>> dataset = load_dataset( "audiofolder" , data_files=[ "path/to/audio_1" , "path/to/audio_2" , ..., "path/to/audio_n" ]) You can load remote datasets from their URLs with the data_files parameter: Copied >>> dataset = load_dataset( "audiofolder" , data_files=[ "https://foo.bar/audio_1" , "https://foo.bar/audio_2" , ..., "https://foo.bar/audio_n" ] >>> # for example, pass SpeechCommands archive: >>> dataset = load_dataset( "audiofolder" , data_files= "https://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz" ) Metadata can also be specified as JSON Lines, in which case use metadata.jsonl as the name of the metadata file. This format is helpful in scenarios when one of the columns is complex, e.g. a list of floats, to avoid parsing errors or reading the complex values as strings. To ignore the information in the metadata file, set drop_metadata=True in load_dataset() : Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "audiofolder" , data_dir= "/path/to/folder" , drop_metadata= True ) If you don’t have a metadata file, AudioFolder automatically infers the label name from the directory name. If you want to drop automatically created labels, set drop_labels=True . In this case, your dataset will only contain an audio column: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "audiofolder" , data_dir= "/path/to/folder_without_metadata" , drop_labels= True ) For more information about creating your own AudioFolder dataset, take a look at the Create an audio dataset guide. For a guide on how to load any type of dataset, take a look at the general loading guide . < > Update on GitHub ← Troubleshooting Process audio data → Load audio data Installation Local files Audio Folder Audio Folder with metadata
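Since the Audio feature decodes and resamples lazily, you can also pin a target sampling rate when casting the column. A minimal sketch, assuming the listed paths point to real local audio files (they are placeholders here):

Copied
from datasets import Dataset, Audio

audio_dataset = Dataset.from_dict({"audio": ["path/to/audio_1", "path/to/audio_2"]})

# Decode and resample to 16 kHz whenever an example is accessed
audio_dataset = audio_dataset.cast_column("audio", Audio(sampling_rate=16_000))
# audio_dataset[0]["audio"]["array"] is then a float32 array at 16 kHz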
HTTP_API_Reference.txt
HTTP API Reference Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation HTTP API Reference text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started HTTP API Reference Table of Contents Text Generation Inference custom API OpenAI Messages API Making a Request Streaming Synchronous Hugging Face Inference Endpoints Cloud Providers Amazon SageMaker The HTTP API is a RESTful API that allows you to interact with the text-generation-inference component. Two endpoints are available: Text Generation Inference custom API OpenAI’s Messages API Text Generation Inference custom API Check the API documentation for more information on how to interact with the Text Generation Inference API. OpenAI Messages API Text Generation Inference (TGI) now supports the Messages API, which is fully compatible with the OpenAI Chat Completion API. This feature is available starting from version 1.4.0. You can use OpenAI’s client libraries or third-party libraries expecting OpenAI schema to interact with TGI’s Messages API. Below are some examples of how to utilize this compatibility. Note: The Messages API is supported from TGI version 1.4.0 and above. Ensure you are using a compatible version to access this feature. Making a Request You can make a request to TGI’s Messages API using curl . Here’s an example: Copied curl localhost:3000/v1/chat/completions \ -X POST \ -d '{ "model": "tgi", "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "What is deep learning?" } ], "stream": true, "max_tokens": 20 }' \ -H 'Content-Type: application/json' Streaming You can also use OpenAI’s Python client library to make a streaming request. 
Here’s how: Copied from openai import OpenAI # init the client but point it to TGI client = OpenAI( base_url= "http://localhost:3000/v1" , api_key= "-" ) chat_completion = client.chat.completions.create( model= "tgi" , messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "What is deep learning?" } ], stream= True ) # iterate and print stream for message in chat_completion: print (message) Synchronous If you prefer to make a synchronous request, you can do so like this: Copied from openai import OpenAI # init the client but point it to TGI client = OpenAI( base_url= "http://localhost:3000/v1" , api_key= "-" ) chat_completion = client.chat.completions.create( model= "tgi" , messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "What is deep learning?" } ], stream= False ) print (chat_completion) Hugging Face Inference Endpoints The Messages API is integrated with Inference Endpoints . Every endpoint that uses “Text Generation Inference” with an LLM, which has a chat template can now be used. Below is an example of how to use IE with TGI using OpenAI’s Python client library: Note: Make sure to replace base_url with your endpoint URL and to include v1/ at the end of the URL. The api_key should be replaced with your Hugging Face API key. Copied from openai import OpenAI # init the client but point it to TGI client = OpenAI( # replace with your endpoint url, make sure to include "v1/" at the end base_url= "https://vlzz10eq3fol3429.us-east-1.aws.endpoints.huggingface.cloud/v1/" , # replace with your API key api_key= "hf_XXX" ) chat_completion = client.chat.completions.create( model= "tgi" , messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "What is deep learning?" } ], stream= True ) # iterate and print stream for message in chat_completion: print (message.choices[ 0 ].delta.content, end= "" ) Cloud Providers TGI can be deployed on various cloud providers for scalable and robust text generation. One such provider is Amazon SageMaker, which has recently added support for TGI. Here’s how you can deploy TGI on Amazon SageMaker: Amazon SageMaker Amazon Sagemaker natively supports the message API: Copied import json import sagemaker import boto3 from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri try : role = sagemaker.get_execution_role() except ValueError: iam = boto3.client( 'iam' ) role = iam.get_role(RoleName= 'sagemaker_execution_role' )[ 'Role' ][ 'Arn' ] # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID' : 'HuggingFaceH4/zephyr-7b-beta' , 'SM_NUM_GPUS' : json.dumps( 1 ), } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( image_uri=get_huggingface_llm_image_uri( "huggingface" ,version= "3.0.1" ), env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count= 1 , instance_type= "ml.g5.2xlarge" , container_startup_health_check_timeout= 300 , ) # send request predictor.predict({ "messages" : [ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "What is deep learning?" 
} ] })
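As a complement to the streaming examples above, here is a hedged sketch that consumes the same server-sent event stream with plain requests instead of the OpenAI client; it assumes a local TGI server on port 3000, as in the earlier curl example, and that the stream follows the OpenAI-style "data: {...}" / "data: [DONE]" framing.

Copied
import json
import requests

payload = {
    "model": "tgi",
    "messages": [{"role": "user", "content": "What is deep learning?"}],
    "stream": True,
    "max_tokens": 20,
}

with requests.post("http://localhost:3000/v1/chat/completions", json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        # Each event arrives as a line of the form `data: {...}`; skip keep-alives
        if not line or not line.startswith(b"data:"):
            continue
        data = line[len(b"data:"):].strip()
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        print(chunk["choices"][0]["delta"].get("content", ""), end="")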
bitsandbytes.txt
bitsandbytes Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation bitsandbytes Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started bitsandbytes bitsandbytes is the easiest option for quantizing a model to 8 and 4-bit. 8-bit quantization multiplies outliers in fp16 with non-outliers in int8, converts the non-outlier values back to fp16, and then adds them together to return the weights in fp16. This reduces the degradative effect outlier values have on a model’s performance. 4-bit quantization compresses a model even further, and it is commonly used with QLoRA to finetune quantized LLMs. This guide demonstrates how quantization can enable running FLUX.1-dev on less than 16GB of VRAM and even on a free Google Colab instance. 
To use bitsandbytes, make sure you have the following libraries installed: Copied pip install diffusers transformers accelerate bitsandbytes -U Now you can quantize a model by passing a BitsAndBytesConfig to from_pretrained() . This works for any model in any modality, as long as it supports loading with Accelerate and contains torch.nn.Linear layers. 8-bit 4-bit Quantizing a model in 8-bit halves the memory-usage: bitsandbytes is supported in both Transformers and Diffusers, so you can quantize both the FluxTransformer2DModel and T5EncoderModel . For Ada and higher-series GPUs. we recommend changing torch_dtype to torch.bfloat16 . The CLIPTextModel and AutoencoderKL aren’t quantized because they’re already small in size and because AutoencoderKL only has a few torch.nn.Linear layers. Copied from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel quant_config = TransformersBitsAndBytesConfig(load_in_8bit= True ,) text_encoder_2_8bit = T5EncoderModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "text_encoder_2" , quantization_config=quant_config, torch_dtype=torch.float16, ) quant_config = DiffusersBitsAndBytesConfig(load_in_8bit= True ,) transformer_8bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "transformer" , quantization_config=quant_config, torch_dtype=torch.float16, ) By default, all the other modules such as torch.nn.LayerNorm are converted to torch.float16 . You can change the data type of these modules with the torch_dtype parameter. Copied transformer_8bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="transformer", quantization_config=quant_config, + torch_dtype=torch.float32, ) Let’s generate an image using our quantized models. Setting device_map="auto" automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory. Copied pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev" , transformer=transformer_8bit, text_encoder_2=text_encoder_2_8bit, torch_dtype=torch.float16, device_map= "auto" , ) pipe_kwargs = { "prompt" : "A cat holding a sign that says hello world" , "height" : 1024 , "width" : 1024 , "guidance_scale" : 3.5 , "num_inference_steps" : 50 , "max_sequence_length" : 512 , } image = pipe(**pipe_kwargs, generator=torch.manual_seed( 0 ),).images[ 0 ] When there is enough memory, you can also directly move the pipeline to the GPU with .to("cuda") and apply enable_model_cpu_offload() to optimize GPU memory usage. Once a model is quantized, you can push the model to the Hub with the push_to_hub() method. The quantization config.json file is pushed first, followed by the quantized model weights. You can also save the serialized 8-bit models locally with save_pretrained() . Training with 8-bit and 4-bit weights are only supported for training extra parameters. 
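As a sketch of the offloading variant mentioned above (instead of device_map="auto"), the pipeline can keep only the active component on the GPU; this reuses the quantized transformer_8bit and text_encoder_2_8bit from the previous snippet and assumes a CUDA device is available.

Copied
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_8bit,
    text_encoder_2=text_encoder_2_8bit,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # stream components to the GPU only while they are needed

image = pipe(
    "A cat holding a sign that says hello world",
    num_inference_steps=50,
    generator=torch.manual_seed(0),
).images[0]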
Check your memory footprint with the get_memory_footprint method: Copied print (model.get_memory_footprint()) Quantized models can be loaded from the from_pretrained() method without needing to specify the quantization_config parameters: Copied from diffusers import FluxTransformer2DModel, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit= True ) model_4bit = FluxTransformer2DModel.from_pretrained( "hf-internal-testing/flux.1-dev-nf4-pkg" , subfolder= "transformer" ) 8-bit (LLM.int8() algorithm) Learn more about the details of 8-bit quantization in this blog post ! This section explores some of the specific features of 8-bit models, such as outlier thresholds and skipping module conversion. Outlier threshold An “outlier” is a hidden state value greater than a certain threshold, and these values are computed in fp16. While the values are usually normally distributed ([-3.5, 3.5]), this distribution can be very different for large models ([-60, 6] or [6, 60]). 8-bit quantization works well for values ~5, but beyond that, there is a significant performance penalty. A good default threshold value is 6, but a lower threshold may be needed for more unstable models (small models or finetuning). To find the best threshold for your model, we recommend experimenting with the llm_int8_threshold parameter in BitsAndBytesConfig : Copied from diffusers import FluxTransformer2DModel, BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_8bit= True , llm_int8_threshold= 10 , ) model_8bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "transformer" , quantization_config=quantization_config, ) Skip module conversion For some models, you don’t need to quantize every module to 8-bit which can actually cause instability. For example, for diffusion models like Stable Diffusion 3 , the proj_out module can be skipped using the llm_int8_skip_modules parameter in BitsAndBytesConfig : Copied from diffusers import SD3Transformer2DModel, BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_8bit= True , llm_int8_skip_modules=[ "proj_out" ], ) model_8bit = SD3Transformer2DModel.from_pretrained( "stabilityai/stable-diffusion-3-medium-diffusers" , subfolder= "transformer" , quantization_config=quantization_config, ) 4-bit (QLoRA algorithm) Learn more about its details in this blog post . This section explores some of the specific features of 4-bit models, such as changing the compute data type, using the Normal Float 4 (NF4) data type, and using nested quantization. Compute data type To speedup computation, you can change the data type from float32 (the default value) to bf16 using the bnb_4bit_compute_dtype parameter in BitsAndBytesConfig : Copied import torch from diffusers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit= True , bnb_4bit_compute_dtype=torch.bfloat16) Normal Float 4 (NF4) NF4 is a 4-bit data type from the QLoRA paper, adapted for weights initialized from a normal distribution. You should use NF4 for training 4-bit base models. 
This can be configured with the bnb_4bit_quant_type parameter in the BitsAndBytesConfig : Copied from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel quant_config = TransformersBitsAndBytesConfig( load_in_4bit= True , bnb_4bit_quant_type= "nf4" , ) text_encoder_2_4bit = T5EncoderModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "text_encoder_2" , quantization_config=quant_config, torch_dtype=torch.float16, ) quant_config = DiffusersBitsAndBytesConfig( load_in_4bit= True , bnb_4bit_quant_type= "nf4" , ) transformer_4bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "transformer" , quantization_config=quant_config, torch_dtype=torch.float16, ) For inference, the bnb_4bit_quant_type does not have a huge impact on performance. However, to remain consistent with the model weights, you should use the bnb_4bit_compute_dtype and torch_dtype values. Nested quantization Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. Copied from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel quant_config = TransformersBitsAndBytesConfig( load_in_4bit= True , bnb_4bit_use_double_quant= True , ) text_encoder_2_4bit = T5EncoderModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "text_encoder_2" , quantization_config=quant_config, torch_dtype=torch.float16, ) quant_config = DiffusersBitsAndBytesConfig( load_in_4bit= True , bnb_4bit_use_double_quant= True , ) transformer_4bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev" , subfolder= "transformer" , quantization_config=quant_config, torch_dtype=torch.float16, ) Dequantizing bitsandbytes models Once quantized, you can dequantize a model to its original precision, but this might result in a small loss of quality. Make sure you have enough GPU RAM to fit the dequantized model. 
Copied import torch from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig from diffusers import FluxTransformer2DModel from transformers import T5EncoderModel quant_config = TransformersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) text_encoder_2_4bit = T5EncoderModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", quantization_config=quant_config, torch_dtype=torch.float16, ) quant_config = DiffusersBitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, ) transformer_4bit = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="transformer", quantization_config=quant_config, torch_dtype=torch.float16, ) text_encoder_2_4bit.dequantize() transformer_4bit.dequantize() Resources End-to-end notebook showing Flux.1 Dev inference in a free-tier Colab Training
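In practice, the quantized components are passed to a pipeline for inference. Below is a minimal, hedged sketch of that step, assuming the transformer_4bit and text_encoder_2_4bit objects from the nested quantization example above (before calling dequantize() ); the prompt, step count, and CPU offloading choice are illustrative rather than recommendations:

# Sketch: run Flux.1-dev inference with the 4-bit components created above.
# Assumes transformer_4bit and text_encoder_2_4bit exist and have not been dequantized.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_4bit,
    text_encoder_2=text_encoder_2_4bit,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps peak GPU memory lower on smaller cards

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=20,
    guidance_scale=3.5,
).images[0]
image.save("flux_4bit.png")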
Webhooks.txt
Webhooks Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Webhooks Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks How-to: Automatic fine-tuning with Auto-Train How-to: Build a Discussion bot based on BLOOM How-to: Create automatic metadata quality reports Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Webhooks Webhooks are now publicly available! Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or to all repos belonging to particular set of users/organizations (not just your repos, but any repo). You can use them to auto-convert models, build community bots, or build CI/CD for your models, datasets, and Spaces (and much more!). The documentation for Webhooks is below – or you can also browse our guides showcasing a few possible use cases of Webhooks: Fine-tune a new model whenever a dataset gets updated (Python) Create a discussion bot on the Hub, using a LLM API (NodeJS) Create metadata quality reports (Python) and more to come… Create your Webhook You can create new Webhooks and edit existing ones in your Webhooks settings : Webhooks can watch for repos updates, Pull Requests, discussions, and new comments. It’s even possible to create a Space to react to your Webhooks! Webhook Payloads After registering a Webhook, you will be notified of new events via an HTTP POST call on the specified target URL. The payload is encoded in JSON. 
You can view the history of payloads sent in the activity tab of the webhook settings page, it’s also possible to replay past webhooks for easier debugging: As an example, here is the full payload when a Pull Request is opened: Copied { "event" : { "action" : "create" , "scope" : "discussion" } , "repo" : { "type" : "model" , "name" : "openai-community/gpt2" , "id" : "621ffdc036468d709f17434d" , "private" : false , "url" : { "web" : "https://huggingface.co/openai-community/gpt2" , "api" : "https://huggingface.co/api/models/openai-community/gpt2" } , "owner" : { "id" : "628b753283ef59b5be89e937" } } , "discussion" : { "id" : "6399f58518721fdd27fc9ca9" , "title" : "Update co2 emissions" , "url" : { "web" : "https://huggingface.co/openai-community/gpt2/discussions/19" , "api" : "https://huggingface.co/api/models/openai-community/gpt2/discussions/19" } , "status" : "open" , "author" : { "id" : "61d2f90c3c2083e1c08af22d" } , "num" : 19 , "isPullRequest" : true , "changes" : { "base" : "refs/heads/main" } } , "comment" : { "id" : "6399f58518721fdd27fc9caa" , "author" : { "id" : "61d2f90c3c2083e1c08af22d" } , "content" : "Add co2 emissions information to the model card" , "hidden" : false , "url" : { "web" : "https://huggingface.co/openai-community/gpt2/discussions/19#6399f58518721fdd27fc9caa" } } , "webhook" : { "id" : "6390e855e30d9209411de93b" , "version" : 3 } } Event The top-level properties event is always specified and used to determine the nature of the event. It has two sub-properties: event.action and event.scope . event.scope will be one of the following values: "repo" - Global events on repos. Possible values for the associated action : "create" , "delete" , "update" , "move" . "repo.content" - Events on the repo’s content, such as new commits or tags. It triggers on new Pull Requests as well due to the newly created reference/commit. The associated action is always "update" . "repo.config" - Events on the config: update Space secrets, update settings, update DOIs, disabled or not, etc. The associated action is always "update" . "discussion" - Creating a discussion or Pull Request, updating the title or status, and merging. Possible values for the associated action : "create" , "delete" , "update" . "discussion.comment" - Creating, updating, and hiding a comment. Possible values for the associated action : "create" , "update" . More scopes can be added in the future. To handle unknown events, your webhook handler can consider any action on a narrowed scope to be an "update" action on the broader scope. For example, if the "repo.config.dois" scope is added in the future, any event with that scope can be considered by your webhook handler as an "update" action on the "repo.config" scope. Repo In the current version of webhooks, the top-level property repo is always specified, as events can always be associated with a repo. For example, consider the following value: Copied "repo" : { "type" : "model" , "name" : "some-user/some-repo" , "id" : "6366c000a2abcdf2fd69a080" , "private" : false , "url" : { "web" : "https://huggingface.co/some-user/some-repo" , "api" : "https://huggingface.co/api/models/some-user/some-repo" } , "headSha" : "c379e821c9c95d613899e8c4343e4bfee2b0c600" , "tags" : [ "license:other" , "has_space" ] , "owner" : { "id" : "61d2000c3c2083e1c08af22d" } } repo.headSha is the sha of the latest commit on the repo’s main branch. It is only sent when event.scope starts with "repo" , not on community events like discussions and comments. 
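The forward-compatibility rule above (treat an action on an unknown, narrower scope as an "update" action on the closest known broader scope) only takes a few lines to implement. This is a minimal sketch; the KNOWN_SCOPES set and the function name are illustrative, not part of the Hub API:

# Sketch of the fallback logic described above for unknown event scopes.
KNOWN_SCOPES = {"repo", "repo.content", "repo.config", "discussion", "discussion.comment"}

def normalize_event(event: dict):
    scope, action = event["scope"], event["action"]
    while scope not in KNOWN_SCOPES and "." in scope:
        # e.g. a future "repo.config.dois" event is handled as an "update" on "repo.config"
        scope, action = scope.rsplit(".", 1)[0], "update"
    return scope, action

# With the Pull Request payload above:
# normalize_event({"action": "create", "scope": "discussion"}) -> ("discussion", "create")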
Code changes On code changes, the top-level property updatedRefs is specified on repo events. It is an array of references that have been updated. Here is an example value: Copied "updatedRefs" : [ { "ref" : "refs/heads/main" , "oldSha" : "ce9a4674fa833a68d5a73ec355f0ea95eedd60b7" , "newSha" : "575db8b7a51b6f85eb06eee540738584589f131c" } , { "ref" : "refs/tags/test" , "oldSha" : null , "newSha" : "575db8b7a51b6f85eb06eee540738584589f131c" } ] Newly created references will have oldSha set to null . Deleted references will have newSha set to null . You can react to new commits on specific pull requests, new tags, or new branches. Discussions and Pull Requests The top-level property discussion is specified on community events (discussions and Pull Requests). The discussion.isPullRequest property is a boolean indicating if the discussion is also a Pull Request (on the Hub, a PR is a special type of discussion). Here is an example value: Copied "discussion" : { "id" : "639885d811ae2bad2b7ba461" , "title" : "Hello!" , "url" : { "web" : "https://huggingface.co/some-user/some-repo/discussions/3" , "api" : "https://huggingface.co/api/models/some-user/some-repo/discussions/3" } , "status" : "open" , "author" : { "id" : "61d2000c3c2083e1c08af22d" } , "isPullRequest" : true , "changes" : { "base" : "refs/heads/main" } "num" : 3 } Comment The top level property comment is specified when a comment is created (including on discussion creation) or updated. Here is an example value: Copied "comment" : { "id" : "6398872887bfcfb93a306f18" , "author" : { "id" : "61d2000c3c2083e1c08af22d" } , "content" : "This adds an env key" , "hidden" : false , "url" : { "web" : "https://huggingface.co/some-user/some-repo/discussions/4#6398872887bfcfb93a306f18" } } Webhook secret Setting a Webhook secret is useful to make sure payloads sent to your Webhook handler URL are actually from Hugging Face. If you set a secret for your Webhook, it will be sent along as an X-Webhook-Secret HTTP header on every request. Only ASCII characters are supported. It's also possible to add the secret directly in the handler URL. For example, setting it as a query parameter: https://example.com/webhook?secret=XXX. This can be helpful if accessing the HTTP headers of the request is complicated for your Webhook handler. Rate limiting Each Webhook is limited to 1,000 triggers per 24 hours. You can view your usage in the Webhook settings page in the “Activity” tab. If you need to increase the number of triggers for your Webhook, contact us at [email protected] . Developing your Webhooks If you do not have an HTTPS endpoint/URL, you can try out public tools for webhook testing. These tools act as catch-all (capture all requests) sent to them and give 200 OK status code. Beeceptor is one tool you can use to create a temporary HTTP endpoint and review the incoming payload. Another such tool is Webhook.site . Additionally, you can route a real Webhook payload to the code running locally on your machine during development. This is a great way to test and debug for faster integrations. You can do this by exposing your localhost port to the Internet. To be able to go this path, you can use ngrok or localtunnel . Debugging Webhooks You can easily find recently generated events for your webhooks. Open the activity tab for your webhook. There you will see the list of recent events. Here you can review the HTTP status code and the payload of the generated events. Additionally, you can replay these events by clicking on the Replay button! 
Note: When changing the target URL or secret of a Webhook, replaying an event will send the payload to the updated URL. FAQ Can I define webhooks on my organization vs my user account? No, this is not currently supported. How can I subscribe to events on all repos (or across a whole repo type, like on all models)? This is not currently exposed to end users but we can toggle this for you if you send an email to [email protected] .
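For reference, here is a minimal, hedged sketch of a Webhook handler that checks the X-Webhook-Secret header described above and reads the JSON payload. It assumes FastAPI is installed; the route path, environment variable name, and print-based logging are illustrative:

# Sketch: a Webhook receiver that verifies the secret header and parses the payload.
import os

from fastapi import FastAPI, Request, Response

app = FastAPI()
WEBHOOK_SECRET = os.environ.get("WEBHOOK_SECRET")  # the secret configured in your Webhook settings

@app.post("/webhook")
async def handle_webhook(request: Request) -> Response:
    # Reject requests that do not carry the expected X-Webhook-Secret header.
    if WEBHOOK_SECRET and request.headers.get("X-Webhook-Secret") != WEBHOOK_SECRET:
        return Response(status_code=401)

    payload = await request.json()
    scope = payload["event"]["scope"]
    action = payload["event"]["action"]
    repo_name = payload["repo"]["name"]
    print(f"Received '{action}' on scope '{scope}' for repo '{repo_name}'")
    return Response(status_code=200)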
Training_on_TPUs.txt
Training on TPUs Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Training on TPUs Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Training on TPUs Training on TPUs can be slightly different from training on multi-gpu, even with Accelerate. This guide aims to show you where you should be careful and why, as well as the best practices in general. Training in a Notebook The main carepoint when training on TPUs comes from the notebook_launcher() . As mentioned in the notebook tutorial , you need to restructure your training code into a function that can get passed to the notebook_launcher() function and be careful about not declaring any tensors on the GPU. While on a TPU that last part is not as important, a critical part to understand is that when you launch code from a notebook you do so through a process called forking . When launching from the command-line, you perform spawning , where a python process is not currently running and you spawn a new process in. Since your Jupyter notebook is already utilizing a python process, you need to fork a new process from it to launch your code. Where this becomes important is in regard to declaring your model. 
On forked TPU processes, it is recommended that you instantiate your model once and pass this into your training function. This is different than training on GPUs where you create n models that have their gradients synced and back-propagated at certain moments. Instead, one model instance is shared between all the nodes and it is passed back and forth. This is important especially when training on low-resource TPUs such as those provided in Kaggle kernels or on Google Colaboratory. Below is an example of a training function passed to the notebook_launcher() if training on CPUs or GPUs: This code snippet is based off the one from the simple_nlp_example notebook found here with slight modifications for the sake of simplicity Copied def training_function (): # Initialize accelerator accelerator = Accelerator() model = AutoModelForSequenceClassification.from_pretrained( "bert-base-cased" , num_labels= 2 ) train_dataloader, eval_dataloader = create_dataloaders( train_batch_size=hyperparameters[ "train_batch_size" ], eval_batch_size=hyperparameters[ "eval_batch_size" ] ) # Instantiate optimizer optimizer = AdamW(params=model.parameters(), lr=hyperparameters[ "learning_rate" ]) # Prepare everything # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the # prepare method. model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader ) num_epochs = hyperparameters[ "num_epochs" ] # Now we train the model for epoch in range (num_epochs): model.train() for step, batch in enumerate (train_dataloader): outputs = model(**batch) loss = outputs.loss accelerator.backward(loss) optimizer.step() optimizer.zero_grad() Copied from accelerate import notebook_launcher notebook_launcher(training_function) The notebook_launcher will default to 8 processes if Accelerate has been configured for a TPU If you use this example and declare the model inside the training loop, then on a low-resource system you will potentially see an error like: Copied ProcessExitedException : process 0 terminated with signal SIGSEGV This error is extremely cryptic but the basic explanation is you ran out of system RAM. You can avoid this entirely by reconfiguring the training function to accept a single model argument, and declare it in an outside cell: Copied # In another Jupyter cell model = AutoModelForSequenceClassification.from_pretrained( "bert-base-cased" , num_labels= 2 ) Copied + def training_function(model): # Initialize accelerator accelerator = Accelerator() - model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) train_dataloader, eval_dataloader = create_dataloaders( train_batch_size=hyperparameters["train_batch_size"], eval_batch_size=hyperparameters["eval_batch_size"] ) ... And finally calling the training function with: Copied from accelerate import notebook_launcher - notebook_launcher(training_function) + notebook_launcher(training_function, (model,)) The above workaround is only needed when launching a TPU instance from a Jupyter Notebook on a low-resource server such as Google Colaboratory or Kaggle. If using a script or launching on a much beefier server declaring the model beforehand is not needed. Mixed Precision and Global Variables As mentioned in the mixed precision tutorial , Accelerate supports fp16 and bf16, both of which can be used on TPUs. That being said, ideally bf16 should be utilized as it is extremely efficient to use. 
There are two “layers” when using bf16 and Accelerate on TPUs: the base level and the operation level. At the base level, this is enabled when passing mixed_precision="bf16" to Accelerator , such as: Copied accelerator = Accelerator(mixed_precision= "bf16" ) By default, this will cast torch.float and torch.double to bfloat16 on TPUs. The specific configuration being set is that the XLA_USE_BF16 environment variable is set to 1 . There is a further configuration you can perform, which is setting the XLA_DOWNCAST_BF16 environment variable. If set to 1 , then torch.float is bfloat16 and torch.double is float32 . This is performed in the Accelerator object when passing downcast_bf16=True : Copied accelerator = Accelerator(mixed_precision= "bf16" , downcast_bf16= True ) Using downcasting instead of bf16 everywhere is good for when you are trying to calculate metrics, log values, and more, where raw bf16 tensors would be unusable. Training Times on TPUs As you launch your script, you may notice that training seems exceptionally slow at first. This is because TPUs first run through a few batches of data to see how much memory to allocate before finally utilizing this configured memory allocation extremely efficiently. If you notice that the evaluation code used to calculate the metrics of your model takes longer due to a larger batch size being used, it is recommended to keep the batch size the same as the training data if it is too slow. Otherwise the memory will reallocate to this new batch size after the first few iterations. Just because the memory is allocated does not mean it will be used or that the batch size will increase when going back to your training dataloader.
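Putting the notebook workaround and the bf16 settings together, here is a condensed sketch of a TPU notebook launch. It reuses the create_dataloaders helper and hyperparameters dictionary from the earlier example, and the use of torch.optim.AdamW and downcast_bf16=True are illustrative choices rather than requirements:

# Condensed sketch combining the model-outside-the-function workaround with bf16 on TPU.
# Assumes create_dataloaders and hyperparameters are defined as in the example above.
from accelerate import Accelerator, notebook_launcher
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification

# Declare the model once, outside the training function (important on low-resource TPUs).
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

def training_function(model):
    accelerator = Accelerator(mixed_precision="bf16", downcast_bf16=True)
    train_dataloader, eval_dataloader = create_dataloaders(
        train_batch_size=hyperparameters["train_batch_size"],
        eval_batch_size=hyperparameters["eval_batch_size"],
    )
    optimizer = AdamW(params=model.parameters(), lr=hyperparameters["learning_rate"])
    model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
        model, optimizer, train_dataloader, eval_dataloader
    )
    for epoch in range(hyperparameters["num_epochs"]):
        model.train()
        for batch in train_dataloader:
            loss = model(**batch).loss
            accelerator.backward(loss)
            optimizer.step()
            optimizer.zero_grad()

# Defaults to 8 processes when Accelerate has been configured for a TPU.
notebook_launcher(training_function, (model,))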
configs.txt
configs Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation configs Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started configs Helper module for using model configs. For more information, see the corresponding Python documentation . Example: Load an AutoConfig . Copied import { AutoConfig } from '@huggingface/transformers' ; const config = await AutoConfig . from_pretrained ( 'bert-base-uncased' ); console . log (config); // PretrainedConfig { // "model_type": "bert", // "is_encoder_decoder": false, // "architectures": [ // "BertForMaskedLM" // ], // "vocab_size": 30522 // "num_attention_heads": 12, // "num_hidden_layers": 12, // "hidden_size": 768, // "max_position_embeddings": 512, // ... // } configs static .PretrainedConfig new PretrainedConfig(configJSON) instance .model_type : string | null .is_encoder_decoder : boolean .max_position_embeddings : number static .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PretrainedConfig> .AutoConfig .from_pretrained() : * .getKeyValueShapes(config) ⇒ Record.<string, Array<number>> ~decoderFeeds : Record.<string, Array<number>> inner ~loadConfig(pretrained_model_name_or_path, options) ⇒ Promise.<Object> ~getNormalizedConfig(config) ⇒ Object ~PretrainedOptions : * configs.PretrainedConfig Base class for all configuration classes. For more information, see the corresponding Python documentation . Kind : static class of configs .PretrainedConfig new PretrainedConfig(configJSON) instance .model_type : string | null .is_encoder_decoder : boolean .max_position_embeddings : number static .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PretrainedConfig> new PretrainedConfig(configJSON) Create a new PreTrainedTokenizer instance. Param Type Description configJSON Object The JSON of the config. pretrainedConfig.model_type : <code> string </code> | <code> null </code> Kind : instance property of PretrainedConfig pretrainedConfig.is_encoder_decoder : <code> boolean </code> Kind : instance property of PretrainedConfig pretrainedConfig.max_position_embeddings : <code> number </code> Kind : instance property of PretrainedConfig PretrainedConfig.from_pretrained(pretrained_model_name_or_path, options) ⇒ <code> Promise. 
< PretrainedConfig > </code> Loads a pre-trained config from the given pretrained_model_name_or_path . Kind : static method of PretrainedConfig Returns : Promise.<PretrainedConfig> - A new instance of the PretrainedConfig class. Throws : Error Throws an error if the config.json is not found in the `pretrained_model_name_or_path`. Param Type Description pretrained_model_name_or_path string The path to the pre-trained config. options PretrainedOptions Additional options for loading the config. configs.AutoConfig Helper class which is used to instantiate pretrained configs with the from_pretrained function. Kind : static class of configs AutoConfig.from_pretrained() : <code> * </code> Kind : static method of AutoConfig configs.getKeyValueShapes(config) ⇒ <code> Record. < string, Array < number > > </code> Kind : static method of configs Param Type config PretrainedConfig getKeyValueShapes~decoderFeeds : <code> Record. < string, Array < number > > </code> Kind : inner constant of getKeyValueShapes configs~loadConfig(pretrained_model_name_or_path, options) ⇒ <code> Promise. < Object > </code> Loads a config from the specified path. Kind : inner method of configs Returns : Promise.<Object> - A promise that resolves with information about the loaded config. Param Type Description pretrained_model_name_or_path string The path to the config directory. options PretrainedOptions Additional options for loading the config. configs~getNormalizedConfig(config) ⇒ <code> Object </code> Kind : inner method of configs Returns : Object - The normalized configuration. Param Type config PretrainedConfig configs~PretrainedOptions : <code> * </code> Kind : inner typedef of configs < > Update on GitHub ← Processors Environment variables → configs configs. Pretrained Config new Pretrained Config(configJSO N) pretrained Config.model_type : string | null pretrained Config.is_encoder_decoder : boolean pretrained Config.max_position_embeddings : number Pretrained Config.from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise. < Pretrained Config > configs. Auto Config Auto Config.from_pretrained() : * configs.get Key Value Shapes(config) ⇒ Record. < string, Array < number > > get Key Value Shapes~decoder Feeds : Record. < string, Array < number > > configs~load Config(pretrained_model_name_or_path, options) ⇒ Promise. < Object > configs~get Normalized Config(config) ⇒ Object configs~ Pretrained Options : *
Utilities_for_FeatureExtractors.txt
Utilities for FeatureExtractors Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Utilities for FeatureExtractors Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Utilities for FeatureExtractors This page lists all the utility functions that can be used by the audio FeatureExtractor in order to compute special features from a raw audio using common algorithms such as Short Time Fourier Transform or log mel spectrogram . Most of those are only useful if you are studying the code of the audio processors in the library. Audio Transformations transformers.audio_utils.hertz_to_mel < source > ( freq : typing.Union[float, numpy.ndarray] mel_scale : str = 'htk' ) → float or np.ndarray Parameters freq ( float or np.ndarray ) — The frequency, or multiple frequencies, in hertz (Hz). mel_scale ( str , optional , defaults to "htk" ) — The mel frequency scale to use, "htk" , "kaldi" or "slaney" . Returns float or np.ndarray The frequencies on the mel scale. Convert frequency from hertz to mels. transformers.audio_utils.mel_to_hertz < source > ( mels : typing.Union[float, numpy.ndarray] mel_scale : str = 'htk' ) → float or np.ndarray Parameters mels ( float or np.ndarray ) — The frequency, or multiple frequencies, in mels. mel_scale ( str , optional , "htk" ) — The mel frequency scale to use, "htk" , "kaldi" or "slaney" . Returns float or np.ndarray The frequencies in hertz. Convert frequency from mels to hertz. transformers.audio_utils.mel_filter_bank < source > ( num_frequency_bins : int num_mel_filters : int min_frequency : float max_frequency : float sampling_rate : int norm : typing.Optional[str] = None mel_scale : str = 'htk' triangularize_in_mel_space : bool = False ) → np.ndarray of shape ( num_frequency_bins , num_mel_filters ) Parameters num_frequency_bins ( int ) — Number of frequencies used to compute the spectrogram (should be the same as in stft ). num_mel_filters ( int ) — Number of mel filters to generate. min_frequency ( float ) — Lowest frequency of interest in Hz. max_frequency ( float ) — Highest frequency of interest in Hz. This should not exceed sampling_rate / 2 . sampling_rate ( int ) — Sample rate of the audio waveform. norm ( str , optional ) — If "slaney" , divide the triangular mel weights by the width of the mel band (area normalization). mel_scale ( str , optional , defaults to "htk" ) — The mel frequency scale to use, "htk" , "kaldi" or "slaney" . 
triangularize_in_mel_space ( bool , optional , defaults to False ) — If this option is enabled, the triangular filter is applied in mel space rather than frequency space. This should be set to true in order to get the same results as torchaudio when computing mel filters. Returns np.ndarray of shape ( num_frequency_bins , num_mel_filters ) Triangular filter bank matrix. This is a projection matrix to go from a spectrogram to a mel spectrogram. Creates a frequency bin conversion matrix used to obtain a mel spectrogram. This is called a mel filter bank , and various implementation exist, which differ in the number of filters, the shape of the filters, the way the filters are spaced, the bandwidth of the filters, and the manner in which the spectrum is warped. The goal of these features is to approximate the non-linear human perception of the variation in pitch with respect to the frequency. Different banks of mel filters were introduced in the literature. The following variations are supported: MFCC FB-20: introduced in 1980 by Davis and Mermelstein, it assumes a sampling frequency of 10 kHz and a speech bandwidth of [0, 4600] Hz. MFCC FB-24 HTK: from the Cambridge HMM Toolkit (HTK) (1995) uses a filter bank of 24 filters for a speech bandwidth of [0, 8000] Hz. This assumes sampling rate ≥ 16 kHz. MFCC FB-40: from the Auditory Toolbox for MATLAB written by Slaney in 1998, assumes a sampling rate of 16 kHz and speech bandwidth of [133, 6854] Hz. This version also includes area normalization. HFCC-E FB-29 (Human Factor Cepstral Coefficients) of Skowronski and Harris (2004), assumes a sampling rate of 12.5 kHz and speech bandwidth of [0, 6250] Hz. This code is adapted from torchaudio and librosa . Note that the default parameters of torchaudio’s melscale_fbanks implement the "htk" filters while librosa uses the "slaney" implementation. transformers.audio_utils.optimal_fft_length < source > ( window_length : int ) Finds the best FFT input size for a given window_length . This function takes a given window length and, if not already a power of two, rounds it up to the next power or two. The FFT algorithm works fastest when the length of the input is a power of two, which may be larger than the size of the window or analysis frame. For example, if the window is 400 samples, using an FFT input size of 512 samples is more optimal than an FFT size of 400 samples. Using a larger FFT size does not affect the detected frequencies, it simply gives a higher frequency resolution (i.e. the frequency bins are smaller). transformers.audio_utils.window_function < source > ( window_length : int name : str = 'hann' periodic : bool = True frame_length : typing.Optional[int] = None center : bool = True ) Parameters window_length ( int ) — The length of the window in samples. name ( str , optional , defaults to "hann" ) — The name of the window function. periodic ( bool , optional , defaults to True ) — Whether the window is periodic or symmetric. frame_length ( int , optional ) — The length of the analysis frames in samples. Provide a value for frame_length if the window is smaller than the frame length, so that it will be zero-padded. center ( bool , optional , defaults to True ) — Whether to center the window inside the FFT buffer. Only used when frame_length is provided. Returns an array containing the specified window. This window is intended to be used with stft . 
The following window types are supported: "boxcar" : a rectangular window "hamming" : the Hamming window "hann" : the Hann window "povey" : the Povey window transformers.audio_utils.spectrogram < source > ( waveform : ndarray window : ndarray frame_length : int hop_length : int fft_length : typing.Optional[int] = None power : typing.Optional[float] = 1.0 center : bool = True pad_mode : str = 'reflect' onesided : bool = True preemphasis : typing.Optional[float] = None mel_filters : typing.Optional[numpy.ndarray] = None mel_floor : float = 1e-10 log_mel : typing.Optional[str] = None reference : float = 1.0 min_value : float = 1e-10 db_range : typing.Optional[float] = None remove_dc_offset : typing.Optional[bool] = None dtype : dtype = <class 'numpy.float32'> ) Parameters waveform ( np.ndarray of shape (length,) ) — The input waveform. This must be a single real-valued, mono waveform. window ( np.ndarray of shape (frame_length,) ) — The windowing function to apply, including zero-padding if necessary. The actual window length may be shorter than frame_length , but we’re assuming the array has already been zero-padded. frame_length ( int ) — The length of the analysis frames in samples. With librosa this is always equal to fft_length but we also allow smaller sizes. hop_length ( int ) — The stride between successive analysis frames in samples. fft_length ( int , optional ) — The size of the FFT buffer in samples. This determines how many frequency bins the spectrogram will have. For optimal speed, this should be a power of two. If None , uses frame_length . power ( float , optional , defaults to 1.0) — If 1.0, returns the amplitude spectrogram. If 2.0, returns the power spectrogram. If None , returns complex numbers. center ( bool , optional , defaults to True ) — Whether to pad the waveform so that frame t is centered around time t * hop_length . If False , frame t will start at time t * hop_length . pad_mode ( str , optional , defaults to "reflect" ) — Padding mode used when center is True . Possible values are: "constant" (pad with zeros), "edge" (pad with edge values), "reflect" (pads with mirrored values). onesided ( bool , optional , defaults to True ) — If True, only computes the positive frequencies and returns a spectrogram containing fft_length // 2 + 1 frequency bins. If False, also computes the negative frequencies and returns fft_length frequency bins. preemphasis ( float , optional ) — Coefficient for a low-pass filter that applies pre-emphasis before the DFT. mel_filters ( np.ndarray of shape (num_freq_bins, num_mel_filters) , optional ) — The mel filter bank. If supplied, applies a this filter bank to create a mel spectrogram. mel_floor ( float , optional , defaults to 1e-10) — Minimum value of mel frequency banks. log_mel ( str , optional ) — How to convert the spectrogram to log scale. Possible options are: None (don’t convert), "log" (take the natural logarithm) "log10" (take the base-10 logarithm), "dB" (convert to decibels). Can only be used when power is not None . reference ( float , optional , defaults to 1.0) — Sets the input spectrogram value that corresponds to 0 dB. For example, use np.max(spectrogram) to set the loudest part to 0 dB. Must be greater than zero. min_value ( float , optional , defaults to 1e-10 ) — The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking log(0) . For a power spectrogram, the default of 1e-10 corresponds to a minimum of -100 dB. 
For an amplitude spectrogram, the value 1e-5 corresponds to -100 dB. Must be greater than zero. db_range ( float , optional ) — Sets the maximum dynamic range in decibels. For example, if db_range = 80 , the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. remove_dc_offset ( bool , optional ) — Subtract mean from waveform on each frame, applied before pre-emphasis. This should be set to true in order to get the same results as torchaudio.compliance.kaldi.fbank when computing mel filters. dtype ( np.dtype , optional , defaults to np.float32 ) — Data type of the spectrogram tensor. If power is None, this argument is ignored and the dtype will be np.complex64 . Calculates a spectrogram over one waveform using the Short-Time Fourier Transform. This function can create the following kinds of spectrograms: amplitude spectrogram ( power = 1.0 ) power spectrogram ( power = 2.0 ) complex-valued spectrogram ( power = None ) log spectrogram (use log_mel argument) mel spectrogram (provide mel_filters ) log-mel spectrogram (provide mel_filters and log_mel ) How this works: The input waveform is split into frames of size frame_length that are partially overlapping by `frame_length hop_length` samples. Each frame is multiplied by the window and placed into a buffer of size fft_length . The DFT is taken of each windowed frame. The results are stacked into a spectrogram. We make a distinction between the following “blocks” of sample data, each of which may have a different lengths: The analysis frame. This is the size of the time slices that the input waveform is split into. The window. Each analysis frame is multiplied by the window to avoid spectral leakage. The FFT input buffer. The length of this determines how many frequency bins are in the spectrogram. In this implementation, the window is assumed to be zero-padded to have the same size as the analysis frame. A padded window can be obtained from window_function() . The FFT input buffer may be larger than the analysis frame, typically the next power of two. Note: This function is not optimized for speed yet. It should be mostly compatible with librosa.stft and torchaudio.functional.transforms.Spectrogram , although it is more flexible due to the different ways spectrograms can be constructed. transformers.audio_utils.power_to_db < source > ( spectrogram : ndarray reference : float = 1.0 min_value : float = 1e-10 db_range : typing.Optional[float] = None ) → np.ndarray Parameters spectrogram ( np.ndarray ) — The input power (mel) spectrogram. Note that a power spectrogram has the amplitudes squared! reference ( float , optional , defaults to 1.0) — Sets the input spectrogram value that corresponds to 0 dB. For example, use np.max(spectrogram) to set the loudest part to 0 dB. Must be greater than zero. min_value ( float , optional , defaults to 1e-10 ) — The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking log(0) . The default of 1e-10 corresponds to a minimum of -100 dB. Must be greater than zero. db_range ( float , optional ) — Sets the maximum dynamic range in decibels. For example, if db_range = 80 , the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. Returns np.ndarray the spectrogram in decibels Converts a power spectrogram to the decibel scale. This computes 10 * log10(spectrogram / reference) , using basic logarithm properties for numerical stability. 
The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. This means that large variations in energy may not sound all that different if the sound is loud to begin with. This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. Based on the implementation of librosa.power_to_db . transformers.audio_utils.amplitude_to_db < source > ( spectrogram : ndarray reference : float = 1.0 min_value : float = 1e-05 db_range : typing.Optional[float] = None ) → np.ndarray Parameters spectrogram ( np.ndarray ) — The input amplitude (mel) spectrogram. reference ( float , optional , defaults to 1.0) — Sets the input spectrogram value that corresponds to 0 dB. For example, use np.max(spectrogram) to set the loudest part to 0 dB. Must be greater than zero. min_value ( float , optional , defaults to 1e-5 ) — The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking log(0) . The default of 1e-5 corresponds to a minimum of -100 dB. Must be greater than zero. db_range ( float , optional ) — Sets the maximum dynamic range in decibels. For example, if db_range = 80 , the difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. Returns np.ndarray the spectrogram in decibels Converts an amplitude spectrogram to the decibel scale. This computes 20 * log10(spectrogram / reference) , using basic logarithm properties for numerical stability. The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. This means that large variations in energy may not sound all that different if the sound is loud to begin with. This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. < > Update on GitHub ← Utilities for Image Processors General Utilities → Utilities for Feature Extractors Audio Transformations
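To show how these utilities fit together, here is a minimal sketch that builds a log-mel spectrogram from a synthetic waveform using window_function() , mel_filter_bank() , and spectrogram() as documented above. The sampling rate, FFT size, and filter count are illustrative values:

# Sketch: compute a log-mel spectrogram with the audio utilities documented above.
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

sampling_rate = 16000
# One second of a 440 Hz sine wave stands in for real audio.
t = np.arange(sampling_rate) / sampling_rate
waveform = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

frame_length, hop_length, fft_length = 400, 160, 512
window = window_function(window_length=frame_length, name="hann")
mel_filters = mel_filter_bank(
    num_frequency_bins=fft_length // 2 + 1,  # matches the one-sided spectrogram
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=sampling_rate / 2,
    sampling_rate=sampling_rate,
    mel_scale="htk",
)
log_mel = spectrogram(
    waveform,
    window,
    frame_length=frame_length,
    hop_length=hop_length,
    fft_length=fft_length,
    power=2.0,            # power spectrogram
    mel_filters=mel_filters,
    log_mel="log10",      # take the base-10 logarithm of the mel spectrogram
)
print(log_mel.shape)  # 80 mel bins along one axis, the analysis frames along the other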
Interface__TextGenerationOutput.txt
Interface: TextGenerationOutput Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts . Indexable ▪ [property: string ]: unknown Properties details • Optional details : TextGenerationOutputDetails Defined in tasks/dist/commonjs/tasks/text-generation/inference.d.ts:121 generated_text • generated_text : string Defined in tasks/dist/commonjs/tasks/text-generation/inference.d.ts:122
Auto_Classes.txt
Auto Classes Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Auto Classes Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Auto Classes In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the from_pretrained() method. AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary. Instantiating one of AutoConfig , AutoModel , and AutoTokenizer will directly create a class of the relevant architecture. For instance Copied model = AutoModel.from_pretrained( "google-bert/bert-base-cased" ) will create a model that is an instance of BertModel . There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax). Extending the Auto Classes Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom class of model NewModel , make sure you have a NewModelConfig then you can add those to the auto classes like this: Copied from transformers import AutoConfig, AutoModel AutoConfig.register( "new-model" , NewModelConfig) AutoModel.register(NewModelConfig, NewModel) You will then be able to use the auto classes like you would usually do! If your NewModelConfig is a subclass of PretrainedConfig , make sure its model_type attribute is set to the same key you use when registering the config (here "new-model" ). Likewise, if your NewModel is a subclass of PreTrainedModel , make sure its config_class attribute is set to the same class you use when registering the model (here NewModelConfig ). AutoConfig class transformers. AutoConfig < source > ( ) This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. A path to a directory containing a configuration file saved using the save_pretrained() method, or the save_pretrained() method, e.g., ./my_model_directory/ . 
A path or url to a saved configuration JSON file , e.g., ./my_model_directory/configuration.json . cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download the model weights and configuration files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. return_unused_kwargs ( bool , optional , defaults to False ) — If False , then this function returns just the final configuration object. If True , then this functions returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs(additional keyword arguments, optional ) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the configuration classes of the library from a pretrained model configuration. 
The configuration class to instantiate is selected based on the model_type property of the config object that is loaded, or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertConfig (ALBERT model) align — AlignConfig (ALIGN model) altclip — AltCLIPConfig (AltCLIP model) aria — AriaConfig (Aria model) aria_text — AriaTextConfig (AriaText model) audio-spectrogram-transformer — ASTConfig (Audio Spectrogram Transformer model) autoformer — AutoformerConfig (Autoformer model) bamba — BambaConfig (Bamba model) bark — BarkConfig (Bark model) bart — BartConfig (BART model) beit — BeitConfig (BEiT model) bert — BertConfig (BERT model) bert-generation — BertGenerationConfig (Bert Generation model) big_bird — BigBirdConfig (BigBird model) bigbird_pegasus — BigBirdPegasusConfig (BigBird-Pegasus model) biogpt — BioGptConfig (BioGpt model) bit — BitConfig (BiT model) blenderbot — BlenderbotConfig (Blenderbot model) blenderbot-small — BlenderbotSmallConfig (BlenderbotSmall model) blip — BlipConfig (BLIP model) blip-2 — Blip2Config (BLIP-2 model) bloom — BloomConfig (BLOOM model) bridgetower — BridgeTowerConfig (BridgeTower model) bros — BrosConfig (BROS model) camembert — CamembertConfig (CamemBERT model) canine — CanineConfig (CANINE model) chameleon — ChameleonConfig (Chameleon model) chinese_clip — ChineseCLIPConfig (Chinese-CLIP model) chinese_clip_vision_model — ChineseCLIPVisionConfig (ChineseCLIPVisionModel model) clap — ClapConfig (CLAP model) clip — CLIPConfig (CLIP model) clip_text_model — CLIPTextConfig (CLIPTextModel model) clip_vision_model — CLIPVisionConfig (CLIPVisionModel model) clipseg — CLIPSegConfig (CLIPSeg model) clvp — ClvpConfig (CLVP model) code_llama — LlamaConfig (CodeLlama model) codegen — CodeGenConfig (CodeGen model) cohere — CohereConfig (Cohere model) cohere2 — Cohere2Config (Cohere2 model) colpali — ColPaliConfig (ColPali model) conditional_detr — ConditionalDetrConfig (Conditional DETR model) convbert — ConvBertConfig (ConvBERT model) convnext — ConvNextConfig (ConvNeXT model) convnextv2 — ConvNextV2Config (ConvNeXTV2 model) cpmant — CpmAntConfig (CPM-Ant model) ctrl — CTRLConfig (CTRL model) cvt — CvtConfig (CvT model) dac — DacConfig (DAC model) data2vec-audio — Data2VecAudioConfig (Data2VecAudio model) data2vec-text — Data2VecTextConfig (Data2VecText model) data2vec-vision — Data2VecVisionConfig (Data2VecVision model) dbrx — DbrxConfig (DBRX model) deberta — DebertaConfig (DeBERTa model) deberta-v2 — DebertaV2Config (DeBERTa-v2 model) decision_transformer — DecisionTransformerConfig (Decision Transformer model) deformable_detr — DeformableDetrConfig (Deformable DETR model) deit — DeiTConfig (DeiT model) depth_anything — DepthAnythingConfig (Depth Anything model) deta — DetaConfig (DETA model) detr — DetrConfig (DETR model) diffllama — DiffLlamaConfig (DiffLlama model) dinat — DinatConfig (DiNAT model) dinov2 — Dinov2Config (DINOv2 model) dinov2_with_registers — Dinov2WithRegistersConfig (DINOv2 with Registers model) distilbert — DistilBertConfig (DistilBERT model) donut-swin — DonutSwinConfig (DonutSwin model) dpr — DPRConfig (DPR model) dpt — DPTConfig (DPT model) efficientformer — EfficientFormerConfig (EfficientFormer model) efficientnet — EfficientNetConfig (EfficientNet model) electra — ElectraConfig (ELECTRA model) emu3 — Emu3Config (Emu3 model) encodec — EncodecConfig (EnCodec model) encoder-decoder — EncoderDecoderConfig (Encoder decoder model) ernie — ErnieConfig (ERNIE model) ernie_m — ErnieMConfig 
(ErnieM model) esm — EsmConfig (ESM model) falcon — FalconConfig (Falcon model) falcon_mamba — FalconMambaConfig (FalconMamba model) fastspeech2_conformer — FastSpeech2ConformerConfig (FastSpeech2Conformer model) flaubert — FlaubertConfig (FlauBERT model) flava — FlavaConfig (FLAVA model) fnet — FNetConfig (FNet model) focalnet — FocalNetConfig (FocalNet model) fsmt — FSMTConfig (FairSeq Machine-Translation model) funnel — FunnelConfig (Funnel Transformer model) fuyu — FuyuConfig (Fuyu model) gemma — GemmaConfig (Gemma model) gemma2 — Gemma2Config (Gemma2 model) git — GitConfig (GIT model) glm — GlmConfig (GLM model) glpn — GLPNConfig (GLPN model) gpt-sw3 — GPT2Config (GPT-Sw3 model) gpt2 — GPT2Config (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeConfig (GPTBigCode model) gpt_neo — GPTNeoConfig (GPT Neo model) gpt_neox — GPTNeoXConfig (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseConfig (GPT NeoX Japanese model) gptj — GPTJConfig (GPT-J model) gptsan-japanese — GPTSanJapaneseConfig (GPTSAN-japanese model) granite — GraniteConfig (Granite model) granitemoe — GraniteMoeConfig (GraniteMoeMoe model) graphormer — GraphormerConfig (Graphormer model) grounding-dino — GroundingDinoConfig (Grounding DINO model) groupvit — GroupViTConfig (GroupViT model) hiera — HieraConfig (Hiera model) hubert — HubertConfig (Hubert model) ibert — IBertConfig (I-BERT model) idefics — IdeficsConfig (IDEFICS model) idefics2 — Idefics2Config (Idefics2 model) idefics3 — Idefics3Config (Idefics3 model) idefics3_vision — Idefics3VisionConfig (Idefics3VisionTransformer model) ijepa — IJepaConfig (I-JEPA model) imagegpt — ImageGPTConfig (ImageGPT model) informer — InformerConfig (Informer model) instructblip — InstructBlipConfig (InstructBLIP model) instructblipvideo — InstructBlipVideoConfig (InstructBlipVideo model) jamba — JambaConfig (Jamba model) jetmoe — JetMoeConfig (JetMoe model) jukebox — JukeboxConfig (Jukebox model) kosmos-2 — Kosmos2Config (KOSMOS-2 model) layoutlm — LayoutLMConfig (LayoutLM model) layoutlmv2 — LayoutLMv2Config (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Config (LayoutLMv3 model) led — LEDConfig (LED model) levit — LevitConfig (LeViT model) lilt — LiltConfig (LiLT model) llama — LlamaConfig (LLaMA model) llava — LlavaConfig (LLaVa model) llava_next — LlavaNextConfig (LLaVA-NeXT model) llava_next_video — LlavaNextVideoConfig (LLaVa-NeXT-Video model) llava_onevision — LlavaOnevisionConfig (LLaVA-Onevision model) longformer — LongformerConfig (Longformer model) longt5 — LongT5Config (LongT5 model) luke — LukeConfig (LUKE model) lxmert — LxmertConfig (LXMERT model) m2m_100 — M2M100Config (M2M100 model) mamba — MambaConfig (Mamba model) mamba2 — Mamba2Config (mamba2 model) marian — MarianConfig (Marian model) markuplm — MarkupLMConfig (MarkupLM model) mask2former — Mask2FormerConfig (Mask2Former model) maskformer — MaskFormerConfig (MaskFormer model) maskformer-swin — MaskFormerSwinConfig (MaskFormerSwin model) mbart — MBartConfig (mBART model) mctct — MCTCTConfig (M-CTC-T model) mega — MegaConfig (MEGA model) megatron-bert — MegatronBertConfig (Megatron-BERT model) mgp-str — MgpstrConfig (MGP-STR model) mimi — MimiConfig (Mimi model) mistral — MistralConfig (Mistral model) mixtral — MixtralConfig (Mixtral model) mllama — MllamaConfig (Mllama model) mobilebert — MobileBertConfig (MobileBERT model) mobilenet_v1 — MobileNetV1Config (MobileNetV1 model) mobilenet_v2 — MobileNetV2Config (MobileNetV2 model) mobilevit — MobileViTConfig (MobileViT model) mobilevitv2 — MobileViTV2Config (MobileViTV2 
model) modernbert — ModernBertConfig (ModernBERT model) moonshine — MoonshineConfig (Moonshine model) moshi — MoshiConfig (Moshi model) mpnet — MPNetConfig (MPNet model) mpt — MptConfig (MPT model) mra — MraConfig (MRA model) mt5 — MT5Config (MT5 model) musicgen — MusicgenConfig (MusicGen model) musicgen_melody — MusicgenMelodyConfig (MusicGen Melody model) mvp — MvpConfig (MVP model) nat — NatConfig (NAT model) nemotron — NemotronConfig (Nemotron model) nezha — NezhaConfig (Nezha model) nllb-moe — NllbMoeConfig (NLLB-MOE model) nougat — VisionEncoderDecoderConfig (Nougat model) nystromformer — NystromformerConfig (Nyströmformer model) olmo — OlmoConfig (OLMo model) olmo2 — Olmo2Config (OLMo2 model) olmoe — OlmoeConfig (OLMoE model) omdet-turbo — OmDetTurboConfig (OmDet-Turbo model) oneformer — OneFormerConfig (OneFormer model) open-llama — OpenLlamaConfig (OpenLlama model) openai-gpt — OpenAIGPTConfig (OpenAI GPT model) opt — OPTConfig (OPT model) owlv2 — Owlv2Config (OWLv2 model) owlvit — OwlViTConfig (OWL-ViT model) paligemma — PaliGemmaConfig (PaliGemma model) patchtsmixer — PatchTSMixerConfig (PatchTSMixer model) patchtst — PatchTSTConfig (PatchTST model) pegasus — PegasusConfig (Pegasus model) pegasus_x — PegasusXConfig (PEGASUS-X model) perceiver — PerceiverConfig (Perceiver model) persimmon — PersimmonConfig (Persimmon model) phi — PhiConfig (Phi model) phi3 — Phi3Config (Phi3 model) phimoe — PhimoeConfig (Phimoe model) pix2struct — Pix2StructConfig (Pix2Struct model) pixtral — PixtralVisionConfig (Pixtral model) plbart — PLBartConfig (PLBart model) poolformer — PoolFormerConfig (PoolFormer model) pop2piano — Pop2PianoConfig (Pop2Piano model) prophetnet — ProphetNetConfig (ProphetNet model) pvt — PvtConfig (PVT model) pvt_v2 — PvtV2Config (PVTv2 model) qdqbert — QDQBertConfig (QDQBert model) qwen2 — Qwen2Config (Qwen2 model) qwen2_audio — Qwen2AudioConfig (Qwen2Audio model) qwen2_audio_encoder — Qwen2AudioEncoderConfig (Qwen2AudioEncoder model) qwen2_moe — Qwen2MoeConfig (Qwen2MoE model) qwen2_vl — Qwen2VLConfig (Qwen2VL model) rag — RagConfig (RAG model) realm — RealmConfig (REALM model) recurrent_gemma — RecurrentGemmaConfig (RecurrentGemma model) reformer — ReformerConfig (Reformer model) regnet — RegNetConfig (RegNet model) rembert — RemBertConfig (RemBERT model) resnet — ResNetConfig (ResNet model) retribert — RetriBertConfig (RetriBERT model) roberta — RobertaConfig (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormConfig (RoBERTa-PreLayerNorm model) roc_bert — RoCBertConfig (RoCBert model) roformer — RoFormerConfig (RoFormer model) rt_detr — RTDetrConfig (RT-DETR model) rt_detr_resnet — RTDetrResNetConfig (RT-DETR-ResNet model) rwkv — RwkvConfig (RWKV model) sam — SamConfig (SAM model) seamless_m4t — SeamlessM4TConfig (SeamlessM4T model) seamless_m4t_v2 — SeamlessM4Tv2Config (SeamlessM4Tv2 model) segformer — SegformerConfig (SegFormer model) seggpt — SegGptConfig (SegGPT model) sew — SEWConfig (SEW model) sew-d — SEWDConfig (SEW-D model) siglip — SiglipConfig (SigLIP model) siglip_vision_model — SiglipVisionConfig (SiglipVisionModel model) speech-encoder-decoder — SpeechEncoderDecoderConfig (Speech Encoder decoder model) speech_to_text — Speech2TextConfig (Speech2Text model) speech_to_text_2 — Speech2Text2Config (Speech2Text2 model) speecht5 — SpeechT5Config (SpeechT5 model) splinter — SplinterConfig (Splinter model) squeezebert — SqueezeBertConfig (SqueezeBERT model) stablelm — StableLmConfig (StableLm model) starcoder2 — Starcoder2Config (Starcoder2 model) 
superpoint — SuperPointConfig (SuperPoint model) swiftformer — SwiftFormerConfig (SwiftFormer model) swin — SwinConfig (Swin Transformer model) swin2sr — Swin2SRConfig (Swin2SR model) swinv2 — Swinv2Config (Swin Transformer V2 model) switch_transformers — SwitchTransformersConfig (SwitchTransformers model) t5 — T5Config (T5 model) table-transformer — TableTransformerConfig (Table Transformer model) tapas — TapasConfig (TAPAS model) textnet — TextNetConfig (TextNet model) time_series_transformer — TimeSeriesTransformerConfig (Time Series Transformer model) timesformer — TimesformerConfig (TimeSformer model) timm_backbone — TimmBackboneConfig (TimmBackbone model) timm_wrapper — TimmWrapperConfig (TimmWrapperModel model) trajectory_transformer — TrajectoryTransformerConfig (Trajectory Transformer model) transfo-xl — TransfoXLConfig (Transformer-XL model) trocr — TrOCRConfig (TrOCR model) tvlt — TvltConfig (TVLT model) tvp — TvpConfig (TVP model) udop — UdopConfig (UDOP model) umt5 — UMT5Config (UMT5 model) unispeech — UniSpeechConfig (UniSpeech model) unispeech-sat — UniSpeechSatConfig (UniSpeechSat model) univnet — UnivNetConfig (UnivNet model) upernet — UperNetConfig (UPerNet model) van — VanConfig (VAN model) video_llava — VideoLlavaConfig (VideoLlava model) videomae — VideoMAEConfig (VideoMAE model) vilt — ViltConfig (ViLT model) vipllava — VipLlavaConfig (VipLlava model) vision-encoder-decoder — VisionEncoderDecoderConfig (Vision Encoder decoder model) vision-text-dual-encoder — VisionTextDualEncoderConfig (VisionTextDualEncoder model) visual_bert — VisualBertConfig (VisualBERT model) vit — ViTConfig (ViT model) vit_hybrid — ViTHybridConfig (ViT Hybrid model) vit_mae — ViTMAEConfig (ViTMAE model) vit_msn — ViTMSNConfig (ViTMSN model) vitdet — VitDetConfig (VitDet model) vitmatte — VitMatteConfig (ViTMatte model) vitpose — VitPoseConfig (VitPose model) vitpose_backbone — VitPoseBackboneConfig (VitPoseBackbone model) vits — VitsConfig (VITS model) vivit — VivitConfig (ViViT model) wav2vec2 — Wav2Vec2Config (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2BertConfig (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2ConformerConfig (Wav2Vec2-Conformer model) wavlm — WavLMConfig (WavLM model) whisper — WhisperConfig (Whisper model) xclip — XCLIPConfig (X-CLIP model) xglm — XGLMConfig (XGLM model) xlm — XLMConfig (XLM model) xlm-prophetnet — XLMProphetNetConfig (XLM-ProphetNet model) xlm-roberta — XLMRobertaConfig (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLConfig (XLM-RoBERTa-XL model) xlnet — XLNetConfig (XLNet model) xmod — XmodConfig (X-MOD model) yolos — YolosConfig (YOLOS model) yoso — YosoConfig (YOSO model) zamba — ZambaConfig (Zamba model) zoedepth — ZoeDepthConfig (ZoeDepth model) Examples: Copied >>> from transformers import AutoConfig >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-uncased" ) >>> # Download configuration from huggingface.co (user-uploaded) and cache. >>> config = AutoConfig.from_pretrained( "dbmdz/bert-base-german-cased" ) >>> # If configuration file is in a directory (e.g., was saved using *save_pretrained('./test/saved_model/')*). >>> config = AutoConfig.from_pretrained( "./test/bert_saved_model/" ) >>> # Load a specific configuration file. >>> config = AutoConfig.from_pretrained( "./test/bert_saved_model/my_configuration.json" ) >>> # Change some config attributes when loading a pretrained config. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-uncased" , output_attentions= True , foo= False ) >>> config.output_attentions True >>> config, unused_kwargs = AutoConfig.from_pretrained( ... "google-bert/bert-base-uncased" , output_attentions= True , foo= False , return_unused_kwargs= True ... ) >>> config.output_attentions True >>> unused_kwargs { 'foo' : False } register < source > ( model_type config exist_ok = False ) Parameters model_type ( str ) — The model type like “bert” or “gpt”. config ( PretrainedConfig ) — The config to register. Register a new configuration for this class. AutoTokenizer class transformers. AutoTokenizer < source > ( ) This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path *inputs **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/ . A path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g.: ./my_model_directory/vocab.txt . (Not applicable to all derived classes) inputs (additional positional arguments, optional ) — Will be passed along to the Tokenizer __init__() method. config ( PretrainedConfig , optional ) — The configuration object used to determine the tokenizer class to instantiate. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download the model weights and configuration files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. subfolder ( str , optional ) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here. use_fast ( bool , optional , defaults to True ) — Use a fast Rust-based tokenizer if it is supported for a given model. If a fast tokenizer is not available for a given model, a normal Python-based tokenizer is returned instead. tokenizer_type ( str , optional ) — Tokenizer type to be loaded. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. 
This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs (additional keyword arguments, optional ) — Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token , eos_token , unk_token , sep_token , pad_token , cls_token , mask_token , additional_special_tokens . See parameters in the __init__() for more details. Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary. The tokenizer class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertTokenizer or AlbertTokenizerFast (ALBERT model) align — BertTokenizer or BertTokenizerFast (ALIGN model) aria — LlamaTokenizer or LlamaTokenizerFast (Aria model) bark — BertTokenizer or BertTokenizerFast (Bark model) bart — BartTokenizer or BartTokenizerFast (BART model) barthez — BarthezTokenizer or BarthezTokenizerFast (BARThez model) bartpho — BartphoTokenizer (BARTpho model) bert — BertTokenizer or BertTokenizerFast (BERT model) bert-generation — BertGenerationTokenizer (Bert Generation model) bert-japanese — BertJapaneseTokenizer (BertJapanese model) bertweet — BertweetTokenizer (BERTweet model) big_bird — BigBirdTokenizer or BigBirdTokenizerFast (BigBird model) bigbird_pegasus — PegasusTokenizer or PegasusTokenizerFast (BigBird-Pegasus model) biogpt — BioGptTokenizer (BioGpt model) blenderbot — BlenderbotTokenizer or BlenderbotTokenizerFast (Blenderbot model) blenderbot-small — BlenderbotSmallTokenizer (BlenderbotSmall model) blip — BertTokenizer or BertTokenizerFast (BLIP model) blip-2 — GPT2Tokenizer or GPT2TokenizerFast (BLIP-2 model) bloom — BloomTokenizerFast (BLOOM model) bridgetower — RobertaTokenizer or RobertaTokenizerFast (BridgeTower model) bros — BertTokenizer or BertTokenizerFast (BROS model) byt5 — ByT5Tokenizer (ByT5 model) camembert — CamembertTokenizer or CamembertTokenizerFast (CamemBERT model) canine — CanineTokenizer (CANINE model) chameleon — LlamaTokenizer or LlamaTokenizerFast (Chameleon model) chinese_clip — BertTokenizer or BertTokenizerFast (Chinese-CLIP model) clap — RobertaTokenizer or RobertaTokenizerFast (CLAP model) clip — CLIPTokenizer or CLIPTokenizerFast (CLIP model) clipseg — CLIPTokenizer or CLIPTokenizerFast (CLIPSeg model) clvp — ClvpTokenizer (CLVP model) code_llama — CodeLlamaTokenizer or CodeLlamaTokenizerFast (CodeLlama model) codegen — CodeGenTokenizer or CodeGenTokenizerFast (CodeGen model) cohere — CohereTokenizerFast (Cohere model) cohere2 — CohereTokenizerFast (Cohere2 model) colpali — LlamaTokenizer or LlamaTokenizerFast (ColPali model) convbert — ConvBertTokenizer or ConvBertTokenizerFast (ConvBERT model) cpm — CpmTokenizer or CpmTokenizerFast (CPM model) cpmant — CpmAntTokenizer (CPM-Ant model) ctrl — CTRLTokenizer (CTRL model) data2vec-audio — Wav2Vec2CTCTokenizer (Data2VecAudio model) data2vec-text — RobertaTokenizer or RobertaTokenizerFast (Data2VecText model) dbrx — GPT2Tokenizer or GPT2TokenizerFast (DBRX model) deberta — DebertaTokenizer or DebertaTokenizerFast (DeBERTa model) deberta-v2 — DebertaV2Tokenizer or DebertaV2TokenizerFast (DeBERTa-v2 model) diffllama — LlamaTokenizer or LlamaTokenizerFast (DiffLlama model) distilbert — DistilBertTokenizer or 
DistilBertTokenizerFast (DistilBERT model) dpr — DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model) electra — ElectraTokenizer or ElectraTokenizerFast (ELECTRA model) emu3 — GPT2Tokenizer or GPT2TokenizerFast (Emu3 model) ernie — BertTokenizer or BertTokenizerFast (ERNIE model) ernie_m — ErnieMTokenizer (ErnieM model) esm — EsmTokenizer (ESM model) falcon — PreTrainedTokenizerFast (Falcon model) falcon_mamba — GPTNeoXTokenizerFast (FalconMamba model) fastspeech2_conformer — (FastSpeech2Conformer model) flaubert — FlaubertTokenizer (FlauBERT model) fnet — FNetTokenizer or FNetTokenizerFast (FNet model) fsmt — FSMTTokenizer (FairSeq Machine-Translation model) funnel — FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model) gemma — GemmaTokenizer or GemmaTokenizerFast (Gemma model) gemma2 — GemmaTokenizer or GemmaTokenizerFast (Gemma2 model) git — BertTokenizer or BertTokenizerFast (GIT model) glm — PreTrainedTokenizerFast (GLM model) gpt-sw3 — GPTSw3Tokenizer (GPT-Sw3 model) gpt2 — GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model) gpt_bigcode — GPT2Tokenizer or GPT2TokenizerFast (GPTBigCode model) gpt_neo — GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model) gpt_neox — GPTNeoXTokenizerFast (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseTokenizer (GPT NeoX Japanese model) gptj — GPT2Tokenizer or GPT2TokenizerFast (GPT-J model) gptsan-japanese — GPTSanJapaneseTokenizer (GPTSAN-japanese model) grounding-dino — BertTokenizer or BertTokenizerFast (Grounding DINO model) groupvit — CLIPTokenizer or CLIPTokenizerFast (GroupViT model) herbert — HerbertTokenizer or HerbertTokenizerFast (HerBERT model) hubert — Wav2Vec2CTCTokenizer (Hubert model) ibert — RobertaTokenizer or RobertaTokenizerFast (I-BERT model) idefics — LlamaTokenizerFast (IDEFICS model) idefics2 — LlamaTokenizer or LlamaTokenizerFast (Idefics2 model) idefics3 — LlamaTokenizer or LlamaTokenizerFast (Idefics3 model) instructblip — GPT2Tokenizer or GPT2TokenizerFast (InstructBLIP model) instructblipvideo — GPT2Tokenizer or GPT2TokenizerFast (InstructBlipVideo model) jamba — LlamaTokenizer or LlamaTokenizerFast (Jamba model) jetmoe — LlamaTokenizer or LlamaTokenizerFast (JetMoe model) jukebox — JukeboxTokenizer (Jukebox model) kosmos-2 — XLMRobertaTokenizer or XLMRobertaTokenizerFast (KOSMOS-2 model) layoutlm — LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model) layoutlmv2 — LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LayoutLMv3 model) layoutxlm — LayoutXLMTokenizer or LayoutXLMTokenizerFast (LayoutXLM model) led — LEDTokenizer or LEDTokenizerFast (LED model) lilt — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LiLT model) llama — LlamaTokenizer or LlamaTokenizerFast (LLaMA model) llava — LlamaTokenizer or LlamaTokenizerFast (LLaVa model) llava_next — LlamaTokenizer or LlamaTokenizerFast (LLaVA-NeXT model) llava_next_video — LlamaTokenizer or LlamaTokenizerFast (LLaVa-NeXT-Video model) llava_onevision — LlamaTokenizer or LlamaTokenizerFast (LLaVA-Onevision model) longformer — LongformerTokenizer or LongformerTokenizerFast (Longformer model) longt5 — T5Tokenizer or T5TokenizerFast (LongT5 model) luke — LukeTokenizer (LUKE model) lxmert — LxmertTokenizer or LxmertTokenizerFast (LXMERT model) m2m_100 — M2M100Tokenizer (M2M100 model) mamba — GPTNeoXTokenizerFast (Mamba model) mamba2 — GPTNeoXTokenizerFast (mamba2 model) marian — MarianTokenizer (Marian model) mbart — MBartTokenizer or MBartTokenizerFast 
(mBART model) mbart50 — MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model) mega — RobertaTokenizer or RobertaTokenizerFast (MEGA model) megatron-bert — BertTokenizer or BertTokenizerFast (Megatron-BERT model) mgp-str — MgpstrTokenizer (MGP-STR model) mistral — LlamaTokenizer or LlamaTokenizerFast (Mistral model) mixtral — LlamaTokenizer or LlamaTokenizerFast (Mixtral model) mllama — LlamaTokenizer or LlamaTokenizerFast (Mllama model) mluke — MLukeTokenizer (mLUKE model) mobilebert — MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model) modernbert — PreTrainedTokenizerFast (ModernBERT model) moonshine — PreTrainedTokenizerFast (Moonshine model) moshi — PreTrainedTokenizerFast (Moshi model) mpnet — MPNetTokenizer or MPNetTokenizerFast (MPNet model) mpt — GPTNeoXTokenizerFast (MPT model) mra — RobertaTokenizer or RobertaTokenizerFast (MRA model) mt5 — MT5Tokenizer or MT5TokenizerFast (MT5 model) musicgen — T5Tokenizer or T5TokenizerFast (MusicGen model) musicgen_melody — T5Tokenizer or T5TokenizerFast (MusicGen Melody model) mvp — MvpTokenizer or MvpTokenizerFast (MVP model) myt5 — MyT5Tokenizer (myt5 model) nezha — BertTokenizer or BertTokenizerFast (Nezha model) nllb — NllbTokenizer or NllbTokenizerFast (NLLB model) nllb-moe — NllbTokenizer or NllbTokenizerFast (NLLB-MOE model) nystromformer — AlbertTokenizer or AlbertTokenizerFast (Nyströmformer model) olmo — GPTNeoXTokenizerFast (OLMo model) olmo2 — GPTNeoXTokenizerFast (OLMo2 model) olmoe — GPTNeoXTokenizerFast (OLMoE model) omdet-turbo — CLIPTokenizer or CLIPTokenizerFast (OmDet-Turbo model) oneformer — CLIPTokenizer or CLIPTokenizerFast (OneFormer model) openai-gpt — OpenAIGPTTokenizer or OpenAIGPTTokenizerFast (OpenAI GPT model) opt — GPT2Tokenizer or GPT2TokenizerFast (OPT model) owlv2 — CLIPTokenizer or CLIPTokenizerFast (OWLv2 model) owlvit — CLIPTokenizer or CLIPTokenizerFast (OWL-ViT model) paligemma — LlamaTokenizer or LlamaTokenizerFast (PaliGemma model) pegasus — PegasusTokenizer or PegasusTokenizerFast (Pegasus model) pegasus_x — PegasusTokenizer or PegasusTokenizerFast (PEGASUS-X model) perceiver — PerceiverTokenizer (Perceiver model) persimmon — LlamaTokenizer or LlamaTokenizerFast (Persimmon model) phi — CodeGenTokenizer or CodeGenTokenizerFast (Phi model) phi3 — LlamaTokenizer or LlamaTokenizerFast (Phi3 model) phimoe — LlamaTokenizer or LlamaTokenizerFast (Phimoe model) phobert — PhobertTokenizer (PhoBERT model) pix2struct — T5Tokenizer or T5TokenizerFast (Pix2Struct model) pixtral — PreTrainedTokenizerFast (Pixtral model) plbart — PLBartTokenizer (PLBart model) prophetnet — ProphetNetTokenizer (ProphetNet model) qdqbert — BertTokenizer or BertTokenizerFast (QDQBert model) qwen2 — Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2 model) qwen2_audio — Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2Audio model) qwen2_moe — Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2MoE model) qwen2_vl — Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2VL model) rag — RagTokenizer (RAG model) realm — RealmTokenizer or RealmTokenizerFast (REALM model) recurrent_gemma — GemmaTokenizer or GemmaTokenizerFast (RecurrentGemma model) reformer — ReformerTokenizer or ReformerTokenizerFast (Reformer model) rembert — RemBertTokenizer or RemBertTokenizerFast (RemBERT model) retribert — RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model) roberta — RobertaTokenizer or RobertaTokenizerFast (RoBERTa model) roberta-prelayernorm — RobertaTokenizer or RobertaTokenizerFast (RoBERTa-PreLayerNorm model) roc_bert — RoCBertTokenizer (RoCBert model) 
roformer — RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model) rwkv — GPTNeoXTokenizerFast (RWKV model) seamless_m4t — SeamlessM4TTokenizer or SeamlessM4TTokenizerFast (SeamlessM4T model) seamless_m4t_v2 — SeamlessM4TTokenizer or SeamlessM4TTokenizerFast (SeamlessM4Tv2 model) siglip — SiglipTokenizer (SigLIP model) speech_to_text — Speech2TextTokenizer (Speech2Text model) speech_to_text_2 — Speech2Text2Tokenizer (Speech2Text2 model) speecht5 — SpeechT5Tokenizer (SpeechT5 model) splinter — SplinterTokenizer or SplinterTokenizerFast (Splinter model) squeezebert — SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model) stablelm — GPTNeoXTokenizerFast (StableLm model) starcoder2 — GPT2Tokenizer or GPT2TokenizerFast (Starcoder2 model) switch_transformers — T5Tokenizer or T5TokenizerFast (SwitchTransformers model) t5 — T5Tokenizer or T5TokenizerFast (T5 model) tapas — TapasTokenizer (TAPAS model) tapex — TapexTokenizer (TAPEX model) transfo-xl — TransfoXLTokenizer (Transformer-XL model) tvp — BertTokenizer or BertTokenizerFast (TVP model) udop — UdopTokenizer or UdopTokenizerFast (UDOP model) umt5 — T5Tokenizer or T5TokenizerFast (UMT5 model) video_llava — LlamaTokenizer or LlamaTokenizerFast (VideoLlava model) vilt — BertTokenizer or BertTokenizerFast (ViLT model) vipllava — LlamaTokenizer or LlamaTokenizerFast (VipLlava model) visual_bert — BertTokenizer or BertTokenizerFast (VisualBERT model) vits — VitsTokenizer (VITS model) wav2vec2 — Wav2Vec2CTCTokenizer (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2CTCTokenizer (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2CTCTokenizer (Wav2Vec2-Conformer model) wav2vec2_phoneme — Wav2Vec2PhonemeCTCTokenizer (Wav2Vec2Phoneme model) whisper — WhisperTokenizer or WhisperTokenizerFast (Whisper model) xclip — CLIPTokenizer or CLIPTokenizerFast (X-CLIP model) xglm — XGLMTokenizer or XGLMTokenizerFast (XGLM model) xlm — XLMTokenizer (XLM model) xlm-prophetnet — XLMProphetNetTokenizer (XLM-ProphetNet model) xlm-roberta — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa-XL model) xlnet — XLNetTokenizer or XLNetTokenizerFast (XLNet model) xmod — XLMRobertaTokenizer or XLMRobertaTokenizerFast (X-MOD model) yoso — AlbertTokenizer or AlbertTokenizerFast (YOSO model) zamba — LlamaTokenizer or LlamaTokenizerFast (Zamba model) Examples: Copied >>> from transformers import AutoTokenizer >>> # Download vocabulary from huggingface.co and cache. >>> tokenizer = AutoTokenizer.from_pretrained( "google-bert/bert-base-uncased" ) >>> # Download vocabulary from huggingface.co (user-uploaded) and cache. >>> tokenizer = AutoTokenizer.from_pretrained( "dbmdz/bert-base-german-cased" ) >>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*) >>> # tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/") >>> # Download vocabulary from huggingface.co and define model-specific arguments >>> tokenizer = AutoTokenizer.from_pretrained( "FacebookAI/roberta-base" , add_prefix_space= True ) register < source > ( config_class slow_tokenizer_class = None fast_tokenizer_class = None exist_ok = False ) Parameters config_class ( PretrainedConfig ) — The configuration corresponding to the model to register. slow_tokenizer_class ( PretrainedTokenizer , optional ) — The slow tokenizer to register. fast_tokenizer_class ( PretrainedTokenizerFast , optional ) — The fast tokenizer to register. 
Register a new tokenizer in this mapping. AutoFeatureExtractor class transformers. AutoFeatureExtractor < source > ( ) This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — This can be either: a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/ . a path or url to a saved feature extractor JSON file , e.g., ./my_model_directory/preprocessor_config.json . cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force to (re-)download the feature extractor files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token ( str or bool , optional ) — The token to use as HTTP bearer authorization for remote files. If True , will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. return_unused_kwargs ( bool , optional , defaults to False ) — If False , then this function returns just the final feature extractor object. If True , then this functions returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs ( Dict[str, Any] , optional ) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the feature extractor classes of the library from a pretrained model vocabulary. 
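The return_unused_kwargs parameter documented above behaves like its AutoConfig counterpart shown earlier on this page. As a minimal sketch (not part of the official examples), foo below is a made-up keyword that is not a feature extractor attribute, so it is handed back instead of being applied; the exact contents of unused_kwargs may also include internal bookkeeping keys:

>>> from transformers import AutoFeatureExtractor
>>> feature_extractor, unused_kwargs = AutoFeatureExtractor.from_pretrained(
...     "facebook/wav2vec2-base-960h", return_unused_kwargs=True, foo=False
... )
>>> "foo" in unused_kwargs
True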
The feature extractor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : audio-spectrogram-transformer — ASTFeatureExtractor (Audio Spectrogram Transformer model) beit — BeitFeatureExtractor (BEiT model) chinese_clip — ChineseCLIPFeatureExtractor (Chinese-CLIP model) clap — ClapFeatureExtractor (CLAP model) clip — CLIPFeatureExtractor (CLIP model) clipseg — ViTFeatureExtractor (CLIPSeg model) clvp — ClvpFeatureExtractor (CLVP model) conditional_detr — ConditionalDetrFeatureExtractor (Conditional DETR model) convnext — ConvNextFeatureExtractor (ConvNeXT model) cvt — ConvNextFeatureExtractor (CvT model) dac — DacFeatureExtractor (DAC model) data2vec-audio — Wav2Vec2FeatureExtractor (Data2VecAudio model) data2vec-vision — BeitFeatureExtractor (Data2VecVision model) deformable_detr — DeformableDetrFeatureExtractor (Deformable DETR model) deit — DeiTFeatureExtractor (DeiT model) detr — DetrFeatureExtractor (DETR model) dinat — ViTFeatureExtractor (DiNAT model) donut-swin — DonutFeatureExtractor (DonutSwin model) dpt — DPTFeatureExtractor (DPT model) encodec — EncodecFeatureExtractor (EnCodec model) flava — FlavaFeatureExtractor (FLAVA model) glpn — GLPNFeatureExtractor (GLPN model) groupvit — CLIPFeatureExtractor (GroupViT model) hubert — Wav2Vec2FeatureExtractor (Hubert model) imagegpt — ImageGPTFeatureExtractor (ImageGPT model) layoutlmv2 — LayoutLMv2FeatureExtractor (LayoutLMv2 model) layoutlmv3 — LayoutLMv3FeatureExtractor (LayoutLMv3 model) levit — LevitFeatureExtractor (LeViT model) maskformer — MaskFormerFeatureExtractor (MaskFormer model) mctct — MCTCTFeatureExtractor (M-CTC-T model) mimi — EncodecFeatureExtractor (Mimi model) mobilenet_v1 — MobileNetV1FeatureExtractor (MobileNetV1 model) mobilenet_v2 — MobileNetV2FeatureExtractor (MobileNetV2 model) mobilevit — MobileViTFeatureExtractor (MobileViT model) moonshine — Wav2Vec2FeatureExtractor (Moonshine model) moshi — EncodecFeatureExtractor (Moshi model) nat — ViTFeatureExtractor (NAT model) owlvit — OwlViTFeatureExtractor (OWL-ViT model) perceiver — PerceiverFeatureExtractor (Perceiver model) poolformer — PoolFormerFeatureExtractor (PoolFormer model) pop2piano — Pop2PianoFeatureExtractor (Pop2Piano model) regnet — ConvNextFeatureExtractor (RegNet model) resnet — ConvNextFeatureExtractor (ResNet model) seamless_m4t — SeamlessM4TFeatureExtractor (SeamlessM4T model) seamless_m4t_v2 — SeamlessM4TFeatureExtractor (SeamlessM4Tv2 model) segformer — SegformerFeatureExtractor (SegFormer model) sew — Wav2Vec2FeatureExtractor (SEW model) sew-d — Wav2Vec2FeatureExtractor (SEW-D model) speech_to_text — Speech2TextFeatureExtractor (Speech2Text model) speecht5 — SpeechT5FeatureExtractor (SpeechT5 model) swiftformer — ViTFeatureExtractor (SwiftFormer model) swin — ViTFeatureExtractor (Swin Transformer model) swinv2 — ViTFeatureExtractor (Swin Transformer V2 model) table-transformer — DetrFeatureExtractor (Table Transformer model) timesformer — VideoMAEFeatureExtractor (TimeSformer model) tvlt — TvltFeatureExtractor (TVLT model) unispeech — Wav2Vec2FeatureExtractor (UniSpeech model) unispeech-sat — Wav2Vec2FeatureExtractor (UniSpeechSat model) univnet — UnivNetFeatureExtractor (UnivNet model) van — ConvNextFeatureExtractor (VAN model) videomae — VideoMAEFeatureExtractor (VideoMAE model) vilt — ViltFeatureExtractor (ViLT model) vit — 
ViTFeatureExtractor (ViT model) vit_mae — ViTFeatureExtractor (ViTMAE model) vit_msn — ViTFeatureExtractor (ViTMSN model) wav2vec2 — Wav2Vec2FeatureExtractor (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2FeatureExtractor (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2FeatureExtractor (Wav2Vec2-Conformer model) wavlm — Wav2Vec2FeatureExtractor (WavLM model) whisper — WhisperFeatureExtractor (Whisper model) xclip — CLIPFeatureExtractor (X-CLIP model) yolos — YolosFeatureExtractor (YOLOS model) Passing token=True is required when you want to use a private model. Examples: Copied >>> from transformers import AutoFeatureExtractor >>> # Download feature extractor from huggingface.co and cache. >>> feature_extractor = AutoFeatureExtractor.from_pretrained( "facebook/wav2vec2-base-960h" ) >>> # If feature extractor files are in a directory (e.g. feature extractor was saved using *save_pretrained('./test/saved_model/')*) >>> # feature_extractor = AutoFeatureExtractor.from_pretrained("./test/saved_model/") register < source > ( config_class feature_extractor_class exist_ok = False ) Parameters config_class ( PretrainedConfig ) — The configuration corresponding to the model to register. feature_extractor_class ( FeatureExtractorMixin ) — The feature extractor to register. Register a new feature extractor for this class. AutoImageProcessor class transformers. AutoImageProcessor < source > ( ) This is a generic image processor class that will be instantiated as one of the image processor classes of the library when created with the AutoImageProcessor.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path *inputs **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — This can be either: a string, the model id of a pretrained image_processor hosted inside a model repo on huggingface.co. a path to a directory containing a image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/ . a path or url to a saved image processor JSON file , e.g., ./my_model_directory/preprocessor_config.json . cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force to (re-)download the image processor files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token ( str or bool , optional ) — The token to use as HTTP bearer authorization for remote files. If True , will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. use_fast ( bool , optional , defaults to False ) — Use a fast torchvision-base image processor if it is supported for a given model. 
If a fast image processor is not available for a given model, a normal numpy-based image processor is returned instead. return_unused_kwargs ( bool , optional , defaults to False ) — If False , then this function returns just the final image processor object. If True , then this function returns a Tuple(image_processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of kwargs which has not been used to update image_processor and is otherwise ignored. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. image_processor_filename ( str , optional , defaults to "config.json" ) — The name of the file in the model directory to use for the image processor config. kwargs ( Dict[str, Any] , optional ) — The values in kwargs of any keys which are image processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not image processor attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the image processor classes of the library from a pretrained model vocabulary. The image processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : align — EfficientNetImageProcessor (ALIGN model) aria — AriaImageProcessor (Aria model) beit — BeitImageProcessor (BEiT model) bit — BitImageProcessor (BiT model) blip — BlipImageProcessor (BLIP model) blip-2 — BlipImageProcessor (BLIP-2 model) bridgetower — BridgeTowerImageProcessor (BridgeTower model) chameleon — ChameleonImageProcessor (Chameleon model) chinese_clip — ChineseCLIPImageProcessor (Chinese-CLIP model) clip — CLIPImageProcessor (CLIP model) clipseg — ViTImageProcessor or ViTImageProcessorFast (CLIPSeg model) conditional_detr — ConditionalDetrImageProcessor (Conditional DETR model) convnext — ConvNextImageProcessor (ConvNeXT model) convnextv2 — ConvNextImageProcessor (ConvNeXTV2 model) cvt — ConvNextImageProcessor (CvT model) data2vec-vision — BeitImageProcessor (Data2VecVision model) deformable_detr — DeformableDetrImageProcessor or DeformableDetrImageProcessorFast (Deformable DETR model) deit — DeiTImageProcessor (DeiT model) depth_anything — DPTImageProcessor (Depth Anything model) deta — DetaImageProcessor (DETA model) detr — DetrImageProcessor or DetrImageProcessorFast (DETR model) dinat — ViTImageProcessor or ViTImageProcessorFast (DiNAT model) dinov2 — BitImageProcessor (DINOv2 model) donut-swin — DonutImageProcessor (DonutSwin model) dpt — DPTImageProcessor (DPT model) efficientformer — EfficientFormerImageProcessor (EfficientFormer model) efficientnet — EfficientNetImageProcessor (EfficientNet model) flava — FlavaImageProcessor (FLAVA model) focalnet — BitImageProcessor (FocalNet model) fuyu — FuyuImageProcessor (Fuyu model) git — CLIPImageProcessor (GIT model) glpn — GLPNImageProcessor (GLPN model) grounding-dino — GroundingDinoImageProcessor (Grounding DINO model) groupvit — CLIPImageProcessor (GroupViT model) hiera — BitImageProcessor (Hiera model) idefics — IdeficsImageProcessor (IDEFICS
model) idefics2 — Idefics2ImageProcessor (Idefics2 model) idefics3 — Idefics3ImageProcessor (Idefics3 model) ijepa — ViTImageProcessor or ViTImageProcessorFast (I-JEPA model) imagegpt — ImageGPTImageProcessor (ImageGPT model) instructblip — BlipImageProcessor (InstructBLIP model) instructblipvideo — InstructBlipVideoImageProcessor (InstructBlipVideo model) kosmos-2 — CLIPImageProcessor (KOSMOS-2 model) layoutlmv2 — LayoutLMv2ImageProcessor (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ImageProcessor (LayoutLMv3 model) levit — LevitImageProcessor (LeViT model) llava — CLIPImageProcessor (LLaVa model) llava_next — LlavaNextImageProcessor (LLaVA-NeXT model) llava_next_video — LlavaNextVideoImageProcessor (LLaVa-NeXT-Video model) llava_onevision — LlavaOnevisionImageProcessor (LLaVA-Onevision model) mask2former — Mask2FormerImageProcessor (Mask2Former model) maskformer — MaskFormerImageProcessor (MaskFormer model) mgp-str — ViTImageProcessor or ViTImageProcessorFast (MGP-STR model) mllama — MllamaImageProcessor (Mllama model) mobilenet_v1 — MobileNetV1ImageProcessor (MobileNetV1 model) mobilenet_v2 — MobileNetV2ImageProcessor (MobileNetV2 model) mobilevit — MobileViTImageProcessor (MobileViT model) mobilevitv2 — MobileViTImageProcessor (MobileViTV2 model) nat — ViTImageProcessor or ViTImageProcessorFast (NAT model) nougat — NougatImageProcessor (Nougat model) oneformer — OneFormerImageProcessor (OneFormer model) owlv2 — Owlv2ImageProcessor (OWLv2 model) owlvit — OwlViTImageProcessor (OWL-ViT model) paligemma — SiglipImageProcessor (PaliGemma model) perceiver — PerceiverImageProcessor (Perceiver model) pix2struct — Pix2StructImageProcessor (Pix2Struct model) pixtral — PixtralImageProcessor or PixtralImageProcessorFast (Pixtral model) poolformer — PoolFormerImageProcessor (PoolFormer model) pvt — PvtImageProcessor (PVT model) pvt_v2 — PvtImageProcessor (PVTv2 model) qwen2_vl — Qwen2VLImageProcessor (Qwen2VL model) regnet — ConvNextImageProcessor (RegNet model) resnet — ConvNextImageProcessor (ResNet model) rt_detr — RTDetrImageProcessor or RTDetrImageProcessorFast (RT-DETR model) sam — SamImageProcessor (SAM model) segformer — SegformerImageProcessor (SegFormer model) seggpt — SegGptImageProcessor (SegGPT model) siglip — SiglipImageProcessor (SigLIP model) swiftformer — ViTImageProcessor or ViTImageProcessorFast (SwiftFormer model) swin — ViTImageProcessor or ViTImageProcessorFast (Swin Transformer model) swin2sr — Swin2SRImageProcessor (Swin2SR model) swinv2 — ViTImageProcessor or ViTImageProcessorFast (Swin Transformer V2 model) table-transformer — DetrImageProcessor (Table Transformer model) timesformer — VideoMAEImageProcessor (TimeSformer model) timm_wrapper — TimmWrapperImageProcessor (TimmWrapperModel model) tvlt — TvltImageProcessor (TVLT model) tvp — TvpImageProcessor (TVP model) udop — LayoutLMv3ImageProcessor (UDOP model) upernet — SegformerImageProcessor (UPerNet model) van — ConvNextImageProcessor (VAN model) videomae — VideoMAEImageProcessor (VideoMAE model) vilt — ViltImageProcessor (ViLT model) vipllava — CLIPImageProcessor (VipLlava model) vit — ViTImageProcessor or ViTImageProcessorFast (ViT model) vit_hybrid — ViTHybridImageProcessor (ViT Hybrid model) vit_mae — ViTImageProcessor or ViTImageProcessorFast (ViTMAE model) vit_msn — ViTImageProcessor or ViTImageProcessorFast (ViTMSN model) vitmatte — VitMatteImageProcessor (ViTMatte model) xclip — CLIPImageProcessor (X-CLIP model) yolos — YolosImageProcessor (YOLOS model) zoedepth — ZoeDepthImageProcessor (ZoeDepth model) Passing 
token=True is required when you want to use a private model. Examples: Copied >>> from transformers import AutoImageProcessor >>> # Download image processor from huggingface.co and cache. >>> image_processor = AutoImageProcessor.from_pretrained( "google/vit-base-patch16-224-in21k" ) >>> # If image processor files are in a directory (e.g. image processor was saved using *save_pretrained('./test/saved_model/')*) >>> # image_processor = AutoImageProcessor.from_pretrained("./test/saved_model/") register < source > ( config_class image_processor_class = None slow_image_processor_class = None fast_image_processor_class = None exist_ok = False ) Parameters config_class ( PretrainedConfig ) — The configuration corresponding to the model to register. image_processor_class ( ImageProcessingMixin ) — The image processor to register. Register a new image processor for this class. AutoProcessor class transformers. AutoProcessor < source > ( ) This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the AutoProcessor.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — This can be either: a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. a path to a directory containing a processor files saved using the save_pretrained() method, e.g., ./my_model_directory/ . cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used. force_download ( bool , optional , defaults to False ) — Whether or not to force to (re-)download the feature extractor files and override the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token ( str or bool , optional ) — The token to use as HTTP bearer authorization for remote files. If True , will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. return_unused_kwargs ( bool , optional , defaults to False ) — If False , then this function returns just the final feature extractor object. If True , then this functions returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
kwargs ( Dict[str, Any] , optional ) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the processor classes of the library from a pretrained model vocabulary. The processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible): align — AlignProcessor (ALIGN model) altclip — AltCLIPProcessor (AltCLIP model) aria — AriaProcessor (Aria model) bark — BarkProcessor (Bark model) blip — BlipProcessor (BLIP model) blip-2 — Blip2Processor (BLIP-2 model) bridgetower — BridgeTowerProcessor (BridgeTower model) chameleon — ChameleonProcessor (Chameleon model) chinese_clip — ChineseCLIPProcessor (Chinese-CLIP model) clap — ClapProcessor (CLAP model) clip — CLIPProcessor (CLIP model) clipseg — CLIPSegProcessor (CLIPSeg model) clvp — ClvpProcessor (CLVP model) colpali — ColPaliProcessor (ColPali model) emu3 — Emu3Processor (Emu3 model) flava — FlavaProcessor (FLAVA model) fuyu — FuyuProcessor (Fuyu model) git — GitProcessor (GIT model) grounding-dino — GroundingDinoProcessor (Grounding DINO model) groupvit — CLIPProcessor (GroupViT model) hubert — Wav2Vec2Processor (Hubert model) idefics — IdeficsProcessor (IDEFICS model) idefics2 — Idefics2Processor (Idefics2 model) idefics3 — Idefics3Processor (Idefics3 model) instructblip — InstructBlipProcessor (InstructBLIP model) instructblipvideo — InstructBlipVideoProcessor (InstructBlipVideo model) kosmos-2 — Kosmos2Processor (KOSMOS-2 model) layoutlmv2 — LayoutLMv2Processor (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Processor (LayoutLMv3 model) llava — LlavaProcessor (LLaVa model) llava_next — LlavaNextProcessor (LLaVA-NeXT model) llava_next_video — LlavaNextVideoProcessor (LLaVa-NeXT-Video model) llava_onevision — LlavaOnevisionProcessor (LLaVA-Onevision model) markuplm — MarkupLMProcessor (MarkupLM model) mctct — MCTCTProcessor (M-CTC-T model) mgp-str — MgpstrProcessor (MGP-STR model) mllama — MllamaProcessor (Mllama model) moonshine — Wav2Vec2Processor (Moonshine model) oneformer — OneFormerProcessor (OneFormer model) owlv2 — Owlv2Processor (OWLv2 model) owlvit — OwlViTProcessor (OWL-ViT model) paligemma — PaliGemmaProcessor (PaliGemma model) pix2struct — Pix2StructProcessor (Pix2Struct model) pixtral — PixtralProcessor (Pixtral model) pop2piano — Pop2PianoProcessor (Pop2Piano model) qwen2_audio — Qwen2AudioProcessor (Qwen2Audio model) qwen2_vl — Qwen2VLProcessor (Qwen2VL model) sam — SamProcessor (SAM model) seamless_m4t — SeamlessM4TProcessor (SeamlessM4T model) sew — Wav2Vec2Processor (SEW model) sew-d — Wav2Vec2Processor (SEW-D model) siglip — SiglipProcessor (SigLIP model) speech_to_text — Speech2TextProcessor (Speech2Text model) speech_to_text_2 — Speech2Text2Processor (Speech2Text2 model) speecht5 — SpeechT5Processor (SpeechT5 model) trocr — TrOCRProcessor (TrOCR model) tvlt — TvltProcessor (TVLT model) tvp — TvpProcessor (TVP model) udop — UdopProcessor (UDOP model) unispeech — Wav2Vec2Processor (UniSpeech model) unispeech-sat — Wav2Vec2Processor (UniSpeechSat model) video_llava — VideoLlavaProcessor (VideoLlava model) vilt — ViltProcessor (ViLT model) vipllava — LlavaProcessor (VipLlava model) vision-text-dual-encoder — VisionTextDualEncoderProcessor (VisionTextDualEncoder model) wav2vec2 — 
Wav2Vec2Processor (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2Processor (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2Processor (Wav2Vec2-Conformer model) wavlm — Wav2Vec2Processor (WavLM model) whisper — WhisperProcessor (Whisper model) xclip — XCLIPProcessor (X-CLIP model) Passing token=True is required when you want to use a private model. Examples: Copied >>> from transformers import AutoProcessor >>> # Download processor from huggingface.co and cache. >>> processor = AutoProcessor.from_pretrained( "facebook/wav2vec2-base-960h" ) >>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*) >>> # processor = AutoProcessor.from_pretrained("./test/saved_model/") register < source > ( config_class processor_class exist_ok = False ) Parameters config_class ( PretrainedConfig ) — The configuration corresponding to the model to register. processor_class ( FeatureExtractorMixin ) — The processor to register. Register a new processor for this class. Generic model classes The following auto classes are available for instantiating a base model class without a specific head. AutoModel class transformers. AutoModel < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: ASTConfig configuration class: ASTModel (Audio Spectrogram Transformer model) AlbertConfig configuration class: AlbertModel (ALBERT model) AlignConfig configuration class: AlignModel (ALIGN model) AltCLIPConfig configuration class: AltCLIPModel (AltCLIP model) AriaConfig configuration class: AriaForConditionalGeneration (Aria model) AriaTextConfig configuration class: AriaTextModel (AriaText model) AutoformerConfig configuration class: AutoformerModel (Autoformer model) BambaConfig configuration class: BambaModel (Bamba model) BarkConfig configuration class: BarkModel (Bark model) BartConfig configuration class: BartModel (BART model) BeitConfig configuration class: BeitModel (BEiT model) BertConfig configuration class: BertModel (BERT model) BertGenerationConfig configuration class: BertGenerationEncoder (Bert Generation model) BigBirdConfig configuration class: BigBirdModel (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusModel (BigBird-Pegasus model) BioGptConfig configuration class: BioGptModel (BioGpt model) BitConfig configuration class: BitModel (BiT model) BlenderbotConfig configuration class: BlenderbotModel (Blenderbot model) BlenderbotSmallConfig configuration class: BlenderbotSmallModel (BlenderbotSmall model) Blip2Config configuration class: Blip2Model (BLIP-2 model) BlipConfig configuration class: BlipModel (BLIP model) BloomConfig configuration class: BloomModel (BLOOM model) BridgeTowerConfig configuration class: BridgeTowerModel (BridgeTower model) BrosConfig configuration class: BrosModel (BROS model) CLIPConfig configuration class: CLIPModel (CLIP model) CLIPSegConfig configuration class: CLIPSegModel (CLIPSeg model) CLIPTextConfig configuration class: CLIPTextModel (CLIPTextModel model) CLIPVisionConfig configuration class: CLIPVisionModel (CLIPVisionModel model) CTRLConfig configuration class: CTRLModel (CTRL model) CamembertConfig 
configuration class: CamembertModel (CamemBERT model) CanineConfig configuration class: CanineModel (CANINE model) ChameleonConfig configuration class: ChameleonModel (Chameleon model) ChineseCLIPConfig configuration class: ChineseCLIPModel (Chinese-CLIP model) ChineseCLIPVisionConfig configuration class: ChineseCLIPVisionModel (ChineseCLIPVisionModel model) ClapConfig configuration class: ClapModel (CLAP model) ClvpConfig configuration class: ClvpModelForConditionalGeneration (CLVP model) CodeGenConfig configuration class: CodeGenModel (CodeGen model) Cohere2Config configuration class: Cohere2Model (Cohere2 model) CohereConfig configuration class: CohereModel (Cohere model) ConditionalDetrConfig configuration class: ConditionalDetrModel (Conditional DETR model) ConvBertConfig configuration class: ConvBertModel (ConvBERT model) ConvNextConfig configuration class: ConvNextModel (ConvNeXT model) ConvNextV2Config configuration class: ConvNextV2Model (ConvNeXTV2 model) CpmAntConfig configuration class: CpmAntModel (CPM-Ant model) CvtConfig configuration class: CvtModel (CvT model) DPRConfig configuration class: DPRQuestionEncoder (DPR model) DPTConfig configuration class: DPTModel (DPT model) DacConfig configuration class: DacModel (DAC model) Data2VecAudioConfig configuration class: Data2VecAudioModel (Data2VecAudio model) Data2VecTextConfig configuration class: Data2VecTextModel (Data2VecText model) Data2VecVisionConfig configuration class: Data2VecVisionModel (Data2VecVision model) DbrxConfig configuration class: DbrxModel (DBRX model) DebertaConfig configuration class: DebertaModel (DeBERTa model) DebertaV2Config configuration class: DebertaV2Model (DeBERTa-v2 model) DecisionTransformerConfig configuration class: DecisionTransformerModel (Decision Transformer model) DeformableDetrConfig configuration class: DeformableDetrModel (Deformable DETR model) DeiTConfig configuration class: DeiTModel (DeiT model) DetaConfig configuration class: DetaModel (DETA model) DetrConfig configuration class: DetrModel (DETR model) DiffLlamaConfig configuration class: DiffLlamaModel (DiffLlama model) DinatConfig configuration class: DinatModel (DiNAT model) Dinov2Config configuration class: Dinov2Model (DINOv2 model) Dinov2WithRegistersConfig configuration class: Dinov2WithRegistersModel (DINOv2 with Registers model) DistilBertConfig configuration class: DistilBertModel (DistilBERT model) DonutSwinConfig configuration class: DonutSwinModel (DonutSwin model) EfficientFormerConfig configuration class: EfficientFormerModel (EfficientFormer model) EfficientNetConfig configuration class: EfficientNetModel (EfficientNet model) ElectraConfig configuration class: ElectraModel (ELECTRA model) EncodecConfig configuration class: EncodecModel (EnCodec model) ErnieConfig configuration class: ErnieModel (ERNIE model) ErnieMConfig configuration class: ErnieMModel (ErnieM model) EsmConfig configuration class: EsmModel (ESM model) FNetConfig configuration class: FNetModel (FNet model) FSMTConfig configuration class: FSMTModel (FairSeq Machine-Translation model) FalconConfig configuration class: FalconModel (Falcon model) FalconMambaConfig configuration class: FalconMambaModel (FalconMamba model) FastSpeech2ConformerConfig configuration class: FastSpeech2ConformerModel (FastSpeech2Conformer model) FlaubertConfig configuration class: FlaubertModel (FlauBERT model) FlavaConfig configuration class: FlavaModel (FLAVA model) FocalNetConfig configuration class: FocalNetModel (FocalNet model) FunnelConfig configuration class: 
FunnelModel or FunnelBaseModel (Funnel Transformer model) GLPNConfig configuration class: GLPNModel (GLPN model) GPT2Config configuration class: GPT2Model (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeModel (GPTBigCode model) GPTJConfig configuration class: GPTJModel (GPT-J model) GPTNeoConfig configuration class: GPTNeoModel (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXModel (GPT NeoX model) GPTNeoXJapaneseConfig configuration class: GPTNeoXJapaneseModel (GPT NeoX Japanese model) GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) Gemma2Config configuration class: Gemma2Model (Gemma2 model) GemmaConfig configuration class: GemmaModel (Gemma model) GitConfig configuration class: GitModel (GIT model) GlmConfig configuration class: GlmModel (GLM model) GraniteConfig configuration class: GraniteModel (Granite model) GraniteMoeConfig configuration class: GraniteMoeModel (GraniteMoeMoe model) GraphormerConfig configuration class: GraphormerModel (Graphormer model) GroundingDinoConfig configuration class: GroundingDinoModel (Grounding DINO model) GroupViTConfig configuration class: GroupViTModel (GroupViT model) HieraConfig configuration class: HieraModel (Hiera model) HubertConfig configuration class: HubertModel (Hubert model) IBertConfig configuration class: IBertModel (I-BERT model) IJepaConfig configuration class: IJepaModel (I-JEPA model) Idefics2Config configuration class: Idefics2Model (Idefics2 model) Idefics3Config configuration class: Idefics3Model (Idefics3 model) Idefics3VisionConfig configuration class: Idefics3VisionTransformer (Idefics3VisionTransformer model) IdeficsConfig configuration class: IdeficsModel (IDEFICS model) ImageGPTConfig configuration class: ImageGPTModel (ImageGPT model) InformerConfig configuration class: InformerModel (Informer model) JambaConfig configuration class: JambaModel (Jamba model) JetMoeConfig configuration class: JetMoeModel (JetMoe model) JukeboxConfig configuration class: JukeboxModel (Jukebox model) Kosmos2Config configuration class: Kosmos2Model (KOSMOS-2 model) LEDConfig configuration class: LEDModel (LED model) LayoutLMConfig configuration class: LayoutLMModel (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2Model (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3Model (LayoutLMv3 model) LevitConfig configuration class: LevitModel (LeViT model) LiltConfig configuration class: LiltModel (LiLT model) LlamaConfig configuration class: LlamaModel (LLaMA model) LongT5Config configuration class: LongT5Model (LongT5 model) LongformerConfig configuration class: LongformerModel (Longformer model) LukeConfig configuration class: LukeModel (LUKE model) LxmertConfig configuration class: LxmertModel (LXMERT model) M2M100Config configuration class: M2M100Model (M2M100 model) MBartConfig configuration class: MBartModel (mBART model) MCTCTConfig configuration class: MCTCTModel (M-CTC-T model) MPNetConfig configuration class: MPNetModel (MPNet model) MT5Config configuration class: MT5Model (MT5 model) Mamba2Config configuration class: Mamba2Model (mamba2 model) MambaConfig configuration class: MambaModel (Mamba model) MarianConfig configuration class: MarianModel (Marian model) MarkupLMConfig configuration class: MarkupLMModel (MarkupLM model) Mask2FormerConfig configuration class: Mask2FormerModel (Mask2Former model) MaskFormerConfig configuration class: MaskFormerModel (MaskFormer model) MaskFormerSwinConfig configuration class: 
MaskFormerSwinModel (MaskFormerSwin model) MegaConfig configuration class: MegaModel (MEGA model) MegatronBertConfig configuration class: MegatronBertModel (Megatron-BERT model) MgpstrConfig configuration class: MgpstrForSceneTextRecognition (MGP-STR model) MimiConfig configuration class: MimiModel (Mimi model) MistralConfig configuration class: MistralModel (Mistral model) MixtralConfig configuration class: MixtralModel (Mixtral model) MobileBertConfig configuration class: MobileBertModel (MobileBERT model) MobileNetV1Config configuration class: MobileNetV1Model (MobileNetV1 model) MobileNetV2Config configuration class: MobileNetV2Model (MobileNetV2 model) MobileViTConfig configuration class: MobileViTModel (MobileViT model) MobileViTV2Config configuration class: MobileViTV2Model (MobileViTV2 model) ModernBertConfig configuration class: ModernBertModel (ModernBERT model) MoonshineConfig configuration class: MoonshineModel (Moonshine model) MoshiConfig configuration class: MoshiModel (Moshi model) MptConfig configuration class: MptModel (MPT model) MraConfig configuration class: MraModel (MRA model) MusicgenConfig configuration class: MusicgenModel (MusicGen model) MusicgenMelodyConfig configuration class: MusicgenMelodyModel (MusicGen Melody model) MvpConfig configuration class: MvpModel (MVP model) NatConfig configuration class: NatModel (NAT model) NemotronConfig configuration class: NemotronModel (Nemotron model) NezhaConfig configuration class: NezhaModel (Nezha model) NllbMoeConfig configuration class: NllbMoeModel (NLLB-MOE model) NystromformerConfig configuration class: NystromformerModel (Nyströmformer model) OPTConfig configuration class: OPTModel (OPT model) Olmo2Config configuration class: Olmo2Model (OLMo2 model) OlmoConfig configuration class: OlmoModel (OLMo model) OlmoeConfig configuration class: OlmoeModel (OLMoE model) OmDetTurboConfig configuration class: OmDetTurboForObjectDetection (OmDet-Turbo model) OneFormerConfig configuration class: OneFormerModel (OneFormer model) OpenAIGPTConfig configuration class: OpenAIGPTModel (OpenAI GPT model) OpenLlamaConfig configuration class: OpenLlamaModel (OpenLlama model) OwlViTConfig configuration class: OwlViTModel (OWL-ViT model) Owlv2Config configuration class: Owlv2Model (OWLv2 model) PLBartConfig configuration class: PLBartModel (PLBart model) PatchTSMixerConfig configuration class: PatchTSMixerModel (PatchTSMixer model) PatchTSTConfig configuration class: PatchTSTModel (PatchTST model) PegasusConfig configuration class: PegasusModel (Pegasus model) PegasusXConfig configuration class: PegasusXModel (PEGASUS-X model) PerceiverConfig configuration class: PerceiverModel (Perceiver model) PersimmonConfig configuration class: PersimmonModel (Persimmon model) Phi3Config configuration class: Phi3Model (Phi3 model) PhiConfig configuration class: PhiModel (Phi model) PhimoeConfig configuration class: PhimoeModel (Phimoe model) PixtralVisionConfig configuration class: PixtralVisionModel (Pixtral model) PoolFormerConfig configuration class: PoolFormerModel (PoolFormer model) ProphetNetConfig configuration class: ProphetNetModel (ProphetNet model) PvtConfig configuration class: PvtModel (PVT model) PvtV2Config configuration class: PvtV2Model (PVTv2 model) QDQBertConfig configuration class: QDQBertModel (QDQBert model) Qwen2AudioEncoderConfig configuration class: Qwen2AudioEncoder (Qwen2AudioEncoder model) Qwen2Config configuration class: Qwen2Model (Qwen2 model) Qwen2MoeConfig configuration class: Qwen2MoeModel (Qwen2MoE model) 
Qwen2VLConfig configuration class: Qwen2VLModel (Qwen2VL model) RTDetrConfig configuration class: RTDetrModel (RT-DETR model) RecurrentGemmaConfig configuration class: RecurrentGemmaModel (RecurrentGemma model) ReformerConfig configuration class: ReformerModel (Reformer model) RegNetConfig configuration class: RegNetModel (RegNet model) RemBertConfig configuration class: RemBertModel (RemBERT model) ResNetConfig configuration class: ResNetModel (ResNet model) RetriBertConfig configuration class: RetriBertModel (RetriBERT model) RoCBertConfig configuration class: RoCBertModel (RoCBert model) RoFormerConfig configuration class: RoFormerModel (RoFormer model) RobertaConfig configuration class: RobertaModel (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) RwkvConfig configuration class: RwkvModel (RWKV model) SEWConfig configuration class: SEWModel (SEW model) SEWDConfig configuration class: SEWDModel (SEW-D model) SamConfig configuration class: SamModel (SAM model) SeamlessM4TConfig configuration class: SeamlessM4TModel (SeamlessM4T model) SeamlessM4Tv2Config configuration class: SeamlessM4Tv2Model (SeamlessM4Tv2 model) SegGptConfig configuration class: SegGptModel (SegGPT model) SegformerConfig configuration class: SegformerModel (SegFormer model) SiglipConfig configuration class: SiglipModel (SigLIP model) SiglipVisionConfig configuration class: SiglipVisionModel (SiglipVisionModel model) Speech2TextConfig configuration class: Speech2TextModel (Speech2Text model) SpeechT5Config configuration class: SpeechT5Model (SpeechT5 model) SplinterConfig configuration class: SplinterModel (Splinter model) SqueezeBertConfig configuration class: SqueezeBertModel (SqueezeBERT model) StableLmConfig configuration class: StableLmModel (StableLm model) Starcoder2Config configuration class: Starcoder2Model (Starcoder2 model) SwiftFormerConfig configuration class: SwiftFormerModel (SwiftFormer model) Swin2SRConfig configuration class: Swin2SRModel (Swin2SR model) SwinConfig configuration class: SwinModel (Swin Transformer model) Swinv2Config configuration class: Swinv2Model (Swin Transformer V2 model) SwitchTransformersConfig configuration class: SwitchTransformersModel (SwitchTransformers model) T5Config configuration class: T5Model (T5 model) TableTransformerConfig configuration class: TableTransformerModel (Table Transformer model) TapasConfig configuration class: TapasModel (TAPAS model) TextNetConfig configuration class: TextNetModel (TextNet model) TimeSeriesTransformerConfig configuration class: TimeSeriesTransformerModel (Time Series Transformer model) TimesformerConfig configuration class: TimesformerModel (TimeSformer model) TimmBackboneConfig configuration class: TimmBackbone (TimmBackbone model) TimmWrapperConfig configuration class: TimmWrapperModel (TimmWrapperModel model) TrajectoryTransformerConfig configuration class: TrajectoryTransformerModel (Trajectory Transformer model) TransfoXLConfig configuration class: TransfoXLModel (Transformer-XL model) TvltConfig configuration class: TvltModel (TVLT model) TvpConfig configuration class: TvpModel (TVP model) UMT5Config configuration class: UMT5Model (UMT5 model) UdopConfig configuration class: UdopModel (UDOP model) UniSpeechConfig configuration class: UniSpeechModel (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatModel (UniSpeechSat model) UnivNetConfig configuration class: UnivNetModel (UnivNet model) VanConfig configuration class: VanModel (VAN model) 
ViTConfig configuration class: ViTModel (ViT model) ViTHybridConfig configuration class: ViTHybridModel (ViT Hybrid model) ViTMAEConfig configuration class: ViTMAEModel (ViTMAE model) ViTMSNConfig configuration class: ViTMSNModel (ViTMSN model) VideoMAEConfig configuration class: VideoMAEModel (VideoMAE model) ViltConfig configuration class: ViltModel (ViLT model) VisionTextDualEncoderConfig configuration class: VisionTextDualEncoderModel (VisionTextDualEncoder model) VisualBertConfig configuration class: VisualBertModel (VisualBERT model) VitDetConfig configuration class: VitDetModel (VitDet model) VitsConfig configuration class: VitsModel (VITS model) VivitConfig configuration class: VivitModel (ViViT model) Wav2Vec2BertConfig configuration class: Wav2Vec2BertModel (Wav2Vec2-BERT model) Wav2Vec2Config configuration class: Wav2Vec2Model (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerModel (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMModel (WavLM model) WhisperConfig configuration class: WhisperModel (Whisper model) XCLIPConfig configuration class: XCLIPModel (X-CLIP model) XGLMConfig configuration class: XGLMModel (XGLM model) XLMConfig configuration class: XLMModel (XLM model) XLMProphetNetConfig configuration class: XLMProphetNetModel (XLM-ProphetNet model) XLMRobertaConfig configuration class: XLMRobertaModel (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLModel (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetModel (XLNet model) XmodConfig configuration class: XmodModel (X-MOD model) YolosConfig configuration class: YolosModel (YOLOS model) YosoConfig configuration class: YosoModel (YOSO model) ZambaConfig configuration class: ZambaModel (Zamba model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the base model classes of the library from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModel >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModel.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. 
config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the base model classes of the library from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertModel (ALBERT model) align — AlignModel (ALIGN model) altclip — AltCLIPModel (AltCLIP model) aria — AriaForConditionalGeneration (Aria model) aria_text — AriaTextModel (AriaText model) audio-spectrogram-transformer — ASTModel (Audio Spectrogram Transformer model) autoformer — AutoformerModel (Autoformer model) bamba — BambaModel (Bamba model) bark — BarkModel (Bark model) bart — BartModel (BART model) beit — BeitModel (BEiT model) bert — BertModel (BERT model) bert-generation — BertGenerationEncoder (Bert Generation model) big_bird — BigBirdModel (BigBird model) bigbird_pegasus — BigBirdPegasusModel (BigBird-Pegasus model) biogpt — BioGptModel (BioGpt model) bit — BitModel (BiT model) blenderbot — BlenderbotModel (Blenderbot model) blenderbot-small — BlenderbotSmallModel (BlenderbotSmall model) blip — BlipModel (BLIP model) blip-2 — Blip2Model (BLIP-2 model) bloom — BloomModel (BLOOM model) bridgetower — BridgeTowerModel (BridgeTower model) bros — BrosModel (BROS model) camembert — CamembertModel (CamemBERT model) canine — CanineModel (CANINE model) chameleon — ChameleonModel (Chameleon model) chinese_clip — ChineseCLIPModel (Chinese-CLIP model) chinese_clip_vision_model — ChineseCLIPVisionModel (ChineseCLIPVisionModel model) clap — ClapModel (CLAP model) clip — CLIPModel (CLIP model) clip_text_model — CLIPTextModel (CLIPTextModel model) clip_vision_model — CLIPVisionModel (CLIPVisionModel model) clipseg — CLIPSegModel (CLIPSeg model) clvp — ClvpModelForConditionalGeneration (CLVP model) code_llama — LlamaModel (CodeLlama model) codegen — CodeGenModel (CodeGen model) cohere — CohereModel (Cohere model) cohere2 — Cohere2Model (Cohere2 model) conditional_detr — ConditionalDetrModel (Conditional DETR model) convbert — ConvBertModel (ConvBERT model) convnext — ConvNextModel (ConvNeXT model) convnextv2 — ConvNextV2Model (ConvNeXTV2 model) cpmant — CpmAntModel (CPM-Ant model) ctrl — CTRLModel (CTRL model) cvt — CvtModel (CvT model) dac — DacModel (DAC model) data2vec-audio — Data2VecAudioModel (Data2VecAudio model) data2vec-text — Data2VecTextModel (Data2VecText model) data2vec-vision — Data2VecVisionModel (Data2VecVision model) dbrx — DbrxModel (DBRX model) deberta — DebertaModel (DeBERTa model) deberta-v2 — DebertaV2Model (DeBERTa-v2 model) decision_transformer — DecisionTransformerModel (Decision Transformer model) deformable_detr — DeformableDetrModel (Deformable DETR model) deit — DeiTModel (DeiT model) deta — DetaModel (DETA model) detr — 
DetrModel (DETR model) diffllama — DiffLlamaModel (DiffLlama model) dinat — DinatModel (DiNAT model) dinov2 — Dinov2Model (DINOv2 model) dinov2_with_registers — Dinov2WithRegistersModel (DINOv2 with Registers model) distilbert — DistilBertModel (DistilBERT model) donut-swin — DonutSwinModel (DonutSwin model) dpr — DPRQuestionEncoder (DPR model) dpt — DPTModel (DPT model) efficientformer — EfficientFormerModel (EfficientFormer model) efficientnet — EfficientNetModel (EfficientNet model) electra — ElectraModel (ELECTRA model) encodec — EncodecModel (EnCodec model) ernie — ErnieModel (ERNIE model) ernie_m — ErnieMModel (ErnieM model) esm — EsmModel (ESM model) falcon — FalconModel (Falcon model) falcon_mamba — FalconMambaModel (FalconMamba model) fastspeech2_conformer — FastSpeech2ConformerModel (FastSpeech2Conformer model) flaubert — FlaubertModel (FlauBERT model) flava — FlavaModel (FLAVA model) fnet — FNetModel (FNet model) focalnet — FocalNetModel (FocalNet model) fsmt — FSMTModel (FairSeq Machine-Translation model) funnel — FunnelModel or FunnelBaseModel (Funnel Transformer model) gemma — GemmaModel (Gemma model) gemma2 — Gemma2Model (Gemma2 model) git — GitModel (GIT model) glm — GlmModel (GLM model) glpn — GLPNModel (GLPN model) gpt-sw3 — GPT2Model (GPT-Sw3 model) gpt2 — GPT2Model (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeModel (GPTBigCode model) gpt_neo — GPTNeoModel (GPT Neo model) gpt_neox — GPTNeoXModel (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseModel (GPT NeoX Japanese model) gptj — GPTJModel (GPT-J model) gptsan-japanese — GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) granite — GraniteModel (Granite model) granitemoe — GraniteMoeModel (GraniteMoeMoe model) graphormer — GraphormerModel (Graphormer model) grounding-dino — GroundingDinoModel (Grounding DINO model) groupvit — GroupViTModel (GroupViT model) hiera — HieraModel (Hiera model) hubert — HubertModel (Hubert model) ibert — IBertModel (I-BERT model) idefics — IdeficsModel (IDEFICS model) idefics2 — Idefics2Model (Idefics2 model) idefics3 — Idefics3Model (Idefics3 model) idefics3_vision — Idefics3VisionTransformer (Idefics3VisionTransformer model) ijepa — IJepaModel (I-JEPA model) imagegpt — ImageGPTModel (ImageGPT model) informer — InformerModel (Informer model) jamba — JambaModel (Jamba model) jetmoe — JetMoeModel (JetMoe model) jukebox — JukeboxModel (Jukebox model) kosmos-2 — Kosmos2Model (KOSMOS-2 model) layoutlm — LayoutLMModel (LayoutLM model) layoutlmv2 — LayoutLMv2Model (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Model (LayoutLMv3 model) led — LEDModel (LED model) levit — LevitModel (LeViT model) lilt — LiltModel (LiLT model) llama — LlamaModel (LLaMA model) longformer — LongformerModel (Longformer model) longt5 — LongT5Model (LongT5 model) luke — LukeModel (LUKE model) lxmert — LxmertModel (LXMERT model) m2m_100 — M2M100Model (M2M100 model) mamba — MambaModel (Mamba model) mamba2 — Mamba2Model (mamba2 model) marian — MarianModel (Marian model) markuplm — MarkupLMModel (MarkupLM model) mask2former — Mask2FormerModel (Mask2Former model) maskformer — MaskFormerModel (MaskFormer model) maskformer-swin — MaskFormerSwinModel (MaskFormerSwin model) mbart — MBartModel (mBART model) mctct — MCTCTModel (M-CTC-T model) mega — MegaModel (MEGA model) megatron-bert — MegatronBertModel (Megatron-BERT model) mgp-str — MgpstrForSceneTextRecognition (MGP-STR model) mimi — MimiModel (Mimi model) mistral — MistralModel (Mistral model) mixtral — MixtralModel (Mixtral model) mobilebert — MobileBertModel 
(MobileBERT model) mobilenet_v1 — MobileNetV1Model (MobileNetV1 model) mobilenet_v2 — MobileNetV2Model (MobileNetV2 model) mobilevit — MobileViTModel (MobileViT model) mobilevitv2 — MobileViTV2Model (MobileViTV2 model) modernbert — ModernBertModel (ModernBERT model) moonshine — MoonshineModel (Moonshine model) moshi — MoshiModel (Moshi model) mpnet — MPNetModel (MPNet model) mpt — MptModel (MPT model) mra — MraModel (MRA model) mt5 — MT5Model (MT5 model) musicgen — MusicgenModel (MusicGen model) musicgen_melody — MusicgenMelodyModel (MusicGen Melody model) mvp — MvpModel (MVP model) nat — NatModel (NAT model) nemotron — NemotronModel (Nemotron model) nezha — NezhaModel (Nezha model) nllb-moe — NllbMoeModel (NLLB-MOE model) nystromformer — NystromformerModel (Nyströmformer model) olmo — OlmoModel (OLMo model) olmo2 — Olmo2Model (OLMo2 model) olmoe — OlmoeModel (OLMoE model) omdet-turbo — OmDetTurboForObjectDetection (OmDet-Turbo model) oneformer — OneFormerModel (OneFormer model) open-llama — OpenLlamaModel (OpenLlama model) openai-gpt — OpenAIGPTModel (OpenAI GPT model) opt — OPTModel (OPT model) owlv2 — Owlv2Model (OWLv2 model) owlvit — OwlViTModel (OWL-ViT model) patchtsmixer — PatchTSMixerModel (PatchTSMixer model) patchtst — PatchTSTModel (PatchTST model) pegasus — PegasusModel (Pegasus model) pegasus_x — PegasusXModel (PEGASUS-X model) perceiver — PerceiverModel (Perceiver model) persimmon — PersimmonModel (Persimmon model) phi — PhiModel (Phi model) phi3 — Phi3Model (Phi3 model) phimoe — PhimoeModel (Phimoe model) pixtral — PixtralVisionModel (Pixtral model) plbart — PLBartModel (PLBart model) poolformer — PoolFormerModel (PoolFormer model) prophetnet — ProphetNetModel (ProphetNet model) pvt — PvtModel (PVT model) pvt_v2 — PvtV2Model (PVTv2 model) qdqbert — QDQBertModel (QDQBert model) qwen2 — Qwen2Model (Qwen2 model) qwen2_audio_encoder — Qwen2AudioEncoder (Qwen2AudioEncoder model) qwen2_moe — Qwen2MoeModel (Qwen2MoE model) qwen2_vl — Qwen2VLModel (Qwen2VL model) recurrent_gemma — RecurrentGemmaModel (RecurrentGemma model) reformer — ReformerModel (Reformer model) regnet — RegNetModel (RegNet model) rembert — RemBertModel (RemBERT model) resnet — ResNetModel (ResNet model) retribert — RetriBertModel (RetriBERT model) roberta — RobertaModel (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) roc_bert — RoCBertModel (RoCBert model) roformer — RoFormerModel (RoFormer model) rt_detr — RTDetrModel (RT-DETR model) rwkv — RwkvModel (RWKV model) sam — SamModel (SAM model) seamless_m4t — SeamlessM4TModel (SeamlessM4T model) seamless_m4t_v2 — SeamlessM4Tv2Model (SeamlessM4Tv2 model) segformer — SegformerModel (SegFormer model) seggpt — SegGptModel (SegGPT model) sew — SEWModel (SEW model) sew-d — SEWDModel (SEW-D model) siglip — SiglipModel (SigLIP model) siglip_vision_model — SiglipVisionModel (SiglipVisionModel model) speech_to_text — Speech2TextModel (Speech2Text model) speecht5 — SpeechT5Model (SpeechT5 model) splinter — SplinterModel (Splinter model) squeezebert — SqueezeBertModel (SqueezeBERT model) stablelm — StableLmModel (StableLm model) starcoder2 — Starcoder2Model (Starcoder2 model) swiftformer — SwiftFormerModel (SwiftFormer model) swin — SwinModel (Swin Transformer model) swin2sr — Swin2SRModel (Swin2SR model) swinv2 — Swinv2Model (Swin Transformer V2 model) switch_transformers — SwitchTransformersModel (SwitchTransformers model) t5 — T5Model (T5 model) table-transformer — TableTransformerModel (Table Transformer model) tapas — TapasModel 
(TAPAS model) textnet — TextNetModel (TextNet model) time_series_transformer — TimeSeriesTransformerModel (Time Series Transformer model) timesformer — TimesformerModel (TimeSformer model) timm_backbone — TimmBackbone (TimmBackbone model) timm_wrapper — TimmWrapperModel (TimmWrapperModel model) trajectory_transformer — TrajectoryTransformerModel (Trajectory Transformer model) transfo-xl — TransfoXLModel (Transformer-XL model) tvlt — TvltModel (TVLT model) tvp — TvpModel (TVP model) udop — UdopModel (UDOP model) umt5 — UMT5Model (UMT5 model) unispeech — UniSpeechModel (UniSpeech model) unispeech-sat — UniSpeechSatModel (UniSpeechSat model) univnet — UnivNetModel (UnivNet model) van — VanModel (VAN model) videomae — VideoMAEModel (VideoMAE model) vilt — ViltModel (ViLT model) vision-text-dual-encoder — VisionTextDualEncoderModel (VisionTextDualEncoder model) visual_bert — VisualBertModel (VisualBERT model) vit — ViTModel (ViT model) vit_hybrid — ViTHybridModel (ViT Hybrid model) vit_mae — ViTMAEModel (ViTMAE model) vit_msn — ViTMSNModel (ViTMSN model) vitdet — VitDetModel (VitDet model) vits — VitsModel (VITS model) vivit — VivitModel (ViViT model) wav2vec2 — Wav2Vec2Model (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2BertModel (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2ConformerModel (Wav2Vec2-Conformer model) wavlm — WavLMModel (WavLM model) whisper — WhisperModel (Whisper model) xclip — XCLIPModel (X-CLIP model) xglm — XGLMModel (XGLM model) xlm — XLMModel (XLM model) xlm-prophetnet — XLMProphetNetModel (XLM-ProphetNet model) xlm-roberta — XLMRobertaModel (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLModel (XLM-RoBERTa-XL model) xlnet — XLNetModel (XLNet model) xmod — XmodModel (X-MOD model) yolos — YolosModel (YOLOS model) yoso — YosoModel (YOSO model) zamba — ZambaModel (Zamba model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModel >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModel.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModel.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModel.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModel class transformers. TFAutoModel < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
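As a quick orientation before the detailed parameter listings below, a minimal usage sketch (assuming TensorFlow is installed and the google-bert/bert-base-cased checkpoint, which provides TensorFlow weights on the Hub; full examples follow at the end of this class reference):
>>> from transformers import TFAutoModel
>>> # The auto class reads the checkpoint's config and instantiates the matching TensorFlow base class.
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-cased")
>>> type(model).__name__
'TFBertModel'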
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertModel (ALBERT model) BartConfig configuration class: TFBartModel (BART model) BertConfig configuration class: TFBertModel (BERT model) BlenderbotConfig configuration class: TFBlenderbotModel (Blenderbot model) BlenderbotSmallConfig configuration class: TFBlenderbotSmallModel (BlenderbotSmall model) BlipConfig configuration class: TFBlipModel (BLIP model) CLIPConfig configuration class: TFCLIPModel (CLIP model) CTRLConfig configuration class: TFCTRLModel (CTRL model) CamembertConfig configuration class: TFCamembertModel (CamemBERT model) ConvBertConfig configuration class: TFConvBertModel (ConvBERT model) ConvNextConfig configuration class: TFConvNextModel (ConvNeXT model) ConvNextV2Config configuration class: TFConvNextV2Model (ConvNeXTV2 model) CvtConfig configuration class: TFCvtModel (CvT model) DPRConfig configuration class: TFDPRQuestionEncoder (DPR model) Data2VecVisionConfig configuration class: TFData2VecVisionModel (Data2VecVision model) DebertaConfig configuration class: TFDebertaModel (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2Model (DeBERTa-v2 model) DeiTConfig configuration class: TFDeiTModel (DeiT model) DistilBertConfig configuration class: TFDistilBertModel (DistilBERT model) EfficientFormerConfig configuration class: TFEfficientFormerModel (EfficientFormer model) ElectraConfig configuration class: TFElectraModel (ELECTRA model) EsmConfig configuration class: TFEsmModel (ESM model) FlaubertConfig configuration class: TFFlaubertModel (FlauBERT model) FunnelConfig configuration class: TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model) GPT2Config configuration class: TFGPT2Model (OpenAI GPT-2 model) GPTJConfig configuration class: TFGPTJModel (GPT-J model) GroupViTConfig configuration class: TFGroupViTModel (GroupViT model) HubertConfig configuration class: TFHubertModel (Hubert model) IdeficsConfig configuration class: TFIdeficsModel (IDEFICS model) LEDConfig configuration class: TFLEDModel (LED model) LayoutLMConfig configuration class: TFLayoutLMModel (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3Model (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerModel (Longformer model) LxmertConfig configuration class: TFLxmertModel (LXMERT model) MBartConfig configuration class: TFMBartModel (mBART model) MPNetConfig configuration class: TFMPNetModel (MPNet model) MT5Config configuration class: TFMT5Model (MT5 model) MarianConfig configuration class: TFMarianModel (Marian model) MistralConfig configuration class: TFMistralModel (Mistral model) MobileBertConfig configuration class: TFMobileBertModel (MobileBERT model) MobileViTConfig configuration class: TFMobileViTModel (MobileViT model) OPTConfig configuration class: TFOPTModel (OPT model) OpenAIGPTConfig configuration class: TFOpenAIGPTModel (OpenAI GPT model) PegasusConfig configuration class: TFPegasusModel (Pegasus model) RegNetConfig configuration class: TFRegNetModel (RegNet model) RemBertConfig configuration class: TFRemBertModel (RemBERT model) ResNetConfig configuration class: TFResNetModel (ResNet model) RoFormerConfig configuration class: TFRoFormerModel (RoFormer model) RobertaConfig configuration class: TFRobertaModel (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) SamConfig 
configuration class: TFSamModel (SAM model) SegformerConfig configuration class: TFSegformerModel (SegFormer model) Speech2TextConfig configuration class: TFSpeech2TextModel (Speech2Text model) SwiftFormerConfig configuration class: TFSwiftFormerModel (SwiftFormer model) SwinConfig configuration class: TFSwinModel (Swin Transformer model) T5Config configuration class: TFT5Model (T5 model) TapasConfig configuration class: TFTapasModel (TAPAS model) TransfoXLConfig configuration class: TFTransfoXLModel (Transformer-XL model) ViTConfig configuration class: TFViTModel (ViT model) ViTMAEConfig configuration class: TFViTMAEModel (ViTMAE model) VisionTextDualEncoderConfig configuration class: TFVisionTextDualEncoderModel (VisionTextDualEncoder model) Wav2Vec2Config configuration class: TFWav2Vec2Model (Wav2Vec2 model) WhisperConfig configuration class: TFWhisperModel (Whisper model) XGLMConfig configuration class: TFXGLMModel (XGLM model) XLMConfig configuration class: TFXLMModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaModel (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetModel (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the base model classes of the library from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModel >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModel.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertModel (ALBERT model) bart — TFBartModel (BART model) bert — TFBertModel (BERT model) blenderbot — TFBlenderbotModel (Blenderbot model) blenderbot-small — TFBlenderbotSmallModel (BlenderbotSmall model) blip — TFBlipModel (BLIP model) camembert — TFCamembertModel (CamemBERT model) clip — TFCLIPModel (CLIP model) convbert — TFConvBertModel (ConvBERT model) convnext — TFConvNextModel (ConvNeXT model) convnextv2 — TFConvNextV2Model (ConvNeXTV2 model) ctrl — TFCTRLModel (CTRL model) cvt — TFCvtModel (CvT model) data2vec-vision — TFData2VecVisionModel (Data2VecVision model) deberta — TFDebertaModel (DeBERTa model) deberta-v2 — TFDebertaV2Model (DeBERTa-v2 model) deit — TFDeiTModel (DeiT model) distilbert — TFDistilBertModel (DistilBERT model) dpr — TFDPRQuestionEncoder (DPR model) efficientformer — TFEfficientFormerModel (EfficientFormer model) electra — TFElectraModel (ELECTRA model) esm — TFEsmModel (ESM model) flaubert — TFFlaubertModel (FlauBERT model) funnel — TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model) gpt-sw3 — TFGPT2Model (GPT-Sw3 model) gpt2 — TFGPT2Model (OpenAI GPT-2 model) gptj — TFGPTJModel (GPT-J model) groupvit — TFGroupViTModel (GroupViT model) hubert — TFHubertModel (Hubert model) idefics — TFIdeficsModel (IDEFICS model) layoutlm — TFLayoutLMModel (LayoutLM model) layoutlmv3 — TFLayoutLMv3Model (LayoutLMv3 model) led — TFLEDModel (LED model) longformer — TFLongformerModel (Longformer model) lxmert — TFLxmertModel (LXMERT model) marian — TFMarianModel (Marian model) mbart — TFMBartModel (mBART model) mistral — TFMistralModel (Mistral model) mobilebert — TFMobileBertModel (MobileBERT model) mobilevit — TFMobileViTModel (MobileViT model) mpnet — TFMPNetModel (MPNet model) mt5 — TFMT5Model (MT5 model) openai-gpt — TFOpenAIGPTModel (OpenAI GPT model) opt — TFOPTModel (OPT model) pegasus — TFPegasusModel (Pegasus model) regnet — TFRegNetModel (RegNet model) rembert — TFRemBertModel (RemBERT model) resnet — TFResNetModel (ResNet model) roberta — TFRobertaModel (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) roformer — TFRoFormerModel (RoFormer model) sam — TFSamModel (SAM model) segformer — TFSegformerModel (SegFormer model) speech_to_text — TFSpeech2TextModel (Speech2Text model) swiftformer — TFSwiftFormerModel (SwiftFormer model) swin — TFSwinModel (Swin Transformer model) t5 — TFT5Model (T5 model) tapas — TFTapasModel (TAPAS model) transfo-xl — TFTransfoXLModel (Transformer-XL model) vision-text-dual-encoder — TFVisionTextDualEncoderModel (VisionTextDualEncoder model) vit — TFViTModel (ViT model) vit_mae — TFViTMAEModel (ViTMAE model) wav2vec2 — TFWav2Vec2Model (Wav2Vec2 model) whisper — TFWhisperModel (Whisper model) xglm — TFXGLMModel (XGLM model) xlm — TFXLMModel (XLM model) xlm-roberta — TFXLMRobertaModel (XLM-RoBERTa model) xlnet — TFXLNetModel (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModel >>> # Download model and configuration from huggingface.co and cache. 
>>> model = TFAutoModel.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModel.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModel.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModel class transformers. FlaxAutoModel < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertModel (ALBERT model) BartConfig configuration class: FlaxBartModel (BART model) BeitConfig configuration class: FlaxBeitModel (BEiT model) BertConfig configuration class: FlaxBertModel (BERT model) BigBirdConfig configuration class: FlaxBigBirdModel (BigBird model) BlenderbotConfig configuration class: FlaxBlenderbotModel (Blenderbot model) BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallModel (BlenderbotSmall model) BloomConfig configuration class: FlaxBloomModel (BLOOM model) CLIPConfig configuration class: FlaxCLIPModel (CLIP model) Dinov2Config configuration class: FlaxDinov2Model (DINOv2 model) DistilBertConfig configuration class: FlaxDistilBertModel (DistilBERT model) ElectraConfig configuration class: FlaxElectraModel (ELECTRA model) GPT2Config configuration class: FlaxGPT2Model (OpenAI GPT-2 model) GPTJConfig configuration class: FlaxGPTJModel (GPT-J model) GPTNeoConfig configuration class: FlaxGPTNeoModel (GPT Neo model) GemmaConfig configuration class: FlaxGemmaModel (Gemma model) LlamaConfig configuration class: FlaxLlamaModel (LLaMA model) LongT5Config configuration class: FlaxLongT5Model (LongT5 model) MBartConfig configuration class: FlaxMBartModel (mBART model) MT5Config configuration class: FlaxMT5Model (MT5 model) MarianConfig configuration class: FlaxMarianModel (Marian model) MistralConfig configuration class: FlaxMistralModel (Mistral model) OPTConfig configuration class: FlaxOPTModel (OPT model) PegasusConfig configuration class: FlaxPegasusModel (Pegasus model) RegNetConfig configuration class: FlaxRegNetModel (RegNet model) ResNetConfig configuration class: FlaxResNetModel (ResNet model) RoFormerConfig configuration class: FlaxRoFormerModel (RoFormer model) RobertaConfig configuration class: FlaxRobertaModel (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) T5Config configuration class: FlaxT5Model (T5 model) ViTConfig configuration class: FlaxViTModel (ViT model) VisionTextDualEncoderConfig configuration class: FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model) Wav2Vec2Config configuration class: FlaxWav2Vec2Model (Wav2Vec2 model) WhisperConfig configuration class: FlaxWhisperModel (Whisper model) XGLMConfig configuration class: FlaxXGLMModel (XGLM model) XLMRobertaConfig configuration class: FlaxXLMRobertaModel (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use 
in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the base model classes of the library from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModel >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModel.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertModel (ALBERT model) bart — FlaxBartModel (BART model) beit — FlaxBeitModel (BEiT model) bert — FlaxBertModel (BERT model) big_bird — FlaxBigBirdModel (BigBird model) blenderbot — FlaxBlenderbotModel (Blenderbot model) blenderbot-small — FlaxBlenderbotSmallModel (BlenderbotSmall model) bloom — FlaxBloomModel (BLOOM model) clip — FlaxCLIPModel (CLIP model) dinov2 — FlaxDinov2Model (DINOv2 model) distilbert — FlaxDistilBertModel (DistilBERT model) electra — FlaxElectraModel (ELECTRA model) gemma — FlaxGemmaModel (Gemma model) gpt-sw3 — FlaxGPT2Model (GPT-Sw3 model) gpt2 — FlaxGPT2Model (OpenAI GPT-2 model) gpt_neo — FlaxGPTNeoModel (GPT Neo model) gptj — FlaxGPTJModel (GPT-J model) llama — FlaxLlamaModel (LLaMA model) longt5 — FlaxLongT5Model (LongT5 model) marian — FlaxMarianModel (Marian model) mbart — FlaxMBartModel (mBART model) mistral — FlaxMistralModel (Mistral model) mt5 — FlaxMT5Model (MT5 model) opt — FlaxOPTModel (OPT model) pegasus — FlaxPegasusModel (Pegasus model) regnet — FlaxRegNetModel (RegNet model) resnet — FlaxResNetModel (ResNet model) roberta — FlaxRobertaModel (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerModel (RoFormer model) t5 — FlaxT5Model (T5 model) vision-text-dual-encoder — FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model) vit — FlaxViTModel (ViT model) wav2vec2 — FlaxWav2Vec2Model (Wav2Vec2 model) whisper — FlaxWhisperModel (Whisper model) xglm — FlaxXGLMModel (XGLM model) xlm-roberta — FlaxXLMRobertaModel (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModel >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModel.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModel.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModel.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) Generic pretraining classes The following auto classes are available for instantiating a model with a pretraining head. AutoModelForPreTraining class transformers. AutoModelForPreTraining < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
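A minimal usage sketch for this class (assuming the google-bert/bert-base-cased checkpoint, whose pretraining-head class is BertForPreTraining per the configuration mapping below):
>>> from transformers import AutoModelForPreTraining
>>> # Resolves to the class carrying the architecture's pretraining head(s),
>>> # e.g. BertForPreTraining (masked-LM + next-sentence-prediction heads) for a BERT checkpoint.
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")
>>> type(model).__name__
'BertForPreTraining'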
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForPreTraining (ALBERT model) BartConfig configuration class: BartForConditionalGeneration (BART model) BertConfig configuration class: BertForPreTraining (BERT model) BigBirdConfig configuration class: BigBirdForPreTraining (BigBird model) BloomConfig configuration class: BloomForCausalLM (BLOOM model) CTRLConfig configuration class: CTRLLMHeadModel (CTRL model) CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model) ColPaliConfig configuration class: ColPaliForRetrieval (ColPali model) Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model) DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: ElectraForPreTraining (ELECTRA model) ErnieConfig configuration class: ErnieForPreTraining (ERNIE model) FNetConfig configuration class: FNetForPreTraining (FNet model) FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model) FalconMambaConfig configuration class: FalconMambaForCausalLM (FalconMamba model) FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model) FlavaConfig configuration class: FlavaForPreTraining (FLAVA model) FunnelConfig configuration class: FunnelForPreTraining (Funnel Transformer model) GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model) GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) HieraConfig configuration class: HieraForPreTraining (Hiera model) IBertConfig configuration class: IBertForMaskedLM (I-BERT model) Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model) Idefics3Config configuration class: Idefics3ForConditionalGeneration (Idefics3 model) IdeficsConfig configuration class: IdeficsForVisionText2Text (IDEFICS model) LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model) LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model) LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model) LlavaNextVideoConfig configuration class: LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model) LlavaOnevisionConfig configuration class: LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model) LongformerConfig configuration class: LongformerForMaskedLM (Longformer model) LukeConfig configuration class: LukeForMaskedLM (LUKE model) LxmertConfig configuration class: LxmertForPreTraining (LXMERT model) MPNetConfig configuration class: MPNetForMaskedLM (MPNet model) Mamba2Config configuration class: Mamba2ForCausalLM (mamba2 model) MambaConfig configuration class: MambaForCausalLM (Mamba model) MegaConfig configuration class: MegaForMaskedLM (MEGA model) MegatronBertConfig configuration class: MegatronBertForPreTraining (Megatron-BERT model) MllamaConfig configuration class: MllamaForConditionalGeneration (Mllama model) MobileBertConfig configuration class: MobileBertForPreTraining (MobileBERT model) MptConfig configuration class: MptForCausalLM (MPT model) MraConfig configuration class: MraForMaskedLM (MRA model) MvpConfig 
configuration class: MvpForConditionalGeneration (MVP model) NezhaConfig configuration class: NezhaForPreTraining (Nezha model) NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model) OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model) PaliGemmaConfig configuration class: PaliGemmaForConditionalGeneration (PaliGemma model) Qwen2AudioConfig configuration class: Qwen2AudioForConditionalGeneration (Qwen2Audio model) RetriBertConfig configuration class: RetriBertModel (RetriBERT model) RoCBertConfig configuration class: RoCBertForPreTraining (RoCBert model) RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) RwkvConfig configuration class: RwkvForCausalLM (RWKV model) SplinterConfig configuration class: SplinterForPreTraining (Splinter model) SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model) SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model) T5Config configuration class: T5ForConditionalGeneration (T5 model) TapasConfig configuration class: TapasForMaskedLM (TAPAS model) TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model) TvltConfig configuration class: TvltForPreTraining (TVLT model) UniSpeechConfig configuration class: UniSpeechForPreTraining (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatForPreTraining (UniSpeechSat model) ViTMAEConfig configuration class: ViTMAEForPreTraining (ViTMAE model) VideoLlavaConfig configuration class: VideoLlavaForConditionalGeneration (VideoLlava model) VideoMAEConfig configuration class: VideoMAEForPreTraining (VideoMAE model) VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model) VisualBertConfig configuration class: VisualBertForPreTraining (VisualBERT model) Wav2Vec2Config configuration class: Wav2Vec2ForPreTraining (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model) XLMConfig configuration class: XLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetLMHeadModel (XLNet model) XmodConfig configuration class: XmodForMaskedLM (X-MOD model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a pretraining head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForPreTraining >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForPreTraining.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files.
This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertForPreTraining (ALBERT model) bart — BartForConditionalGeneration (BART model) bert — BertForPreTraining (BERT model) big_bird — BigBirdForPreTraining (BigBird model) bloom — BloomForCausalLM (BLOOM model) camembert — CamembertForMaskedLM (CamemBERT model) colpali — ColPaliForRetrieval (ColPali model) ctrl — CTRLLMHeadModel (CTRL model) data2vec-text — Data2VecTextForMaskedLM (Data2VecText model) deberta — DebertaForMaskedLM (DeBERTa model) deberta-v2 — DebertaV2ForMaskedLM (DeBERTa-v2 model) distilbert — DistilBertForMaskedLM (DistilBERT model) electra — ElectraForPreTraining (ELECTRA model) ernie — ErnieForPreTraining (ERNIE model) falcon_mamba — FalconMambaForCausalLM (FalconMamba model) flaubert — FlaubertWithLMHeadModel (FlauBERT model) flava — FlavaForPreTraining (FLAVA model) fnet — FNetForPreTraining (FNet model) fsmt — FSMTForConditionalGeneration (FairSeq Machine-Translation model) funnel — FunnelForPreTraining (Funnel Transformer model) gpt-sw3 — GPT2LMHeadModel (GPT-Sw3 model) gpt2 — GPT2LMHeadModel (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForCausalLM (GPTBigCode model) gptsan-japanese — GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) hiera — HieraForPreTraining (Hiera model) ibert — IBertForMaskedLM (I-BERT model) idefics — IdeficsForVisionText2Text (IDEFICS model) idefics2 — Idefics2ForConditionalGeneration (Idefics2 model) idefics3 — Idefics3ForConditionalGeneration (Idefics3 model) layoutlm — LayoutLMForMaskedLM (LayoutLM model) llava — LlavaForConditionalGeneration (LLaVa model) llava_next — LlavaNextForConditionalGeneration (LLaVA-NeXT model) llava_next_video — LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model) llava_onevision —
LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model) longformer — LongformerForMaskedLM (Longformer model) luke — LukeForMaskedLM (LUKE model) lxmert — LxmertForPreTraining (LXMERT model) mamba — MambaForCausalLM (Mamba model) mamba2 — Mamba2ForCausalLM (mamba2 model) mega — MegaForMaskedLM (MEGA model) megatron-bert — MegatronBertForPreTraining (Megatron-BERT model) mllama — MllamaForConditionalGeneration (Mllama model) mobilebert — MobileBertForPreTraining (MobileBERT model) mpnet — MPNetForMaskedLM (MPNet model) mpt — MptForCausalLM (MPT model) mra — MraForMaskedLM (MRA model) mvp — MvpForConditionalGeneration (MVP model) nezha — NezhaForPreTraining (Nezha model) nllb-moe — NllbMoeForConditionalGeneration (NLLB-MOE model) openai-gpt — OpenAIGPTLMHeadModel (OpenAI GPT model) paligemma — PaliGemmaForConditionalGeneration (PaliGemma model) qwen2_audio — Qwen2AudioForConditionalGeneration (Qwen2Audio model) retribert — RetriBertModel (RetriBERT model) roberta — RobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForPreTraining (RoCBert model) rwkv — RwkvForCausalLM (RWKV model) splinter — SplinterForPreTraining (Splinter model) squeezebert — SqueezeBertForMaskedLM (SqueezeBERT model) switch_transformers — SwitchTransformersForConditionalGeneration (SwitchTransformers model) t5 — T5ForConditionalGeneration (T5 model) tapas — TapasForMaskedLM (TAPAS model) transfo-xl — TransfoXLLMHeadModel (Transformer-XL model) tvlt — TvltForPreTraining (TVLT model) unispeech — UniSpeechForPreTraining (UniSpeech model) unispeech-sat — UniSpeechSatForPreTraining (UniSpeechSat model) video_llava — VideoLlavaForConditionalGeneration (VideoLlava model) videomae — VideoMAEForPreTraining (VideoMAE model) vipllava — VipLlavaForConditionalGeneration (VipLlava model) visual_bert — VisualBertForPreTraining (VisualBERT model) vit_mae — ViTMAEForPreTraining (ViTMAE model) wav2vec2 — Wav2Vec2ForPreTraining (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model) xlm — XLMWithLMHeadModel (XLM model) xlm-roberta — XLMRobertaForMaskedLM (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) xlnet — XLNetLMHeadModel (XLNet model) xmod — XmodForMaskedLM (X-MOD model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForPreTraining >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForPreTraining.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForPreTraining.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForPreTraining.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForPreTraining class transformers. 
TFAutoModelForPreTraining < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForPreTraining (ALBERT model) BartConfig configuration class: TFBartForConditionalGeneration (BART model) BertConfig configuration class: TFBertForPreTraining (BERT model) CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model) CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model) DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: TFElectraForPreTraining (ELECTRA model) FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model) FunnelConfig configuration class: TFFunnelForPreTraining (Funnel Transformer model) GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model) IdeficsConfig configuration class: TFIdeficsForVisionText2Text (IDEFICS model) LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model) LxmertConfig configuration class: TFLxmertForPreTraining (LXMERT model) MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model) MobileBertConfig configuration class: TFMobileBertForPreTraining (MobileBERT model) OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model) RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) T5Config configuration class: TFT5ForConditionalGeneration (T5 model) TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model) TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model) ViTMAEConfig configuration class: TFViTMAEForPreTraining (ViTMAE model) XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a pretraining head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForPreTraining >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForPreTraining.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model.
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertForPreTraining (ALBERT model) bart — TFBartForConditionalGeneration (BART model) bert — TFBertForPreTraining (BERT model) camembert — TFCamembertForMaskedLM (CamemBERT model) ctrl — TFCTRLLMHeadModel (CTRL model) distilbert — TFDistilBertForMaskedLM (DistilBERT model) electra — TFElectraForPreTraining (ELECTRA model) flaubert — TFFlaubertWithLMHeadModel (FlauBERT model) funnel — TFFunnelForPreTraining (Funnel Transformer model) gpt-sw3 — TFGPT2LMHeadModel (GPT-Sw3 model) gpt2 — TFGPT2LMHeadModel (OpenAI GPT-2 model) idefics — TFIdeficsForVisionText2Text (IDEFICS model) layoutlm — TFLayoutLMForMaskedLM (LayoutLM model) lxmert — TFLxmertForPreTraining (LXMERT model) mobilebert — TFMobileBertForPreTraining (MobileBERT model) mpnet — TFMPNetForMaskedLM (MPNet model) openai-gpt — TFOpenAIGPTLMHeadModel (OpenAI GPT model) roberta — TFRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) t5 — TFT5ForConditionalGeneration (T5 model) tapas — TFTapasForMaskedLM (TAPAS model) transfo-xl — TFTransfoXLLMHeadModel (Transformer-XL model) vit_mae — TFViTMAEForPreTraining (ViTMAE model) xlm — TFXLMWithLMHeadModel (XLM model) xlm-roberta — TFXLMRobertaForMaskedLM (XLM-RoBERTa model) xlnet — TFXLNetLMHeadModel (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForPreTraining >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForPreTraining.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForPreTraining.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForPreTraining.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForPreTraining class transformers.
FlaxAutoModelForPreTraining < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForPreTraining (ALBERT model) BartConfig configuration class: FlaxBartForConditionalGeneration (BART model) BertConfig configuration class: FlaxBertForPreTraining (BERT model) BigBirdConfig configuration class: FlaxBigBirdForPreTraining (BigBird model) ElectraConfig configuration class: FlaxElectraForPreTraining (ELECTRA model) LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model) MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model) MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model) RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model) Wav2Vec2Config configuration class: FlaxWav2Vec2ForPreTraining (Wav2Vec2 model) WhisperConfig configuration class: FlaxWhisperForConditionalGeneration (Whisper model) XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a pretraining head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForPreTraining >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForPreTraining.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method.
config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertForPreTraining (ALBERT model) bart — FlaxBartForConditionalGeneration (BART model) bert — FlaxBertForPreTraining (BERT model) big_bird — FlaxBigBirdForPreTraining (BigBird model) electra — FlaxElectraForPreTraining (ELECTRA model) longt5 — FlaxLongT5ForConditionalGeneration (LongT5 model) mbart — FlaxMBartForConditionalGeneration (mBART model) mt5 — FlaxMT5ForConditionalGeneration (MT5 model) roberta — FlaxRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForMaskedLM (RoFormer model) t5 — FlaxT5ForConditionalGeneration (T5 model) wav2vec2 — FlaxWav2Vec2ForPreTraining (Wav2Vec2 model) whisper — FlaxWhisperForConditionalGeneration (Whisper model) xlm-roberta — FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForPreTraining >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForPreTraining.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForPreTraining.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForPreTraining.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) Natural Language Processing The following auto classes are available for the following natural language processing tasks. AutoModelForCausalLM class transformers. AutoModelForCausalLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
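Before the method-level reference, a minimal end-to-end sketch of how this class is typically paired with a tokenizer for text generation. The openai-community/gpt2 checkpoint, the prompt, and the generation length are illustrative assumptions only:
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")  # returned in eval mode
>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))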
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AriaTextConfig configuration class: AriaTextForCausalLM (AriaText model) BambaConfig configuration class: BambaForCausalLM (Bamba model) BartConfig configuration class: BartForCausalLM (BART model) BertConfig configuration class: BertLMHeadModel (BERT model) BertGenerationConfig configuration class: BertGenerationDecoder (Bert Generation model) BigBirdConfig configuration class: BigBirdForCausalLM (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusForCausalLM (BigBird-Pegasus model) BioGptConfig configuration class: BioGptForCausalLM (BioGpt model) BlenderbotConfig configuration class: BlenderbotForCausalLM (Blenderbot model) BlenderbotSmallConfig configuration class: BlenderbotSmallForCausalLM (BlenderbotSmall model) BloomConfig configuration class: BloomForCausalLM (BLOOM model) CTRLConfig configuration class: CTRLLMHeadModel (CTRL model) CamembertConfig configuration class: CamembertForCausalLM (CamemBERT model) CodeGenConfig configuration class: CodeGenForCausalLM (CodeGen model) Cohere2Config configuration class: Cohere2ForCausalLM (Cohere2 model) CohereConfig configuration class: CohereForCausalLM (Cohere model) CpmAntConfig configuration class: CpmAntForCausalLM (CPM-Ant model) Data2VecTextConfig configuration class: Data2VecTextForCausalLM (Data2VecText model) DbrxConfig configuration class: DbrxForCausalLM (DBRX model) DiffLlamaConfig configuration class: DiffLlamaForCausalLM (DiffLlama model) ElectraConfig configuration class: ElectraForCausalLM (ELECTRA model) Emu3Config configuration class: Emu3ForCausalLM (Emu3 model) ErnieConfig configuration class: ErnieForCausalLM (ERNIE model) FalconConfig configuration class: FalconForCausalLM (Falcon model) FalconMambaConfig configuration class: FalconMambaForCausalLM (FalconMamba model) FuyuConfig configuration class: FuyuForCausalLM (Fuyu model) GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model) GPTJConfig configuration class: GPTJForCausalLM (GPT-J model) GPTNeoConfig configuration class: GPTNeoForCausalLM (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForCausalLM (GPT NeoX model) GPTNeoXJapaneseConfig configuration class: GPTNeoXJapaneseForCausalLM (GPT NeoX Japanese model) Gemma2Config configuration class: Gemma2ForCausalLM (Gemma2 model) GemmaConfig configuration class: GemmaForCausalLM (Gemma model) GitConfig configuration class: GitForCausalLM (GIT model) GlmConfig configuration class: GlmForCausalLM (GLM model) GraniteConfig configuration class: GraniteForCausalLM (Granite model) GraniteMoeConfig configuration class: GraniteMoeForCausalLM (GraniteMoeMoe model) JambaConfig configuration class: JambaForCausalLM (Jamba model) JetMoeConfig configuration class: JetMoeForCausalLM (JetMoe model) LlamaConfig configuration class: LlamaForCausalLM (LLaMA model) MBartConfig configuration class: MBartForCausalLM (mBART model) Mamba2Config configuration class: Mamba2ForCausalLM (mamba2 model) MambaConfig configuration class: MambaForCausalLM (Mamba model) MarianConfig configuration class: MarianForCausalLM (Marian model) MegaConfig configuration class: MegaForCausalLM (MEGA model) MegatronBertConfig configuration class: MegatronBertForCausalLM (Megatron-BERT model) MistralConfig configuration class: MistralForCausalLM (Mistral model) MixtralConfig configuration 
class: MixtralForCausalLM (Mixtral model) MllamaConfig configuration class: MllamaForCausalLM (Mllama model) MoshiConfig configuration class: MoshiForCausalLM (Moshi model) MptConfig configuration class: MptForCausalLM (MPT model) MusicgenConfig configuration class: MusicgenForCausalLM (MusicGen model) MusicgenMelodyConfig configuration class: MusicgenMelodyForCausalLM (MusicGen Melody model) MvpConfig configuration class: MvpForCausalLM (MVP model) NemotronConfig configuration class: NemotronForCausalLM (Nemotron model) OPTConfig configuration class: OPTForCausalLM (OPT model) Olmo2Config configuration class: Olmo2ForCausalLM (OLMo2 model) OlmoConfig configuration class: OlmoForCausalLM (OLMo model) OlmoeConfig configuration class: OlmoeForCausalLM (OLMoE model) OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model) OpenLlamaConfig configuration class: OpenLlamaForCausalLM (OpenLlama model) PLBartConfig configuration class: PLBartForCausalLM (PLBart model) PegasusConfig configuration class: PegasusForCausalLM (Pegasus model) PersimmonConfig configuration class: PersimmonForCausalLM (Persimmon model) Phi3Config configuration class: Phi3ForCausalLM (Phi3 model) PhiConfig configuration class: PhiForCausalLM (Phi model) PhimoeConfig configuration class: PhimoeForCausalLM (Phimoe model) ProphetNetConfig configuration class: ProphetNetForCausalLM (ProphetNet model) QDQBertConfig configuration class: QDQBertLMHeadModel (QDQBert model) Qwen2Config configuration class: Qwen2ForCausalLM (Qwen2 model) Qwen2MoeConfig configuration class: Qwen2MoeForCausalLM (Qwen2MoE model) RecurrentGemmaConfig configuration class: RecurrentGemmaForCausalLM (RecurrentGemma model) ReformerConfig configuration class: ReformerModelWithLMHead (Reformer model) RemBertConfig configuration class: RemBertForCausalLM (RemBERT model) RoCBertConfig configuration class: RoCBertForCausalLM (RoCBert model) RoFormerConfig configuration class: RoFormerForCausalLM (RoFormer model) RobertaConfig configuration class: RobertaForCausalLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) RwkvConfig configuration class: RwkvForCausalLM (RWKV model) Speech2Text2Config configuration class: Speech2Text2ForCausalLM (Speech2Text2 model) StableLmConfig configuration class: StableLmForCausalLM (StableLm model) Starcoder2Config configuration class: Starcoder2ForCausalLM (Starcoder2 model) TrOCRConfig configuration class: TrOCRForCausalLM (TrOCR model) TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model) WhisperConfig configuration class: WhisperForCausalLM (Whisper model) XGLMConfig configuration class: XGLMForCausalLM (XGLM model) XLMConfig configuration class: XLMWithLMHeadModel (XLM model) XLMProphetNetConfig configuration class: XLMProphetNetForCausalLM (XLM-ProphetNet model) XLMRobertaConfig configuration class: XLMRobertaForCausalLM (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForCausalLM (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetLMHeadModel (XLNet model) XmodConfig configuration class: XmodForCausalLM (X-MOD model) ZambaConfig configuration class: ZambaForCausalLM (Zamba model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). 
By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForCausalLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForCausalLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : aria_text — AriaTextForCausalLM (AriaText model) bamba — BambaForCausalLM (Bamba model) bart — BartForCausalLM (BART model) bert — BertLMHeadModel (BERT model) bert-generation — BertGenerationDecoder (Bert Generation model) big_bird — BigBirdForCausalLM (BigBird model) bigbird_pegasus — BigBirdPegasusForCausalLM (BigBird-Pegasus model) biogpt — BioGptForCausalLM (BioGpt model) blenderbot — BlenderbotForCausalLM (Blenderbot model) blenderbot-small — BlenderbotSmallForCausalLM (BlenderbotSmall model) bloom — BloomForCausalLM (BLOOM model) camembert — CamembertForCausalLM (CamemBERT model) code_llama — LlamaForCausalLM (CodeLlama model) codegen — CodeGenForCausalLM (CodeGen model) cohere — CohereForCausalLM (Cohere model) cohere2 — Cohere2ForCausalLM (Cohere2 model) cpmant — CpmAntForCausalLM (CPM-Ant model) ctrl — CTRLLMHeadModel (CTRL model) data2vec-text — Data2VecTextForCausalLM (Data2VecText model) dbrx — DbrxForCausalLM (DBRX model) diffllama — DiffLlamaForCausalLM (DiffLlama model) electra — ElectraForCausalLM (ELECTRA model) emu3 — Emu3ForCausalLM (Emu3 model) ernie — ErnieForCausalLM (ERNIE model) falcon — FalconForCausalLM (Falcon model) falcon_mamba — FalconMambaForCausalLM (FalconMamba model) fuyu — FuyuForCausalLM (Fuyu model) gemma — GemmaForCausalLM (Gemma model) gemma2 — Gemma2ForCausalLM (Gemma2 model) git — GitForCausalLM (GIT model) glm — GlmForCausalLM (GLM model) gpt-sw3 — GPT2LMHeadModel (GPT-Sw3 model) gpt2 — GPT2LMHeadModel (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForCausalLM (GPTBigCode model) gpt_neo — GPTNeoForCausalLM (GPT Neo model) gpt_neox — GPTNeoXForCausalLM (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseForCausalLM (GPT NeoX Japanese model) gptj — GPTJForCausalLM (GPT-J model) granite — GraniteForCausalLM (Granite model) granitemoe — GraniteMoeForCausalLM (GraniteMoeMoe model) jamba — JambaForCausalLM (Jamba model) jetmoe — JetMoeForCausalLM (JetMoe model) llama — LlamaForCausalLM (LLaMA model) mamba — MambaForCausalLM (Mamba model) mamba2 — Mamba2ForCausalLM (mamba2 model) marian — MarianForCausalLM (Marian model) mbart — MBartForCausalLM (mBART model) mega — MegaForCausalLM (MEGA model) megatron-bert — MegatronBertForCausalLM (Megatron-BERT model) mistral — MistralForCausalLM (Mistral model) mixtral — MixtralForCausalLM (Mixtral model) mllama — MllamaForCausalLM (Mllama model) moshi — MoshiForCausalLM (Moshi model) mpt — MptForCausalLM (MPT model) musicgen — MusicgenForCausalLM (MusicGen model) musicgen_melody — MusicgenMelodyForCausalLM (MusicGen Melody model) mvp — MvpForCausalLM (MVP model) nemotron — NemotronForCausalLM (Nemotron model) olmo — OlmoForCausalLM (OLMo model) olmo2 — Olmo2ForCausalLM (OLMo2 model) olmoe — OlmoeForCausalLM (OLMoE model) open-llama — OpenLlamaForCausalLM (OpenLlama model) openai-gpt — OpenAIGPTLMHeadModel (OpenAI GPT model) opt — OPTForCausalLM (OPT model) pegasus — PegasusForCausalLM (Pegasus model) persimmon — PersimmonForCausalLM (Persimmon model) phi — PhiForCausalLM (Phi model) phi3 — Phi3ForCausalLM (Phi3 model) phimoe — PhimoeForCausalLM (Phimoe model) plbart — PLBartForCausalLM (PLBart model) prophetnet — ProphetNetForCausalLM (ProphetNet model) qdqbert — QDQBertLMHeadModel (QDQBert model) qwen2 — Qwen2ForCausalLM (Qwen2 model) 
qwen2_moe — Qwen2MoeForCausalLM (Qwen2MoE model) recurrent_gemma — RecurrentGemmaForCausalLM (RecurrentGemma model) reformer — ReformerModelWithLMHead (Reformer model) rembert — RemBertForCausalLM (RemBERT model) roberta — RobertaForCausalLM (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForCausalLM (RoCBert model) roformer — RoFormerForCausalLM (RoFormer model) rwkv — RwkvForCausalLM (RWKV model) speech_to_text_2 — Speech2Text2ForCausalLM (Speech2Text2 model) stablelm — StableLmForCausalLM (StableLm model) starcoder2 — Starcoder2ForCausalLM (Starcoder2 model) transfo-xl — TransfoXLLMHeadModel (Transformer-XL model) trocr — TrOCRForCausalLM (TrOCR model) whisper — WhisperForCausalLM (Whisper model) xglm — XGLMForCausalLM (XGLM model) xlm — XLMWithLMHeadModel (XLM model) xlm-prophetnet — XLMProphetNetForCausalLM (XLM-ProphetNet model) xlm-roberta — XLMRobertaForCausalLM (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForCausalLM (XLM-RoBERTa-XL model) xlnet — XLNetLMHeadModel (XLNet model) xmod — XmodForCausalLM (X-MOD model) zamba — ZambaForCausalLM (Zamba model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForCausalLM >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForCausalLM.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForCausalLM.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForCausalLM.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForCausalLM class transformers. TFAutoModelForCausalLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
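The TensorFlow class is used the same way; only the tensor format changes. A minimal sketch under the same assumptions (openai-community/gpt2, illustrative prompt and generation length):
>>> from transformers import AutoTokenizer, TFAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Hello, my dog is", return_tensors="tf")
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))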
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: TFBertLMHeadModel (BERT model) CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model) CamembertConfig configuration class: TFCamembertForCausalLM (CamemBERT model) GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model) GPTJConfig configuration class: TFGPTJForCausalLM (GPT-J model) MistralConfig configuration class: TFMistralForCausalLM (Mistral model) OPTConfig configuration class: TFOPTForCausalLM (OPT model) OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model) RemBertConfig configuration class: TFRemBertForCausalLM (RemBERT model) RoFormerConfig configuration class: TFRoFormerForCausalLM (RoFormer model) RobertaConfig configuration class: TFRobertaForCausalLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model) XGLMConfig configuration class: TFXGLMForCausalLM (XGLM model) XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForCausalLM (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForCausalLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForCausalLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bert — TFBertLMHeadModel (BERT model) camembert — TFCamembertForCausalLM (CamemBERT model) ctrl — TFCTRLLMHeadModel (CTRL model) gpt-sw3 — TFGPT2LMHeadModel (GPT-Sw3 model) gpt2 — TFGPT2LMHeadModel (OpenAI GPT-2 model) gptj — TFGPTJForCausalLM (GPT-J model) mistral — TFMistralForCausalLM (Mistral model) openai-gpt — TFOpenAIGPTLMHeadModel (OpenAI GPT model) opt — TFOPTForCausalLM (OPT model) rembert — TFRemBertForCausalLM (RemBERT model) roberta — TFRobertaForCausalLM (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForCausalLM (RoFormer model) transfo-xl — TFTransfoXLLMHeadModel (Transformer-XL model) xglm — TFXGLMForCausalLM (XGLM model) xlm — TFXLMWithLMHeadModel (XLM model) xlm-roberta — TFXLMRobertaForCausalLM (XLM-RoBERTa model) xlnet — TFXLNetLMHeadModel (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForCausalLM >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForCausalLM.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForCausalLM.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForCausalLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForCausalLM class transformers. FlaxAutoModelForCausalLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
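For completeness, a minimal sketch of the same workflow with the Flax auto class, assuming the chosen checkpoint (here openai-community/gpt2, an illustrative choice) also provides Flax weights; otherwise from_pt=True would be needed. Note that Flax generation returns an output object whose sequences field holds the generated token ids.
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Hello, my name is", return_tensors="np")
>>> # pad_token_id is set explicitly because GPT-2 defines no padding token
>>> outputs = model.generate(inputs["input_ids"], max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
>>> print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))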
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: FlaxBartForCausalLM (BART model) BertConfig configuration class: FlaxBertForCausalLM (BERT model) BigBirdConfig configuration class: FlaxBigBirdForCausalLM (BigBird model) BloomConfig configuration class: FlaxBloomForCausalLM (BLOOM model) ElectraConfig configuration class: FlaxElectraForCausalLM (ELECTRA model) GPT2Config configuration class: FlaxGPT2LMHeadModel (OpenAI GPT-2 model) GPTJConfig configuration class: FlaxGPTJForCausalLM (GPT-J model) GPTNeoConfig configuration class: FlaxGPTNeoForCausalLM (GPT Neo model) GemmaConfig configuration class: FlaxGemmaForCausalLM (Gemma model) LlamaConfig configuration class: FlaxLlamaForCausalLM (LLaMA model) MistralConfig configuration class: FlaxMistralForCausalLM (Mistral model) OPTConfig configuration class: FlaxOPTForCausalLM (OPT model) RobertaConfig configuration class: FlaxRobertaForCausalLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) XGLMConfig configuration class: FlaxXGLMForCausalLM (XGLM model) XLMRobertaConfig configuration class: FlaxXLMRobertaForCausalLM (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForCausalLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForCausalLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bart — FlaxBartForCausalLM (BART model) bert — FlaxBertForCausalLM (BERT model) big_bird — FlaxBigBirdForCausalLM (BigBird model) bloom — FlaxBloomForCausalLM (BLOOM model) electra — FlaxElectraForCausalLM (ELECTRA model) gemma — FlaxGemmaForCausalLM (Gemma model) gpt-sw3 — FlaxGPT2LMHeadModel (GPT-Sw3 model) gpt2 — FlaxGPT2LMHeadModel (OpenAI GPT-2 model) gpt_neo — FlaxGPTNeoForCausalLM (GPT Neo model) gptj — FlaxGPTJForCausalLM (GPT-J model) llama — FlaxLlamaForCausalLM (LLaMA model) mistral — FlaxMistralForCausalLM (Mistral model) opt — FlaxOPTForCausalLM (OPT model) roberta — FlaxRobertaForCausalLM (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) xglm — FlaxXGLMForCausalLM (XGLM model) xlm-roberta — FlaxXLMRobertaForCausalLM (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForCausalLM >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForCausalLM.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForCausalLM.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForCausalLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForMaskedLM class transformers. AutoModelForMaskedLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
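Before the reference entries below, a short usage sketch (illustrative only; google-bert/bert-base-cased is just an example checkpoint): once loaded, a masked-LM head is typically queried by reading the logits at the [MASK] position and taking the highest-scoring vocabulary id.
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")
>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # Locate the [MASK] position, then pick the most likely token id there
>>> mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> print(tokenizer.decode(predicted_id))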
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForMaskedLM (ALBERT model) BartConfig configuration class: BartForConditionalGeneration (BART model) BertConfig configuration class: BertForMaskedLM (BERT model) BigBirdConfig configuration class: BigBirdForMaskedLM (BigBird model) CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model) ConvBertConfig configuration class: ConvBertForMaskedLM (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model) DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: ElectraForMaskedLM (ELECTRA model) ErnieConfig configuration class: ErnieForMaskedLM (ERNIE model) EsmConfig configuration class: EsmForMaskedLM (ESM model) FNetConfig configuration class: FNetForMaskedLM (FNet model) FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model) FunnelConfig configuration class: FunnelForMaskedLM (Funnel Transformer model) IBertConfig configuration class: IBertForMaskedLM (I-BERT model) LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model) LongformerConfig configuration class: LongformerForMaskedLM (Longformer model) LukeConfig configuration class: LukeForMaskedLM (LUKE model) MBartConfig configuration class: MBartForConditionalGeneration (mBART model) MPNetConfig configuration class: MPNetForMaskedLM (MPNet model) MegaConfig configuration class: MegaForMaskedLM (MEGA model) MegatronBertConfig configuration class: MegatronBertForMaskedLM (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForMaskedLM (MobileBERT model) ModernBertConfig configuration class: ModernBertForMaskedLM (ModernBERT model) MraConfig configuration class: MraForMaskedLM (MRA model) MvpConfig configuration class: MvpForConditionalGeneration (MVP model) NezhaConfig configuration class: NezhaForMaskedLM (Nezha model) NystromformerConfig configuration class: NystromformerForMaskedLM (Nyströmformer model) PerceiverConfig configuration class: PerceiverForMaskedLM (Perceiver model) QDQBertConfig configuration class: QDQBertForMaskedLM (QDQBert model) ReformerConfig configuration class: ReformerForMaskedLM (Reformer model) RemBertConfig configuration class: RemBertForMaskedLM (RemBERT model) RoCBertConfig configuration class: RoCBertForMaskedLM (RoCBert model) RoFormerConfig configuration class: RoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model) TapasConfig configuration class: TapasForMaskedLM (TAPAS model) Wav2Vec2Config configuration class: Wav2Vec2ForMaskedLM (Wav2Vec2 model) XLMConfig configuration class: XLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) XmodConfig configuration class: XmodForMaskedLM (X-MOD model) YosoConfig configuration class: YosoForMaskedLM (YOSO model) attn_implementation ( str , optional ) — The attention implementation to 
use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForMaskedLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForMaskedLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. 
output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertForMaskedLM (ALBERT model) bart — BartForConditionalGeneration (BART model) bert — BertForMaskedLM (BERT model) big_bird — BigBirdForMaskedLM (BigBird model) camembert — CamembertForMaskedLM (CamemBERT model) convbert — ConvBertForMaskedLM (ConvBERT model) data2vec-text — Data2VecTextForMaskedLM (Data2VecText model) deberta — DebertaForMaskedLM (DeBERTa model) deberta-v2 — DebertaV2ForMaskedLM (DeBERTa-v2 model) distilbert — DistilBertForMaskedLM (DistilBERT model) electra — ElectraForMaskedLM (ELECTRA model) ernie — ErnieForMaskedLM (ERNIE model) esm — EsmForMaskedLM (ESM model) flaubert — FlaubertWithLMHeadModel (FlauBERT model) fnet — FNetForMaskedLM (FNet model) funnel — FunnelForMaskedLM (Funnel Transformer model) ibert — IBertForMaskedLM (I-BERT model) layoutlm — LayoutLMForMaskedLM (LayoutLM model) longformer — LongformerForMaskedLM (Longformer model) luke — LukeForMaskedLM (LUKE model) mbart — MBartForConditionalGeneration (mBART model) mega — MegaForMaskedLM (MEGA model) megatron-bert — MegatronBertForMaskedLM (Megatron-BERT model) mobilebert — MobileBertForMaskedLM (MobileBERT model) modernbert — ModernBertForMaskedLM (ModernBERT model) mpnet — MPNetForMaskedLM (MPNet model) mra — MraForMaskedLM (MRA model) mvp — MvpForConditionalGeneration (MVP model) nezha — NezhaForMaskedLM (Nezha model) nystromformer — NystromformerForMaskedLM (Nyströmformer model) perceiver — PerceiverForMaskedLM (Perceiver model) qdqbert — QDQBertForMaskedLM (QDQBert model) reformer — ReformerForMaskedLM (Reformer model) rembert — RemBertForMaskedLM (RemBERT model) roberta — RobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForMaskedLM (RoCBert model) roformer — RoFormerForMaskedLM (RoFormer model) squeezebert — SqueezeBertForMaskedLM (SqueezeBERT model) tapas — TapasForMaskedLM (TAPAS model) wav2vec2 — Wav2Vec2ForMaskedLM (Wav2Vec2 model) xlm — XLMWithLMHeadModel (XLM model) xlm-roberta — XLMRobertaForMaskedLM (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) xmod — XmodForMaskedLM (X-MOD model) yoso — YosoForMaskedLM (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForMaskedLM >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForMaskedLM.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForMaskedLM.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForMaskedLM.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForMaskedLM class transformers. 
TFAutoModelForMaskedLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForMaskedLM (ALBERT model) BertConfig configuration class: TFBertForMaskedLM (BERT model) CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model) ConvBertConfig configuration class: TFConvBertForMaskedLM (ConvBERT model) DebertaConfig configuration class: TFDebertaForMaskedLM (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForMaskedLM (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: TFElectraForMaskedLM (ELECTRA model) EsmConfig configuration class: TFEsmForMaskedLM (ESM model) FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model) FunnelConfig configuration class: TFFunnelForMaskedLM (Funnel Transformer model) LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model) LongformerConfig configuration class: TFLongformerForMaskedLM (Longformer model) MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model) MobileBertConfig configuration class: TFMobileBertForMaskedLM (MobileBERT model) RemBertConfig configuration class: TFRemBertForMaskedLM (RemBERT model) RoFormerConfig configuration class: TFRoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model) XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForMaskedLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForMaskedLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). 
In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertForMaskedLM (ALBERT model) bert — TFBertForMaskedLM (BERT model) camembert — TFCamembertForMaskedLM (CamemBERT model) convbert — TFConvBertForMaskedLM (ConvBERT model) deberta — TFDebertaForMaskedLM (DeBERTa model) deberta-v2 — TFDebertaV2ForMaskedLM (DeBERTa-v2 model) distilbert — TFDistilBertForMaskedLM (DistilBERT model) electra — TFElectraForMaskedLM (ELECTRA model) esm — TFEsmForMaskedLM (ESM model) flaubert — TFFlaubertWithLMHeadModel (FlauBERT model) funnel — TFFunnelForMaskedLM (Funnel Transformer model) layoutlm — TFLayoutLMForMaskedLM (LayoutLM model) longformer — TFLongformerForMaskedLM (Longformer model) mobilebert — TFMobileBertForMaskedLM (MobileBERT model) mpnet — TFMPNetForMaskedLM (MPNet model) rembert — TFRemBertForMaskedLM (RemBERT model) roberta — TFRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForMaskedLM (RoFormer model) tapas — TFTapasForMaskedLM (TAPAS model) xlm — TFXLMWithLMHeadModel (XLM model) xlm-roberta — TFXLMRobertaForMaskedLM (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForMaskedLM >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForMaskedLM.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForMaskedLM.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForMaskedLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForMaskedLM class transformers. FlaxAutoModelForMaskedLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
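A corresponding sketch for the Flax auto class, assuming the checkpoint (again google-bert/bert-base-cased, chosen only for illustration) also ships Flax weights; pass from_pt=True otherwise. Flax models consume NumPy arrays directly and return logits as JAX arrays.
>>> import numpy as np
>>> from transformers import AutoTokenizer, FlaxAutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")
>>> inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="np")
>>> logits = model(**inputs).logits
>>> # First position equal to the mask token id, then the most likely vocabulary id there
>>> mask_index = int(np.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
>>> predicted_id = int(logits[0, mask_index].argmax(-1))
>>> print(tokenizer.decode([predicted_id]))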
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForMaskedLM (ALBERT model) BartConfig configuration class: FlaxBartForConditionalGeneration (BART model) BertConfig configuration class: FlaxBertForMaskedLM (BERT model) BigBirdConfig configuration class: FlaxBigBirdForMaskedLM (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: FlaxElectraForMaskedLM (ELECTRA model) MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model) RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForMaskedLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. 
from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertForMaskedLM (ALBERT model) bart — FlaxBartForConditionalGeneration (BART model) bert — FlaxBertForMaskedLM (BERT model) big_bird — FlaxBigBirdForMaskedLM (BigBird model) distilbert — FlaxDistilBertForMaskedLM (DistilBERT model) electra — FlaxElectraForMaskedLM (ELECTRA model) mbart — FlaxMBartForConditionalGeneration (mBART model) roberta — FlaxRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForMaskedLM (RoFormer model) xlm-roberta — FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForMaskedLM.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForMaskedLM.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForMaskedLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForMaskGeneration class transformers. AutoModelForMaskGeneration < source > ( *args **kwargs ) TFAutoModelForMaskGeneration class transformers. TFAutoModelForMaskGeneration < source > ( *args **kwargs ) AutoModelForSeq2SeqLM class transformers. AutoModelForSeq2SeqLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
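As a usage sketch to complement the reference (google-t5/t5-base is only an example checkpoint), a loaded sequence-to-sequence model is usually driven through generate(), which encodes the input once and decodes autoregressively:
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")
>>> inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
>>> # Encoder runs once; the decoder generates up to 40 new tokens
>>> outputs = model.generate(**inputs, max_new_tokens=40)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))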
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: BartForConditionalGeneration (BART model) BigBirdPegasusConfig configuration class: BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model) BlenderbotConfig configuration class: BlenderbotForConditionalGeneration (Blenderbot model) BlenderbotSmallConfig configuration class: BlenderbotSmallForConditionalGeneration (BlenderbotSmall model) EncoderDecoderConfig configuration class: EncoderDecoderModel (Encoder decoder model) FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model) GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) LEDConfig configuration class: LEDForConditionalGeneration (LED model) LongT5Config configuration class: LongT5ForConditionalGeneration (LongT5 model) M2M100Config configuration class: M2M100ForConditionalGeneration (M2M100 model) MBartConfig configuration class: MBartForConditionalGeneration (mBART model) MT5Config configuration class: MT5ForConditionalGeneration (MT5 model) MarianConfig configuration class: MarianMTModel (Marian model) MvpConfig configuration class: MvpForConditionalGeneration (MVP model) NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model) PLBartConfig configuration class: PLBartForConditionalGeneration (PLBart model) PegasusConfig configuration class: PegasusForConditionalGeneration (Pegasus model) PegasusXConfig configuration class: PegasusXForConditionalGeneration (PEGASUS-X model) ProphetNetConfig configuration class: ProphetNetForConditionalGeneration (ProphetNet model) Qwen2AudioConfig configuration class: Qwen2AudioForConditionalGeneration (Qwen2Audio model) SeamlessM4TConfig configuration class: SeamlessM4TForTextToText (SeamlessM4T model) SeamlessM4Tv2Config configuration class: SeamlessM4Tv2ForTextToText (SeamlessM4Tv2 model) SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model) T5Config configuration class: T5ForConditionalGeneration (T5 model) UMT5Config configuration class: UMT5ForConditionalGeneration (UMT5 model) XLMProphetNetConfig configuration class: XLMProphetNetForConditionalGeneration (XLM-ProphetNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForSeq2SeqLM >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-t5/t5-base" ) >>> model = AutoModelForSeq2SeqLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files.
This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bart — BartForConditionalGeneration (BART model) bigbird_pegasus — BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model) blenderbot — BlenderbotForConditionalGeneration (Blenderbot model) blenderbot-small — BlenderbotSmallForConditionalGeneration (BlenderbotSmall model) encoder-decoder — EncoderDecoderModel (Encoder decoder model) fsmt — FSMTForConditionalGeneration (FairSeq Machine-Translation model) gptsan-japanese — GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) led — LEDForConditionalGeneration (LED model) longt5 — LongT5ForConditionalGeneration (LongT5 model) m2m_100 — M2M100ForConditionalGeneration (M2M100 model) marian — MarianMTModel (Marian model) mbart — MBartForConditionalGeneration (mBART model) mt5 — MT5ForConditionalGeneration (MT5 model) mvp — MvpForConditionalGeneration (MVP model) nllb-moe — NllbMoeForConditionalGeneration (NLLB-MOE model) pegasus — PegasusForConditionalGeneration (Pegasus model) pegasus_x — PegasusXForConditionalGeneration (PEGASUS-X model) plbart — PLBartForConditionalGeneration (PLBart model) prophetnet — ProphetNetForConditionalGeneration (ProphetNet model) qwen2_audio — Qwen2AudioForConditionalGeneration (Qwen2Audio model) seamless_m4t — SeamlessM4TForTextToText (SeamlessM4T model) seamless_m4t_v2 — SeamlessM4Tv2ForTextToText (SeamlessM4Tv2 model) switch_transformers — SwitchTransformersForConditionalGeneration (SwitchTransformers model) t5 — T5ForConditionalGeneration (T5 model) umt5 — UMT5ForConditionalGeneration (UMT5 model) xlm-prophetnet — XLMProphetNetForConditionalGeneration (XLM-ProphetNet model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated).
To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForSeq2SeqLM >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" ) >>> # Update configuration during loading >>> model = AutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/t5_tf_model_config.json" ) >>> model = AutoModelForSeq2SeqLM.from_pretrained( ... "./tf_model/t5_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForSeq2SeqLM class transformers. TFAutoModelForSeq2SeqLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: TFBartForConditionalGeneration (BART model) BlenderbotConfig configuration class: TFBlenderbotForConditionalGeneration (Blenderbot model) BlenderbotSmallConfig configuration class: TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) EncoderDecoderConfig configuration class: TFEncoderDecoderModel (Encoder decoder model) LEDConfig configuration class: TFLEDForConditionalGeneration (LED model) MBartConfig configuration class: TFMBartForConditionalGeneration (mBART model) MT5Config configuration class: TFMT5ForConditionalGeneration (MT5 model) MarianConfig configuration class: TFMarianMTModel (Marian model) PegasusConfig configuration class: TFPegasusForConditionalGeneration (Pegasus model) T5Config configuration class: TFT5ForConditionalGeneration (T5 model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-t5/t5-base" ) >>> model = TFAutoModelForSeq2SeqLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). 
In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so code_revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bart — TFBartForConditionalGeneration (BART model) blenderbot — TFBlenderbotForConditionalGeneration (Blenderbot model) blenderbot-small — TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) encoder-decoder — TFEncoderDecoderModel (Encoder decoder model) led — TFLEDForConditionalGeneration (LED model) marian — TFMarianMTModel (Marian model) mbart — TFMBartForConditionalGeneration (mBART model) mt5 — TFMT5ForConditionalGeneration (MT5 model) pegasus — TFPegasusForConditionalGeneration (Pegasus model) t5 — TFT5ForConditionalGeneration (T5 model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" ) >>> # Update configuration during loading >>> model = TFAutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/t5_pt_model_config.json" ) >>> model = TFAutoModelForSeq2SeqLM.from_pretrained( ... "./pt_model/t5_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForSeq2SeqLM class transformers. FlaxAutoModelForSeq2SeqLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
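As a quick illustration of the note above (a minimal sketch, not part of the upstream reference; the google-t5/t5-small checkpoint is only an illustrative choice and requires Flax to be installed), the class is meant to be used exclusively through its factory methods:

Copied
>>> from transformers import FlaxAutoModelForSeq2SeqLM
>>> # FlaxAutoModelForSeq2SeqLM() would raise an error: the class cannot be constructed directly
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
>>> type(model).__name__  # the factory resolves the checkpoint to a concrete architecture
'FlaxT5ForConditionalGeneration'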
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: FlaxBartForConditionalGeneration (BART model) BlenderbotConfig configuration class: FlaxBlenderbotForConditionalGeneration (Blenderbot model) BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) EncoderDecoderConfig configuration class: FlaxEncoderDecoderModel (Encoder decoder model) LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model) MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model) MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model) MarianConfig configuration class: FlaxMarianMTModel (Marian model) PegasusConfig configuration class: FlaxPegasusForConditionalGeneration (Pegasus model) T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-t5/t5-base" ) >>> model = FlaxAutoModelForSeq2SeqLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. 
from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so code_revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bart — FlaxBartForConditionalGeneration (BART model) blenderbot — FlaxBlenderbotForConditionalGeneration (Blenderbot model) blenderbot-small — FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) encoder-decoder — FlaxEncoderDecoderModel (Encoder decoder model) longt5 — FlaxLongT5ForConditionalGeneration (LongT5 model) marian — FlaxMarianMTModel (Marian model) mbart — FlaxMBartForConditionalGeneration (mBART model) mt5 — FlaxMT5ForConditionalGeneration (MT5 model) pegasus — FlaxPegasusForConditionalGeneration (Pegasus model) t5 — FlaxT5ForConditionalGeneration (T5 model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained( "google-t5/t5-base" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/t5_pt_model_config.json" ) >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained( ... "./pt_model/t5_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForSequenceClassification class transformers. AutoModelForSequenceClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
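Before the method reference below, here is a minimal end-to-end sketch (the distilbert/distilbert-base-uncased checkpoint, the num_labels=3 value, and the sample sentence are illustrative assumptions, not part of the API description): num_labels is a configuration attribute, so when no config is passed it is applied to the automatically loaded configuration before the classification head is built.

Copied
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
>>> # num_labels overrides the auto-loaded config, so the (freshly initialized) classification head gets 3 output classes
>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased", num_labels=3)
>>> inputs = tokenizer("This documentation is quite thorough.", return_tensors="pt")
>>> model(**inputs).logits.shape
torch.Size([1, 3])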
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForSequenceClassification (ALBERT model) BartConfig configuration class: BartForSequenceClassification (BART model) BertConfig configuration class: BertForSequenceClassification (BERT model) BigBirdConfig configuration class: BigBirdForSequenceClassification (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusForSequenceClassification (BigBird-Pegasus model) BioGptConfig configuration class: BioGptForSequenceClassification (BioGpt model) BloomConfig configuration class: BloomForSequenceClassification (BLOOM model) CTRLConfig configuration class: CTRLForSequenceClassification (CTRL model) CamembertConfig configuration class: CamembertForSequenceClassification (CamemBERT model) CanineConfig configuration class: CanineForSequenceClassification (CANINE model) ConvBertConfig configuration class: ConvBertForSequenceClassification (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForSequenceClassification (Data2VecText model) DebertaConfig configuration class: DebertaForSequenceClassification (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForSequenceClassification (DeBERTa-v2 model) DiffLlamaConfig configuration class: DiffLlamaForSequenceClassification (DiffLlama model) DistilBertConfig configuration class: DistilBertForSequenceClassification (DistilBERT model) ElectraConfig configuration class: ElectraForSequenceClassification (ELECTRA model) ErnieConfig configuration class: ErnieForSequenceClassification (ERNIE model) ErnieMConfig configuration class: ErnieMForSequenceClassification (ErnieM model) EsmConfig configuration class: EsmForSequenceClassification (ESM model) FNetConfig configuration class: FNetForSequenceClassification (FNet model) FalconConfig configuration class: FalconForSequenceClassification (Falcon model) FlaubertConfig configuration class: FlaubertForSequenceClassification (FlauBERT model) FunnelConfig configuration class: FunnelForSequenceClassification (Funnel Transformer model) GPT2Config configuration class: GPT2ForSequenceClassification (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForSequenceClassification (GPTBigCode model) GPTJConfig configuration class: GPTJForSequenceClassification (GPT-J model) GPTNeoConfig configuration class: GPTNeoForSequenceClassification (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForSequenceClassification (GPT NeoX model) Gemma2Config configuration class: Gemma2ForSequenceClassification (Gemma2 model) GemmaConfig configuration class: GemmaForSequenceClassification (Gemma model) GlmConfig configuration class: GlmForSequenceClassification (GLM model) IBertConfig configuration class: IBertForSequenceClassification (I-BERT model) JambaConfig configuration class: JambaForSequenceClassification (Jamba model) JetMoeConfig configuration class: JetMoeForSequenceClassification (JetMoe model) LEDConfig configuration class: LEDForSequenceClassification (LED model) LayoutLMConfig configuration class: LayoutLMForSequenceClassification (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2ForSequenceClassification (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForSequenceClassification (LayoutLMv3 model) LiltConfig configuration class: LiltForSequenceClassification (LiLT model) LlamaConfig configuration class: LlamaForSequenceClassification (LLaMA 
model) LongformerConfig configuration class: LongformerForSequenceClassification (Longformer model) LukeConfig configuration class: LukeForSequenceClassification (LUKE model) MBartConfig configuration class: MBartForSequenceClassification (mBART model) MPNetConfig configuration class: MPNetForSequenceClassification (MPNet model) MT5Config configuration class: MT5ForSequenceClassification (MT5 model) MarkupLMConfig configuration class: MarkupLMForSequenceClassification (MarkupLM model) MegaConfig configuration class: MegaForSequenceClassification (MEGA model) MegatronBertConfig configuration class: MegatronBertForSequenceClassification (Megatron-BERT model) MistralConfig configuration class: MistralForSequenceClassification (Mistral model) MixtralConfig configuration class: MixtralForSequenceClassification (Mixtral model) MobileBertConfig configuration class: MobileBertForSequenceClassification (MobileBERT model) ModernBertConfig configuration class: ModernBertForSequenceClassification (ModernBERT model) MptConfig configuration class: MptForSequenceClassification (MPT model) MraConfig configuration class: MraForSequenceClassification (MRA model) MvpConfig configuration class: MvpForSequenceClassification (MVP model) NemotronConfig configuration class: NemotronForSequenceClassification (Nemotron model) NezhaConfig configuration class: NezhaForSequenceClassification (Nezha model) NystromformerConfig configuration class: NystromformerForSequenceClassification (Nyströmformer model) OPTConfig configuration class: OPTForSequenceClassification (OPT model) OpenAIGPTConfig configuration class: OpenAIGPTForSequenceClassification (OpenAI GPT model) OpenLlamaConfig configuration class: OpenLlamaForSequenceClassification (OpenLlama model) PLBartConfig configuration class: PLBartForSequenceClassification (PLBart model) PerceiverConfig configuration class: PerceiverForSequenceClassification (Perceiver model) PersimmonConfig configuration class: PersimmonForSequenceClassification (Persimmon model) Phi3Config configuration class: Phi3ForSequenceClassification (Phi3 model) PhiConfig configuration class: PhiForSequenceClassification (Phi model) PhimoeConfig configuration class: PhimoeForSequenceClassification (Phimoe model) QDQBertConfig configuration class: QDQBertForSequenceClassification (QDQBert model) Qwen2Config configuration class: Qwen2ForSequenceClassification (Qwen2 model) Qwen2MoeConfig configuration class: Qwen2MoeForSequenceClassification (Qwen2MoE model) ReformerConfig configuration class: ReformerForSequenceClassification (Reformer model) RemBertConfig configuration class: RemBertForSequenceClassification (RemBERT model) RoCBertConfig configuration class: RoCBertForSequenceClassification (RoCBert model) RoFormerConfig configuration class: RoFormerForSequenceClassification (RoFormer model) RobertaConfig configuration class: RobertaForSequenceClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForSequenceClassification (SqueezeBERT model) StableLmConfig configuration class: StableLmForSequenceClassification (StableLm model) Starcoder2Config configuration class: Starcoder2ForSequenceClassification (Starcoder2 model) T5Config configuration class: T5ForSequenceClassification (T5 model) TapasConfig configuration class: TapasForSequenceClassification (TAPAS model) TransfoXLConfig configuration class: TransfoXLForSequenceClassification (Transformer-XL 
model) UMT5Config configuration class: UMT5ForSequenceClassification (UMT5 model) XLMConfig configuration class: XLMForSequenceClassification (XLM model) XLMRobertaConfig configuration class: XLMRobertaForSequenceClassification (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForSequenceClassification (XLNet model) XmodConfig configuration class: XmodForSequenceClassification (X-MOD model) YosoConfig configuration class: YosoForSequenceClassification (YOSO model) ZambaConfig configuration class: ZambaForSequenceClassification (Zamba model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForSequenceClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForSequenceClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. 
from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so code_revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertForSequenceClassification (ALBERT model) bart — BartForSequenceClassification (BART model) bert — BertForSequenceClassification (BERT model) big_bird — BigBirdForSequenceClassification (BigBird model) bigbird_pegasus — BigBirdPegasusForSequenceClassification (BigBird-Pegasus model) biogpt — BioGptForSequenceClassification (BioGpt model) bloom — BloomForSequenceClassification (BLOOM model) camembert — CamembertForSequenceClassification (CamemBERT model) canine — CanineForSequenceClassification (CANINE model) code_llama — LlamaForSequenceClassification (CodeLlama model) convbert — ConvBertForSequenceClassification (ConvBERT model) ctrl — CTRLForSequenceClassification (CTRL model) data2vec-text — Data2VecTextForSequenceClassification (Data2VecText model) deberta — DebertaForSequenceClassification (DeBERTa model) deberta-v2 — DebertaV2ForSequenceClassification (DeBERTa-v2 model) diffllama — DiffLlamaForSequenceClassification (DiffLlama model) distilbert — DistilBertForSequenceClassification (DistilBERT model) electra — ElectraForSequenceClassification (ELECTRA model) ernie — ErnieForSequenceClassification (ERNIE model) ernie_m — ErnieMForSequenceClassification (ErnieM model) esm — EsmForSequenceClassification (ESM model) falcon — FalconForSequenceClassification (Falcon model) flaubert — FlaubertForSequenceClassification (FlauBERT model) fnet — FNetForSequenceClassification (FNet model) funnel — FunnelForSequenceClassification (Funnel Transformer model) gemma — GemmaForSequenceClassification (Gemma model) gemma2 — Gemma2ForSequenceClassification (Gemma2 model) glm — GlmForSequenceClassification (GLM model) gpt-sw3 — GPT2ForSequenceClassification (GPT-Sw3 model) gpt2 — GPT2ForSequenceClassification (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForSequenceClassification (GPTBigCode model) gpt_neo — GPTNeoForSequenceClassification (GPT Neo model) gpt_neox — GPTNeoXForSequenceClassification (GPT NeoX model) gptj — GPTJForSequenceClassification (GPT-J model) ibert — IBertForSequenceClassification (I-BERT model) jamba — JambaForSequenceClassification (Jamba model) jetmoe — JetMoeForSequenceClassification (JetMoe model) layoutlm — LayoutLMForSequenceClassification (LayoutLM model) layoutlmv2 — LayoutLMv2ForSequenceClassification (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForSequenceClassification (LayoutLMv3 model) led — LEDForSequenceClassification (LED model) lilt — LiltForSequenceClassification (LiLT model) llama — LlamaForSequenceClassification (LLaMA model) longformer — LongformerForSequenceClassification (Longformer model) luke — LukeForSequenceClassification (LUKE model) markuplm — MarkupLMForSequenceClassification (MarkupLM model) mbart — MBartForSequenceClassification (mBART model) mega — MegaForSequenceClassification (MEGA model) megatron-bert — MegatronBertForSequenceClassification (Megatron-BERT model) mistral — MistralForSequenceClassification (Mistral model) mixtral — MixtralForSequenceClassification (Mixtral model) mobilebert — MobileBertForSequenceClassification (MobileBERT model) modernbert — ModernBertForSequenceClassification (ModernBERT model) mpnet — MPNetForSequenceClassification (MPNet model) mpt — MptForSequenceClassification (MPT model) mra — MraForSequenceClassification (MRA model) 
mt5 — MT5ForSequenceClassification (MT5 model) mvp — MvpForSequenceClassification (MVP model) nemotron — NemotronForSequenceClassification (Nemotron model) nezha — NezhaForSequenceClassification (Nezha model) nystromformer — NystromformerForSequenceClassification (Nyströmformer model) open-llama — OpenLlamaForSequenceClassification (OpenLlama model) openai-gpt — OpenAIGPTForSequenceClassification (OpenAI GPT model) opt — OPTForSequenceClassification (OPT model) perceiver — PerceiverForSequenceClassification (Perceiver model) persimmon — PersimmonForSequenceClassification (Persimmon model) phi — PhiForSequenceClassification (Phi model) phi3 — Phi3ForSequenceClassification (Phi3 model) phimoe — PhimoeForSequenceClassification (Phimoe model) plbart — PLBartForSequenceClassification (PLBart model) qdqbert — QDQBertForSequenceClassification (QDQBert model) qwen2 — Qwen2ForSequenceClassification (Qwen2 model) qwen2_moe — Qwen2MoeForSequenceClassification (Qwen2MoE model) reformer — ReformerForSequenceClassification (Reformer model) rembert — RemBertForSequenceClassification (RemBERT model) roberta — RobertaForSequenceClassification (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForSequenceClassification (RoCBert model) roformer — RoFormerForSequenceClassification (RoFormer model) squeezebert — SqueezeBertForSequenceClassification (SqueezeBERT model) stablelm — StableLmForSequenceClassification (StableLm model) starcoder2 — Starcoder2ForSequenceClassification (Starcoder2 model) t5 — T5ForSequenceClassification (T5 model) tapas — TapasForSequenceClassification (TAPAS model) transfo-xl — TransfoXLForSequenceClassification (Transformer-XL model) umt5 — UMT5ForSequenceClassification (UMT5 model) xlm — XLMForSequenceClassification (XLM model) xlm-roberta — XLMRobertaForSequenceClassification (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model) xlnet — XLNetForSequenceClassification (XLNet model) xmod — XmodForSequenceClassification (X-MOD model) yoso — YosoForSequenceClassification (YOSO model) zamba — ZambaForSequenceClassification (Zamba model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForSequenceClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForSequenceClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForSequenceClassification class transformers. TFAutoModelForSequenceClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method. 
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForSequenceClassification (ALBERT model) BartConfig configuration class: TFBartForSequenceClassification (BART model) BertConfig configuration class: TFBertForSequenceClassification (BERT model) CTRLConfig configuration class: TFCTRLForSequenceClassification (CTRL model) CamembertConfig configuration class: TFCamembertForSequenceClassification (CamemBERT model) ConvBertConfig configuration class: TFConvBertForSequenceClassification (ConvBERT model) DebertaConfig configuration class: TFDebertaForSequenceClassification (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForSequenceClassification (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForSequenceClassification (DistilBERT model) ElectraConfig configuration class: TFElectraForSequenceClassification (ELECTRA model) EsmConfig configuration class: TFEsmForSequenceClassification (ESM model) FlaubertConfig configuration class: TFFlaubertForSequenceClassification (FlauBERT model) FunnelConfig configuration class: TFFunnelForSequenceClassification (Funnel Transformer model) GPT2Config configuration class: TFGPT2ForSequenceClassification (OpenAI GPT-2 model) GPTJConfig configuration class: TFGPTJForSequenceClassification (GPT-J model) LayoutLMConfig configuration class: TFLayoutLMForSequenceClassification (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerForSequenceClassification (Longformer model) MPNetConfig configuration class: TFMPNetForSequenceClassification (MPNet model) MistralConfig configuration class: TFMistralForSequenceClassification (Mistral model) MobileBertConfig configuration class: TFMobileBertForSequenceClassification (MobileBERT model) OpenAIGPTConfig configuration class: TFOpenAIGPTForSequenceClassification (OpenAI GPT model) RemBertConfig configuration class: TFRemBertForSequenceClassification (RemBERT model) RoFormerConfig configuration class: TFRoFormerForSequenceClassification (RoFormer model) RobertaConfig configuration class: TFRobertaForSequenceClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) TapasConfig configuration class: TFTapasForSequenceClassification (TAPAS model) TransfoXLConfig configuration class: TFTransfoXLForSequenceClassification (Transformer-XL model) XLMConfig configuration class: TFXLMForSequenceClassification (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForSequenceClassification (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForSequenceClassification (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. 
It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSequenceClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForSequenceClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertForSequenceClassification (ALBERT model) bart — TFBartForSequenceClassification (BART model) bert — TFBertForSequenceClassification (BERT model) camembert — TFCamembertForSequenceClassification (CamemBERT model) convbert — TFConvBertForSequenceClassification (ConvBERT model) ctrl — TFCTRLForSequenceClassification (CTRL model) deberta — TFDebertaForSequenceClassification (DeBERTa model) deberta-v2 — TFDebertaV2ForSequenceClassification (DeBERTa-v2 model) distilbert — TFDistilBertForSequenceClassification (DistilBERT model) electra — TFElectraForSequenceClassification (ELECTRA model) esm — TFEsmForSequenceClassification (ESM model) flaubert — TFFlaubertForSequenceClassification (FlauBERT model) funnel — TFFunnelForSequenceClassification (Funnel Transformer model) gpt-sw3 — TFGPT2ForSequenceClassification (GPT-Sw3 model) gpt2 — TFGPT2ForSequenceClassification (OpenAI GPT-2 model) gptj — TFGPTJForSequenceClassification (GPT-J model) layoutlm — TFLayoutLMForSequenceClassification (LayoutLM model) layoutlmv3 — TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model) longformer — TFLongformerForSequenceClassification (Longformer model) mistral — TFMistralForSequenceClassification (Mistral model) mobilebert — TFMobileBertForSequenceClassification (MobileBERT model) mpnet — TFMPNetForSequenceClassification (MPNet model) openai-gpt — TFOpenAIGPTForSequenceClassification (OpenAI GPT model) rembert — TFRemBertForSequenceClassification (RemBERT model) roberta — TFRobertaForSequenceClassification (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForSequenceClassification (RoFormer model) tapas — TFTapasForSequenceClassification (TAPAS model) transfo-xl — TFTransfoXLForSequenceClassification (Transformer-XL model) xlm — TFXLMForSequenceClassification (XLM model) xlm-roberta — 
TFXLMRobertaForSequenceClassification (XLM-RoBERTa model) xlnet — TFXLNetForSequenceClassification (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSequenceClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForSequenceClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForSequenceClassification class transformers. FlaxAutoModelForSequenceClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForSequenceClassification (ALBERT model) BartConfig configuration class: FlaxBartForSequenceClassification (BART model) BertConfig configuration class: FlaxBertForSequenceClassification (BERT model) BigBirdConfig configuration class: FlaxBigBirdForSequenceClassification (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForSequenceClassification (DistilBERT model) ElectraConfig configuration class: FlaxElectraForSequenceClassification (ELECTRA model) MBartConfig configuration class: FlaxMBartForSequenceClassification (mBART model) RoFormerConfig configuration class: FlaxRoFormerForSequenceClassification (RoFormer model) RobertaConfig configuration class: FlaxRobertaForSequenceClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForSequenceClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model.
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so code_revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertForSequenceClassification (ALBERT model) bart — FlaxBartForSequenceClassification (BART model) bert — FlaxBertForSequenceClassification (BERT model) big_bird — FlaxBigBirdForSequenceClassification (BigBird model) distilbert — FlaxDistilBertForSequenceClassification (DistilBERT model) electra — FlaxElectraForSequenceClassification (ELECTRA model) mbart — FlaxMBartForSequenceClassification (mBART model) roberta — FlaxRobertaForSequenceClassification (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForSequenceClassification (RoFormer model) xlm-roberta — FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForSequenceClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForMultipleChoice class transformers. AutoModelForMultipleChoice < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
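As a brief, hedged sketch of how the resulting model is typically fed (the google-bert/bert-base-cased checkpoint and the toy prompt/choices are illustrative only; the classification head is freshly initialized, so the scores are not meaningful without fine-tuning), multiple-choice models expect inputs of shape (batch_size, num_choices, sequence_length):

Copied
>>> from transformers import AutoTokenizer, AutoModelForMultipleChoice
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")
>>> prompt = "The capital of France is"
>>> choices = ["Paris.", "a type of cheese."]
>>> # Pair the prompt with each choice, then add a batch dimension: (1, num_choices, seq_len)
>>> encoding = tokenizer([prompt, prompt], choices, return_tensors="pt", padding=True)
>>> inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
>>> model(**inputs).logits.shape  # one score per choice
torch.Size([1, 2])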
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForMultipleChoice (ALBERT model) BertConfig configuration class: BertForMultipleChoice (BERT model) BigBirdConfig configuration class: BigBirdForMultipleChoice (BigBird model) CamembertConfig configuration class: CamembertForMultipleChoice (CamemBERT model) CanineConfig configuration class: CanineForMultipleChoice (CANINE model) ConvBertConfig configuration class: ConvBertForMultipleChoice (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForMultipleChoice (Data2VecText model) DebertaV2Config configuration class: DebertaV2ForMultipleChoice (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForMultipleChoice (DistilBERT model) ElectraConfig configuration class: ElectraForMultipleChoice (ELECTRA model) ErnieConfig configuration class: ErnieForMultipleChoice (ERNIE model) ErnieMConfig configuration class: ErnieMForMultipleChoice (ErnieM model) FNetConfig configuration class: FNetForMultipleChoice (FNet model) FlaubertConfig configuration class: FlaubertForMultipleChoice (FlauBERT model) FunnelConfig configuration class: FunnelForMultipleChoice (Funnel Transformer model) IBertConfig configuration class: IBertForMultipleChoice (I-BERT model) LongformerConfig configuration class: LongformerForMultipleChoice (Longformer model) LukeConfig configuration class: LukeForMultipleChoice (LUKE model) MPNetConfig configuration class: MPNetForMultipleChoice (MPNet model) MegaConfig configuration class: MegaForMultipleChoice (MEGA model) MegatronBertConfig configuration class: MegatronBertForMultipleChoice (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForMultipleChoice (MobileBERT model) MraConfig configuration class: MraForMultipleChoice (MRA model) NezhaConfig configuration class: NezhaForMultipleChoice (Nezha model) NystromformerConfig configuration class: NystromformerForMultipleChoice (Nyströmformer model) QDQBertConfig configuration class: QDQBertForMultipleChoice (QDQBert model) RemBertConfig configuration class: RemBertForMultipleChoice (RemBERT model) RoCBertConfig configuration class: RoCBertForMultipleChoice (RoCBert model) RoFormerConfig configuration class: RoFormerForMultipleChoice (RoFormer model) RobertaConfig configuration class: RobertaForMultipleChoice (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForMultipleChoice (SqueezeBERT model) XLMConfig configuration class: XLMForMultipleChoice (XLM model) XLMRobertaConfig configuration class: XLMRobertaForMultipleChoice (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForMultipleChoice (XLNet model) XmodConfig configuration class: XmodForMultipleChoice (X-MOD model) YosoConfig configuration class: YosoForMultipleChoice (YOSO model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. 
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForMultipleChoice >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForMultipleChoice.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use.
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertForMultipleChoice (ALBERT model) bert — BertForMultipleChoice (BERT model) big_bird — BigBirdForMultipleChoice (BigBird model) camembert — CamembertForMultipleChoice (CamemBERT model) canine — CanineForMultipleChoice (CANINE model) convbert — ConvBertForMultipleChoice (ConvBERT model) data2vec-text — Data2VecTextForMultipleChoice (Data2VecText model) deberta-v2 — DebertaV2ForMultipleChoice (DeBERTa-v2 model) distilbert — DistilBertForMultipleChoice (DistilBERT model) electra — ElectraForMultipleChoice (ELECTRA model) ernie — ErnieForMultipleChoice (ERNIE model) ernie_m — ErnieMForMultipleChoice (ErnieM model) flaubert — FlaubertForMultipleChoice (FlauBERT model) fnet — FNetForMultipleChoice (FNet model) funnel — FunnelForMultipleChoice (Funnel Transformer model) ibert — IBertForMultipleChoice (I-BERT model) longformer — LongformerForMultipleChoice (Longformer model) luke — LukeForMultipleChoice (LUKE model) mega — MegaForMultipleChoice (MEGA model) megatron-bert — MegatronBertForMultipleChoice (Megatron-BERT model) mobilebert — MobileBertForMultipleChoice (MobileBERT model) mpnet — MPNetForMultipleChoice (MPNet model) mra — MraForMultipleChoice (MRA model) nezha — NezhaForMultipleChoice (Nezha model) nystromformer — NystromformerForMultipleChoice (Nyströmformer model) qdqbert — QDQBertForMultipleChoice (QDQBert model) rembert — RemBertForMultipleChoice (RemBERT model) roberta — RobertaForMultipleChoice (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForMultipleChoice (RoCBert model) roformer — RoFormerForMultipleChoice (RoFormer model) squeezebert — SqueezeBertForMultipleChoice (SqueezeBERT model) xlm — XLMForMultipleChoice (XLM model) xlm-roberta — XLMRobertaForMultipleChoice (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model) xlnet — XLNetForMultipleChoice (XLNet model) xmod — XmodForMultipleChoice (X-MOD model) yoso — YosoForMultipleChoice (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForMultipleChoice >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForMultipleChoice.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForMultipleChoice.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForMultipleChoice.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForMultipleChoice class transformers. 
TFAutoModelForMultipleChoice < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForMultipleChoice (ALBERT model) BertConfig configuration class: TFBertForMultipleChoice (BERT model) CamembertConfig configuration class: TFCamembertForMultipleChoice (CamemBERT model) ConvBertConfig configuration class: TFConvBertForMultipleChoice (ConvBERT model) DebertaV2Config configuration class: TFDebertaV2ForMultipleChoice (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForMultipleChoice (DistilBERT model) ElectraConfig configuration class: TFElectraForMultipleChoice (ELECTRA model) FlaubertConfig configuration class: TFFlaubertForMultipleChoice (FlauBERT model) FunnelConfig configuration class: TFFunnelForMultipleChoice (Funnel Transformer model) LongformerConfig configuration class: TFLongformerForMultipleChoice (Longformer model) MPNetConfig configuration class: TFMPNetForMultipleChoice (MPNet model) MobileBertConfig configuration class: TFMobileBertForMultipleChoice (MobileBERT model) RemBertConfig configuration class: TFRemBertForMultipleChoice (RemBERT model) RoFormerConfig configuration class: TFRoFormerForMultipleChoice (RoFormer model) RobertaConfig configuration class: TFRobertaForMultipleChoice (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) XLMConfig configuration class: TFXLMForMultipleChoice (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForMultipleChoice (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForMultipleChoice (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a multiple choice head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForMultipleChoice >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForMultipleChoice.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. 
This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertForMultipleChoice (ALBERT model) bert — TFBertForMultipleChoice (BERT model) camembert — TFCamembertForMultipleChoice (CamemBERT model) convbert — TFConvBertForMultipleChoice (ConvBERT model) deberta-v2 — TFDebertaV2ForMultipleChoice (DeBERTa-v2 model) distilbert — TFDistilBertForMultipleChoice (DistilBERT model) electra — TFElectraForMultipleChoice (ELECTRA model) flaubert — TFFlaubertForMultipleChoice (FlauBERT model) funnel — TFFunnelForMultipleChoice (Funnel Transformer model) longformer — TFLongformerForMultipleChoice (Longformer model) mobilebert — TFMobileBertForMultipleChoice (MobileBERT model) mpnet — TFMPNetForMultipleChoice (MPNet model) rembert — TFRemBertForMultipleChoice (RemBERT model) roberta — TFRobertaForMultipleChoice (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForMultipleChoice (RoFormer model) xlm — TFXLMForMultipleChoice (XLM model) xlm-roberta — TFXLMRobertaForMultipleChoice (XLM-RoBERTa model) xlnet — TFXLNetForMultipleChoice (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForMultipleChoice >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForMultipleChoice.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForMultipleChoice.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForMultipleChoice.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForMultipleChoice class transformers. FlaxAutoModelForMultipleChoice < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
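The kwargs routing described in the from_pretrained parameters above works the same way for the PyTorch, TensorFlow and Flax variants. A short sketch, using the PyTorch class and the same checkpoint and output_attentions flag as the surrounding examples:

>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # No config passed: kwargs that match configuration attributes update the
>>> # automatically loaded config; anything left over goes to the model __init__.
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Config passed explicitly: update the config yourself beforehand, since
>>> # remaining kwargs are handed straight to the model __init__.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", config=config)
>>> model.config.output_attentions
True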
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForMultipleChoice (ALBERT model) BertConfig configuration class: FlaxBertForMultipleChoice (BERT model) BigBirdConfig configuration class: FlaxBigBirdForMultipleChoice (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForMultipleChoice (DistilBERT model) ElectraConfig configuration class: FlaxElectraForMultipleChoice (ELECTRA model) RoFormerConfig configuration class: FlaxRoFormerForMultipleChoice (RoFormer model) RobertaConfig configuration class: FlaxRobertaForMultipleChoice (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a multiple choice head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForMultipleChoice.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. 
from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertForMultipleChoice (ALBERT model) bert — FlaxBertForMultipleChoice (BERT model) big_bird — FlaxBigBirdForMultipleChoice (BigBird model) distilbert — FlaxDistilBertForMultipleChoice (DistilBERT model) electra — FlaxElectraForMultipleChoice (ELECTRA model) roberta — FlaxRobertaForMultipleChoice (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForMultipleChoice (RoFormer model) xlm-roberta — FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForMultipleChoice.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForMultipleChoice.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForMultipleChoice.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForNextSentencePrediction class transformers. AutoModelForNextSentencePrediction < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: BertForNextSentencePrediction (BERT model) ErnieConfig configuration class: ErnieForNextSentencePrediction (ERNIE model) FNetConfig configuration class: FNetForNextSentencePrediction (FNet model) MegatronBertConfig configuration class: MegatronBertForNextSentencePrediction (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForNextSentencePrediction (MobileBERT model) NezhaConfig configuration class: NezhaForNextSentencePrediction (Nezha model) QDQBertConfig configuration class: QDQBertForNextSentencePrediction (QDQBert model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
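Before the official examples, a minimal sketch to make the note above concrete: from_config() builds the architecture with freshly initialized parameters, while from_pretrained() also downloads and loads the trained weights (same checkpoint as in the examples below).

>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased")
>>> # Architecture only: weights are randomly initialized and need training.
>>> untrained_model = AutoModelForNextSentencePrediction.from_config(config)
>>> # Architecture plus pretrained weights fetched from the Hub.
>>> pretrained_model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")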
Examples: Copied >>> from transformers import AutoConfig, AutoModelForNextSentencePrediction >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForNextSentencePrediction.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bert — BertForNextSentencePrediction (BERT model) ernie — ErnieForNextSentencePrediction (ERNIE model) fnet — FNetForNextSentencePrediction (FNet model) megatron-bert — MegatronBertForNextSentencePrediction (Megatron-BERT model) mobilebert — MobileBertForNextSentencePrediction (MobileBERT model) nezha — NezhaForNextSentencePrediction (Nezha model) qdqbert — QDQBertForNextSentencePrediction (QDQBert model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). Examples: Copied >>> from transformers import AutoConfig, AutoModelForNextSentencePrediction >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForNextSentencePrediction.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForNextSentencePrediction.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForNextSentencePrediction.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForNextSentencePrediction class transformers.
TFAutoModelForNextSentencePrediction < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: TFBertForNextSentencePrediction (BERT model) MobileBertConfig configuration class: TFMobileBertForNextSentencePrediction (MobileBERT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForNextSentencePrediction.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bert — TFBertForNextSentencePrediction (BERT model) mobilebert — TFMobileBertForNextSentencePrediction (MobileBERT model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction >>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForNextSentencePrediction.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForNextSentencePrediction.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForNextSentencePrediction class transformers. FlaxAutoModelForNextSentencePrediction < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: FlaxBertForNextSentencePrediction (BERT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForNextSentencePrediction.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : bert — FlaxBertForNextSentencePrediction (BERT model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForTokenClassification class transformers. AutoModelForTokenClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForTokenClassification (ALBERT model) BertConfig configuration class: BertForTokenClassification (BERT model) BigBirdConfig configuration class: BigBirdForTokenClassification (BigBird model) BioGptConfig configuration class: BioGptForTokenClassification (BioGpt model) BloomConfig configuration class: BloomForTokenClassification (BLOOM model) BrosConfig configuration class: BrosForTokenClassification (BROS model) CamembertConfig configuration class: CamembertForTokenClassification (CamemBERT model) CanineConfig configuration class: CanineForTokenClassification (CANINE model) ConvBertConfig configuration class: ConvBertForTokenClassification (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForTokenClassification (Data2VecText model) DebertaConfig configuration class: DebertaForTokenClassification (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForTokenClassification (DeBERTa-v2 model) DiffLlamaConfig configuration class: DiffLlamaForTokenClassification (DiffLlama model) DistilBertConfig configuration class: DistilBertForTokenClassification (DistilBERT model) ElectraConfig configuration class: ElectraForTokenClassification (ELECTRA model) ErnieConfig configuration class: ErnieForTokenClassification (ERNIE model) ErnieMConfig configuration class: ErnieMForTokenClassification (ErnieM model) EsmConfig configuration class: EsmForTokenClassification (ESM model) FNetConfig configuration class: FNetForTokenClassification (FNet model) FalconConfig configuration class: FalconForTokenClassification (Falcon model) FlaubertConfig configuration class: FlaubertForTokenClassification (FlauBERT model) FunnelConfig configuration class: FunnelForTokenClassification (Funnel Transformer model) GPT2Config configuration class: GPT2ForTokenClassification (OpenAI GPT-2 model) 
GPTBigCodeConfig configuration class: GPTBigCodeForTokenClassification (GPTBigCode model) GPTNeoConfig configuration class: GPTNeoForTokenClassification (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForTokenClassification (GPT NeoX model) Gemma2Config configuration class: Gemma2ForTokenClassification (Gemma2 model) GemmaConfig configuration class: GemmaForTokenClassification (Gemma model) GlmConfig configuration class: GlmForTokenClassification (GLM model) IBertConfig configuration class: IBertForTokenClassification (I-BERT model) LayoutLMConfig configuration class: LayoutLMForTokenClassification (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2ForTokenClassification (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForTokenClassification (LayoutLMv3 model) LiltConfig configuration class: LiltForTokenClassification (LiLT model) LlamaConfig configuration class: LlamaForTokenClassification (LLaMA model) LongformerConfig configuration class: LongformerForTokenClassification (Longformer model) LukeConfig configuration class: LukeForTokenClassification (LUKE model) MPNetConfig configuration class: MPNetForTokenClassification (MPNet model) MT5Config configuration class: MT5ForTokenClassification (MT5 model) MarkupLMConfig configuration class: MarkupLMForTokenClassification (MarkupLM model) MegaConfig configuration class: MegaForTokenClassification (MEGA model) MegatronBertConfig configuration class: MegatronBertForTokenClassification (Megatron-BERT model) MistralConfig configuration class: MistralForTokenClassification (Mistral model) MixtralConfig configuration class: MixtralForTokenClassification (Mixtral model) MobileBertConfig configuration class: MobileBertForTokenClassification (MobileBERT model) ModernBertConfig configuration class: ModernBertForTokenClassification (ModernBERT model) MptConfig configuration class: MptForTokenClassification (MPT model) MraConfig configuration class: MraForTokenClassification (MRA model) NemotronConfig configuration class: NemotronForTokenClassification (Nemotron model) NezhaConfig configuration class: NezhaForTokenClassification (Nezha model) NystromformerConfig configuration class: NystromformerForTokenClassification (Nyströmformer model) PersimmonConfig configuration class: PersimmonForTokenClassification (Persimmon model) Phi3Config configuration class: Phi3ForTokenClassification (Phi3 model) PhiConfig configuration class: PhiForTokenClassification (Phi model) QDQBertConfig configuration class: QDQBertForTokenClassification (QDQBert model) Qwen2Config configuration class: Qwen2ForTokenClassification (Qwen2 model) Qwen2MoeConfig configuration class: Qwen2MoeForTokenClassification (Qwen2MoE model) RemBertConfig configuration class: RemBertForTokenClassification (RemBERT model) RoCBertConfig configuration class: RoCBertForTokenClassification (RoCBert model) RoFormerConfig configuration class: RoFormerForTokenClassification (RoFormer model) RobertaConfig configuration class: RobertaForTokenClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForTokenClassification (SqueezeBERT model) StableLmConfig configuration class: StableLmForTokenClassification (StableLm model) Starcoder2Config configuration class: Starcoder2ForTokenClassification (Starcoder2 model) T5Config configuration class: T5ForTokenClassification (T5 model) UMT5Config configuration class: 
UMT5ForTokenClassification (UMT5 model) XLMConfig configuration class: XLMForTokenClassification (XLM model) XLMRobertaConfig configuration class: XLMRobertaForTokenClassification (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForTokenClassification (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForTokenClassification (XLNet model) XmodConfig configuration class: XmodForTokenClassification (X-MOD model) YosoConfig configuration class: YosoForTokenClassification (YOSO model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a token classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForTokenClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForTokenClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
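To make the two kwargs code paths above concrete, here is a minimal sketch (the checkpoint id matches the examples below; num_labels is only an illustrative configuration attribute, not part of the reference):
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Without an explicit config, kwargs matching configuration attributes update the loaded config
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # With an explicit config, changes must already be on the config object (remaining kwargs go to __init__)
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-cased", num_labels=5)
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", config=config)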
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertForTokenClassification (ALBERT model) bert — BertForTokenClassification (BERT model) big_bird — BigBirdForTokenClassification (BigBird model) biogpt — BioGptForTokenClassification (BioGpt model) bloom — BloomForTokenClassification (BLOOM model) bros — BrosForTokenClassification (BROS model) camembert — CamembertForTokenClassification (CamemBERT model) canine — CanineForTokenClassification (CANINE model) convbert — ConvBertForTokenClassification (ConvBERT model) data2vec-text — Data2VecTextForTokenClassification (Data2VecText model) deberta — DebertaForTokenClassification (DeBERTa model) deberta-v2 — DebertaV2ForTokenClassification (DeBERTa-v2 model) diffllama — DiffLlamaForTokenClassification (DiffLlama model) distilbert — DistilBertForTokenClassification (DistilBERT model) electra — ElectraForTokenClassification (ELECTRA model) ernie — ErnieForTokenClassification (ERNIE model) ernie_m — ErnieMForTokenClassification (ErnieM model) esm — EsmForTokenClassification (ESM model) falcon — FalconForTokenClassification (Falcon model) flaubert — FlaubertForTokenClassification (FlauBERT model) fnet — FNetForTokenClassification (FNet model) funnel — FunnelForTokenClassification (Funnel Transformer model) gemma — GemmaForTokenClassification (Gemma model) gemma2 — Gemma2ForTokenClassification (Gemma2 model) glm — GlmForTokenClassification (GLM model) gpt-sw3 — GPT2ForTokenClassification (GPT-Sw3 model) gpt2 — GPT2ForTokenClassification (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForTokenClassification (GPTBigCode model) gpt_neo — GPTNeoForTokenClassification (GPT Neo model) gpt_neox — GPTNeoXForTokenClassification (GPT NeoX model) ibert — IBertForTokenClassification (I-BERT model) layoutlm — LayoutLMForTokenClassification (LayoutLM model) layoutlmv2 — LayoutLMv2ForTokenClassification (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForTokenClassification (LayoutLMv3 model) lilt — LiltForTokenClassification (LiLT model) llama — LlamaForTokenClassification (LLaMA model) longformer — LongformerForTokenClassification (Longformer model) luke — LukeForTokenClassification (LUKE model) markuplm — MarkupLMForTokenClassification (MarkupLM model) mega — MegaForTokenClassification (MEGA model) megatron-bert — MegatronBertForTokenClassification (Megatron-BERT model) mistral — MistralForTokenClassification (Mistral model) mixtral — MixtralForTokenClassification (Mixtral model) mobilebert — MobileBertForTokenClassification (MobileBERT model) modernbert — ModernBertForTokenClassification (ModernBERT model) mpnet — MPNetForTokenClassification (MPNet model) mpt — MptForTokenClassification (MPT model) mra — MraForTokenClassification (MRA model) mt5 — MT5ForTokenClassification (MT5 model) nemotron — NemotronForTokenClassification (Nemotron model) nezha — NezhaForTokenClassification (Nezha model) nystromformer — NystromformerForTokenClassification (Nyströmformer model) persimmon — PersimmonForTokenClassification (Persimmon model) phi — PhiForTokenClassification (Phi model) phi3 — Phi3ForTokenClassification (Phi3 model) qdqbert — QDQBertForTokenClassification (QDQBert model) qwen2 — Qwen2ForTokenClassification (Qwen2 model) qwen2_moe — Qwen2MoeForTokenClassification (Qwen2MoE model) rembert — RemBertForTokenClassification (RemBERT 
model) roberta — RobertaForTokenClassification (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForTokenClassification (RoCBert model) roformer — RoFormerForTokenClassification (RoFormer model) squeezebert — SqueezeBertForTokenClassification (SqueezeBERT model) stablelm — StableLmForTokenClassification (StableLm model) starcoder2 — Starcoder2ForTokenClassification (Starcoder2 model) t5 — T5ForTokenClassification (T5 model) umt5 — UMT5ForTokenClassification (UMT5 model) xlm — XLMForTokenClassification (XLM model) xlm-roberta — XLMRobertaForTokenClassification (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForTokenClassification (XLM-RoBERTa-XL model) xlnet — XLNetForTokenClassification (XLNet model) xmod — XmodForTokenClassification (X-MOD model) yoso — YosoForTokenClassification (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForTokenClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForTokenClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForTokenClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForTokenClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForTokenClassification class transformers. TFAutoModelForTokenClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
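As a minimal illustration of the dispatch this class docstring describes (the checkpoint id is the one used in the surrounding examples; the token classification head is newly initialized, so a warning is expected):
>>> from transformers import TFAutoModelForTokenClassification
>>> # The auto class resolves to a concrete architecture from the checkpoint's configuration
>>> model = TFAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")
>>> type(model).__name__
'TFBertForTokenClassification'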
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForTokenClassification (ALBERT model) BertConfig configuration class: TFBertForTokenClassification (BERT model) CamembertConfig configuration class: TFCamembertForTokenClassification (CamemBERT model) ConvBertConfig configuration class: TFConvBertForTokenClassification (ConvBERT model) DebertaConfig configuration class: TFDebertaForTokenClassification (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForTokenClassification (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForTokenClassification (DistilBERT model) ElectraConfig configuration class: TFElectraForTokenClassification (ELECTRA model) EsmConfig configuration class: TFEsmForTokenClassification (ESM model) FlaubertConfig configuration class: TFFlaubertForTokenClassification (FlauBERT model) FunnelConfig configuration class: TFFunnelForTokenClassification (Funnel Transformer model) LayoutLMConfig configuration class: TFLayoutLMForTokenClassification (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3ForTokenClassification (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerForTokenClassification (Longformer model) MPNetConfig configuration class: TFMPNetForTokenClassification (MPNet model) MobileBertConfig configuration class: TFMobileBertForTokenClassification (MobileBERT model) RemBertConfig configuration class: TFRemBertForTokenClassification (RemBERT model) RoFormerConfig configuration class: TFRoFormerForTokenClassification (RoFormer model) RobertaConfig configuration class: TFRobertaForTokenClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) XLMConfig configuration class: TFXLMForTokenClassification (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForTokenClassification (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForTokenClassification (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a token classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForTokenClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForTokenClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). 
In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a token classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertForTokenClassification (ALBERT model) bert — TFBertForTokenClassification (BERT model) camembert — TFCamembertForTokenClassification (CamemBERT model) convbert — TFConvBertForTokenClassification (ConvBERT model) deberta — TFDebertaForTokenClassification (DeBERTa model) deberta-v2 — TFDebertaV2ForTokenClassification (DeBERTa-v2 model) distilbert — TFDistilBertForTokenClassification (DistilBERT model) electra — TFElectraForTokenClassification (ELECTRA model) esm — TFEsmForTokenClassification (ESM model) flaubert — TFFlaubertForTokenClassification (FlauBERT model) funnel — TFFunnelForTokenClassification (Funnel Transformer model) layoutlm — TFLayoutLMForTokenClassification (LayoutLM model) layoutlmv3 — TFLayoutLMv3ForTokenClassification (LayoutLMv3 model) longformer — TFLongformerForTokenClassification (Longformer model) mobilebert — TFMobileBertForTokenClassification (MobileBERT model) mpnet — TFMPNetForTokenClassification (MPNet model) rembert — TFRemBertForTokenClassification (RemBERT model) roberta — TFRobertaForTokenClassification (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForTokenClassification (RoFormer model) xlm — TFXLMForTokenClassification (XLM model) xlm-roberta — TFXLMRobertaForTokenClassification (XLM-RoBERTa model) xlnet — TFXLNetForTokenClassification (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForTokenClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForTokenClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForTokenClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForTokenClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForTokenClassification class transformers. 
FlaxAutoModelForTokenClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForTokenClassification (ALBERT model) BertConfig configuration class: FlaxBertForTokenClassification (BERT model) BigBirdConfig configuration class: FlaxBigBirdForTokenClassification (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForTokenClassification (DistilBERT model) ElectraConfig configuration class: FlaxElectraForTokenClassification (ELECTRA model) RoFormerConfig configuration class: FlaxRoFormerForTokenClassification (RoFormer model) RobertaConfig configuration class: FlaxRobertaForTokenClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForTokenClassification (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a token classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForTokenClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
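A minimal sketch of the from_pt option described above (the checkpoint id is only an example; converting PyTorch weights requires torch to be installed, and the token classification head is newly initialized):
>>> from transformers import FlaxAutoModelForTokenClassification
>>> # from_pt=True converts PyTorch weights on the fly, useful when a checkpoint ships no Flax weights
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", from_pt=True)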
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertForTokenClassification (ALBERT model) bert — FlaxBertForTokenClassification (BERT model) big_bird — FlaxBigBirdForTokenClassification (BigBird model) distilbert — FlaxDistilBertForTokenClassification (DistilBERT model) electra — FlaxElectraForTokenClassification (ELECTRA model) roberta — FlaxRobertaForTokenClassification (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForTokenClassification (RoFormer model) xlm-roberta — FlaxXLMRobertaForTokenClassification (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForTokenClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForTokenClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForTokenClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForQuestionAnswering class transformers. AutoModelForQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
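Because this class wraps models with an extractive question answering head, a short end-to-end sketch may help; the checkpoint id and the question/context strings are only examples, not part of the reference:
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
>>> inputs = tokenizer("Who wrote Hamlet?", "Hamlet is a tragedy written by William Shakespeare.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> # The head returns start/end logits over the input tokens; take the best span and decode it
>>> start, end = int(outputs.start_logits.argmax()), int(outputs.end_logits.argmax())
>>> tokenizer.decode(inputs["input_ids"][0, start : end + 1])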
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForQuestionAnswering (ALBERT model) BartConfig configuration class: BartForQuestionAnswering (BART model) BertConfig configuration class: BertForQuestionAnswering (BERT model) BigBirdConfig configuration class: BigBirdForQuestionAnswering (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusForQuestionAnswering (BigBird-Pegasus model) BloomConfig configuration class: BloomForQuestionAnswering (BLOOM model) CamembertConfig configuration class: CamembertForQuestionAnswering (CamemBERT model) CanineConfig configuration class: CanineForQuestionAnswering (CANINE model) ConvBertConfig configuration class: ConvBertForQuestionAnswering (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForQuestionAnswering (Data2VecText model) DebertaConfig configuration class: DebertaForQuestionAnswering (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForQuestionAnswering (DeBERTa-v2 model) DiffLlamaConfig configuration class: DiffLlamaForQuestionAnswering (DiffLlama model) DistilBertConfig configuration class: DistilBertForQuestionAnswering (DistilBERT model) ElectraConfig configuration class: ElectraForQuestionAnswering (ELECTRA model) ErnieConfig configuration class: ErnieForQuestionAnswering (ERNIE model) ErnieMConfig configuration class: ErnieMForQuestionAnswering (ErnieM model) FNetConfig configuration class: FNetForQuestionAnswering (FNet model) FalconConfig configuration class: FalconForQuestionAnswering (Falcon model) FlaubertConfig configuration class: FlaubertForQuestionAnsweringSimple (FlauBERT model) FunnelConfig configuration class: FunnelForQuestionAnswering (Funnel Transformer model) GPT2Config configuration class: GPT2ForQuestionAnswering (OpenAI GPT-2 model) GPTJConfig configuration class: GPTJForQuestionAnswering (GPT-J model) GPTNeoConfig configuration class: GPTNeoForQuestionAnswering (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForQuestionAnswering (GPT NeoX model) IBertConfig configuration class: IBertForQuestionAnswering (I-BERT model) LEDConfig configuration class: LEDForQuestionAnswering (LED model) LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) LiltConfig configuration class: LiltForQuestionAnswering (LiLT model) LlamaConfig configuration class: LlamaForQuestionAnswering (LLaMA model) LongformerConfig configuration class: LongformerForQuestionAnswering (Longformer model) LukeConfig configuration class: LukeForQuestionAnswering (LUKE model) LxmertConfig configuration class: LxmertForQuestionAnswering (LXMERT model) MBartConfig configuration class: MBartForQuestionAnswering (mBART model) MPNetConfig configuration class: MPNetForQuestionAnswering (MPNet model) MT5Config configuration class: MT5ForQuestionAnswering (MT5 model) MarkupLMConfig configuration class: MarkupLMForQuestionAnswering (MarkupLM model) MegaConfig configuration class: MegaForQuestionAnswering (MEGA model) MegatronBertConfig configuration class: MegatronBertForQuestionAnswering (Megatron-BERT model) MistralConfig configuration class: MistralForQuestionAnswering (Mistral model) MixtralConfig configuration class: MixtralForQuestionAnswering (Mixtral model) MobileBertConfig configuration class: MobileBertForQuestionAnswering (MobileBERT model) 
MptConfig configuration class: MptForQuestionAnswering (MPT model) MraConfig configuration class: MraForQuestionAnswering (MRA model) MvpConfig configuration class: MvpForQuestionAnswering (MVP model) NemotronConfig configuration class: NemotronForQuestionAnswering (Nemotron model) NezhaConfig configuration class: NezhaForQuestionAnswering (Nezha model) NystromformerConfig configuration class: NystromformerForQuestionAnswering (Nyströmformer model) OPTConfig configuration class: OPTForQuestionAnswering (OPT model) QDQBertConfig configuration class: QDQBertForQuestionAnswering (QDQBert model) Qwen2Config configuration class: Qwen2ForQuestionAnswering (Qwen2 model) Qwen2MoeConfig configuration class: Qwen2MoeForQuestionAnswering (Qwen2MoE model) ReformerConfig configuration class: ReformerForQuestionAnswering (Reformer model) RemBertConfig configuration class: RemBertForQuestionAnswering (RemBERT model) RoCBertConfig configuration class: RoCBertForQuestionAnswering (RoCBert model) RoFormerConfig configuration class: RoFormerForQuestionAnswering (RoFormer model) RobertaConfig configuration class: RobertaForQuestionAnswering (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) SplinterConfig configuration class: SplinterForQuestionAnswering (Splinter model) SqueezeBertConfig configuration class: SqueezeBertForQuestionAnswering (SqueezeBERT model) T5Config configuration class: T5ForQuestionAnswering (T5 model) UMT5Config configuration class: UMT5ForQuestionAnswering (UMT5 model) XLMConfig configuration class: XLMForQuestionAnsweringSimple (XLM model) XLMRobertaConfig configuration class: XLMRobertaForQuestionAnswering (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForQuestionAnswering (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForQuestionAnsweringSimple (XLNet model) XmodConfig configuration class: XmodForQuestionAnswering (X-MOD model) YosoConfig configuration class: YosoForQuestionAnswering (YOSO model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. 
This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — AlbertForQuestionAnswering (ALBERT model) bart — BartForQuestionAnswering (BART model) bert — BertForQuestionAnswering (BERT model) big_bird — BigBirdForQuestionAnswering (BigBird model) bigbird_pegasus — BigBirdPegasusForQuestionAnswering (BigBird-Pegasus model) bloom — BloomForQuestionAnswering (BLOOM model) camembert — CamembertForQuestionAnswering (CamemBERT model) canine — CanineForQuestionAnswering (CANINE model) convbert — ConvBertForQuestionAnswering (ConvBERT model) data2vec-text — Data2VecTextForQuestionAnswering (Data2VecText model) deberta — DebertaForQuestionAnswering (DeBERTa model) deberta-v2 — DebertaV2ForQuestionAnswering (DeBERTa-v2 model) diffllama — DiffLlamaForQuestionAnswering (DiffLlama model) distilbert — DistilBertForQuestionAnswering (DistilBERT model) electra — ElectraForQuestionAnswering (ELECTRA model) ernie — ErnieForQuestionAnswering (ERNIE model) ernie_m — ErnieMForQuestionAnswering (ErnieM model) falcon — FalconForQuestionAnswering (Falcon model) flaubert — FlaubertForQuestionAnsweringSimple (FlauBERT model) fnet — FNetForQuestionAnswering (FNet model) funnel — FunnelForQuestionAnswering (Funnel Transformer model) gpt2 — GPT2ForQuestionAnswering (OpenAI GPT-2 model) gpt_neo — GPTNeoForQuestionAnswering (GPT Neo model) gpt_neox — GPTNeoXForQuestionAnswering (GPT NeoX model) gptj — GPTJForQuestionAnswering (GPT-J model) ibert — IBertForQuestionAnswering (I-BERT model) layoutlmv2 — LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) led — LEDForQuestionAnswering (LED model) lilt — LiltForQuestionAnswering (LiLT model) llama — LlamaForQuestionAnswering (LLaMA model) longformer — LongformerForQuestionAnswering (Longformer model) luke — LukeForQuestionAnswering (LUKE model) lxmert — LxmertForQuestionAnswering (LXMERT model) markuplm — MarkupLMForQuestionAnswering (MarkupLM model) mbart — MBartForQuestionAnswering (mBART model) mega — MegaForQuestionAnswering (MEGA model) megatron-bert — MegatronBertForQuestionAnswering (Megatron-BERT model) mistral — MistralForQuestionAnswering (Mistral model) mixtral — MixtralForQuestionAnswering (Mixtral model) mobilebert — MobileBertForQuestionAnswering (MobileBERT model) mpnet — MPNetForQuestionAnswering (MPNet model) mpt — MptForQuestionAnswering (MPT model) mra — MraForQuestionAnswering (MRA model) mt5 — MT5ForQuestionAnswering (MT5 model) mvp — MvpForQuestionAnswering (MVP model) nemotron — 
NemotronForQuestionAnswering (Nemotron model) nezha — NezhaForQuestionAnswering (Nezha model) nystromformer — NystromformerForQuestionAnswering (Nyströmformer model) opt — OPTForQuestionAnswering (OPT model) qdqbert — QDQBertForQuestionAnswering (QDQBert model) qwen2 — Qwen2ForQuestionAnswering (Qwen2 model) qwen2_moe — Qwen2MoeForQuestionAnswering (Qwen2MoE model) reformer — ReformerForQuestionAnswering (Reformer model) rembert — RemBertForQuestionAnswering (RemBERT model) roberta — RobertaForQuestionAnswering (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForQuestionAnswering (RoCBert model) roformer — RoFormerForQuestionAnswering (RoFormer model) splinter — SplinterForQuestionAnswering (Splinter model) squeezebert — SqueezeBertForQuestionAnswering (SqueezeBERT model) t5 — T5ForQuestionAnswering (T5 model) umt5 — UMT5ForQuestionAnswering (UMT5 model) xlm — XLMForQuestionAnsweringSimple (XLM model) xlm-roberta — XLMRobertaForQuestionAnswering (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForQuestionAnswering (XLM-RoBERTa-XL model) xlnet — XLNetForQuestionAnsweringSimple (XLNet model) xmod — XmodForQuestionAnswering (X-MOD model) yoso — YosoForQuestionAnswering (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForQuestionAnswering.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForQuestionAnswering.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForQuestionAnswering.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForQuestionAnswering class transformers. TFAutoModelForQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
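A minimal sketch of moving question answering weights between frameworks with from_pt, as described in the parameters below (the checkpoint id and local path are only examples):
>>> from transformers import AutoModelForQuestionAnswering, TFAutoModelForQuestionAnswering
>>> # Save a PyTorch checkpoint locally, then reload it in TensorFlow by converting the weights
>>> pt_model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
>>> pt_model.save_pretrained("./my_qa_model")
>>> tf_model = TFAutoModelForQuestionAnswering.from_pretrained("./my_qa_model", from_pt=True)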
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForQuestionAnswering (ALBERT model) BertConfig configuration class: TFBertForQuestionAnswering (BERT model) CamembertConfig configuration class: TFCamembertForQuestionAnswering (CamemBERT model) ConvBertConfig configuration class: TFConvBertForQuestionAnswering (ConvBERT model) DebertaConfig configuration class: TFDebertaForQuestionAnswering (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForQuestionAnswering (DistilBERT model) ElectraConfig configuration class: TFElectraForQuestionAnswering (ELECTRA model) FlaubertConfig configuration class: TFFlaubertForQuestionAnsweringSimple (FlauBERT model) FunnelConfig configuration class: TFFunnelForQuestionAnswering (Funnel Transformer model) GPTJConfig configuration class: TFGPTJForQuestionAnswering (GPT-J model) LayoutLMv3Config configuration class: TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerForQuestionAnswering (Longformer model) MPNetConfig configuration class: TFMPNetForQuestionAnswering (MPNet model) MobileBertConfig configuration class: TFMobileBertForQuestionAnswering (MobileBERT model) RemBertConfig configuration class: TFRemBertForQuestionAnswering (RemBERT model) RoFormerConfig configuration class: TFRoFormerForQuestionAnswering (RoFormer model) RobertaConfig configuration class: TFRobertaForQuestionAnswering (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) XLMConfig configuration class: TFXLMForQuestionAnsweringSimple (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForQuestionAnsweringSimple (XLNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. 
This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True ).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — TFAlbertForQuestionAnswering (ALBERT model) bert — TFBertForQuestionAnswering (BERT model) camembert — TFCamembertForQuestionAnswering (CamemBERT model) convbert — TFConvBertForQuestionAnswering (ConvBERT model) deberta — TFDebertaForQuestionAnswering (DeBERTa model) deberta-v2 — TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model) distilbert — TFDistilBertForQuestionAnswering (DistilBERT model) electra — TFElectraForQuestionAnswering (ELECTRA model) flaubert — TFFlaubertForQuestionAnsweringSimple (FlauBERT model) funnel — TFFunnelForQuestionAnswering (Funnel Transformer model) gptj — TFGPTJForQuestionAnswering (GPT-J model) layoutlmv3 — TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) longformer — TFLongformerForQuestionAnswering (Longformer model) mobilebert — TFMobileBertForQuestionAnswering (MobileBERT model) mpnet — TFMPNetForQuestionAnswering (MPNet model) rembert — TFRemBertForQuestionAnswering (RemBERT model) roberta — TFRobertaForQuestionAnswering (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForQuestionAnswering (RoFormer model) xlm — TFXLMForQuestionAnsweringSimple (XLM model) xlm-roberta — TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model) xlnet — TFXLNetForQuestionAnsweringSimple (XLNet model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForQuestionAnswering.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForQuestionAnswering.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForQuestionAnswering.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForQuestionAnswering class transformers. FlaxAutoModelForQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method. 
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForQuestionAnswering (ALBERT model) BartConfig configuration class: FlaxBartForQuestionAnswering (BART model) BertConfig configuration class: FlaxBertForQuestionAnswering (BERT model) BigBirdConfig configuration class: FlaxBigBirdForQuestionAnswering (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForQuestionAnswering (DistilBERT model) ElectraConfig configuration class: FlaxElectraForQuestionAnswering (ELECTRA model) MBartConfig configuration class: FlaxMBartForQuestionAnswering (mBART model) RoFormerConfig configuration class: FlaxRoFormerForQuestionAnswering (RoFormer model) RobertaConfig configuration class: FlaxRobertaForQuestionAnswering (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForQuestionAnswering (XLM-RoBERTa model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : albert — FlaxAlbertForQuestionAnswering (ALBERT model) bart — FlaxBartForQuestionAnswering (BART model) bert — FlaxBertForQuestionAnswering (BERT model) big_bird — FlaxBigBirdForQuestionAnswering (BigBird model) distilbert — FlaxDistilBertForQuestionAnswering (DistilBERT model) electra — FlaxElectraForQuestionAnswering (ELECTRA model) mbart — FlaxMBartForQuestionAnswering (mBART model) roberta — FlaxRobertaForQuestionAnswering (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForQuestionAnswering (RoFormer model) xlm-roberta — FlaxXLMRobertaForQuestionAnswering (XLM-RoBERTa model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForQuestionAnswering.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForQuestionAnswering.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForQuestionAnswering.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForTextEncoding class transformers. AutoModelForTextEncoding < source > ( *args **kwargs ) TFAutoModelForTextEncoding class transformers. TFAutoModelForTextEncoding < source > ( *args **kwargs ) Computer vision The following auto classes are available for the following computer vision tasks. AutoModelForDepthEstimation class transformers. AutoModelForDepthEstimation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a depth estimation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: DPTConfig configuration class: DPTForDepthEstimation (DPT model) DepthAnythingConfig configuration class: DepthAnythingForDepthEstimation (Depth Anything model) GLPNConfig configuration class: GLPNForDepthEstimation (GLPN model) ZoeDepthConfig configuration class: ZoeDepthForDepthEstimation (ZoeDepth model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a depth estimation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. 
Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForDepthEstimation >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForDepthEstimation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a depth estimation head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : depth_anything — DepthAnythingForDepthEstimation (Depth Anything model) dpt — DPTForDepthEstimation (DPT model) glpn — GLPNForDepthEstimation (GLPN model) zoedepth — ZoeDepthForDepthEstimation (ZoeDepth model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). Examples: Copied >>> from transformers import AutoConfig, AutoModelForDepthEstimation >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForDepthEstimation.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForDepthEstimation.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForDepthEstimation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForImageClassification class transformers. AutoModelForImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
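Since the method reference below only covers loading, here is a minimal end-to-end inference sketch for this class. The checkpoint name, the dummy input image, and the preprocessing step are illustrative assumptions, not part of the reference:
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForImageClassification
>>> # Hypothetical checkpoint and dummy image, for illustration only.
>>> checkpoint = "google/vit-base-patch16-224"
>>> processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForImageClassification.from_pretrained(checkpoint)
>>> image = Image.new("RGB", (224, 224), color="white")  # stand-in for a real image
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> print(model.config.id2label[int(logits.argmax(-1))])
Pairing the auto model with the matching AutoImageProcessor from the same checkpoint keeps resizing and normalization consistent with how the model was trained.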
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BeitConfig configuration class: BeitForImageClassification (BEiT model) BitConfig configuration class: BitForImageClassification (BiT model) CLIPConfig configuration class: CLIPForImageClassification (CLIP model) ConvNextConfig configuration class: ConvNextForImageClassification (ConvNeXT model) ConvNextV2Config configuration class: ConvNextV2ForImageClassification (ConvNeXTV2 model) CvtConfig configuration class: CvtForImageClassification (CvT model) Data2VecVisionConfig configuration class: Data2VecVisionForImageClassification (Data2VecVision model) DeiTConfig configuration class: DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model) DinatConfig configuration class: DinatForImageClassification (DiNAT model) Dinov2Config configuration class: Dinov2ForImageClassification (DINOv2 model) Dinov2WithRegistersConfig configuration class: Dinov2WithRegistersForImageClassification (DINOv2 with Registers model) EfficientFormerConfig configuration class: EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model) EfficientNetConfig configuration class: EfficientNetForImageClassification (EfficientNet model) FocalNetConfig configuration class: FocalNetForImageClassification (FocalNet model) HieraConfig configuration class: HieraForImageClassification (Hiera model) IJepaConfig configuration class: IJepaForImageClassification (I-JEPA model) ImageGPTConfig configuration class: ImageGPTForImageClassification (ImageGPT model) LevitConfig configuration class: LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model) MobileNetV1Config configuration class: MobileNetV1ForImageClassification (MobileNetV1 model) MobileNetV2Config configuration class: MobileNetV2ForImageClassification (MobileNetV2 model) MobileViTConfig configuration class: MobileViTForImageClassification (MobileViT model) MobileViTV2Config configuration class: MobileViTV2ForImageClassification (MobileViTV2 model) NatConfig configuration class: NatForImageClassification (NAT model) PerceiverConfig configuration class: PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model) PoolFormerConfig configuration class: PoolFormerForImageClassification (PoolFormer model) PvtConfig configuration class: PvtForImageClassification (PVT model) PvtV2Config configuration class: PvtV2ForImageClassification (PVTv2 model) RegNetConfig configuration class: RegNetForImageClassification (RegNet model) ResNetConfig configuration class: ResNetForImageClassification (ResNet model) SegformerConfig configuration class: SegformerForImageClassification (SegFormer model) SiglipConfig configuration class: SiglipForImageClassification (SigLIP model) SwiftFormerConfig configuration class: SwiftFormerForImageClassification (SwiftFormer model) SwinConfig configuration class: SwinForImageClassification (Swin Transformer model) Swinv2Config configuration class: Swinv2ForImageClassification (Swin Transformer V2 model) TextNetConfig configuration class: TextNetForImageClassification (TextNet model) TimmWrapperConfig configuration class: TimmWrapperForImageClassification (TimmWrapperModel model) VanConfig configuration class: VanForImageClassification (VAN model) ViTConfig configuration class: ViTForImageClassification (ViT model) 
ViTHybridConfig configuration class: ViTHybridForImageClassification (ViT Hybrid model) ViTMSNConfig configuration class: ViTMSNForImageClassification (ViTMSN model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForImageClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. 
proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : beit — BeitForImageClassification (BEiT model) bit — BitForImageClassification (BiT model) clip — CLIPForImageClassification (CLIP model) convnext — ConvNextForImageClassification (ConvNeXT model) convnextv2 — ConvNextV2ForImageClassification (ConvNeXTV2 model) cvt — CvtForImageClassification (CvT model) data2vec-vision — Data2VecVisionForImageClassification (Data2VecVision model) deit — DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model) dinat — DinatForImageClassification (DiNAT model) dinov2 — Dinov2ForImageClassification (DINOv2 model) dinov2_with_registers — Dinov2WithRegistersForImageClassification (DINOv2 with Registers model) efficientformer — EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model) efficientnet — EfficientNetForImageClassification (EfficientNet model) focalnet — FocalNetForImageClassification (FocalNet model) hiera — HieraForImageClassification (Hiera model) ijepa — IJepaForImageClassification (I-JEPA model) imagegpt — ImageGPTForImageClassification (ImageGPT model) levit — LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model) mobilenet_v1 — MobileNetV1ForImageClassification (MobileNetV1 model) mobilenet_v2 — MobileNetV2ForImageClassification (MobileNetV2 model) mobilevit — MobileViTForImageClassification (MobileViT model) mobilevitv2 — MobileViTV2ForImageClassification (MobileViTV2 model) nat — NatForImageClassification (NAT model) perceiver — PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model) poolformer — PoolFormerForImageClassification (PoolFormer model) pvt — PvtForImageClassification (PVT model) pvt_v2 — PvtV2ForImageClassification (PVTv2 model) regnet — RegNetForImageClassification (RegNet model) resnet — ResNetForImageClassification (ResNet model) segformer — SegformerForImageClassification (SegFormer model) siglip — SiglipForImageClassification (SigLIP model) swiftformer — SwiftFormerForImageClassification (SwiftFormer model) swin — SwinForImageClassification (Swin Transformer model) swinv2 — Swinv2ForImageClassification (Swin Transformer V2 model) textnet — TextNetForImageClassification (TextNet model) timm_wrapper — TimmWrapperForImageClassification (TimmWrapperModel model) van — VanForImageClassification (VAN model) vit — ViTForImageClassification (ViT model) vit_hybrid — ViTHybridForImageClassification (ViT Hybrid model) vit_msn — ViTMSNForImageClassification (ViTMSN model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForImageClassification >>> # Download model and configuration from huggingface.co and cache. 
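>>> # Note: "google-bert/bert-base-cased" is the documentation template's placeholder checkpoint;
>>> # in practice you would pass an image classification checkpoint (e.g. a ViT or ConvNeXT model) here.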
>>> model = AutoModelForImageClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForImageClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForImageClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForImageClassification class transformers. TFAutoModelForImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: ConvNextConfig configuration class: TFConvNextForImageClassification (ConvNeXT model) ConvNextV2Config configuration class: TFConvNextV2ForImageClassification (ConvNeXTV2 model) CvtConfig configuration class: TFCvtForImageClassification (CvT model) Data2VecVisionConfig configuration class: TFData2VecVisionForImageClassification (Data2VecVision model) DeiTConfig configuration class: TFDeiTForImageClassification or TFDeiTForImageClassificationWithTeacher (DeiT model) EfficientFormerConfig configuration class: TFEfficientFormerForImageClassification or TFEfficientFormerForImageClassificationWithTeacher (EfficientFormer model) MobileViTConfig configuration class: TFMobileViTForImageClassification (MobileViT model) RegNetConfig configuration class: TFRegNetForImageClassification (RegNet model) ResNetConfig configuration class: TFResNetForImageClassification (ResNet model) SegformerConfig configuration class: TFSegformerForImageClassification (SegFormer model) SwiftFormerConfig configuration class: TFSwiftFormerForImageClassification (SwiftFormer model) SwinConfig configuration class: TFSwinForImageClassification (Swin Transformer model) ViTConfig configuration class: TFViTForImageClassification (ViT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForImageClassification >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model.
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : convnext — TFConvNextForImageClassification (ConvNeXT model) convnextv2 — TFConvNextV2ForImageClassification (ConvNeXTV2 model) cvt — TFCvtForImageClassification (CvT model) data2vec-vision — TFData2VecVisionForImageClassification (Data2VecVision model) deit — TFDeiTForImageClassification or TFDeiTForImageClassificationWithTeacher (DeiT model) efficientformer — TFEfficientFormerForImageClassification or TFEfficientFormerForImageClassificationWithTeacher (EfficientFormer model) mobilevit — TFMobileViTForImageClassification (MobileViT model) regnet — TFRegNetForImageClassification (RegNet model) resnet — TFResNetForImageClassification (ResNet model) segformer — TFSegformerForImageClassification (SegFormer model) swiftformer — TFSwiftFormerForImageClassification (SwiftFormer model) swin — TFSwinForImageClassification (Swin Transformer model) vit — TFViTForImageClassification (ViT model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForImageClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForImageClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForImageClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForImageClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForImageClassification class transformers. FlaxAutoModelForImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
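As a supplement to the method reference that follows, here is a minimal Flax inference sketch; the checkpoint name and dummy input are assumptions, and from_pt=True can be passed if a checkpoint only ships PyTorch weights:
>>> import jax.numpy as jnp
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, FlaxAutoModelForImageClassification
>>> # Hypothetical checkpoint and dummy image, for illustration only.
>>> checkpoint = "google/vit-base-patch16-224"
>>> processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = FlaxAutoModelForImageClassification.from_pretrained(checkpoint)
>>> image = Image.new("RGB", (224, 224), color="white")  # stand-in for a real image
>>> inputs = processor(images=image, return_tensors="np")
>>> logits = model(**inputs).logits
>>> print(model.config.id2label[int(jnp.argmax(logits, axis=-1)[0])])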
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BeitConfig configuration class: FlaxBeitForImageClassification (BEiT model) Dinov2Config configuration class: FlaxDinov2ForImageClassification (DINOv2 model) RegNetConfig configuration class: FlaxRegNetForImageClassification (RegNet model) ResNetConfig configuration class: FlaxResNetForImageClassification (ResNet model) ViTConfig configuration class: FlaxViTForImageClassification (ViT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForImageClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. 
proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : beit — FlaxBeitForImageClassification (BEiT model) dinov2 — FlaxDinov2ForImageClassification (DINOv2 model) regnet — FlaxRegNetForImageClassification (RegNet model) resnet — FlaxResNetForImageClassification (ResNet model) vit — FlaxViTForImageClassification (ViT model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForImageClassification >>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForImageClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForImageClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForImageClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForVideoClassification class transformers. AutoModelForVideoClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a video classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: TimesformerConfig configuration class: TimesformerForVideoClassification (TimeSformer model) VideoMAEConfig configuration class: VideoMAEForVideoClassification (VideoMAE model) VivitConfig configuration class: VivitForVideoClassification (ViViT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a video classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForVideoClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForVideoClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). 
The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value.
Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a video classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : timesformer — TimesformerForVideoClassification (TimeSformer model) videomae — VideoMAEForVideoClassification (VideoMAE model) vivit — VivitForVideoClassification (ViViT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForVideoClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForVideoClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForVideoClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForVideoClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForKeypointDetection class transformers. AutoModelForKeypointDetection < source > ( *args **kwargs ) AutoModelForMaskedImageModeling class transformers. AutoModelForMaskedImageModeling < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: DeiTConfig configuration class: DeiTForMaskedImageModeling (DeiT model) FocalNetConfig configuration class: FocalNetForMaskedImageModeling (FocalNet model) SwinConfig configuration class: SwinForMaskedImageModeling (Swin Transformer model) Swinv2Config configuration class: Swinv2ForMaskedImageModeling (Swin Transformer V2 model) ViTConfig configuration class: ViTForMaskedImageModeling (ViT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: Copied >>> from transformers import AutoConfig, AutoModelForMaskedImageModeling >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForMaskedImageModeling.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : deit — DeiTForMaskedImageModeling (DeiT model) focalnet — FocalNetForMaskedImageModeling (FocalNet model) swin — SwinForMaskedImageModeling (Swin Transformer model) swinv2 — Swinv2ForMaskedImageModeling (Swin Transformer V2 model) vit — ViTForMaskedImageModeling (ViT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForMaskedImageModeling >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForMaskedImageModeling.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForMaskedImageModeling.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForMaskedImageModeling.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForMaskedImageModeling class transformers. TFAutoModelForMaskedImageModeling < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method. 
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: DeiTConfig configuration class: TFDeiTForMaskedImageModeling (DeiT model) SwinConfig configuration class: TFSwinForMaskedImageModeling (Swin Transformer model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForMaskedImageModeling.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. 
proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : deit — TFDeiTForMaskedImageModeling (DeiT model) swin — TFSwinForMaskedImageModeling (Swin Transformer model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForMaskedImageModeling.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForMaskedImageModeling.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForMaskedImageModeling.from_pretrained( ... 
"./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForObjectDetection class transformers. AutoModelForObjectDetection < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a object detection head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: ConditionalDetrConfig configuration class: ConditionalDetrForObjectDetection (Conditional DETR model) DeformableDetrConfig configuration class: DeformableDetrForObjectDetection (Deformable DETR model) DetaConfig configuration class: DetaForObjectDetection (DETA model) DetrConfig configuration class: DetrForObjectDetection (DETR model) RTDetrConfig configuration class: RTDetrForObjectDetection (RT-DETR model) TableTransformerConfig configuration class: TableTransformerForObjectDetection (Table Transformer model) YolosConfig configuration class: YolosForObjectDetection (YOLOS model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a object detection head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForObjectDetection >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForObjectDetection.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.
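For example, a minimal loading-and-inference sketch (the facebook/detr-resnet-50 checkpoint id and the COCO image URL are illustrative; any checkpoint whose model type appears in the mapping below behaves the same way):
>>> import requests, torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection
>>> processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
>>> model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # Convert raw logits/boxes into per-image detections at the original resolution
>>> detections = processor.post_process_object_detection(
...     outputs, threshold=0.9, target_sizes=[image.size[::-1]]
... )[0]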
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : conditional_detr — ConditionalDetrForObjectDetection (Conditional DETR model) deformable_detr — DeformableDetrForObjectDetection (Deformable DETR model) deta — DetaForObjectDetection (DETA model) detr — DetrForObjectDetection (DETR model) rt_detr — RTDetrForObjectDetection (RT-DETR model) table-transformer — TableTransformerForObjectDetection (Table Transformer model) yolos — YolosForObjectDetection (YOLOS model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForObjectDetection >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForObjectDetection.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForObjectDetection.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForObjectDetection.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForImageSegmentation class transformers. AutoModelForImageSegmentation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a image segmentation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: DetrConfig configuration class: DetrForSegmentation (DETR model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a image segmentation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForImageSegmentation >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForImageSegmentation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. 
A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : detr — DetrForSegmentation (DETR model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForImageSegmentation >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForImageSegmentation.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForImageSegmentation.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForImageSegmentation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForImageToImage class transformers. AutoModelForImageToImage < source > ( *args **kwargs ) AutoModelForSemanticSegmentation class transformers. AutoModelForSemanticSegmentation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
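Before the method reference below, a short usage sketch (the nvidia/segformer-b0-finetuned-ade-512-512 checkpoint id and image URL are illustrative; any checkpoint with a supported semantic segmentation model type works the same way):
>>> import requests, torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation
>>> ckpt = "nvidia/segformer-b0-finetuned-ade-512-512"  # illustrative checkpoint id
>>> processor = AutoImageProcessor.from_pretrained(ckpt)
>>> model = AutoModelForSemanticSegmentation.from_pretrained(ckpt)
>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # (batch, num_labels, h, w) at the model's output resolution
>>> pred = logits.argmax(dim=1)  # per-pixel class ids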
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BeitConfig configuration class: BeitForSemanticSegmentation (BEiT model) DPTConfig configuration class: DPTForSemanticSegmentation (DPT model) Data2VecVisionConfig configuration class: Data2VecVisionForSemanticSegmentation (Data2VecVision model) MobileNetV2Config configuration class: MobileNetV2ForSemanticSegmentation (MobileNetV2 model) MobileViTConfig configuration class: MobileViTForSemanticSegmentation (MobileViT model) MobileViTV2Config configuration class: MobileViTV2ForSemanticSegmentation (MobileViTV2 model) SegformerConfig configuration class: SegformerForSemanticSegmentation (SegFormer model) UperNetConfig configuration class: UperNetForSemanticSegmentation (UPerNet model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForSemanticSegmentation >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForSemanticSegmentation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. 
cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model. 
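As noted in the pretrained_model_name_or_path description above, a local directory written by save_pretrained() can be passed instead of a Hub model id; a minimal sketch (the checkpoint id and directory name are illustrative):
>>> from transformers import AutoModelForSemanticSegmentation
>>> model = AutoModelForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
>>> model.save_pretrained("./my_segmentation_model")  # writes config.json and the weight files
>>> reloaded = AutoModelForSemanticSegmentation.from_pretrained("./my_segmentation_model")  # local directory, no download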
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : beit — BeitForSemanticSegmentation (BEiT model) data2vec-vision — Data2VecVisionForSemanticSegmentation (Data2VecVision model) dpt — DPTForSemanticSegmentation (DPT model) mobilenet_v2 — MobileNetV2ForSemanticSegmentation (MobileNetV2 model) mobilevit — MobileViTForSemanticSegmentation (MobileViT model) mobilevitv2 — MobileViTV2ForSemanticSegmentation (MobileViTV2 model) segformer — SegformerForSemanticSegmentation (SegFormer model) upernet — UperNetForSemanticSegmentation (UPerNet model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForSemanticSegmentation >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForSemanticSegmentation.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForSemanticSegmentation.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForSemanticSegmentation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForSemanticSegmentation class transformers. TFAutoModelForSemanticSegmentation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Data2VecVisionConfig configuration class: TFData2VecVisionForSemanticSegmentation (Data2VecVision model) MobileViTConfig configuration class: TFMobileViTForSemanticSegmentation (MobileViT model) SegformerConfig configuration class: TFSegformerForSemanticSegmentation (SegFormer model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForSemanticSegmentation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : data2vec-vision — TFData2VecVisionForSemanticSegmentation (Data2VecVision model) mobilevit — TFMobileViTForSemanticSegmentation (MobileViT model) segformer — TFSegformerForSemanticSegmentation (SegFormer model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForSemanticSegmentation.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForSemanticSegmentation.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForSemanticSegmentation.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForInstanceSegmentation class transformers. AutoModelForInstanceSegmentation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a instance segmentation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. 
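A brief sketch of how the attn_implementation argument just described is passed (the facebook/maskformer-swin-base-coco checkpoint id is illustrative; "eager" is the implementation every architecture accepts, while "sdpa" and "flash_attention_2" can be requested where the model and installed packages support them):
>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation
>>> config = AutoConfig.from_pretrained("facebook/maskformer-swin-base-coco")  # illustrative checkpoint id
>>> model = AutoModelForInstanceSegmentation.from_config(config, attn_implementation="eager")
>>> # from_pretrained() accepts the same argument:
>>> model = AutoModelForInstanceSegmentation.from_pretrained(
...     "facebook/maskformer-swin-base-coco", attn_implementation="eager"
... )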
Instantiates one of the model classes of the library (with a instance segmentation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForInstanceSegmentation >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForInstanceSegmentation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). 
revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a instance segmentation head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : maskformer — MaskFormerForInstanceSegmentation (MaskFormer model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForInstanceSegmentation >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForInstanceSegmentation.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForInstanceSegmentation.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForInstanceSegmentation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForUniversalSegmentation class transformers. 
AutoModelForUniversalSegmentation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a universal image segmentation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: DetrConfig configuration class: DetrForSegmentation (DETR model) Mask2FormerConfig configuration class: Mask2FormerForUniversalSegmentation (Mask2Former model) MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model) OneFormerConfig configuration class: OneFormerForUniversalSegmentation (OneFormer model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a universal image segmentation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForUniversalSegmentation >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForUniversalSegmentation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. 
In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a universal image segmentation head) from a pretrained model.
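For instance, a minimal panoptic sketch (the facebook/mask2former-swin-large-coco-panoptic checkpoint id and image URL are illustrative; any checkpoint whose model type appears in the mapping below works the same way):
>>> import requests, torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForUniversalSegmentation
>>> ckpt = "facebook/mask2former-swin-large-coco-panoptic"  # illustrative checkpoint id
>>> processor = AutoImageProcessor.from_pretrained(ckpt)
>>> model = AutoModelForUniversalSegmentation.from_pretrained(ckpt)
>>> image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> panoptic = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
>>> # panoptic["segmentation"] is a per-pixel segment id map; panoptic["segments_info"] lists each segment's label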
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : detr — DetrForSegmentation (DETR model) mask2former — Mask2FormerForUniversalSegmentation (Mask2Former model) maskformer — MaskFormerForInstanceSegmentation (MaskFormer model) oneformer — OneFormerForUniversalSegmentation (OneFormer model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForUniversalSegmentation >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForUniversalSegmentation.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForUniversalSegmentation.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForUniversalSegmentation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForZeroShotImageClassification class transformers. AutoModelForZeroShotImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AlignConfig configuration class: AlignModel (ALIGN model) AltCLIPConfig configuration class: AltCLIPModel (AltCLIP model) Blip2Config configuration class: Blip2ForImageTextRetrieval (BLIP-2 model) BlipConfig configuration class: BlipModel (BLIP model) CLIPConfig configuration class: CLIPModel (CLIP model) CLIPSegConfig configuration class: CLIPSegModel (CLIPSeg model) ChineseCLIPConfig configuration class: ChineseCLIPModel (Chinese-CLIP model) SiglipConfig configuration class: SiglipModel (SigLIP model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForZeroShotImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. 
This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : align — AlignModel (ALIGN model) altclip — AltCLIPModel (AltCLIP model) blip — BlipModel (BLIP model) blip-2 — Blip2ForImageTextRetrieval (BLIP-2 model) chinese_clip — ChineseCLIPModel (Chinese-CLIP model) clip — CLIPModel (CLIP model) clipseg — CLIPSegModel (CLIPSeg model) siglip — SiglipModel (SigLIP model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForZeroShotImageClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForZeroShotImageClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForZeroShotImageClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForZeroShotImageClassification class transformers. TFAutoModelForZeroShotImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method. 
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BlipConfig configuration class: TFBlipModel (BLIP model) CLIPConfig configuration class: TFCLIPModel (CLIP model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForZeroShotImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . 
The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : blip — TFBlipModel (BLIP model) clip — TFCLIPModel (CLIP model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForZeroShotImageClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForZeroShotImageClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForZeroShotImageClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForZeroShotObjectDetection class transformers. 
AutoModelForZeroShotObjectDetection < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot object detection head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: GroundingDinoConfig configuration class: GroundingDinoForObjectDetection (Grounding DINO model) OmDetTurboConfig configuration class: OmDetTurboForObjectDetection (OmDet-Turbo model) OwlViTConfig configuration class: OwlViTForObjectDetection (OWL-ViT model) Owlv2Config configuration class: Owlv2ForObjectDetection (OWLv2 model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a zero-shot object detection head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForZeroShotObjectDetection.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. 
In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a zero-shot object detection head) from a pretrained model. 
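Before the model-type selection list that follows, here is a hedged sketch of how this class is commonly used for open-vocabulary detection; the checkpoint id, example image, and text queries are illustrative assumptions rather than part of this reference.

Copied >>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

>>> checkpoint = "google/owlvit-base-patch32"  # assumed checkpoint id
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # assumed example image
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=[["a photo of a cat", "a photo of a remote"]], images=image, return_tensors="pt")
>>> outputs = model(**inputs)  # contains logits and predicted boxes for each text query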
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : grounding-dino — GroundingDinoForObjectDetection (Grounding DINO model) omdet-turbo — OmDetTurboForObjectDetection (OmDet-Turbo model) owlv2 — Owlv2ForObjectDetection (OWLv2 model) owlvit — OwlViTForObjectDetection (OWL-ViT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForZeroShotObjectDetection.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForZeroShotObjectDetection.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForZeroShotObjectDetection.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) Audio The following auto classes are available for the following audio tasks. AutoModelForAudioClassification class transformers. AutoModelForAudioClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a audio classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: ASTConfig configuration class: ASTForAudioClassification (Audio Spectrogram Transformer model) Data2VecAudioConfig configuration class: Data2VecAudioForSequenceClassification (Data2VecAudio model) HubertConfig configuration class: HubertForSequenceClassification (Hubert model) SEWConfig configuration class: SEWForSequenceClassification (SEW model) SEWDConfig configuration class: SEWDForSequenceClassification (SEW-D model) UniSpeechConfig configuration class: UniSpeechForSequenceClassification (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatForSequenceClassification (UniSpeechSat model) Wav2Vec2BertConfig configuration class: Wav2Vec2BertForSequenceClassification (Wav2Vec2-BERT model) Wav2Vec2Config configuration class: Wav2Vec2ForSequenceClassification (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForSequenceClassification (WavLM model) WhisperConfig configuration class: WhisperForAudioClassification (Whisper model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. 
The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a audio classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForAudioClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForAudioClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). 
revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a audio classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : audio-spectrogram-transformer — ASTForAudioClassification (Audio Spectrogram Transformer model) data2vec-audio — Data2VecAudioForSequenceClassification (Data2VecAudio model) hubert — HubertForSequenceClassification (Hubert model) sew — SEWForSequenceClassification (SEW model) sew-d — SEWDForSequenceClassification (SEW-D model) unispeech — UniSpeechForSequenceClassification (UniSpeech model) unispeech-sat — UniSpeechSatForSequenceClassification (UniSpeechSat model) wav2vec2 — Wav2Vec2ForSequenceClassification (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2BertForSequenceClassification (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model) wavlm — WavLMForSequenceClassification (WavLM model) whisper — WhisperForAudioClassification (Whisper model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForAudioClassification >>> # Download model and configuration from huggingface.co and cache. 
>>> model = AutoModelForAudioClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForAudioClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForAudioClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForAudioClassification class transformers. TFAutoModelForAudioClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Wav2Vec2Config configuration class: TFWav2Vec2ForSequenceClassification (Wav2Vec2 model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with an audio classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForAudioClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForAudioClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a audio classification head) from a pretrained model. 
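Many audio classification checkpoints on the Hub publish PyTorch weights only, so the TensorFlow class is often loaded with from_pt=True so the weights are converted on the fly. A hedged sketch, assuming a Wav2Vec2 sequence classification checkpoint (the repository id is an assumption):

Copied >>> from transformers import TFAutoModelForAudioClassification
>>> # Assumed checkpoint id; from_pt=True converts the PyTorch weights during loading.
>>> model = TFAutoModelForAudioClassification.from_pretrained(
...     "superb/wav2vec2-base-superb-ks", from_pt=True
... )
>>> type(model).__name__
'TFWav2Vec2ForSequenceClassification'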
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : wav2vec2 — TFWav2Vec2ForSequenceClassification (Wav2Vec2 model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForAudioClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForAudioClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForAudioClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForAudioClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForAudioFrameClassification class transformers. AutoModelForAudioFrameClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio frame (token) classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Data2VecAudioConfig configuration class: Data2VecAudioForAudioFrameClassification (Data2VecAudio model) UniSpeechSatConfig configuration class: UniSpeechSatForAudioFrameClassification (UniSpeechSat model) Wav2Vec2BertConfig configuration class: Wav2Vec2BertForAudioFrameClassification (Wav2Vec2-BERT model) Wav2Vec2Config configuration class: Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForAudioFrameClassification (WavLM model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with an audio frame (token) classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForAudioFrameClassification >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForAudioFrameClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a audio frame (token) classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : data2vec-audio — Data2VecAudioForAudioFrameClassification (Data2VecAudio model) unispeech-sat — UniSpeechSatForAudioFrameClassification (UniSpeechSat model) wav2vec2 — Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2BertForAudioFrameClassification (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model) wavlm — WavLMForAudioFrameClassification (WavLM model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForAudioFrameClassification >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForAudioFrameClassification.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForAudioFrameClassification.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForAudioFrameClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForCTC class transformers. AutoModelForCTC < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
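As a hedged, end-to-end orientation before the method reference that follows: a CTC model maps raw audio to per-frame character logits, which are then greedily decoded. The checkpoint id and the silent dummy waveform below are assumptions used only for illustration.

Copied >>> import torch
>>> from transformers import AutoProcessor, AutoModelForCTC

>>> checkpoint = "facebook/wav2vec2-base-960h"  # assumed checkpoint id
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForCTC.from_pretrained(checkpoint)

>>> # One second of silence at 16 kHz stands in for real speech here.
>>> dummy_audio = torch.zeros(16000).numpy()
>>> inputs = processor(dummy_audio, sampling_rate=16000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)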
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Data2VecAudioConfig configuration class: Data2VecAudioForCTC (Data2VecAudio model) HubertConfig configuration class: HubertForCTC (Hubert model) MCTCTConfig configuration class: MCTCTForCTC (M-CTC-T model) SEWConfig configuration class: SEWForCTC (SEW model) SEWDConfig configuration class: SEWDForCTC (SEW-D model) UniSpeechConfig configuration class: UniSpeechForCTC (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatForCTC (UniSpeechSat model) Wav2Vec2BertConfig configuration class: Wav2Vec2BertForCTC (Wav2Vec2-BERT model) Wav2Vec2Config configuration class: Wav2Vec2ForCTC (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForCTC (WavLM model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForCTC >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForCTC.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. 
In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model. 
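The kwargs behaviour described above can be observed directly: any keyword that matches a configuration attribute overrides the config before the weights are loaded, and the rest are forwarded to the model's __init__. A hedged sketch (the checkpoint id and attribute values are assumptions):

Copied >>> from transformers import AutoModelForCTC

>>> model = AutoModelForCTC.from_pretrained(
...     "facebook/wav2vec2-base-960h",   # assumed checkpoint id
...     ctc_loss_reduction="mean",       # matches a config attribute, so it overrides the loaded config
...     output_hidden_states=True,
... )
>>> model.config.ctc_loss_reduction
'mean'
>>> model.config.output_hidden_states
True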
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : data2vec-audio — Data2VecAudioForCTC (Data2VecAudio model) hubert — HubertForCTC (Hubert model) mctct — MCTCTForCTC (M-CTC-T model) sew — SEWForCTC (SEW model) sew-d — SEWDForCTC (SEW-D model) unispeech — UniSpeechForCTC (UniSpeech model) unispeech-sat — UniSpeechSatForCTC (UniSpeechSat model) wav2vec2 — Wav2Vec2ForCTC (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2BertForCTC (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model) wavlm — WavLMForCTC (WavLM model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForCTC >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForCTC.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForCTC.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForCTC.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForSpeechSeq2Seq class transformers. AutoModelForSpeechSeq2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: MoonshineConfig configuration class: MoonshineForConditionalGeneration (Moonshine model) Pop2PianoConfig configuration class: Pop2PianoForConditionalGeneration (Pop2Piano model) SeamlessM4TConfig configuration class: SeamlessM4TForSpeechToText (SeamlessM4T model) SeamlessM4Tv2Config configuration class: SeamlessM4Tv2ForSpeechToText (SeamlessM4Tv2 model) Speech2TextConfig configuration class: Speech2TextForConditionalGeneration (Speech2Text model) SpeechEncoderDecoderConfig configuration class: SpeechEncoderDecoderModel (Speech Encoder decoder model) SpeechT5Config configuration class: SpeechT5ForSpeechToText (SpeechT5 model) WhisperConfig configuration class: WhisperForConditionalGeneration (Whisper model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration. 
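To make from_config concrete, here is a small hedged sketch (the openai/whisper-tiny checkpoint is an assumption used only to fetch a configuration). Only the architecture is built; the weights are freshly initialized, as the note below spells out.
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq
>>> config = AutoConfig.from_pretrained("openai/whisper-tiny")  # downloads the config only
>>> model = AutoModelForSpeechSeq2Seq.from_config(config)  # architecture from the config, random weights
>>> type(model).__name__
'WhisperForConditionalGeneration'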
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForSpeechSeq2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : moonshine — MoonshineForConditionalGeneration (Moonshine model) pop2piano — Pop2PianoForConditionalGeneration (Pop2Piano model) seamless_m4t — SeamlessM4TForSpeechToText (SeamlessM4T model) seamless_m4t_v2 — SeamlessM4Tv2ForSpeechToText (SeamlessM4Tv2 model) speech-encoder-decoder — SpeechEncoderDecoderModel (Speech Encoder decoder model) speech_to_text — Speech2TextForConditionalGeneration (Speech2Text model) speecht5 — SpeechT5ForSpeechToText (SpeechT5 model) whisper — WhisperForConditionalGeneration (Whisper model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() . Examples: Copied >>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForSpeechSeq2Seq.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForSpeechSeq2Seq.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForSpeechSeq2Seq.from_pretrained( ...
"./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForSpeechSeq2Seq class transformers. TFAutoModelForSpeechSeq2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Speech2TextConfig configuration class: TFSpeech2TextForConditionalGeneration (Speech2Text model) WhisperConfig configuration class: TFWhisperForConditionalGeneration (Whisper model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForSpeechSeq2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : speech_to_text — TFSpeech2TextForConditionalGeneration (Speech2Text model) whisper — TFWhisperForConditionalGeneration (Whisper model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq >>> # Download model and configuration from huggingface.co and cache. 
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForSpeechSeq2Seq class transformers. FlaxAutoModelForSpeechSeq2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: SpeechEncoderDecoderConfig configuration class: FlaxSpeechEncoderDecoderModel (Speech Encoder decoder model) WhisperConfig configuration class: FlaxWhisperForConditionalGeneration (Whisper model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForSpeechSeq2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model. 
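As a hedged sketch of the Flax variant (openai/whisper-tiny is an assumed checkpoint; from_pt=True converts the published PyTorch weights on the fly, which requires PyTorch to be installed, and can be dropped when a repository ships native Flax weights):
>>> from transformers import FlaxAutoModelForSpeechSeq2Seq
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", from_pt=True)
>>> type(model).__name__  # selected via config.model_type, see the mapping below
'FlaxWhisperForConditionalGeneration'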
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : speech-encoder-decoder — FlaxSpeechEncoderDecoderModel (Speech Encoder decoder model) whisper — FlaxWhisperForConditionalGeneration (Whisper model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForAudioXVector class transformers. AutoModelForAudioXVector < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio retrieval via x-vector head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Data2VecAudioConfig configuration class: Data2VecAudioForXVector (Data2VecAudio model) UniSpeechSatConfig configuration class: UniSpeechSatForXVector (UniSpeechSat model) Wav2Vec2BertConfig configuration class: Wav2Vec2BertForXVector (Wav2Vec2-BERT model) Wav2Vec2Config configuration class: Wav2Vec2ForXVector (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForXVector (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForXVector (WavLM model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with an audio retrieval via x-vector head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForAudioXVector >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForAudioXVector.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ .
A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an audio retrieval via x-vector head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : data2vec-audio — Data2VecAudioForXVector (Data2VecAudio model) unispeech-sat — UniSpeechSatForXVector (UniSpeechSat model) wav2vec2 — Wav2Vec2ForXVector (Wav2Vec2 model) wav2vec2-bert — Wav2Vec2BertForXVector (Wav2Vec2-BERT model) wav2vec2-conformer — Wav2Vec2ConformerForXVector (Wav2Vec2-Conformer model) wavlm — WavLMForXVector (WavLM model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() . Examples: Copied >>> from transformers import AutoConfig, AutoModelForAudioXVector >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForAudioXVector.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForAudioXVector.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForAudioXVector.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForTextToSpectrogram class transformers. AutoModelForTextToSpectrogram < source > ( *args **kwargs ) AutoModelForTextToWaveform class transformers. AutoModelForTextToWaveform < source > ( *args **kwargs ) Multimodal The following auto classes are available for the following multimodal tasks. AutoModelForTableQuestionAnswering class transformers. AutoModelForTableQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
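Before the method-level details below, a minimal table question answering sketch. The toy table and question are invented for illustration; google/tapas-base-finetuned-wtq is the same checkpoint used in the examples that follow, and depending on your transformers version TAPAS may need extra dependencies (e.g. torch-scatter in older releases).
>>> import pandas as pd
>>> import torch
>>> from transformers import AutoModelForTableQuestionAnswering, AutoTokenizer
>>> name = "google/tapas-base-finetuned-wtq"
>>> tokenizer = AutoTokenizer.from_pretrained(name)
>>> model = AutoModelForTableQuestionAnswering.from_pretrained(name)  # resolves to TapasForQuestionAnswering
>>> table = pd.DataFrame({"City": ["Paris", "Lyon"], "Population": ["2161000", "513000"]})  # cells must be strings
>>> inputs = tokenizer(table=table, queries=["Which city has the larger population?"], return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)  # outputs.logits (cell selection) and outputs.logits_aggregation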
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: TapasConfig configuration class: TapasForQuestionAnswering (TAPAS model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a table question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google/tapas-base-finetuned-wtq" ) >>> model = AutoModelForTableQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. 
All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : tapas — TapasForQuestionAnswering (TAPAS model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() . Examples: Copied >>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering >>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained( "google/tapas-base-finetuned-wtq" ) >>> # Update configuration during loading >>> model = AutoModelForTableQuestionAnswering.from_pretrained( "google/tapas-base-finetuned-wtq" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/tapas_tf_model_config.json" ) >>> model = AutoModelForTableQuestionAnswering.from_pretrained( ... "./tf_model/tapas_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForTableQuestionAnswering class transformers. TFAutoModelForTableQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: TapasConfig configuration class: TFTapasForQuestionAnswering (TAPAS model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a table question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google/tapas-base-finetuned-wtq" ) >>> model = TFAutoModelForTableQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info ( bool , optional , defaults to False ) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only ( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
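The class-selection rule described next can be checked directly from the configuration; a short sketch using the same checkpoint as the examples below (TF TAPAS additionally expects tensorflow_probability to be installed):
>>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering
>>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> config.model_type  # "tapas" maps to TFTapasForQuestionAnswering in the table below
'tapas'
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
>>> type(model).__name__
'TFTapasForQuestionAnswering'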
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : tapas — TFTapasForQuestionAnswering (TAPAS model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForTableQuestionAnswering.from_pretrained( "google/tapas-base-finetuned-wtq" ) >>> # Update configuration during loading >>> model = TFAutoModelForTableQuestionAnswering.from_pretrained( "google/tapas-base-finetuned-wtq" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/tapas_pt_model_config.json" ) >>> model = TFAutoModelForTableQuestionAnswering.from_pretrained( ... "./pt_model/tapas_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForDocumentQuestionAnswering class transformers. AutoModelForDocumentQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: LayoutLMConfig configuration class: LayoutLMForQuestionAnswering (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a document question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "impira/layoutlm-document-qa" , revision= "52e01b3" ) >>> model = AutoModelForDocumentQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. 
This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). 
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : layoutlm — LayoutLMForQuestionAnswering (LayoutLM model) layoutlmv2 — LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained( "impira/layoutlm-document-qa" , revision= "52e01b3" ) >>> # Update configuration during loading >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained( "impira/layoutlm-document-qa" , revision= "52e01b3" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/layoutlm_tf_model_config.json" ) >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained( ... "./tf_model/layoutlm_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForDocumentQuestionAnswering class transformers. TFAutoModelForDocumentQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: LayoutLMConfig configuration class: TFLayoutLMForQuestionAnswering (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. 
Instantiates one of the model classes of the library (with a document question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "impira/layoutlm-document-qa" , revision= "52e01b3" ) >>> model = TFAutoModelForDocumentQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : layoutlm — TFLayoutLMForQuestionAnswering (LayoutLM model) layoutlmv3 — TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained( "impira/layoutlm-document-qa" , revision= "52e01b3" ) >>> # Update configuration during loading >>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained( "impira/layoutlm-document-qa" , revision= "52e01b3" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/layoutlm_pt_model_config.json" ) >>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained( ... "./pt_model/layoutlm_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForVisualQuestionAnswering class transformers. AutoModelForVisualQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a visual question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
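In practice this auto class is usually paired with AutoProcessor for end-to-end inference. The snippet below is a minimal sketch, assuming the dandelin/vilt-b32-finetuned-vqa checkpoint used in the reference examples further down and a publicly hosted example image URL; both are illustrative choices rather than requirements.
Copied >>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering
>>> checkpoint = "dandelin/vilt-b32-finetuned-vqa"  # checkpoint reused from the examples below
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained(checkpoint)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image URL (assumption)
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(image, "How many cats are there?", return_tensors="pt")
>>> logits = model(**inputs).logits  # one score per answer in the checkpoint's answer vocabulary
>>> print(model.config.id2label[logits.argmax(-1).item()])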
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model) BlipConfig configuration class: BlipForQuestionAnswering (BLIP model) ViltConfig configuration class: ViltForQuestionAnswering (ViLT model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a visual question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "dandelin/vilt-b32-finetuned-vqa" ) >>> model = AutoModelForVisualQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a visual question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : blip — BlipForQuestionAnswering (BLIP model) blip-2 — Blip2ForConditionalGeneration (BLIP-2 model) vilt — ViltForQuestionAnswering (ViLT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). 
To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForVisualQuestionAnswering.from_pretrained( "dandelin/vilt-b32-finetuned-vqa" ) >>> # Update configuration during loading >>> model = AutoModelForVisualQuestionAnswering.from_pretrained( "dandelin/vilt-b32-finetuned-vqa" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/vilt_tf_model_config.json" ) >>> model = AutoModelForVisualQuestionAnswering.from_pretrained( ... "./tf_model/vilt_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) AutoModelForVision2Seq class transformers. AutoModelForVision2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model) BlipConfig configuration class: BlipForConditionalGeneration (BLIP model) ChameleonConfig configuration class: ChameleonForConditionalGeneration (Chameleon model) GitConfig configuration class: GitForCausalLM (GIT model) Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model) Idefics3Config configuration class: Idefics3ForConditionalGeneration (Idefics3 model) InstructBlipConfig configuration class: InstructBlipForConditionalGeneration (InstructBLIP model) InstructBlipVideoConfig configuration class: InstructBlipVideoForConditionalGeneration (InstructBlipVideo model) Kosmos2Config configuration class: Kosmos2ForConditionalGeneration (KOSMOS-2 model) LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model) LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model) LlavaNextVideoConfig configuration class: LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model) LlavaOnevisionConfig configuration class: LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model) MllamaConfig configuration class: MllamaForConditionalGeneration (Mllama model) PaliGemmaConfig configuration class: PaliGemmaForConditionalGeneration (PaliGemma model) Pix2StructConfig configuration class: Pix2StructForConditionalGeneration (Pix2Struct model) Qwen2VLConfig configuration class: Qwen2VLForConditionalGeneration (Qwen2VL model) VideoLlavaConfig configuration class: VideoLlavaForConditionalGeneration (VideoLlava model) VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model) VisionEncoderDecoderConfig configuration class: VisionEncoderDecoderModel (Vision Encoder decoder model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. 
The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForVision2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForVision2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). 
revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will first be passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : blip — BlipForConditionalGeneration (BLIP model) blip-2 — Blip2ForConditionalGeneration (BLIP-2 model) chameleon — ChameleonForConditionalGeneration (Chameleon model) git — GitForCausalLM (GIT model) idefics2 — Idefics2ForConditionalGeneration (Idefics2 model) idefics3 — Idefics3ForConditionalGeneration (Idefics3 model) instructblip — InstructBlipForConditionalGeneration (InstructBLIP model) instructblipvideo — InstructBlipVideoForConditionalGeneration (InstructBlipVideo model) kosmos-2 — Kosmos2ForConditionalGeneration (KOSMOS-2 model) llava — LlavaForConditionalGeneration (LLaVa model) llava_next — LlavaNextForConditionalGeneration (LLaVA-NeXT model) llava_next_video — LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model) llava_onevision — LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model) mllama — MllamaForConditionalGeneration (Mllama model) paligemma — PaliGemmaForConditionalGeneration (PaliGemma model) pix2struct — Pix2StructForConditionalGeneration (Pix2Struct model) qwen2_vl — Qwen2VLForConditionalGeneration (Qwen2VL model) video_llava — VideoLlavaForConditionalGeneration (VideoLlava model) vipllava — VipLlavaForConditionalGeneration (VipLlava model) vision-encoder-decoder — VisionEncoderDecoderModel (Vision Encoder decoder model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForVision2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForVision2Seq.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForVision2Seq.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForVision2Seq.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... ) TFAutoModelForVision2Seq class transformers. TFAutoModelForVision2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: BlipConfig configuration class: TFBlipForConditionalGeneration (BLIP model) VisionEncoderDecoderConfig configuration class: TFVisionEncoderDecoderModel (Vision Encoder decoder model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). 
By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForVision2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = TFAutoModelForVision2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : blip — TFBlipForConditionalGeneration (BLIP model) vision-encoder-decoder — TFVisionEncoderDecoderModel (Vision Encoder decoder model) Examples: Copied >>> from transformers import AutoConfig, TFAutoModelForVision2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForVision2Seq.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = TFAutoModelForVision2Seq.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = TFAutoModelForVision2Seq.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) FlaxAutoModelForVision2Seq class transformers. FlaxAutoModelForVision2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: VisionEncoderDecoderConfig configuration class: FlaxVisionEncoderDecoderModel (Vision Encoder decoder model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = FlaxAutoModelForVision2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin ). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt ( bool , optional , defaults to False ) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. 
output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : vision-encoder-decoder — FlaxVisionEncoderDecoderModel (Vision Encoder decoder model) Examples: Copied >>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForVision2Seq.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = FlaxAutoModelForVision2Seq.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained( "./pt_model/bert_pt_model_config.json" ) >>> model = FlaxAutoModelForVision2Seq.from_pretrained( ... "./pt_model/bert_pytorch_model.bin" , from_pt= True , config=config ... ) AutoModelForImageTextToText class transformers. 
AutoModelForImageTextToText < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a image-text-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config ( PretrainedConfig ) — The model class to instantiate is selected based on the configuration class: AriaConfig configuration class: AriaForConditionalGeneration (Aria model) Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model) BlipConfig configuration class: BlipForConditionalGeneration (BLIP model) ChameleonConfig configuration class: ChameleonForConditionalGeneration (Chameleon model) Emu3Config configuration class: Emu3ForConditionalGeneration (Emu3 model) FuyuConfig configuration class: FuyuForCausalLM (Fuyu model) GitConfig configuration class: GitForCausalLM (GIT model) Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model) Idefics3Config configuration class: Idefics3ForConditionalGeneration (Idefics3 model) IdeficsConfig configuration class: IdeficsForVisionText2Text (IDEFICS model) InstructBlipConfig configuration class: InstructBlipForConditionalGeneration (InstructBLIP model) Kosmos2Config configuration class: Kosmos2ForConditionalGeneration (KOSMOS-2 model) LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model) LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model) LlavaOnevisionConfig configuration class: LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model) MllamaConfig configuration class: MllamaForConditionalGeneration (Mllama model) PaliGemmaConfig configuration class: PaliGemmaForConditionalGeneration (PaliGemma model) Pix2StructConfig configuration class: Pix2StructForConditionalGeneration (Pix2Struct model) PixtralVisionConfig configuration class: LlavaForConditionalGeneration (Pixtral model) Qwen2VLConfig configuration class: Qwen2VLForConditionalGeneration (Qwen2VL model) UdopConfig configuration class: UdopForConditionalGeneration (UDOP model) VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model) VisionEncoderDecoderConfig configuration class: VisionEncoderDecoderModel (Vision Encoder decoder model) attn_implementation ( str , optional ) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention ), or "flash_attention_2" (using Dao-AILab/flash-attention ). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation. Instantiates one of the model classes of the library (with a image-text-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: Copied >>> from transformers import AutoConfig, AutoModelForImageTextToText >>> # Download configuration from huggingface.co and cache. 
>>> config = AutoConfig.from_pretrained( "google-bert/bert-base-cased" ) >>> model = AutoModelForImageTextToText.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path ( str or os.PathLike ) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. A path to a directory containing model weights saved using save_pretrained() , e.g., ./my_model_directory/ . A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index ). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional ) — Will be passed along to the underlying model __init__() method. config ( PretrainedConfig , optional ) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict ( Dict[str, torch.Tensor] , optional ) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir ( str or os.PathLike , optional ) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf ( bool , optional , defaults to False ) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download ( bool , optional , defaults to False ) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers. proxies ( Dict[str, str] , optional ) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'} . The proxies are used on each request. output_loading_info( bool , optional , defaults to False ) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only( bool , optional , defaults to False ) — Whether or not to only look at local files (e.g., not try downloading the model). revision ( str , optional , defaults to "main" ) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code ( bool , optional , defaults to False ) — Whether or not to allow for custom models defined on the Hub in their own modeling files. 
This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision ( str , optional , defaults to "main" ) — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional ) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True ). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config , **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function ( from_pretrained() ). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a image-text-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path : aria — AriaForConditionalGeneration (Aria model) blip — BlipForConditionalGeneration (BLIP model) blip-2 — Blip2ForConditionalGeneration (BLIP-2 model) chameleon — ChameleonForConditionalGeneration (Chameleon model) emu3 — Emu3ForConditionalGeneration (Emu3 model) fuyu — FuyuForCausalLM (Fuyu model) git — GitForCausalLM (GIT model) idefics — IdeficsForVisionText2Text (IDEFICS model) idefics2 — Idefics2ForConditionalGeneration (Idefics2 model) idefics3 — Idefics3ForConditionalGeneration (Idefics3 model) instructblip — InstructBlipForConditionalGeneration (InstructBLIP model) kosmos-2 — Kosmos2ForConditionalGeneration (KOSMOS-2 model) llava — LlavaForConditionalGeneration (LLaVa model) llava_next — LlavaNextForConditionalGeneration (LLaVA-NeXT model) llava_onevision — LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model) mllama — MllamaForConditionalGeneration (Mllama model) paligemma — PaliGemmaForConditionalGeneration (PaliGemma model) pix2struct — Pix2StructForConditionalGeneration (Pix2Struct model) pixtral — LlavaForConditionalGeneration (Pixtral model) qwen2_vl — Qwen2VLForConditionalGeneration (Qwen2VL model) udop — UdopForConditionalGeneration (UDOP model) vipllava — VipLlavaForConditionalGeneration (VipLlava model) vision-encoder-decoder — VisionEncoderDecoderModel (Vision Encoder decoder model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: Copied >>> from transformers import AutoConfig, AutoModelForImageTextToText >>> # Download model and configuration from huggingface.co and cache. 
>>> model = AutoModelForImageTextToText.from_pretrained( "google-bert/bert-base-cased" ) >>> # Update configuration during loading >>> model = AutoModelForImageTextToText.from_pretrained( "google-bert/bert-base-cased" , output_attentions= True ) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained( "./tf_model/bert_tf_model_config.json" ) >>> model = AutoModelForImageTextToText.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index" , from_tf= True , config=config ... )
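The reference snippets above reuse a text-only placeholder checkpoint. As a complementary, minimal sketch, the same auto class can be pointed at an actual image-text-to-text checkpoint; the Salesforce/blip-image-captioning-base checkpoint and the image URL below are illustrative assumptions, not the only options.
Copied >>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> checkpoint = "Salesforce/blip-image-captioning-base"  # illustrative checkpoint (assumption)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(checkpoint)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image URL (assumption)
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, text="a photo of", return_tensors="pt")  # optional text prefix to condition the caption
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])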
Spaces_Overview.txt
Spaces Overview Hugging Face Spaces make it easy for you to create and deploy ML-powered demos in minutes. Watch the following video for a quick introduction to Spaces: In the following sections, you’ll learn the basics of creating a Space, configuring it, and deploying your code to it. Creating a new Space To make a new Space , visit the Spaces main page and click on Create new Space . Along with choosing a name for your Space, selecting an optional license, and setting your Space’s visibility, you’ll be prompted to choose the SDK for your Space. The Hub offers four SDK options: Gradio, Streamlit, Docker and static HTML. If you select “Gradio” as your SDK, you’ll be navigated to a new repo showing the following page: Under the hood, Spaces stores your code inside a git repository, just like the model and dataset repositories. Thanks to this, the same tools we use for all the other repositories on the Hub ( git and git-lfs ) also work for Spaces. Follow the same flow as in Getting Started with Repositories to add files to your Space. Each time a new commit is pushed, the Space will automatically rebuild and restart. For step-by-step tutorials on creating your first Space, see the guides below: Creating a Gradio Space Creating a Streamlit Space Creating a Docker Space Hardware resources Each Spaces environment is limited to 16GB RAM, 2 CPU cores and 50GB of (not persistent) disk space by default, which you can use free of charge.
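If you need more than the free tier, hardware can be requested either from the Settings UI described just below or programmatically. The snippet that follows is a minimal sketch assuming a recent huggingface_hub release; the repo id and token are placeholders for a Space you own.
Copied
from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # write token of the Space owner (placeholder)
# Upgrade the Space to a small T4 GPU...
api.request_space_hardware(repo_id="your-username/your-space", hardware="t4-small")
# ...and move it back to the free CPU tier when you are done
api.request_space_hardware(repo_id="your-username/your-space", hardware="cpu-basic")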
You can upgrade to better hardware, including a variety of GPU accelerators and persistent storage, for a competitive price . To request an upgrade, please click the Settings button in your Space and select your preferred hardware environment.
Hardware                 GPU Memory   CPU       Memory   Disk      Hourly Price
CPU Basic                -            2 vCPU    16 GB    50 GB     Free!
CPU Upgrade              -            8 vCPU    32 GB    50 GB     $0.03
Nvidia T4 - small        16GB         4 vCPU    15 GB    50 GB     $0.60
Nvidia T4 - medium       16GB         8 vCPU    30 GB    100 GB    $0.90
Nvidia A10G - small      24GB         4 vCPU    15 GB    110 GB    $1.05
Nvidia A10G - large      24GB         12 vCPU   46 GB    200 GB    $3.15
2x Nvidia A10G - large   48GB         24 vCPU   92 GB    1000 GB   $5.70
4x Nvidia A10G - large   96GB         48 vCPU   184 GB   2000 GB   $10.80
Nvidia A100 - large      40GB         12 vCPU   142 GB   1000 GB   $4.13
Storage tier          Size                Persistent   Monthly price
Ephemeral (default)   50GB                No           Free!
Small                 Ephemeral + 20GB    Yes          $5
Medium                Ephemeral + 150GB   Yes          $25
Large                 Ephemeral + 1TB     Yes          $100
Note: Find more detailed and comprehensive pricing information on our pricing page . Do you have an awesome Space but need help covering the hardware upgrade costs? We love helping out those with an innovative Space so please feel free to apply for a community GPU grant using the link in the Settings tab of your Space and see if yours makes the cut! Read more in our dedicated sections on Spaces GPU Upgrades and Spaces Storage Upgrades . Managing secrets and environment variables If your app requires environment variables (for instance, secret keys or tokens), do not hard-code them inside your app! Instead, go to the Settings page of your Space repository and add a new variable or secret . Use variables for non-sensitive configuration values and secrets for access tokens, API keys, or any sensitive values or credentials: Variables are publicly accessible and viewable, and will be automatically added to Spaces duplicated from yours. Secrets are private and their value cannot be read from the Space’s settings page once set; they won’t be added to Spaces duplicated from your repository. Accessing secrets and variables is different depending on your Space SDK: For Static Spaces, both are available through client-side JavaScript in window.huggingface.variables For Docker Spaces, check out environment management with Docker For Streamlit Spaces, secrets are exposed to your app through Streamlit Secrets Management , and public variables are directly available as environment variables For other Spaces, both are exposed to your app as environment variables. Here is a very simple example of accessing the previously declared MODEL_REPO_ID variable in Python (it would be the same for secrets):
Copied
import os
print(os.getenv('MODEL_REPO_ID'))
Spaces owners are warned when our Spaces Secrets Scanner finds hard-coded secrets . Duplicating a Space Duplicating a Space can be useful if you want to build a new demo using another demo as an initial template. Duplicated Spaces can also be useful if you want to have an individual Upgraded Space for your use with fast inference. If you want to duplicate a Space, you can click the three dots at the top right of the space and click Duplicate this Space . Once you do this, you will be able to change the following attributes: Owner: The duplicated Space can be under your account or any organization in which you have write access Space name Visibility: The Space is private by default.
Read more about private repositories here . Hardware: You can choose the hardware on which the Space will be running. Read more about hardware upgrades here . Storage: If the original repo uses persistent storage, you will be prompted to choose a storage tier. Read more about persistent storage here . Secrets and variables: If the original repo has set some secrets and variables, you’ll be able to set them while duplicating the repo. Some Spaces might have environment variables that you may need to set up. In these cases, the duplicate workflow will auto-populate the public Variables from the source Space, and give you a warning about setting up the Secrets. The duplicated Space will use a free CPU hardware by default, but you can later upgrade if needed. Networking If your Space needs to make any network requests, you can make requests through the standard HTTP and HTTPS ports (80 and 443) along with port 8080. Any requests going to other ports will be blocked. Lifecycle management On free hardware, your Space will “go to sleep” and stop executing after a period of time if unused. If you wish for your Space to run indefinitely, consider upgrading to a paid hardware . You can also manually pause your Space from the Settings tab. A paused Space stops executing until manually restarted by its owner. Paused time is not billed. Helper environment variables In some cases, you might be interested in having programmatic access to the Space author or repository name. This feature is particularly useful when you expect users to duplicate your Space. To help with this, Spaces exposes different environment variables at runtime. Given a Space osanseviero/i-like-flan : CPU_CORES : 4 MEMORY : 15Gi SPACE_AUTHOR_NAME : osanseviero SPACE_REPO_NAME : i-like-flan SPACE_TITLE : I Like Flan (specified in the README file) SPACE_ID : osanseviero/i-like-flan SPACE_HOST : osanseviero-i-like-flan.hf.space SPACE_CREATOR_USER_ID : 6032802e1f993496bc14d9e3 - This is the ID of the user that originally created the Space. It’s useful if the Space is under an organization. You can get the user information with an API call to https://huggingface.co/api/users/{SPACE_CREATOR_USER_ID}/overview . In case OAuth is enabled for your Space, the following variables will also be available: OAUTH_CLIENT_ID : the client ID of your OAuth app (public) OAUTH_CLIENT_SECRET : the client secret of your OAuth app OAUTH_SCOPES : scopes accessible by your OAuth app. Currently, this is always "openid profile" . OPENID_PROVIDER_URL : The URL of the OpenID provider. The OpenID metadata will be available at {OPENID_PROVIDER_URL}/.well-known/openid-configuration . Clone the Repository You can easily clone your Space repo locally. Start by clicking on the dropdown menu in the top right of your Space page: Select “Clone repository”, and then you’ll be able to follow the instructions to clone the Space repo to your local machine using HTTPS or SSH. Linking Models and Datasets on the Hub You can showcase all the models and datasets that your Space links to by adding their identifier in your Space’s README metadata. To do so, you can define them under the models and datasets keys. In addition to listing the artefacts in the README file, you can also record them in any .py , .ini or .html file as well. We’ll parse it auto-magically! 
Here’s an example linking two models from a space: Copied title : My lovely space emoji : 🤗 colorFrom : blue colorTo : green sdk : docker pinned : false models : - reach-vb/musicgen-large-fp16-endpoint - reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft
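Because the helper environment variables described earlier on this page are plain environment variables, reading them from inside a running Space only needs the standard library. The following is a minimal sketch under the assumption that it runs inside a Space; on a local machine most of these variables will simply be unset.

import os

# Helper variables documented above; they are only populated inside a running Space.
space_id = os.getenv("SPACE_ID")           # e.g. "osanseviero/i-like-flan"
author = os.getenv("SPACE_AUTHOR_NAME")    # e.g. "osanseviero"
host = os.getenv("SPACE_HOST")             # e.g. "osanseviero-i-like-flan.hf.space"

if space_id is None:
    print("Not running inside a Space")
else:
    print(f"Running {space_id} by {author} at https://{host}")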
Audio_Classification.txt
Audio Classification Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Audio Classification api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Audio Classification Audio classification is the task of assigning a label or class to a given audio. Example applications: Recognizing which command a user is giving Identifying a speaker Detecting the genre of a song For more details about the audio-classification task, check out its dedicated page ! You will find examples and related materials. Recommended models ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition : An emotion recognition model. Explore all available models and find the one that suits you best here . Using the API Python JavaScript cURL Copied import requests API_URL = "https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" headers = { "Authorization" : "Bearer hf_***" } def query ( filename ): with open (filename, "rb" ) as f: data = f.read() response = requests.post(API_URL, headers=headers, data=data) return response.json() output = query( "sample1.flac" ) To use the Python client, see huggingface_hub ’s package reference . API specification Request Payload inputs* string The input audio data as a base64-encoded string. If no parameters are provided, you can also provide the audio data as a raw bytes payload. parameters object function_to_apply enum Possible values: sigmoid, softmax, none. top_k integer When specified, limits the output to the top K most probable classes. Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here . 
x-wait-for-model boolean, default to false If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here . For more information about Inference API headers, check out the parameters guide . Response Body (array) object[] Output is an array of objects. label string The predicted class label. score number The corresponding probability.
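If you want to pass parameters such as top_k, the audio has to be sent as a base64-encoded string inside a JSON payload, following the request schema described above. The snippet below is a minimal sketch: the model ID is the recommended model from this page, while the file name and top_k value are placeholders.

import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
headers = {"Authorization": "Bearer hf_***"}

def query_top_k(filename, top_k=3):
    # Encode the raw audio bytes as base64 so parameters can be sent alongside them.
    with open(filename, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("utf-8")
    payload = {"inputs": audio_b64, "parameters": {"top_k": top_k}}
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# The response is an array of {"label": ..., "score": ...} objects, as documented above.
for prediction in query_top_k("sample1.flac"):
    print(prediction["label"], round(prediction["score"], 3))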
Utilities.txt
Utilities Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Utilities Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Utilities Configure logging 🤗 Datasets strives to be transparent and explicit about how it works, but this can be quite verbose at times. We have included a series of logging methods which allow you to easily adjust the level of verbosity of the entire library. Currently the default verbosity of the library is set to WARNING . To change the level of verbosity, use one of the direct setters. For instance, here is how to change the verbosity to the INFO level: Copied import datasets datasets.logging.set_verbosity_info() You can also use the environment variable DATASETS_VERBOSITY to override the default verbosity, and set it to one of the following: debug , info , warning , error , critical : Copied DATASETS_VERBOSITY=error ./myprogram.py All the methods of this logging module are documented below. The main ones are: logging.get_verbosity() to get the current level of verbosity in the logger logging.set_verbosity() to set the verbosity to the level of your choice In order from the least to the most verbose (with their corresponding int values): logging.CRITICAL or logging.FATAL (int value, 50): only report the most critical errors. logging.ERROR (int value, 40): only report errors. logging.WARNING or logging.WARN (int value, 30): only reports error and warnings. 
This is the default level used by the library. logging.INFO (int value, 20): reports errors, warnings, and basic information. logging.DEBUG (int value, 10): reports all information.

datasets.utils.logging.get_verbosity < source > ( ) Return the current level for the HuggingFace datasets library’s root logger. The HuggingFace datasets library has the following logging levels: datasets.logging.CRITICAL , datasets.logging.FATAL datasets.logging.ERROR datasets.logging.WARNING , datasets.logging.WARN datasets.logging.INFO datasets.logging.DEBUG

datasets.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity — Logging level, e.g., datasets.logging.DEBUG and datasets.logging.INFO . Set the level for the Hugging Face Datasets library’s root logger.

datasets.utils.logging.set_verbosity_info < source > ( ) Set the level for the Hugging Face datasets library’s root logger to INFO . This will display most of the logging information and tqdm bars. Shortcut to datasets.logging.set_verbosity(datasets.logging.INFO) .

datasets.utils.logging.set_verbosity_warning < source > ( ) Set the level for the Hugging Face datasets library’s root logger to WARNING . This will display only the warning and error logging information and tqdm bars. Shortcut to datasets.logging.set_verbosity(datasets.logging.WARNING) .

datasets.utils.logging.set_verbosity_debug < source > ( ) Set the level for the Hugging Face datasets library’s root logger to DEBUG . This will display all the logging information and tqdm bars. Shortcut to datasets.logging.set_verbosity(datasets.logging.DEBUG) .

datasets.utils.logging.set_verbosity_error < source > ( ) Set the level for the Hugging Face datasets library’s root logger to ERROR . This will display only the error logging information and tqdm bars. Shortcut to datasets.logging.set_verbosity(datasets.logging.ERROR) .

datasets.utils.logging.disable_propagation < source > ( ) Disable propagation of the library log outputs. Note that log propagation is disabled by default.

datasets.utils.logging.enable_propagation < source > ( ) Enable propagation of the library log outputs. Please disable the Hugging Face datasets library’s default handler to prevent double logging if the root logger has been configured.

Configure progress bars By default, tqdm progress bars will be displayed during dataset download and preprocessing. You can disable them globally by setting the HF_DATASETS_DISABLE_PROGRESS_BARS environment variable. You can also enable/disable them using enable_progress_bars() and disable_progress_bars() . If set, the environment variable has priority over the helpers.

datasets.enable_progress_bars < source > ( ) Enable progress bars globally in datasets, except if the HF_DATASETS_DISABLE_PROGRESS_BAR environment variable has been set. Use disable_progress_bars() to disable them.

datasets.disable_progress_bars < source > ( ) Disable progress bars globally in datasets, except if the HF_DATASETS_DISABLE_PROGRESS_BAR environment variable has been set. Use enable_progress_bars() to re-enable them.

datasets.are_progress_bars_disabled < source > ( ) Return whether progress bars are globally disabled or not. Progress bars used in datasets can be enabled or disabled globally using enable_progress_bars() and disable_progress_bars() or by setting HF_DATASETS_DISABLE_PROGRESS_BAR as an environment variable.
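Putting the two configuration surfaces together, here is a short sketch that sets the verbosity and toggles progress bars using only the functions documented above.

import datasets

# Logging: the default level is WARNING; switch to INFO to see more details.
print(datasets.logging.get_verbosity())        # 30 (WARNING) by default
datasets.logging.set_verbosity_info()
print(datasets.logging.get_verbosity())        # 20 (INFO)

# Progress bars: disable globally, check the state, then re-enable.
datasets.disable_progress_bars()
print(datasets.are_progress_bars_disabled())   # True
datasets.enable_progress_bars()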
Efficient_Training_on_Multiple_CPUs.txt
Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP to enable distributed CPU training efficiently on bare metal and Kubernetes.

Intel® oneCCL Bindings for PyTorch Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation and oneCCL specification . The module oneccl_bindings_for_pytorch (called torch_ccl before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup; it currently only works on Linux. Check oneccl_bind_pt for more detailed information.

Intel® oneCCL Bindings for PyTorch installation Wheel files are available for the following Python versions:

Extension Version | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11
2.5.0 | | √ | √ | √ | √
2.4.0 | | √ | √ | √ | √
2.3.0 | | √ | √ | √ | √
2.2.0 | | √ | √ | √ | √

Please run pip list | grep torch to get your pytorch_version . Copied pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu where {pytorch_version} should be your PyTorch version, for instance 2.4.0. Check more approaches for oneccl_bind_pt installation . Versions of oneCCL and PyTorch must match.

Intel® MPI library Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit. oneccl_bindings_for_pytorch is installed along with the MPI tool set, and the environment needs to be sourced before using it. Copied oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)" ) source $oneccl_bindings_for_pytorch_path /env/setvars.sh

Intel® Extension for PyTorch installation Intel Extension for PyTorch (IPEX) provides performance optimizations for CPU training with both Float32 and BFloat16 (refer to the single CPU section to learn more). The following “Usage in Trainer” section takes mpirun from the Intel® MPI library as an example.
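Before moving on to the Trainer, it can be useful to check that the oneCCL bindings load and that the ccl backend initializes. The following is a minimal single-process sketch, assuming oneccl_bindings_for_pytorch is installed and its environment has been sourced as shown above; the address and port are placeholders.

import os
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  (importing registers the "ccl" backend)

# Minimal single-process rendezvous just to verify the backend initializes.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="ccl", rank=0, world_size=1)
print(f"rank {dist.get_rank()} / world size {dist.get_world_size()}")
dist.destroy_process_group()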
Usage in Trainer To enable multi CPU distributed training in the Trainer with the ccl backend, users should add --ddp_backend ccl in the command arguments. Let’s see an example with the question-answering example The following command enables training with 2 processes on one Xeon node, with one process running per one socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance. Copied export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 examples/pytorch/question-answering/run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process), ppn (processes per node) is set to 2, with one process running per one socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance. In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument. Copied cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip Now, run the following command in node0 and 4DDP will be enabled in node0 and node1 with BF16 auto mixed precision: Copied export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 examples/pytorch/question-answering/run_qa.py \ --model_name_or_path google-bert/bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 Usage with Kubernetes The same distributed training job from the previous section can be deployed to a Kubernetes cluster using the Kubeflow PyTorchJob training operator . Setup This example assumes that you have: Access to a Kubernetes cluster with Kubeflow installed kubectl installed and configured to access the Kubernetes cluster A Persistent Volume Claim (PVC) that can be used to store datasets and model files. There are multiple options for setting up the PVC including using an NFS storage class or a cloud storage bucket. A Docker container that includes your model training script and all the dependencies needed to run the script. For distributed CPU training jobs, this typically includes PyTorch, Transformers, Intel Extension for PyTorch, Intel oneCCL Bindings for PyTorch, and OpenSSH to communicate between the containers. The snippet below is an example of a Dockerfile that uses a base image that supports distributed CPU training and then extracts a Transformers release to the /workspace directory, so that the example scripts are included in the image: Copied FROM intel/intel-optimized-pytorch: 2.4 . 
0 -pip-multinode RUN apt-get update -y && \ apt-get install -y --no-install-recommends --fix-missing \ google-perftools \ libomp-dev WORKDIR /workspace # Download and extract the transformers code ARG HF_TRANSFORMERS_VER= "4.46.0" RUN pip install --no-cache-dir \ transformers== ${HF_TRANSFORMERS_VER} && \ mkdir transformers && \ curl -sSL --retry 5 https://github.com/huggingface/transformers/archive/refs/tags/v ${HF_TRANSFORMERS_VER} .tar.gz | tar -C transformers --strip-components=1 -xzf - The image needs to be built and copied to the cluster’s nodes or pushed to a container registry prior to deploying the PyTorchJob to the cluster. PyTorchJob Specification File The Kubeflow PyTorchJob is used to run the distributed training job on the cluster. The yaml file for the PyTorchJob defines parameters such as: The name of the PyTorchJob The number of replicas (workers) The python script and it’s parameters that will be used to run the training job The types of resources (node selector, memory, and CPU) needed for each worker The image/tag for the Docker container to use Environment variables A volume mount for the PVC The volume mount defines a path where the PVC will be mounted in the container for each worker pod. This location can be used for the dataset, checkpoint files, and the saved model after training completes. The snippet below is an example of a yaml file for a PyTorchJob with 4 workers running the question-answering example . Copied apiVersion: "kubeflow.org/v1" kind: PyTorchJob metadata: name: transformers-pytorchjob spec: elasticPolicy: rdzvBackend: c10d minReplicas: 1 maxReplicas: 4 maxRestarts: 10 pytorchReplicaSpecs: Worker: replicas: 4 # The number of worker pods restartPolicy: OnFailure template: spec: containers: - name: pytorch image: <image name>:<tag> # Specify the docker image to use for the worker pods imagePullPolicy: IfNotPresent command: [ "/bin/bash" , "-c" ] args: - >- cd /workspace/transformers; pip install -r /workspace/transformers/examples/pytorch/question-answering/requirements.txt; source /usr/local/lib/python3.10/dist-packages/oneccl_bindings_for_pytorch/env/setvars.sh; torchrun /workspace/transformers/examples/pytorch/question-answering/run_qa.py \ --model_name_or_path distilbert/distilbert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/pvc-mount/output_$(date +%Y%m%d_%H%M%S) \ --no_cuda \ --ddp_backend ccl \ --bf16 \ --use_ipex; env: - name: LD_PRELOAD value: "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4.5.9:/usr/local/lib/libiomp5.so" - name: TRANSFORMERS_CACHE value: "/tmp/pvc-mount/transformers_cache" - name: HF_DATASETS_CACHE value: "/tmp/pvc-mount/hf_datasets_cache" - name: LOGLEVEL value: "INFO" - name: CCL_WORKER_COUNT value: "1" - name: OMP_NUM_THREADS # Can be tuned for optimal performance value: "240" resources: limits: cpu: 240 # Update the CPU and memory limit values based on your nodes memory: 128Gi requests: cpu: 240 # Update the CPU and memory request values based on your nodes memory: 128Gi volumeMounts: - name: pvc-volume mountPath: /tmp/pvc-mount - mountPath: /dev/shm name: dshm restartPolicy: Never nodeSelector: # Optionally use nodeSelector to match a certain node label for the worker pods node-type: gnr volumes: - name: pvc-volume persistentVolumeClaim: claimName: transformers-pvc - name: dshm emptyDir: medium: Memory To run this example, update the yaml based on your training 
script and the nodes in your cluster. The CPU resource limits/requests in the yaml are defined in cpu units where 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core (depending on whether the node is a physical host or a VM). The amount of CPU and memory limits/requests defined in the yaml should be less than the amount of available CPU/memory capacity on a single machine. It is usually a good idea to not use the entire machine’s capacity in order to leave some resources for the kubelet and OS. In order to get “guaranteed” quality of service for the worker pods, set the same CPU and memory amounts for both the resource limits and requests. Deploy After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed to the cluster using: Copied export NAMESPACE=<specify your namespace> kubectl create -f pytorchjob.yaml -n ${NAMESPACE} The kubectl get pods -n ${NAMESPACE} command can then be used to list the pods in your namespace. You should see the worker pods for the PyTorchJob that was just deployed. At first, they will probably have a status of “Pending” as the containers get pulled and created, then the status should change to “Running”. Copied NAME READY STATUS RESTARTS AGE ... transformers-pytorchjob-worker-0 1/1 Running 0 7m37s transformers-pytorchjob-worker-1 1/1 Running 0 7m37s transformers-pytorchjob-worker-2 1/1 Running 0 7m37s transformers-pytorchjob-worker-3 1/1 Running 0 7m37s ... The logs for worker can be viewed using kubectl logs <pod name> -n ${NAMESPACE} . Add -f to stream the logs, for example: Copied kubectl logs transformers-pytorchjob-worker-0 -n ${NAMESPACE} -f After the training job completes, the trained model can be copied from the PVC or storage location. When you are done with the job, the PyTorchJob resource can be deleted from the cluster using kubectl delete -f pytorchjob.yaml -n ${NAMESPACE} . Summary This guide covered running distributed PyTorch training jobs using multiple CPUs on bare metal and on a Kubernetes cluster. Both cases utilize Intel Extension for PyTorch and Intel oneCCL Bindings for PyTorch for optimal training performance, and can be used as a template to run your own workload on multiple nodes. < > Update on GitHub ← Efficient training on CPU Training on TPU with TensorFlow → Efficient Training on Multiple CP Us Intel® oneCC L Bindings for Py Torch Intel® oneCC L Bindings for Py Torch installation Intel® MP I library Intel® Extension for Py Torch installation Usage in Trainer Usage with Kubernetes Setup Py Torch Job Specification File Deploy Summary
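If you drive the Trainer from your own Python script instead of run_qa.py, the CLI flags used throughout this guide roughly map onto TrainingArguments fields. The sketch below only shows that mapping under that assumption; the model, datasets, and the Trainer itself are assumed to come from your own code, and use_cpu is used here in place of the deprecated --no_cuda flag.

from transformers import TrainingArguments

# Rough programmatic equivalent of the flags used above
# (--ddp_backend ccl --use_ipex --bf16 --no_cuda).
training_args = TrainingArguments(
    output_dir="/tmp/debug_squad/",
    per_device_train_batch_size=12,
    learning_rate=3e-5,
    num_train_epochs=2,
    ddp_backend="ccl",   # use oneCCL for distributed communication
    use_ipex=True,       # enable Intel Extension for PyTorch optimizations
    bf16=True,           # BF16 auto mixed precision
    use_cpu=True,        # train on CPU
)
print(training_args.ddp_backend, training_args.use_ipex)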
Serving_Private_&_Gated_Models.txt
Serving Private & Gated Models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Serving Private & Gated Models text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Serving Private & Gated Models If the model you wish to serve is behind gated access or the model repository on Hugging Face Hub is private, and you have access to the model, you can provide your Hugging Face Hub access token. You can generate and copy a read token from Hugging Face Hub tokens page If you’re using the CLI, set the HF_TOKEN environment variable. For example: Copied export HF_TOKEN =<YOUR READ TOKEN> If you would like to do it through Docker, you can provide your token by specifying HF_TOKEN as shown below. Copied model=meta-llama/Llama-2-7b-chat-hf volume= $PWD /data token=<your READ token> docker run --gpus all \ --shm-size 1g \ -e HF_TOKEN= $token \ -p 8080:80 \ -v $volume :/data ghcr.io/huggingface/text-generation-inference:3.0.1 \ --model-id $model < > Update on GitHub ← Preparing Model for Serving Using TGI CLI → Serving Private & Gated Models
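Once the container above is running, the server can be queried over HTTP on the port you mapped (8080 in the Docker command). The following is a minimal sketch assuming the server is reachable at localhost:8080; the prompt and generation parameters are illustrative.

import requests

# The Docker command above maps the TGI server to localhost:8080.
response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 50},
    },
    headers={"Content-Type": "application/json"},
)
print(response.json()["generated_text"])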
Text_Generation_Inference.txt
Text Generation Inference Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Text Generation Inference text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Text Generation Inference Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and T5. Text Generation Inference implements many optimizations and features, such as: Simple launcher to serve most popular LLMs Production ready (distributed tracing with Open Telemetry, Prometheus metrics) Tensor Parallelism for faster inference on multiple GPUs Token streaming using Server-Sent Events (SSE) Continuous batching of incoming requests for increased total throughput Optimized transformers code for inference using Flash Attention and Paged Attention on the most popular architectures Quantization with bitsandbytes and GPT-Q Safetensors weight loading Watermarking with A Watermark for Large Language Models Logits warper (temperature scaling, top-p, top-k, repetition penalty) Stop sequences Log probabilities Fine-tuning Support: Utilize fine-tuned models for specific tasks to achieve higher accuracy and performance. Guidance : Enable function calling and tool-use by forcing the model to generate structured outputs based on your own predefined output schemas. Text Generation Inference is used in production by multiple projects, such as: Hugging Chat , an open-source interface for open-access models, such as Open Assistant and Llama OpenAssistant , an open-source community effort to train LLMs in the open nat.dev , a playground to explore and compare LLMs. < > Update on GitHub Quick Tour → Text Generation Inference
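To see the token streaming feature from the list above in action, you can consume the stream with the huggingface_hub client. This is a small sketch assuming a TGI server is already running and reachable at http://localhost:8080; the URL, prompt, and parameters are placeholders.

from huggingface_hub import InferenceClient

# Point the client at a locally running TGI server.
client = InferenceClient(base_url="http://localhost:8080")

# Tokens are streamed back as they are generated (Server-Sent Events under the hood).
for token in client.text_generation("What is Deep Learning?", max_new_tokens=50, stream=True):
    print(token, end="", flush=True)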
Load_schedulers_and_models.txt
Load schedulers and models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Load schedulers and models Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Load schedulers and models Diffusion pipelines are a collection of interchangeable schedulers and models that can be mixed and matched to tailor a pipeline to a specific use case. The scheduler encapsulates the entire denoising process such as the number of denoising steps and the algorithm for finding the denoised sample. A scheduler is not parameterized or trained so they don’t take very much memory. The model is usually only concerned with the forward pass of going from a noisy input to a less noisy sample. This guide will show you how to load schedulers and models to customize a pipeline. 
You’ll use the stable-diffusion-v1-5/stable-diffusion-v1-5 checkpoint throughout this guide, so let’s load it first. Copied import torch from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) You can see what scheduler this pipeline uses with the pipeline.scheduler attribute. Copied pipeline.scheduler PNDMScheduler { "_class_name" : "PNDMScheduler" , "_diffusers_version" : "0.21.4" , "beta_end" : 0.012 , "beta_schedule" : "scaled_linear" , "beta_start" : 0.00085 , "clip_sample" : false, "num_train_timesteps" : 1000 , "set_alpha_to_one" : false, "skip_prk_steps" : true, "steps_offset" : 1 , "timestep_spacing" : "leading" , "trained_betas" : null } Load a scheduler Schedulers are defined by a configuration file that can be used by a variety of schedulers. Load a scheduler with the SchedulerMixin.from_pretrained() method, and specify the subfolder parameter to load the configuration file into the correct subfolder of the pipeline repository. For example, to load the DDIMScheduler : Copied from diffusers import DDIMScheduler, DiffusionPipeline ddim = DDIMScheduler.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , subfolder= "scheduler" ) Then you can pass the newly loaded scheduler to the pipeline. Copied pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , scheduler=ddim, torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) Compare schedulers Schedulers have their own unique strengths and weaknesses, making it difficult to quantitatively compare which scheduler works best for a pipeline. You typically have to make a trade-off between denoising speed and denoising quality. We recommend trying out different schedulers to find one that works best for your use case. Call the pipeline.scheduler.compatibles attribute to see what schedulers are compatible with a pipeline. Let’s compare the LMSDiscreteScheduler , EulerDiscreteScheduler , EulerAncestralDiscreteScheduler , and the DPMSolverMultistepScheduler on the following prompt and seed. Copied import torch from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." generator = torch.Generator(device= "cuda" ).manual_seed( 8 ) To change the pipelines scheduler, use the from_config() method to load a different scheduler’s pipeline.scheduler.config into the pipeline. LMSDiscreteScheduler EulerDiscreteScheduler EulerAncestralDiscreteScheduler DPMSolverMultistepScheduler LMSDiscreteScheduler typically generates higher quality images than the default scheduler. Copied from diffusers import LMSDiscreteScheduler pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) image = pipeline(prompt, generator=generator).images[ 0 ] image LMSDiscreteScheduler EulerDiscreteScheduler EulerAncestralDiscreteScheduler DPMSolverMultistepScheduler Most images look very similar and are comparable in quality. Again, it often comes down to your specific use case so a good approach is to run multiple different schedulers and compare the results. Flax schedulers To compare Flax schedulers, you need to additionally load the scheduler state into the model parameters. 
For example, let’s change the default scheduler in FlaxStableDiffusionPipeline to use the super fast FlaxDPMSolverMultistepScheduler . The FlaxLMSDiscreteScheduler and FlaxDDPMScheduler are not compatible with the FlaxStableDiffusionPipeline yet. Copied import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , subfolder= "scheduler" ) pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , scheduler=scheduler, variant= "bf16" , dtype=jax.numpy.bfloat16, ) params[ "scheduler" ] = scheduler_state Then you can take advantage of Flax’s compatibility with TPUs to generate a number of images in parallel. You’ll need to make a copy of the model parameters for each available device and then split the inputs across them to generate your desired number of images. Copied # Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." num_samples = jax.device_count() prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) prng_seed = jax.random.PRNGKey( 0 ) num_inference_steps = 25 # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit= True ).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[- 3 :]))) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for stable-diffusion-v1-5/stable-diffusion-v1-5 are stored in the unet subfolder. Copied from diffusers import UNet2DConditionModel unet = UNet2DConditionModel.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , subfolder= "unet" , use_safetensors= True ) They can also be directly loaded from a repository . Copied from diffusers import UNet2DModel unet = UNet2DModel.from_pretrained( "google/ddpm-cifar10-32" , use_safetensors= True ) To load and save model variants, specify the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained() . Copied from diffusers import UNet2DConditionModel unet = UNet2DConditionModel.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , subfolder= "unet" , variant= "non_ema" , use_safetensors= True ) unet.save_pretrained( "./local-unet" , variant= "non_ema" ) < > Update on GitHub ← Load community pipelines and components Model files and layouts → Load schedulers and models Load a scheduler Compare schedulers Flax schedulers Models
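The scheduler and model loading paths above can also be combined: components you load yourself can be passed directly to the pipeline. The following is a small sketch assuming the same stable-diffusion-v1-5 checkpoint used throughout this guide and a CUDA device.

import torch
from diffusers import DDIMScheduler, DiffusionPipeline, UNet2DConditionModel

repo_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"

# Load the scheduler and UNet from their subfolders, then hand them to the pipeline.
scheduler = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")
unet = UNet2DConditionModel.from_pretrained(
    repo_id, subfolder="unet", torch_dtype=torch.float16, use_safetensors=True
)

pipeline = DiffusionPipeline.from_pretrained(
    repo_id,
    scheduler=scheduler,
    unet=unet,
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")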
Model_merge.txt
Model merge Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Model merge PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Model merge PEFT provides several internal utilities for merging LoRA adapters with the TIES and DARE methods. peft.utils.merge_utils.prune < source > ( tensor : Tensor density : float method : typing.Literal['magnitude', 'random'] rescale : bool = False ) → torch.Tensor Parameters tensor ( torch.Tensor ) —The tensor to prune. density ( float ) —The fraction of values to preserve. Should be in [0,1]. method ( str ) —The method to use to prune. Should be one of [“magnitude”, “random”]. rescale ( bool ) —Whether to rescale the result to preserve the expected value of the original tensor. Returns torch.Tensor The pruned tensor. Prune the values of task tensors based on the method . peft.utils.merge_utils.calculate_majority_sign_mask < source > ( tensor : Tensor method : typing.Literal['total', 'frequency'] = 'total' ) → torch.Tensor Parameters tensor ( torch.Tensor ) —The tensor to get the mask from. method ( str ) —The method to use to get the mask. Should be one of [“total”, “frequency”]. Returns torch.Tensor The majority sign mask. Get the mask of the majority sign across the task tensors. Task tensors are stacked on dimension 0. peft.utils.merge_utils.disjoint_merge < source > ( task_tensors : Tensor majority_sign_mask : Tensor ) → torch.Tensor Parameters task_tensors ( torch.Tensor ) —The task tensors to merge. majority_sign_mask ( torch.Tensor ) —The mask of the majority sign across the task tensors. Returns torch.Tensor The merged tensor. Merge the task tensors using disjoint merge. peft.utils.merge_utils.task_arithmetic < source > ( task_tensors : typing.List[torch.Tensor] weights : Tensor ) → torch.Tensor Parameters task_tensors( List[torch.Tensor] ) —The task tensors to merge. weights ( torch.Tensor ) —The weights of the task tensors. 
Returns torch.Tensor The merged tensor. Merge the task tensors using task arithmetic . peft.utils.merge_utils.ties < source > ( task_tensors : typing.List[torch.Tensor] weights : Tensor density : float majority_sign_method : typing.Literal['total', 'frequency'] = 'total' ) → torch.Tensor Parameters task_tensors( List[torch.Tensor] ) —The task tensors to merge. weights ( torch.Tensor ) —The weights of the task tensors. density ( float ) —The fraction of values to preserve. Should be in [0,1]. majority_sign_method ( str ) — The method to use to get the majority sign mask. Should be one of [“total”, “frequency”]. Returns torch.Tensor The merged tensor. Merge the task tensors using ties . peft.utils.merge_utils.dare_linear < source > ( task_tensors : typing.List[torch.Tensor] weights : Tensor density : float ) → torch.Tensor Parameters task_tensors( List[torch.Tensor] ) —The task tensors to merge. weights ( torch.Tensor ) —The weights of the task tensors. density ( float ) —The fraction of values to preserve. Should be in [0,1]. Returns torch.Tensor The merged tensor. Merge the task tensors using dare linear . peft.utils.merge_utils.dare_ties < source > ( task_tensors : typing.List[torch.Tensor] weights : Tensor density : float majority_sign_method : typing.Literal['total', 'frequency'] = 'total' ) → torch.Tensor Parameters task_tensors( List[torch.Tensor] ) —The task tensors to merge. weights ( torch.Tensor ) —The weights of the task tensors. density ( float ) —The fraction of values to preserve. Should be in [0,1]. majority_sign_method ( str ) — The method to use to get the majority sign mask. Should be one of [“total”, “frequency”]. Returns torch.Tensor The merged tensor. Merge the task tensors using dare ties . < > Update on GitHub ← Bone Helpers → Model merge
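The utilities above operate on plain tensors, so they can be tried in isolation. The following is a toy sketch based on the signatures documented on this page, using random tensors in place of real task (adapter) deltas.

import torch
from peft.utils.merge_utils import dare_ties, task_arithmetic, ties

# Two fake "task tensors" of the same shape and their merging weights.
task_tensors = [torch.randn(8, 8), torch.randn(8, 8)]
weights = torch.tensor([1.0, 1.0])

merged_linear = task_arithmetic(task_tensors, weights)
merged_ties = ties(task_tensors, weights, density=0.5, majority_sign_method="total")
merged_dare = dare_ties(task_tensors, weights, density=0.5, majority_sign_method="total")

print(merged_linear.shape, merged_ties.shape, merged_dare.shape)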
Aligning_Text_to_Image_Diffusion_Models_with_Rewar.txt
Aligning Text-to-Image Diffusion Models with Reward Backpropagation Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation Aligning Text-to-Image Diffusion Models with Reward Backpropagation TRL 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Aligning Text-to-Image Diffusion Models with Reward Backpropagation The why If your reward function is differentiable, directly backpropagating gradients from the reward models to the diffusion model is significantly more sample and compute efficient (25x) than doing policy gradient algorithm like DDPO. AlignProp does full backpropagation through time, which allows updating the earlier steps of denoising via reward backpropagation. Getting started with examples/scripts/alignprop.py The alignprop.py script is a working example of using the AlignProp trainer to finetune a Stable Diffusion model. This example explicitly configures a small subset of the overall parameters associated with the config object ( AlignPropConfig ). Note: one A100 GPU is recommended to get this running. For lower memory setting, consider setting truncated_backprop_rand to False. With default settings this will do truncated backpropagation with K=1. Almost every configuration parameter has a default. There is only one commandline flag argument that is required of the user to get things up and running. The user is expected to have a huggingface user access token that will be used to upload the model post finetuning to HuggingFace hub. The following bash command is to be entered to get things running Copied python alignprop. 
py --hf_user_access_token <token> To obtain the documentation of stable_diffusion_tuning.py , please run python stable_diffusion_tuning.py --help The following are things to keep in mind in general while configuring the trainer, beyond the use case of the example script (the code checks this for you as well): The configurable randomized truncation range ( --alignprop_config.truncated_rand_backprop_minmax=(0,50) ): the first number should be greater than or equal to 0, while the second number should be less than or equal to the number of diffusion timesteps (sample_num_steps). The configurable truncation backprop absolute step ( --alignprop_config.truncated_backprop_timestep=49 ): the number should be less than the number of diffusion timesteps (sample_num_steps); it only matters when truncated_backprop_rand is set to False.

Setting up the image logging hook function Expect the function to be given a dictionary with keys Copied ['image', 'prompt', 'prompt_metadata', 'rewards'] where image , prompt , prompt_metadata , and rewards are batched. You are free to log however you want; the use of wandb or tensorboard is recommended.

Key terms rewards : The rewards/score is a numerical value associated with the generated image and is key to steering the RL process prompt : The prompt is the text that is used to generate the image prompt_metadata : The prompt metadata is the metadata associated with the prompt. A situation where this will not be empty is when the reward model consists of a FLAVA setup, where questions and ground-truth answers (linked to the generated image) are expected along with the generated image (See here: https://github.com/kvablack/ddpo-pytorch/blob/main/ddpo_pytorch/rewards.py#L45 ) image : The image generated by the Stable Diffusion model

Example code for logging sampled images with wandb is given below. Copied
# for logging these images to wandb
def image_outputs_hook(image_data, global_step, accelerate_logger):
    # For the sake of this example, we only care about the last batch
    # hence we extract the last element of the list
    result = {}
    images, prompts, rewards = [image_data['images'], image_data['prompts'], image_data['rewards']]
    for i, image in enumerate(images):
        pil = Image.fromarray(
            (image.cpu().numpy().transpose(1, 2, 0) * 255).astype(np.uint8)
        )
        pil = pil.resize((256, 256))
        result[f"{prompts[i]:.25} | {rewards[i]:.2f}"] = [pil]
    accelerate_logger.log_images(
        result,
        step=global_step,
    )

Using the finetuned model Assuming you’re done with all the epochs and have pushed your model to the Hub, you can use the finetuned model as follows Copied
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.to("cuda")
pipeline.load_lora_weights('mihirpd/alignprop-trl-aesthetics')
prompts = ["squirrel", "crab", "starfish", "whale", "sponge", "plankton"]
results = pipeline(prompts)
for prompt, image in zip(prompts, results.images):
    image.save(f"dump/{prompt}.png")

Credits This work is heavily influenced by the repo here and the associated paper Aligning Text-to-Image Diffusion Models with Reward Backpropagation by Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, Katerina Fragkiadaki .
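To make the "differentiable reward" requirement from the first section concrete, here is a toy sketch of a reward that can be backpropagated through, scoring image brightness with plain torch ops. It is illustrative only and is not the aesthetic reward model used by the example script.

import torch

def brightness_reward(images: torch.Tensor) -> torch.Tensor:
    # images: (batch, channels, height, width) in [0, 1]; every op is differentiable,
    # so gradients can flow from the reward back into the diffusion model.
    return images.mean(dim=(1, 2, 3))

images = torch.rand(4, 3, 64, 64, requires_grad=True)
rewards = brightness_reward(images)
rewards.sum().backward()            # gradients w.r.t. the generated images
print(rewards.detach(), images.grad.shape)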
Launching_Accelerate_scripts.txt
Launching Accelerate scripts Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Launching Accelerate scripts Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Launching Accelerate scripts In the previous tutorial, you were introduced to how to modify your current training script to use Accelerate. The final version of that code is shown below: Copied from accelerate import Accelerator accelerator = Accelerator() model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch outputs = model(inputs) loss = loss_function(outputs, targets) accelerator.backward(loss) optimizer.step() scheduler.step() But how do you run this code and have it utilize the special hardware available to it? First, you should rewrite the above code into a function, and make it callable as a script. 
For example: Copied from accelerate import Accelerator + def main(): accelerator = Accelerator() model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch outputs = model(inputs) loss = loss_function(outputs, targets) accelerator.backward(loss) optimizer.step() scheduler.step() + if __name__ == "__main__": + main() Next, you need to launch it with accelerate launch . It’s recommended you run accelerate config before using accelerate launch to configure your environment to your liking. Otherwise Accelerate will use very basic defaults depending on your system setup. Using accelerate launch Accelerate has a special CLI command to help you launch your code in your system through accelerate launch . This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is. If you are familiar with launching scripts in PyTorch yourself such as with torchrun , you can still do this. It is not required to use accelerate launch . You can launch your script quickly by using: Copied accelerate launch {script_name.py} --arg1 --arg2 ... Just put accelerate launch at the start of your command, and pass in additional arguments and parameters to your script afterward like normal! Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well. For example, here is how to use accelerate launch with a single GPU: Copied # for cuda device: CUDA_VISIBLE_DEVICES= "0" accelerate launch {script_name.py} --arg1 --arg2 ... # for xpu device: ZE_AFFINITY_MASK= "0" accelerate launch {script_name.py} --arg1 --arg2 ... You can also use accelerate launch without performing accelerate config first, but you may need to manually pass in the right configuration parameters. In this case, Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision. Here is how you would use all GPUs and train with mixed precision disabled: Copied accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ... Or by specifying a number of GPUs to use: Copied accelerate launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ... To get more specific you should pass in the needed parameters yourself. For instance, here is how you would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings: Copied accelerate launch --multi_gpu --mixed_precision=fp16 --num_processes=2 {script_name.py} {--arg1} {--arg2} ... For a complete list of parameters you can pass in, run: Copied accelerate launch -h Even if you are not using Accelerate in your code, you can still use the launcher for starting your scripts! For a visualization of this difference, that earlier accelerate launch on multi-gpu would look something like so with torchrun : Copied MIXED_PRECISION= "fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ... You can also launch your script utilizing the launch CLI as a python module itself, enabling the ability to pass in other python-specific launching behaviors. 
To do so, use accelerate.commands.launch instead of accelerate launch : Copied python -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2} If you want to execute the script with any other python flags, you can pass them in as well similar to -m , such as the below example enabling unbuffered stdout and stderr: Copied python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2} You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets. Copied accelerate launch --cpu {script_name.py} {--arg1} {--arg2} Why you should always use accelerate config Why is it useful to the point you should always run accelerate config ? Remember that earlier call to accelerate launch as well as torchrun ? Post configuration, to run that script with the needed parts you just need to use accelerate launch outright, without passing anything else in: Copied accelerate launch {script_name.py} {--arg1} {--arg2} ... Custom Configurations As briefly mentioned earlier, accelerate launch should be mostly used through combining set configurations made with the accelerate config command. These configs are saved to a default_config.yaml file in your cache folder for Accelerate. This cache folder is located at (with decreasing order of priority): The content of your environment variable HF_HOME suffixed with accelerate . If it does not exist, the content of your environment variable XDG_CACHE_HOME suffixed with huggingface/accelerate . If this does not exist either, the folder ~/.cache/huggingface/accelerate . To have multiple configurations, the flag --config_file can be passed to the accelerate launch command paired with the location of the custom yaml. An example yaml may look something like the following for two GPUs on a single machine using fp16 for mixed precision: Copied compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: MULTI_GPU fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: fp16 num_machines: 1 num_processes: 2 use_cpu: false Launching a script from the location of that custom yaml file looks like the following: Copied accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ... Multi-node training Multi-node training with Accelerate is similar to multi-node training with torchrun . The simplest way to launch a multi-node training run is to do the following: Copy your codebase and data to all nodes. (or place them on a shared filesystem) Setup your python packages on all nodes. Run accelerate config on the main single node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file across all of your nodes, changing the machine_rank to 1, 2,3, etc. to avoid having to run the command (or just follow their directions directly for launching with torchrun as well) Once you have done this, you can start your multi-node training run by running accelerate launch (or torchrun ) on all nodes. It is required that the command be ran on all nodes for everything to start, not just running it from the main node. 
You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command. It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the 192.168.x.x or the 172.x.x.x address you see when you run hostname -I on the main node. To get a better idea about multi-node training, check out our example for multi-node training with FSDP.
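If you want to confirm that accelerate launch picked up the configuration you expect, a small sanity-check script like the sketch below (saved, for example, as check_env.py; the filename is only an illustration) prints each process's rank, device, and world size using attributes exposed by the Accelerator object. Copied
# check_env.py -- a small sanity check to run with: accelerate launch check_env.py
from accelerate import Accelerator

def main():
    accelerator = Accelerator()
    # Each process reports its own rank and device.
    print(f"process {accelerator.process_index} of {accelerator.num_processes} on {accelerator.device}")
    if accelerator.is_main_process:
        # Only the main process reports the mixed precision setting.
        print(f"mixed precision: {accelerator.mixed_precision}")

if __name__ == "__main__":
    main()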
DuckDB.txt
DuckDB Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation DuckDB Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started DuckDB DuckDB is a database that supports reading and querying Parquet files really fast. Begin by creating a connection to DuckDB, and then install and load the httpfs extension to read and write remote files: Python JavaScript Copied import duckdb url = "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet" con = duckdb.connect() con.execute( "INSTALL httpfs;" ) con.execute( "LOAD httpfs;" ) Now you can write and execute your SQL query on the Parquet file: Python JavaScript Copied con.sql( f"SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM ' {url} ' GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)" ) ┌───────────┬──────────────┬────────────────────┐ │ sign │ count_star() │ avg_blog_length │ │ varchar │ int64 │ double │ ├───────────┼──────────────┼────────────────────┤ │ Cancer │ 38956 │ 1206.5212034089743 │ │ Leo │ 35487 │ 1180.0673767858652 │ │ Aquarius │ 32723 │ 1152.1136815084192 │ │ Virgo │ 36189 │ 1117.1982094006466 │ │ Capricorn │ 31825 │ 1102.397360565593 │ └───────────┴──────────────┴────────────────────┘ To query multiple files - for example, if the dataset is sharded: Python JavaScript Copied urls = [ "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet" , "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet" ] con.sql( f"SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet( {urls} ) GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)" ) ┌──────────┬──────────────┬────────────────────┐ │ sign │ count_star() │ avg_blog_length │ │ varchar │ int64 │ double │ ├──────────┼──────────────┼────────────────────┤ │ Aquarius │ 49687 │ 1191.417211745527 │ │ Leo │ 53811 │ 1183.8782219248853 │ │ Cancer │ 65048 │ 1158.9691612347804 │ │ Gemini │ 51985 │ 1156.0693084543618 │ │ Virgo │ 60399 │ 1140.9584430205798 │ └──────────┴──────────────┴────────────────────┘ DuckDB-Wasm , a package 
powered by WebAssembly, is also available for running DuckDB in any browser. This could be useful, for instance, if you want to create a web app to query Parquet files from the browser!
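If you prefer to continue working on a query result in Python, the relation returned by con.sql() can be materialized as a Pandas DataFrame. A minimal sketch, assuming the con and url variables from the examples above and that Pandas is installed: Copied
# Materialize a DuckDB query result as a Pandas DataFrame for further processing.
# Assumes `con` and `url` are defined as in the examples above.
df = con.sql(
    f"SELECT sign, AVG(LENGTH(text)) AS avg_blog_length FROM '{url}' GROUP BY sign"
).df()
print(df.sort_values("avg_blog_length", ascending=False).head())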
Llama_3_1_8b_performance_on_AWS_Inferentia2__Laten.txt
Llama-3.1-8b performance on AWS Inferentia2 (Latency & Througput) Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AWS Trainium & Inferentia documentation Llama-3.1-8b performance on AWS Inferentia2 (Latency & Througput) AWS Trainium & Inferentia 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Optimum Neuron 🤗 Optimum Neuron Installation Quickstart Optimum Containers Training Tutorials Notebooks Fine-tune BERT for Text Classification on AWS Trainium Fine-tune Llama 3 8B on AWS Trainium Fine-tune Llama 3 8B on with LoRA and the SFTTrainer Inference Tutorials Notebooks Create your own chatbot with llama-2-13B on AWS Inferentia Sentence Transformers on AWS Inferentia Generate images with Stable Diffusion models on AWS Inferentia How-To Guides Set up AWS Trainium instance Training and Deployment using Amazon Sagemaker Neuron model cache Fine-tune Transformers with AWS Trainium Distributed Training Export a model to Inferentia Inference pipelines with AWS Neuron NeuronX Text-generation-inference for AWS inferentia2 Benchmarks Mistral Small on AWS Inferentia2 Llama-3.1 8B on AWS Inferentia2 Contribute Add support for a new model architecture Reference Neuron Trainer Neuron Distributed Supported Architectures Neuron Exporter Neuron Models Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Llama-3.1-8b performance on AWS Inferentia2 (Latency & Througput) How fast is Llama-3.1-8b on Inferentia2? Let’s figure out! For this benchmark we will use the following configurations: Model type batch_size sequence_length Llama3.1 8b BS1 1 4096 Llama3.1 8b BS4 4 4096 Llama3.1 8b BS8 8 4096 Llama3.1 8b BS16 16 4096 Llama3.1 8b BS32 32 4096 Note: all models are compiled to use 4 devices corresponding to 8 cores on the inf2.48xlarge instance. Note: please refer to the inferentia2 product page for details on the available instances. Time to first token The time to first token is the time required to process the input tokens and generate the first output token. It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens. We test the time to first token for increasing context sizes, from a typical Q/A usage, to heavy Retrieval Augmented Generation (RAG) use-cases. Time to first token is expressed in seconds . Inter-token Latency The inter-token latency corresponds to the average time elapsed between two generated tokens. It is expressed in milliseconds . Throughput Unlike some other benchmarks, we evaluate the throughput using generated tokens only, by dividing their number by the end-to-end latency. Throughput is expressed in tokens/second . 
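As a rough illustration of how these three metrics relate to each other, the sketch below computes them from hypothetical per-token timestamps collected on the client side; the numbers are made up and are not benchmark results. Copied
# Illustrative metric computation from hypothetical per-token arrival timestamps (in seconds).
request_sent = 0.0
token_timestamps = [0.45, 0.47, 0.49, 0.51, 0.53]  # arrival time of each generated token

time_to_first_token = token_timestamps[0] - request_sent            # seconds
inter_token_latency_ms = 1000 * (
    (token_timestamps[-1] - token_timestamps[0]) / (len(token_timestamps) - 1)
)                                                                    # milliseconds
end_to_end_latency = token_timestamps[-1] - request_sent
throughput = len(token_timestamps) / end_to_end_latency              # generated tokens / second

print(time_to_first_token, inter_token_latency_ms, round(throughput, 1))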
Shap-E.txt
Shap-E Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Shap-E Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. 
The Shap-E model is trained in two steps: first, an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset; second, a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications. This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab #!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline . The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch from diffusers import ShapEPipeline device = torch.device( "cuda" if torch.cuda.is_available() else "cpu" ) pipe = ShapEPipeline.from_pretrained( "openai/shap-e" , torch_dtype=torch.float16, variant= "fp16" ) pipe = pipe.to(device) guidance_scale = 15.0 prompt = [ "A firecracker" , "A birthday cupcake" ] images = pipe( prompt, guidance_scale=guidance_scale, num_inference_steps= 64 , frame_size= 256 , ).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif export_to_gif(images[ 0 ], "firecracker_3d.gif" ) export_to_gif(images[ 1 ], "cake_3d.gif" ) Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline . You can use an existing image or generate an entirely new one. Let's use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline import torch prior_pipeline = DiffusionPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipeline = DiffusionPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) prompt = "A cheeseburger, white background" image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale= 1.0 ).to_tuple() image = pipeline( prompt, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, ).images[ 0 ] image.save( "burger.png" ) Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image from diffusers import ShapEImg2ImgPipeline from diffusers.utils import export_to_gif pipe = ShapEImg2ImgPipeline.from_pretrained( "openai/shap-e-img2img" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) guidance_scale = 3.0 image = Image. open ( "burger.png" ).resize(( 256 , 256 )) images = pipe( image, guidance_scale=guidance_scale, num_inference_steps= 64 , frame_size= 256 , ).images gif_path = export_to_gif(images[ 0 ], "burger_3d.gif" ) Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you'll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer .
You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh" : Copied import torch from diffusers import ShapEPipeline device = torch.device( "cuda" if torch.cuda.is_available() else "cpu" ) pipe = ShapEPipeline.from_pretrained( "openai/shap-e" , torch_dtype=torch.float16, variant= "fp16" ) pipe = pipe.to(device) guidance_scale = 15.0 prompt = "A birthday cupcake" images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps= 64 , frame_size= 256 , output_type= "mesh" ).images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! Copied from diffusers.utils import export_to_ply ply_path = export_to_ply(images[ 0 ], "3d_cake.ply" ) print ( f"Saved to folder: {ply_path} " ) Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh mesh = trimesh.load( "3d_cake.ply" ) mesh_export = mesh.export( "3d_cake.glb" , file_type= "glb" ) By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh import numpy as np mesh = trimesh.load( "3d_cake.ply" ) rot = trimesh.transformations.rotation_matrix(-np.pi / 2 , [ 1 , 0 , 0 ]) mesh = mesh.apply_transform(rot) mesh_export = mesh.export( "3d_cake.glb" , file_type= "glb" ) Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! < > Update on GitHub ← Textual inversion DiffEdit → Shap-E Text-to-3D Image-to-3D Generate mesh
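After exporting the mesh, you may want a quick sanity check before uploading it. A minimal sketch with trimesh, assuming the 3d_cake.ply file produced in the example above: Copied
# Quick sanity check of the exported mesh before uploading it.
import trimesh

mesh = trimesh.load("3d_cake.ply")
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"bounding box:\n{mesh.bounds}")
print(f"watertight: {mesh.is_watertight}")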
Extractive_Question_Answering_with_AutoTrain.txt
Extractive Question Answering with AutoTrain Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AutoTrain documentation Extractive Question Answering with AutoTrain AutoTrain 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.8.24 v0.7.129 v0.6.48 v0.5.2 EN Getting Started 🤗 AutoTrain How much does it cost? Get help and support Frequently Asked Questions Quickstart Train on Spaces Python SDK Train Locally Config File Tasks LLM Finetuning Text Classification/Regression Extractive QA Sentence Transformer Image Classification / Regression Object Detection Seq2Seq Token Classification Tabular Miscellaneous Understanding Column Mapping AutoTrain API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.8.24 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Extractive Question Answering with AutoTrain Extractive Question Answering (QA) enables AI models to find and extract precise answers from text passages. This guide shows you how to train custom QA models using AutoTrain, supporting popular architectures like BERT, RoBERTa, and DeBERTa. What is Extractive Question Answering? Extractive QA models learn to: Locate exact answer spans within longer text passages Understand questions and match them to relevant context Extract precise answers rather than generating them Handle both simple and complex queries about the text Preparing your Data Your dataset needs these essential columns: text : The passage containing potential answers (also called context) question : The query you want to answer answer : Answer span information including text and position Here is an example of how your dataset should look: Copied { "context" : "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \ ". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary." , "question" : "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?" , "answers" :{ "text" :[ "Saint Bernadette Soubirous" ], "answer_start" :[ 515 ]}} { "context" : "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. 
Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \ ". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary." , "question" : "What is in front of the Notre Dame Main Building?" , "answers" :{ "text" :[ "a copper statue of Christ" ], "answer_start" :[ 188 ]}} { "context" : "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \" Venite Ad Me Omnes \ ". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary." , "question" : "The Basilica of the Sacred heart at Notre Dame is beside to which structure?" , "answers" :{ "text" :[ "the Main Building" ], "answer_start" :[ 279 ]}} Note: the preferred format for question answering is JSONL, if you want to use CSV, the answer column should be stringified JSON with the keys text and answer_start . Example dataset from Hugging Face Hub: lhoestq/squad P.S. You can use both squad and squad v2 data format with correct column mappings. Training Options Local Training Train models on your own hardware with full control over the process. To train an Extractive QA model locally, you need a config file: Copied task: extractive-qa base_model: google-bert/bert-base-uncased project_name: autotrain-bert-ex-qa1 log: tensorboard backend: local data: path: lhoestq/squad train_split: train valid_split: validation column_mapping: text_column: context question_column: question answer_column: answers params: max_seq_length: 512 max_doc_stride: 128 epochs: 3 batch_size: 4 lr: 2e-5 optimizer: adamw_torch scheduler: linear gradient_accumulation: 1 mixed_precision: fp16 hub: username: ${HF_USERNAME} token: ${HF_TOKEN} push_to_hub: true To train the model, run the following command: Copied $ autotrain --config config.yaml Here, we are training a BERT model on the SQuAD dataset using the Extractive QA task. The model is trained for 3 epochs with a batch size of 4 and a learning rate of 2e-5. The training process is logged using TensorBoard. The model is trained locally and pushed to the Hugging Face Hub after training. Cloud Training on Hugging Face Train models using Hugging Face’s cloud infrastructure for better scalability. As always, pay special attention to column mapping. Parameter Reference class autotrain.trainers.extractive_question_answering.params. 
ExtractiveQuestionAnsweringParams < source > ( data_path : str = None model : str = 'bert-base-uncased' lr : float = 5e-05 epochs : int = 3 max_seq_length : int = 128 max_doc_stride : int = 128 batch_size : int = 8 warmup_ratio : float = 0.1 gradient_accumulation : int = 1 optimizer : str = 'adamw_torch' scheduler : str = 'linear' weight_decay : float = 0.0 max_grad_norm : float = 1.0 seed : int = 42 train_split : str = 'train' valid_split : typing.Optional[str] = None text_column : str = 'context' question_column : str = 'question' answer_column : str = 'answers' logging_steps : int = -1 project_name : str = 'project-name' auto_find_batch_size : bool = False mixed_precision : typing.Optional[str] = None save_total_limit : int = 1 token : typing.Optional[str] = None push_to_hub : bool = False eval_strategy : str = 'epoch' username : typing.Optional[str] = None log : str = 'none' early_stopping_patience : int = 5 early_stopping_threshold : float = 0.01 ) Parameters data_path (str) — Path to the dataset. model (str) — Pre-trained model name. Default is “bert-base-uncased”. lr (float) — Learning rate for the optimizer. Default is 5e-5. epochs (int) — Number of training epochs. Default is 3. max_seq_length (int) — Maximum sequence length for inputs. Default is 128. max_doc_stride (int) — Maximum document stride for splitting context. Default is 128. batch_size (int) — Batch size for training. Default is 8. warmup_ratio (float) — Warmup proportion for learning rate scheduler. Default is 0.1. gradient_accumulation (int) — Number of gradient accumulation steps. Default is 1. optimizer (str) — Optimizer type. Default is “adamw_torch”. scheduler (str) — Learning rate scheduler type. Default is “linear”. weight_decay (float) — Weight decay for the optimizer. Default is 0.0. max_grad_norm (float) — Maximum gradient norm for clipping. Default is 1.0. seed (int) — Random seed for reproducibility. Default is 42. train_split (str) — Name of the training data split. Default is “train”. valid_split (Optional[str]) — Name of the validation data split. Default is None. text_column (str) — Column name for context/text. Default is “context”. question_column (str) — Column name for questions. Default is “question”. answer_column (str) — Column name for answers. Default is “answers”. logging_steps (int) — Number of steps between logging. Default is -1. project_name (str) — Name of the project for output directory. Default is “project-name”. auto_find_batch_size (bool) — Automatically find optimal batch size. Default is False. mixed_precision (Optional[str]) — Mixed precision training mode (fp16, bf16, or None). Default is None. save_total_limit (int) — Maximum number of checkpoints to save. Default is 1. token (Optional[str]) — Authentication token for Hugging Face Hub. Default is None. push_to_hub (bool) — Whether to push the model to Hugging Face Hub. Default is False. eval_strategy (str) — Evaluation strategy during training. Default is “epoch”. username (Optional[str]) — Hugging Face username for authentication. Default is None. log (str) — Logging method for experiment tracking. Default is “none”. early_stopping_patience (int) — Number of epochs with no improvement for early stopping. Default is 5. early_stopping_threshold (float) — Threshold for early stopping improvement. Default is 0.01. ExtractiveQuestionAnsweringParams < > Update on GitHub ← Text Classification/Regression Sentence Transformer → Extractive Question Answering with Auto Train What is Extractive Question Answering? 
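Because answer_start must point at the exact character offset of the answer inside the context, it can be worth validating your data before training. A minimal sketch, assuming a hypothetical train.jsonl file in the JSONL format shown above: Copied
# Validate that every `answer_start` offset actually matches the answer text in the context.
# `train.jsonl` is a hypothetical file in the JSONL format shown above.
import json

with open("train.jsonl") as f:
    for line_number, line in enumerate(f, start=1):
        record = json.loads(line)
        context = record["context"]
        for text, start in zip(record["answers"]["text"], record["answers"]["answer_start"]):
            if context[start : start + len(text)] != text:
                print(f"line {line_number}: offset {start} does not match answer {text!r}")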
Optimum-quanto.txt
Optimum-quanto Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Optimum-quanto Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Optimum-quanto Try optimum-quanto + transformers with this notebook ! 🤗 optimum-quanto library is a versatile pytorch quantization toolkit. The quantization method used is the linear quantization. Quanto provides several unique features such as: weights quantization ( float8 , int8 , int4 , int2 ) activation quantization ( float8 , int8 ) modality agnostic (e.g CV,LLM) device agnostic (e.g CUDA,XPU,MPS,CPU) compatibility with torch.compile easy to add custom kernel for specific device supports quantization aware training Before you begin, make sure the following libraries are installed: Copied pip install optimum-quanto accelerate transformers Now you can quantize a model by passing QuantoConfig object in the from_pretrained() method. This works for any model in any modality, as long as it contains torch.nn.Linear layers. The integration with transformers only supports weights quantization. For the more complex use case such as activation quantization, calibration and quantization aware training, you should use optimum-quanto library instead. By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in such as torch.float16. Set torch_dtype="auto" to load the weights in the data type defined in a model’s config.json file to automatically load the most memory-optimal data type. Copied from transformers import AutoModelForCausalLM, AutoTokenizer, QuantoConfig model_id = "facebook/opt-125m" tokenizer = AutoTokenizer.from_pretrained(model_id) quantization_config = QuantoConfig(weights= "int8" ) quantized_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype= "auto" , device_map= "cuda:0" , quantization_config=quantization_config) Note that serialization is not supported yet with transformers but it is coming soon! If you want to save the model, you can use quanto library instead. Optimum-quanto library uses linear quantization algorithm for quantization. Even though this is a basic quantization technique, we get very good results! Have a look at the following benchmark (llama-2-7b on perplexity metric). You can find more benchmarks here The library is versatile enough to be compatible with most PTQ optimization algorithms. 
The plan for the future is to integrate the most popular algorithms (AWQ, SmoothQuant) in the most seamless way possible.
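Once quantized, the model is used like any other transformers model. The short generation sketch below simply repeats the setup from the example above and adds a generate() call; the prompt is arbitrary. Copied
from transformers import AutoModelForCausalLM, AutoTokenizer, QuantoConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
quantization_config = QuantoConfig(weights="int8")
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="cuda:0", quantization_config=quantization_config
)

# Generate with the quantized model exactly as you would with a full-precision one.
inputs = tokenizer("The theory of relativity states that", return_tensors="pt").to(quantized_model.device)
output_ids = quantized_model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))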
Supported_Models.txt
Supported Models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Supported Models text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Supported Models Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported. Deepseek V2 Idefics 2 (Multimodal) Idefics 3 (Multimodal) Llava Next (1.6) (Multimodal) Llama Phi 3 Granite Gemma PaliGemma Gemma2 Cohere Dbrx Mamba Mistral Mixtral Gpt Bigcode Phi PhiMoe Baichuan Falcon StarCoder 2 Qwen 2 Qwen 2 VL Opt T5 Galactica SantaCoder Bloom Mpt Gpt2 Gpt Neox Gptj Idefics (Multimodal) Mllama (Multimodal) If the above list lacks the model you would like to serve, depending on the model’s pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn’t guaranteed for non-optimized models: Copied # for causal LMs/text-generation models AutoModelForCausalLM.from_pretrained(<model>, device_map= "auto" ) # or, for text-to-text generation models AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map= "auto" ) If you wish to serve a supported model that already exists on a local folder, just point to the local folder. Copied text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM> < > Update on GitHub ← Quick Tour Using TGI with Nvidia GPUs → Supported Models
Using_TGI_with_Nvidia_GPUs.txt
Using TGI with Nvidia GPUs Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Using TGI with Nvidia GPUs text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using TGI with Nvidia GPUs TGI optimized models are supported on NVIDIA H100 , A100 , A10G and T4 GPUs with CUDA 12.2+. Note that you have to install NVIDIA Container Toolkit to use it. For other NVIDIA GPUs, continuous batching will still apply, but some operations like flash attention and paged attention will not be executed. TGI can be used on NVIDIA GPUs through its official docker image: Copied model=teknium/OpenHermes-2.5-Mistral-7B volume= $PWD /data # share a volume with the Docker container to avoid downloading weights every run docker run --gpus all --shm-size 64g -p 8080:80 -v $volume :/data \ ghcr.io/huggingface/text-generation-inference:3.0.1 \ --model-id $model The launched TGI server can then be queried from clients, make sure to check out the Consuming TGI guide. < > Update on GitHub ← Supported Models Using TGI with AMD GPUs → Using TG I with Nvidia GP Us
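As a quick way to check that the container is answering, you could send a request to the server's /generate route from Python. A minimal sketch, assuming TGI is listening on localhost:8080 as in the docker run command above: Copied
# Minimal client-side check against a running TGI container (assumed on localhost:8080).
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
)
print(response.json()["generated_text"])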
Downloading_datasets.txt
Downloading datasets Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Downloading datasets Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Downloading datasets Integrated libraries If a dataset on the Hub is tied to a supported library , loading the dataset can be done in just a few lines. For information on accessing the dataset, you can click on the “Use this dataset” button on the dataset page to see how to do so. For example, samsum shows how to do so with 🤗 Datasets below. Using the Hugging Face Client Library You can use the huggingface_hub library to create, delete, update and retrieve information from repos. You can also download files from repos or integrate them into your library! For example, you can quickly load a CSV dataset with a few lines using Pandas. Copied from huggingface_hub import hf_hub_download import pandas as pd REPO_ID = "YOUR_REPO_ID" FILENAME = "data.csv" dataset = pd.read_csv( hf_hub_download(repo_id=REPO_ID, filename=FILENAME, repo_type= "dataset" ) ) Using Git Since all datasets on the Hub are Git repositories, you can clone the datasets locally by running: Copied git lfs install git clone [email protected]:datasets/<dataset ID> # example: git clone [email protected]:datasets/allenai/c4 If you have write-access to the particular dataset repo, you’ll also have the ability to commit and push revisions to the dataset. Add your SSH public key to your user settings to push changes and/or access private repos. < > Update on GitHub ← Uploading Datasets Integrated Libraries → Downloading datasets Integrated libraries Using the Hugging Face Client Library Using Git
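If you are not sure which files a dataset repository contains before downloading one, huggingface_hub can list them for you. A minimal sketch, reusing the YOUR_REPO_ID placeholder from the example above: Copied
# List the files available in a dataset repository before picking one to download.
from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files(repo_id="YOUR_REPO_ID", repo_type="dataset")
print(files)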
Docker_Spaces.txt
Docker Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Docker Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Your first Docker Spaces Example Docker Spaces JupyterLab on Spaces Argilla on Spaces Livebook on Spaces Label Studio on Spaces Aim on Spaces Shiny on Spaces ZenML on Spaces ChatUI on Spaces Panel on Spaces Tabby on Spaces Giskard on Spaces Evidence on Spaces marimo on Spaces Langfuse on Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Docker Spaces Spaces accommodate custom Docker containers for apps outside the scope of Streamlit and Gradio. Docker Spaces allow users to go beyond the limits of what was previously possible with the standard SDKs. From FastAPI and Go endpoints to Phoenix apps and ML Ops tools, Docker Spaces can help in many different setups. Setting up Docker Spaces Selecting Docker as the SDK when creating a new Space will initialize your Space by setting the sdk property to docker in your README.md file’s YAML block. Alternatively, given an existing Space repository, set sdk: docker inside the YAML block at the top of your Spaces README.md file. You can also change the default exposed port 7860 by setting app_port: 7860 . Afterwards, you can create a usual Dockerfile . Copied --- title: Basic Docker SDK Space emoji: 🐳 colorFrom: purple colorTo: gray sdk: docker app_port: 7860 --- Internally you could have as many open ports as you want. For instance, you can install Elasticsearch inside your Space and call it internally on its default port 9200. If you want to expose apps served on multiple ports to the outside world, a workaround is to use a reverse proxy like Nginx to dispatch requests from the broader internet (on a single port) to different internal ports. 
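As an illustration of an app that fits this setup, a minimal FastAPI server listening on the exposed port 7860 could look like the sketch below; the app.py filename and the uvicorn start command in the comment are assumptions, not a required layout. Copied
# app.py -- a minimal FastAPI app served on the port exposed by the Space (7860 by default).
# Typically started from the Dockerfile with something like:
#   CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"message": "Hello from a Docker Space!"}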
Secrets and Variables Management You can manage a Space’s environment variables in the Space Settings. Read more here . Variables Buildtime Variables are passed as build-arg s when building your Docker Space. Read Docker’s dedicated documentation for a complete guide on how to use this in the Dockerfile. Copied # Declare your environment variables with the ARG directive ARG MODEL_REPO_NAME FROM python:latest # [...] # You can use them like environment variables RUN predict.py $MODEL_REPO_NAME Runtime Variables are injected in the container’s environment at runtime. Secrets Buildtime In Docker Spaces, the secrets management is different for security reasons. Once you create a secret in the Settings tab , you can expose the secret by adding the following line in your Dockerfile: For example, if SECRET_EXAMPLE is the name of the secret you created in the Settings tab, you can read it at build time by mounting it to a file, then reading it with $(cat /run/secrets/SECRET_EXAMPLE) . See an example below: Copied # Expose the secret SECRET_EXAMPLE at buildtime and use its value as git remote URL RUN --mount= type =secret, id =SECRET_EXAMPLE,mode=0444,required= true \ git init && \ git remote add origin $( cat /run/secrets/SECRET_EXAMPLE) Copied # Expose the secret SECRET_EXAMPLE at buildtime and use its value as a Bearer token for a curl request RUN --mount= type =secret, id =SECRET_EXAMPLE,mode=0444,required= true \ curl test -H 'Authorization: Bearer $(cat /run/secrets/SECRET_EXAMPLE)' Runtime Same as for public Variables, at runtime, you can access the secrets as environment variables. For example, in Python you would use os.environ.get("SECRET_EXAMPLE") . Check out this example of a Docker Space that uses secrets. Permissions The container runs with user ID 1000. To avoid permission issues you should create a user and set its WORKDIR before any COPY or download. Copied # Set up a new user named "user" with user ID 1000 RUN useradd -m -u 1000 user # Switch to the "user" user USER user # Set home to the user's home directory ENV HOME=/home/ user \ PATH=/home/ user /.local/bin:$PATH # Set the working directory to the user's home directory WORKDIR $HOME /app # Try and run pip command after setting the user with `USER user` to avoid permission issues with Python RUN pip install --no-cache-dir --upgrade pip # Copy the current directory contents into the container at $HOME/app setting the owner to the user COPY -- chown =user . $HOME /app # Download a checkpoint RUN mkdir content ADD -- chown =user https://<SOME_ASSET_URL> content/<SOME_ASSET_NAME> Always specify the `--chown=user` with `ADD` and `COPY` to ensure the new files are owned by your user. If you still face permission issues, you might need to use chmod or chown in your Dockerfile to grant the right permissions. For example, if you want to use the directory /data , you can do: Copied RUN mkdir -p /data RUN chmod 777 /data You should always avoid superfluous chowns. Updating metadata for a file creates a new copy stored in the new layer. Therefore, a recursive chown can result in a very large image due to the duplication of all affected files. Rather than fixing permission by running chown : Copied COPY checkpoint . RUN chown -R user checkpoint you should always do: Copied COPY -- chown= user checkpoint . (same goes for ADD command) Data Persistence The data written on disk is lost whenever your Docker Space restarts, unless you opt-in for a persistent storage upgrade. 
If you opt-in for a persistent storage upgrade, you can use the /data directory to store data. This directory is mounted on a persistent volume, which means that the data written in this directory will be persisted across restarts. At the moment, the /data volume is only available at runtime, i.e. you cannot use /data during the build step of your Dockerfile. You can also use our Datasets Hub for specific cases, where you can store state and data in a git LFS repository. You can find an example of persistence here , which uses the huggingface_hub library for programmatically uploading files to a dataset repository. This Space example along with this guide will help you decide which solution best fits your data type. Finally, in some cases, you might want to use an external storage solution from your Space’s code, like an externally hosted DB, S3, etc.

Docker container with GPU

You can run Docker containers with GPU support by using one of our GPU-flavored Spaces Hardware . We recommend using the nvidia/cuda image from Docker Hub as a base image, which comes with CUDA and cuDNN pre-installed.

During Docker buildtime, you don't have access to GPU hardware. Therefore, you should not try to run any GPU-related command during the build step of your Dockerfile. For example, you can't run `nvidia-smi` or `torch.cuda.is_available()` while building an image. Read more [here](https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker#description).

Read More Full Docker demo example List of Docker Spaces examples Spaces Examples
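Tying the runtime pieces above together, here is a minimal Python sketch that reads a runtime secret and writes state to /data only when persistent storage is available; the secret name SECRET_EXAMPLE and the file names are illustrative assumptions:

# Read a runtime secret (exposed as a plain environment variable) and persist data.
import os
from pathlib import Path

secret = os.environ.get("SECRET_EXAMPLE")  # set in the Space Settings tab
if secret is None:
    print("SECRET_EXAMPLE is not set; add it in the Space settings if your app needs it")

# /data only exists when the Space has the persistent storage upgrade enabled
data_dir = Path("/data") if Path("/data").is_dir() else Path.home() / "app" / "tmp"
data_dir.mkdir(parents=True, exist_ok=True)
(data_dir / "state.txt").write_text("state that survives restarts only when /data is mounted")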
Model_files_and_layouts.txt
Model files and layouts Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Model files and layouts Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Model files and layouts Diffusion models are saved in various file types and organized in different layouts. Diffusers stores model weights as safetensors files in Diffusers-multifolder layout and it also supports loading files (like safetensors and ckpt files) from a single-file layout which is commonly used in the diffusion ecosystem. Each layout has its own benefits and use cases, and this guide will show you how to load the different files and layouts, and how to convert them. Files PyTorch model weights are typically saved with Python’s pickle utility as ckpt or bin files. 
However, pickle is not secure and pickled files may contain malicious code that can be executed. This vulnerability is a serious concern given the popularity of model sharing. To address this security issue, the Safetensors library was developed as a secure alternative to pickle, which saves models as safetensors files. safetensors Learn more about the design decisions and why safetensor files are preferred for saving and loading model weights in the Safetensors audited as really safe and becoming the default blog post. Safetensors is a safe and fast file format for securely storing and loading tensors. Safetensors restricts the header size to limit certain types of attacks, supports lazy loading (useful for distributed setups), and has generally faster loading speeds. Make sure you have the Safetensors library installed. Copied !pip install safetensors Safetensors stores weights in a safetensors file. Diffusers loads safetensors files by default if they’re available and the Safetensors library is installed. There are two ways safetensors files can be organized: Diffusers-multifolder layout: there may be several separate safetensors files, one for each pipeline component (text encoder, UNet, VAE), organized in subfolders (check out the stable-diffusion-v1-5/stable-diffusion-v1-5 repository as an example) single-file layout: all the model weights may be saved in a single file (check out the WarriorMama777/OrangeMixs repository as an example) multifolder single file Use the from_pretrained() method to load a model with safetensors files stored in multiple folders. Copied from diffusers import DiffusionPipeline pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , use_safetensors= True ) LoRA files LoRA is a lightweight adapter that is fast and easy to train, making them especially popular for generating images in a certain way or style. These adapters are commonly stored in a safetensors file, and are widely popular on model sharing platforms like civitai . LoRAs are loaded into a base model with the load_lora_weights() method. Copied from diffusers import StableDiffusionXLPipeline import torch # base model pipeline = StableDiffusionXLPipeline.from_pretrained( "Lykon/dreamshaper-xl-1-0" , torch_dtype=torch.float16, variant= "fp16" ).to( "cuda" ) # download LoRA weights !wget https://civitai.com/api/download/models/ 168776 -O blueprintify.safetensors # load LoRA weights pipeline.load_lora_weights( "." , weight_name= "blueprintify.safetensors" ) prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" image = pipeline( prompt=prompt, negative_prompt=negative_prompt, generator=torch.manual_seed( 0 ), ).images[ 0 ] image ckpt Pickled files may be unsafe because they can be exploited to execute malicious code. It is recommended to use safetensors files instead where possible, or convert the weights to safetensors files. PyTorch’s torch.save function uses Python’s pickle utility to serialize and save models. These files are saved as a ckpt file and they contain the entire model’s weights. Use the from_single_file() method to directly load a ckpt file. 
Copied

from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt"
)

Storage layout

There are two ways model files are organized, either in a Diffusers-multifolder layout or in a single-file layout. The Diffusers-multifolder layout is the default, and each component file (text encoder, UNet, VAE) is stored in a separate subfolder. Diffusers also supports loading models from a single-file layout where all the components are bundled together.

Diffusers-multifolder

The Diffusers-multifolder layout is the default storage layout for Diffusers. Each component’s (text encoder, UNet, VAE) weights are stored in a separate subfolder. The weights can be stored as safetensors or ckpt files.

multifolder layout UNet subfolder

To load from the Diffusers-multifolder layout, use the from_pretrained() method.

Copied

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

Benefits of using the Diffusers-multifolder layout include:

Faster to load each component file individually or in parallel.
Reduced memory usage because you only load the components you need. For example, models like SDXL Turbo , SDXL Lightning , and Hyper-SD have the same components except for the UNet. You can reuse their shared components with the from_pipe() method without consuming any additional memory (take a look at the Reuse a pipeline guide) and only load the UNet. This way, you don’t need to download redundant components and unnecessarily use more memory.

Copied

import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, EulerDiscreteScheduler

# download one model
sdxl_pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# switch UNet for another model
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/sdxl-turbo",
    subfolder="unet",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True
)

# reuse all the same components in the new model except for the UNet
turbo_pipeline = StableDiffusionXLPipeline.from_pipe(
    sdxl_pipeline, unet=unet,
).to("cuda")
turbo_pipeline.scheduler = EulerDiscreteScheduler.from_config(
    turbo_pipeline.scheduler.config,
    timestep_spacing="trailing"
)
image = turbo_pipeline(
    "an astronaut riding a unicorn on mars",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image

Reduced storage requirements because if a component, such as the SDXL VAE , is shared across multiple models, you only need to download and store a single copy of it instead of downloading and storing it multiple times. For 10 SDXL models, this can save ~3.5GB of storage. The storage savings are even greater for newer models like PixArt Sigma, where the text encoder alone is ~19GB!
Flexibility to replace a component in the model with a newer or better version.
Copied

import torch
from diffusers import DiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

More visibility and information about a model’s components, which are stored in a config.json file in each component subfolder.

Single-file

The single-file layout stores all the model weights in a single file. All the model components (text encoder, UNet, VAE) weights are kept together instead of separately in subfolders. This can be a safetensors or ckpt file.

To load from a single-file layout, use the from_single_file() method.

Copied

import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

Benefits of using a single-file layout include:

Easy compatibility with diffusion interfaces such as ComfyUI or Automatic1111 which commonly use a single-file layout.
Easier to manage (download and share) a single file.

Convert layout and files

Diffusers provides many scripts and methods to convert storage layouts and file formats to enable broader support across the diffusion ecosystem. Take a look at the diffusers/scripts collection to find a script that fits your conversion needs. Scripts that have "to_diffusers" appended at the end mean they convert a model to the Diffusers-multifolder layout. Each script has its own specific set of arguments for configuring the conversion, so make sure you check what arguments are available!

For example, to convert a Stable Diffusion XL model stored in Diffusers-multifolder layout to a single-file layout, run the convert_diffusers_to_original_sdxl.py script. Provide the path to the model to convert, and the path to save the converted model to. You can optionally specify whether you want to save the model as a safetensors file and whether to save the model in half-precision.

Copied

python convert_diffusers_to_original_sdxl.py --model_path path/to/model/to/convert --checkpoint_path path/to/save/model/to --use_safetensors

You can also save a model to the Diffusers-multifolder layout with the save_pretrained() method. This creates a directory for you if it doesn’t already exist, and it also saves the files as safetensors files by default.

Copied

from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
)
pipeline.save_pretrained("path/to/save/model")

Lastly, there are also Spaces, such as SD To Diffusers and SD-XL To Diffusers , that provide a more user-friendly interface for converting models to the Diffusers-multifolder layout. This is the easiest and most convenient option for converting layouts, and it’ll open a PR on your model repository with the converted files. However, this option is not as reliable as running a script, and the Space may fail for more complicated models.
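To see what the multifolder layout produced by save_pretrained() looks like on disk, here is a small sketch, assuming the model was saved to path/to/save/model as in the example above; it simply lists the component subfolders and their files:

# Inspect the directory written by save_pretrained() to see the Diffusers-multifolder layout:
# one subfolder per component, each with its configuration file, plus a top-level model_index.json.
import os

save_dir = "path/to/save/model"  # same placeholder path used in the example above
for name in sorted(os.listdir(save_dir)):
    path = os.path.join(save_dir, name)
    if os.path.isdir(path):
        # e.g. unet -> ['config.json', 'diffusion_pytorch_model.safetensors']
        print(name, "->", sorted(os.listdir(path)))
    else:
        print(name)  # top-level files such as model_index.json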
Single-file layout usage Now that you’re familiar with the differences between the Diffusers-multifolder and single-file layout, this section shows you how to load models and pipeline components, customize configuration options for loading, and load local files with the from_single_file() method. Load a pipeline or model Pass the file path of the pipeline or model to the from_single_file() method to load it. pipeline model Copied from diffusers import StableDiffusionXLPipeline ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors" pipeline = StableDiffusionXLPipeline.from_single_file(ckpt_path) Customize components in the pipeline by passing them directly to the from_single_file() method. For example, you can use a different scheduler in a pipeline. Copied from diffusers import StableDiffusionXLPipeline, DDIMScheduler ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors" scheduler = DDIMScheduler() pipeline = StableDiffusionXLPipeline.from_single_file(ckpt_path, scheduler=scheduler) Or you could use a ControlNet model in the pipeline. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel ckpt_path = "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors" controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11p_sd15_canny" ) pipeline = StableDiffusionControlNetPipeline.from_single_file(ckpt_path, controlnet=controlnet) Customize configuration options Models have a configuration file that define their attributes like the number of inputs in a UNet. Pipelines configuration options are available in the pipeline’s class. For example, if you look at the StableDiffusionXLInstructPix2PixPipeline class, there is an option to scale the image latents with the is_cosxl_edit parameter. These configuration files can be found in the models Hub repository or another location from which the configuration file originated (for example, a GitHub repository or locally on your device). Hub configuration file original configuration file The from_single_file() method automatically maps the checkpoint to the appropriate model repository, but there are cases where it is useful to use the config parameter. For example, if the model components in the checkpoint are different from the original checkpoint or if a checkpoint doesn’t have the necessary metadata to correctly determine the configuration to use for the pipeline. The from_single_file() method automatically determines the configuration to use from the configuration file in the model repository. You could also explicitly specify the configuration to use by providing the repository id to the config parameter. Copied from diffusers import StableDiffusionXLPipeline ckpt_path = "https://huggingface.co/segmind/SSD-1B/blob/main/SSD-1B.safetensors" repo_id = "segmind/SSD-1B" pipeline = StableDiffusionXLPipeline.from_single_file(ckpt_path, config=repo_id) The model loads the configuration file for the UNet , VAE , and text encoder from their respective subfolders in the repository. While the configuration files specify the pipeline or models default parameters, you can override them by providing the parameters directly to the from_single_file() method. Any parameter supported by the model or pipeline class can be configured in this way. 
pipeline model

For example, to scale the image latents in StableDiffusionXLInstructPix2PixPipeline, pass the is_cosxl_edit parameter.

Copied

from diffusers import StableDiffusionXLInstructPix2PixPipeline

ckpt_path = "https://huggingface.co/stabilityai/cosxl/blob/main/cosxl_edit.safetensors"
pipeline = StableDiffusionXLInstructPix2PixPipeline.from_single_file(ckpt_path, config="diffusers/sdxl-instructpix2pix-768", is_cosxl_edit=True)

Local files

In Diffusers>=v0.28.0, the from_single_file() method attempts to configure a pipeline or model by inferring the model type from the keys in the checkpoint file. The inferred model type is used to determine the appropriate model repository on the Hugging Face Hub to configure the model or pipeline. For example, any single file checkpoint based on the Stable Diffusion XL base model will use the stabilityai/stable-diffusion-xl-base-1.0 model repository to configure the pipeline.

But if you’re working in an environment with restricted internet access, you should download the configuration files with the snapshot_download function, and the model checkpoint with the hf_hub_download function. By default, these files are downloaded to the Hugging Face Hub cache directory , but you can specify a preferred directory to download the files to with the local_dir parameter.

Pass the configuration and checkpoint paths to the from_single_file() method to load locally.

Hub cache directory specific local directory

Copied

from diffusers import StableDiffusionXLPipeline
from huggingface_hub import hf_hub_download, snapshot_download

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors"
)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"]
)

pipeline = StableDiffusionXLPipeline.from_single_file(my_local_checkpoint_path, config=my_local_config_path, local_files_only=True)

Local files without symlink

In huggingface_hub>=v0.23.0, the local_dir_use_symlinks argument isn’t necessary for the hf_hub_download and snapshot_download functions.

The from_single_file() method relies on the huggingface_hub caching mechanism to fetch and store checkpoints and configuration files for models and pipelines. If you’re working with a file system that does not support symlinking, you should download the checkpoint file to a local directory first, and disable symlinking with the local_dir_use_symlinks=False parameter in the hf_hub_download and snapshot_download functions.

Copied

from huggingface_hub import hf_hub_download, snapshot_download

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors",
    local_dir="my_local_checkpoints",
    local_dir_use_symlinks=False
)
print("My local checkpoint: ", my_local_checkpoint_path)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
    local_dir_use_symlinks=False,
)
print("My local config: ", my_local_config_path)

Then you can pass the local paths to the pretrained_model_link_or_path and config parameters.
Copied

pipeline = StableDiffusionXLPipeline.from_single_file(my_local_checkpoint_path, config=my_local_config_path, local_files_only=True)
Text_Environments.txt
Text Environments Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation Text Environments TRL 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Text Environments Text environments provide a learning ground for language agents. It allows a language model to use tools to accomplish a task such as using a Python interpreter to answer math questions or using a search index for trivia questions. Having access to tools allows language models to solve tasks that would be very hard for the models itself but can be trivial for the appropriate tools. A good example is arithmetics of large numbers that become a simple copy-paste task once you have access to a calculator. Let’s dive into how text environments work and start with tools! Tools One of the core building blocks of text environments are tools that the model can use to solve tasks. In general tools can be any Python function that takes a string as input and returns string. The TextEnvironment offers two options for tools: either go with predefined tools from transformers.Tool or define your own function or class with __call__ method. Let’s have a look at both! transformers.Tool Text environments fully support tools of the class transformers.Tool . The advantage of building tools in that framework is that they can easily be shared Copied from transformers import load_tool # simple calculator tool that runs +-/* operations calc_tool = load_tool( "ybelkada/simple-calculator" ) # python interpreter that executes program and returns outputs py_tool = load_tool( "lvwerra/python-interpreter" ) # wikipedia search index that returns best search match wiki_tool = load_tool( "vwxyzjn/pyserini-wikipedia-kilt-doc" ) These tools are either loaded from the hub or from a local folder. Using the tool is as simple as calling them with a text query: Copied calc_tool( "1/2" ) >>> "0.5" Note that both input and return values are strings to enable easy usage with a language model. 
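Since a tool only needs to map an input string to an output string, you are not limited to transformers.Tool: any callable works, including a plain class that implements __call__. A minimal sketch (the class and its toy behaviour are illustrative, not part of TRL):

# Illustrative class-based tool: anything callable that maps str -> str can be used as a tool.
class ReverseTool:
    """Toy tool that reverses the text it receives."""

    def __call__(self, text: str) -> str:
        return text[::-1]

reverse_tool = ReverseTool()
print(reverse_tool("hello"))  # "olleh"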
Custom Tools

The following is an example of a tool that adds two integers:

Copied

def add(text):
    int_1, int_2 = text.split("+")
    result = int(int_1) + int(int_2)
    return str(result)

print(add("1+1"))
>>> "2"

We looked at basic examples such as a calculator, but the principle holds for more complex tools as well, such as a web search tool where you input the query and get the search results in return. Now let's look at how the model can use the tools with the call syntax.

Call syntax

In order to have a unified way for the model to call a tool we created a simple syntax that looks as follows:

Copied

"<request><TOOL_NAME>QUERY<call>TOOL_RESPONSE<response>"

There are a few special tokens involved so let's decompose it: First the model can signal that it wants to use a tool by emitting the <request> token. After that we want to know the name of the tool to call, which is done by enclosing the tool name in <> brackets. Once we know which tool to call, the tool query follows in free text form. The <call> token signifies the end of the query and stops the model generation. At this point the model output is parsed and the query is sent to the tool. The environment appends the tool response to the string, followed by the <response> token to mark the end of the tool output.

Let's look at the concrete example of the calculator and assume its name is Calculator (more on how the name of a tool is inferred later):

Copied

"<request><Calculator>1/2<call>0.5<response>"

Finally, the episode ends and generation stops when the model generates <submit>, which marks the interaction as completed.

Now let's have a look at how we can create a new text environment!

Create a TextEnvironment

Copied

from transformers import load_tool
from trl import TextEnvironment

# `model` and `tokenizer` are assumed to be a loaded language model and its tokenizer

prompt = """\
What is 13-3?
<request><SimpleCalculatorTool>13-3<call>10.0<response>
Result=10<submit>
"""

def reward_fn(result, answer):
    """Simplified reward function returning 1 if result matches answer and 0 otherwise."""
    result_parsed = result.split("=")[1].split("<")[0]
    return int(result_parsed == answer)

text_env = TextEnvironment(
    model=model,
    tokenizer=tokenizer,
    tools={"SimpleCalculatorTool": load_tool("ybelkada/simple-calculator")},
    reward_fn=reward_fn,
    prompt=prompt,
    max_turns=1,
    max_tool_response=100,
    generation_kwargs={"do_sample": "true"},
)

Let's decompose the settings:

Argument Description
model Language model to interact with the environment and generate requests.
tokenizer Tokenizer of the language model handling tokenization of strings.
tools List or dict of tools. If a list is passed, the name of each tool is inferred from its class name; otherwise the dictionary keys are used as the tool names.
reward_fn A function that takes a string as input and returns a reward. Can have extra arguments that are passed to .run() such as ground truth.
prompt Prompt to prepend to every task. Usually a few examples to demonstrate to the model how to use the tools in a few-shot fashion.
max_turns Maximum number of interactions between model and tools before episode ends.
max_tool_response The tool response is truncated to this length to avoid running out of model context.
max_length The maximum number of tokens to allow in an episode.
generation_kwargs Generation settings used by the language model.

You can customize the environment to your needs and add custom tools and settings. Let's see how you can use the environment to have the model interact with the available tools!

Run an Episode

To run a set of queries through the text environment one can simply use the run method.

Copied

queries = [
    "What is 1/2?"
]
answers = ["0.5"]

queries, responses, masks, rewards, histories = text_env.run(queries, answers=answers)

This will execute the model/tool feedback loop for each query until either no tool is called anymore, the maximum number of turns is reached, or the maximum number of tokens in an episode is exceeded. The extra kwargs (e.g. answers=answers above) passed to run will be passed on to the reward function.

There are five objects that are returned by run :

queries : a list of the tokenized queries
responses : all tokens that have been generated within the environment, including model and tool tokens
masks : mask that indicates which tokens have been generated by the model and which tokens were generated by the tool
rewards : a list of rewards for each query/response
histories : list of TextHistory objects, which are useful objects containing all the above and also the text equivalents

The masks are crucial for training as we don’t want to optimize tokens that the model has not generated, i.e. tokens produced by the tools.

Next, we’ll train a PPO step with the generated responses!

Train

Training on episodes from the TextEnvironment is straightforward and simply requires forwarding all the returned variables except the TextHistory objects to the step method:

Copied

train_stats = ppo_trainer.step(queries, responses, rewards, masks)

TextHistory

The TextHistory object stores the interactions between the model and the text environment. It stores tokens and text generated in each turn and their source in each turn (model or system) as well as rewards. Let’s go through the class attributes and methods.

Attributes

The following table summarises the available attributes of the TextHistory class:

Attribute Description
text The full string of the text generated in the text environment with both model and system generated text.
text_spans A list of tuples with the spans for each model or system generated text segment.
system_spans A list of boolean values indicating if the segment is model or system generated.
tokens All tokens generated in the text environment with both model and system generated tokens.
token_spans Similar to text_spans , the token_spans indicate the boundaries of model and system generated tokens.
token_masks The token masks can be used to ignore system generated tokens by masking them.
completed Indicates if the interaction with the environment has completed.
truncated Indicates if the interaction with the environment has completed because max length was reached.

With these attributes you can reconstruct every interaction of the model with the TextEnvironment . The TextHistory also lets you visualize the text history. Let’s have a look!

Visualization

When the model interacts inside the TextEnvironment it can be useful to visualize and separate which parts of the text outputs were generated by the model and which parts come from the system and tools. For that purpose there are two methods, TextHistory.show_text() and TextHistory.show_tokens() . They print the text and tokens respectively and highlight the various segments using the rich library (make sure to install it before using these methods).

You can see that the prompt is highlighted in gray, whereas system segments such as query and tool responses are highlighted in green. All segments generated by the model are highlighted in blue, and in addition to the pure text output the reward is displayed as additional text in plum.
Here an example of show_text : Sometimes there can be tricky tokenization related issues that are hidden when showing the decoded text. Thus TextHistory also offers an option to display the same highlighting on the tokens directly with show_tokens : Note that you can turn on the colour legend by passing show_legend=True . API Documentation class trl. TextEnvironment < source > ( model = None tokenizer = None tools = None reward_fn = None prompt = None max_turns = 4 max_tool_reponse = 100 max_length = None generation_kwargs = None ) The TextEnvironment enables interaction of a LLM with an environment using tools. compute_reward < source > ( histories **reward_kwargs ) Compute the reward for a list of histories. generate < source > ( histories ) Generate responses for a list of histories. parse_tool_call < source > ( text ) Parse request string. Expected format: <request><tool_name>query<call> run < source > ( queries **rewards_kwargs ) Parameters queries (list[str]) — A list of queries to run the model in the environment on. Run the environment on a list of queries. step < source > ( history ) Parameters history ( TextHistory ) — The history to step forward. Step the environment forward one turn. task_end_check < source > ( history model_turn = True ) Check if the current generation sequence has finished. tasks_end_check < source > ( histories model_turn = True ) Check if the current generation sequences have finished. class trl. TextHistory < source > ( text tokens system = True ) The TextHistory class keeps track of the history of an interaction between the language model and the environment. append_segment < source > ( text tokens system = True ) Parameters text ( str ) — The text of the new segment. tokens ( torch.LongTensor ) — The tokens of the new segment. system ( bool , optional ) — Whether the new segment is a system or user segment. Append a new segment to the history. complete < source > ( truncated = False ) Mark the history as completed. show_colour_legend < source > ( ) Print the colour legend. show_text < source > ( show_legend = False ) Print the text history. show_tokens < source > ( tokenizer show_legend = False ) Print the history tokens. split_query_response_tokens < source > ( ) Split the tokens into query and response tokens. < > Update on GitHub ← Data Utilities Script Utilities → Text Environments Tools transformers. Tool Custom Tools Call syntax Create a Text Environment Run an Episode Train Text History Attributes Visualization AP I Documentation
Text_Classification.txt
Text Classification Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up api-inference documentation Text Classification api-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting Started Serverless Inference API Getting Started Supported Models Rate Limits Security API Reference Parameters Detailed Task Parameters Audio Classification Automatic Speech Recognition Chat Completion Feature Extraction Fill Mask Image Classification Image Segmentation Image to Image Image-Text to Text Object Detection Question Answering Summarization Table Question Answering Text Classification Text Generation Text to Image Token Classification Translation Zero Shot Classification Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Text Classification Text Classification is the task of assigning a label or class to a given text. Some use cases are sentiment analysis, natural language inference, and assessing grammatical correctness. For more details about the text-classification task, check out its dedicated page ! You will find examples and related materials. Recommended models distilbert/distilbert-base-uncased-finetuned-sst-2-english : A robust model trained for sentiment analysis. ProsusAI/finbert : A sentiment analysis model specialized in financial sentiment. cardiffnlp/twitter-roberta-base-sentiment-latest : A sentiment analysis model specialized in analyzing tweets. papluca/xlm-roberta-base-language-detection : A model that can classify languages. Explore all available models and find the one that suits you best here . Using the API Python JavaScript cURL Copied import requests API_URL = "https://api-inference.huggingface.co/models/distilbert/distilbert-base-uncased-finetuned-sst-2-english" headers = { "Authorization" : "Bearer hf_***" } def query ( payload ): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs" : "I like you. I love you" , }) To use the Python client, see huggingface_hub ’s package reference . API specification Request Payload inputs* string The text to classify parameters object function_to_apply enum Possible values: sigmoid, softmax, none. top_k integer When specified, limits the output to the top K most probable classes. Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). 
However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here .
x-wait-for-model boolean, default to false If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here .

For more information about Inference API headers, check out the parameters guide .

Response

Body (array) object[] Output is an array of objects.
label string The predicted class label.
score number The corresponding probability.
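As a quick illustration of the request parameters and response body documented above, here is a hedged Python sketch building on the query helper from the previous section; the exact nesting of the returned JSON can vary by model, so the parsing below is deliberately defensive:

# Classify one sentence and print the top labels.
# Assumes API_URL, headers and query() are defined as in the example above,
# and that the token in `headers` has Inference API permission.
output = query({
    "inputs": "I like you. I love you",
    "parameters": {"top_k": 2, "function_to_apply": "softmax"},
})

# The task returns objects with `label` and `score`; the array may be nested
# one level deep depending on the model, so unwrap it if needed.
predictions = output[0] if output and isinstance(output[0], list) else output
for item in predictions:
    print(f"{item['label']}: {item['score']:.3f}")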
Orthogonal_Finetuning_(OFT_and_BOFT).txt
Orthogonal Finetuning (OFT and BOFT) Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Orthogonal Finetuning (OFT and BOFT) PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Orthogonal Finetuning (OFT and BOFT) This conceptual guide gives a brief overview of OFT and BOFT , a parameter-efficient fine-tuning technique that utilizes orthogonal matrix to multiplicatively transform the pretrained weight matrices. To achieve efficient fine-tuning, OFT represents the weight updates with an orthogonal transformation. The orthogonal transformation is parameterized by an orthogonal matrix multiplied to the pretrained weight matrix. These new matrices can be trained to adapt to the new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn’t receive any further adjustments. To produce the final results, both the original and the adapted weights are multiplied togethor. Orthogonal Butterfly (BOFT) generalizes OFT with Butterfly factorization and further improves its parameter efficiency and finetuning flexibility. In short, OFT can be viewed as a special case of BOFT. Different from LoRA that uses additive low-rank weight updates, BOFT uses multiplicative orthogonal weight updates. The comparison is shown below. BOFT has some advantages compared to LoRA: BOFT proposes a simple yet generic way to finetune pretrained models to downstream tasks, yielding a better preservation of pretraining knowledge and a better parameter efficiency. Through the orthogonality, BOFT introduces a structural constraint, i.e., keeping the hyperspherical energy unchanged during finetuning. This can effectively reduce the forgetting of pretraining knowledge. BOFT uses the butterfly factorization to efficiently parameterize the orthogonal matrix, which yields a compact yet expressive learning space (i.e., hypothesis class). 
The sparse matrix decomposition in BOFT brings in additional inductive biases that are beneficial to generalization. In principle, BOFT can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Given the target layers for injecting BOFT parameters, the number of trainable parameters can be determined based on the size of the weight matrices. Merge OFT/BOFT weights into the base model Similar to LoRA, the weights learned by OFT/BOFT can be integrated into the pretrained weight matrices using the merge_and_unload() function. This function merges the adapter weights with the base model which allows you to effectively use the newly merged model as a standalone model. This works because during training, the orthogonal weight matrix (R in the diagram above) and the pretrained weight matrices are separate. But once training is complete, these weights can actually be merged (multiplied) into a new weight matrix that is equivalent. Utils for OFT / BOFT Common OFT / BOFT parameters in PEFT As with other methods supported by PEFT, to fine-tune a model using OFT or BOFT, you need to: Instantiate a base model. Create a configuration ( OFTConfig or BOFTConfig ) where you define OFT/BOFT-specific parameters. Wrap the base model with get_peft_model() to get a trainable PeftModel . Train the PeftModel as you normally would train the base model. BOFT-specific paramters BOFTConfig allows you to control how OFT/BOFT is applied to the base model through the following parameters: boft_block_size : the BOFT matrix block size across different layers, expressed in int . Smaller block size results in sparser update matrices with fewer trainable paramters. Note , please choose boft_block_size to be divisible by most layer’s input dimension ( in_features ), e.g., 4, 8, 16. Also, please only specify either boft_block_size or boft_block_num , but not both simultaneously or leaving both to 0, because boft_block_size x boft_block_num must equal the layer’s input dimension. boft_block_num : the number of BOFT matrix blocks across different layers, expressed in int . Fewer blocks result in sparser update matrices with fewer trainable paramters. Note , please choose boft_block_num to be divisible by most layer’s input dimension ( in_features ), e.g., 4, 8, 16. Also, please only specify either boft_block_size or boft_block_num , but not both simultaneously or leaving both to 0, because boft_block_size x boft_block_num must equal the layer’s input dimension. boft_n_butterfly_factor : the number of butterfly factors. Note , for boft_n_butterfly_factor=1 , BOFT is the same as vanilla OFT, for boft_n_butterfly_factor=2 , the effective block size of OFT becomes twice as big and the number of blocks become half. bias : specify if the bias parameters should be trained. Can be "none" , "all" or "boft_only" . boft_dropout : specify the probability of multiplicative dropout. target_modules : The modules (for example, attention blocks) to inject the OFT/BOFT matrices. modules_to_save : List of modules apart from OFT/BOFT matrices to be set as trainable and saved in the final checkpoint. These typically include model’s custom head that is randomly initialized for the fine-tuning task. 
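Putting the common workflow and the parameters above together, here is a minimal, hedged sketch (instantiate a base model, create a config, wrap it, train); the base model, target_modules and hyperparameter values are illustrative assumptions that must be adapted to your architecture:

# Minimal sketch of the common OFT/BOFT workflow; values below are illustrative.
from transformers import AutoModelForCausalLM
from peft import BOFTConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # 1. instantiate a base model

config = BOFTConfig(                        # 2. define the BOFT-specific parameters
    boft_block_size=4,                      # must divide the target layers' in_features
    boft_n_butterfly_factor=2,
    boft_dropout=0.05,
    bias="boft_only",
    target_modules=["q_proj", "v_proj"],    # attention projections in OPT; adjust per model
)

peft_model = get_peft_model(base_model, config)  # 3. wrap the base model into a trainable PeftModel
# 4. train `peft_model` exactly as you would train the base model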
BOFT Example Usage

For examples of applying BOFT to various downstream tasks, take a look at the following step-by-step guides: Dreambooth finetuning with BOFT Controllable generation finetuning with BOFT (ControlNet)

For the task of image classification, one can initialize the BOFT config for a DinoV2 model as follows:

Copied

import transformers
from peft import BOFTConfig, get_peft_model

config = BOFTConfig(
    boft_block_size=4,
    boft_n_butterfly_factor=2,
    target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
    boft_dropout=0.1,
    bias="boft_only",
    modules_to_save=["classifier"],
)

model = transformers.Dinov2ForImageClassification.from_pretrained(
    "facebook/dinov2-large",
    num_labels=100,
)

boft_model = get_peft_model(model, config)
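As a hedged follow-up to the example above (the directory names are illustrative, and the training loop itself is omitted), you can check what is trainable, save the adapter, and merge the orthogonal updates back into the base weights as described in the merge section earlier:

# Continuing from `boft_model` above:
boft_model.print_trainable_parameters()  # only the BOFT parameters and `classifier` are trainable

# ... finetune boft_model on your image classification dataset ...

boft_model.save_pretrained("dinov2-large-boft")      # saves only the BOFT adapter weights
merged = boft_model.merge_and_unload()               # fold the orthogonal updates into the base weights
merged.save_pretrained("dinov2-large-boft-merged")   # a standalone model, as described above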
Transformers.js.txt
Transformers.js Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation Transformers.js Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Transformers.js State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server! Transformers.js is designed to be functionally equivalent to Hugging Face’s transformers python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as: 📝 Natural Language Processing : text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation. 🖼️ Computer Vision : image classification, object detection, segmentation, and depth estimation. 🗣️ Audio : automatic speech recognition, audio classification, and text-to-speech. 🐙 Multimodal : embeddings, zero-shot audio classification, zero-shot image classification, and zero-shot object detection. Transformers.js uses ONNX Runtime to run models in the browser. The best part about it, is that you can easily convert your pretrained PyTorch, TensorFlow, or JAX models to ONNX using 🤗 Optimum . For more information, check out the full documentation . Quick tour It’s super simple to translate from existing code! Just like the python library, we support the pipeline API. Pipelines group together a pretrained model with preprocessing of inputs and postprocessing of outputs, making it the easiest way to run models with the library. Python (original) Javascript (ours) Copied from transformers import pipeline # Allocate a pipeline for sentiment-analysis pipe = pipeline( 'sentiment-analysis' ) out = pipe( 'I love transformers!' ) # [{'label': 'POSITIVE', 'score': 0.999806941}] Copied import { pipeline } from '@huggingface/transformers' ; // Allocate a pipeline for sentiment-analysis let pipe = await pipeline ( 'sentiment-analysis' ); let out = await pipe ( 'I love transformers!' ); // [{'label': 'POSITIVE', 'score': 0.999817686}] You can also use a different model by specifying the model id or path as the second argument to the pipeline function. 
For example: Copied // Use a different model for sentiment-analysis let pipe = await pipeline ( 'sentiment-analysis' , 'Xenova/bert-base-multilingual-uncased-sentiment' ); Contents The documentation is organized into 4 sections: GET STARTED provides a quick tour of the library and installation instructions to get up and running. TUTORIALS are a great place to start if you’re a beginner! We also include sample applications for you to play around with! DEVELOPER GUIDES show you how to use the library to achieve a specific goal. API REFERENCE describes all classes and functions, as well as their available parameters and types. Examples Want to jump straight in? Get started with one of our sample applications/templates: Name Description Links Whisper Web Speech recognition w/ Whisper code , demo Doodle Dash Real-time sketch-recognition game blog , code , demo Code Playground In-browser code completion website code , demo Semantic Image Search (client-side) Search for images with text code , demo Semantic Image Search (server-side) Search for images with text (Supabase) code , demo Vanilla JavaScript In-browser object detection video , code , demo React Multilingual translation website code , demo Text to speech (client-side) In-browser speech synthesis code , demo Browser extension Text classification extension code Electron Text classification application code Next.js (client-side) Sentiment analysis (in-browser inference) code , demo Next.js (server-side) Sentiment analysis (Node.js inference) code , demo Node.js Sentiment analysis API code Demo site A collection of demos code , demo Check out the Transformers.js template on Hugging Face to get started in one click! Supported tasks/models Here is the list of all tasks and architectures currently supported by Transformers.js. If you don’t see your task/model listed here or it is not yet supported, feel free to open up a feature request here . To find compatible models on the Hub, select the “transformers.js” library tag in the filter menu (or visit this link ). You can refine your search by selecting the task you’re interested in (e.g., text-classification ). Tasks Natural Language Processing Task ID Description Supported? Fill-Mask fill-mask Masking some of the words in a sentence and predicting which words should replace those masks. ✅ (docs) (models) Question Answering question-answering Retrieve the answer to a question from a given text. ✅ (docs) (models) Sentence Similarity sentence-similarity Determining how similar two texts are. ✅ (docs) (models) Summarization summarization Producing a shorter version of a document while preserving its important information. ✅ (docs) (models) Table Question Answering table-question-answering Answering a question about information from a given table. ❌ Text Classification text-classification or sentiment-analysis Assigning a label or class to a given text. ✅ (docs) (models) Text Generation text-generation Producing new text by predicting the next word in a sequence. ✅ (docs) (models) Text-to-text Generation text2text-generation Converting one text sequence into another text sequence. ✅ (docs) (models) Token Classification token-classification or ner Assigning a label to each token in a text. ✅ (docs) (models) Translation translation Converting text from one language to another. ✅ (docs) (models) Zero-Shot Classification zero-shot-classification Classifying text into classes that are unseen during training. 
✅ (docs) (models) Feature Extraction feature-extraction Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. ✅ (docs) (models) Vision Task ID Description Supported? Depth Estimation depth-estimation Predicting the depth of objects present in an image. ✅ (docs) (models) Image Classification image-classification Assigning a label or class to an entire image. ✅ (docs) (models) Image Segmentation image-segmentation Divides an image into segments where each pixel is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation. ✅ (docs) (models) Image-to-Image image-to-image Transforming a source image to match the characteristics of a target image or a target image domain. ✅ (docs) (models) Mask Generation mask-generation Generate masks for the objects in an image. ❌ Object Detection object-detection Identify objects of certain defined classes within an image. ✅ (docs) (models) Video Classification n/a Assigning a label or class to an entire video. ❌ Unconditional Image Generation n/a Generating images with no condition in any context (like a prompt text or another image). ❌ Image Feature Extraction image-feature-extraction Transforming raw data into numerical features that can be processed while preserving the information in the original image. ✅ (docs) (models) Audio Task ID Description Supported? Audio Classification audio-classification Assigning a label or class to a given audio. ✅ (docs) (models) Audio-to-Audio n/a Generating audio from an input audio source. ❌ Automatic Speech Recognition automatic-speech-recognition Transcribing a given audio into text. ✅ (docs) (models) Text-to-Speech text-to-speech or text-to-audio Generating natural-sounding speech given text input. ✅ (docs) (models) Tabular Task ID Description Supported? Tabular Classification n/a Classifying a target category (a group) based on set of attributes. ❌ Tabular Regression n/a Predicting a numerical value given a set of attributes. ❌ Multimodal Task ID Description Supported? Document Question Answering document-question-answering Answering questions on document images. ✅ (docs) (models) Image-to-Text image-to-text Output text from a given image. ✅ (docs) (models) Text-to-Image text-to-image Generates images from input text. ❌ Visual Question Answering visual-question-answering Answering open-ended questions based on an image. ❌ Zero-Shot Audio Classification zero-shot-audio-classification Classifying audios into classes that are unseen during training. ✅ (docs) (models) Zero-Shot Image Classification zero-shot-image-classification Classifying images into classes that are unseen during training. ✅ (docs) (models) Zero-Shot Object Detection zero-shot-object-detection Identify objects of classes that are unseen during training. ✅ (docs) (models) Reinforcement Learning Task ID Description Supported? Reinforcement Learning n/a Learning from actions by interacting with an environment through trial and error and receiving rewards (negative or positive) as feedback. ✅ Models ALBERT (from Google Research and the Toyota Technological Institute at Chicago) released with the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations , by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. Audio Spectrogram Transformer (from MIT) released with the paper AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass. 
BART (from Facebook) released with the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. BEiT (from Microsoft) released with the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, Furu Wei. BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. Blenderbot (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. BlenderbotSmall (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. BLOOM (from BigScience workshop) released by the BigScience Workshop . CamemBERT (from Inria/Facebook/Sorbonne) released with the paper CamemBERT: a Tasty French Language Model by Louis Martin , Benjamin Muller , Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. Chinese-CLIP (from OFA-Sys) released with the paper Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. CLAP (from LAION-AI) released with the paper Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. CLIP (from OpenAI) released with the paper Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIPSeg (from University of Göttingen) released with the paper Image Segmentation Using Text and Image Prompts by Timo Lüddecke and Alexander Ecker. CodeGen (from Salesforce) released with the paper A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. CodeLlama (from MetaAI) released with the paper Code Llama: Open Foundation Models for Code by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. Cohere (from Cohere) released with the paper Command-R: Retrieval Augmented Generation at Production Scale by Cohere. ConvBERT (from YituTech) released with the paper ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. ConvNeXT (from Facebook AI) released with the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 
ConvNeXTV2 (from Facebook AI) released with the paper ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. DeBERTa (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. DeBERTa-v2 (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. Decision Transformer (from Berkeley/Facebook/Google) released with the paper Decision Transformer: Reinforcement Learning via Sequence Modeling by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. DeiT (from Facebook) released with the paper Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. Depth Anything (from University of Hong Kong and TikTok) released with the paper Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Pro (from Apple) released with the paper Depth Pro: Sharp Monocular Metric Depth in Less Than a Second by Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, Vladlen Koltun. DETR (from Facebook) released with the paper End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. DINOv2 (from Meta AI) released with the paper DINOv2: Learning Robust Visual Features without Supervision by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski. DistilBERT (from HuggingFace), released together with the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into DistilGPT2 , RoBERTa into DistilRoBERTa , Multilingual BERT into DistilmBERT and a German version of DistilBERT. DiT (from Microsoft Research) released with the paper DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. Donut (from NAVER), released together with the paper OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. DPT (from Intel Labs) released with the paper Vision Transformers for Dense Prediction by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. EfficientNet (from Google Brain) released with the paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan, Quoc V. Le. ELECTRA (from Google Research/Stanford University) released with the paper ELECTRA: Pre-training text encoders as discriminators rather than generators by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 
ESM (from Meta AI) are transformer protein language models. ESM-1b was released with the paper Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. ESM-1v was released with the paper Language models enable zero-shot prediction of the effects of mutations on protein function by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. ESM-2 and ESMFold were released with the paper Language models of protein sequences at the scale of evolution enable accurate structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. Falcon (from Technology Innovation Institute) by Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme. FastViT (from Apple) released with the paper FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization by Pavan Kumar Anasosalu Vasu, James Gabriel, Jeff Zhu, Oncel Tuzel and Anurag Ranjan. FLAN-T5 (from Google AI) released in the repository google-research/t5x by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei Florence2 (from Microsoft) released with the paper Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks by Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan. Gemma (from Google) released with the paper Gemma: Open Models Based on Gemini Technology and Research by the Gemma Google team. Gemma2 (from Google) released with the paper Gemma2: Open Models Based on Gemini Technology and Research by the Gemma Google team. GLPN (from KAIST) released with the paper Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. GPT Neo (from EleutherAI) released in the repository EleutherAI/gpt-neo by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. GPT NeoX (from EleutherAI) released with the paper GPT-NeoX-20B: An Open-Source Autoregressive Language Model by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners by Alec Radford , Jeffrey Wu , Rewon Child, David Luan, Dario Amodei and Ilya Sutskever . GPT-J (from EleutherAI) released in the repository kingoflolz/mesh-transformer-jax by Ben Wang and Aran Komatsuzaki. GPTBigCode (from BigCode) released with the paper SantaCoder: don’t reach for the stars! 
by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. Granite (from IBM) released with the paper Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler by Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox, Rameswar Panda. GroupViT (from UCSD, NVIDIA) released with the paper GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. HerBERT (from Allegro.pl, AGH University of Science and Technology) released with the paper KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik. Hiera (from Meta) released with the paper Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles by Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer. Hubert (from Facebook) released with the paper HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. JAIS (from Core42) released with the paper Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models by Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren, Preslav Nakov, Timothy Baldwin, Eric Xing. LongT5 (from Google AI) released with the paper LongT5: Efficient Text-To-Text Transformer for Long Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. LLaMA (from The FAIR team of Meta AI) released with the paper LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. 
Llama2 (from The FAIR team of Meta AI) released with the paper Llama2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushka rMishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing EllenTan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. LLaVa (from Microsoft Research & University of Wisconsin-Madison) released with the paper Visual Instruction Tuning by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee. M2M100 (from Facebook) released with the paper Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. MarianMT Machine translation models trained using OPUS data by Jörg Tiedemann. The Marian Framework is being developed by the Microsoft Translator Team. MaskFormer (from Meta and UIUC) released with the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. mBART (from Facebook) released with the paper Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. mBART-50 (from Facebook) released with the paper Multilingual Translation with Extensible Multilingual Pretraining and Finetuning by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. MusicGen (from Meta) released with the paper Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez. Mistral (from Mistral AI) by The Mistral AI team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. MMS (from Facebook) released with the paper Scaling Speech Technology to 1,000+ Languages by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli. 
MobileBERT (from CMU/Google Brain) released with the paper MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. MobileCLIP (from Apple) released with the paper MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel. MobileNetV1 (from Google Inc.) released with the paper MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. MobileNetV2 (from Google Inc.) released with the paper MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. MobileNetV3 (from Google Inc.) released with the paper Searching for MobileNetV3 by Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam. MobileNetV4 (from Google Inc.) released with the paper MobileNetV4 - Universal Models for the Mobile Ecosystem by Danfeng Qin, Chas Leichner, Manolis Delakis, Marco Fornoni, Shixin Luo, Fan Yang, Weijun Wang, Colby Banbury, Chengxi Ye, Berkin Akin, Vaibhav Aggarwal, Tenghui Zhu, Daniele Moro, Andrew Howard. MobileViT (from Apple) released with the paper MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari. MobileViTV2 (from Apple) released with the paper Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari. Moondream1 released in the repository moondream by vikhyat. MPNet (from Microsoft Research) released with the paper MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. MPT (from MosaiML) released with the repository llm-foundry by the MosaicML NLP Team. MT5 (from Google AI) released with the paper mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. NLLB (from Meta) released with the paper No Language Left Behind: Scaling Human-Centered Machine Translation by the NLLB team. Nougat (from Meta AI) released with the paper Nougat: Neural Optical Understanding for Academic Documents by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic. OpenELM (from Apple) released with the paper OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework by Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari. OPT (from Meta AI) released with the paper OPT: Open Pre-trained Transformer Language Models by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. OWL-ViT (from Google AI) released with the paper Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 
OWLv2 (from Google AI) released with the paper Scaling Open-Vocabulary Object Detection by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. Phi (from Microsoft) released with the papers - Textbooks Are All You Need by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, Textbooks Are All You Need II: phi-1.5 technical report by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee. Phi3 (from Microsoft) released with the paper Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone by Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Masahiro Tanaka, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, Xiren Zhou. PVT (from Nanjing University, The University of Hong Kong etc.) released with the paper Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. PyAnnote released in the repository pyannote/pyannote-audio by Hervé Bredin. Qwen2 (from the Qwen team, Alibaba Group) released with the paper Qwen Technical Report by Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou and Tianhang Zhu. ResNet (from Microsoft Research) released with the paper Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. RoBERTa (from Facebook), released together with the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 
RoFormer (from ZhuiyiTechnology), released together with the paper RoFormer: Enhanced Transformer with Rotary Position Embedding by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. RT-DETR (from Baidu), released together with the paper DETRs Beat YOLOs on Real-time Object Detection by Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen. Sapiens (from Meta AI) released with the paper Sapiens: Foundation for Human Vision Models by Rawal Khirodkar, Timur Bagautdinov, Julieta Martinez, Su Zhaoen, Austin James, Peter Selednik, Stuart Anderson, Shunsuke Saito. SegFormer (from NVIDIA) released with the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. Segment Anything (from Meta AI) released with the paper Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. SigLIP (from Google AI) released with the paper Sigmoid Loss for Language Image Pre-Training by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer. SpeechT5 (from Microsoft Research) released with the paper SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. SqueezeBERT (from Berkeley) released with the paper SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. StableLm (from Stability AI) released with the paper StableLM 3B 4E1T (Technical Report) by Jonathan Tow, Marco Bellagente, Dakota Mahan, Carlos Riquelme Ruiz, Duy Phung, Maksym Zhuravinskyi, Nathan Cooper, Nikhil Pinnaparaju, Reshinth Adithyan, and James Baicoianu. Starcoder2 (from BigCode team) released with the paper StarCoder 2 and The Stack v2: The Next Generation by Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. Swin Transformer (from Microsoft) released with the paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. Swin2SR (from University of Würzburg) released with the paper Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. 
T5 (from Google AI) released with the paper Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. T5v1.1 (from Google AI) released in the repository google-research/text-to-text-transfer-transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. Table Transformer (from Microsoft Research) released with the paper PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents by Brandon Smock, Rohith Pesala, Robin Abraham. TrOCR (from Microsoft), released together with the paper TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. UniSpeech (from Microsoft Research) released with the paper UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. UniSpeechSat (from Microsoft Research) released with the paper UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. Vision Transformer (ViT) (from Google AI) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. ViTMAE (from Meta AI) released with the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. ViTMatte (from HUST-VL) released with the paper ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang. ViTMSN (from Meta AI) released with the paper Masked Siamese Networks for Label-Efficient Learning by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. VITS (from Kakao Enterprise) released with the paper Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech by Jaehyeon Kim, Jungil Kong, Juhee Son. Wav2Vec2 (from Facebook AI) released with the paper wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. Wav2Vec2-BERT (from Meta AI) released with the paper Seamless: Multilingual Expressive and Streaming Speech Translation by the Seamless Communication team. WavLM (from Microsoft Research) released with the paper WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. Whisper (from OpenAI) released with the paper Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 
XLM (from Facebook) released together with the paper Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau. XLM-RoBERTa (from Facebook AI), released together with the paper Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. YOLOS (from Huazhong University of Science & Technology) released with the paper You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
Collections.txt
Collections Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Collections Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Collections Use Collections to group repositories from the Hub (Models, Datasets, Spaces and Papers) on a dedicated page. Collections have many use cases: Highlight specific repositories on your personal or organizational profile. Separate key repositories from others for your profile visitors. Showcase and share a complete project with its paper(s), dataset(s), model(s) and Space(s). Bookmark things you find on the Hub in categories. Have a dedicated page of curated things to share with others. Gate a group of models/datasets (Enterprise Hub) This is just a list of possible uses, but remember that collections are just a way of grouping things, so use them in the way that best fits your use case. Creating a new collection There are several ways to create a collection: For personal collections: Use the + New button on your logged-in homepage (1). For organization collections: Use the + New button available on organizations page (2). It’s also possible to create a collection on the fly when adding the first item from a repository page, select + Create new collection from the dropdown menu. You’ll need to enter a title and short description for your collection to be created. Adding items to a collection There are 2 ways to add items to a collection: From any repository page: Use the context menu available on any repository page then select Add to collection to add it to a collection (1). From the collection page: If you know the name of the repository you want to add, use the + add to collection option in the right-hand menu (2). 
It’s possible to add external repositories to your collections, not just your own. Collaborating on collections Organization collections are a great way to build collections together. Any member of the organization can add, edit and remove items from the collection. Use the history feature to keep track of who has edited the collection. Collection options Collection visibility Public collections appear at the top of your profile or organization page and can be viewed by anyone. The first 3 items in each collection are visible directly in the collection preview (1). To see more, the user must click to go to the collection page. Set your collection to private if you don’t want it to be accessible via its URL (it will not be displayed on your profile/organization page). For organizations, private collections are only available to members of the organization. Gating Group Collections (Enterprise Hub) You can use a collection to gate all the models/datasets belonging to it, allowing you to grant (or reject) access to all of them at once. This feature is reserved for Enterprise Hub subscribers: more information about Gating Group Collections can be found in our dedicated doc . Ordering your collections and their items You can use the drag and drop handles in the collections list (on the left side of your collections page) to change the order of your collections (1). The first two collections will be directly visible on your profile/organization pages. You can also sort repositories within a collection by dragging the handles next to each item (2). Deleting items from a collection To delete an item from a collection, click the trash icon in the menu that shows up on the right when you hover over an item (1). To delete the whole collection, click delete on the right-hand menu (2) - you’ll need to confirm this action. Adding notes to collection’s items It’s possible to add a note to any item in a collection to give it more context (for others, or as a reminder to yourself). You can add notes by clicking the pencil icon when you hover over an item with your mouse. Notes are plain text and don’t support markdown, to keep things clean and simple. URLs in notes are converted into clickable links. Adding images to a collection item Similarily, you can attach images to a collection item. This is useful for showcasing the output of a model, the content of a dataset, attaching an infographic for context, etc. To start adding images to your collection, you can click on the image icon in the contextual menu of an item. The menu shows up when you hover over an item with your mouse. Then, add images by dragging and dropping images from your computer. You can also click on the gray zone to select image files from your computer’s file system. You can re-order images by drag-and-dropping them. Clicking on an image will open it in full-screen mode. Your feedback on collections We’re working on improving collections, so if you have any bugs, questions, or new features you’d like to see added, please post a message in the dedicated discussion . < > Update on GitHub ← Notifications Webhooks → Collections Creating a new collection Adding items to a collection Collaborating on collections Collection options Collection visibility Gating Group Collections ( Enterprise Hub) Ordering your collections and their items Deleting items from a collection Adding notes to collection’s items Adding images to a collection item Your feedback on collections
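Everything described above can also be done programmatically. As a rough sketch (assuming the collection APIs available in recent versions of the huggingface_hub Python library; the title, namespace and item ids below are placeholders), creating a collection and adding an item with a note looks like this:

from huggingface_hub import create_collection, add_collection_item

# Create a new public collection under your user or organization (placeholder namespace).
collection = create_collection(
    title="My cool models",
    description="A few repositories I want to highlight",
    namespace="my-username",
)

# Add a model repository to it, with a short plain-text note for context.
# External repositories work too, not just your own.
add_collection_item(
    collection_slug=collection.slug,        # e.g. "my-username/my-cool-models-<id>"
    item_id="openai-community/gpt2",
    item_type="model",
    note="Baseline model for my experiments",
)

The same library also exposes functions to update, reorder and delete collection items, mirroring the UI actions described above.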
Set_up_AWS_Trainium_instance.txt
Set up AWS Trainium instance Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AWS Trainium & Inferentia documentation Set up AWS Trainium instance AWS Trainium & Inferentia 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Optimum Neuron 🤗 Optimum Neuron Installation Quickstart Optimum Containers Training Tutorials Notebooks Fine-tune BERT for Text Classification on AWS Trainium Fine-tune Llama 3 8B on AWS Trainium Fine-tune Llama 3 8B on with LoRA and the SFTTrainer Inference Tutorials Notebooks Create your own chatbot with llama-2-13B on AWS Inferentia Sentence Transformers on AWS Inferentia Generate images with Stable Diffusion models on AWS Inferentia How-To Guides Set up AWS Trainium instance Training and Deployment using Amazon Sagemaker Neuron model cache Fine-tune Transformers with AWS Trainium Distributed Training Export a model to Inferentia Inference pipelines with AWS Neuron NeuronX Text-generation-inference for AWS inferentia2 Benchmarks Mistral Small on AWS Inferentia2 Llama-3.1 8B on AWS Inferentia2 Contribute Add support for a new model architecture Reference Neuron Trainer Neuron Distributed Supported Architectures Neuron Exporter Neuron Models Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Set up AWS Trainium instance In this guide, we will show you: How to create an AWS Trainium instance How to use and run Jupyter Notebooks on your instance Create an AWS Trainium Instance The simplest way to work with AWS Trainium and Hugging Face Transformers is the Hugging Face Neuron Deep Learning AMI (DLAMI). The DLAMI comes with all required libraries pre-packaged for you, including the Neuron Drivers, Transformers, Datasets, and Accelerate. To create an EC2 Trainium instance, you can start from the console or the Marketplace. This guide will start from the EC2 console . Starting from the EC2 console in the us-east-1 region, You first click on Launch an instance and define a name for the instance ( trainium-huggingface-demo ). Next, you search the Amazon Marketplace for Hugging Face AMIs. Entering “Hugging Face” in the search bar for “Application and OS Images” and hitting “enter”. This should now open the “Choose an Amazon Machine Image” view with the search. You can now navigate to “AWS Marketplace AMIs” and find the Hugging Face Neuron Deep Learning AMI and click select. You will be asked to subscribe if you aren’t. The AMI is completely free of charge, and you will only pay for the EC2 compute. Then you need to define a key pair, which will be used to connect to the instance via ssh . You can create one in place if you don’t have a key pair. After that, create or select a security group . Important you want to allow ssh traffic. You are ready to launch our instance. Therefore click on “Launch Instance” on the right side. 
AWS will now provision the instance using the Hugging Face Neuron Deep Learning AMI. Additional configuration can be done by increasing the disk space or creating an instance profile to access other AWS services. Once the instance is running, you can view and copy the public IPv4 address to ssh into the machine.

Replace the empty strings "" in the snippet below with the IP address of your instance and the path to the key pair you created/selected when launching the instance.

PUBLIC_DNS=""  # IP address
KEY_PATH=""    # local path to key pair
ssh -i $KEY_PATH ubuntu@$PUBLIC_DNS

After you are connected, you can run neuron-ls to ensure you have access to the Trainium accelerators. You should see output similar to the following:

ubuntu@ip-172-31-79-164:~$ neuron-ls
instance-type: trn1.2xlarge
instance-id: i-0570615e41700a481
+--------+--------+--------+---------+
| NEURON | NEURON | NEURON |   PCI   |
| DEVICE | CORES  | MEMORY |   BDF   |
+--------+--------+--------+---------+
| 0      | 2      | 32 GB  | 00:1e.0 |
+--------+--------+--------+---------+

Configuring Jupyter Notebook on your AWS Trainium Instance

With the instance up and running, we can ssh into it. Instead of developing inside a terminal, you can also use a Jupyter Notebook environment, for example to prepare the dataset and launch the training (at least when working on a single node). For this, add a port-forwarding flag to the ssh command, which tunnels your localhost traffic to the Trainium instance:

PUBLIC_DNS=""  # IP address, e.g. ec2-3-80-....
KEY_PATH=""    # local path to key, e.g. ssh/trn.pem
ssh -L 8080:localhost:8080 -i $KEY_PATH ubuntu@$PUBLIC_DNS

You are done! You can now start using the Trainium accelerators with Hugging Face Transformers. Check out the Fine-tune Transformers with AWS Trainium guide to get started.
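If you prefer to verify accelerator access from Python rather than from the neuron-ls CLI, a minimal sketch like the following should work, assuming the torch-neuronx / torch-xla environment that ships with the Hugging Face Neuron DLAMI (the tensor shape is arbitrary):

# Minimal check that PyTorch can see a Neuron core, exposed as an XLA device.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()           # first available NeuronCore
x = torch.randn(2, 2).to(device)   # move a small tensor to the accelerator
print(device, x.device)

If this prints an xla device without raising, your training scripts will be able to place models and tensors on the Trainium accelerators.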
Hub_methods.txt
Hub methods Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Evaluate documentation Hub methods Evaluate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.4.0 v0.3.0 v0.2.3 v0.1.2 EN Get started 🤗 Evaluate Tutorials Installation A quick tour How-to guides Choosing the right metric Adding new evaluations Using the evaluator Using the evaluator with custom pipelines Creating an EvaluationSuite Using 🤗 Evaluate with other ML frameworks Transformers Keras and Tensorflow scikit-learn Conceptual guides Types of evaluations Considerations for model evaluation Reference Main classes Loading methods Saving methods Hub methods Evaluator classes Visualization methods Logging methods Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Hub methods Methods for using the Hugging Face Hub: Push to hub evaluate.push_to_hub < source > ( model_id : str task_type : str dataset_type : str dataset_name : str metric_type : str metric_name : str metric_value : float task_name : str = None dataset_config : str = None dataset_split : str = None dataset_revision : str = None dataset_args : typing.Dict[str, int] = None metric_config : str = None metric_args : typing.Dict[str, int] = None overwrite : bool = False ) Parameters model_id ( str ) — Model id from https://hf.co/models . task_type ( str ) — Task id, refer to https://github.com/huggingface/evaluate/blob/main/src/evaluate/config.py#L154 for allowed values. dataset_type ( str ) — Dataset id from https://hf.co/datasets . dataset_name ( str ) — Pretty name for the dataset. metric_type ( str ) — Metric id from https://hf.co/metrics . metric_name ( str ) — Pretty name for the metric. metric_value ( float ) — Computed metric value. task_name ( str , optional) — Pretty name for the task. dataset_config ( str , optional) — Dataset configuration used in datasets.load_dataset(). See huggingface/datasets docs for more info: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name dataset_split ( str , optional) — Name of split used for metric computation. dataset_revision ( str , optional) — Git hash for the specific version of the dataset. dataset_args ( dict[str, int] , optional) — Additional arguments passed to datasets.load_dataset(). metric_config ( str , optional) — Configuration for the metric (e.g. the GLUE metric has a configuration for each subset) metric_args ( dict[str, int] , optional) — Arguments passed during Metric.compute(). overwrite ( bool , optional, defaults to False ) — If set to True an existing metric field can be overwritten, otherwise attempting to overwrite any existing fields will cause an error. Pushes the result of a metric to the metadata of a model repository in the Hub. Example: Copied >>> push_to_hub( ... model_id= "huggingface/gpt2-wikitext2" , ... metric_value= 0.5 ... 
metric_type= "bleu" , ... metric_name= "BLEU" , ... dataset_name= "WikiText" , ... dataset_type= "wikitext" , ... dataset_split= "test" , ... task_type= "text-generation" , ... task_name= "Text Generation" ... )
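To make the arguments above concrete, here is a minimal sketch of computing a metric with the evaluate library and then pushing the result to a model repository's metadata. The model id and dataset are illustrative placeholders, and it assumes you are logged in with a token that can write to the target repository.

import evaluate

# Compute a metric value locally (accuracy is used here purely for illustration).
accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])

# Push the computed value to the metadata of a model repo you can write to (placeholder id).
evaluate.push_to_hub(
    model_id="my-user/my-finetuned-model",
    metric_type="accuracy",
    metric_name="Accuracy",
    metric_value=result["accuracy"],
    dataset_type="imdb",
    dataset_name="IMDb",
    dataset_split="test",
    task_type="text-classification",
    task_name="Text Classification",
)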
FAQs.txt
FAQs Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Inference Endpoints (dedicated) documentation FAQs Inference Endpoints (dedicated) 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Overview 🤗 Inference Endpoints Security & Compliance Supported Tasks API Reference (Swagger) Autoscaling Pricing Help & Support FAQ Guides Access the solution (UI) Create your first Endpoint Send Requests to Endpoints Update your Endpoint Advanced Setup (Instance Types, Auto Scaling, Versioning) Create a Private Endpoint with AWS PrivateLink Add custom Dependencies Create custom Inference Handler Use a custom Container Image Access and read Logs Access and view Metrics Change Organization or Account Pause and Resume your Endpoint Deploying a llama.cpp Container Others Inference Endpoints Version Serialization & Deserialization for Requests Inference Endpoints Container Types Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started FAQs Q: In which regions are Inference Endpoints available? A: Inference Endpoints are currently available on AWS in us-east-1 (N. Virginia) & eu-west-1 (Ireland), on Azure in eastus (Virginia), and on GCP in us-east4 (Virginia). If you need to deploy in a different region, please let us know. Q: Can I access the instance my Endpoint is running on? A: No, you cannot access the instance hosting your Endpoint. But if you are missing information or need more insights on the machine where the Endpoint is running, please contact us. Q: Can I see my Private Endpoint running on my VPC account? A: No, when creating a Private Endpoint (a Hugging Face Inference Endpoint linked to your VPC via AWS/Azure PrivateLink), you can only see the ENI in your VPC where the Endpoint is available. Q: Can I run inference in batches? A: It depends on the Task. The supported Tasks are using the transformers, sentence-transformers, or diffusers pipelines under the hood. If your Task pipeline supports batching, e.g. Zero-Shot Classification then batch inference is supported. In any case, you can always create your own inference handler and implement batching. Q: How can I scale my deployment? A: The Endpoints are scaled automatically for you, the only information you need to provide is a min replica target and a max replica target. Then the system will scale your Endpoint based on the load. Scaling to zero is supported with a variety of timing options. Q: Will my endpoint still be running if no more requests are processed? A: Yes, your Endpoint will always stay available/up with the number of min replicas defined in the Advanced configuration. Q: I would like to deploy a model which is not in the supported tasks, is this possible? 
A: Yes, you can deploy any repository from the Hugging Face Hub, and if your task/model/framework is not supported out of the box, you can create your own inference handler and then deploy your model to an Endpoint.

Q: How much does it cost to run my Endpoint?
A: Dedicated Endpoints are billed based on the compute hours of your Running Endpoints and the associated instance types. We may add usage costs for load balancers and Private Links in the future.

Q: Is the data transiting to the Endpoint encrypted?
A: Yes, data is encrypted during transit with TLS/SSL.

Q: How can I reduce the latency of my Endpoint?
A: There are several ways to reduce the latency of your Endpoint. One is to deploy your Endpoint in a region close to your application to reduce the network overhead. Another is to optimize your model using Hugging Face Optimum before creating your Endpoint. If you need help or have more questions about reducing latency, please contact us.

Q: How do I monitor my deployed Endpoint?
A: You can currently monitor your Endpoint through the 🤗 Inference Endpoints web application, where you have access to the Logs of your Endpoints as well as a metrics dashboard. If you need programmatic access or more information, please contact us.

Q: What if I would like to deploy to a different instance type that is not listed?
A: Please contact us if you feel your model would do better on a different instance type than what is listed.

Q: I accidentally leaked my token. Do I need to delete my endpoint?
A: You can invalidate existing personal tokens and create new ones in your settings here: https://huggingface.co/settings/tokens . Note that fine-grained tokens are supported in Inference Endpoints - please consider using them!

Q: I need to add a custom environment variable (default or secrets) to my endpoint. How can I do this?
A: This is now possible in the UI, or via the API:

{ "model": { "image": { "huggingface": { "env": { "var1": "value" } } } } }

Q: I'm using the text-generation-inference container type for my Endpoint. Is there more information about using TGI?
A: Yes! Please check out our TGI documentation and this video on TGI deploys.

Q: I'm sometimes running into a 503 error on a running endpoint in production. What can I do?
A: To help mitigate service interruptions on an Endpoint that needs to be highly available, please make sure to use at least 2 replicas, i.e. min replicas set to 2.

Q: What's the difference between Dedicated and Serverless Endpoints?
A: The Inference API (Serverless) is a solution to easily explore and evaluate models. For larger volumes of requests, or if you need guaranteed latency/performance, use Inference Endpoints (Dedicated) to easily deploy your models on dedicated, fully-managed infrastructure.

Q: I can see from the logs that my endpoint is running but the status is stuck at "initializing"
A: This usually means that the port mapping is incorrect. Ensure your app is listening on port 80 and that the Docker container is exposing port 80 externally. If you're deploying a custom container you can change these values, but make sure to keep them aligned.

Q: I'm getting a 500 response in the beginning of my endpoint deployment or when scaling is happening
A: Confirm that you have a health route implemented in your app that returns a status code 200 when your application is ready to serve requests. Otherwise your app is considered ready as soon as the container has started, potentially resulting in 500s.
You can configure the health route under the settings of your endpoint.

Q: I see there's an option to select a Download Pattern under Instance Configuration. What does this mean?
A: You have an option to choose the download pattern of the model's files when deploying an Endpoint, to help with limiting the volume of downloaded files. If a selected download pattern is not possible or compatible with the model, the system will not allow a change to the pattern.
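Several of the answers above (scaling, replicas) can also be acted on programmatically. Below is a rough sketch using the huggingface_hub client to raise the minimum replica count of an existing endpoint, as suggested in the 503 answer. The endpoint name and namespace are placeholders, and the exact keyword arguments may vary slightly between huggingface_hub versions, so treat this as an outline rather than a guaranteed API.

from huggingface_hub import get_inference_endpoint, update_inference_endpoint

# Inspect the current state of an existing endpoint (placeholder name/namespace).
endpoint = get_inference_endpoint("my-endpoint", namespace="my-org")
print(endpoint.status)

# Keep at least 2 replicas running to mitigate 503s on a highly available endpoint.
update_inference_endpoint(
    "my-endpoint",
    namespace="my-org",
    min_replica=2,
    max_replica=4,
)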
Model_training_anatomy.txt
Model training anatomy Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Model training anatomy Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Model training anatomy To understand performance optimization techniques that one can apply to improve efficiency of model training speed and memory utilization, it’s helpful to get familiar with how GPU is utilized during training, and how compute intensity varies depending on an operation performed. Let’s start by exploring a motivating example of GPU utilization and the training run of a model. For the demonstration, we’ll need to install a few libraries: Copied pip install transformers datasets accelerate nvidia-ml-py3 The nvidia-ml-py3 library allows us to monitor the memory usage of the models from within Python. You might be familiar with the nvidia-smi command in the terminal - this library allows to access the same information in Python directly. Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier. In total, we get 512 sequences each with length 512 and store them in a Dataset with PyTorch format. Copied >>> import numpy as np >>> from datasets import Dataset >>> seq_len, dataset_size = 512 , 512 >>> dummy_data = { ... "input_ids" : np.random.randint( 100 , 30000 , (dataset_size, seq_len)), ... "labels" : np.random.randint( 0 , 2 , (dataset_size)), ... } >>> ds = Dataset.from_dict(dummy_data) >>> ds.set_format( "pt" ) To print summary statistics for the GPU utilization and the training run with the Trainer we define two helper functions: Copied >>> from pynvml import * >>> def print_gpu_utilization (): ... nvmlInit() ... handle = nvmlDeviceGetHandleByIndex( 0 ) ... info = nvmlDeviceGetMemoryInfo(handle) ... print ( f"GPU memory occupied: {info.used// 1024 ** 2 } MB." ) >>> def print_summary ( result ): ... print ( f"Time: {result.metrics[ 'train_runtime' ]: .2 f} " ) ... print ( f"Samples/second: {result.metrics[ 'train_samples_per_second' ]: .2 f} " ) ... print_gpu_utilization() Let’s verify that we start with a free GPU memory: Copied >>> print_gpu_utilization() GPU memory occupied: 0 MB. That looks good: the GPU memory is not occupied as we would expect before we load any models. If that’s not the case on your machine make sure to stop all processes that are using GPU memory. However, not all free GPU memory can be used by the user. 
When a model is loaded to the GPU the kernels are also loaded, which can take up 1-2GB of memory. To see how much it is we load a tiny tensor into the GPU which triggers the kernels to be loaded as well. Copied >>> import torch >>> torch.ones(( 1 , 1 )).to( "cuda" ) >>> print_gpu_utilization() GPU memory occupied: 1343 MB. We see that the kernels alone take up 1.3GB of GPU memory. Now let’s see how much space the model uses. Load Model First, we load the google-bert/bert-large-uncased model. We load the model weights directly to the GPU so that we can check how much space just the weights use. Copied >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained( "google-bert/bert-large-uncased" ).to( "cuda" ) >>> print_gpu_utilization() GPU memory occupied: 2631 MB. We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific GPU you are using. Note that on newer GPUs a model can sometimes take up more space since the weights are loaded in an optimized fashion that speeds up the usage of the model. Now we can also quickly check if we get the same result as with nvidia-smi CLI: Copied nvidia-smi Copied Tue Jan 11 08:58:05 2022 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:00:04.0 Off | 0 | | N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB | +-----------------------------------------------------------------------------+ We get the same number as before and you can also see that we are using a V100 GPU with 16GB of memory. So now we can start training the model and see how the GPU memory consumption changes. First, we set up a few standard training arguments: Copied default_args = { "output_dir" : "tmp" , "eval_strategy" : "steps" , "num_train_epochs" : 1 , "log_level" : "error" , "report_to" : "none" , } If you plan to run multiple experiments, in order to properly clear the memory between experiments, restart the Python kernel between experiments. Memory utilization at vanilla training Let’s use the Trainer and train the model without using any GPU performance optimization techniques and a batch size of 4: Copied >>> from transformers import TrainingArguments, Trainer, logging >>> logging.set_verbosity_error() >>> training_args = TrainingArguments(per_device_train_batch_size= 4 , **default_args) >>> trainer = Trainer(model=model, args=training_args, train_dataset=ds) >>> result = trainer.train() >>> print_summary(result) Copied Time: 57.82 Samples/second: 8.86 GPU memory occupied: 14949 MB. We see that already a relatively small batch size almost fills up our GPU’s entire memory. 
However, a larger batch size can often result in faster model convergence or better end performance. So ideally we want to tune the batch size to our model’s needs and not to the GPU limitations. What’s interesting is that we use much more memory than the size of the model. To understand a bit better why this is the case let’s have a look at a model’s operations and memory needs. Anatomy of Model’s Operations Transformers architecture includes 3 main groups of operations grouped below by compute-intensity. Tensor Contractions Linear layers and components of Multi-Head Attention all do batched matrix-matrix multiplications . These operations are the most compute-intensive part of training a transformer. Statistical Normalizations Softmax and layer normalization are less compute-intensive than tensor contractions, and involve one or more reduction operations , the result of which is then applied via a map. Element-wise Operators These are the remaining operators: biases, dropout, activations, and residual connections . These are the least compute-intensive operations. This knowledge can be helpful to know when analyzing performance bottlenecks. This summary is derived from Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020 Anatomy of Model’s Memory We’ve seen that training the model uses much more memory than just putting the model on the GPU. This is because there are many components during training that use GPU memory. The components on GPU memory are the following: model weights optimizer states gradients forward activations saved for gradient computation temporary buffers functionality-specific memory A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For inference there are no optimizer states and gradients, so we can subtract those. And thus we end up with 6 bytes per model parameter for mixed precision inference, plus activation memory. Let’s look at the details. Model Weights: 4 bytes * number of parameters for fp32 training 6 bytes * number of parameters for mixed precision training (maintains a model in fp32 and one in fp16 in memory) Optimizer States: 8 bytes * number of parameters for normal AdamW (maintains 2 states) 2 bytes * number of parameters for 8-bit AdamW optimizers like bitsandbytes 4 bytes * number of parameters for optimizers like SGD with momentum (maintains only 1 state) Gradients 4 bytes * number of parameters for either fp32 or mixed precision training (gradients are always kept in fp32) Forward Activations size depends on many factors, the key ones being sequence length, hidden size and batch size. There are the input and output that are being passed and returned by the forward and the backward functions and the forward activations saved for gradient computation. Temporary Memory Additionally, there are all kinds of temporary variables which get released once the calculation is done, but in the moment these could require additional memory and could push to OOM. Therefore, when coding it’s crucial to think strategically about such temporary variables and sometimes to explicitly free those as soon as they are no longer needed. Functionality-specific memory Then, your software could have special memory needs. For example, when generating text using beam search, the software needs to maintain multiple copies of inputs and outputs. 
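The byte counts above can be turned into a quick back-of-the-envelope estimator. A minimal sketch follows; the function name and the roughly 340M parameter count used for BERT-large are illustrative, and activations are deliberately excluded because they depend on batch size, sequence length, and hidden size.

def estimate_training_memory_gb(num_params: int, mixed_precision: bool = True) -> float:
    """Rough AdamW training memory per the breakdown above, excluding activations."""
    weights = 6 if mixed_precision else 4   # fp32 master weights (+ fp16 copy in mixed precision)
    optimizer_states = 8                    # AdamW keeps two fp32 states per parameter
    gradients = 4                           # gradients are kept in fp32
    bytes_per_param = weights + optimizer_states + gradients  # 18 bytes in mixed precision
    return num_params * bytes_per_param / 1024**3

# BERT-large has roughly 340M parameters
print(f"{estimate_training_memory_gb(340_000_000):.1f} GB, plus activations and temporary buffers")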
forward vs backward Execution Speed For convolutions and linear layers there are 2x the FLOPs in the backward pass compared to the forward pass, which generally translates into roughly 2x slower execution (sometimes more, because tensor sizes in the backward pass tend to be more awkward). Activations are usually bandwidth-limited, and an activation typically has to read more data in the backward pass than in the forward pass (e.g. the activation forward reads once and writes once, while the activation backward reads twice, gradOutput and the output of the forward, and writes once, gradInput). As you can see, there are potentially a few places where we could save GPU memory or speed up operations. Now that you understand what affects GPU utilization and computation speed, refer to the Methods and tools for efficient training on a single GPU documentation page to learn about performance optimization techniques.
PaddlePaddle_API.txt
PaddlePaddle API Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Safetensors documentation PaddlePaddle API Safetensors 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.5.0-rc.0 v0.3.2 v0.2.9 EN Getting started 🤗 Safetensors Speed Comparison Tensor Sharing in Pytorch Metadata Parsing Convert weights to safetensors API Torch API Tensorflow API PaddlePaddle API Flax API Numpy API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.5.0-rc.0 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started PaddlePaddle API safetensors.paddle.load_file < source > ( filename : typing.Union[str, os.PathLike] device = 'cpu' ) → Dict[str, paddle.Tensor] Parameters filename ( str , or os.PathLike )) — The name of the file which contains the tensors device ( Union[Dict[str, any], str] , optional , defaults to cpu ) — The device where the tensors need to be located after load. available options are all regular paddle device locations Returns Dict[str, paddle.Tensor] dictionary that contains name as key, value as paddle.Tensor Loads a safetensors file into paddle format. Example: Copied from safetensors.paddle import load_file file_path = "./my_folder/bert.safetensors" loaded = load_file(file_path) safetensors.paddle.load < source > ( data : bytes device : str = 'cpu' ) → Dict[str, paddle.Tensor] Parameters data ( bytes ) — The content of a safetensors file Returns Dict[str, paddle.Tensor] dictionary that contains name as key, value as paddle.Tensor on cpu Loads a safetensors file into paddle format from pure bytes. Example: Copied from safetensors.paddle import load file_path = "./my_folder/bert.safetensors" with open (file_path, "rb" ) as f: data = f.read() loaded = load(data) safetensors.paddle.save_file < source > ( tensors : typing.Dict[str, paddle.Tensor] filename : typing.Union[str, os.PathLike] metadata : typing.Optional[typing.Dict[str, str]] = None ) → None Parameters tensors ( Dict[str, paddle.Tensor] ) — The incoming tensors. Tensors need to be contiguous and dense. filename ( str , or os.PathLike )) — The filename we’re saving into. metadata ( Dict[str, str] , optional , defaults to None ) — Optional text only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns None Saves a dictionary of tensors into raw bytes in safetensors format. 
Example: Copied from safetensors.paddle import save_file import paddle tensors = { "embedding" : paddle.zeros(( 512 , 1024 )), "attention" : paddle.zeros(( 256 , 256 ))} save_file(tensors, "model.safetensors" ) safetensors.paddle.save < source > ( tensors : typing.Dict[str, paddle.Tensor] metadata : typing.Optional[typing.Dict[str, str]] = None ) → bytes Parameters tensors ( Dict[str, paddle.Tensor] ) — The incoming tensors. Tensors need to be contiguous and dense. metadata ( Dict[str, str] , optional , defaults to None ) — Optional text only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns bytes The raw bytes representing the format Saves a dictionary of tensors into raw bytes in safetensors format. Example: Copied from safetensors.paddle import save import paddle tensors = { "embedding" : paddle.zeros(( 512 , 1024 )), "attention" : paddle.zeros(( 256 , 256 ))} byte_data = save(tensors) < > Update on GitHub ← Tensorflow API Flax API → Paddle Paddle API
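As a quick sanity check of the functions documented above, here is a small round-trip sketch; the file name and tensor names are arbitrary.

import paddle
from safetensors.paddle import save_file, load_file

# Save two tensors with a small metadata header, then load them back on CPU.
tensors = {"weight": paddle.ones((4, 4)), "bias": paddle.zeros((4,))}
save_file(tensors, "tiny.safetensors", metadata={"framework": "paddle"})

reloaded = load_file("tiny.safetensors")   # dict of paddle.Tensor
assert sorted(reloaded.keys()) == ["bias", "weight"]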
Helper_methods.txt
Helper methods Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Helper methods PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Helper methods A collection of helper functions for PEFT. Checking if a model is a PEFT model peft.helpers.check_if_peft_model < source > ( model_name_or_path : str ) → bool Parameters model_name_or_path ( str ) — Model id to check, can be local or on the Hugging Face Hub. Returns bool True if the model is a PEFT model, False otherwise. Check if the model is a PEFT model. Temporarily Rescaling Adapter Scale in LoraLayer Modules peft.helpers.rescale_adapter_scale < source > ( model multiplier ) Parameters model — The model containing LoraLayer modules whose scaling is to be adjusted. multiplier (float or int) — The multiplier that rescales the scaling attribute. Must be of type float or int. Raises ValueError ValueError — If the model does not contain any LoraLayer instances, indicating that the model does not support scaling. Context manager to temporarily rescale the scaling of the LoRA adapter in a model. The original scaling values are restored when the context manager exits. This context manager works with the transformers and diffusers models that have directly loaded LoRA adapters. For LoRA, applying this context manager with multiplier in [0, 1] is strictly equivalent to applying wise-ft (see #1940 for details). It can improve the performances of the model if there is a distribution shiftbetween the training data used for fine-tuning, and the test data used during inference. Warning: It has been reported that when using Apple’s MPS backend for PyTorch, it is necessary to add a short sleep time after exiting the context before the scales are fully restored. Example: Copied >>> model = ModelWithLoraLayer() >>> multiplier = 0.5 >>> with rescale_adapter_scale(model, multiplier): ... 
outputs = model(**inputs) # Perform operations with the scaled model >>> outputs = model(**inputs) # The original scaling values are restored here
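check_if_peft_model, documented above, has no example on this page, so here is a short sketch. The repository ids are simply ones used elsewhere in these docs, and the printed values are the expected results rather than guaranteed output.

from peft.helpers import check_if_peft_model

# A PEFT adapter repository on the Hub should return True,
# while a plain base model should return False.
print(check_if_peft_model("smangrul/tinyllama_lora_norobots"))
print(check_if_peft_model("google-bert/bert-large-uncased"))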
Interface__CommitOutput.txt
Interface: CommitOutput Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: CommitOutput Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: CommitOutput Properties commit • commit : Object Type declaration Name Type oid string url string Defined in hub/src/lib/commit.ts:89 hookOutput • hookOutput : string Defined in hub/src/lib/commit.ts:93 pullRequestUrl • Optional pullRequestUrl : string Defined in hub/src/lib/commit.ts:88 < > Update on GitHub ← CommitInfo Credentials → Interface: Commit Output Properties commit Type declaration Defined in hook Output Defined in pull Request Url Defined in
Utilities_for_pipelines.txt
Utilities for pipelines Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Utilities for pipelines Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Utilities for pipelines This page lists all the utility functions the library provides for pipelines. Most of those are only useful if you are studying the code of the models in the library. Argument handling class transformers.pipelines. ArgumentHandler < source > ( ) Base interface for handling arguments for each Pipeline . class transformers.pipelines. ZeroShotClassificationArgumentHandler < source > ( ) Handles arguments for zero-shot for text classification by turning each possible label into an NLI premise/hypothesis pair. class transformers.pipelines. QuestionAnsweringArgumentHandler < source > ( ) QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) to be mapped to internal SquadExample . QuestionAnsweringArgumentHandler manages all the possible to create a SquadExample from the command-line supplied arguments. Data format class transformers. PipelineDataFormat < source > ( output_path : typing.Optional[str] input_path : typing.Optional[str] column : typing.Optional[str] overwrite : bool = False ) Parameters output_path ( str ) — Where to save the outgoing data. input_path ( str ) — Where to look for the input data. column ( str ) — The column to read. overwrite ( bool , optional , defaults to False ) — Whether or not to overwrite the output_path . Base class for all the pipeline supported data format both for reading and writing. Supported data formats currently includes: JSON CSV stdin/stdout (pipe) PipelineDataFormat also includes some utilities to work with multi-columns like mapping from datasets columns to pipelines keyword arguments through the dataset_kwarg_1=dataset_column_1 format. from_str < source > ( format : str output_path : typing.Optional[str] input_path : typing.Optional[str] column : typing.Optional[str] overwrite = False ) → PipelineDataFormat Parameters format ( str ) — The format of the desired pipeline. Acceptable values are "json" , "csv" or "pipe" . output_path ( str , optional ) — Where to save the outgoing data. input_path ( str , optional ) — Where to look for the input data. column ( str , optional ) — The column to read. overwrite ( bool , optional , defaults to False ) — Whether or not to overwrite the output_path . Returns PipelineDataFormat The proper data format. 
Creates an instance of the right subclass of PipelineDataFormat depending on format . save < source > ( data : typing.Union[dict, typing.List[dict]] ) Parameters data ( dict or list of dict ) — The data to store. Save the provided data object with the representation for the current PipelineDataFormat . save_binary < source > ( data : typing.Union[dict, typing.List[dict]] ) → str Parameters data ( dict or list of dict ) — The data to store. Returns str Path where the data has been saved. Save the provided data object as a pickle-formatted binary data on the disk. class transformers. CsvPipelineDataFormat < source > ( output_path : typing.Optional[str] input_path : typing.Optional[str] column : typing.Optional[str] overwrite = False ) Parameters output_path ( str ) — Where to save the outgoing data. input_path ( str ) — Where to look for the input data. column ( str ) — The column to read. overwrite ( bool , optional , defaults to False ) — Whether or not to overwrite the output_path . Support for pipelines using CSV data format. save < source > ( data : typing.List[dict] ) Parameters data ( List[dict] ) — The data to store. Save the provided data object with the representation for the current PipelineDataFormat . class transformers. JsonPipelineDataFormat < source > ( output_path : typing.Optional[str] input_path : typing.Optional[str] column : typing.Optional[str] overwrite = False ) Parameters output_path ( str ) — Where to save the outgoing data. input_path ( str ) — Where to look for the input data. column ( str ) — The column to read. overwrite ( bool , optional , defaults to False ) — Whether or not to overwrite the output_path . Support for pipelines using JSON file format. save < source > ( data : dict ) Parameters data ( dict ) — The data to store. Save the provided data object in a json file. class transformers. PipedPipelineDataFormat < source > ( output_path : typing.Optional[str] input_path : typing.Optional[str] column : typing.Optional[str] overwrite : bool = False ) Parameters output_path ( str ) — Where to save the outgoing data. input_path ( str ) — Where to look for the input data. column ( str ) — The column to read. overwrite ( bool , optional , defaults to False ) — Whether or not to overwrite the output_path . Read data from piped input to the python process. For multi columns data, columns should separated by If columns are provided, then the output will be a dictionary with {column_x: value_x} save < source > ( data : dict ) Parameters data ( dict ) — The data to store. Print the data. Utilities class transformers.pipelines. PipelineException < source > ( task : str model : str reason : str ) Parameters task ( str ) — The task of the pipeline. model ( str ) — The model used by the pipeline. reason ( str ) — The error message to display. Raised by a Pipeline when handling call . < > Update on GitHub ← Custom Layers and Utilities Utilities for Tokenizers → Utilities for pipelines Argument handling Data format Utilities
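A small sketch of the from_str factory described above; the file paths are hypothetical, and "json" and "pipe" are the other accepted format values.

from transformers import PipelineDataFormat

# Build the CSV reader/writer; from_str dispatches to the right subclass.
fmt = PipelineDataFormat.from_str(
    "csv",
    output_path="predictions.csv",  # hypothetical output file
    input_path="inputs.csv",        # hypothetical input file with a "text" column
    column="text",
    overwrite=True,
)
print(type(fmt).__name__)  # CsvPipelineDataFormat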
Speed_up_inference.txt
Speed up inference Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Speed up inference Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Speed up inference There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. There are also memory-efficient attention implementations, xFormers and scaled dot product attention in PyTorch 2.0, that reduce memory usage which also indirectly speeds up inference. Different speed optimizations can be stacked together to get the fastest inference times. 
Optimizing for inference speed or reduced memory usage can lead to improved performance in the other category, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about lowering memory usage in the Reduce memory usage guide. The inference times below are obtained from generating a single 512x512 image from the prompt “a photo of an astronaut riding a horse on mars” with 50 DDIM steps on a NVIDIA A100. setup latency speed-up baseline 5.27s x1 tf32 4.14s x1.27 fp16 3.51s x1.50 combined 3.41s x1.54 TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (tf32) mode for faster, but slightly less accurate computations. By default, PyTorch enables tf32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling tf32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. Copied import torch torch.backends.cuda.matmul.allow_tf32 = True Learn more about tf32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, set torch_dtype=torch.float16 to load and run the model weights directly with half-precision weights. Copied import torch from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, use_safetensors= True , ) pipe = pipe.to( "cuda" ) Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. Distilled model You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet’s residual and attention blocks are shed to reduce the model size by 51% and improve latency on CPU/GPU by 43%. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model. Read the Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. The inference times below are obtained from generating 4 images from the prompt “a photo of an astronaut riding a horse on mars” with 25 PNDM steps on a NVIDIA A100. Each generation is repeated 3 times with the distilled Stable Diffusion v1.4 model by Nota AI . setup latency speed-up baseline 6.37s x1 distilled 4.18s x1.52 distilled + tiny autoencoder 3.83s x1.66 Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model. Copied from diffusers import StableDiffusionPipeline import torch distilled = StableDiffusionPipeline.from_pretrained( "nota-ai/bk-sdm-small" , torch_dtype=torch.float16, use_safetensors= True , ).to( "cuda" ) prompt = "a golden vase with different flowers" generator = torch.manual_seed( 2023 ) image = distilled( "a golden vase with different flowers" , num_inference_steps= 25 , generator=generator).images[ 0 ] image original Stable Diffusion distilled Stable Diffusion Tiny AutoEncoder To speed inference up even more, replace the autoencoder with a distilled version of it. 
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
distilled.vae = AutoencoderTiny.from_pretrained(
    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

prompt = "a golden vase with different flowers"
generator = torch.manual_seed(2023)
image = distilled(prompt, num_inference_steps=25, generator=generator).images[0]
image

distilled Stable Diffusion + Tiny AutoEncoder More tiny autoencoder models for other Stable Diffusion models, like Stable Diffusion 3, are available from madebyollin.
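Putting the pieces of this guide together, here is a sketch that stacks tf32 matmuls, fp16 weights, the distilled UNet, and the tiny autoencoder, using the same checkpoints as above; exact speed-ups will vary by GPU.

import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

torch.backends.cuda.matmul.allow_tf32 = True  # tf32 for matrix multiplications

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipe.vae = AutoencoderTiny.from_pretrained(
    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

generator = torch.manual_seed(2023)
image = pipe(
    "a golden vase with different flowers", num_inference_steps=25, generator=generator
).images[0]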
Backbone.txt
Backbone Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Backbone Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Backbone A backbone is a model used for feature extraction for higher level computer vision tasks such as object detection and image classification. Transformers provides an AutoBackbone class for initializing a Transformers backbone from pretrained model weights, and two utility classes: BackboneMixin enables initializing a backbone from Transformers or timm and includes functions for returning the output features and indices. BackboneConfigMixin sets the output features and indices of the backbone configuration. timm models are loaded with the TimmBackbone and TimmBackboneConfig classes. Backbones are supported for the following models: BEiT BiT ConvNext ConvNextV2 DiNAT DINOV2 FocalNet MaskFormer NAT ResNet Swin Transformer Swin Transformer v2 ViTDet AutoBackbone class transformers. AutoBackbone < source > ( *args **kwargs ) BackboneMixin class transformers.utils. BackboneMixin < source > ( ) to_dict < source > ( ) Serializes this instance to a Python dictionary. Override the default to_dict() from PretrainedConfig to include the out_features and out_indices attributes. BackboneConfigMixin class transformers.utils. BackboneConfigMixin < source > ( ) A Mixin to support handling the out_features and out_indices attributes for the backbone configurations. to_dict < source > ( ) Serializes this instance to a Python dictionary. Override the default to_dict() from PretrainedConfig to include the out_features and out_indices attributes. TimmBackbone class transformers. TimmBackbone < source > ( config **kwargs ) Wrapper class for timm models to be used as backbones. This enables using the timm models interchangeably with the other models in the library keeping the same API. TimmBackboneConfig class transformers. TimmBackboneConfig < source > ( backbone = None num_channels = 3 features_only = True use_pretrained_backbone = True out_indices = None freeze_batch_norm_2d = False **kwargs ) Parameters backbone ( str , optional ) — The timm checkpoint to load. num_channels ( int , optional , defaults to 3) — The number of input channels. features_only ( bool , optional , defaults to True ) — Whether to output only the features or also the logits. use_pretrained_backbone ( bool , optional , defaults to True ) — Whether to use a pretrained backbone. 
out_indices ( List[int] , optional ) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). Will default to the last stage if unset. freeze_batch_norm_2d ( bool , optional , defaults to False ) — Converts all BatchNorm2d and SyncBatchNorm layers of provided module into FrozenBatchNorm2d . This is the configuration class to store the configuration for a timm backbone TimmBackbone . It is used to instantiate a timm backbone model according to the specified arguments, defining the model. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: Copied >>> from transformers import TimmBackboneConfig, TimmBackbone >>> # Initializing a timm backbone >>> configuration = TimmBackboneConfig( "resnet50" ) >>> # Initializing a model from the configuration >>> model = TimmBackbone(configuration) >>> # Accessing the model configuration >>> configuration = model.config < > Update on GitHub ← Auto Classes Callbacks → Backbone Auto Backbone Backbone Mixin Backbone Config Mixin Timm Backbone Timm Backbone Config
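For comparison with the timm wrapper above, here is a minimal sketch of the AutoBackbone class introduced at the top of this page; the ResNet checkpoint and the chosen stage indices are only illustrative.

import torch
from transformers import AutoBackbone

# Request feature maps from two intermediate stages of a pretrained ResNet.
backbone = AutoBackbone.from_pretrained("microsoft/resnet-50", out_indices=[2, 4])

pixel_values = torch.rand(1, 3, 224, 224)  # dummy image batch
outputs = backbone(pixel_values)
print([tuple(fm.shape) for fm in outputs.feature_maps])  # one feature map per requested stage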
Preparing_the_Model.txt
Preparing the Model Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Preparing the Model text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Preparing the Model Text Generation Inference improves the model in several aspects. Quantization TGI supports bits-and-bytes , GPT-Q , AWQ , Marlin , EETQ , EXL2 , and fp8 quantization. To speed up inference with quantization, simply set quantize flag to bitsandbytes , gptq , awq , marlin , exl2 , eetq or fp8 depending on the quantization technique you wish to use. When using GPT-Q quantization, you need to point to one of the models here . Similarly, when using AWQ quantization, you need to point to one of these models . To get more information about quantization, please refer to quantization guide RoPE Scaling RoPE scaling can be used to increase the sequence length of the model during the inference time without necessarily fine-tuning it. To enable RoPE scaling, simply pass --rope-scaling , --max-input-length and --rope-factors flags when running through CLI. --rope-scaling can take the values linear or dynamic . If your model is not fine-tuned to a longer sequence length, use dynamic . --rope-factor is the ratio between the intended max sequence length and the model’s original max sequence length. Make sure to pass --max-input-length to provide maximum input length for extension. We recommend using dynamic RoPE scaling. Safetensors Safetensors is a fast and safe persistence format for deep learning models, and is required for tensor parallelism. TGI supports safetensors model loading under the hood. By default, given a repository with safetensors and pytorch weights, TGI will always load safetensors . If there’s no pytorch weights, TGI will convert the weights to safetensors format. 
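To illustrate what the safetensors conversion described above produces, here is a sketch of going from a PyTorch state dict to a safetensors file. This is not TGI's internal conversion code, just the general shape of the operation, with a toy state dict standing in for real pytorch_model.bin weights.

import torch
from safetensors.torch import save_file

state_dict = {
    "linear.weight": torch.randn(16, 16),
    "linear.bias": torch.zeros(16),
}
save_file(state_dict, "model.safetensors")  # tensors must be contiguous and dense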
Model_merging.txt
Model merging Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Model merging PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Model merging Training a model for each task can be costly, take up storage space, and the models aren’t able to learn new information to improve their performance. Multitask learning can overcome some of these limitations by training a model to learn several tasks, but it is expensive to train and designing a dataset for it is challenging. Model merging offers a solution to these challenges by combining multiple pretrained models into one model, giving it the combined abilities of each individual model without any additional training. PEFT provides several methods for merging models like a linear or SVD combination. This guide focuses on two methods that are more efficient for merging LoRA adapters by eliminating redundant parameters: TIES - TrIm, Elect, and Merge (TIES) is a three-step method for merging models. First, redundant parameters are trimmed, then conflicting signs are resolved into an aggregated vector, and finally the parameters whose signs are the same as the aggregate sign are averaged. This method takes into account that some values (redundant and sign disagreement) can degrade performance in the merged model. DARE - Drop And REscale is a method that can be used to prepare for other model merging methods like TIES. It works by randomly dropping parameters according to a drop rate and rescaling the remaining parameters. This helps to reduce the number of redundant and potentially interfering parameters among multiple models. Models are merged with the add_weighted_adapter() method, and the specific model merging method is specified in the combination_type parameter. Merge method With TIES and DARE, merging is enabled by setting combination_type and density to a value of the weights to keep from the individual models. 
For example, let’s merge three finetuned TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T models: tinyllama_lora_nobots , tinyllama_lora_sql , and tinyllama_lora_adcopy . When you’re attempting to merge fully trained models with TIES, you should be aware of any special tokens each model may have added to the embedding layer which are not a part of the original checkpoint’s vocabulary. This may cause an issue because each model may have added a special token to the same embedding position. If this is the case, you should use the resize_token_embeddings method to avoid merging the special tokens at the same embedding index. This shouldn’t be an issue if you’re only merging LoRA adapters trained from the same base model. Load a base model and can use the load_adapter() method to load and assign each adapter a name: Copied from peft import PeftConfig, PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer import torch config = PeftConfig.from_pretrained( "smangrul/tinyllama_lora_norobots" ) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit= True , device_map= "auto" ). eval () tokenizer = AutoTokenizer.from_pretrained( "smangrul/tinyllama_lora_norobots" ) model = PeftModel.from_pretrained(model, "smangrul/tinyllama_lora_norobots" , adapter_name= "norobots" ) _ = model.load_adapter( "smangrul/tinyllama_lora_sql" , adapter_name= "sql" ) _ = model.load_adapter( "smangrul/tinyllama_lora_adcopy" , adapter_name= "adcopy" ) Set the adapters, weights, adapter_name , combination_type , and density with the add_weighted_adapter() method. TIES DARE Weight values greater than 1.0 typically produce better results because they preserve the correct scale. A good default starting value for the weights is to set all values to 1.0 . Copied adapters = [ "norobots" , "adcopy" , "sql" ] weights = [ 2.0 , 1.0 , 1.0 ] adapter_name = "merge" density = 0.2 model.add_weighted_adapter(adapters, weights, adapter_name, combination_type= "ties" , density=density) Set the newly merged model as the active model with the set_adapter() method. Copied model.set_adapter( "merge" ) Now you can use the merged model as an instruction-tuned model to write ad copy or SQL queries! instruct ad copy SQL Copied messages = [ { "role" : "user" , "content" : "Write an essay about Generative AI." }, ] text = tokenizer.apply_chat_template(messages, add_generation_prompt= True , tokenize= False ) inputs = tokenizer(text, return_tensors= "pt" ) inputs = {k: v.to( "cuda" ) for k, v in inputs.items()} outputs = model.generate(**inputs, max_new_tokens= 256 , do_sample= True , top_p= 0.95 , temperature= 0.2 , repetition_penalty= 1.2 , eos_token_id=tokenizer.eos_token_id) print (tokenizer.decode(outputs[ 0 ])) Merging (IA)³ Models The (IA)³ models facilitate linear merging of adapters. To merge adapters in an (IA)³ model, utilize the add_weighted_adapter method from the IA3Model class. This method is analogous to the add_weighted_adapter method used in LoraModel , with the key difference being the absence of the combination_type parameter. For example, to merge three (IA)³ adapters into a PEFT model, you would proceed as follows: Copied adapters = [ "adapter1" , "adapter2" , "adapter3" ] weights = [ 0.4 , 0.3 , 0.3 ] adapter_name = "merge" model.add_weighted_adapter(adapters, weights, adapter_name) It is recommended that the weights sum to 1.0 to preserve the scale of the model. 
The merged model can then be set as the active model using the set_adapter method: Copied model.set_adapter( "merge" )
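The DARE variant mentioned earlier is configured in much the same way as TIES. The following is only a rough sketch, assuming the three LoRA adapters loaded above and a PEFT version that exposes the "dare_ties" and "dare_linear" combination types; the weight and density values are illustrative.
Copied
adapters = ["norobots", "adcopy", "sql"]
weights = [2.0, 0.3, 0.7]
adapter_name = "dare_merge"
density = 0.2  # fraction of parameters kept after the random drop

# DARE randomly drops parameters and rescales the rest before merging;
# "dare_linear" would perform a plain weighted average after dropping instead
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="dare_ties", density=density)
model.set_adapter("dare_merge")
Whichever combination type you settle on, the merged adapter can be persisted like any other PEFT adapter with model.save_pretrained() or push_to_hub().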
Building_custom_models.txt
Building custom models
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Building custom models The 🤗 Transformers library is designed to be easily extensible. Every model is fully coded in a given subfolder of the repository with no abstraction, so you can easily copy a modeling file and tweak it to your needs. If you are writing a brand new model, it might be easier to start from scratch. In this tutorial, we will show you how to write a custom model and its configuration so it can be used inside Transformers, and how you can share it with the community (with the code it relies on) so that anyone can use it, even if it’s not present in the 🤗 Transformers library. We’ll see how to build upon transformers and extend the framework with your hooks and custom code. We will illustrate all of this on a ResNet model, by wrapping the ResNet class of the timm library into a PreTrainedModel . Writing a custom configuration Before we dive into the model, let’s first write its configuration. The configuration of a model is an object that will contain all the necessary information to build the model. As we will see in the next section, the model can only take a config to be initialized, so we really need that object to be as complete as possible. Models in the transformers library itself generally follow the convention that they accept a config object in their __init__ method, and then pass the whole config to sub-layers in the model, rather than breaking the config object into multiple arguments that are all passed individually to sub-layers. Writing your model in this style results in simpler code with a clear “source of truth” for any hyperparameters, and also makes it easier to reuse code from other models in transformers . In our example, we will take a couple of arguments of the ResNet class that we might want to tweak. Different configurations will then give us the different types of ResNets that are possible. We then just store those arguments, after checking the validity of a few of them. 
Copied from transformers import PretrainedConfig from typing import List class ResnetConfig ( PretrainedConfig ): model_type = "resnet" def __init__ ( self, block_type= "bottleneck" , layers: List [ int ] = [ 3 , 4 , 6 , 3 ], num_classes: int = 1000 , input_channels: int = 3 , cardinality: int = 1 , base_width: int = 64 , stem_width: int = 64 , stem_type: str = "" , avg_down: bool = False , **kwargs, ): if block_type not in [ "basic" , "bottleneck" ]: raise ValueError( f"`block_type` must be 'basic' or bottleneck', got {block_type} ." ) if stem_type not in [ "" , "deep" , "deep-tiered" ]: raise ValueError( f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type} ." ) self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super ().__init__(**kwargs) The three important things to remember when writing you own configuration are the following: you have to inherit from PretrainedConfig , the __init__ of your PretrainedConfig must accept any kwargs, those kwargs need to be passed to the superclass __init__ . The inheritance is to make sure you get all the functionality from the 🤗 Transformers library, while the two other constraints come from the fact a PretrainedConfig has more fields than the ones you are setting. When reloading a config with the from_pretrained method, those fields need to be accepted by your config and then sent to the superclass. Defining a model_type for your configuration (here model_type="resnet" ) is not mandatory, unless you want to register your model with the auto classes (see last section). With this done, you can easily create and save your configuration like you would do with any other model config of the library. Here is how we can create a resnet50d config and save it: Copied resnet50d_config = ResnetConfig(block_type= "bottleneck" , stem_width= 32 , stem_type= "deep" , avg_down= True ) resnet50d_config.save_pretrained( "custom-resnet" ) This will save a file named config.json inside the folder custom-resnet . You can then reload your config with the from_pretrained method: Copied resnet50d_config = ResnetConfig.from_pretrained( "custom-resnet" ) You can also use any other method of the PretrainedConfig class, like push_to_hub() to directly upload your config to the Hub. Writing a custom model Now that we have our ResNet configuration, we can go on writing the model. We will actually write two: one that extracts the hidden features from a batch of images (like BertModel ) and one that is suitable for image classification (like BertForSequenceClassification ). As we mentioned before, we’ll only write a loose wrapper of the model to keep it simple for this example. The only thing we need to do before writing this class is a map between the block types and actual block classes. 
Then the model is defined from the configuration by passing everything to the ResNet class: Copied from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = { "basic" : BasicBlock, "bottleneck" : Bottleneck} class ResnetModel ( PreTrainedModel ): config_class = ResnetConfig def __init__ ( self, config ): super ().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward ( self, tensor ): return self.model.forward_features(tensor) For the model that will classify images, we just change the forward method: Copied import torch class ResnetModelForImageClassification ( PreTrainedModel ): config_class = ResnetConfig def __init__ ( self, config ): super ().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward ( self, tensor, labels= None ): logits = self.model(tensor) if labels is not None : loss = torch.nn.functional.cross_entropy(logits, labels) return { "loss" : loss, "logits" : logits} return { "logits" : logits} In both cases, notice how we inherit from PreTrainedModel and call the superclass initialization with the config (a bit like when you write a regular torch.nn.Module ). The line that sets the config_class is not mandatory, unless you want to register your model with the auto classes (see last section). If your model is very similar to a model inside the library, you can re-use the same configuration as this model. You can have your model return anything you want, but returning a dictionary like we did for ResnetModelForImageClassification , with the loss included when labels are passed, will make your model directly usable inside the Trainer class. Using another output format is fine as long as you are planning on using your own training loop or another library for training. Now that we have our model class, let’s create one: Copied resnet50d = ResnetModelForImageClassification(resnet50d_config) Again, you can use any of the methods of PreTrainedModel , like save_pretrained() or push_to_hub() . We will use the second in the next section, and see how to push the model weights with the code of our model. But first, let’s load some pretrained weights inside our model. In your own use case, you will probably be training your custom model on your own data. To go fast for this tutorial, we will use the pretrained version of the resnet50d. Since our model is just a wrapper around it, it’s going to be easy to transfer those weights: Copied import timm pretrained_model = timm.create_model( "resnet50d" , pretrained= True ) resnet50d.model.load_state_dict(pretrained_model.state_dict()) Now let’s see how to make sure that when we do save_pretrained() or push_to_hub() , the code of the model is saved. Registering a model with custom code to the auto classes If you are writing a library that extends 🤗 Transformers, you may want to extend the auto classes to include your own model. 
This is different from pushing the code to the Hub in the sense that users will need to import your library to get the custom models (contrarily to automatically downloading the model code from the Hub). As long as your config has a model_type attribute that is different from existing model types, and that your model classes have the right config_class attributes, you can just add them to the auto classes like this: Copied from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register( "resnet" , ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) Note that the first argument used when registering your custom config to AutoConfig needs to match the model_type of your custom config, and the first argument used when registering your custom models to any auto model class needs to match the config_class of those models. Sending the code to the Hub This API is experimental and may have some slight breaking changes in the next releases. First, make sure your model is fully defined in a .py file. It can rely on relative imports to some other files as long as all the files are in the same directory (we don’t support submodules for this feature yet). For our example, we’ll define a modeling_resnet.py file and a configuration_resnet.py file in a folder of the current working directory named resnet_model . The configuration file contains the code for ResnetConfig and the modeling file contains the code of ResnetModel and ResnetModelForImageClassification . Copied . └── resnet_model ├── __init__. py ├── configuration_resnet. py └── modeling_resnet. py The __init__.py can be empty, it’s just there so that Python detects resnet_model can be use as a module. If copying a modeling files from the library, you will need to replace all the relative imports at the top of the file to import from the transformers package. Note that you can re-use (or subclass) an existing configuration/model. To share your model with the community, follow those steps: first import the ResNet model and config from the newly created files: Copied from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification Then you have to tell the library you want to copy the code files of those objects when using the save_pretrained method and properly register them with a given Auto class (especially for models), just run: Copied ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class( "AutoModel" ) ResnetModelForImageClassification.register_for_auto_class( "AutoModelForImageClassification" ) Note that there is no need to specify an auto class for the configuration (there is only one auto class for them, AutoConfig ) but it’s different for models. Your custom model could be suitable for many different tasks, so you have to specify which one of the auto classes is the correct one for your model. Use register_for_auto_class() if you want the code files to be copied. If you instead prefer to use code on the Hub from another repo, you don’t need to call it. 
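Whichever registration route you take, it can be worth a quick local check that the auto classes now resolve to your custom classes. A minimal sketch, assuming the AutoConfig.register and AutoModelForImageClassification.register calls from earlier in this section have been run in the same session:
Copied
from transformers import AutoModelForImageClassification

config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True)
# The auto class dispatches on the config's class, so this should build a ResnetModelForImageClassification
model = AutoModelForImageClassification.from_config(config)
print(type(model).__name__)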
In cases where there’s more than one auto class, you can modify the config.json directly using the following structure: Copied "auto_map" : { "AutoConfig" : "<your-repo-name>--<config-name>" , "AutoModel" : "<your-repo-name>--<config-name>" , "AutoModelFor<Task>" : "<your-repo-name>--<config-name>" , } , Next, let’s create the config and models as we did before: Copied resnet50d_config = ResnetConfig(block_type= "bottleneck" , stem_width= 32 , stem_type= "deep" , avg_down= True ) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model( "resnet50d" , pretrained= True ) resnet50d.model.load_state_dict(pretrained_model.state_dict()) Now to send the model to the Hub, make sure you are logged in. Either run in your terminal: Copied huggingface-cli login or from a notebook: Copied from huggingface_hub import notebook_login notebook_login() You can then push to your own namespace (or an organization you are a member of) like this: Copied resnet50d.push_to_hub( "custom-resnet50d" ) On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration .py files in the folder custom-resnet50d and uploaded the result to the Hub. You can check the result in this model repo . See the sharing tutorial for more information on the push to Hub method. Using a model with custom code You can use any configuration, model or tokenizer with custom code files in its repository with the auto-classes and the from_pretrained method. All files and code uploaded to the Hub are scanned for malware (refer to the Hub security documentation for more information), but you should still review the model code and author to avoid executing malicious code on your machine. Set trust_remote_code=True to use a model with custom code: Copied from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d" , trust_remote_code= True ) It is also strongly encouraged to pass a commit hash as a revision to make sure the author of the models did not update the code with some malicious new lines (unless you fully trust the authors of the models). Copied commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d" , trust_remote_code= True , revision=commit_hash ) Note that when browsing the commit history of the model repo on the Hub, there is a button to easily copy the commit hash of any commit. < > Update on GitHub ← Use model-specific APIs Chat templates → Building custom models Writing a custom configuration Writing a custom model Registering a model with custom code to the auto classes Sending the code to the Hub Using a model with custom code
Textual_inversion.txt
Textual inversion Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Textual inversion Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer . This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. 
If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. Import the necessary libraries: Copied import torch from diffusers import StableDiffusionPipeline from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer : Copied pretrained_model_name_or_path = "stable-diffusion-v1-5/stable-diffusion-v1-5" repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy> , and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a grafitti in a favela wall with a <cat-toy> on it" num_samples_per_row = 2 num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the helper function you created at the beginning: Copied all_images = [] for _ in range (num_rows): images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps= 50 , guidance_scale= 7.5 ).images all_images.extend(images) grid = make_image_grid(all_images, num_rows, num_samples_per_row) grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at it’s structure: Copied from huggingface_hub import hf_hub_download from safetensors.torch import load_file file = hf_hub_download( "dn118/unaestheticXL" , filename= "unaestheticXLv31.safetensors" ) state_dict = load_file(file) state_dict Copied { 'clip_g' : tensor( [[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], ..., [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]] , 'clip_l' : tensor( [[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], ..., [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]] , There are two tensors, "clip_g" and "clip_l" . "clip_g" corresponds to the bigger text encoder in SDXL and refers to pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder . 
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer to load_textual_inversion() : Copied from diffusers import AutoPipelineForText2Image import torch pipe = AutoPipelineForText2Image.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , variant= "fp16" , torch_dtype=torch.float16) pipe.to( "cuda" ) pipe.load_textual_inversion(state_dict[ "clip_g" ], token= "unaestheticXLv31" , text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) pipe.load_textual_inversion(state_dict[ "clip_l" ], token= "unaestheticXLv31" , text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) # the embedding should be used as a negative embedding, so we pass it as a negative prompt generator = torch.Generator().manual_seed( 33 ) image = pipe( "a woman standing in front of a mountain" , negative_prompt= "unaestheticXLv31" , generator=generator).images[ 0 ] image < > Update on GitHub ← Latent Consistency Model Shap-E → Textual inversion Stable Diffusion 1 and 2 Stable Diffusion XL
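If you want to swap concepts without rebuilding the pipeline, recent Diffusers releases also provide an unload helper on pipelines that support textual inversion. A minimal sketch, assuming your installed version exposes unload_textual_inversion(); on older releases, recreating the pipeline is the safe fallback:
Copied
# Assumes a Diffusers version that ships unload_textual_inversion()
pipe.unload_textual_inversion()

# The placeholder token is no longer recognised afterwards, so reload whichever embeddings you need
pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)
pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)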
Gradient_synchronization.txt
Gradient synchronization Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Gradient synchronization Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Gradient synchronization PyTorch’s distributed module operates by communicating back and forth between all of the GPUs in your system. This communication takes time, and ensuring all processes know the states of each other happens at particular triggerpoints when using the ddp module. These triggerpoints are added to the PyTorch model, specifically their forward() and backward() methods. This happens when the model is wrapped with DistributedDataParallel : Copied import torch.nn as nn from torch.nn.parallel import DistributedDataParallel model = nn.Linear( 10 , 10 ) ddp_model = DistributedDataParallel(model) In Accelerate this conversion happens automatically when calling prepare() and passing in your model. 
Copied + from accelerate import Accelerator + accelerator = Accelerator() import torch.nn as nn - from torch.nn.parallel import DistributedDataParallel model = nn.Linear(10,10) + model = accelerator.prepare(model) The slowdown in gradient accumulation You now understand that PyTorch adds hooks to the forward and backward method of your PyTorch model when training in a distributed setup. But how does this risk slowing down your code? In DDP (distributed data parallel), the specific order in which processes are performed and ran are expected at specific points and these must also occur at roughly the same time before moving on. The most direct example is when you update model parameters through optimizer.step() . Without gradient accumulation, all instances of the model need to have updated their gradients computed, collated, and updated before moving on to the next batch of data. When performing gradient accumulation, you accumulate n loss gradients and skip optimizer.step() until n batches have been reached. As all training processes only need to synchronize by the time optimizer.step() is called, without any modification to your training step, this needless inter-process communication can cause a significant slowdown. How can you avoid this overhead? Solving the slowdown problem Since you are skipping model parameter updates when training on these batches, their gradients do not need to be synchronized until the point where optimizer.step() is actually called. PyTorch cannot automagically tell when you need to do this, but they do provide a tool to help through the no_sync context manager that is added to your model after converting it to DDP. Under this context manager, PyTorch will skip synchronizing the gradients when .backward() is called, and the first call to .backward() outside this context manager will trigger the synchronization. 
See an example below: Copied ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer) for index, batch in enumerate (dataloader): inputs, targets = batch # Trigger gradient synchronization on the last batch if index != ( len (dataloader) - 1 ): with ddp_model.no_sync(): # Gradients only accumulate outputs = ddp_model(inputs) loss = loss_func(outputs) accelerator.backward(loss) else : # Gradients finally sync outputs = ddp_model(inputs) loss = loss_func(outputs) accelerator.backward(loss) optimizer.step() In Accelerate to make this an API that can be called no matter the training device (though it may not do anything if you are not in a distributed system!), ddp_model.no_sync gets replaced with no_sync() and operates the same way: Copied ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer) for index, batch in enumerate(dataloader): inputs, targets = batch # Trigger gradient synchronization on the last batch if index != (len(dataloader)-1): - with ddp_model.no_sync(): + with accelerator.no_sync(model): # Gradients only accumulate outputs = ddp_model(inputs) loss = loss_func(outputs, targets) accelerator.backward(loss) else: # Gradients finally sync outputs = ddp_model(inputs) loss = loss_func(outputs) accelerator.backward(loss) optimizer.step() optimizer.zero_grad() As you may expect, the accumulate() function wraps around this conditional check by keeping track of the current batch number, leaving you with the final gradient accumulation API: Copied ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer) for batch in dataloader: with accelerator.accumulate(model): optimizer.zero_grad() inputs, targets = batch outputs = model(inputs) loss = loss_function(outputs, targets) accelerator.backward(loss) optimizer.step() optimizer.zero_grad() As a result, you should either use accelerator.accumulate or accelerator.no_sync when it comes to API choice. Just how much of a slowdown is there, and easy mistakes you can make To set up a realistic example, consider the following setup: Two single-GPU T4 nodes and one node with two GPUs Each GPU is a T4, and are hosted on GCP The script used is a modification of the NLP Example script Batch size per GPU is 16, and gradients are accumulated every 4 steps All scripts are available in this repository . If not careful about gradient synchronization and GPU communication, a large amount of time can be wasted from when these GPUs communicate to each other during unnecessary periods. By how much? Reference: Baseline: uses no synchronization practices discussed here no_sync improperly: no_sync only around the backward call, not the forward no_sync : using the no_sync pattern properly accumulate : using accumulate() properly Below are the average seconds per batch iterating over 29 batches of data for each setup on both a single node and on the dual-node setup: Baseline no_sync improperly no_sync accumulate Multi-Node 2±0.01s 2.13±0.08s 0.91±0.11s 0.91±0.11s Single Node 0.50±0.01s 0.50±0.01s 0.41±0.015s 0.41±0.015s As you can see, if you are not careful about how you set up your gradient synchronization, you can get upwards of more than a 2x slowdown during training! If you are worried about making sure everything is done properly, we highly recommend utilizing the accumulate() function and passing in gradient_accumulation_steps or gradient_accumulation_plugin to the Accelerator object so Accelerate can handle this for you. 
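Concretely, letting Accelerate manage this for you amounts to passing the accumulation step count to the Accelerator itself; a minimal sketch, where the value of 4 mirrors the benchmark setup above and is otherwise arbitrary:
Copied
from accelerate import Accelerator

# Accelerate tracks the batch count and only synchronizes gradients when a real optimizer step happens
accelerator = Accelerator(gradient_accumulation_steps=4)
ddp_model, dataloader, optimizer = accelerator.prepare(model, dataloader, optimizer)

for batch in dataloader:
    with accelerator.accumulate(ddp_model):
        inputs, targets = batch
        outputs = ddp_model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()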
no_sync requires additional GPU memory when using FSDP Be aware that not syncing gradients can have adverse effects while performing FSDP training. As it has been warned in torch , the no_sync context manager for FSDP will require additional memory. Therefore in memory intensive situations while using FSDP, we recommend to set sync_each_batch to True in the GradientAccumulationPlugin to disable no_sync . See the example below where we fine-tune Mixtral (47B parameters) on 8 A100-80GB GPUs. We see that even for a modest gradient_accumulation_steps=2 we quickly go out-of-memory (OOM) if no_sync is enabled. Again, this is due to additional memory overheads due to FSDP’s no_sync . However, if no_sync is disabled via sync_each_batch=True , then the memory consumption for gradient_accumulation_steps=16 reverts to that of gradient_accumulation_steps=1 . Model no_sync (accum=1) no_sync (accum=2) no_sync disabled (accum=16) mixtral 8x7B 69G OOM 69G Disabling no_sync means there will be slowdown due the extra data syncs, as explained by the earlier sections of this guide. < > Update on GitHub ← Executing and deferring jobs FSDP vs DeepSpeed → Gradient synchronization The slowdown in gradient accumulation Solving the slowdown problem Just how much of a slowdown is there, and easy mistakes you can make no_sync requires additional GP U memory when using FSDP
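For reference, the sync_each_batch setting discussed above is configured through the GradientAccumulationPlugin; a minimal sketch, where num_steps=16 matches the table above:
Copied
from accelerate import Accelerator
from accelerate.utils import GradientAccumulationPlugin

# sync_each_batch=True trades some extra communication for the lower, steady memory profile needed by FSDP
plugin = GradientAccumulationPlugin(num_steps=16, sync_each_batch=True)
accelerator = Accelerator(gradient_accumulation_plugin=plugin)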
AutoPipeline.txt
AutoPipeline Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation AutoPipeline Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started AutoPipeline Diffusers provides many pipelines for basic tasks like generating images, videos, audio, and inpainting. On top of these, there are specialized pipelines for adapters and features like upscaling, super-resolution, and more. Different pipeline classes can even use the same checkpoint because they share the same pretrained model! With so many different pipelines, it can be overwhelming to know which pipeline class to use. The AutoPipeline class is designed to simplify the variety of pipelines in Diffusers. 
It is a generic task-first pipeline that lets you focus on a task ( AutoPipelineForText2Image , AutoPipelineForImage2Image , and AutoPipelineForInpainting ) without needing to know the specific pipeline class. The AutoPipeline automatically detects the correct pipeline class to use. For example, let’s use the dreamlike-art/dreamlike-photoreal-2.0 checkpoint. Under the hood, AutoPipeline : Detects a "stable-diffusion" class from the model_index.json file. Depending on the task you’re interested in, it loads the StableDiffusionPipeline , StableDiffusionImg2ImgPipeline , or StableDiffusionInpaintPipeline . Any parameter ( strength , num_inference_steps , etc.) you would pass to these specific pipelines can also be passed to the AutoPipeline . text-to-image image-to-image inpainting Copied from diffusers import AutoPipelineForText2Image import torch pipe_txt2img = AutoPipelineForText2Image.from_pretrained( "dreamlike-art/dreamlike-photoreal-2.0" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) prompt = "cinematic photo of Godzilla eating sushi with a cat in a izakaya, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device= "cpu" ).manual_seed( 37 ) image = pipe_txt2img(prompt, generator=generator).images[ 0 ] image Unsupported checkpoints The AutoPipeline supports Stable Diffusion , Stable Diffusion XL , ControlNet , Kandinsky 2.1 , Kandinsky 2.2 , and DeepFloyd IF checkpoints. If you try to load an unsupported checkpoint, you’ll get an error. Copied from diffusers import AutoPipelineForImage2Image import torch pipeline = AutoPipelineForImage2Image.from_pretrained( "openai/shap-e-img2img" , torch_dtype=torch.float16, use_safetensors= True ) "ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" < > Update on GitHub ← Understanding pipelines, models and schedulers Train a diffusion model → Auto Pipeline Unsupported checkpoints
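The same dreamlike-photoreal checkpoint loaded above can also serve the image-to-image task. As a rough sketch, from_pipe() lets you switch tasks without reloading the weights; the starting image URL below is only a placeholder and the strength value is illustrative:
Copied
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Reuse the components of the text-to-image pipeline created above instead of downloading the checkpoint again
pipe_img2img = AutoPipelineForImage2Image.from_pipe(pipe_txt2img).to("cuda")

init_image = load_image("https://example.com/starting-image.png")  # placeholder URL, use your own image
prompt = "cinematic photo of Godzilla eating sushi with a cat in a izakaya, 35mm photograph, film, professional, 4k, highly detailed"
image = pipe_img2img(prompt, image=init_image, strength=0.75).images[0]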
Argilla_on_Spaces.txt
Argilla on Spaces Argilla is a free and open source tool to build and iterate on data for AI. It can be deployed on the Hub with a few clicks and Hugging Face OAuth enabled. This enables other HF users to join your Argilla server to annotate datasets, perfect for running community annotation initiatives! With Argilla you can: Configure datasets for collecting human feedback with a growing number of questions (Label, NER, Ranking, Rating, free text, etc.) Use model outputs/predictions to evaluate them or to speed up the annotation process. UI users can explore, find, and label the most interesting/critical subsets using Argilla’s search and semantic similarity features. Pull and push datasets from the Hugging Face Hub for versioning and model training. The best place to get started with Argilla on Spaces is this guide.
Loading_big_models_into_memory.txt
Loading big models into memory Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Loading big models into memory Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Loading big models into memory When loading a pre-trained model in PyTorch, the usual workflow looks like this: Copied import torch my_model = ModelClass(...) state_dict = torch.load(checkpoint_file) my_model.load_state_dict(state_dict) In plain English, those steps are: Create the model with randomly initialized weights Load the model weights (in a dictionary usually called a state dict) from the disk Load those weights inside the model While this works very well for regularly sized models, this workflow has some clear limitations when we deal with a huge model: in step 1, we load a full version of the model in RAM, and spend some time randomly initializing the weights (which will be discarded in step 3). In step 2, we load another full version of the model in RAM, with the pre-trained weights. If you’re loading a model with 6 billion parameters, this means you will need 24GB of RAM for each copy of the model, so 48GB in total (half of it to load the model in FP16). This API is quite new and still in its experimental stage. 
While we strive to provide a stable API, it’s possible some small parts of the public API will change in the future. How the Process Works: A Quick Overview How the Process Works: Working with Code Instantiating an empty model The first tool Accelerate introduces to help with big models is a context manager init_empty_weights() that helps you initialize a model without using any RAM so that step 1 can be done on models of any size. Here is how it works: Copied from accelerate import init_empty_weights with init_empty_weights(): my_model = ModelClass(...) For instance: Copied with init_empty_weights(): model = nn.Sequential(*[nn.Linear( 10000 , 10000 ) for _ in range ( 1000 )]) initializes an empty model with a bit more than 100B parameters. Behind the scenes, this relies on the meta device introduced in PyTorch 1.9. During the initialization under the context manager, each time a parameter is created, it is instantly moved to that device. You can’t move a model initialized like this on CPU or another device directly, since it doesn’t have any data. It’s also very likely that a forward pass with that empty model will fail, as not all operations are supported on the meta device. Sharded checkpoints It’s possible your model is so big that even a single copy won’t fit in RAM. That doesn’t mean it can’t be loaded: if you have one or several GPUs, this is more memory available to store your model. In this case, it’s better if your checkpoint is split into several smaller files that we call checkpoint shards. Accelerate will handle sharded checkpoints as long as you follow the following format: your checkpoint should be in a folder, with several files containing the partial state dicts, and there should be an index in the JSON format that contains a dictionary mapping parameter names to the file containing their weights. You can easily shard your model with save_model() . For instance, we could have a folder containing: Copied first_state_dict.bin index.json second_state_dict.bin with index.json being the following file: Copied { "linear1.weight" : "first_state_dict.bin" , "linear1.bias" : "first_state_dict.bin" , "linear2.weight" : "second_state_dict.bin" , "linear2.bias" : "second_state_dict.bin" } and first_state_dict.bin containing the weights for "linear1.weight" and "linear1.bias" , second_state_dict.bin the ones for "linear2.weight" and "linear2.bias" Loading weights The second tool Accelerate introduces is a function load_checkpoint_and_dispatch() , that will allow you to load a checkpoint inside your empty model. This supports full checkpoints (a single file containing the whole state dict) as well as sharded checkpoints. It will also automatically dispatch those weights across the devices you have available (GPUs, CPU RAM), so if you are loading a sharded checkpoint, the maximum RAM usage will be the size of the biggest shard. If you want to use big model inference with Transformers models, check out this documentation . Here is how we can use this to load the GPT2-1.5B model. Let’s download the sharded version of this model. Copied pip install huggingface_hub Copied from huggingface_hub import snapshot_download checkpoint = "marcsun13/gpt2-xl-linear-sharded" weights_location = snapshot_download(repo_id=checkpoint) In order to initialize the model, we will use the library minGPT. 
Copied git clone https://github.com/karpathy/minGPT.git pip install minGPT/ Copied from accelerate import init_empty_weights from mingpt.model import GPT model_config = GPT.get_default_config() model_config.model_type = 'gpt2-xl' model_config.vocab_size = 50257 model_config.block_size = 1024 with init_empty_weights(): model = GPT(model_config) Then, load the checkpoint we just downloaded with: Copied from accelerate import load_checkpoint_and_dispatch model = load_checkpoint_and_dispatch( model, checkpoint=weights_location, device_map= "auto" , no_split_module_classes=[ 'Block' ] ) By passing device_map="auto" , we tell Accelerate to determine automatically where to put each layer of the model depending on the available resources: first, we use the maximum space available on the GPU(s) if we still need space, we store the remaining weights on the CPU if there is not enough RAM, we store the remaining weights on the hard drive as memory-mapped tensors no_split_module_classes This parameter will indicate that some of the modules with the name "Block" should not be split across different devices. You should set here all blocks that include a residual connection of some kind. The device_map You can see the device_map that Accelerate picked by accessing the hf_device_map attribute of your model: Copied model.hf_device_map Copied { 'transformer.wte' : 0 , 'transformer.wpe' : 0 , 'transformer.drop' : 0 , 'transformer.h.0' : 0 , ... 'transformer.h.21' : 0 , 'transformer.h.22' : 1 , 'transformer.h.23' : 1 , 'transformer.h.24' : 1 , ... 'transformer.h.47' : 1 , 'transformer.ln_f' : 1 , 'lm_head' : 1 } It’s fully possible to create your own device map for the layers to use as well, specifying the GPU device to use (a number), "cpu" , or "disk" and pass this in: Copied device_map = { "transformer.wte" : "cpu" , "transformer.wpe" : 0 , "transformer.drop" : "cpu" , "transformer.h.0" : "disk" } model = load_checkpoint_and_dispatch( model, checkpoint=weights_location, device_map=device_map ) Run the model Now that we have done this, our model lies across several devices, and maybe the hard drive. But it can still be used as a regular PyTorch model: Copied from mingpt.bpe import BPETokenizer tokenizer = BPETokenizer() inputs = tokenizer( "Hello, my name is" ).to( 0 ) outputs = model.generate(x1, max_new_tokens= 10 , do_sample= False )[ 0 ] tokenizer.decode(outputs.cpu().squeeze()) Behind the scenes, Accelerate added hooks to the model, so that: at each layer, the inputs are put on the right device (so even if your model is spread across several GPUs, it works) for the weights offloaded on the CPU, they are put on a GPU just before the forward pass and cleaned up just after for the weights offloaded on the hard drive, they are loaded in RAM then put on a GPU just before the forward pass and cleaned up just after This way, your model can run for inference even if it doesn’t fit on one of the GPUs or the CPU RAM! This only supports the inference of your model, not training. Most of the computation happens behind torch.no_grad() context managers to avoid spending some GPU memory with intermediate activations. Designing a device map You can let Accelerate handle the device map computation by setting device_map to one of the supported options ( "auto" , "balanced" , "balanced_low_0" , "sequential" ) or create one yourself if you want more control over where each layer should go. You can derive all sizes of the model (and thus compute a device_map ) on a model that is on the meta device. 
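To make that last point concrete, here is a minimal sketch of deriving a device map from the empty minGPT model before any weights are loaded, reusing model_config and the "Block" class from above:
Copied
from accelerate import infer_auto_device_map, init_empty_weights
from mingpt.model import GPT

# Build the model on the meta device so no real memory is allocated
with init_empty_weights():
    meta_model = GPT(model_config)

# Placement is computed purely from parameter shapes, so this works before loading any checkpoint
device_map = infer_auto_device_map(meta_model, no_split_module_classes=["Block"])
print(device_map)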
All the options will produce the same result when you don’t have enough GPU memory to accommodate the whole model (which is to fit everything that can on the GPU, then offload weights on the CPU or even on the disk if there is not enough RAM). When you have more GPU memory available than the model size, here is the difference between each option: "auto" and "balanced" evenly split the model on all available GPUs, making it possible for you to use a batch size greater than 1. "balanced_low_0" evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the generate function for Transformers models "sequential" will fit what it can on GPU 0, then move on GPU 1 and so forth (so won’t use the last GPUs if it doesn’t need to). The options "auto" and "balanced" produce the same results for now, but the behavior of "auto" might change in the future if we find a strategy that makes more sense, while "balanced" will stay stable. First note that you can limit the memory used on each GPU by using the max_memory argument (available in infer_auto_device_map() and in all functions using it). When setting max_memory , you should pass along a dictionary containing the GPU identifiers (for instance 0 , 1 etc.) and the "cpu" key for the maximum RAM you want to use for CPU offload. The values can either be an integer (in bytes) or a string representing a number with its unit, such as "10GiB" or "10GB" . Here is an example where we don’t want to use more than 10GiB on each of the two GPUs and no more than 30GiB of CPU RAM for the model weights: Copied from accelerate import infer_auto_device_map device_map = infer_auto_device_map(my_model, max_memory={ 0 : "10GiB" , 1 : "10GiB" , "cpu" : "30GiB" }) When a first allocation happens in PyTorch, it loads CUDA kernels which take about 1-2GB of memory depending on the GPU. Therefore you always have less usable memory than the actual size of the GPU. To see how much memory is actually used do torch.ones(1).cuda() and look at the memory usage. Therefore when you create memory maps with max_memory make sure to adjust the available memory accordingly to avoid out-of-memory errors. Additionally, if you do some additional operations with your outputs without placing them back on the CPU (for instance inside the generate method of Transformers) and if you placed your inputs on a GPU, that GPU will consume more memory than the others (Accelerate always place the output back to the device of the input). Therefore if you would like to optimize the maximum batch size and you have many GPUs, give the first GPU less memory. For example, with BLOOM-176B on 8x80 A100 setup, the close-to-ideal map is: Copied max_memory = { 0 : "30GIB" , 1 : "46GIB" , 2 : "46GIB" , 3 : "46GIB" , 4 : "46GIB" , 5 : "46GIB" , 6 : "46GIB" , 7 : "46GIB" } as you can see we gave the remaining 7 GPUs ~50% more memory than GPU 0. If you opt to fully design the device_map yourself, it should be a dictionary with keys being module names of your model and values being a valid device identifier (for instance an integer for the GPUs) or "cpu" for CPU offload, "disk" for disk offload. 
The keys need to cover the whole model, you can then define your device map as you wish: for instance, if your model has two blocks (let’s say block1 and block2 ) which each contain three linear layers (let’s say linear1 , linear2 and linear3 ), a valid device map can be: Copied device_map = { "block1" : 0 , "block2" : 1 } another one that is valid could be: Copied device_map = { "block1" : 0 , "block2.linear1" : 0 , "block2.linear2" : 1 , "block2.linear3" : 1 } On the other hand, this one is not valid as it does not cover every parameter of the model: Copied device_map = { "block1" : 0 , "block2.linear1" : 1 , "block2.linear2" : 1 } To be the most efficient, make sure your device map puts the parameters on the GPUs in a sequential manner (e.g. don’t put one of the first weights on GPU 0, then weights on GPU 1 and the last weight back to GPU 0) to avoid making many transfers of data between the GPUs. CPU offload only If you want to offload your model on CPU, you can use cpu_offload() . As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device and passed as they are needed, then offloaded again. Copied cpu_offload(model, execution_device) You can also use cpu_offload_with_hook() . This function will offloads a model on the CPU and puts it back to an execution device when executed. The difference with cpu_offload() is that the model stays on the execution device after the forward and is only offloaded again when the offload method of the returned hook is called. Furthermore, cpu_offload_with_hook() is more performant but less memory saving. It is useful for pipelines running a model in a loop: Copied model_1, hook_1 = cpu_offload_with_hook(model_1, execution_device) model_2, hook_2 = cpu_offload_with_hook(model_2, execution_device, prev_module_hook=hook_1) model_3, hook_3 = cpu_offload_with_hook(model_3, execution_device, prev_module_hook=hook_2) hid_1 = model_1( input ) for i in range ( 50 ): # model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop. hid_2 = model_2(hid_1) # model2 is offloaded to the CPU just before this forward. hid_3 = model_3(hid_3) # For model3, you need to manually call the hook offload method. hook_3.offload() Disk offload only To perform disk offload, you can use disk_offload() . As a result, all parameters of the model will be offloaded as memory-mapped array in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device passed as they are needed, then offloaded again. Copied disk_offload(model, offload_dir, execution_device) Limits and further development We are aware of the current limitations in the API: infer_auto_device_map() (or device_map="auto" in load_checkpoint_and_dispatch() ) tries to maximize GPU and CPU RAM it sees available when you execute it. While PyTorch is very good at managing GPU RAM efficiently (and giving it back when not needed), it’s not entirely true with Python and CPU RAM. Therefore, an automatically computed device map might be too intense on the CPU. Move a few modules to the disk device if you get crashes due to a lack of RAM. 
infer_auto_device_map() (or device_map="auto" in load_checkpoint_and_dispatch()) assigns devices sequentially (to avoid moving things back and forth), so if your first layer is bigger than the GPU you have, everything will end up on the CPU/disk.
load_checkpoint_and_dispatch() and load_checkpoint_in_model() do not currently check that your state dict matches your model (this will be fixed in a future version), so you may get some odd errors when trying to load a checkpoint with mismatched or missing keys.
The model parallelism used when your model is split over several GPUs is naive and not optimized, meaning that only one GPU works at a given time while the others sit idle.
When weights are offloaded to the CPU/hard drive, there is no pre-fetching (yet; we will work on this in future versions), which means the weights are only put on the GPU when they are needed and not before.
Hard-drive offloading can be very slow if the hardware you run on does not have fast communication between disk and CPU (for example, if it lacks NVMe drives).
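As a hedged illustration of the suggestion above to move a few modules to the disk device, you can post-edit an automatically computed map before dispatching. The module names, memory limits, and offload folder here are hypothetical and should be adapted to what model.hf_device_map reports for your model:

from accelerate import infer_auto_device_map, load_checkpoint_and_dispatch

device_map = infer_auto_device_map(model, max_memory={0: "10GiB", "cpu": "20GiB"})

# Push a couple of CPU-assigned blocks to disk to relieve CPU RAM pressure.
for name in ["transformer.h.46", "transformer.h.47"]:  # hypothetical module names
    if device_map.get(name) == "cpu":
        device_map[name] = "disk"

model = load_checkpoint_and_dispatch(
    model,
    checkpoint=weights_location,        # as downloaded earlier in this guide
    device_map=device_map,
    offload_folder="offload",           # needed when some weights go to "disk"
    no_split_module_classes=["Block"],
)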
EETQ.txt
EETQ

The EETQ library supports int8 per-channel weight-only quantization for NVIDIA GPUs. The high-performance GEMM and GEMV kernels come from FasterTransformer and TensorRT-LLM. It requires no calibration dataset and does not need to pre-quantize your model; moreover, the accuracy degradation is negligible owing to the per-channel quantization.

Make sure you have eetq installed from the release page:

pip install --no-cache-dir https://github.com/NetEase-FuXi/EETQ/releases/download/v1.0.0/EETQ-1.0.0+cu121+torch2.1.2-cp310-cp310-linux_x86_64.whl

or build it from source at https://github.com/NetEase-FuXi/EETQ. EETQ requires CUDA capability >= 7.0 and <= 8.9.

git clone https://github.com/NetEase-FuXi/EETQ.git
cd EETQ/
git submodule update --init --recursive
pip install .

An unquantized model can be quantized via from_pretrained:

from transformers import AutoModelForCausalLM, EetqConfig

path = "/path/to/model"
quantization_config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=quantization_config)

A quantized model can be saved via save_pretrained and reused again via from_pretrained:

quant_path = "/path/to/save/quantized/model"
model.save_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
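To sanity-check the quantized model, generation works exactly as with any other Transformers model. A minimal sketch (the model path and prompt are placeholders):

from transformers import AutoModelForCausalLM, AutoTokenizer, EetqConfig

path = "/path/to/model"  # placeholder, same as above
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, device_map="auto", quantization_config=EetqConfig("int8")
)

inputs = tokenizer("EETQ int8 quantization is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))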
Interface__WhoAmIOrg.txt
Interface: WhoAmIOrg

Properties

• avatarUrl : string
  Defined in hub/src/lib/who-am-i.ts:33
• canPay : boolean
  Defined in hub/src/lib/who-am-i.ts:32
• email : null | string
  Defined in hub/src/lib/who-am-i.ts:31
• fullname : string
  Defined in hub/src/lib/who-am-i.ts:30
• id : string
  Unique ID persistent across renames.
  Defined in hub/src/lib/who-am-i.ts:27
• name : string
  Defined in hub/src/lib/who-am-i.ts:29
• periodEnd : null | number
  Unix timestamp in seconds.
  Defined in hub/src/lib/who-am-i.ts:37
• type : "org"
  Defined in hub/src/lib/who-am-i.ts:28
Server-side_Inference_in_Node.js.txt
Server-side Inference in Node.js Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers.js documentation Server-side Inference in Node.js Transformers.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.0.0 v2.17.2 EN 🤗 Transformers.js Get started Installation The pipeline API Custom usage Tutorials Building a Vanilla JS Application Building a React Application Building a Next.js Application Building a Browser Extension Building an Electron Application Server-side Inference in Node.js Developer Guides Accessing Private/Gated Models Server-side Audio Processing in Node.js API Reference Index Pipelines Models Tokenizers Processors Configs Environment variables Backends Generation Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Server-side Inference in Node.js Although Transformers.js was originally designed to be used in the browser, it’s also able to run inference on the server. In this tutorial, we will design a simple Node.js API that uses Transformers.js for sentiment analysis. We’ll also show you how to use the library in both CommonJS and ECMAScript modules, so you can choose the module system that works best for your project: ECMAScript modules (ESM) - The official standard format to package JavaScript code for reuse. It’s the default module system in modern browsers, with modules imported using import and exported using export . Fortunately, starting with version 13.2.0, Node.js has stable support of ES modules. CommonJS - The default module system in Node.js. In this system, modules are imported using require() and exported using module.exports . Although you can always use the Python library for server-side inference, using Transformers.js means that you can write all of your code in JavaScript (instead of having to set up and communicate with a separate Python process). Useful links: Source code ( ESM or CommonJS ) Documentation Prerequisites Node.js version 18+ npm version 9+ Getting started Let’s start by creating a new Node.js project and installing Transformers.js via NPM : Copied npm init -y npm i @huggingface/transformers Next, create a new file called app.js , which will be the entry point for our application. Depending on whether you’re using ECMAScript modules or CommonJS , you will need to do some things differently (see below). We’ll also create a helper class called MyClassificationPipeline control the loading of the pipeline. It uses the singleton pattern to lazily create a single instance of the pipeline when getInstance is first called, and uses this pipeline for all subsequent calls: ECMAScript modules (ESM) To indicate that your project uses ECMAScript modules, you need to add "type": "module" to your package.json : Copied { ... "type" : "module" , ... 
} Next, you will need to add the following imports to the top of app.js : Copied import http from 'http' ; import querystring from 'querystring' ; import url from 'url' ; Following that, let’s import Transformers.js and define the MyClassificationPipeline class. Copied import { pipeline, env } from '@huggingface/transformers' ; class MyClassificationPipeline { static task = 'text-classification' ; static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english' ; static instance = null ; static async getInstance ( progress_callback = null ) { if ( this . instance === null ) { // NOTE: Uncomment this to change the cache directory // env.cacheDir = './.cache'; this . instance = pipeline ( this . task , this . model , { progress_callback }); } return this . instance ; } } CommonJS Start by adding the following imports to the top of app.js : Copied const http = require ( 'http' ); const querystring = require ( 'querystring' ); const url = require ( 'url' ); Following that, let’s import Transformers.js and define the MyClassificationPipeline class. Since Transformers.js is an ESM module, we will need to dynamically import the library using the import() function: Copied class MyClassificationPipeline { static task = 'text-classification' ; static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english' ; static instance = null ; static async getInstance ( progress_callback = null ) { if ( this . instance === null ) { // Dynamically import the Transformers.js library let { pipeline, env } = await import ( '@huggingface/transformers' ); // NOTE: Uncomment this to change the cache directory // env.cacheDir = './.cache'; this . instance = pipeline ( this . task , this . model , { progress_callback }); } return this . instance ; } } Creating a basic HTTP server Next, let’s create a basic server with the built-in HTTP module. We will listen for requests made to the server (using the /classify endpoint), extract the text query parameter, and run this through the pipeline. Copied // Define the HTTP server const server = http. createServer (); const hostname = '127.0.0.1' ; const port = 3000 ; // Listen for requests made to the server server. on ( 'request' , async (req, res) => { // Parse the request URL const parsedUrl = url. parse (req. url ); // Extract the query parameters const { text } = querystring. parse (parsedUrl. query ); // Set the response headers res. setHeader ( 'Content-Type' , 'application/json' ); let response; if (parsedUrl. pathname === '/classify' && text) { const classifier = await MyClassificationPipeline . getInstance (); response = await classifier (text); res. statusCode = 200 ; } else { response = { 'error' : 'Bad request' } res. statusCode = 400 ; } // Send the JSON response res. end ( JSON . stringify (response)); }); server. listen (port, hostname, () => { console . log ( `Server running at http:// ${hostname} : ${port} /` ); }); Since we use lazy loading, the first request made to the server will also be responsible for loading the pipeline. If you would like to begin loading the pipeline as soon as the server starts running, you can add the following line of code after defining MyClassificationPipeline : Copied MyClassificationPipeline . getInstance (); To start the server, run the following command: Copied node app.js The server should be live at http://127.0.0.1:3000/ , which you can visit in your web browser. 
You should see the following message: Copied { "error" : "Bad request" } This is because we aren’t targeting the /classify endpoint with a valid text query parameter. Let’s try again, this time with a valid request. For example, you can visit http://127.0.0.1:3000/classify?text=I%20love%20Transformers.js and you should see: Copied [ { "label" : "POSITIVE" , "score" : 0.9996721148490906 } ] Great! We’ve successfully created a basic HTTP server that uses Transformers.js to classify text. (Optional) Customization Model caching By default, the first time you run the application, it will download the model files and cache them on your file system (in ./node_modules/@huggingface/transformers/.cache/ ). All subsequent requests will then use this model. You can change the location of the cache by setting env.cacheDir . For example, to cache the model in the .cache directory in the current working directory, you can add: Copied env. cacheDir = './.cache' ; Use local models If you want to use local model files, you can set env.localModelPath as follows: Copied // Specify a custom location for models (defaults to '/models/'). env. localModelPath = '/path/to/models/' ; You can also disable loading of remote models by setting env.allowRemoteModels to false : Copied // Disable the loading of remote models from the Hugging Face Hub: env. allowRemoteModels = false ; < > Update on GitHub ← Building an Electron Application Accessing Private/Gated Models → Server-side Inference in Node.js Prerequisites Getting started ECMA Script modules (ES M) CommonJS Creating a basic HTT P server ( Optional) Customization Model caching Use local models
Notebooks.txt
Notebooks

We have prepared some notebooks for you so that you can run the documentation tutorials directly.

• Create your own chatbot with llama-2-13B on AWS Inferentia: shows how to run the Llama-2 13B chat model on AWS Inferentia 2.
• How to generate images with Stable Diffusion: shows how to use the stable-diffusion v2.1 model to generate images from prompts on Inferentia 2.
• How to generate images with Stable Diffusion XL: shows how to use the stable-diffusion XL model to generate images from prompts on Inferentia 2.
• Compute text embeddings with Sentence Transformers on Inferentia: shows how to use Sentence Transformers to compute sentence/text embeddings on Inferentia 2.
• How to compile (if needed) and generate text with CodeLlama 7B: shows how to use CodeLlama 7B to generate code, and also walks through compilation.
Summary_of_the_tokenizers.txt
Summary of the tokenizers

On this page, we will have a closer look at tokenization. As we saw in the preprocessing tutorial, tokenizing a text is splitting it into words or subwords, which are then converted to ids through a look-up table. Converting words or subwords to ids is straightforward, so in this summary, we will focus on splitting a text into words or subwords (i.e. tokenizing a text). More specifically, we will look at the three main types of tokenizers used in 🤗 Transformers: Byte-Pair Encoding (BPE), WordPiece, and SentencePiece, and show examples of which tokenizer type is used by which model.

Note that on each model page, you can look at the documentation of the associated tokenizer to know which tokenizer type was used by the pretrained model. For instance, if we look at BertTokenizer, we can see that the model uses WordPiece.

Introduction

Splitting a text into smaller chunks is a task that is harder than it looks, and there are multiple ways of doing so. For instance, let's look at the sentence "Don't you love 🤗 Transformers? We sure do." A simple way of tokenizing this text is to split it by spaces, which would give:

["Don't", "you", "love", "🤗", "Transformers?", "We", "sure", "do."]

This is a sensible first step, but if we look at the tokens "Transformers?" and "do.", we notice that the punctuation is attached to the words "Transformer" and "do", which is suboptimal. We should take the punctuation into account so that a model does not have to learn a different representation of a word for every possible punctuation symbol that could follow it, which would explode the number of representations the model has to learn. Taking punctuation into account, tokenizing our exemplary text would give:

["Don", "'", "t", "you", "love", "🤗", "Transformers", "?", "We", "sure", "do", "."]

Better. However, it is unfortunate how the tokenization dealt with the word "Don't". "Don't" stands for "do not", so it would be better tokenized as ["Do", "n't"]. This is where things start getting complicated, and it is part of the reason each model has its own tokenizer type. Depending on the rules we apply for tokenizing a text, a different tokenized output is generated for the same text.
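As a quick illustration of the two naive strategies above (this is plain string handling for demonstration only, not how any of the tokenizers discussed below pre-tokenize text):

import re

text = "Don't you love 🤗 Transformers? We sure do."

# Whitespace splitting: punctuation stays attached ("Transformers?", "do.").
print(text.split())

# Treat punctuation marks as separate tokens, similar to the second list above.
print(re.findall(r"\w+|[^\w\s]", text))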
A pretrained model only performs properly if you feed it an input that was tokenized with the same rules that were used to tokenize its training data. spaCy and Moses are two popular rule-based tokenizers. Applying them on our example, spaCy and Moses would output something like: Copied [ "Do" , "n't" , "you" , "love" , "🤗" , "Transformers" , "?" , "We" , "sure" , "do" , "." ] As can be seen space and punctuation tokenization, as well as rule-based tokenization, is used here. Space and punctuation tokenization and rule-based tokenization are both examples of word tokenization, which is loosely defined as splitting sentences into words. While it’s the most intuitive way to split texts into smaller chunks, this tokenization method can lead to problems for massive text corpora. In this case, space and punctuation tokenization usually generates a very big vocabulary (the set of all unique words and tokens used). E.g. , Transformer XL uses space and punctuation tokenization, resulting in a vocabulary size of 267,735! Such a big vocabulary size forces the model to have an enormous embedding matrix as the input and output layer, which causes both an increased memory and time complexity. In general, transformers models rarely have a vocabulary size greater than 50,000, especially if they are pretrained only on a single language. So if simple space and punctuation tokenization is unsatisfactory, why not simply tokenize on characters? While character tokenization is very simple and would greatly reduce memory and time complexity it makes it much harder for the model to learn meaningful input representations. E.g. learning a meaningful context-independent representation for the letter "t" is much harder than learning a context-independent representation for the word "today" . Therefore, character tokenization is often accompanied by a loss of performance. So to get the best of both worlds, transformers models use a hybrid between word-level and character-level tokenization called subword tokenization. Subword tokenization Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller subwords, but rare words should be decomposed into meaningful subwords. For instance "annoyingly" might be considered a rare word and could be decomposed into "annoying" and "ly" . Both "annoying" and "ly" as stand-alone subwords would appear more frequently while at the same time the meaning of "annoyingly" is kept by the composite meaning of "annoying" and "ly" . This is especially useful in agglutinative languages such as Turkish, where you can form (almost) arbitrarily long complex words by stringing together subwords. Subword tokenization allows the model to have a reasonable vocabulary size while being able to learn meaningful context-independent representations. In addition, subword tokenization enables the model to process words it has never seen before, by decomposing them into known subwords. For instance, the BertTokenizer tokenizes "I have a new GPU!" as follows: Copied >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained( "google-bert/bert-base-uncased" ) >>> tokenizer.tokenize( "I have a new GPU!" ) [ "i" , "have" , "a" , "new" , "gp" , "##u" , "!" ] Because we are considering the uncased model, the sentence was lowercased first. We can see that the words ["i", "have", "a", "new"] are present in the tokenizer’s vocabulary, but the word "gpu" is not. 
Consequently, the tokenizer splits "gpu" into known subwords: ["gp" and "##u"] . "##" means that the rest of the token should be attached to the previous one, without space (for decoding or reversal of the tokenization). As another example, XLNetTokenizer tokenizes our previously exemplary text as follows: Copied >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained( "xlnet/xlnet-base-cased" ) >>> tokenizer.tokenize( "Don't you love 🤗 Transformers? We sure do." ) [ "▁Don" , "'" , "t" , "▁you" , "▁love" , "▁" , "🤗" , "▁" , "Transform" , "ers" , "?" , "▁We" , "▁sure" , "▁do" , "." ] We’ll get back to the meaning of those "▁" when we look at SentencePiece . As one can see, the rare word "Transformers" has been split into the more frequent subwords "Transform" and "ers" . Let’s now look at how the different subword tokenization algorithms work. Note that all of those tokenization algorithms rely on some form of training which is usually done on the corpus the corresponding model will be trained on. Byte-Pair Encoding (BPE) Byte-Pair Encoding (BPE) was introduced in Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015) . BPE relies on a pre-tokenizer that splits the training data into words. Pretokenization can be as simple as space tokenization, e.g. GPT-2 , RoBERTa . More advanced pre-tokenization include rule-based tokenization, e.g. XLM , FlauBERT which uses Moses for most languages, or GPT which uses spaCy and ftfy, to count the frequency of each word in the training corpus. After pre-tokenization, a set of unique words has been created and the frequency with which each word occurred in the training data has been determined. Next, BPE creates a base vocabulary consisting of all symbols that occur in the set of unique words and learns merge rules to form a new symbol from two symbols of the base vocabulary. It does so until the vocabulary has attained the desired vocabulary size. Note that the desired vocabulary size is a hyperparameter to define before training the tokenizer. As an example, let’s assume that after pre-tokenization, the following set of words including their frequency has been determined: Copied ( "hug" , 10 ) , ( "pug" , 5 ) , ( "pun" , 12 ) , ( "bun" , 4 ) , ( "hugs" , 5 ) Consequently, the base vocabulary is ["b", "g", "h", "n", "p", "s", "u"] . Splitting all words into symbols of the base vocabulary, we obtain: Copied ( "h" "u" "g" , 10 ) , ( "p" "u" "g" , 5 ) , ( "p" "u" "n" , 12 ) , ( "b" "u" "n" , 4 ) , ( "h" "u" "g" "s" , 5 ) BPE then counts the frequency of each possible symbol pair and picks the symbol pair that occurs most frequently. In the example above "h" followed by "u" is present 10 + 5 = 15 times (10 times in the 10 occurrences of "hug" , 5 times in the 5 occurrences of "hugs" ). However, the most frequent symbol pair is "u" followed by "g" , occurring 10 + 5 + 5 = 20 times in total. Thus, the first merge rule the tokenizer learns is to group all "u" symbols followed by a "g" symbol together. Next, "ug" is added to the vocabulary. The set of words then becomes Copied ( "h" "ug" , 10 ) , ( "p" "ug" , 5 ) , ( "p" "u" "n" , 12 ) , ( "b" "u" "n" , 4 ) , ( "h" "ug" "s" , 5 ) BPE then identifies the next most common symbol pair. It’s "u" followed by "n" , which occurs 16 times. "u" , "n" is merged to "un" and added to the vocabulary. The next most frequent symbol pair is "h" followed by "ug" , occurring 15 times. Again the pair is merged and "hug" can be added to the vocabulary. 
At this stage, the vocabulary is ["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"] and our set of unique words is represented as Copied ( "hug" , 10 ) , ( "p" "ug" , 5 ) , ( "p" "un" , 12 ) , ( "b" "un" , 4 ) , ( "hug" "s" , 5 ) Assuming, that the Byte-Pair Encoding training would stop at this point, the learned merge rules would then be applied to new words (as long as those new words do not include symbols that were not in the base vocabulary). For instance, the word "bug" would be tokenized to ["b", "ug"] but "mug" would be tokenized as ["<unk>", "ug"] since the symbol "m" is not in the base vocabulary. In general, single letters such as "m" are not replaced by the "<unk>" symbol because the training data usually includes at least one occurrence of each letter, but it is likely to happen for very special characters like emojis. As mentioned earlier, the vocabulary size, i.e. the base vocabulary size + the number of merges, is a hyperparameter to choose. For instance GPT has a vocabulary size of 40,478 since they have 478 base characters and chose to stop training after 40,000 merges. Byte-level BPE A base vocabulary that includes all possible base characters can be quite large if e.g. all unicode characters are considered as base characters. To have a better base vocabulary, GPT-2 uses bytes as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that every base character is included in the vocabulary. With some additional rules to deal with punctuation, the GPT2’s tokenizer can tokenize every text without the need for the <unk> symbol. GPT-2 has a vocabulary size of 50,257, which corresponds to the 256 bytes base tokens, a special end-of-text token and the symbols learned with 50,000 merges. WordPiece WordPiece is the subword tokenization algorithm used for BERT , DistilBERT , and Electra . The algorithm was outlined in Japanese and Korean Voice Search (Schuster et al., 2012) and is very similar to BPE. WordPiece first initializes the vocabulary to include every character present in the training data and progressively learns a given number of merge rules. In contrast to BPE, WordPiece does not choose the most frequent symbol pair, but the one that maximizes the likelihood of the training data once added to the vocabulary. So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is equivalent to finding the symbol pair, whose probability divided by the probabilities of its first symbol followed by its second symbol is the greatest among all symbol pairs. E.g. "u" , followed by "g" would have only been merged if the probability of "ug" divided by "u" , "g" would have been greater than for any other symbol pair. Intuitively, WordPiece is slightly different to BPE in that it evaluates what it loses by merging two symbols to ensure it’s worth it . Unigram Unigram is a subword tokenization algorithm introduced in Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018) . In contrast to BPE or WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims down each symbol to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and the most common substrings. Unigram is not used directly for any of the models in the transformers, but it’s used in conjunction with SentencePiece . 
At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol was to be removed from the vocabulary. Unigram then removes p (with p usually being 10% or 20%) percent of the symbols whose loss increase is the lowest, i.e. those symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has reached the desired size. The Unigram algorithm always keeps the base characters so that any word can be tokenized.

Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of tokenizing new text after training. As an example, if a trained Unigram tokenizer exhibits the vocabulary:

["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"],

"hugs" could be tokenized both as ["hug", "s"], ["h", "ug", "s"] or ["h", "u", "g", "s"]. So which one to choose? Unigram saves the probability of each token in the training corpus on top of saving the vocabulary so that the probability of each possible tokenization can be computed after training. The algorithm simply picks the most likely tokenization in practice, but also offers the possibility to sample a possible tokenization according to their probabilities.

Those probabilities are defined by the loss the tokenizer is trained on. Assuming that the training data consists of the words $x_{1}, \dots, x_{N}$ and that the set of all possible tokenizations for a word $x_{i}$ is defined as $S(x_{i})$, then the overall loss is defined as

$$\mathcal{L} = -\sum_{i=1}^{N} \log \left( \sum_{x \in S(x_{i})} p(x) \right)$$

SentencePiece

All tokenization algorithms described so far have the same problem: it is assumed that the input text uses spaces to separate words. However, not all languages use spaces to separate words. One possible solution is to use language-specific pre-tokenizers, e.g. XLM uses a specific Chinese, Japanese, and Thai pre-tokenizer. To solve this problem more generally, SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018) treats the input as a raw input stream, thus including the space in the set of characters to use. It then uses the BPE or unigram algorithm to construct the appropriate vocabulary.

The XLNetTokenizer uses SentencePiece for example, which is also why in the example earlier the "▁" character was included in the vocabulary. Decoding with SentencePiece is very easy since all tokens can just be concatenated and "▁" is replaced by a space.

All transformers models in the library that use SentencePiece use it in combination with unigram. Examples of models using SentencePiece are ALBERT, XLNet, Marian, and T5.
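To connect the BPE walkthrough above to working code, here is a hedged sketch using the 🤗 Tokenizers library on the toy hug/pug/pun/bun/hugs corpus. The vocabulary size and corpus construction are illustrative, and the exact merges and unknown-token handling may differ in detail from the hand-worked example:

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Rebuild the toy corpus with the word frequencies used above.
corpus = ["hug"] * 10 + ["pug"] * 5 + ["pun"] * 12 + ["bun"] * 4 + ["hugs"] * 5

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# 11 = 1 special token + 7 base symbols + roughly the merges "ug", "un", "hug".
trainer = BpeTrainer(vocab_size=11, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

print(tokenizer.get_vocab())                # should contain "ug", "un", "hug"
print(tokenizer.encode("bug mug").tokens)   # e.g. ['b', 'ug', '[UNK]', 'ug']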
Using_TGI_with_Intel_GPUs.txt
Using TGI with Intel GPUs

TGI-optimized models are supported on Intel Data Center GPU Max1100 and Max1550; the recommended usage is through Docker. On a server powered by Intel GPUs, TGI can be launched with the following command:

model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --rm --privileged --cap-add=sys_nice \
    --device=/dev/dri \
    --ipc=host --shm-size 1g --net host -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.0.1-intel-xpu \
    --model-id $model --cuda-graphs 0

Using TGI with Intel CPUs

Intel® Extension for PyTorch (IPEX) also provides further optimizations for Intel CPUs. IPEX provides optimized operations such as flash attention, paged attention, Add + LayerNorm, RoPE and more. On a server powered by Intel CPUs, TGI can be launched with the following command:

model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --rm --privileged --cap-add=sys_nice \
    --device=/dev/dri \
    --ipc=host --shm-size 1g --net host -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.0.1-intel-cpu \
    --model-id $model --cuda-graphs 0

The launched TGI server can then be queried from clients; make sure to check out the Consuming TGI guide.
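Once one of the containers above is running, any HTTP client can reach it. A minimal Python sketch against TGI's /generate route (this assumes the container's default port 80 exposed via --net host; adjust the URL to your setup):

import requests

response = requests.post(
    "http://localhost:80/generate",
    json={
        "inputs": "What is Deep Learning?",
        "parameters": {"max_new_tokens": 50},
    },
    headers={"Content-Type": "application/json"},
)
print(response.json()["generated_text"])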
Get_dataset_information.txt
Get dataset information Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation Get dataset information Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Get dataset information The dataset viewer provides an /info endpoint for exploring the general information about dataset, including such fields as description, citation, homepage, license and features. The /info endpoint accepts two query parameters: dataset : the dataset name config : the subset name Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://datasets-server.huggingface.co/info?dataset=ibm/duorc&config=SelfRC" def query (): response = requests.get(API_URL, headers=headers) return response.json() data = query() The endpoint response is a JSON with the dataset_info key. Its structure and content correspond to DatasetInfo object of the datasets library. Copied { "dataset_info" : { "description" : "" , "citation" : "" , "homepage" : "" , "license" : "" , "features" : { "plot_id" : { "dtype" : "string" , "_type" : "Value" } , "plot" : { "dtype" : "string" , "_type" : "Value" } , "title" : { "dtype" : "string" , "_type" : "Value" } , "question_id" : { "dtype" : "string" , "_type" : "Value" } , "question" : { "dtype" : "string" , "_type" : "Value" } , "answers" : { "feature" : { "dtype" : "string" , "_type" : "Value" } , "_type" : "Sequence" } , "no_answer" : { "dtype" : "bool" , "_type" : "Value" } } , "builder_name" : "parquet" , "dataset_name" : "duorc" , "config_name" : "SelfRC" , "version" : { "version_str" : "0.0.0" , "major" : 0 , "minor" : 0 , "patch" : 0 } , "splits" : { "train" : { "name" : "train" , "num_bytes" : 248966361 , "num_examples" : 60721 , "dataset_name" : null } , "validation" : { "name" : "validation" , "num_bytes" : 56359392 , "num_examples" : 12961 , "dataset_name" : null } , "test" : { "name" : "test" , "num_bytes" : 51022318 , "num_examples" : 12559 , "dataset_name" : null } } , "download_size" : 21001846 , "dataset_size" : 356348071 } , "partial" : false } < > Update on GitHub ← List splits and subsets Preview a dataset → Get dataset information
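Building on the /info response shown above, here is a small sketch that pulls out the feature names and per-split sizes, reusing the same endpoint and the API_TOKEN variable from the earlier example:

import requests

headers = {"Authorization": f"Bearer {API_TOKEN}"}  # API_TOKEN as defined earlier
API_URL = "https://datasets-server.huggingface.co/info?dataset=ibm/duorc&config=SelfRC"

info = requests.get(API_URL, headers=headers).json()["dataset_info"]

print("features:", list(info["features"]))
for split, details in info["splits"].items():
    print(f"{split}: {details['num_examples']} examples, {details['num_bytes']} bytes")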
Builder_classes.txt
Builder classes Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Builder classes Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Builder classes Builders 🤗 Datasets relies on two main classes during the dataset building process: DatasetBuilder and BuilderConfig . class datasets. DatasetBuilder < source > ( cache_dir : typing.Optional[str] = None dataset_name : typing.Optional[str] = None config_name : typing.Optional[str] = None hash : typing.Optional[str] = None base_path : typing.Optional[str] = None info : typing.Optional[datasets.info.DatasetInfo] = None features : typing.Optional[datasets.features.features.Features] = None token : typing.Union[bool, str, NoneType] = None repo_id : typing.Optional[str] = None data_files : typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir : typing.Optional[str] = None storage_options : typing.Optional[dict] = None writer_batch_size : typing.Optional[int] = None **config_kwargs ) Parameters cache_dir ( str , optional ) — Directory to cache data. Defaults to "~/.cache/huggingface/datasets" . dataset_name ( str , optional ) — Name of the dataset, if different from the builder name. Useful for packaged builders like csv, imagefolder, audiofolder, etc. to reflect the difference between datasets that use the same packaged builder. config_name ( str , optional ) — Name of the dataset configuration. It affects the data generated on disk. 
Different configurations will have their own subdirectories and versions. If not provided, the default configuration is used (if it exists). Added in 2.3.0 Parameter name was renamed to config_name . hash ( str , optional ) — Hash specific to the dataset code. Used to update the caching directory when the dataset loading script code is updated (to avoid reusing old data). The typical caching directory (defined in self._relative_data_dir ) is name/version/hash/ . base_path ( str , optional ) — Base path for relative paths that are used to download files. This can be a remote URL. features ( Features , optional ) — Features types to use with this dataset. It can be used to change the Features types of a dataset, for example. token ( str or bool , optional ) — String or boolean to use as Bearer token for remote files on the Datasets Hub. If True , will get token from "~/.huggingface" . repo_id ( str , optional ) — ID of the dataset repository. Used to distinguish builders with the same name but not coming from the same namespace, for example “squad” and “lhoestq/squad” repo IDs. In the latter, the builder name would be “lhoestq___squad”. data_files ( str or Sequence or Mapping , optional ) — Path(s) to source data file(s). For builders like “csv” or “json” that need the user to specify data files. They can be either local or remote files. For convenience, you can use a DataFilesDict . data_dir ( str , optional ) — Path to directory containing source data file(s). Use only if data_files is not passed, in which case it is equivalent to passing os.path.join(data_dir, "**") as data_files . For builders that require manual download, it must be the path to the local directory containing the manually downloaded data. storage_options ( dict , optional ) — Key/value pairs to be passed on to the dataset file-system backend, if any. writer_batch_size ( int , optional ) — Batch size used by the ArrowWriter. It defines the number of samples that are kept in memory before writing them and also the length of the arrow chunks. None means that the ArrowWriter will use its default value. * *config_kwargs (additional keyword arguments) — Keyword arguments to be passed to the corresponding builder configuration class, set on the class attribute DatasetBuilder.BUILDER_CONFIG_CLASS . The builder configuration class is BuilderConfig or a subclass of it. Abstract base class for all datasets. DatasetBuilder has 3 key methods: DatasetBuilder.info : Documents the dataset, including feature names, types, shapes, version, splits, citation, etc. DatasetBuilder.download_and_prepare() : Downloads the source data and writes it to disk. DatasetBuilder.as_dataset() : Generates a Dataset . Some DatasetBuilder s expose multiple variants of the dataset by defining a BuilderConfig subclass and accepting a config object (or name) on construction. Configurable datasets expose a pre-defined set of configurations in DatasetBuilder.builder_configs() . as_dataset < source > ( split : typing.Optional[datasets.splits.Split] = None run_post_process = True verification_mode : typing.Union[datasets.utils.info_utils.VerificationMode, str, NoneType] = None in_memory = False ) Parameters split ( datasets.Split ) — Which subset of the data to return. run_post_process ( bool , defaults to True ) — Whether to run post-processing dataset transforms and/or add indexes. 
verification_mode ( VerificationMode or str , defaults to BASIC_CHECKS ) — Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…). Added in 2.9.1 in_memory ( bool , defaults to False ) — Whether to copy the data in-memory. Return a Dataset for the specified split. Example: Copied >>> from datasets import load_dataset_builder >>> builder = load_dataset_builder( 'rotten_tomatoes' ) >>> builder.download_and_prepare() >>> ds = builder.as_dataset(split= 'train' ) >>> ds Dataset({ features: [ 'text' , 'label' ], num_rows: 8530 }) download_and_prepare < source > ( output_dir : typing.Optional[str] = None download_config : typing.Optional[datasets.download.download_config.DownloadConfig] = None download_mode : typing.Union[datasets.download.download_manager.DownloadMode, str, NoneType] = None verification_mode : typing.Union[datasets.utils.info_utils.VerificationMode, str, NoneType] = None dl_manager : typing.Optional[datasets.download.download_manager.DownloadManager] = None base_path : typing.Optional[str] = None file_format : str = 'arrow' max_shard_size : typing.Union[str, int, NoneType] = None num_proc : typing.Optional[int] = None storage_options : typing.Optional[dict] = None **download_and_prepare_kwargs ) Parameters output_dir ( str , optional ) — Output directory for the dataset. Default to this builder’s cache_dir , which is inside ~/.cache/huggingface/datasets by default. Added in 2.5.0 download_config ( DownloadConfig , optional ) — Specific download configuration parameters. download_mode ( DownloadMode or str , optional ) — Select the download/generate mode, default to REUSE_DATASET_IF_EXISTS . verification_mode ( VerificationMode or str , defaults to BASIC_CHECKS ) — Verification mode determining the checks to run on the downloaded/processed dataset information (checksums/size/splits/…). Added in 2.9.1 dl_manager ( DownloadManager , optional ) — Specific DownloadManger to use. base_path ( str , optional ) — Base path for relative paths that are used to download files. This can be a remote url. If not specified, the value of the base_path attribute ( self.base_path ) will be used instead. file_format ( str , optional ) — Format of the data files in which the dataset will be written. Supported formats: “arrow”, “parquet”. Default to “arrow” format. If the format is “parquet”, then image and audio data are embedded into the Parquet files instead of pointing to local files. Added in 2.5.0 max_shard_size ( Union[str, int] , optional ) — Maximum number of bytes written per shard, default is “500MB”. The size is based on uncompressed data size, so in practice your shard files may be smaller than max_shard_size thanks to Parquet compression for example. Added in 2.5.0 num_proc ( int , optional , defaults to None ) — Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default. Added in 2.7.0 storage_options ( dict , optional ) — Key/value pairs to be passed on to the caching file-system backend, if any. Added in 2.5.0 * *download_and_prepare_kwargs (additional keyword arguments) — Keyword arguments. Downloads and prepares dataset for reading. 
Example: Download and prepare the dataset as Arrow files that can be loaded as a Dataset using builder.as_dataset() : Copied >>> from datasets import load_dataset_builder >>> builder = load_dataset_builder( "rotten_tomatoes" ) >>> builder.download_and_prepare() Download and prepare the dataset as sharded Parquet files locally: Copied >>> from datasets import load_dataset_builder >>> builder = load_dataset_builder( "rotten_tomatoes" ) >>> builder.download_and_prepare( "./output_dir" , file_format= "parquet" ) Download and prepare the dataset as sharded Parquet files in a cloud storage: Copied >>> from datasets import load_dataset_builder >>> storage_options = { "key" : aws_access_key_id, "secret" : aws_secret_access_key} >>> builder = load_dataset_builder( "rotten_tomatoes" ) >>> builder.download_and_prepare( "s3://my-bucket/my_rotten_tomatoes" , storage_options=storage_options, file_format= "parquet" ) get_all_exported_dataset_infos < source > ( ) Empty dict if doesn’t exist Example: Copied >>> from datasets import load_dataset_builder >>> ds_builder = load_dataset_builder( 'vivos' ) >>> ds_builder.get_all_exported_dataset_infos() { 'default' : DatasetInfo(description= '' , citation= '' , homepage= '' , license= '' , features={ 'speaker_id' : Value(dtype= 'string' , id = None ), 'path' : Value(dtype= 'string' , id = None ), 'audio' : Audio(sampling_rate= 16000 , mono= True , decode= True , id = None ), 'sentence' : Value(dtype= 'string' , id = None )}, post_processed= None , supervised_keys= None , builder_name= None , dataset_name= None , config_name= 'default' , version= None , splits={ 'train' : SplitInfo(name= 'train' , num_bytes= 1722002133 , num_examples= 11660 , shard_lengths= None , dataset_name= None ), 'test' : SplitInfo(name= 'test' , num_bytes= 86120227 , num_examples= 760 , shard_lengths= None , dataset_name= None )}, download_checksums= None , download_size= 1475540500 , post_processing_size= None , dataset_size= 1808122360 , size_in_bytes= None )} get_exported_dataset_info < source > ( ) Empty DatasetInfo if doesn’t exist Example: Copied >>> from datasets import load_dataset_builder >>> ds_builder = load_dataset_builder( 'rotten_tomatoes' ) >>> ds_builder.get_exported_dataset_info() DatasetInfo(description= '' , citation= '' , homepage= '' , license= '' , features={ 'speaker_id' : Value(dtype= 'string' , id = None ), 'path' : Value(dtype= 'string' , id = None ), 'audio' : Audio(sampling_rate= 16000 , mono= True , decode= True , id = None ), 'sentence' : Value(dtype= 'string' , id = None )}, post_processed= None , supervised_keys= None , builder_name= None , dataset_name= None , config_name= 'default' , version= None , splits={ 'train' : SplitInfo(name= 'train' , num_bytes= 1722002133 , num_examples= 11660 , shard_lengths= None , dataset_name= None ), 'test' : SplitInfo(name= 'test' , num_bytes= 86120227 , num_examples= 760 , shard_lengths= None , dataset_name= None )}, download_checksums= None , download_size= 1475540500 , post_processing_size= None , dataset_size= 1808122360 , size_in_bytes= None ) get_imported_module_dir < source > ( ) Return the path of the module of this class or subclass. class datasets. 
GeneratorBasedBuilder < source > ( cache_dir : typing.Optional[str] = None dataset_name : typing.Optional[str] = None config_name : typing.Optional[str] = None hash : typing.Optional[str] = None base_path : typing.Optional[str] = None info : typing.Optional[datasets.info.DatasetInfo] = None features : typing.Optional[datasets.features.features.Features] = None token : typing.Union[bool, str, NoneType] = None repo_id : typing.Optional[str] = None data_files : typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir : typing.Optional[str] = None storage_options : typing.Optional[dict] = None writer_batch_size : typing.Optional[int] = None **config_kwargs ) Base class for datasets with data generation based on dict generators. GeneratorBasedBuilder is a convenience class that abstracts away much of the data writing and reading of DatasetBuilder . It expects subclasses to implement generators of feature dictionaries across the dataset splits ( _split_generators ). See the method docstrings for details. class datasets. ArrowBasedBuilder < source > ( cache_dir : typing.Optional[str] = None dataset_name : typing.Optional[str] = None config_name : typing.Optional[str] = None hash : typing.Optional[str] = None base_path : typing.Optional[str] = None info : typing.Optional[datasets.info.DatasetInfo] = None features : typing.Optional[datasets.features.features.Features] = None token : typing.Union[bool, str, NoneType] = None repo_id : typing.Optional[str] = None data_files : typing.Union[str, list, dict, datasets.data_files.DataFilesDict, NoneType] = None data_dir : typing.Optional[str] = None storage_options : typing.Optional[dict] = None writer_batch_size : typing.Optional[int] = None **config_kwargs ) Base class for datasets with data generation based on Arrow loading functions (CSV/JSON/Parquet). class datasets. BuilderConfig < source > ( name : str = 'default' version : typing.Union[str, datasets.utils.version.Version, NoneType] = 0.0.0 data_dir : typing.Optional[str] = None data_files : typing.Union[datasets.data_files.DataFilesDict, datasets.data_files.DataFilesPatternsDict, NoneType] = None description : typing.Optional[str] = None ) Parameters name ( str , defaults to default ) — The name of the configuration. version ( Version or str , defaults to 0.0.0 ) — The version of the configuration. data_dir ( str , optional ) — Path to the directory containing the source data. data_files ( str or Sequence or Mapping , optional ) — Path(s) to source data file(s). description ( str , optional ) — A human description of the configuration. Base class for DatasetBuilder data configuration. DatasetBuilder subclasses with data configuration options should subclass BuilderConfig and add their own properties. create_config_id < source > ( config_kwargs : dict custom_features : typing.Optional[datasets.features.features.Features] = None ) The config id is used to build the cache directory. By default it is equal to the config name. However the name of a config is not sufficient to have a unique identifier for the dataset being generated since it doesn’t take into account: the config kwargs that can be used to overwrite attributes the custom features used to write the dataset the data_files for json/text/csv/pandas datasets Therefore the config id is just the config name with an optional suffix based on these. Download class datasets. 
DownloadManager < source > ( dataset_name : typing.Optional[str] = None data_dir : typing.Optional[str] = None download_config : typing.Optional[datasets.download.download_config.DownloadConfig] = None base_path : typing.Optional[str] = None record_checksums = True ) download < source > ( url_or_urls ) → str or list or dict Parameters url_or_urls ( str or list or dict ) — URL or list or dict of URLs to download. Each URL is a str . Returns str or list or dict The downloaded paths matching the given input url_or_urls . Download given URL(s). By default, only one process is used for download. Pass customized download_config.num_proc to change this behavior. Example: Copied >>> downloaded_files = dl_manager.download( 'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz' ) download_and_extract < source > ( url_or_urls ) → extracted_path(s) Parameters url_or_urls ( str or list or dict ) — URL or list or dict of URLs to download and extract. Each URL is a str . Returns extracted_path(s) str , extracted paths of given URL(s). Download and extract given url_or_urls . Is roughly equivalent to: Copied extracted_paths = dl_manager.extract(dl_manager.download(url_or_urls)) extract < source > ( path_or_paths ) → extracted_path(s) Parameters path_or_paths (path or list or dict ) — Path of file to extract. Each path is a str . Returns extracted_path(s) str , The extracted paths matching the given input path_or_paths. Extract given path(s). Example: Copied >>> downloaded_files = dl_manager.download( 'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz' ) >>> extracted_files = dl_manager.extract(downloaded_files) iter_archive < source > ( path_or_buf : typing.Union[str, _io.BufferedReader] ) → tuple[str, io.BufferedReader] Parameters path_or_buf ( str or io.BufferedReader ) — Archive path or archive binary file object. Yields tuple[str, io.BufferedReader] Iterate over files within an archive. Example: Copied >>> archive = dl_manager.download( 'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz' ) >>> files = dl_manager.iter_archive(archive) iter_files < source > ( paths : typing.Union[str, typing.List[str]] ) → str Parameters paths ( str or list of str ) — Root paths. Yields str Iterate over file paths. Example: Copied >>> files = dl_manager.download_and_extract( 'https://huggingface.co/datasets/beans/resolve/main/data/train.zip' ) >>> files = dl_manager.iter_files(files) class datasets. StreamingDownloadManager < source > ( dataset_name : typing.Optional[str] = None data_dir : typing.Optional[str] = None download_config : typing.Optional[datasets.download.download_config.DownloadConfig] = None base_path : typing.Optional[str] = None ) Download manager that uses the ”::” separator to navigate through (possibly remote) compressed archives. Contrary to the regular DownloadManager , the download and extract methods don’t actually download nor extract data, but they rather return the path or url that could be opened using the xopen function which extends the built-in open function to stream data from remote files. download < source > ( url_or_urls ) → url(s) Parameters url_or_urls ( str or list or dict ) — URL(s) of files to stream data from. Each url is a str . Returns url(s) ( str or list or dict ), URL(s) to stream data from matching the given input url_or_urls. Normalize URL(s) of files to stream data from. This is the lazy version of DownloadManager.download for streaming. 
Example: Copied >>> downloaded_files = dl_manager.download( 'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz' ) download_and_extract < source > ( url_or_urls ) → url(s) Parameters url_or_urls ( str or list or dict ) — URL(s) to stream from data from. Each url is a str . Returns url(s) ( str or list or dict ), URL(s) to stream data from matching the given input url_or_urls . Prepare given url_or_urls for streaming (add extraction protocol). This is the lazy version of DownloadManager.download_and_extract for streaming. Is equivalent to: Copied urls = dl_manager.extract(dl_manager.download(url_or_urls)) extract < source > ( url_or_urls ) → url(s) Parameters url_or_urls ( str or list or dict ) — URL(s) of files to stream data from. Each url is a str . Returns url(s) ( str or list or dict ), URL(s) to stream data from matching the given input url_or_urls . Add extraction protocol for given url(s) for streaming. This is the lazy version of DownloadManager.extract for streaming. Example: Copied >>> downloaded_files = dl_manager.download( 'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz' ) >>> extracted_files = dl_manager.extract(downloaded_files) iter_archive < source > ( urlpath_or_buf : typing.Union[str, _io.BufferedReader] ) → tuple[str, io.BufferedReader] Parameters urlpath_or_buf ( str or io.BufferedReader ) — Archive path or archive binary file object. Yields tuple[str, io.BufferedReader] Iterate over files within an archive. Example: Copied >>> archive = dl_manager.download( 'https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz' ) >>> files = dl_manager.iter_archive(archive) iter_files < source > ( urlpaths : typing.Union[str, typing.List[str]] ) → str Parameters urlpaths ( str or list of str ) — Root paths. Yields str Iterate over files. Example: Copied >>> files = dl_manager.download_and_extract( 'https://huggingface.co/datasets/beans/resolve/main/data/train.zip' ) >>> files = dl_manager.iter_files(files) class datasets. DownloadConfig < source > ( cache_dir : typing.Union[str, pathlib.Path, NoneType] = None force_download : bool = False resume_download : bool = False local_files_only : bool = False proxies : typing.Optional[typing.Dict] = None user_agent : typing.Optional[str] = None extract_compressed_file : bool = False force_extract : bool = False delete_extracted : bool = False extract_on_the_fly : bool = False use_etag : bool = True num_proc : typing.Optional[int] = None max_retries : int = 1 token : typing.Union[str, bool, NoneType] = None storage_options : typing.Dict[str, typing.Any] = <factory> download_desc : typing.Optional[str] = None disable_tqdm : bool = False ) Parameters cache_dir ( str or Path , optional ) — Specify a cache directory to save the file to (overwrite the default cache dir). force_download ( bool , defaults to False ) — If True , re-dowload the file even if it’s already cached in the cache dir. resume_download ( bool , defaults to False ) — If True , resume the download if an incompletely received file is found. proxies ( dict , optional ) — user_agent ( str , optional ) — Optional string or dict that will be appended to the user-agent on remote requests. extract_compressed_file ( bool , defaults to False ) — If True and the path point to a zip or tar file, extract the compressed file in a folder along the archive. 
force_extract ( bool , defaults to False ) — If True when extract_compressed_file is True and the archive was already extracted, re-extract the archive and override the folder where it was extracted. delete_extracted ( bool , defaults to False ) — Whether to delete (or keep) the extracted files. extract_on_the_fly ( bool , defaults to False ) — If True , extract compressed files while they are being read. use_etag ( bool , defaults to True ) — Whether to use the ETag HTTP response header to validate the cached files. num_proc ( int , optional ) — The number of processes to launch to download the files in parallel. max_retries ( int , default to 1 ) — The number of times to retry an HTTP request if it fails. token ( str or bool , optional ) — Optional string or boolean to use as Bearer token for remote files on the Datasets Hub. If True , or not specified, will get token from ~/.huggingface . storage_options ( dict , optional ) — Key/value pairs to be passed on to the dataset file-system backend, if any. download_desc ( str , optional ) — A description to be displayed alongside with the progress bar while downloading the files. disable_tqdm ( bool , defaults to False ) — Whether to disable the individual files download progress bar Configuration for our cached path manager. class datasets. DownloadMode < source > ( value names = None module = None qualname = None type = None start = 1 ) Enum for how to treat pre-existing downloads and data. The default mode is REUSE_DATASET_IF_EXISTS , which will reuse both raw downloads and the prepared dataset if they exist. The generations modes: Downloads Dataset REUSE_DATASET_IF_EXISTS (default) Reuse Reuse REUSE_CACHE_IF_EXISTS Reuse Fresh FORCE_REDOWNLOAD Fresh Fresh Verification class datasets. VerificationMode < source > ( value names = None module = None qualname = None type = None start = 1 ) Enum that specifies which verification checks to run. The default mode is BASIC_CHECKS , which will perform only rudimentary checks to avoid slowdowns when generating/downloading a dataset for the first time. The verification modes: Verification checks ALL_CHECKS Split checks, uniqueness of the keys yielded in case of the GeneratorBuilder and the validity (number of files, checksums, etc.) of downloaded files BASIC_CHECKS (default) Same as ALL_CHECKS but without checking downloaded files NO_CHECKS None Splits class datasets. SplitGenerator < source > ( name : str gen_kwargs : typing.Dict = <factory> ) Parameters name ( str ) — Name of the Split for which the generator will create the examples. * *gen_kwargs (additional keyword arguments) — Keyword arguments to forward to the DatasetBuilder._generate_examples method of the builder. Defines the split information for the generator. This should be used as returned value of GeneratorBasedBuilder._split_generators . See GeneratorBasedBuilder._split_generators for more info and example of usage. Example: Copied >>> datasets.SplitGenerator( ... name=datasets.Split.TRAIN, ... gen_kwargs={ "split_key" : "train" , "files" : dl_manager.download_and_extract(url)}, ... ) class datasets. Split < source > ( name ) Enum for dataset splits. Datasets are typically split into different subsets to be used at various stages of training and evaluation. TRAIN : the training data. VALIDATION : the validation data. If present, this is typically used as evaluation data while iterating on a model (e.g. changing hyperparameters, model architecture, etc.). TEST : the testing data. This is the data to report metrics on. 
Typically you do not want to use this during model iteration as you may overfit to it. ALL : the union of all defined dataset splits. All splits, including compositions, inherit from datasets.SplitBase . See the guide on splits for more information. Example: Copied >>> datasets.SplitGenerator( ... name=datasets.Split.TRAIN, ... gen_kwargs={"split_key": "train", "files": dl_manager.download_and_extract(url)}, ... ), ... datasets.SplitGenerator( ... name=datasets.Split.VALIDATION, ... gen_kwargs={"split_key": "validation", "files": dl_manager.download_and_extract(url)}, ... ), ... datasets.SplitGenerator( ... name=datasets.Split.TEST, ... gen_kwargs={"split_key": "test", "files": dl_manager.download_and_extract(url)}, ... ) class datasets. NamedSplit < source > ( name ) Descriptor corresponding to a named split (train, test, …). Example: Each descriptor can be composed with others using addition or slicing: Copied split = datasets.Split.TRAIN.subsplit(datasets.percent[0:25]) + datasets.Split.TEST The resulting split will correspond to 25% of the train split merged with 100% of the test split. A split cannot be added twice, so the following will fail: Copied split = ( datasets.Split.TRAIN.subsplit(datasets.percent[:25]) + datasets.Split.TRAIN.subsplit(datasets.percent[75:]) ) # Error split = datasets.Split.TEST + datasets.Split.ALL # Error Slices can be applied only once, so the following are valid: Copied split = ( datasets.Split.TRAIN.subsplit(datasets.percent[:25]) + datasets.Split.TEST.subsplit(datasets.percent[:50]) ) split = (datasets.Split.TRAIN + datasets.Split.TEST).subsplit(datasets.percent[:50]) But this is not valid: Copied train = datasets.Split.TRAIN test = datasets.Split.TEST split = train.subsplit(datasets.percent[:25]).subsplit(datasets.percent[:25]) split = (train.subsplit(datasets.percent[:25]) + test).subsplit(datasets.percent[:50]) class datasets. NamedSplitAll < source > ( ) Split corresponding to the union of all defined dataset splits. class datasets. ReadInstruction < source > ( split_name rounding = None from_ = None to = None unit = None ) Reading instruction for a dataset.
Examples: Copied # The following lines are equivalent: ds = datasets.load_dataset('mnist', split='test[:33%]') ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec('test[:33%]')) ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction('test', to=33, unit='%')) ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction('test', from_=0, to=33, unit='%')) # The following lines are equivalent: ds = datasets.load_dataset('mnist', split='test[:33%]+train[1:-1]') ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec('test[:33%]+train[1:-1]')) ds = datasets.load_dataset('mnist', split=( datasets.ReadInstruction('test', to=33, unit='%') + datasets.ReadInstruction('train', from_=1, to=-1, unit='abs'))) # The following lines are equivalent: ds = datasets.load_dataset('mnist', split='test[:33%](pct1_dropremainder)') ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec('test[:33%](pct1_dropremainder)')) ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction('test', from_=0, to=33, unit='%', rounding="pct1_dropremainder")) # 10-fold validation: tests = datasets.load_dataset('mnist', [datasets.ReadInstruction('train', from_=k, to=k+10, unit='%') for k in range(0, 100, 10)]) trains = datasets.load_dataset('mnist', [datasets.ReadInstruction('train', to=k, unit='%') + datasets.ReadInstruction('train', from_=k+10, unit='%') for k in range(0, 100, 10)]) from_spec < source > ( spec ) Parameters spec ( str ) — Split(s) + optional slice(s) to read + optional rounding if percents are used as the slicing unit. A slice can be specified using absolute record counts ( int ) or percentages ( int ). Creates a ReadInstruction instance out of a string spec. Examples: Copied test: test split. test + validation: test split + validation split. test[10:]: test split, minus its first 10 records. test[:10%]: first 10% of the test split. test[:20%](pct1_dropremainder): first 20% of the test split, rounded with the pct1_dropremainder rounding. test[:-5%]+train[40%:60%]: first 95% of test + middle 20% of train. to_absolute < source > ( name2len ) Parameters name2len ( dict ) — A dict mapping split names to their number of examples. Translates the instruction into a list of absolute instructions. Those absolute instructions are then to be added together. Version class datasets. Version < source > ( version_str : str description : typing.Optional[str] = None major : typing.Union[str, int, NoneType] = None minor : typing.Union[str, int, NoneType] = None patch : typing.Union[str, int, NoneType] = None ) Parameters version_str ( str ) — The dataset version. description ( str ) — A description of what is new in this version. major ( str ) — minor ( str ) — patch ( str ) — Dataset version MAJOR.MINOR.PATCH . Example: Copied >>> VERSION = datasets.Version("1.0.0") < > Update on GitHub ← Main classes Loading methods → Builder classes Builders Download Verification Splits Version
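The classes documented on this page (DatasetBuilder, GeneratorBasedBuilder, BuilderConfig, SplitGenerator and the download managers) are easiest to see working together in a loading script. The sketch below is illustrative only: the class names, URL and field names are invented, and only the builder API described above (BuilderConfig, DatasetInfo, SplitGenerator, DownloadManager.download_and_extract, _generate_examples) is assumed.

import json

import datasets


class MyConfig(datasets.BuilderConfig):
    """Hypothetical configuration carrying one extra option."""

    def __init__(self, language="en", **kwargs):
        super().__init__(**kwargs)
        self.language = language


class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = MyConfig
    BUILDER_CONFIGS = [MyConfig(name="default", version=datasets.Version("1.0.0"))]

    def _info(self):
        # Exposed later as DatasetBuilder.info
        return datasets.DatasetInfo(
            description="Placeholder description.",
            features=datasets.Features(
                {"text": datasets.Value("string"), "label": datasets.ClassLabel(names=["neg", "pos"])}
            ),
        )

    def _split_generators(self, dl_manager):
        # dl_manager is a DownloadManager (or a StreamingDownloadManager in streaming mode)
        data_dir = dl_manager.download_and_extract("https://example.com/data.zip")  # placeholder URL
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": f"{data_dir}/train.jsonl"}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": f"{data_dir}/test.jsonl"}),
        ]

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs; keys must be unique within a split
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                row = json.loads(line)
                yield idx, {"text": row["text"], "label": row["label"]}

When such a script is loaded, load_dataset_builder instantiates the builder with the chosen BuilderConfig, download_and_prepare() runs the split generators and writes the Arrow files, and as_dataset() returns the materialized splits.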
Using_OpenCLIP_at_Hugging_Face.txt
Using OpenCLIP at Hugging Face Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using OpenCLIP at Hugging Face Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Adapters AllenNLP BERTopic Asteroid Diffusers ESPnet fastai Flair Keras TF-Keras (legacy) ML-Agents mlx-image MLX OpenCLIP PaddleNLP peft RL-Baselines3-Zoo Sample Factory Sentence Transformers SetFit spaCy SpanMarker SpeechBrain Stable-Baselines3 Stanza TensorBoard timm Transformers Transformers.js Unity Sentis Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using OpenCLIP at Hugging Face OpenCLIP is an open-source implementation of OpenAI’s CLIP. Exploring OpenCLIP on the Hub You can find OpenCLIP models by filtering at the left of the models page . OpenCLIP models hosted on the Hub have a model card with useful information about the models. Thanks to OpenCLIP Hugging Face Hub integration, you can load OpenCLIP models with a few lines of code. You can also deploy these models using Inference Endpoints . Installation To get started, you can follow the OpenCLIP installation guide . You can also use the following one-line install through pip: Copied $ pip install open_clip_torch Using existing models All OpenCLIP models can easily be loaded from the Hub: Copied import open_clip model, preprocess = open_clip.create_model_from_pretrained( 'hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K' ) tokenizer = open_clip.get_tokenizer( 'hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K' ) Once loaded, you can encode the image and text to do zero-shot image classification : Copied import torch from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image. 
open(requests.get(url, stream=True).raw) image = preprocess(image).unsqueeze(0) text = tokenizer(["a diagram", "a dog", "a cat"]) with torch.no_grad(), torch.cuda.amp.autocast(): image_features = model.encode_image(image) text_features = model.encode_text(text) image_features /= image_features.norm(dim=-1, keepdim=True) text_features /= text_features.norm(dim=-1, keepdim=True) text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1) print("Label probs:", text_probs) It outputs the probability of each possible class: Copied Label probs: tensor([[0.0020, 0.0034, 0.9946]]) If you want to load a specific OpenCLIP model, you can click Use in OpenCLIP in the model card and you will be given a working snippet! Additional resources OpenCLIP repository OpenCLIP docs OpenCLIP models in the Hub < > Update on GitHub ← MLX PaddleNLP → Using OpenCLIP at Hugging Face Exploring OpenCLIP on the Hub Installation Using existing models Additional resources
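The same loaded model, preprocess and tokenizer objects can also be used for simple image retrieval. The sketch below ranks a few local images against one text query; the file names are placeholders and the model is assumed to be the one created in the snippet above.

import torch
from PIL import Image

image_paths = ["cat.jpg", "dog.jpg", "car.jpg"]  # hypothetical local files
images = torch.stack([preprocess(Image.open(p)) for p in image_paths])
text = tokenizer(["a photo of a cat"])

with torch.no_grad():
    image_features = model.encode_image(images)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    scores = (image_features @ text_features.T).squeeze(-1)

# Print images from most to least similar to the query
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{path}: {score:.3f}")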
Multitask_prompt_tuning.txt
Multitask prompt tuning Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Multitask prompt tuning PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Multitask prompt tuning Multitask prompt tuning decomposes the soft prompts of each task into a single learned transferable prompt instead of a separate prompt for each task. The single learned prompt can be adapted for each task by multiplicative low rank updates. The abstract from the paper is: Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods typically learn soft prompt vectors from scratch, and it has not been clear how to exploit the rich cross-task knowledge with prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that our proposed approach outperforms the state-of-the-art methods, including the full finetuning baseline in some cases, despite only tuning 0.035% as many task-specific parameters . MultitaskPromptTuningConfig class peft. 
MultitaskPromptTuningConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False num_virtual_tokens : int = None token_dim : int = None num_transformer_submodules : typing.Optional[int] = None num_attention_heads : typing.Optional[int] = None num_layers : typing.Optional[int] = None prompt_tuning_init : typing.Union[peft.tuners.multitask_prompt_tuning.config.MultitaskPromptTuningInit, str] = <MultitaskPromptTuningInit.RANDOM: 'RANDOM'> prompt_tuning_init_text : typing.Optional[str] = None tokenizer_name_or_path : typing.Optional[str] = None tokenizer_kwargs : typing.Optional[dict] = None prompt_tuning_init_state_dict_path : typing.Optional[str] = None prompt_tuning_init_task : typing.Optional[int] = 0 num_ranks : typing.Optional[int] = 1 num_tasks : typing.Optional[int] = 1 ) MultitaskPromptEmbedding class peft.tuners. MultitaskPromptEmbedding < source > ( config : MultitaskPromptTuningConfig word_embeddings ) < > Update on GitHub ← LyCORIS OFT → Multitask prompt tuning Multitask Prompt Tuning Config Multitask Prompt Embedding
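Since the reference above only lists the configuration fields, here is a hedged sketch of how MultitaskPromptTuningConfig is typically combined with get_peft_model. The base model name, token counts and task count are placeholder choices, not recommendations, and a multitask prompt-tuned model expects task_ids at forward time so the shared prompt can be adapted per task.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import MultitaskPromptTuningConfig, MultitaskPromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

peft_config = MultitaskPromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=MultitaskPromptTuningInit.RANDOM,
    num_virtual_tokens=50,   # length of the shared soft prompt
    num_tasks=3,             # number of tasks sharing the prompt
    num_ranks=1,             # rank of the multiplicative low-rank updates
    tokenizer_name_or_path=model_name,
)

model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()

# Each batch carries the id of the task it belongs to.
inputs = tokenizer(["Translate to French: Hello"], return_tensors="pt")
outputs = model(**inputs, task_ids=torch.tensor([0]))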
BERTology.txt
BERTology Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation BERTology Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started BERTology There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”). Some good examples of this field are: BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950 Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650 What Does BERT Look At? An Analysis of BERT’s Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341 CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633 In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to help people access the inner representations, mainly adapted from the great work of Paul Michel ( https://arxiv.org/abs/1905.10650 ): accessing all the hidden-states of BERT/GPT/GPT-2, accessing all the attention weights for each head of BERT/GPT/GPT-2, retrieving heads output values and gradients to be able to compute head importance score and prune head as explained in https://arxiv.org/abs/1905.10650 . To help you understand and use these features, we have added a specific example script: bertology.py which extracts information and prune a model pre-trained on GLUE. < > Update on GitHub ← Padding and truncation Perplexity of fixed-length models → BER Tology
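As a brief illustration of the features listed above, the following sketch shows how hidden states and attention weights can be requested from a BERT model and how attention heads can be pruned; the layer and head indices are arbitrary examples.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained(
    "bert-base-uncased",
    output_hidden_states=True,   # return the hidden states of every layer
    output_attentions=True,      # return the per-head attention weights
)

inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(len(outputs.hidden_states))    # embedding output plus one tensor per layer
print(outputs.attentions[0].shape)   # (batch_size, num_heads, seq_len, seq_len)

# Prune heads 0 and 2 of layer 0 and head 5 of layer 2 (arbitrary choices),
# in the spirit of the head-importance work referenced above.
model.prune_heads({0: [0, 2], 2: [5]})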
Input_Sequences.txt
Input Sequences Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Tokenizers documentation Input Sequences Tokenizers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.20.3 v0.13.4.rc2 v0.10.0 v0.9.4 EN Getting started 🤗 Tokenizers Quicktour Installation The tokenization pipeline Components Training from memory API Input Sequences Encode Inputs Tokenizer Encoding Added Tokens Models Normalizers Pre-tokenizers Post-processors Trainers Decoders Visualizer Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Input Sequences Python Rust Node These types represent all the different kinds of sequence that can be used as input of a Tokenizer. Globally, any sequence can be either a string or a list of strings, according to the operating mode of the tokenizer: raw text vs pre-tokenized . TextInputSequence tokenizers.TextInputSequence A str that represents an input sequence PreTokenizedInputSequence tokenizers.PreTokenizedInputSequence A pre-tokenized input sequence. Can be one of: A List of str A Tuple of str alias of Union[List[str], Tuple[str]] . InputSequence tokenizers.InputSequence Represents all the possible types of input sequences for encoding. Can be: When is_pretokenized=False : TextInputSequence When is_pretokenized=True : PreTokenizedInputSequence alias of Union[str, List[str], Tuple[str]] . < > Update on GitHub ← Training from memory Encode Inputs → Input Sequences Text Input Sequence Pre Tokenized Input Sequence Input Sequence
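To make the distinction concrete, here is a small sketch showing both input modes with a trained tokenizer; the tokenizer file path is the one produced in the quicktour and may differ on your machine.

from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("data/tokenizer-wiki.json")  # assumed to exist

# TextInputSequence: a raw str, the default (is_pretokenized=False)
encoding = tokenizer.encode("Hello, y'all!")
print(encoding.tokens)

# PreTokenizedInputSequence: a list (or tuple) of str, with is_pretokenized=True
encoding = tokenizer.encode(["Hello", ",", "y'all", "!"], is_pretokenized=True)
print(encoding.tokens)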
Optimum_Neuron_Container.txt
Optimum Neuron Container Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AWS Trainium & Inferentia documentation Optimum Neuron Container AWS Trainium & Inferentia 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Optimum Neuron 🤗 Optimum Neuron Installation Quickstart Optimum Containers Training Tutorials Notebooks Fine-tune BERT for Text Classification on AWS Trainium Fine-tune Llama 3 8B on AWS Trainium Fine-tune Llama 3 8B on with LoRA and the SFTTrainer Inference Tutorials Notebooks Create your own chatbot with llama-2-13B on AWS Inferentia Sentence Transformers on AWS Inferentia Generate images with Stable Diffusion models on AWS Inferentia How-To Guides Set up AWS Trainium instance Training and Deployment using Amazon Sagemaker Neuron model cache Fine-tune Transformers with AWS Trainium Distributed Training Export a model to Inferentia Inference pipelines with AWS Neuron NeuronX Text-generation-inference for AWS inferentia2 Benchmarks Mistral Small on AWS Inferentia2 Llama-3.1 8B on AWS Inferentia2 Contribute Add support for a new model architecture Reference Neuron Trainer Neuron Distributed Supported Architectures Neuron Exporter Neuron Models Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Optimum Neuron Container We provide pre-built Optimum Neuron containers for Amazon SageMaker. These containers come with all of the Hugging Face libraries and dependencies pre-installed, so you can start using them right away. We have containers for training and inference, and optimized text generation containers with TGI. The table is up to date and only includes the latest versions of each container. You can find older versions in the Deep Learning Container Release Notes We recommend using the sagemaker Python SDK to retrieve the image URI for the container you want to use. Here is a code snippet to retrieve the latest Text Generation Inference container Image URI: Copied from sagemaker.huggingface import get_huggingface_llm_image_uri # retrieve the llm image uri llm_image = get_huggingface_llm_image_uri( "huggingface-neuronx" ) print ( f"llm image uri: {llm_image} " ) Available Optimum Neuron Containers Type Optimum Version Image URI Training 0.0.25 763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-training-neuronx:2.1.2-transformers4.43.2-neuronx-py310-sdk2.20.0-ubuntu20.04 Inference 0.0.25 763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-inference-neuronx:2.1.2-transformers4.43.2-neuronx-py310-sdk2.20.0-ubuntu20.04 Text Generation Inference 0.0.25 763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-tgi-inference:2.1.2-optimum0.0.25-neuronx-py310-ubuntu22.04 Please replace 763104351884 with the correct AWS account ID and region with the AWS region you are working in. 
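Once the image URI has been retrieved, it is typically passed to a HuggingFaceModel for deployment. The snippet below is only a rough sketch: the IAM role ARN, model id and instance type are placeholders, and the exact environment variables your model needs (batch size, sequence length, number of Neuron cores, …) depend on the model and are covered in the deployment guides.

from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

# Retrieve the latest Neuron TGI image, as shown above
llm_image = get_huggingface_llm_image_uri("huggingface-neuronx")

model = HuggingFaceModel(
    role="arn:aws:iam::<account-id>:role/<sagemaker-execution-role>",  # placeholder role
    image_uri=llm_image,
    env={
        "HF_MODEL_ID": "<hub-model-id>",  # placeholder Hub model id
    },
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf2.xlarge",  # placeholder Inferentia2 instance type
)
print(predictor.endpoint_name)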
← Quickstart Notebooks → Optimum Neuron Container Available Optimum Neuron Containers
The_tokenization_pipeline.txt
The tokenization pipeline Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Tokenizers documentation The tokenization pipeline Tokenizers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.20.3 v0.13.4.rc2 v0.10.0 v0.9.4 EN Getting started 🤗 Tokenizers Quicktour Installation The tokenization pipeline Components Training from memory API Input Sequences Encode Inputs Tokenizer Encoding Added Tokens Models Normalizers Pre-tokenizers Post-processors Trainers Decoders Visualizer Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started The tokenization pipeline When calling Tokenizer.encode or Tokenizer.encode_batch , the input text(s) go through the following pipeline: normalization pre-tokenization model post-processing We’ll see in details what happens during each of those steps in detail, as well as when you want to decode <decoding> some token ids, and how the 🤗 Tokenizers library allows you to customize each of those steps to your needs. If you’re already familiar with those steps and want to learn by seeing some code, jump to our BERT from scratch example <example> . For the examples that require a Tokenizer we will use the tokenizer we trained in the quicktour , which you can load with: Python Rust Node Copied from tokenizers import Tokenizer tokenizer = Tokenizer.from_file( "data/tokenizer-wiki.json" ) Normalization Normalization is, in a nutshell, a set of operations you apply to a raw string to make it less random or “cleaner”. Common operations include stripping whitespace, removing accented characters or lowercasing all text. If you’re familiar with Unicode normalization , it is also a very common normalization operation applied in most tokenizers. Each normalization operation is represented in the 🤗 Tokenizers library by a Normalizer , and you can combine several of those by using a normalizers.Sequence . Here is a normalizer applying NFD Unicode normalization and removing accents as an example: Python Rust Node Copied from tokenizers import normalizers from tokenizers.normalizers import NFD, StripAccents normalizer = normalizers. Sequence ([NFD(), StripAccents()]) You can manually test that normalizer by applying it to any string: Python Rust Node Copied normalizer.normalize_str( "Héllò hôw are ü?" ) # "Hello how are u?" When building a Tokenizer , you can customize its normalizer by just changing the corresponding attribute: Python Rust Node Copied tokenizer.normalizer = normalizer Of course, if you change the way a tokenizer applies normalization, you should probably retrain it from scratch afterward. Pre-Tokenization Pre-tokenization is the act of splitting a text into smaller objects that give an upper bound to what your tokens will be at the end of training. 
A good way to think of this is that the pre-tokenizer will split your text into “words” and then your final tokens will be parts of those words. An easy way to pre-tokenize inputs is to split on spaces and punctuation, which is done by the pre_tokenizers.Whitespace pre-tokenizer: Python Rust Node Copied from tokenizers.pre_tokenizers import Whitespace pre_tokenizer = Whitespace() pre_tokenizer.pre_tokenize_str("Hello! How are you? I'm fine, thank you.") # [("Hello", (0, 5)), ("!", (5, 6)), ("How", (7, 10)), ("are", (11, 14)), ("you", (15, 18)), # ("?", (18, 19)), ("I", (20, 21)), ("'", (21, 22)), ('m', (22, 23)), ("fine", (24, 28)), # (",", (28, 29)), ("thank", (30, 35)), ("you", (36, 39)), (".", (39, 40))] The output is a list of tuples, with each tuple containing one word and its span in the original sentence (which is used to determine the final offsets of our Encoding ). Note that splitting on punctuation will split contractions like "I'm" in this example. You can combine several PreTokenizer s together. For instance, here is a pre-tokenizer that will split on spaces, punctuation and digits, separating numbers into their individual digits: Python Rust Node Copied from tokenizers import pre_tokenizers from tokenizers.pre_tokenizers import Digits pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True)]) pre_tokenizer.pre_tokenize_str("Call 911!") # [("Call", (0, 4)), ("9", (5, 6)), ("1", (6, 7)), ("1", (7, 8)), ("!", (8, 9))] As we saw in the quicktour , you can customize the pre-tokenizer of a Tokenizer by just changing the corresponding attribute: Python Rust Node Copied tokenizer.pre_tokenizer = pre_tokenizer Of course, if you change the pre-tokenizer, you should probably retrain your tokenizer from scratch afterward. Model Once the input texts are normalized and pre-tokenized, the Tokenizer applies the model on the pre-tokens. This is the part of the pipeline that needs training on your corpus (or that has been trained if you are using a pretrained tokenizer). The role of the model is to split your “words” into tokens, using the rules it has learned. It’s also responsible for mapping those tokens to their corresponding IDs in the vocabulary of the model. This model is passed along when initializing the Tokenizer so you already know how to customize this part. Currently, the 🤗 Tokenizers library supports: models.BPE models.Unigram models.WordLevel models.WordPiece For more details about each model and its behavior, you can check here . Post-Processing Post-processing is the last step of the tokenization pipeline, to perform any additional transformation to the Encoding before it’s returned, like adding potential special tokens. As we saw in the quick tour, we can customize the post-processor of a Tokenizer by setting the corresponding attribute. For instance, here is how we can post-process to make the inputs suitable for the BERT model: Python Rust Node Copied from tokenizers.processors import TemplateProcessing tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[("[CLS]", 1), ("[SEP]", 2)], ) Note that contrary to the pre-tokenizer or the normalizer, you don’t need to retrain a tokenizer after changing its post-processor. All together: a BERT tokenizer from scratch Let’s put all those pieces together to build a BERT tokenizer.
First, BERT relies on WordPiece, so we instantiate a new Tokenizer with this model: Python Rust Node Copied from tokenizers import Tokenizer from tokenizers.models import WordPiece bert_tokenizer = Tokenizer(WordPiece(unk_token= "[UNK]" )) Then we know that BERT preprocesses texts by removing accents and lowercasing. We also use a unicode normalizer: Python Rust Node Copied from tokenizers import normalizers from tokenizers.normalizers import NFD, Lowercase, StripAccents bert_tokenizer.normalizer = normalizers. Sequence ([NFD(), Lowercase(), StripAccents()]) The pre-tokenizer is just splitting on whitespace and punctuation: Python Rust Node Copied from tokenizers.pre_tokenizers import Whitespace bert_tokenizer.pre_tokenizer = Whitespace() And the post-processing uses the template we saw in the previous section: Python Rust Node Copied from tokenizers.processors import TemplateProcessing bert_tokenizer.post_processor = TemplateProcessing( single= "[CLS] $A [SEP]" , pair= "[CLS] $A [SEP] $B:1 [SEP]:1" , special_tokens=[ ( "[CLS]" , 1 ), ( "[SEP]" , 2 ), ], ) We can use this tokenizer and train on it on wikitext like in the quicktour : Python Rust Node Copied from tokenizers.trainers import WordPieceTrainer trainer = WordPieceTrainer(vocab_size= 30522 , special_tokens=[ "[UNK]" , "[CLS]" , "[SEP]" , "[PAD]" , "[MASK]" ]) files = [ f"data/wikitext-103-raw/wiki. {split} .raw" for split in [ "test" , "train" , "valid" ]] bert_tokenizer.train(files, trainer) bert_tokenizer.save( "data/bert-wiki.json" ) Decoding On top of encoding the input texts, a Tokenizer also has an API for decoding, that is converting IDs generated by your model back to a text. This is done by the methods Tokenizer.decode (for one predicted text) and Tokenizer.decode_batch (for a batch of predictions). The decoder will first convert the IDs back to tokens (using the tokenizer’s vocabulary) and remove all special tokens, then join those tokens with spaces: Python Rust Node Copied output = tokenizer.encode( "Hello, y'all! How are you 😁 ?" ) print (output.ids) # [1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2] tokenizer.decode([ 1 , 27253 , 16 , 93 , 11 , 5097 , 5 , 7961 , 5112 , 6218 , 0 , 35 , 2 ]) # "Hello , y ' all ! How are you ?" If you used a model that added special characters to represent subtokens of a given “word” (like the "##" in WordPiece) you will need to customize the decoder to treat them properly. If we take our previous bert_tokenizer for instance the default decoding will give: Python Rust Node Copied output = bert_tokenizer.encode( "Welcome to the 🤗 Tokenizers library." ) print (output.tokens) # ["[CLS]", "welcome", "to", "the", "[UNK]", "tok", "##eni", "##zer", "##s", "library", ".", "[SEP]"] bert_tokenizer.decode(output.ids) # "welcome to the tok ##eni ##zer ##s library ." But by changing it to a proper decoder, we get: Python Rust Node Copied from tokenizers import decoders bert_tokenizer.decoder = decoders.WordPiece() bert_tokenizer.decode(output.ids) # "welcome to the tokenizers library." < > Update on GitHub ← Installation Components → The tokenization pipeline Normalization Pre- Tokenization Model Post- Processing All together: a BER T tokenizer from scratch Decoding
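As a quick sanity check of the post-processor and decoder configured above, one can encode a sentence pair with the trained bert_tokenizer and inspect the result; the exact token ids depend on the vocabulary learned during training.

# Assumes the bert_tokenizer built and trained in the section above
output = bert_tokenizer.encode("How are you?", "I'm fine, thank you.")
print(output.tokens)          # starts with "[CLS]", with a "[SEP]" separating the two sentences
print(output.type_ids)        # 0 for the first segment, 1 for the second
print(output.attention_mask)

# Round-trip through the WordPiece decoder set in the Decoding section
print(bert_tokenizer.decode(output.ids))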
Interface__SpaceResourceRequirement.txt
Interface: SpaceResourceRequirement
Properties
cpu • Optional cpu : string Defined in hub/src/types/public.ts:94
ephemeral • Optional ephemeral : string Defined in hub/src/types/public.ts:98
gpu • Optional gpu : string Defined in hub/src/types/public.ts:96
gpuModel • Optional gpuModel : string Defined in hub/src/types/public.ts:97
memory • Optional memory : string Defined in hub/src/types/public.ts:95
Launching_distributed_training_from_Jupyter_Notebo.txt
Launching distributed training from Jupyter Notebooks
This tutorial teaches you how to fine-tune a computer vision model with 🤗 Accelerate from a Jupyter Notebook on a distributed system. You will also learn how to set up a few requirements needed to ensure your environment is configured properly, your data has been prepared properly, and finally how to launch training.
This tutorial is also available as a Jupyter Notebook here
Configuring the Environment
Before any training can be performed, an Accelerate config file must exist in the system. Usually this can be done by running the following in a terminal and answering the prompts:
Copied
accelerate config
However, if general defaults are fine and you are not running on a TPU, Accelerate has a utility to quickly write your GPU configuration into a config file via utils.write_basic_config().
The following code will restart Jupyter after writing the configuration, as CUDA code was called to perform this.
CUDA can’t be initialized more than once on a multi-GPU system. It’s fine to debug in the notebook and have calls to CUDA, but in order to finally train, a full cleanup and restart will need to be performed.
Copied
import os
from accelerate.utils import write_basic_config

write_basic_config()  # Write a config file
os._exit(00)  # Restart the notebook
Preparing the Dataset and Model
Next you should prepare your dataset. As mentioned earlier, great care should be taken when preparing the DataLoaders and model to make sure that nothing is put on any GPU. If you do, it is recommended to put that specific code into a function and call that from within the notebook launcher interface, which will be shown later.
Make sure the dataset is downloaded based on the directions here
Copied
import os, re, torch, PIL
import numpy as np
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor
from accelerate import Accelerator
from accelerate.utils import set_seed
from timm import create_model
First you need to create a function to extract the class name based on a filename:
Copied
import os
data_dir = "../../images"
fnames = os.listdir(data_dir)
fname = fnames[0]
print(fname)
Copied
beagle_32.jpg
In the case here, the label is beagle. Using regex you can extract the label from the filename:
Copied
import re

def extract_label(fname):
    stem = fname.split(os.path.sep)[-1]
    return re.search(r"^(.*)_\d+\.jpg$", stem).groups()[0]
Copied
extract_label(fname)
And you can see it properly returned the right name for our file:
Copied
"beagle"
Next a Dataset class should be made to handle grabbing the image and the label:
Copied
class PetsDataset(Dataset):
    def __init__(self, file_names, image_transform=None, label_to_id=None):
        self.file_names = file_names
        self.image_transform = image_transform
        self.label_to_id = label_to_id

    def __len__(self):
        return len(self.file_names)

    def __getitem__(self, idx):
        fname = self.file_names[idx]
        raw_image = PIL.Image.open(fname)
        image = raw_image.convert("RGB")
        if self.image_transform is not None:
            image = self.image_transform(image)
        label = extract_label(fname)
        if self.label_to_id is not None:
            label = self.label_to_id[label]
        return {"image": image, "label": label}
Now to build the dataset. Outside the training function you can find and declare all the filenames and labels and use them as references inside the launched function:
Copied
fnames = [os.path.join("../../images", fname) for fname in fnames if fname.endswith(".jpg")]
Next gather all the labels:
Copied
all_labels = [extract_label(fname) for fname in fnames]
id_to_label = list(set(all_labels))
id_to_label.sort()
label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)}
Next, you should make a get_dataloaders function that will return your built dataloaders for you. As mentioned earlier, if data is automatically sent to the GPU or a TPU device when building your DataLoaders, they must be built using this method.
Copied def get_dataloaders ( batch_size: int = 64 ): "Builds a set of dataloaders with a batch_size" random_perm = np.random.permutation( len (fnames)) cut = int ( 0.8 * len (fnames)) train_split = random_perm[:cut] eval_split = random_perm[cut:] # For training a simple RandomResizedCrop will be used train_tfm = Compose([RandomResizedCrop(( 224 , 224 ), scale=( 0.5 , 1.0 )), ToTensor()]) train_dataset = PetsDataset([fnames[i] for i in train_split], image_transform=train_tfm, label_to_id=label_to_id) # For evaluation a deterministic Resize will be used eval_tfm = Compose([Resize(( 224 , 224 )), ToTensor()]) eval_dataset = PetsDataset([fnames[i] for i in eval_split], image_transform=eval_tfm, label_to_id=label_to_id) # Instantiate dataloaders train_dataloader = DataLoader(train_dataset, shuffle= True , batch_size=batch_size, num_workers= 4 ) eval_dataloader = DataLoader(eval_dataset, shuffle= False , batch_size=batch_size * 2 , num_workers= 4 ) return train_dataloader, eval_dataloader Finally, you should import the scheduler to be used later: Copied from torch.optim.lr_scheduler import CosineAnnealingLR Writing the Training Function Now you can build the training loop. notebook_launcher() works by passing in a function to call that will be ran across the distributed system. Here is a basic training loop for the animal classification problem: The code has been split up to allow for explanations on each section. A full version that can be copy and pasted will be available at the end Copied def training_loop ( mixed_precision= "fp16" , seed: int = 42 , batch_size: int = 64 ): set_seed(seed) accelerator = Accelerator(mixed_precision=mixed_precision) First you should set the seed and create an Accelerator object as early in the training loop as possible. If training on the TPU, your training loop should take in the model as a parameter and it should be instantiated outside of the training loop function. See the TPU best practices to learn why Next you should build your dataloaders and create your model: Copied train_dataloader, eval_dataloader = get_dataloaders(batch_size) model = create_model( "resnet50d" , pretrained= True , num_classes= len (label_to_id)) You build the model here so that the seed also controls the new weight initialization As you are performing transfer learning in this example, the encoder of the model starts out frozen so the head of the model can be trained only initially: Copied for param in model.parameters(): param.requires_grad = False for param in model.get_classifier().parameters(): param.requires_grad = True Normalizing the batches of images will make training a little faster: Copied mean = torch.tensor(model.default_cfg[ "mean" ])[ None , :, None , None ] std = torch.tensor(model.default_cfg[ "std" ])[ None , :, None , None ] To make these constants available on the active device, you should set it to the Accelerator’s device: Copied mean = mean.to(accelerator.device) std = std.to(accelerator.device) Next instantiate the rest of the PyTorch classes used for training: Copied optimizer = torch.optim.Adam(params=model.parameters(), lr= 3e-2 / 25 ) lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr= 3e-2 , epochs= 5 , steps_per_epoch= len (train_dataloader)) Before passing everything to prepare() . There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the prepare method. 
Copied model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) Now train the model: Copied for epoch in range ( 5 ): model.train() for batch in train_dataloader: inputs = (batch[ "image" ] - mean) / std outputs = model(inputs) loss = torch.nn.functional.cross_entropy(outputs, batch[ "label" ]) accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() The evaluation loop will look slightly different compared to the training loop. The number of elements passed as well as the overall total accuracy of each batch will be added to two constants: Copied model. eval () accurate = 0 num_elems = 0 Next you have the rest of your standard PyTorch loop: Copied for batch in eval_dataloader: inputs = (batch[ "image" ] - mean) / std with torch.no_grad(): outputs = model(inputs) predictions = outputs.argmax(dim=- 1 ) Before finally the last major difference. When performing distributed evaluation, the predictions and labels need to be passed through gather() so that all of the data is available on the current device and a properly calculated metric can be achieved: Copied accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch[ "label" ]) num_elems += accurate_preds.shape[ 0 ] accurate += accurate_preds.long(). sum () Now you just need to calculate the actual metric for this problem, and you can print it on the main process using print() : Copied eval_metric = accurate.item() / num_elems accelerator. print ( f"epoch {epoch} : { 100 * eval_metric: .2 f} " ) A full version of this training loop is available below: Copied def training_loop ( mixed_precision= "fp16" , seed: int = 42 , batch_size: int = 64 ): set_seed(seed) # Initialize accelerator accelerator = Accelerator(mixed_precision=mixed_precision) # Build dataloaders train_dataloader, eval_dataloader = get_dataloaders(batch_size) # Instantiate the model (you build the model here so that the seed also controls new weight initializations) model = create_model( "resnet50d" , pretrained= True , num_classes= len (label_to_id)) # Freeze the base model for param in model.parameters(): param.requires_grad = False for param in model.get_classifier().parameters(): param.requires_grad = True # You can normalize the batches of images to be a bit faster mean = torch.tensor(model.default_cfg[ "mean" ])[ None , :, None , None ] std = torch.tensor(model.default_cfg[ "std" ])[ None , :, None , None ] # To make these constants available on the active device, set it to the accelerator device mean = mean.to(accelerator.device) std = std.to(accelerator.device) # Instantiate the optimizer optimizer = torch.optim.Adam(params=model.parameters(), lr= 3e-2 / 25 ) # Instantiate the learning rate scheduler lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr= 3e-2 , epochs= 5 , steps_per_epoch= len (train_dataloader)) # Prepare everything # There is no specific order to remember, you just need to unpack the objects in the same order you gave them to the # prepare method. 
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) # Now you train the model for epoch in range ( 5 ): model.train() for batch in train_dataloader: inputs = (batch[ "image" ] - mean) / std outputs = model(inputs) loss = torch.nn.functional.cross_entropy(outputs, batch[ "label" ]) accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() model. eval () accurate = 0 num_elems = 0 for batch in eval_dataloader: inputs = (batch[ "image" ] - mean) / std with torch.no_grad(): outputs = model(inputs) predictions = outputs.argmax(dim=- 1 ) accurate_preds = accelerator.gather(predictions) == accelerator.gather(batch[ "label" ]) num_elems += accurate_preds.shape[ 0 ] accurate += accurate_preds.long(). sum () eval_metric = accurate.item() / num_elems # Use accelerator.print to print only on the main process. accelerator. print ( f"epoch {epoch} : { 100 * eval_metric: .2 f} " ) Using the notebook_launcher All that’s left is to use the notebook_launcher() . You pass in the function, the arguments (as a tuple), and the number of processes to train on. (See the documentation for more information) Copied from accelerate import notebook_launcher Copied args = ( "fp16" , 42 , 64 ) notebook_launcher(training_loop, args, num_processes= 2 ) In the case of running on multiple nodes, you need to set up a Jupyter session at each node and run the launching cell at the same time. For an environment containing 2 nodes (computers) with 8 GPUs each and the main computer with an IP address of “172.31.43.8”, it would look like so: Copied notebook_launcher(training_loop, args, master_addr= "172.31.43.8" , node_rank= 0 , num_nodes= 2 , num_processes= 8 ) And in the second Jupyter session on the other machine: Notice how the node_rank has changed Copied notebook_launcher(training_loop, args, master_addr= "172.31.43.8" , node_rank= 1 , num_nodes= 2 , num_processes= 8 ) In the case of running on the TPU, it would look like so: Copied model = create_model( "resnet50d" , pretrained= True , num_classes= len (label_to_id)) args = (model, "fp16" , 42 , 64 ) notebook_launcher(training_loop, args, num_processes= 8 ) To launch the training process with elasticity, enabling fault tolerance, you can use the elastic_launch feature provided by PyTorch. This requires setting additional parameters such as rdzv_backend and max_restarts . Here is an example of how to use notebook_launcher with elastic capabilities: Copied notebook_launcher( training_loop, args, num_processes= 2 , max_restarts= 3 ) As it’s running it will print the progress as well as state how many devices you ran on. This tutorial was ran with two GPUs: Copied Launching training on 2 GPUs. epoch 0 : 88.12 epoch 1 : 91.73 epoch 2 : 92.58 epoch 3 : 93.90 epoch 4 : 94.71 And that’s it! Please note that notebook_launcher() ignores the Accelerate config file, to launch based on the config use: Copied accelerate launch Debugging A common issue when running the notebook_launcher is receiving a CUDA has already been initialized issue. This usually stems from an import or prior code in the notebook that makes a call to the PyTorch torch.cuda sublibrary. To help narrow down what went wrong, you can launch the notebook_launcher with ACCELERATE_DEBUG_MODE=yes in your environment and an additional check will be made when spawning that a regular process can be created and utilize CUDA without issue. (Your CUDA code can still be ran afterwards). 
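As an illustration, one way to turn this on from inside the notebook before launching (a minimal sketch; any method of setting the environment variable before the launch should work, and it reuses the training_loop and args defined above):
Copied
import os
from accelerate import notebook_launcher

os.environ["ACCELERATE_DEBUG_MODE"] = "yes"  # adds a spawn-time check that a child process can create and use CUDA
notebook_launcher(training_loop, args, num_processes=2)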
Conclusion
This notebook showed how to perform distributed training from inside of a Jupyter Notebook. Some key notes to remember:
Make sure to save any code that uses CUDA (or CUDA imports) for the function passed to notebook_launcher()
Set the num_processes to be the number of devices used for training (such as the number of GPUs, CPUs, TPUs, etc.); see the sketch just after this list
If using the TPU, declare your model outside the training loop function
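For CUDA setups, that device count can be read at runtime instead of hard-coded. A minimal sketch (assuming the training_loop and args defined earlier in this tutorial):
Copied
import torch
from accelerate import notebook_launcher

num_gpus = torch.cuda.device_count()  # number of visible CUDA devices
notebook_launcher(training_loop, args, num_processes=num_gpus)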
Interface__LfsPathInfo.txt
Interface: LfsPathInfo
Properties
oid • oid : string Defined in hub/src/lib/paths-info.ts:8
pointerSize • pointerSize : number Defined in hub/src/lib/paths-info.ts:10
size • size : number Defined in hub/src/lib/paths-info.ts:9
Handling_Spaces_Dependencies.txt
Handling Spaces Dependencies
Default dependencies
The default Spaces environment comes with several pre-installed dependencies:
The huggingface_hub client library allows you to manage your repository and files on the Hub with Python and programmatically access the Inference API from your Space. If you choose to instantiate the model in your app with the Inference API, you can benefit from the built-in acceleration optimizations. This option also consumes less computing resources, which is always nice for the environment! 🌎 Refer to this page for more information on how to programmatically access the Inference API.
requests is useful for calling third-party APIs from your app.
datasets allows you to fetch or display any dataset from the Hub inside your app.
The SDK you specified, which could be either streamlit or gradio. The version is specified in the README.md file.
Common Debian packages, such as ffmpeg, cmake, libsm6, and a few others.
Adding your own dependencies
If you need other Python packages to run your app, add them to a requirements.txt file at the root of the repository. The Spaces runtime engine will create a custom environment on-the-fly. You can also add a pre-requirements.txt file describing dependencies that will be installed before your main dependencies. It can be useful if you need to update pip itself.
Debian dependencies are also supported. Add a packages.txt file at the root of your repository, and list all your dependencies in it.
Each dependency should be on a separate line, and each line will be read and installed by apt-get install.
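For illustration only (the package names below are placeholders, not anything Spaces requires), a requirements.txt might contain:
Copied
transformers==4.36.0
scipy
and a packages.txt might list one Debian package per line:
Copied
libgl1
poppler-utils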
Utility_functions_and_classes.txt
Utility functions and classes
Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case.
Constants
Constants used throughout 🤗 Accelerate for reference
The following are constants used when utilizing Accelerator.save_state()
utils.MODEL_NAME : "pytorch_model"
utils.OPTIMIZER_NAME : "optimizer"
utils.RNG_STATE_NAME : "random_states"
utils.SCALER_NAME : "scaler.pt"
utils.SCHEDULER_NAME : "scheduler"
The following are constants used when utilizing Accelerator.save_model()
utils.WEIGHTS_NAME : "pytorch_model.bin"
utils.SAFE_WEIGHTS_NAME : "model.safetensors"
utils.WEIGHTS_INDEX_NAME : "pytorch_model.bin.index.json"
utils.SAFE_WEIGHTS_INDEX_NAME : "model.safetensors.index.json"
Data Classes
These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters.
Standalone
These are standalone dataclasses used for checks, such as the type of distributed system being used
class accelerate.utils.
ComputeEnvironment < source > ( value names = None module = None qualname = None type = None start = 1 ) Represents a type of the compute environment. Values: LOCAL_MACHINE — private/custom cluster hardware. AMAZON_SAGEMAKER — Amazon SageMaker as compute environment. class accelerate. DistributedType < source > ( value names = None module = None qualname = None type = None start = 1 ) Represents a type of distributed environment. Values: NO — Not a distributed environment, just a single process. MULTI_CPU — Distributed on multiple CPU nodes. MULTI_GPU — Distributed on multiple GPUs. MULTI_MLU — Distributed on multiple MLUs. MULTI_MUSA — Distributed on multiple MUSAs. MULTI_NPU — Distributed on multiple NPUs. MULTI_XPU — Distributed on multiple XPUs. DEEPSPEED — Using DeepSpeed. XLA — Using TorchXLA. class accelerate.utils. DynamoBackend < source > ( value names = None module = None qualname = None type = None start = 1 ) Represents a dynamo backend (see https://pytorch.org/docs/stable/torch.compiler.html ). Values: NO — Do not use torch dynamo. EAGER — Uses PyTorch to run the extracted GraphModule. This is quite useful in debugging TorchDynamo issues. AOT_EAGER — Uses AotAutograd with no compiler, i.e, just using PyTorch eager for the AotAutograd’s extracted forward and backward graphs. This is useful for debugging, and unlikely to give speedups. INDUCTOR — Uses TorchInductor backend with AotAutograd and cudagraphs by leveraging codegened Triton kernels. Read more AOT_TS_NVFUSER — nvFuser with AotAutograd/TorchScript. Read more NVPRIMS_NVFUSER — nvFuser with PrimTorch. Read more CUDAGRAPHS — cudagraphs with AotAutograd. Read more OFI — Uses Torchscript optimize_for_inference. Inference only. Read more FX2TRT — Uses Nvidia TensorRT for inference optimizations. Inference only. Read more ONNXRT — Uses ONNXRT for inference on CPU/GPU. Inference only. Read more TENSORRT — Uses ONNXRT to run TensorRT for inference optimizations. Read more AOT_TORCHXLA_TRACE_ONCE — Uses Pytorch/XLA with TorchDynamo optimization, for training. Read more TORCHXLA_TRACE_ONCE — Uses Pytorch/XLA with TorchDynamo optimization, for inference. Read more IPEX — Uses IPEX for inference on CPU. Inference only. Read more . TVM — Uses Apach TVM for inference optimizations. Read more class accelerate.utils. LoggerType < source > ( value names = None module = None qualname = None type = None start = 1 ) Represents a type of supported experiment tracker Values: ALL — all available trackers in the environment that are supported TENSORBOARD — TensorBoard as an experiment tracker WANDB — wandb as an experiment tracker COMETML — comet_ml as an experiment tracker DVCLIVE — dvclive as an experiment tracker class accelerate.utils. PrecisionType < source > ( value names = None module = None qualname = None type = None start = 1 ) Represents a type of precision used on floating point values Values: NO — using full precision (FP32) FP16 — using half precision BF16 — using brain floating point precision class accelerate.utils. RNGType < source > ( value names = None module = None qualname = None type = None start = 1 ) An enumeration. class accelerate.utils. SageMakerDistributedType < source > ( value names = None module = None qualname = None type = None start = 1 ) Represents a type of distributed environment. Values: NO — Not a distributed environment, just a single process. DATA_PARALLEL — using sagemaker distributed data parallelism. MODEL_PARALLEL — using sagemaker distributed model parallelism. 
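These values are usually inspected rather than constructed by hand. As a minimal sketch (assuming an already-configured Accelerate environment), you can branch on the detected distributed type at runtime:
Copied
from accelerate import Accelerator
from accelerate.utils import DistributedType

accelerator = Accelerator()
if accelerator.distributed_type == DistributedType.MULTI_GPU:
    print("Multiple GPUs detected, DistributedDataParallel will be used")
elif accelerator.distributed_type == DistributedType.NO:
    print("Running as a single process")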
Kwargs These are configurable arguments for specific interactions throughout the PyTorch ecosystem that Accelerate handles under the hood. class accelerate. AutocastKwargs < source > ( enabled : bool = True cache_enabled : bool = None ) Use this object in your Accelerator to customize how torch.autocast behaves. Please refer to the documentation of this context manager for more information on each argument. Example: Copied from accelerate import Accelerator from accelerate.utils import AutocastKwargs kwargs = AutocastKwargs(cache_enabled= True ) accelerator = Accelerator(kwargs_handlers=[kwargs]) class accelerate. DistributedDataParallelKwargs < source > ( dim : int = 0 broadcast_buffers : bool = True bucket_cap_mb : int = 25 find_unused_parameters : bool = False check_reduction : bool = False gradient_as_bucket_view : bool = False static_graph : bool = False comm_hook : DDPCommunicationHookType = <DDPCommunicationHookType.NO: 'no'> comm_wrapper : typing.Literal[<DDPCommunicationHookType.NO: 'no'>, <DDPCommunicationHookType.FP16: 'fp16'>, <DDPCommunicationHookType.BF16: 'bf16'>] = <DDPCommunicationHookType.NO: 'no'> comm_state_option : dict = <factory> ) Use this object in your Accelerator to customize how your model is wrapped in a torch.nn.parallel.DistributedDataParallel . Please refer to the documentation of this wrapper for more information on each argument. gradient_as_bucket_view is only available in PyTorch 1.7.0 and later versions. static_graph is only available in PyTorch 1.11.0 and later versions. Example: Copied from accelerate import Accelerator from accelerate.utils import DistributedDataParallelKwargs kwargs = DistributedDataParallelKwargs(find_unused_parameters= True ) accelerator = Accelerator(kwargs_handlers=[kwargs]) class accelerate.utils. FP8RecipeKwargs < source > ( backend : typing.Literal['MSAMP', 'TE'] = None use_autocast_during_eval : bool = None opt_level : typing.Literal['O1', 'O2'] = None margin : int = None interval : int = None fp8_format : typing.Literal['E4M3', 'HYBRID'] = None amax_history_len : int = None amax_compute_algo : typing.Literal['max', 'most_recent'] = None override_linear_precision : typing.Tuple[bool, bool, bool] = None ) Parameters backend ( str , optional ) — Which FP8 engine to use. Must be one of "msamp" (MS-AMP) or "te" (TransformerEngine). If not passed, will use whichever is available in the environment, prioritizing MS-AMP. use_autocast_during_eval ( bool , optional , default to False ) — Whether to use FP8 autocast during eval mode. Generally better metrics are found when this is False . margin ( int , optional , default to 0) — The margin to use for the gradient scaling. interval ( int , optional , default to 1) — The interval to use for how often the scaling factor is recomputed. fp8_format ( str , optional , default to “HYBRID”) — The format to use for the FP8 recipe. Must be one of HYBRID or E4M3 . (Generally HYBRID for training, E4M3 for evaluation) amax_history_len ( int , optional , default to 1024) — The length of the history to use for the scaling factor computation amax_compute_algo ( str , optional , default to “most_recent”) — The algorithm to use for the scaling factor computation. Must be one of max or most_recent . override_linear_precision ( tuple of three bool , optional , default to (False, False, False) ) — Whether or not to execute fprop , dgrad , and wgrad GEMMS in higher precision. optimization_level ( str ), one of O1 , O2 . 
(default is O2 ) — What level of 8-bit collective communication should be used with MS-AMP. In general: O1: Weight gradients and all_reduce communications are done in fp8, reducing GPU memory usage and communication bandwidth O2: First-order optimizer states are in 8-bit, and second order states are in FP16. Only available when using Adam or AdamW. This maintains accuracy and can potentially save the highest memory. 03: Specifically for DeepSpeed, implements capabilities so weights and master weights of models are stored in FP8. If fp8 is selected and deepspeed is enabled, will be used by default. (Not available currently). Use this object in your Accelerator to customize the initialization of the recipe for FP8 mixed precision training with transformer-engine or ms-amp . For more information on transformer-engine args, please refer to the API documentation . For more information on the ms-amp args, please refer to the Optimization Level documentation . Copied from accelerate import Accelerator from accelerate.utils import FP8RecipeKwargs kwargs = FP8RecipeKwargs(backend= "te" , fp8_format= "HYBRID" ) accelerator = Accelerator(mixed_precision= "fp8" , kwargs_handlers=[kwargs]) To use MS-AMP as an engine, pass backend="msamp" and the optimization_level : Copied kwargs = FP8RecipeKwargs(backend= "msamp" , optimization_level= "02" ) class accelerate. GradScalerKwargs < source > ( init_scale : float = 65536.0 growth_factor : float = 2.0 backoff_factor : float = 0.5 growth_interval : int = 2000 enabled : bool = True ) Use this object in your Accelerator to customize the behavior of mixed precision, specifically how the torch.cuda.amp.GradScaler used is created. Please refer to the documentation of this scaler for more information on each argument. GradScaler is only available in PyTorch 1.5.0 and later versions. Example: Copied from accelerate import Accelerator from accelerate.utils import GradScalerKwargs kwargs = GradScalerKwargs(backoff_factor= 0.25 ) accelerator = Accelerator(kwargs_handlers=[kwargs]) class accelerate. InitProcessGroupKwargs < source > ( backend : typing.Optional[str] = 'nccl' init_method : typing.Optional[str] = None timeout : typing.Optional[datetime.timedelta] = None ) Use this object in your Accelerator to customize the initialization of the distributed processes. Please refer to the documentation of this method for more information on each argument. Note: If timeout is set to None , the default will be based upon how backend is set. Copied from datetime import timedelta from accelerate import Accelerator from accelerate.utils import InitProcessGroupKwargs kwargs = InitProcessGroupKwargs(timeout=timedelta(seconds= 800 )) accelerator = Accelerator(kwargs_handlers=[kwargs]) class accelerate.utils. KwargsHandler < source > ( ) Internal mixin that implements a to_kwargs() method for a dataclass. to_kwargs < source > ( ) Returns a dictionary containing the attributes with values different from the default of this class. Plugins These are plugins that can be passed to the Accelerator object. While they are defined elsewhere in the documentation, for convenience all of them are available to see here: class accelerate. 
DeepSpeedPlugin < source > ( hf_ds_config : typing.Any = None gradient_accumulation_steps : int = None gradient_clipping : float = None zero_stage : int = None is_train_batch_min : bool = True offload_optimizer_device : str = None offload_param_device : str = None offload_optimizer_nvme_path : str = None offload_param_nvme_path : str = None zero3_init_flag : bool = None zero3_save_16bit_model : bool = None transformer_moe_cls_names : str = None enable_msamp : bool = None msamp_opt_level : typing.Optional[typing.Literal['O1', 'O2']] = None ) Parameters hf_ds_config ( Any , defaults to None ) — Path to DeepSpeed config file or dict or an object of class accelerate.utils.deepspeed.HfDeepSpeedConfig . gradient_accumulation_steps ( int , defaults to None ) — Number of steps to accumulate gradients before updating optimizer states. If not set, will use the value from the Accelerator directly. gradient_clipping ( float , defaults to None ) — Enable gradient clipping with value. zero_stage ( int , defaults to None ) — Possible options are 0, 1, 2, 3. Default will be taken from environment variable. is_train_batch_min ( bool , defaults to True ) — If both train & eval dataloaders are specified, this will decide the train_batch_size . offload_optimizer_device ( str , defaults to None ) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stages 2 and 3. offload_param_device ( str , defaults to None ) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stage 3. offload_optimizer_nvme_path ( str , defaults to None ) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3. offload_param_nvme_path ( str , defaults to None ) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3. zero3_init_flag ( bool , defaults to None ) — Flag to indicate whether to save 16-bit model. Only applicable with ZeRO Stage-3. zero3_save_16bit_model ( bool , defaults to None ) — Flag to indicate whether to save 16-bit model. Only applicable with ZeRO Stage-3. transformer_moe_cls_names ( str , defaults to None ) — Comma-separated list of Transformers MoE layer class names (case-sensitive). For example, MixtralSparseMoeBlock , Qwen2MoeSparseMoeBlock , JetMoEAttention , JetMoEBlock , etc. enable_msamp ( bool , defaults to None ) — Flag to indicate whether to enable MS-AMP backend for FP8 training. msasmp_opt_level ( Optional[Literal["O1", "O2"]] , defaults to None ) — Optimization level for MS-AMP (defaults to ‘O1’). Only applicable if enable_msamp is True. Should be one of [‘O1’ or ‘O2’]. This plugin is used to integrate DeepSpeed. deepspeed_config_process < source > ( prefix = '' mismatches = None config = None must_match = True **kwargs ) Process the DeepSpeed config with the values from the kwargs. select < source > ( _from_accelerator_state : bool = False ) Sets the HfDeepSpeedWeakref to use the current deepspeed plugin configuration class accelerate. 
FullyShardedDataParallelPlugin < source > ( sharding_strategy : typing.Union[str, ForwardRef('torch.distributed.fsdp.ShardingStrategy')] = None backward_prefetch : typing.Union[str, ForwardRef('torch.distributed.fsdp.BackwardPrefetch')] = None mixed_precision_policy : typing.Union[dict, ForwardRef('torch.distributed.fsdp.MixedPrecision'), NoneType] = None auto_wrap_policy : typing.Union[typing.Callable, typing.Literal['transformer_based_wrap', 'size_based_wrap', 'no_wrap'], NoneType] = None cpu_offload : typing.Union[bool, ForwardRef('torch.distributed.fsdp.CPUOffload')] = None ignored_modules : typing.Optional[typing.Iterable[torch.nn.modules.module.Module]] = None state_dict_type : typing.Union[str, ForwardRef('torch.distributed.fsdp.StateDictType')] = None state_dict_config : typing.Union[ForwardRef('torch.distributed.fsdp.FullStateDictConfig'), ForwardRef('torch.distributed.fsdp.ShardedStateDictConfig'), NoneType] = None optim_state_dict_config : typing.Union[ForwardRef('torch.distributed.fsdp.FullOptimStateDictConfig'), ForwardRef('torch.distributed.fsdp.ShardedOptimStateDictConfig'), NoneType] = None limit_all_gathers : bool = True use_orig_params : bool = None param_init_fn : typing.Optional[typing.Callable[[torch.nn.modules.module.Module], NoneType]] = None sync_module_states : bool = None forward_prefetch : bool = None activation_checkpointing : bool = None cpu_ram_efficient_loading : bool = None transformer_cls_names_to_wrap : typing.Optional[typing.List[str]] = None min_num_params : typing.Optional[int] = None ) Parameters sharding_strategy ( Union[str, torch.distributed.fsdp.ShardingStrategy] , defaults to 'FULL_SHARD' ) — Sharding strategy to use. Should be either a str or an instance of torch.distributed.fsdp.fully_sharded_data_parallel.ShardingStrategy . backward_prefetch ( Union[str, torch.distributed.fsdp.BackwardPrefetch] , defaults to 'NO_PREFETCH' ) — Backward prefetch strategy to use. Should be either a str or an instance of torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch . mixed_precision_policy ( Optional[Union[dict, torch.distributed.fsdp.MixedPrecision]] , defaults to None ) — A config to enable mixed precision training with FullyShardedDataParallel. If passing in a dict , it should have the following keys: param_dtype , reduce_dtype , and buffer_dtype . auto_wrap_policy ( Optional(Union[Callable, Literal["transformer_based_wrap", "size_based_wrap", "no_wrap"]]), defaults to NO_WRAP ) -- A callable or string specifying a policy to recursively wrap layers with FSDP. If a string, it must be one of transformer_based_wrap , size_based_wrap , or no_wrap . See torch.distributed.fsdp.wrap.size_based_wrap_policy` for a direction on what it should look like. cpu_offload ( Union[bool, torch.distributed.fsdp.CPUOffload] , defaults to False ) — Whether to offload parameters to CPU. Should be either a bool or an instance of torch.distributed.fsdp.fully_sharded_data_parallel.CPUOffload . ignored_modules ( Optional[Iterable[torch.nn.Module]] , defaults to None ) — A list of modules to ignore when wrapping with FSDP. state_dict_type ( Union[str, torch.distributed.fsdp.StateDictType] , defaults to 'FULL_STATE_DICT' ) — State dict type to use. If a string, it must be one of full_state_dict , local_state_dict , or sharded_state_dict . state_dict_config ( Optional[Union[torch.distributed.fsdp.FullStateDictConfig, torch.distributed.fsdp.ShardedStateDictConfig] , defaults to None ) — State dict config to use. 
Is determined based on the state_dict_type if not passed in. optim_state_dict_config ( Optional[Union[torch.distributed.fsdp.FullOptimStateDictConfig, torch.distributed.fsdp.ShardedOptimStateDictConfig] , defaults to None ) — Optim state dict config to use. Is determined based on the state_dict_type if not passed in. limit_all_gathers ( bool , defaults to True ) — Whether to have FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers. This bool only affects the sharded strategies that schedule all-gathers. Enabling this can help lower the number of CUDA malloc retries. use_orig_params ( bool , defaults to False ) — Whether to use the original parameters for the optimizer. param_init_fn ( Optional[Callable[[torch.nn.Module], None] , defaults to None ) — A Callable[torch.nn.Module] -> None that specifies how modules that are currently on the meta device should be initialized onto an actual device. Only applicable when sync_module_states is True . By default is a lambda which calls to_empty on the module. sync_module_states ( bool , defaults to False ) — Whether each individually wrapped FSDP unit should broadcast module parameters from rank 0 to ensure they are the same across all ranks after initialization. Defaults to False unless cpu_ram_efficient_loading is True , then will be forcibly enabled. forward_prefetch ( bool , defaults to False ) — Whether to have FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. only use with Static graphs. activation_checkpointing ( bool , defaults to False ) — A technique to reduce memory usage by clearing activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage. cpu_ram_efficient_loading ( bool , defaults to None ) — If True, only the first process loads the pretrained model checkoint while all other processes have empty weights. Only applicable for Transformers. When using this, sync_module_states needs to be True . transformer_cls_names_to_wrap ( Optional[List[str]] , defaults to None ) — A list of transformer layer class names to wrap. Only applicable when auto_wrap_policy is transformer_based_wrap . min_num_params ( Optional[int] , defaults to None ) — The minimum number of parameters a module must have to be wrapped. Only applicable when auto_wrap_policy is size_based_wrap . This plugin is used to enable fully sharded data parallelism. set_auto_wrap_policy < source > ( model ) Given model , creates an auto_wrap_policy baesd on the passed in policy and if we can use the transformer_cls_to_wrap set_mixed_precision < source > ( mixed_precision buffer_autocast = False override = False ) Sets the mixed precision policy for FSDP set_state_dict_type < source > ( state_dict_type = None ) Set the state dict config based on the StateDictType . class accelerate.utils. GradientAccumulationPlugin < source > ( num_steps : int = None adjust_scheduler : bool = True sync_with_dataloader : bool = True sync_each_batch : bool = False ) Parameters num_steps ( int ) — The number of steps to accumulate gradients for. adjust_scheduler ( bool , optional , defaults to True ) — Whether to adjust the scheduler steps to account for the number of steps being accumulated. Should be True if the used scheduler was not adjusted for gradient accumulation. sync_with_dataloader ( bool , optional , defaults to True ) — Whether to synchronize setting the gradients when at the end of the dataloader. 
sync_each_batch ( bool , optional ) — Whether to synchronize setting the gradients at each data batch. Seting to True may reduce memory requirements when using gradient accumulation with distributed training, at expense of speed. A plugin to configure gradient accumulation behavior. You can only pass one of gradient_accumulation_plugin or gradient_accumulation_steps to Accelerator . Passing both raises an error. Example: Copied from accelerate.utils import GradientAccumulationPlugin gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps= 2 ) accelerator = Accelerator(gradient_accumulation_plugin=gradient_accumulation_plugin) class accelerate.utils. MegatronLMPlugin < source > ( tp_degree : int = None pp_degree : int = None num_micro_batches : int = None gradient_clipping : float = None sequence_parallelism : bool = None recompute_activations : bool = None use_distributed_optimizer : bool = None pipeline_model_parallel_split_rank : int = None num_layers_per_virtual_pipeline_stage : int = None is_train_batch_min : str = True train_iters : int = None train_samples : int = None weight_decay_incr_style : str = 'constant' start_weight_decay : float = None end_weight_decay : float = None lr_decay_style : str = 'linear' lr_decay_iters : int = None lr_decay_samples : int = None lr_warmup_iters : int = None lr_warmup_samples : int = None lr_warmup_fraction : float = None min_lr : float = 0 consumed_samples : typing.List[int] = None no_wd_decay_cond : typing.Optional[typing.Callable] = None scale_lr_cond : typing.Optional[typing.Callable] = None lr_mult : float = 1.0 megatron_dataset_flag : bool = False seq_length : int = None encoder_seq_length : int = None decoder_seq_length : int = None tensorboard_dir : str = None set_all_logging_options : bool = False eval_iters : int = 100 eval_interval : int = 1000 return_logits : bool = False custom_train_step_class : typing.Optional[typing.Any] = None custom_train_step_kwargs : typing.Optional[typing.Dict[str, typing.Any]] = None custom_model_provider_function : typing.Optional[typing.Callable] = None custom_prepare_model_function : typing.Optional[typing.Callable] = None custom_megatron_datasets_provider_function : typing.Optional[typing.Callable] = None custom_get_batch_function : typing.Optional[typing.Callable] = None custom_loss_function : typing.Optional[typing.Callable] = None other_megatron_args : typing.Optional[typing.Dict[str, typing.Any]] = None ) Parameters tp_degree ( int , defaults to None ) — Tensor parallelism degree. pp_degree ( int , defaults to None ) — Pipeline parallelism degree. num_micro_batches ( int , defaults to None ) — Number of micro-batches. gradient_clipping ( float , defaults to None ) — Gradient clipping value based on global L2 Norm (0 to disable). sequence_parallelism ( bool , defaults to None ) — Enable sequence parallelism. recompute_activations ( bool , defaults to None ) — Enable selective activation recomputation. use_distributed_optimizr ( bool , defaults to None ) — Enable distributed optimizer. pipeline_model_parallel_split_rank ( int , defaults to None ) — Rank where encoder and decoder should be split. num_layers_per_virtual_pipeline_stage ( int , defaults to None ) — Number of layers per virtual pipeline stage. is_train_batch_min ( str , defaults to True ) — If both tran & eval dataloaders are specified, this will decide the micro_batch_size . train_iters ( int , defaults to None ) — Total number of samples to train over all training runs. 
Note that either train-iters or train-samples should be provided when using MegatronLMDummyScheduler . train_samples ( int , defaults to None ) — Total number of samples to train over all training runs. Note that either train-iters or train-samples should be provided when using MegatronLMDummyScheduler . weight_decay_incr_style ( str , defaults to 'constant' ) — Weight decay increment function. choices=[“constant”, “linear”, “cosine”]. start_weight_decay ( float , defaults to None ) — Initial weight decay coefficient for L2 regularization. end_weight_decay ( float , defaults to None ) — End of run weight decay coefficient for L2 regularization. lr_decay_style ( str , defaults to 'linear' ) — Learning rate decay function. choices=[‘constant’, ‘linear’, ‘cosine’]. lr_decay_iters ( int , defaults to None ) — Number of iterations for learning rate decay. If None defaults to train_iters . lr_decay_samples ( int , defaults to None ) — Number of samples for learning rate decay. If None defaults to train_samples . lr_warmup_iters ( int , defaults to None ) — Number of iterations to linearly warmup learning rate over. lr_warmup_samples ( int , defaults to None ) — Number of samples to linearly warmup learning rate over. lr_warmup_fraction ( float , defaults to None ) — Fraction of lr-warmup-(iters/samples) to linearly warmup learning rate over. min_lr ( float , defaults to 0 ) — Minumum value for learning rate. The scheduler clip values below this threshold. consumed_samples ( List , defaults to None ) — Number of samples consumed in the same order as the dataloaders to accelerator.prepare call. no_wd_decay_cond ( Optional , defaults to None ) — Condition to disable weight decay. scale_lr_cond ( Optional , defaults to None ) — Condition to scale learning rate. lr_mult ( float , defaults to 1.0 ) — Learning rate multiplier. megatron_dataset_flag ( bool , defaults to False ) — Whether the format of dataset follows Megatron-LM Indexed/Cached/MemoryMapped format. seq_length ( int , defaults to None ) — Maximum sequence length to process. encoder_seq_length ( int , defaults to None ) — Maximum sequence length to process for the encoder. decoder_seq_length ( int , defaults to None ) — Maximum sequence length to process for the decoder. tensorboard_dir ( str , defaults to None ) — Path to save tensorboard logs. set_all_logging_options ( bool , defaults to False ) — Whether to set all logging options. eval_iters ( int , defaults to 100 ) — Number of iterations to run for evaluation validation/test for. eval_interval ( int , defaults to 1000 ) — Interval between running evaluation on validation set. return_logits ( bool , defaults to False ) — Whether to return logits from the model. custom_train_step_class ( Optional , defaults to None ) — Custom train step class. custom_train_step_kwargs ( Optional , defaults to None ) — Custom train step kwargs. custom_model_provider_function ( Optional , defaults to None ) — Custom model provider function. custom_prepare_model_function ( Optional , defaults to None ) — Custom prepare model function. custom_megatron_datasets_provider_function ( Optional , defaults to None ) — Custom megatron train_valid_test datasets provider function. custom_get_batch_function ( Optional , defaults to None ) — Custom get batch function. custom_loss_function ( Optional , defaults to None ) — Custom loss function. other_megatron_args ( Optional , defaults to None ) — Other Megatron-LM arguments. Please refer Megatron-LM. 
Plugin for Megatron-LM to enable tensor, pipeline, sequence and data parallelism. Also to enable selective activation recomputation and optimized fused kernels. class accelerate.utils. TorchDynamoPlugin < source > ( backend : DynamoBackend = None mode : str = None fullgraph : bool = None dynamic : bool = None options : typing.Any = None disable : bool = False ) Parameters backend ( DynamoBackend , defaults to None ) — A valid Dynamo backend. See https://pytorch.org/docs/stable/torch.compiler.html for more details. mode ( str , defaults to None ) — Possible options are ‘default’, ‘reduce-overhead’ or ‘max-autotune’. fullgraph ( bool , defaults to None ) — Whether it is OK to break the model into several subgraphs. dynamic ( bool , defaults to None ) — Whether to use dynamic shape for tracing. options ( Any , defaults to None ) — A dictionary of options to pass to the backend. disable ( bool , defaults to False ) — Turn torch.compile() into a no-op for testing. This plugin is used to compile a model with PyTorch 2.0. Configurations These are classes which can be configured and passed through to the appropriate integration. class accelerate.utils. BnbQuantizationConfig < source > ( load_in_8bit : bool = False llm_int8_threshold : float = 6.0 load_in_4bit : bool = False bnb_4bit_quant_type : str = 'fp4' bnb_4bit_use_double_quant : bool = False bnb_4bit_compute_dtype : str = 'fp16' torch_dtype : dtype = None skip_modules : typing.List[str] = None keep_in_fp32_modules : typing.List[str] = None ) Parameters load_in_8bit ( bool , defaults to False ) — Enable 8bit quantization. llm_int8_threshold ( float , defaults to 6.0 ) — Value of the outlier threshold. Only relevant when load_in_8bit=True . load_in_4bit ( bool , defaults to False ) — Enable 4bit quantization. bnb_4bit_quant_type ( str , defaults to fp4 ) — Set the quantization data type in the bnb.nn.Linear4Bit layers. Options are {‘fp4’, ‘nf4’}. bnb_4bit_use_double_quant ( bool , defaults to False ) — Enable nested quantization where the quantization constants from the first quantization are quantized again. bnb_4bit_compute_dtype ( str , defaults to fp16 ) — This sets the computational type, which might be different than the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups. Options are {‘fp32’,‘fp16’,‘bf16’}. torch_dtype ( torch.dtype , defaults to None ) — This sets the dtype of the remaining non quantized layers. The bitsandbytes library suggests setting the value to torch.float16 for 8-bit models and using the same dtype as the compute dtype for 4-bit models. skip_modules ( List[str] , defaults to None ) — An explicit list of the modules that we don’t quantize. The dtype of these modules will be torch_dtype . keep_in_fp32_modules ( List , defaults to None ) — An explicit list of the modules that we don’t quantize. We keep them in torch.float32 . A plugin to enable BitsAndBytes 4bit and 8bit quantization. class accelerate. DataLoaderConfiguration < source > ( split_batches : bool = False dispatch_batches : bool = None even_batches : bool = True use_seedable_sampler : bool = False data_seed : int = None non_blocking : bool = False use_stateful_dataloader : bool = False ) Parameters split_batches ( bool , defaults to False ) — Whether or not the accelerator should split the batches yielded by the dataloaders across the devices. If True , the actual batch size used will be the same on any kind of distributed processes, but it must be a round multiple of the num_processes you are using.
If False , the actual batch size used will be the one set in your script multiplied by the number of processes. dispatch_batches ( bool , defaults to None ) — If set to True , the dataloader prepared by the Accelerator is only iterated through on the main process and then the batches are split and broadcast to each process. Will default to True for DataLoader whose underlying dataset is an IterableDataset , False otherwise. even_batches ( bool , defaults to True ) — If set to True , in cases where the total batch size across all processes does not exactly divide the dataset, samples at the start of the dataset will be duplicated so the batch can be divided equally among all workers. use_seedable_sampler ( bool , defaults to False ) — Whether or not to use a fully seedable random sampler ( data_loader.SeedableRandomSampler ). Ensures training results are fully reproducible using a different sampling technique. While seed-to-seed results may differ, on average the differences are negligible when using multiple different seeds to compare. Should also be run with set_seed() for the best results. data_seed ( int , defaults to None ) — The seed to use for the underlying generator when using use_seedable_sampler . If None , the generator will use the current default seed from torch. non_blocking ( bool , defaults to False ) — If set to True , the dataloader prepared by the Accelerator will utilize non-blocking host-to-device transfers, allowing for better overlap between dataloader communication and computation. It is recommended that the prepared dataloader has pin_memory set to True to work properly. use_stateful_dataloader ( bool , defaults to False ) — If set to True , the dataloader prepared by the Accelerator will be backed by torchdata.StatefulDataLoader . This requires torchdata version 0.8.0 or higher (which supports StatefulDataLoader ) to be installed. Configuration for dataloader-related items when calling accelerator.prepare . class accelerate.utils. ProjectConfiguration < source > ( project_dir : str = None logging_dir : str = None automatic_checkpoint_naming : bool = False total_limit : int = None iteration : int = 0 save_on_each_node : bool = False ) Parameters project_dir ( str , defaults to None ) — A path to a directory for storing data. logging_dir ( str , defaults to None ) — A path to a directory for storing logs of locally-compatible loggers. If None, defaults to project_dir . automatic_checkpoint_naming ( bool , defaults to False ) — Whether saved states should be automatically iteratively named. total_limit ( int , defaults to None ) — The maximum number of total saved states to keep. iteration ( int , defaults to 0 ) — The current save iteration. save_on_each_node ( bool , defaults to False ) — When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one. Configuration for the Accelerator object based on inner-project needs. set_directories < source > ( project_dir : str = None ) Sets self.project_dir and self.logging_dir to the appropriate values. Environmental Variables These are environmental variables that can be enabled for different use cases: ACCELERATE_DEBUG_MODE ( str ): Whether to run accelerate in debug mode. More info available here . Data Manipulation and Operations These include data operations that mimic the same torch ops but can be used on distributed processes.
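Before moving on to the data operations below, here is a minimal sketch of how the two configuration classes documented above are typically passed to Accelerator ; the directory name and option values are illustrative, not defaults, and assume a recent version of Accelerate that accepts a dataloader_config argument. Copied

from accelerate import Accelerator, DataLoaderConfiguration
from accelerate.utils import ProjectConfiguration

# Configure dataloader behavior and checkpoint bookkeeping separately,
# then hand both objects to the Accelerator.
dataloader_config = DataLoaderConfiguration(split_batches=True, non_blocking=True)
project_config = ProjectConfiguration(
    project_dir="output",            # illustrative path
    automatic_checkpoint_naming=True,
    total_limit=3,                   # keep at most 3 saved states
)

accelerator = Accelerator(
    dataloader_config=dataloader_config,
    project_config=project_config,
)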
accelerate.utils.broadcast < source > ( tensor from_process : int = 0 ) Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to broadcast. from_process ( int , optional , defaults to 0) — The process from which to send the data. Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices. accelerate.utils.broadcast_object_list < source > ( object_list from_process : int = 0 ) Parameters object_list (list of picklable objects) — The list of objects to broadcast. This list will be modified in place. from_process ( int , optional , defaults to 0) — The process from which to send the data. Broadcast a list of picklable objects from one process to the others. accelerate.utils.concatenate < source > ( data dim = 0 ) Parameters data (nested list/tuple/dictionary of lists of torch.Tensor ) — The data to concatenate. dim ( int , optional , defaults to 0) — The dimension on which to concatenate. Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape. accelerate.utils.convert_outputs_to_fp32 < source > ( model_forward ) accelerate.utils.convert_to_fp32 < source > ( tensor ) Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to convert from FP16/BF16 to FP32. Recursively converts the elements of a nested list/tuple/dictionary of tensors in FP16/BF16 precision to FP32. accelerate.utils.gather < source > ( tensor ) Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to gather. Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices. accelerate.utils.gather_object < source > ( object : typing.Any ) Parameters object (nested list/tuple/dictionary of picklable object) — The data to gather. Recursively gather object in a nested list/tuple/dictionary of objects from all devices. accelerate.utils.get_grad_scaler < source > ( distributed_type : DistributedType = None **kwargs ) Parameters distributed_type ( DistributedType , optional , defaults to None) — The type of distributed environment. kwargs — Additional arguments for the utilized GradScaler constructor. A generic helper which will initialize the correct GradScaler implementation based on the environment and return it. accelerate.utils.get_mixed_precision_context_manager < source > ( native_amp : bool = False autocast_kwargs : AutocastKwargs = None ) Parameters native_amp ( bool , optional , defaults to False) — Whether mixed precision is actually enabled. cache_enabled ( bool , optional , defaults to True) — Whether the weight cache inside autocast should be enabled. Return a context manager for autocasting mixed precision. accelerate.utils.listify < source > ( data ) Parameters data (nested list/tuple/dictionary of torch.Tensor ) — The data from which to convert to regular numbers. Recursively finds tensors in a nested list/tuple/dictionary and converts them to a list of numbers. accelerate.utils.pad_across_processes < source > ( tensor dim = 0 pad_index = 0 pad_first = False ) Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to pad. dim ( int , optional , defaults to 0) — The dimension on which to pad. pad_index ( int , optional , defaults to 0) — The value with which to pad. pad_first ( bool , optional , defaults to False ) — Whether to pad at the beginning or the end. Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered.
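As a usage sketch for the operations above, a typical evaluation loop pads, gathers and broadcasts tensors as shown below; it assumes the script is launched with accelerate launch, and the shapes and pad value are illustrative. Copied

import torch
from accelerate import Accelerator
from accelerate.utils import broadcast, gather, pad_across_processes

accelerator = Accelerator()

# Each process may produce predictions with a different sequence length.
predictions = torch.randint(0, 10, (4, 3 + accelerator.process_index), device=accelerator.device)

# Pad to a common length on dim=1, then gather onto every process for metric computation.
predictions = pad_across_processes(predictions, dim=1, pad_index=-100)
all_predictions = gather(predictions)

# Share a value chosen on the main process with every other process.
threshold = torch.tensor([0.5], device=accelerator.device)
threshold = broadcast(threshold, from_process=0)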
accelerate.utils.recursively_apply < source > ( func data *args test_type = is_torch_tensor error_on_other_type = False **kwargs ) Parameters func ( callable ) — The function to recursively apply. data (nested list/tuple/dictionary of main_type ) — The data on which to apply func . *args — Positional arguments that will be passed to func when applied on the unpacked data. main_type ( type , optional , defaults to torch.Tensor ) — The base type of the objects to which to apply func . error_on_other_type ( bool , optional , defaults to False ) — Whether to return an error or not if, after unpacking data , we encounter an object that is not of type main_type . If False , the function will leave objects of types different than main_type unchanged. **kwargs (additional keyword arguments, optional ) — Keyword arguments that will be passed to func when applied on the unpacked data. Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type. accelerate.utils.reduce < source > ( tensor reduction = 'mean' scale = 1.0 ) Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to reduce. reduction ( str , optional , defaults to "mean" ) — A reduction method. Can be one of “mean”, “sum”, or “none”. scale ( float , optional ) — A default scaling value to be applied after the reduce, only valid on XLA. Recursively reduce the tensors in a nested list/tuple/dictionary of lists of tensors across all processes using the given operation. accelerate.utils.send_to_device < source > ( tensor device non_blocking = False skip_keys = None ) Parameters tensor (nested list/tuple/dictionary of torch.Tensor ) — The data to send to a given device. device ( torch.device ) — The device to send the data to. Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device. accelerate.utils.slice_tensors < source > ( data tensor_slice process_index = None num_processes = None ) Parameters data (nested list/tuple/dictionary of torch.Tensor ) — The data to slice. tensor_slice ( slice ) — The slice to take. Recursively takes a slice in a nested list/tuple/dictionary of tensors. Environment Checks These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed. accelerate.utils.is_bf16_available < source > ( ignore_tpu = False ) Checks if bf16 is supported, optionally ignoring the TPU. accelerate.utils.is_ipex_available < source > ( ) Checks if ipex is installed. accelerate.utils.is_mps_available < source > ( min_version = '1.12' ) Checks if the MPS device is available. The minimum version required is 1.12. accelerate.utils.is_npu_available ( check_device = False ) Checks if torch_npu is installed and potentially if an NPU is in the environment. accelerate.utils.is_torch_version < source > ( operation : str version : str ) Parameters operation ( str ) — A string representation of an operator, such as ">" or "<=" . version ( str ) — A string version of PyTorch. Compares the current PyTorch version to a given reference with an operation. accelerate.utils.is_torch_xla_available ( check_is_tpu = False check_is_gpu = False ) Check if torch_xla is available. To train a native PyTorch job in an environment with torch_xla installed, set the USE_TORCH_XLA environment variable to false.
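A short sketch of how the checks above are typically combined to pick settings at runtime; the chosen values and version string are illustrative. Copied

from accelerate.utils import is_bf16_available, is_mps_available, is_torch_version

# Prefer bf16 where supported, otherwise fall back to fp16.
mixed_precision = "bf16" if is_bf16_available() else "fp16"

# Gate features on the installed PyTorch version.
use_new_feature = is_torch_version(">=", "2.1.0")

# Detect Apple silicon GPUs.
device = "mps" if is_mps_available() else "cpu"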
accelerate.utils.is_xpu_available ( check_device = False ) Checks if XPU acceleration is available either via intel_extension_for_pytorch or via stock PyTorch (>=2.4) and potentially if an XPU is in the environment. Environment Manipulation accelerate.utils.patch_environment < source > ( **kwargs ) A context manager that will add each keyword argument passed to os.environ and remove them when exiting. Will convert the values in kwargs to strings and upper-case all the keys. Example: Copied >>> import os >>> from accelerate.utils import patch_environment >>> with patch_environment(FOO= "bar" ): ... print (os.environ[ "FOO" ]) # prints "bar" >>> print (os.environ[ "FOO" ]) # raises KeyError accelerate.utils.clear_environment < source > ( ) A context manager that will temporarily clear environment variables. When this context exits, the previous environment variables will be back. Example: Copied >>> import os >>> from accelerate.utils import clear_environment >>> os.environ[ "FOO" ] = "bar" >>> with clear_environment(): ... print (os.environ) ... os.environ[ "FOO" ] = "new_bar" ... print (os.environ[ "FOO" ]) {} new_bar >>> print (os.environ[ "FOO" ]) bar accelerate.commands.config.default.write_basic_config < source > ( mixed_precision = 'no' save_location : str = '/github/home/.cache/huggingface/accelerate/default_config.yaml' use_xpu : bool = False ) Parameters mixed_precision ( str , optional , defaults to “no”) — Mixed precision to use. Should be one of “no”, “fp16”, or “bf16”. save_location ( str , optional , defaults to default_json_config_file ) — Optional custom save location. Should be passed to --config_file when using accelerate launch . Default location is inside the huggingface cache folder ( ~/.cache/huggingface ) but can be overridden by setting the HF_HOME environment variable, followed by accelerate/default_config.yaml . use_xpu ( bool , optional , defaults to False ) — Whether to use XPU if available. Creates and saves a basic cluster config to be used on a local machine with potentially multiple GPUs. Will also set CPU if it is a CPU-only machine. When setting up 🤗 Accelerate for the first time, rather than running accelerate config , write_basic_config() can be used as an alternative for quick configuration. accelerate.utils.set_numa_affinity ( local_process_index : int verbose : typing.Optional[bool] = None ) Parameters local_process_index (int) — The index of the current process on the current server. verbose (bool, optional ) — Whether to print the new CPU core assignment for each process. If ACCELERATE_DEBUG_MODE is enabled, will default to True. Assigns the current process to a specific NUMA node. Ideally most efficient when there are at least 2 CPUs per node. This result is cached between calls. If you want to override it, please use accelerate.utils.environment.override_numa_affinity . accelerate.utils.environment.override_numa_affinity < source > ( local_process_index : int verbose : typing.Optional[bool] = None ) Parameters local_process_index (int) — The index of the current process on the current server. verbose (bool, optional ) — Whether to log out the assignment of each CPU. If ACCELERATE_DEBUG_MODE is enabled, will default to True. Overrides whatever NUMA affinity is set for the current process. This is very taxing and requires recalculating the affinity to set; ideally you should use utils.environment.set_numa_affinity instead.
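A minimal sketch of calling set_numa_affinity documented above from a distributed training script; it assumes a multi-socket machine where pinning processes to NUMA nodes is actually beneficial. Copied

from accelerate import Accelerator
from accelerate.utils import set_numa_affinity

accelerator = Accelerator()

# Pin each local process to a NUMA node based on its local rank.
# The result is cached between calls, as noted above.
set_numa_affinity(accelerator.local_process_index)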
accelerate.utils.purge_accelerate_environment < source > ( func_or_cls ) Decorator to clean up accelerate environment variables set by the decorated class or function. In some circumstances, calling certain classes or functions can result in accelerate env vars being set and not being cleaned up afterwards. As an example, when calling: TrainingArguments(fp16=True, …) The following env var will be set: ACCELERATE_MIXED_PRECISION=fp16 This can affect subsequent code, since the env var takes precedence over TrainingArguments(fp16=False). This is especially relevant for unit testing, where we want to avoid individual tests having side effects on one another. Decorate the unit test function or whole class with this decorator to ensure that after each test, the env vars are cleaned up. This works for both unittest.TestCase and normal classes (pytest); it also works when decorating the parent class. Memory accelerate.find_executable_batch_size < source > ( function : callable = None starting_batch_size : int = 128 ) Parameters function ( callable , optional ) — A function to wrap. starting_batch_size ( int , optional ) — The batch size to try and fit into memory. A basic decorator that will try to execute function . If it fails from exceptions related to out-of-memory or CUDNN, the batch size is cut in half and passed to function . function must take in a batch_size parameter as its first argument. Example: Copied >>> from accelerate.utils import find_executable_batch_size >>> @find_executable_batch_size(starting_batch_size= 128 ) ... def train ( batch_size, model, optimizer ): ... ... >>> train(model, optimizer) Modeling These utilities relate to interacting with PyTorch models. accelerate.utils.calculate_maximum_sizes < source > ( model : Module ) Computes the total size of the model and its largest layer. accelerate.utils.compute_module_sizes < source > ( model : Module dtype : typing.Union[torch.device, str, NoneType] = None special_dtypes : typing.Optional[typing.Dict[str, typing.Union[str, torch.device]]] = None buffers_only : bool = False ) Compute the size of each submodule of a given model. accelerate.utils.extract_model_from_parallel < source > ( model keep_fp32_wrapper : bool = True keep_torch_compile : bool = True recursive : bool = False ) → torch.nn.Module Parameters model ( torch.nn.Module ) — The model to extract. keep_fp32_wrapper ( bool , optional ) — Whether to remove mixed precision hooks from the model. keep_torch_compile ( bool , optional ) — Whether to unwrap the compiled model. recursive ( bool , optional , defaults to False ) — Whether to recursively extract all cases of module.module from model as well as unwrap child sublayers recursively, not just the top-level distributed containers. Returns torch.nn.Module The extracted model. Extract a model from its distributed containers. accelerate.utils.get_balanced_memory < source > ( model : Module max_memory : typing.Optional[typing.Dict[typing.Union[int, str], typing.Union[int, str]]] = None no_split_module_classes : typing.Optional[typing.List[str]] = None dtype : typing.Union[str, torch.dtype, NoneType] = None special_dtypes : typing.Optional[typing.Dict[str, typing.Union[str, torch.device]]] = None low_zero : bool = False ) Parameters model ( torch.nn.Module ) — The model to analyze. max_memory ( Dict , optional ) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset. Example: max_memory={0: "1GB"} .
no_split_module_classes ( List[str] , optional ) — A list of layer class names that should never be split across device (for instance any layer that has a residual connection). dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. special_dtypes ( Dict[str, Union[str, torch.device]] , optional ) — If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights). low_zero ( bool , optional ) — Minimizes the number of weights on GPU 0, which is convenient when it’s used for other operations (like the Transformers generate function). Compute a max_memory dictionary for infer_auto_device_map() that will balance the use of each available GPU. All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the meta device (as it would if initialized within the init_empty_weights context manager). accelerate.utils.get_max_layer_size < source > ( modules : typing.List[typing.Tuple[str, torch.nn.modules.module.Module]] module_sizes : typing.Dict[str, int] no_split_module_classes : typing.List[str] ) → Tuple[int, List[str]] Parameters modules ( List[Tuple[str, torch.nn.Module]] ) — The list of named modules where we want to determine the maximum layer size. module_sizes ( Dict[str, int] ) — A dictionary mapping each layer name to its size (as generated by compute_module_sizes ). no_split_module_classes ( List[str] ) — A list of class names for layers we don’t want to be split. Returns Tuple[int, List[str]] The maximum size of a layer with the list of layer names realizing that maximum size. Utility function that will scan a list of named modules and return the maximum size used by one full layer. The definition of a layer being: a module with no direct children (just parameters and buffers) a module whose class name is in the list no_split_module_classes accelerate.infer_auto_device_map < source > ( model : Module max_memory : typing.Optional[typing.Dict[typing.Union[int, str], typing.Union[int, str]]] = None no_split_module_classes : typing.Optional[typing.List[str]] = None dtype : typing.Union[str, torch.dtype, NoneType] = None special_dtypes : typing.Optional[typing.Dict[str, typing.Union[str, torch.dtype]]] = None verbose : bool = False clean_result : bool = True offload_buffers : bool = False fallback_allocation : bool = False ) Parameters model ( torch.nn.Module ) — The model to analyze. max_memory ( Dict , optional ) — A dictionary device identifier to maximum memory. Will default to the maximum memory available if unset. Example: max_memory={0: "1GB"} . no_split_module_classes ( List[str] , optional ) — A list of layer class names that should never be split across device (for instance any layer that has a residual connection). dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. special_dtypes ( Dict[str, Union[str, torch.device]] , optional ) — If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights). verbose ( bool , optional , defaults to False ) — Whether or not to provide debugging statements as the function builds the device_map. clean_result ( bool , optional , defaults to True ) — Clean the resulting device_map by grouping all submodules that go on the same device together. 
offload_buffers ( bool , optional , defaults to False ) — In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters. fallback_allocation ( bool , optional , defaults to False ) — When regular allocation fails, try to allocate a module that fits in the size limit using BFS. Compute a device map for a given model giving priority to GPUs, then offload on CPU and finally offload to disk, such that: we don’t exceed the memory available on any of the GPUs; if offload to the CPU is needed, there is always room left on GPU 0 to put back the layer offloaded on the CPU that has the largest size; if offload to the CPU is needed, we don’t exceed the RAM available on the CPU; and if offload to the disk is needed, there is always room left on the CPU to put back the layer offloaded on disk that has the largest size. All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the meta device (as it would if initialized within the init_empty_weights context manager). accelerate.load_checkpoint_in_model < source > ( model : Module checkpoint : typing.Union[str, os.PathLike] device_map : typing.Optional[typing.Dict[str, typing.Union[int, str, torch.device]]] = None offload_folder : typing.Union[str, os.PathLike, NoneType] = None dtype : typing.Union[str, torch.dtype, NoneType] = None offload_state_dict : bool = False offload_buffers : bool = False keep_in_fp32_modules : typing.List[str] = None offload_8bit_bnb : bool = False strict : bool = False ) Parameters model ( torch.nn.Module ) — The model in which we want to load a checkpoint. checkpoint ( str or os.PathLike ) — The folder checkpoint to load. It can be: a path to a file containing a whole model state dict; a path to a .json file containing the index to a sharded checkpoint; a path to a folder containing a unique .index.json file and the shards of a checkpoint; or a path to a folder containing a unique pytorch_model.bin or a model.safetensors file. device_map ( Dict[str, Union[int, str, torch.device]] , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name; once a given module name is inside it, every submodule of it will be sent to the same device. offload_folder ( str or os.PathLike , optional ) — If the device_map contains any value "disk" , the folder where we will offload weights. dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. offload_state_dict ( bool , optional , defaults to False ) — If True , will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. offload_buffers ( bool , optional , defaults to False ) — Whether or not to include the buffers in the weights offloaded to disk. keep_in_fp32_modules ( List[str] , optional ) — A list of the modules that we keep in torch.float32 dtype. offload_8bit_bnb ( bool , optional ) — Whether or not to enable offload of 8-bit modules on cpu/disk. strict ( bool , optional , defaults to False ) — Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model’s state_dict. Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded. Once loaded across devices, you still need to call dispatch_model() on your model to make it able to run.
To group the checkpoint loading and dispatch in one single call, use load_checkpoint_and_dispatch() . accelerate.utils.load_offloaded_weights < source > ( model index offload_folder ) Parameters model ( torch.nn.Module ) — The model to load the weights into. index ( dict ) — A dictionary containing the parameter name and its metadata for each parameter that was offloaded from the model. offload_folder ( str ) — The folder where the offloaded weights are stored. Loads the weights from the offload folder into the model. accelerate.utils.load_state_dict < source > ( checkpoint_file device_map = None ) Parameters checkpoint_file ( str ) — The path to the checkpoint to load. device_map ( Dict[str, Union[int, str, torch.device]] , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name; once a given module name is inside it, every submodule of it will be sent to the same device. Load a checkpoint from a given file. If the checkpoint is in the safetensors format and a device map is passed, the weights can be fast-loaded directly on the GPU. accelerate.utils.offload_state_dict < source > ( save_dir : typing.Union[str, os.PathLike] state_dict : typing.Dict[str, torch.Tensor] ) Parameters save_dir ( str or os.PathLike ) — The directory in which to offload the state dict. state_dict ( Dict[str, torch.Tensor] ) — The dictionary of tensors to offload. Offload a state dict into a given folder. accelerate.utils.retie_parameters < source > ( model tied_params ) Parameters model ( torch.nn.Module ) — The model in which to retie parameters. tied_params ( List[List[str]] ) — A mapping of parameter names to tied parameter names as obtained by find_tied_parameters . Reties tied parameters in a given model if the link was broken (for instance when adding hooks). accelerate.utils.set_module_tensor_to_device < source > ( module : Module tensor_name : str device : typing.Union[int, str, torch.device] value : typing.Optional[torch.Tensor] = None dtype : typing.Union[str, torch.dtype, NoneType] = None fp16_statistics : typing.Optional[torch.HalfTensor] = None tied_params_map : typing.Optional[typing.Dict[int, typing.Dict[torch.device, torch.Tensor]]] = None ) Parameters module ( torch.nn.Module ) — The module in which the tensor we want to move lives. tensor_name ( str ) — The full name of the parameter/buffer. device ( int , str or torch.device ) — The device on which to set the tensor. value ( torch.Tensor , optional ) — The value of the tensor (useful when going from the meta device to any other device). dtype ( torch.dtype , optional ) — If passed along, the value of the parameter will be cast to this dtype . Otherwise, value will be cast to the dtype of the existing parameter in the model. fp16_statistics ( torch.HalfTensor , optional ) — The list of fp16 statistics to set on the module, used for 8-bit model serialization. tied_params_map (Dict[int, Dict[torch.device, torch.Tensor]], optional , defaults to None ) — A map of current data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight on the device for all others, instead of duplicating memory. A helper function to set a given tensor (parameter or buffer) of a module on a specific device (note that doing param.to(device) creates a new tensor not linked to the parameter, which is why we need this function).
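Tying several of the modeling utilities above together, the following sketch builds an empty model, infers a device map, and streams a checkpoint into it. The checkpoint path and memory budgets are hypothetical, and transformers is assumed to be installed. Copied

import torch
from accelerate import infer_auto_device_map, init_empty_weights, load_checkpoint_in_model
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "path/to/sharded_checkpoint"   # hypothetical local folder

# Build the model skeleton on the meta device so no weight memory is allocated yet.
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

# Let Accelerate decide where each submodule should live: GPUs first, then CPU, then disk.
device_map = infer_auto_device_map(
    model,
    max_memory={0: "10GiB", "cpu": "30GiB"},  # illustrative budgets
    dtype=torch.float16,
)

# Stream the weights into the model according to that map; as noted above,
# dispatch_model() still needs to be called afterwards to make the model runnable.
load_checkpoint_in_model(model, checkpoint, device_map=device_map, dtype=torch.float16)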
Parallel These include general utilities that should be used when working in parallel. accelerate.utils.extract_model_from_parallel < source > ( model keep_fp32_wrapper : bool = True keep_torch_compile : bool = True recursive : bool = False ) → torch.nn.Module Parameters model ( torch.nn.Module ) — The model to extract. keep_fp32_wrapper ( bool , optional ) — Whether to remove mixed precision hooks from the model. keep_torch_compile ( bool , optional ) — Whether to unwrap compiled model. recursive ( bool , optional , defaults to False ) — Whether to recursively extract all cases of module.module from model as well as unwrap child sublayers recursively, not just the top-level distributed containers. Returns torch.nn.Module The extracted model. Extract a model from its distributed containers. accelerate.utils.save < source > ( obj f save_on_each_node : bool = False safe_serialization : bool = False ) Parameters obj — The data to save f — The file (or file-like object) to use to save the data save_on_each_node ( bool , optional , defaults to False ) — Whether to only save on the global main process safe_serialization ( bool , optional , defaults to False ) — Whether to save obj using safetensors or the traditional PyTorch way (that uses pickle ). Save the data to disk. Use in place of torch.save() . accelerate.utils.load < source > ( f map_location = None **kwargs ) Parameters f — The file (or file-like object) to use to load the data map_location — a function, torch.device , string or a dict specifying how to remap storage locations * *kwargs — Additional keyword arguments to pass to torch.load() . Compatible drop-in replacement of torch.load() which allows for weights_only to be used if torch version is 2.4.0 or higher. Otherwise will ignore the kwarg. Will also add (and then remove) an exception for numpy arrays accelerate.utils.wait_for_everyone < source > ( ) Introduces a blocking point in the script, making sure all processes have reached this point before continuing. Make sure all processes will reach this instruction otherwise one of your processes will hang forever. Random These utilities relate to setting and synchronizing of all the random states. accelerate.utils.set_seed < source > ( seed : int device_specific : bool = False deterministic : bool = False ) Parameters seed ( int ) — The seed to set. device_specific ( bool , optional , defaults to False ) — Whether to differ the seed on each device slightly with self.process_index . deterministic ( bool , optional , defaults to False ) — Whether to use deterministic algorithms where available. Can slow down training. Helper function for reproducible behavior to set the seed in random , numpy , torch . accelerate.utils.synchronize_rng_state < source > ( rng_type : typing.Optional[accelerate.utils.dataclasses.RNGType] = None generator : typing.Optional[torch._C.Generator] = None ) accelerate.synchronize_rng_states < source > ( rng_types : typing.List[typing.Union[str, accelerate.utils.dataclasses.RNGType]] generator : typing.Optional[torch._C.Generator] = None ) PyTorch XLA These include utilities that are useful while using PyTorch with XLA. accelerate.utils.install_xla < source > ( upgrade : bool = False ) Parameters upgrade ( bool , optional , defaults to False ) — Whether to upgrade torch and install the latest torch_xla wheels. Helper function to install appropriate xla wheels based on the torch version in Google Colaboratory. 
Example: Copied >>> from accelerate.utils import install_xla >>> install_xla(upgrade= True ) Loading model weights These include utilities that are useful to load checkpoints. accelerate.load_checkpoint_in_model < source > ( model : Module checkpoint : typing.Union[str, os.PathLike] device_map : typing.Optional[typing.Dict[str, typing.Union[int, str, torch.device]]] = None offload_folder : typing.Union[str, os.PathLike, NoneType] = None dtype : typing.Union[str, torch.dtype, NoneType] = None offload_state_dict : bool = False offload_buffers : bool = False keep_in_fp32_modules : typing.List[str] = None offload_8bit_bnb : bool = False strict : bool = False ) Parameters model ( torch.nn.Module ) — The model in which we want to load a checkpoint. checkpoint ( str or os.PathLike ) — The folder checkpoint to load. It can be: a path to a file containing a whole model state dict a path to a .json file containing the index to a sharded checkpoint a path to a folder containing a unique .index.json file and the shards of a checkpoint. a path to a folder containing a unique pytorch_model.bin or a model.safetensors file. device_map ( Dict[str, Union[int, str, torch.device]] , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. offload_folder ( str or os.PathLike , optional ) — If the device_map contains any value "disk" , the folder where we will offload weights. dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. offload_state_dict ( bool , optional , defaults to False ) — If True , will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. offload_buffers ( bool , optional , defaults to False ) — Whether or not to include the buffers in the weights offloaded to disk. keep_in_fp32_modules( List[str] , optional ) — A list of the modules that we keep in torch.float32 dtype. offload_8bit_bnb ( bool , optional ) — Whether or not to enable offload of 8-bit modules on cpu/disk. strict ( bool , optional , defaults to False ) — Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model’s state_dict. Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded. Once loaded across devices, you still need to call dispatch_model() on your model to make it able to run. To group the checkpoint loading and dispatch in one single call, use load_checkpoint_and_dispatch() . Quantization These include utilities that are useful to quantize model. accelerate.utils.load_and_quantize_model < source > ( model : Module bnb_quantization_config : BnbQuantizationConfig weights_location : typing.Union[str, os.PathLike] = None device_map : typing.Optional[typing.Dict[str, typing.Union[int, str, torch.device]]] = None no_split_module_classes : typing.Optional[typing.List[str]] = None max_memory : typing.Optional[typing.Dict[typing.Union[int, str], typing.Union[int, str]]] = None offload_folder : typing.Union[str, os.PathLike, NoneType] = None offload_state_dict : bool = False ) → torch.nn.Module Parameters model ( torch.nn.Module ) — Input model. 
The model can be already loaded or on the meta device. bnb_quantization_config ( BnbQuantizationConfig ) — The bitsandbytes quantization parameters. weights_location ( str or os.PathLike ) — The folder weights_location to load. It can be: a path to a file containing a whole model state dict; a path to a .json file containing the index to a sharded checkpoint; a path to a folder containing a unique .index.json file and the shards of a checkpoint; or a path to a folder containing a unique pytorch_model.bin file. device_map ( Dict[str, Union[int, str, torch.device]] , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name; once a given module name is inside it, every submodule of it will be sent to the same device. no_split_module_classes ( List[str] , optional ) — A list of layer class names that should never be split across devices (for instance any layer that has a residual connection). max_memory ( Dict , optional ) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available if unset. offload_folder ( str or os.PathLike , optional ) — If the device_map contains any value "disk" , the folder where we will offload weights. offload_state_dict ( bool , optional , defaults to False ) — If True , will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. Returns torch.nn.Module The quantized model. This function will quantize the input model with the associated config passed in bnb_quantization_config . If the model is on the meta device, we will load and dispatch the weights according to the device_map passed. If the model is already loaded, we will quantize the model and put it on the GPU.
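A minimal sketch of the quantization flow above, assuming an empty (meta-device) model and a hypothetical folder of fp16 weights; transformers and bitsandbytes are assumed to be installed. Copied

from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from transformers import AutoConfig, AutoModelForCausalLM

weights_location = "path/to/fp16_weights"   # hypothetical checkpoint folder

# Build an empty model on the meta device.
config = AutoConfig.from_pretrained(weights_location)
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)

# 8-bit quantization config; set load_in_4bit=True instead for 4-bit.
bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True, llm_int8_threshold=6.0)

quantized_model = load_and_quantize_model(
    empty_model,
    bnb_quantization_config=bnb_quantization_config,
    weights_location=weights_location,
    device_map="auto",                       # let Accelerate place the quantized modules
)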
Widget_Examples.txt
Widget Examples Note that each widget example can also optionally describe the corresponding model output, directly in the output property. See the spec for more details. Natural Language Processing Fill-Mask Copied widget: - text: "Paris is the <mask> of France." example_title: "Capital" - text: "The goal of life is <mask>." example_title: "Philosophy" Question Answering Copied widget: - text: "What's my name?" context: "My name is Clara and I live in Berkeley." example_title: "Name" - text: "Where do I live?" context: "My name is Sarah and I live in London" example_title: "Location" Summarization Copied widget: - text: "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct." example_title: "Eiffel Tower" - text: "Laika, a dog that was the first living creature to be launched into Earth orbit, on board the Soviet artificial satellite Sputnik 2, on November 3, 1957.
It was always understood that Laika would not survive the mission, but her actual fate was misrepresented for decades. Laika was a small (13 pounds [6 kg]), even-tempered, mixed-breed dog about two years of age. She was one of a number of stray dogs that were taken into the Soviet spaceflight program after being rescued from the streets. Only female dogs were used because they were considered to be anatomically better suited than males for close confinement." example_title: "First in Space" Table Question Answering Copied widget: - text: "How many stars does the transformers repository have?" table: Repository: - "Transformers" - "Datasets" - "Tokenizers" Stars: - 36542 - 4512 - 3934 Contributors: - 651 - 77 - 34 Programming language: - "Python" - "Python" - "Rust, Python and NodeJS" example_title: "Github stars" Text Classification Copied widget: - text: "I love football so much" example_title: "Positive" - text: "I don't really like this type of food" example_title: "Negative" Text Generation Copied widget: - text: "My name is Julien and I like to" example_title: "Julien" - text: "My name is Merve and my favorite" example_title: "Merve" Text2Text Generation Copied widget: - text: "My name is Julien and I like to" example_title: "Julien" - text: "My name is Merve and my favorite" example_title: "Merve" Token Classification Copied widget: - text: "My name is Sylvain and I live in Paris" example_title: "Parisian" - text: "My name is Sarah and I live in London" example_title: "Londoner" Translation Copied widget: - text: "My name is Sylvain and I live in Paris" example_title: "Parisian" - text: "My name is Sarah and I live in London" example_title: "Londoner" Zero-Shot Classification Copied widget: - text: "I have a problem with my car that needs to be resolved asap!!" candidate_labels: "urgent, not urgent, phone, tablet, computer" multi_class: true example_title: "Car problem" - text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app." candidate_labels: "mobile, website, billing, account access" multi_class: false example_title: "Phone issue" Sentence Similarity Copied widget: - source_sentence: "That is a happy person" sentences: - "That is a happy dog" - "That is a very happy person" - "Today is a sunny day" example_title: "Happy" Conversational Copied widget: - text: "Hey my name is Julien! How are you?" example_title: "Julien" - text: "Hey my name is Clara! How are you?" 
example_title: "Clara" Feature Extraction Copied widget: - text: "My name is Sylvain and I live in Paris" example_title: "Parisian" - text: "My name is Sarah and I live in London" example_title: "Londoner" Audio Text-to-Speech Copied widget: - text: "My name is Sylvain and I live in Paris" example_title: "Parisian" - text: "My name is Sarah and I live in London" example_title: "Londoner" Automatic Speech Recognition Copied widget: - src: https://cdn-media.huggingface.co/speech_samples/sample1.flac example_title: Librispeech sample 1 - src: https://cdn-media.huggingface.co/speech_samples/sample2.flac example_title: Librispeech sample 2 Audio-to-Audio Copied widget: - src: https://cdn-media.huggingface.co/speech_samples/sample1.flac example_title: Librispeech sample 1 - src: https://cdn-media.huggingface.co/speech_samples/sample2.flac example_title: Librispeech sample 2 Audio Classification Copied widget: - src: https://cdn-media.huggingface.co/speech_samples/sample1.flac example_title: Librispeech sample 1 - src: https://cdn-media.huggingface.co/speech_samples/sample2.flac example_title: Librispeech sample 2 Voice Activity Detection Copied widget: - src: https://cdn-media.huggingface.co/speech_samples/sample1.flac example_title: Librispeech sample 1 - src: https://cdn-media.huggingface.co/speech_samples/sample2.flac example_title: Librispeech sample 2 Computer Vision Image Classification Copied widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot Object Detection Copied widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport Image Segmentation Copied widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport Image-to-Image Copied widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/canny-edge.jpg prompt: Girl with Pearl Earring # `prompt` field is optional in case the underlying model supports text guidance Text-to-Image Copied widget: - text: "A cat playing with a ball" example_title: "Cat" - text: "A dog jumping over a fence" example_title: "Dog" Document Question Answering Copied widget: - text: "What is the invoice number?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png" - text: "What is the purchase amount?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg" Visual Question Answering Copied widget: - text: "What animal is it?" src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg" - text: "Where is it?" 
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg" Zero-Shot Image Classification Copied widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog Other Structured Data Classification Copied widget: - structured_data: fixed_acidity: - 7.4 - 7.8 - 10.3 volatile_acidity: - 0.7 - 0.88 - 0.32 citric_acid: - 0 - 0 - 0.45 residual_sugar: - 1.9 - 2.6 - 6.4 chlorides: - 0.076 - 0.098 - 0.073 free_sulfur_dioxide: - 11 - 25 - 5 total_sulfur_dioxide: - 34 - 67 - 13 density: - 0.9978 - 0.9968 - 0.9976 pH: - 3.51 - 3.2 - 3.23 sulphates: - 0.56 - 0.68 - 0.82 alcohol: - 9.4 - 9.8 - 12.6 example_title: "Wine" < > Update on GitHub ← Model Widgets Inference API docs → Widget Examples Natural Language Processing Fill- Mask Question Answering Summarization Table Question Answering Text Classification Text Generation Text2 Text Generation Token Classification Translation Zero- Shot Classification Sentence Similarity Conversational Feature Extraction Audio Text-to- Speech Automatic Speech Recognition Audio-to- Audio Audio Classification Voice Activity Detection Computer Vision Image Classification Object Detection Image Segmentation Image-to- Image Text-to- Image Document Question Answering Visual Question Answering Zero- Shot Image Classification Other Structured Data Classification