filename | content |
---|---|
Parameters.txt | Parameters
Additional Options
Caching
There is a cache layer on the Inference API to speed up requests when the inputs are exactly the same. Many models, such as classifiers and embedding models, are deterministic, meaning the results will be the same for identical inputs, so they can use the cached results as-is. However, if you use a nondeterministic model, you can disable the cache mechanism so that each call results in a genuinely new query. To do this, add x-use-cache: false to the request headers. For example (Python):

import requests

API_URL = "https://api-inference.huggingface.co/models/MODEL_ID"
headers = {
    "Authorization": "Bearer hf_***",
    "Content-Type": "application/json",
    "x-use-cache": "false"
}
data = {
    "inputs": "Can you please let us know more details about your "
}
response = requests.post(API_URL, headers=headers, json=data)
print(response.json())

Wait for the model
When a model is warm, it is ready to be used and you will get a response relatively quickly. However, some models are cold and need to be loaded before they can be used; in that case, you will get a 503 error. Rather than retrying repeatedly until the model is loaded, you can ask the API to wait for the model by adding x-wait-for-model: true to the request headers. We suggest using this flag only when you are sure the model is cold: first try the request without it, and only if you get a 503 error, retry with the flag set.

import requests

API_URL = "https://api-inference.huggingface.co/models/MODEL_ID"
headers = {
    "Authorization": "Bearer hf_***",
    "Content-Type": "application/json",
    "x-wait-for-model": "true"
}
data = {
    "inputs": "Can you please let us know more details about your "
}
response = requests.post(API_URL, headers=headers, json=data)
print(response.json()) |
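Putting the two Parameters options together, here is a minimal sketch of the retry pattern the page describes: call without x-wait-for-model first, and only on a 503 retry with the flag set. The helper name and MODEL_ID are placeholders, not part of the original documentation.

import requests

API_URL = "https://api-inference.huggingface.co/models/MODEL_ID"  # placeholder model id
TOKEN = "hf_***"

def query_with_cold_start_retry(inputs):
    base_headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }
    # First attempt: no special headers, as the docs recommend.
    response = requests.post(API_URL, headers=base_headers, json={"inputs": inputs})
    if response.status_code == 503:
        # The model is cold: retry once, asking the API to wait for it to load.
        retry_headers = {**base_headers, "x-wait-for-model": "true"}
        response = requests.post(API_URL, headers=retry_headers, json={"inputs": inputs})
    return response.json()

print(query_with_cold_start_retry("Can you please let us know more details about your "))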
Megatron-LM.txt | Megatron-LM
Megatron-LM enables training large transformer language models at scale. It provides efficient tensor, pipeline and sequence based model parallelism for pre-training transformer based language models such as GPT (Decoder Only), BERT (Encoder Only) and T5 (Encoder-Decoder). For detailed information and how things work behind the scenes, please refer to the GitHub repo.
What is integrated?
Accelerate integrates the following features of Megatron-LM to enable large scale pre-training/finetuning of BERT (Encoder), GPT (Decoder) or T5 models (Encoder and Decoder):
a. Tensor Parallelism (TP): Reduces the memory footprint without much additional communication on intra-node ranks. Each tensor is split into multiple chunks, with each shard residing on a separate GPU. At each step, the same mini-batch of data is processed independently and in parallel by each shard, followed by syncing across all GPUs (all-reduce operation). In a simple transformer layer, this leads to 2 all-reduces in the forward path and 2 in the backward path.
For more details, please refer to the research paper Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism and this section of the blogpost The Technology Behind BLOOM Training.
b. Pipeline Parallelism (PP): Reduces the memory footprint and enables large scale training via inter-node parallelization. It reduces the bubble of naive PP via the PipeDream-Flush/1F1B schedule and the Interleaved 1F1B schedule. Layers are distributed uniformly across PP stages. For example, if a model has 24 layers and we have 4 GPUs for pipeline parallelism, each GPU will have 6 layers (24/4). For more details on schedules that reduce the idle time of PP, please refer to the research paper Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM and this section of the blogpost The Technology Behind BLOOM Training.
c. Sequence Parallelism (SP): Reduces the memory footprint without any additional communication. Only applicable when using TP. It reduces the activation memory required because it prevents the same copies from being kept on all tensor parallel ranks after the all-reduce: the all-reduce is replaced with a reduce-scatter, and the no-op is replaced with an all-gather. Since all-reduce = reduce-scatter + all-gather, this saves a ton of activation memory at no added communication cost. To put it simply, it shards the outputs of each transformer layer along the sequence dimension, e.g., if the sequence length is 1024 and the TP size is 4, each GPU will have 256 tokens (1024/4) for each sample. This increases the batch size that can be supported for training. For more details, please refer to the research paper Reducing Activation Recomputation in Large Transformer Models.
d. Data Parallelism (DP) via Distributed Optimizer: Reduces the memory footprint by sharding optimizer states and gradients across DP ranks (versus the traditional method of replicating the optimizer state across data parallel ranks). For example, when using the Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory. This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs. For more details, please refer to the research paper ZeRO: Memory Optimizations Toward Training Trillion Parameter Models and this section of the blogpost The Technology Behind BLOOM Training.
e. Selective Activation Recomputation: Reduces the memory footprint of activations significantly via smart activation checkpointing. It doesn't store activations that occupy large memory but are fast to recompute, thereby achieving a great tradeoff between memory and recomputation. For example, for GPT-3, this leads to a 70% reduction in required memory for activations at the expense of only 2.7% FLOPs overhead for recomputation of activations. For more details, please refer to the research paper Reducing Activation Recomputation in Large Transformer Models.
f. Fused Kernels: Fused Softmax, Mixed Precision Fused Layer Norm and Fused gradient accumulation into the weight gradient computation of the linear layer. PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition.
g. Support for Indexed datasets: Efficient binary format of datasets for large scale training. Supports the mmap, cached index file and lazy loader formats.
h.
Checkpoint reshaping and interoperability: Utility for reshaping Megatron-LM checkpoints of variable tensor and pipeline parallel sizes into the beloved Transformers sharded checkpoints, which have great support from a plethora of tools such as Accelerate Big Model Inference, Megatron-DeepSpeed Inference etc. Support is also available for converting Transformers sharded checkpoints to Megatron-LM checkpoints of variable tensor and pipeline parallel sizes for large scale training.
Pre-Requisites
You will need to install the latest pytorch, cuda, nccl, and NVIDIA APEX releases and the nltk library. See the documentation for more details. Another way to set up the environment is to pull an NVIDIA PyTorch Container that comes with all the required installations from NGC. Below is a step-by-step method to set up the conda environment:
Create a virtual environment:

conda create --name ml

Assuming that the machine has CUDA 11.3 installed, install the corresponding PyTorch GPU version:

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Install NVIDIA APEX:

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
cd ..

Install Megatron-LM:

git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM
git checkout core_r0.5.0
pip install --no-use-pep517 -e .

Accelerate Megatron-LM Plugin
Important features are directly supported via the accelerate config command. An example of the corresponding questions for using Megatron-LM features is shown below:

:~$ accelerate config --config_file "megatron_gpt_config.yaml"
In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Do you want to use DeepSpeed? [yes/NO]:
Do you want to use FullyShardedDataParallel? [yes/NO]:
Do you want to use Megatron-LM? [yes/NO]: yes
What is the Tensor Parallelism degree/size? [1]: 2
Do you want to enable Sequence Parallelism? [YES/no]:
What is the Pipeline Parallelism degree/size? [1]: 2
What is the number of micro-batches? [1]: 2
Do you want to enable selective activation recomputation? [YES/no]:
Do you want to use distributed optimizer which shards optimizer state and gradients across data parallel ranks? [YES/no]:
What is the gradient clipping value based on global L2 Norm (0 to disable)? [1.0]:
How many GPU(s) should be used for distributed training? [1]: 4
Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: bf16

The resulting config is shown below:

~$ cat megatron_gpt_config.yaml
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MEGATRON_LM
downcast_bf16: 'no'
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config:
  megatron_lm_gradient_clipping: 1.0
  megatron_lm_num_micro_batches: 2
  megatron_lm_pp_degree: 2
  megatron_lm_recompute_activations: true
  megatron_lm_sequence_parallelism: true
  megatron_lm_tp_degree: 2
  megatron_lm_use_distributed_optimizer: true
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
use_cpu: false

We will take the example of GPT pre-training.
The minimal changes required to the official run_clm_no_trainer.py to use Megatron-LM are as follows:
As Megatron-LM uses its own implementation of the optimizer, a scheduler compatible with it needs to be used. As such, only Megatron-LM's scheduler is supported, and you will need to create an accelerate.utils.MegatronLMDummyScheduler. An example is given below:

from accelerate.utils import MegatronLMDummyScheduler

if accelerator.distributed_type == DistributedType.MEGATRON_LM:
    lr_scheduler = MegatronLMDummyScheduler(
        optimizer=optimizer,
        total_num_steps=args.max_train_steps,
        warmup_num_steps=args.num_warmup_steps,
    )
else:
    lr_scheduler = get_scheduler(
        name=args.lr_scheduler_type,
        optimizer=optimizer,
        num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
        num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
    )

Getting the details of the total batch size now needs to be cognizant of the tensor and pipeline parallel sizes. An example of getting the effective total batch size is shown below:

if accelerator.distributed_type == DistributedType.MEGATRON_LM:
    total_batch_size = accelerator.state.megatron_lm_plugin.global_batch_size
else:
    total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps

When using Megatron-LM, the losses are already averaged across the data parallel group:

if accelerator.distributed_type == DistributedType.MEGATRON_LM:
    losses.append(loss)
else:
    losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))

if accelerator.distributed_type == DistributedType.MEGATRON_LM:
    losses = torch.tensor(losses)
else:
    losses = torch.cat(losses)

For Megatron-LM, we need to save the model using accelerator.save_state:

if accelerator.distributed_type == DistributedType.MEGATRON_LM:
    accelerator.save_state(args.output_dir)
else:
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(
        args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
    )

That's it! We are good to go 🚀. Please find the example script in the examples folder at the path accelerate/examples/by_feature/megatron_lm_gpt_pretraining.py. Let's run it for the gpt-large model architecture using 4 A100-80GB GPUs.

accelerate launch --config_file megatron_gpt_config.yaml \
  examples/by_feature/megatron_lm_gpt_pretraining.py \
  --config_name "gpt2-large" \
  --tokenizer_name "gpt2-large" \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --block_size 1024 \
  --learning_rate 5e-5 \
  --per_device_train_batch_size 24 \
  --per_device_eval_batch_size 24 \
  --num_train_epochs 5 \
  --with_tracking \
  --report_to "wandb" \
  --output_dir "awesome_model"

Below are some important excerpts from the output logs:

Loading extension module fused_dense_cuda...
>>> done with compiling and loading fused kernels. Compilation time: 3.569 seconds
> padded vocab (size: 50257) with 175 dummy tokens (new size: 50432)
Building gpt model in the pre-training mode.
The Megatron LM model weights are initialized at random in `accelerator.prepare`. Please use `accelerator.load_checkpoint` to load a pre-trained checkpoint matching the distributed setup.
Preparing dataloader Preparing dataloader Preparing model > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 210753280 > number of parameters on (tensor, pipeline) model parallel rank (1, 1): 209445120 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 210753280 > number of parameters on (tensor, pipeline) model parallel rank (0, 1): 209445120 Preparing optimizer Preparing scheduler > learning rate decay style: linear 10/10/2022 22:57:22 - INFO - __main__ - ***** Running training ***** 10/10/2022 22:57:22 - INFO - __main__ - Num examples = 2318 10/10/2022 22:57:22 - INFO - __main__ - Num Epochs = 5 10/10/2022 22:57:22 - INFO - __main__ - Instantaneous batch size per device = 24 10/10/2022 22:57:22 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 48 10/10/2022 22:57:22 - INFO - __main__ - Gradient Accumulation steps = 1 10/10/2022 22:57:22 - INFO - __main__ - Total optimization steps = 245 20%|████████████▍ | 49/245 [01:04<04:09, 1.27s/it] 10/10/2022 22:58:29 - INFO - __main__ - epoch 0: perplexity: 1222.1594275215962 eval_loss: 7.10837459564209 40%|████████████████████████▊ | 98/245 [02:10<03:07, 1.28s/it] 10/10/2022 22:59:35 - INFO - __main__ - epoch 1: perplexity: 894.5236583794557 eval_loss: 6.796291351318359 60%|████████████████████████████████████▌ | 147/245 [03:16<02:05, 1.28s/it] 10/10/2022 23:00:40 - INFO - __main__ - epoch 2: perplexity: 702.8458788508042 eval_loss: 6.555137634277344 80%|████████████████████████████████████████████████▊ | 196/245 [04:22<01:02, 1.28s/it] 10/10/2022 23:01:46 - INFO - __main__ - epoch 3: perplexity: 600.3220028695281 eval_loss: 6.39746618270874 100%|█████████████████████████████████████████████████████████████| 245/245 [05:27<00:00, 1.28s/it] There are a large number of other options/features that one can set using accelerate.utils.MegatronLMPlugin . Advanced features to leverage writing custom train step and Megatron-LM Indexed Datasets For leveraging more features, please go through below details. Below is an example of changes required to customize the Train Step while using Megatron-LM. You will implement the accelerate.utils.AbstractTrainStep or inherit from their corresponding children accelerate.utils.GPTTrainStep , accelerate.utils.BertTrainStep or accelerate.utils.T5TrainStep . Copied from accelerate.utils import MegatronLMDummyScheduler, GPTTrainStep, avg_losses_across_data_parallel_group # Custom loss function for the Megatron model class GPTTrainStepWithCustomLoss ( GPTTrainStep ): def __init__ ( self, megatron_args, **kwargs ): super ().__init__(megatron_args) self.kwargs = kwargs def get_loss_func ( self ): def loss_func ( inputs, loss_mask, output_tensor ): batch_size, seq_length = output_tensor.shape losses = output_tensor. float () loss_mask = loss_mask.view(- 1 ). float () loss = losses.view(- 1 ) * loss_mask # Resize and average loss per sample loss_per_sample = loss.view(batch_size, seq_length). sum (axis= 1 ) loss_mask_per_sample = loss_mask.view(batch_size, seq_length). sum (axis= 1 ) loss_per_sample = loss_per_sample / loss_mask_per_sample # Calculate and scale weighting weights = torch.stack([(inputs == kt). float () for kt in self.kwargs[ "keytoken_ids" ]]). 
sum(axis=[0, 2])
                weights = 1.0 + self.kwargs["alpha"] * weights
                # Calculate weighted average
                weighted_loss = (loss_per_sample * weights).mean()
                # Reduce loss across data parallel groups
                averaged_loss = avg_losses_across_data_parallel_group([weighted_loss])
                return weighted_loss, {"lm loss": averaged_loss[0]}

            return loss_func

        def get_forward_step_func(self):
            def forward_step(data_iterator, model):
                """Forward step."""
                # Get the batch.
                tokens, labels, loss_mask, attention_mask, position_ids = self.get_batch(data_iterator)
                output_tensor = model(tokens, position_ids, attention_mask, labels=labels)
                return output_tensor, partial(self.loss_func, tokens, loss_mask)

            return forward_step

    def main():
        # Custom loss function for the Megatron model
        keytoken_ids = []
        keywords = ["plt", "pd", "sk", "fit", "predict", " plt", " pd", " sk", " fit", " predict"]
        for keyword in keywords:
            ids = tokenizer([keyword]).input_ids[0]
            if len(ids) == 1:
                keytoken_ids.append(ids[0])
        accelerator.print(f"Keytoken ids: {keytoken_ids}")
        accelerator.state.megatron_lm_plugin.custom_train_step_class = GPTTrainStepWithCustomLoss
        accelerator.state.megatron_lm_plugin.custom_train_step_kwargs = {
            "keytoken_ids": keytoken_ids,
            "alpha": 0.25,
        }

For using the Megatron-LM datasets, a few more changes are required. Dataloaders for these datasets are available only on rank 0 of each tensor parallel group. As such, there are ranks where the dataloader won't be available, and this requires tweaks to the training loop. Being able to do all this shows how flexible and extensible Accelerate is. The changes required are as follows.
a. For Megatron-LM indexed datasets, we need to use MegatronLMDummyDataLoader and pass the required dataset args to it, such as data_path, seq_length etc. See here for the list of available args.

from accelerate.utils import MegatronLMDummyDataLoader

megatron_dataloader_config = {
    "data_path": args.data_path,
    "splits_string": args.splits_string,
    "seq_length": args.block_size,
    "micro_batch_size": args.per_device_train_batch_size,
}
megatron_dataloader = MegatronLMDummyDataLoader(**megatron_dataloader_config)
accelerator.state.megatron_lm_plugin.megatron_dataset_flag = True

b. megatron_dataloader is repeated 3 times to get the training, validation and test dataloaders as per the args.splits_string proportions:

model, optimizer, lr_scheduler, train_dataloader, eval_dataloader, _ = accelerator.prepare(
    model, optimizer, lr_scheduler, megatron_dataloader, megatron_dataloader, megatron_dataloader
)

c. Changes to the training and evaluation loops, as the dataloader is only available on tensor parallel rank 0. We need to iterate only if the dataloader isn't None, else provide an empty dict. As such, we loop using a while loop and break when completed_steps equals args.max_train_steps. This is similar to the Megatron-LM setup wherein the user has to provide max_train_steps when using Megatron-LM indexed datasets.

while completed_steps < args.max_train_steps:
    model.train()
    batch = next(train_dataloader) if train_dataloader is not None else {}
    outputs = model(**batch)
    loss = outputs.loss
    ...
    if completed_steps % eval_interval == 0:
        eval_completed_steps = 0
        losses = []
        while eval_completed_steps < eval_iters:
            model.eval()
            with torch.no_grad():
                batch = next(eval_dataloader) if eval_dataloader is not None else {}
                outputs = model(**batch)

Utility for Checkpoint reshaping and interoperability
The scripts for these are present in the Transformers library under the respective models. Currently, it is available for the GPT model: checkpoint_reshaping_and_interoperability.py. Below is an example of converting a checkpoint from Megatron-LM to a universal Transformers sharded checkpoint:

python checkpoint_reshaping_and_interoperability.py \
  --convert_checkpoint_from_megatron_to_transformers \
  --load_path "gpt/iter_0005000" \
  --save_path "gpt/trfs_checkpoint" \
  --max_shard_size "200MB" \
  --tokenizer_name "gpt2" \
  --print-checkpoint-structure

Conversion of a checkpoint from Transformers to Megatron with tp_size=2, pp_size=2 and dp_size=2:

python checkpoint_utils/megatron_gpt2/checkpoint_reshaping_and_interoperability.py \
  --load_path "gpt/trfs_checkpoint" \
  --save_path "gpt/megatron_lm_checkpoint" \
  --target_tensor_model_parallel_size 2 \
  --target_pipeline_model_parallel_size 2 \
  --target_data_parallel_size 2 \
  --target_params_dtype "bf16" \
  --make_vocab_size_divisible_by 128 \
  --use_distributed_optimizer \
  --print-checkpoint-structure

Megatron-LM GPT models support returning logits and the megatron_generate function for text generation
Returning logits requires setting return_logits=True in MegatronLMPlugin as shown below. The logits are available on the last stage of the pipeline.

megatron_lm_plugin = MegatronLMPlugin(return_logits=True)

The megatron_generate method for the Megatron-LM GPT model uses tensor and pipeline parallelism to complete generations for a batch of inputs when using greedy decoding with/without top_k/top_p sampling, and for individual prompt inputs when using beam search decoding. Only a subset of the features of transformers generate is supported. This helps in using large models via tensor and pipeline parallelism for generation (it already does key-value caching and uses fused kernels by default). It requires the data parallel size to be 1, and sequence parallelism and activation checkpointing to be disabled. It also requires specifying the paths to the tokenizer's vocab file and merges file. The example below shows how to configure and use the megatron_generate method for a Megatron-LM GPT model.

# specifying tokenizer's vocab and merges file
vocab_file = os.path.join(args.resume_from_checkpoint, "vocab.json")
merge_file = os.path.join(args.resume_from_checkpoint, "merges.txt")
other_megatron_args = {"vocab_file": vocab_file, "merge_file": merge_file}
megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)

# inference using `megatron_generate` functionality
tokenizer.pad_token = tokenizer.eos_token
max_new_tokens = 64
batch_texts = [
    "Are you human?",
    "The purpose of life is",
    "The arsenal was constructed at the request of",
    "How are you doing these days?",
]
batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True)

# top-p sampling
generated_tokens = model.megatron_generate(
    batch_encodings["input_ids"],
    batch_encodings["attention_mask"],
    max_new_tokens=max_new_tokens,
    top_p=0.8,
    top_p_decay=0.5,
    temperature=0.9,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)
# top-k sampling
generated_tokens = model.megatron_generate(
    batch_encodings["input_ids"],
    batch_encodings["attention_mask"],
    max_new_tokens=max_new_tokens,
    top_k=50,
    temperature=0.9,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)

# adding `bos` token at the start
generated_tokens = model.megatron_generate(
    batch_encodings["input_ids"],
    batch_encodings["attention_mask"],
    max_new_tokens=max_new_tokens,
    add_BOS=True,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)

# beam search => only takes single prompt
batch_texts = ["The purpose of life is"]
batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True)
generated_tokens = model.megatron_generate(
    batch_encodings["input_ids"],
    batch_encodings["attention_mask"],
    max_new_tokens=max_new_tokens,
    num_beams=20,
    length_penalty=1.5,
)
decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy())
accelerator.print(decoded_preds)

An end-to-end example of using the megatron_generate method for a Megatron-LM GPT model is available at megatron_gpt2_generation.py with the config file megatron_lm_gpt_generate_config.yaml. The bash script with the accelerate launch command is available at megatron_lm_gpt_generate.sh. The output logs of the script are available at megatron_lm_gpt_generate.log.
Support for ROPE and ALiBi Positional embeddings and Multi-Query Attention
For ROPE/ALiBi attention, pass position_embedding_type with ("absolute" | "rotary" | "alibi") to MegatronLMPlugin as shown below.

other_megatron_args = {"position_embedding_type": "alibi"}
megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)

For Multi-Query Attention, pass attention_head_type with ("multihead" | "multiquery") to MegatronLMPlugin as shown below.

other_megatron_args = {"attention_head_type": "multiquery"}
megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args)

Caveats
Supports Transformers GPT2, Megatron-BERT and T5 models. This covers Decoder only, Encoder only and Encoder-Decoder model classes.
Only the loss is returned from the model forward pass, as there is quite a complex interplay of pipeline, tensor and data parallelism behind the scenes. The model(**batch_data) call returns loss(es) averaged across the data parallel ranks. This is fine for most cases wherein pre-training jobs are run using Megatron-LM features and the perplexity can easily be computed from the loss. For the GPT model, returning logits in addition to loss(es) is supported. These logits aren't gathered across data parallel ranks; use accelerate.utils.gather_across_data_parallel_groups to gather logits across data parallel ranks. These logits, along with the labels, can be used for computing various performance metrics.
The main process is the last rank, as the losses/logits are available in the last stage of the pipeline. accelerator.is_main_process and accelerator.is_local_main_process return True for the last rank when using the Megatron-LM integration.
In the accelerator.prepare call, a Megatron-LM model corresponding to a given Transformers model is created with random weights. Please use accelerator.load_state to load a Megatron-LM checkpoint with matching TP, PP and DP partitions.
Currently, checkpoint reshaping and interoperability support is only available for GPT. It will soon be extended to BERT and T5.
gradient_accumulation_steps needs to be 1.
When using Megatron-LM, micro-batches in the pipeline parallelism setting are synonymous with gradient accumulation.
When using Megatron-LM, use accelerator.save_state and accelerator.load_state for saving and loading checkpoints.
Below are the mappings from Megatron-LM model architectures to the equivalent transformers model architectures. Only these transformers model architectures are supported.
a. Megatron-LM BertModel: transformers models with megatron-bert in the config's model type, e.g., MegatronBERT
b. Megatron-LM GPTModel: transformers models with gpt2 in the config's model type, e.g., OpenAI GPT2
c. Megatron-LM T5Model: transformers models with t5 in the config's model type, e.g., T5 and MT5 |
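As a wrap-up of the Megatron-LM integration above, here is a hedged sketch of configuring the plugin programmatically. Only return_logits and other_megatron_args are confirmed by the text; the position_embedding_type value is illustrative, and whether the plugin can be passed directly to Accelerator (rather than only through the accelerate config file) is an assumption about recent Accelerate versions.

from accelerate import Accelerator
from accelerate.utils import MegatronLMPlugin

# Plugin options confirmed in the text above: return_logits and other_megatron_args.
# TP/PP degrees, micro-batches, etc. are assumed to come from the accelerate config
# file (megatron_gpt_config.yaml) shown earlier.
other_megatron_args = {"position_embedding_type": "rotary"}  # illustrative value
megatron_lm_plugin = MegatronLMPlugin(
    return_logits=True,
    other_megatron_args=other_megatron_args,
)

# Assumption: recent Accelerate versions accept the plugin directly; otherwise rely
# solely on the config file and `accelerate launch`.
accelerator = Accelerator(megatron_lm_plugin=megatron_lm_plugin)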
Creating_and_sharing_a_new_evaluation.txt | Creating and sharing a new evaluation
Setup
Before you can create a new metric, make sure you have all the necessary dependencies installed:

pip install evaluate[template]

Also make sure your Hugging Face token is registered so you can connect to the Hugging Face Hub:

huggingface-cli login

Create
All evaluation modules, be they metrics, comparisons, or measurements, live on the 🤗 Hub in a Space (see for example Accuracy). In principle, you could set up a new Space and add a new module following the same structure. However, we added a CLI that makes creating a new evaluation module much easier:

evaluate-cli create "My Metric" --module_type "metric"

This will create a new Space on the 🤗 Hub, clone it locally, and populate it with a template. Instructions on how to fill in the template will be displayed in the terminal, but are also explained here in more detail. For more information about Spaces, see the Spaces documentation.
Module script
The evaluation module script (the file with the *.py suffix) is the core of the new module and includes all the code for computing the evaluation.
Attributes
Start by adding some information about your evaluation module in EvaluationModule._info(). The most important attributes you should specify are:
EvaluationModuleInfo.description provides a brief description of your evaluation module.
EvaluationModuleInfo.citation contains a BibTex citation for the evaluation module.
EvaluationModuleInfo.inputs_description describes the expected inputs and outputs. It may also provide an example usage of the evaluation module.
EvaluationModuleInfo.features defines the name and type of the predictions and references. This has to be either a single datasets.Features object or a list of datasets.Features objects if multiple input types are allowed.
Then, we can move on to prepare everything before the actual computation.
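To make the attributes above concrete, here is a minimal sketch of what _info() might look like for a toy exact-match metric. The class name, description text and feature types are illustrative, and it assumes evaluate.Metric and evaluate.MetricInfo as the concrete metric-flavored classes; the _compute() method, covered in the next section, is still needed to complete the module.

import datasets
import evaluate

class MyExactMatch(evaluate.Metric):
    def _info(self):
        # Attributes described above: description, citation, inputs_description, features.
        return evaluate.MetricInfo(
            description="Fraction of predictions that exactly match the reference.",
            citation="",
            inputs_description="predictions: list of int, references: list of int",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int64"),
                    "references": datasets.Value("int64"),
                }
            ),
        )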
Download
Some evaluation modules require external data, such as NLTK, which requires resources, or the BLEURT metric, which requires checkpoints. You can implement these downloads in EvaluationModule._download_and_prepare(), which downloads and caches the resources via the dl_manager. A simplified example of how BLEURT downloads and loads a checkpoint:

def _download_and_prepare(self, dl_manager):
    model_path = dl_manager.download_and_extract(CHECKPOINT_URLS[self.config_name])
    self.scorer = score.BleurtScorer(os.path.join(model_path, self.config_name))

Or if you need to download the NLTK "punkt" resources:

def _download_and_prepare(self, dl_manager):
    import nltk
    nltk.download("punkt")

Next, we need to define how the computation of the evaluation module works.
Compute
The computation is performed in the EvaluationModule._compute() method. It takes the same arguments as EvaluationModuleInfo.features and should then return the result as a dictionary. Here is an example of an exact match metric:

def _compute(self, references, predictions):
    em = sum([r == p for r, p in zip(references, predictions)]) / len(references)
    return {"exact_match": em}

This method is used when you call .compute() later on.
Readme
When you use the evaluate-cli to set up the evaluation module, the README structure and instructions are created automatically. It should include a general description of the metric, information about its input/output format, examples, as well as information about its limitations or biases and references.
Requirements
If your evaluation module has additional dependencies (e.g. sklearn or nltk), the requirements.txt file is the place to put them. The file follows the pip format and you can list all dependencies there.
App
The app.py is where the Spaces widget lives. In general it looks like the following and does not require any changes:

import evaluate
from evaluate.utils import launch_gradio_widget

module = evaluate.load("lvwerra/element_count")
launch_gradio_widget(module)

If you want a custom widget, you can add your gradio app here.
Push to Hub
Finally, when you are done with all the above changes, it is time to push your evaluation module to the Hub. To do so, navigate to the folder of your module and git add/commit/push the changes to the Hub:

cd PATH_TO_MODULE
git add .
git commit -m "Add my new, shiny module."
git push

Tada 🎉! Your evaluation module is now on the 🤗 Hub and ready to be used by everybody! |
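Once the module is pushed (or while iterating locally), it can be exercised with evaluate.load and compute; the repository id below is a placeholder for your own Space, and the printed value assumes the exact-match example above.

import evaluate

# Load the module from its Hub Space (or pass a local folder path while developing).
module = evaluate.load("my-username/my-metric")  # placeholder repo id
results = module.compute(
    predictions=[0, 1, 1],
    references=[0, 1, 0],
)
print(results)  # e.g. {"exact_match": 0.666...} for the sketch above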
Using_Flair_at_Hugging_Face.txt | Using Flair at Hugging Face
Flair is a very simple framework for state-of-the-art NLP, developed by Humboldt University of Berlin and friends.
Exploring Flair in the Hub
You can find flair models by filtering at the left of the models page. All models on the Hub come with these useful features:
An automatically generated model card with a brief description.
An interactive widget you can use to play with the model directly in the browser.
An Inference API that allows you to make inference requests.
Installation
To get started, you can follow the Flair installation guide. You can also use the following one-line install through pip:

pip install -U flair

Using existing models
All flair models can easily be loaded from the Hub:

from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-multi")

Once loaded, you can use predict() to perform inference:

sentence = Sentence("George Washington ging nach Washington.")
tagger.predict(sentence)

# print sentence
print(sentence)

It outputs the following:

Sentence[6]: "George Washington ging nach Washington."
→ ["George Washington"/PER, "Washington"/LOC]

If you want to load a specific Flair model, you can click "Use in Flair" in the model card and you will be given a working snippet!
Additional resources
Flair repository
Flair docs
Official Flair models on the Hub (mainly trained by @alanakbik and @stefan-it) |
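Building on the Flair example above, here is a small sketch of iterating over the predicted entity spans instead of printing the whole sentence; the exact label-access attribute names can vary slightly between Flair versions, so treat this as indicative rather than canonical.

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-multi")
sentence = Sentence("George Washington ging nach Washington.")
tagger.predict(sentence)

# Iterate over the predicted named-entity spans and print text, tag and confidence.
for entity in sentence.get_spans("ner"):
    label = entity.get_label("ner")
    print(entity.text, label.value, round(label.score, 3))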
AWS_Neuron.txt | AWS Neuron
Diffusers functionalities are available on AWS Inf2 instances, which are EC2 instances powered by Neuron machine learning accelerators. These instances aim to provide better compute performance (higher throughput, lower latency) with good cost-efficiency, making them good candidates for AWS users to deploy diffusion models to production.
Optimum Neuron is the interface between Hugging Face libraries and AWS Accelerators, including AWS Trainium and AWS Inferentia. It supports many of the features in Diffusers with similar APIs, so it is easier to learn if you're already familiar with Diffusers.
Once you have created an AWS Inf2 instance, install Optimum Neuron:

python -m pip install --upgrade-strategy eager optimum[neuronx]

We provide a pre-built Hugging Face Neuron Deep Learning AMI (DLAMI) and Optimum Neuron containers for Amazon SageMaker. It's recommended to correctly set up your environment. The example below demonstrates how to generate images with the Stable Diffusion XL model on an inf2.8xlarge instance (you can switch to cheaper inf2.xlarge instances once the model is compiled). To generate images, use the NeuronStableDiffusionXLPipeline class, which is similar to the StableDiffusionXLPipeline class in Diffusers. Unlike Diffusers, you need to compile the models in the pipeline to the Neuron format, .neuron. Launch the following command to export the model to the .neuron format:

optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \
  --batch_size 1 \
  --height 1024 `# height in pixels of generated image, e.g. 768, 1024` \
  --width 1024 `# width in pixels of generated image, e.g. 768, 1024` \
  --num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron_xl/

Now generate some images with the pre-compiled SDXL model:

>>> from optimum.neuron import NeuronStableDiffusionXLPipeline
>>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
>>> prompt = "a pig with wings flying in floating US dollar banknotes in the air, skyscrapers behind, warm color palette, muted colors, detailed, 8k"
>>> image = stable_diffusion_xl(prompt).images[0]

Feel free to check out more guides and examples on different use cases in the Optimum Neuron documentation! |
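As a small follow-on to the AWS Neuron example (not taken from the Optimum Neuron docs themselves), the compiled pipeline can be reused for several prompts and the resulting PIL images saved to disk; the prompts and file names below are arbitrary.

from optimum.neuron import NeuronStableDiffusionXLPipeline

# Reload the pre-compiled pipeline from the export directory created above.
pipe = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")

prompts = [
    "a watercolor painting of a lighthouse at dawn",
    "a cyberpunk street market at night, neon lights, rain",
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]  # returns PIL images, as in Diffusers
    image.save(f"sdxl_neuron_{i}.png")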
Search.txt | Search
You can now easily search anything on the Hub with Full-text search. We index model cards, dataset cards, and Spaces app.py files.
Go directly to https://huggingface.co/search or, using the search bar at the top of https://huggingface.co, select "Try Full-text search" to help find what you seek on the Hub across models, datasets, and Spaces.
Filter with ease
By default, models, datasets, and Spaces are all searched when you enter a query. If you prefer, you can filter the search to only models, datasets, or Spaces. Moreover, you can copy and share the URL from your browser's address bar, which contains the filter information as a URL query. For example, searching for the query llama with a filter to show Spaces only gives the URL https://huggingface.co/search/full-text?q=llama&type=space |
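If you want to build such shareable full-text search URLs from code, the query-string pattern above can be reproduced with the standard library; note that the q and type parameter names are inferred from the example URL rather than from a documented API, and the non-space type values in the comment are assumptions.

from urllib.parse import urlencode

def full_text_search_url(query: str, repo_type: str | None = None) -> str:
    # Build a shareable full-text search URL like the example above.
    params = {"q": query}
    if repo_type:  # e.g. "space"; "model" and "dataset" are assumed analogues
        params["type"] = repo_type
    return "https://huggingface.co/search/full-text?" + urlencode(params)

print(full_text_search_url("llama", "space"))
# https://huggingface.co/search/full-text?q=llama&type=space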
Table_Question_Answering.txt | Table Question Answering
Table Question Answering (Table QA) is the task of answering a question about information in a given table. For more details about the table-question-answering task, check out its dedicated page! You will find examples and related materials.
Recommended models
google/tapas-base-finetuned-wtq: A robust table question answering model.
Explore all available models and find the one that suits you best here.
Using the API (Python):

import requests

API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "query": "How many stars does the transformers repository have?",
        "table": {
            "Repository": ["Transformers", "Datasets", "Tokenizers"],
            "Stars": ["36542", "4512", "3934"],
            "Contributors": ["651", "77", "34"],
            "Programming language": ["Python", "Python", "Rust, Python and NodeJS"]
        }
    },
})

To use the Python client, see huggingface_hub's package reference.
API specification
Request
Payload
inputs* (object): One (table, question) pair to answer.
table* (object): The table to serve as context for the questions.
question* (string): The question to be answered about the table.
parameters (object):
padding (enum): Possible values: do_not_pad, longest, max_length.
sequential (boolean): Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the inference to be done sequentially to extract relations within sequences, given their conversational nature.
truncation (boolean): Activates and controls truncation.
Some options can be configured by passing headers to the Inference API. Here are the available headers:
Headers
authorization (string): Authentication header in the form 'Bearer: hf_****', where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
x-use-cache (boolean, default true): There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here.
x-wait-for-model (boolean, default false): If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here.
For more information about Inference API headers, check out the parameters guide.
Response
Body (array of objects): The output is an array of objects.
answer (string): The answer to the question given the table. If there is an aggregator, the answer will be preceded by AGGREGATOR >.
coordinates (array[]): Coordinates of the cells of the answers.
cells (string[]): List of strings made up of the answer cell values.
aggregator (string): If the model has an aggregator, this returns the aggregator. |
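Tying the Table QA request and response sections together, here is a hedged sketch of consuming the documented response fields; the table values are taken from the example above, and the handling of a single object versus an array is a defensive assumption rather than a guarantee of the API.

import requests

API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
headers = {"Authorization": "Bearer hf_***"}

payload = {
    "inputs": {
        "query": "How many stars does the transformers repository have?",
        "table": {
            "Repository": ["Transformers", "Datasets", "Tokenizers"],
            "Stars": ["36542", "4512", "3934"],
        },
    }
}
response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
result = response.json()

# Normalize to a list, since the spec above describes an array of objects.
items = result if isinstance(result, list) else [result]
for item in items:
    print("answer:", item.get("answer"))          # possibly prefixed with "AGGREGATOR > "
    print("cells:", item.get("cells"))            # the raw answer cell values
    print("coordinates:", item.get("coordinates"))
    print("aggregator:", item.get("aggregator"))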
Hub_API_Endpoints.txt | Hub API Endpoints Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Hub API Endpoints Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Hub API Endpoints We have open endpoints that you can use to retrieve information from the Hub as well as perform certain actions such as creating model, dataset or Space repos. We offer a wrapper Python library, huggingface_hub , that allows easy access to these endpoints. We also provide webhooks to receive real-time incremental info about repos. Enjoy! The base URL for those endpoints below is https://huggingface.co . For example, to construct the /api/models call below, one can call the URL https://huggingface.co/api/models The Hub API Playground Want to try out our API? Try it out now on our Playground ! Repo listing API The following endpoints help get information about models, datasets, and Spaces stored on the Hub. When making API calls to retrieve information about repositories, the createdAt attribute indicates the time when the respective repository was created. It's important to note that there is a unique value, 2022-03-02T23:29:04.000Z assigned to all repositories that were created before we began storing creation dates. GET /api/models Get information from all models in the Hub. The response is paginated, use the Link header to get the next pages. You can specify additional parameters to have more specific results. search : Filter based on substrings for repos and their usernames, such as resnet or microsoft author : Filter models by an author or organization, such as huggingface or microsoft filter : Filter based on tags, such as text-classification or spacy . sort : Property to use when sorting, such as downloads or author . 
direction : Direction in which to sort, such as -1 for descending, and anything else for ascending. limit : Limit the number of models fetched. full : Whether to fetch most model data, such as all tags, the files, etc. config : Whether to also fetch the repo config. Payload: Copied params = { "search" : "search" , "author" : "author" , "filter" : "filter" , "sort" : "sort" , "direction" : "direction" , "limit" : "limit" , "full" : "full" , "config" : "config" } This is equivalent to huggingface_hub.list_models() . GET /api/models/{repo_id} or /api/models/{repo_id}/revision/{revision} Get all information for a specific model. This is equivalent to huggingface_hub.model_info(repo_id, revision) . GET /api/models-tags-by-type Gets all the available model tags hosted in the Hub. This is equivalent to huggingface_hub.get_model_tags() . GET /api/datasets Get information from all datasets in the Hub. The response is paginated, use the Link header to get the next pages. You can specify additional parameters to have more specific results. search : Filter based on substrings for repos and their usernames, such as pets or microsoft author : Filter datasets by an author or organization, such as huggingface or microsoft filter : Filter based on tags, such as task_categories:text-classification or languages:en . sort : Property to use when sorting, such as downloads or author . direction : Direction in which to sort, such as -1 for descending, and anything else for ascending. limit : Limit the number of datasets fetched. full : Whether to fetch most dataset data, such as all tags, the files, etc. Payload: Copied params = { "search" : "search" , "author" : "author" , "filter" : "filter" , "sort" : "sort" , "direction" : "direction" , "limit" : "limit" , "full" : "full" , "config" : "config" } This is equivalent to huggingface_hub.list_datasets() . GET /api/datasets/{repo_id} or /api/datasets/{repo_id}/revision/{revision} Get all information for a specific dataset. full : Whether to fetch most dataset data, such as all tags, the files, etc. Payload: Copied params = { "full" : "full" } This is equivalent to huggingface_hub.dataset_info(repo_id, revision) . GET /api/datasets/{repo_id}/parquet Get the list of auto-converted parquet files. Append the subset and the split to the URL to get the list of files for a specific subset and split: GET /api/datasets/{repo_id}/parquet/{subset} GET /api/datasets/{repo_id}/parquet/{subset}/{split} GET /api/datasets/{repo_id}/parquet/{subset}/{split}/{n}.parquet Get the nth shard of the auto-converted parquet files, for a specific subset (also called “config”) and split. GET /api/datasets/{repo_id}/croissant Get the Croissant metadata. More details at https://huggingface.co/docs/datasets-server/croissant . GET /api/datasets-tags-by-type Gets all the available dataset tags hosted in the Hub. This is equivalent to huggingface_hub.get_dataset_tags() . GET /api/spaces Get information from all Spaces in the Hub. The response is paginated, use the Link header to get the next pages. You can specify additional parameters to have more specific results. search : Filter based on substrings for repos and their usernames, such as resnet or microsoft author : Filter models by an author or organization, such as huggingface or microsoft filter : Filter based on tags, such as text-classification or spacy . sort : Property to use when sorting, such as downloads or author . direction : Direction in which to sort, such as -1 for descending, and anything else for ascending. 
limit : Limit the number of Spaces fetched. full : Whether to fetch most Space data, such as all tags, the files, etc. Payload: Copied params = { "search" : "search" , "author" : "author" , "filter" : "filter" , "sort" : "sort" , "direction" : "direction" , "limit" : "limit" , "full" : "full" , "config" : "config" } This is equivalent to huggingface_hub.list_spaces() . GET /api/spaces/{repo_id} or /api/spaces/{repo_id}/revision/{revision} Get all information for a specific Space. This is equivalent to huggingface_hub.space_info(repo_id, revision) . Repo API The following endpoints manage repository settings like creating and deleting a repository. POST /api/repos/create Create a repository. It's a model repo by default. Parameters: type : Type of repo (dataset or space; model by default). name : Name of repo. organization : Name of organization (optional). private : Whether the repo is private. sdk : When the type is space (streamlit, gradio, docker or static) Payload: Copied payload = { "type" : "model" , "name" : "name" , "organization" : "organization" , "private" : "private" , "sdk" : "sdk" } This is equivalent to huggingface_hub.create_repo() . DELETE /api/repos/delete Delete a repository. It's a model repo by default. Parameters: type : Type of repo (dataset or space; model by default). name : Name of repo. organization : Name of organization (optional). Payload: Copied payload = { "type" : "model" , "name" : "name" , "organization" : "organization" , } This is equivalent to huggingface_hub.delete_repo() . PUT /api/repos/{repo_type}/{repo_id}/settings Update repo visibility. Payload: Copied payload = { "private" : "private" , } This is equivalent to huggingface_hub.update_repo_visibility() . POST /api/repos/move Move a repository (rename within the same namespace or transfer from user to organization). Parameters: fromRepo : repo to rename. toRepo : new name of the repo. type : Type of repo (dataset or space; model by default). Payload: Copied payload = { "fromRepo" : "namespace/repo_name" , "toRepo" : "namespace2/repo_name2" , "type" : "model" , } This is equivalent to huggingface_hub.move_repo() . User API The following endpoint gets information about a user. GET /api/whoami-v2 Get username and organizations the user belongs to. Payload: Copied headers = { "authorization" : "Bearer $token" } This is equivalent to huggingface_hub.whoami() . Organization API The following endpoint gets a list of the Organization members. GET /api/organizations/{organization_name}/members Get the organization members. Payload: Copied headers = { "authorization" : "Bearer $token" } This is equivalent to huggingface_hub.list_organization_members() . Resource Groups API The following endpoints manage resource groups. Resource groups are an Enterprise feature. GET /api/organizations/{name}/resource-groups Get all resource groups in an organization that the authenticated user has access to view. GET /api/organizations/{name}/resource-groups/{resourceGroupId} Get detailed information about a specific resource group. POST /api/organizations/{name}/resource-groups Create a new resource group in the organization.
Parameters: name : Name of the resource group (required) description : Description of the resource group (optional) users : List of users and their roles in the resource group (optional) repos : List of repositories (optional) autoJoin : Settings for automatic user joining (optional) Payload: Copied payload = { "name" : "name" , "description" : "description" , "users" : [ { "user" : "username" , "role" : "admin" // or "write" or "read" } ], "repos" : [ { "type" : "dataset" , "name" : "huggingface/repo" } ] } PATCH /api/organizations/{name}/resource-groups/{resourceGroupId} Update a resource group’s metadata. Parameters: name : New name for the resource group (optional) description : New description for the resource group (optional) Payload: Copied payload = { "name" : "name" , "description" : "description" } POST /api/organizations/{name}/resource-groups/{resourceGroupId}/settings Update a resource group’s settings. Payload: Copied payload = { "autoJoin" : { "enabled" : true , "role" : "read" // or "write" or "admin" } } DELETE /api/organizations/{name}/resource-groups/{resourceGroupId} Delete a resource group. POST /api/organizations/{name}/resource-groups/{resourceGroupId}/users Add users to a resource group. Payload: Copied payload = { "users" : [ { "user" : "username" , "role" : "admin" // or "write" or "read" } ] } DELETE /api/organizations/{name}/resource-groups/{resourceGroupId}/users/{username} Remove a user from a resource group. PATCH /api/organizations/{name}/resource-groups/{resourceGroupId}/users/{username} Update a user’s role in a resource group. Payload: Copied payload = { "role" : "admin" // or "write" or "read" } POST /api/(models|spaces|datasets)/{namespace}/{repo}/resource-group Update resource group’s repository. Payload: Copied payload = { "resourceGroupId" : "6771d4700000000000000000" // (allow `null` for removing the repo's resource group) } GET /api/(models|spaces|datasets)/{namespace}/{repo}/resource-group Get detailed repository’s resource group Paper Pages API The following endpoint gets information about a paper. GET /api/papers/{arxiv_id} Get the API equivalent of the Paper page, i.e., metadata like authors, summary, and discussion comments. GET /api/arxiv/{arxiv_id}/repos Get all the models, datasets, and Spaces that refer to a paper. GET /api/daily_papers Get the daily papers curated by AK and the community. It’s the equivalent of https://huggingface.co/papers . Collections API Use Collections to group repositories from the Hub (Models, Datasets, Spaces and Papers) on a dedicated page. You can learn more about it in the Collections guide . Collections can also be managed using the Python client (see guide ). POST /api/collections Create a new collection on the Hub with a title, a description (optional) and a first item (optional). An item is defined by a type ( model , dataset , space or paper ) and an id (repo_id or paper_id on the Hub). Payload: Copied payload = { "title" : "My cool models" , "namespace" : "username_or_org" , "description" : "Here is a shortlist of models I've trained." , "item" : { "type" : "model" , "id" : "username/cool-model" , } "private" : false , } This is equivalent to huggingface_hub.create_collection() . GET /api/collections/{namespace}/{slug}-{id} Return information about a collection. This is equivalent to huggingface_hub.get_collection() . GET /api/collections List collections from the Hub, based on some criteria. The supported parameters are: owner (string): filter collections created by a specific user or organization. 
item (string): filter collections containing a specific item. Value must be the item_type and item_id concatenated. Example: "models/teknium/OpenHermes-2.5-Mistral-7B" , "datasets/rajpurkar/squad" or "papers/2311.12983" . sort (string): sort the returned collections. Supported values are "lastModified" , "trending" (default) and "upvotes" . limit (int): maximum number (100) of collections per page. q (string): filter based on substrings for titles & descriptions. If no parameter is set, all collections are returned. The response is paginated. To get all collections, you must follow the Link header . When listing collections, the item list per collection is truncated to 4 items maximum. To retrieve all items from a collection, you need to make an additional call using its collection slug. Payload: Copied params = { "owner" : "TheBloke" , "item" : "models/teknium/OpenHermes-2.5-Mistral-7B" , "sort" : "lastModified" , "limit" : 1 , } This is equivalent to huggingface_hub.list_collections() . PATCH /api/collections/{namespace}/{slug}-{id} Update the metadata of a collection on the Hub. You can’t add or modify the items of the collection with this method. All fields of the payload are optional. Payload: Copied payload = { "title" : "My cool models" , "description" : "Here is a shortlist of models I've trained." , "private" : false , "position" : 0 , // position of the collection on your profile "theme" : "green" , } This is equivalent to huggingface_hub.update_collection_metadata() . DELETE /api/collections/{namespace}/{slug}-{id} Return a collection. This is a non-revertible operation. A deleted collection cannot be restored. This is equivalent to huggingface_hub.delete_collection() . POST /api/collections/{namespace}/{slug}-{id}/item Add an item to a collection. An item is defined by a type ( model , dataset , space or paper ) and an id (repo_id or paper_id on the Hub). A note can also be attached to the item (optional). Payload: Copied payload = { "item" : { "type" : "model" , "id" : "username/cool-model" , } "note" : "Here is the model I trained on ..." , } This is equivalent to huggingface_hub.add_collection_item() . PATCH /api/collections/{namespace}/{slug}-{id}/items/{item_id} Update an item in a collection. You must know the item object id which is different from the repo_id/paper_id provided when adding the item to the collection. The item_id can be retrieved by fetching the collection. You can update the note attached to the item or the position of the item in the collection. Both fields are optional. Copied payload = { "position" : 0 , "note" : "Here is the model I trained on ..." , } This is equivalent to huggingface_hub.update_collection_item() . DELETE /api/collections/{namespace}/{slug}-{id}/items/{item_id} Remove an item from a collection. You must know the item object id which is different from the repo_id/paper_id provided when adding the item to the collection. The item_id can be retrieved by fetching the collection. This is equivalent to huggingface_hub.delete_collection_item() . 
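To make the listing endpoints and the Link-header pagination described above concrete, here is a minimal Python sketch, assuming the requests and huggingface_hub packages are installed; the search terms and the owner/item values simply reuse the examples from the payloads above, and the token is only needed for private or gated resources.

Copied
import requests
from huggingface_hub import list_collections, list_models

BASE_URL = "https://huggingface.co"
headers = {"authorization": "Bearer hf_***"}  # optional for public data

# Raw HTTP: list models and follow pagination through the Link header.
response = requests.get(
    f"{BASE_URL}/api/models",
    params={"search": "resnet", "sort": "downloads", "direction": -1, "limit": 10},
    headers=headers,
)
models = response.json()
next_page_url = response.links.get("next", {}).get("url")  # None on the last page

# Raw HTTP: list collections filtered by owner and item.
collections = requests.get(
    f"{BASE_URL}/api/collections",
    params={"owner": "TheBloke", "item": "models/teknium/OpenHermes-2.5-Mistral-7B", "limit": 1},
).json()

# Equivalent calls through the huggingface_hub wrappers mentioned throughout this page.
hub_models = list_models(search="resnet", sort="downloads", direction=-1, limit=10)
hub_collections = list_collections(owner="TheBloke", limit=1)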
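The repo management and user endpoints can be used the same way, either over raw HTTP or through huggingface_hub. A minimal sketch: the repository and organization names are placeholders, and the wrapper calls assume a valid access token (for example via huggingface-cli login or the HF_TOKEN environment variable).

Copied
import requests
from huggingface_hub import create_repo, move_repo, whoami

headers = {"authorization": "Bearer hf_***"}

# Raw HTTP: create a private model repo in your user namespace, then check who you are.
requests.post(
    "https://huggingface.co/api/repos/create",
    headers=headers,
    json={"type": "model", "name": "my-test-model", "private": True},  # placeholder name
)
print(requests.get("https://huggingface.co/api/whoami-v2", headers=headers).json())

# Equivalent huggingface_hub calls, including moving the repo to an organization.
create_repo("my-test-model", private=True, exist_ok=True)
move_repo(from_id="username/my-test-model", to_id="my-org/my-test-model")  # placeholder namespaces
print(whoami())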
Class__HfAgent.txt Class: HfAgent Constructors constructor • new HfAgent ( accessToken? , LLM? , tools? ): HfAgent Parameters Name Type Default value accessToken string "" LLM? LLM undefined tools? Tool [] undefined Returns HfAgent Defined in HfAgent.ts:14 Properties accessToken • Private accessToken : string Defined in HfAgent.ts:10 llm • Private llm : LLM Defined in HfAgent.ts:11 tools • Private tools : Tool [] Defined in HfAgent.ts:12 Methods evaluateCode ▸ evaluateCode ( code , files? ): Promise \< Update []> Parameters Name Type code string files? FileList Returns Promise \< Update []> Defined in HfAgent.ts:31 generateCode ▸ generateCode ( prompt , files? ): Promise \< string > Parameters Name Type prompt string files? FileList Returns Promise \< string > Defined in HfAgent.ts:27 generatePrompt ▸ generatePrompt ( prompt , files? ): string Parameters Name Type prompt string files? FileList Returns string Defined in HfAgent.ts:20 run ▸ run ( prompt , files?
): Promise \< Update []> Parameters Name Type prompt string files? FileList Returns Promise \< Update []> Defined in HfAgent.ts:51
Interface__CommitData.txt Interface: CommitData Properties authors • authors : { avatarUrl : string ; username : string }[] Defined in hub/src/lib/list-commits.ts:13 date • date : Date Defined in hub/src/lib/list-commits.ts:14 message • message : string Defined in hub/src/lib/list-commits.ts:12 oid • oid : string Defined in hub/src/lib/list-commits.ts:10 title • title : string Defined in hub/src/lib/list-commits.ts:11
Digital_Object_Identifier_(DOI).txt Digital Object Identifier (DOI) The Hugging Face Hub lets you generate a DOI for your models or datasets. DOIs (Digital Object Identifiers) are strings uniquely identifying a digital object, anything from articles to figures, including datasets and models. DOIs are tied to object metadata, including the object's URL, version, creation date, description, etc. They are a commonly accepted reference to digital resources across research and academic communities; they are analogous to a book's ISBN. How to generate a DOI? To do this, go to the settings of your model or dataset. In the DOI section, a button called "Generate DOI" should appear: To generate the DOI for this model or dataset, click on this button and acknowledge that some features on the Hub will be restrained and some of your information (your full name) will be transferred to our partner DataCite: After you agree to those terms, your model or dataset will get a DOI assigned, and a new tag should appear in its header allowing you to cite it. Can I regenerate a new DOI if my model or dataset changes? If there is a new version of a model or dataset, a new DOI can easily be assigned, and the previous DOI becomes outdated. This makes it easy to refer to a specific version of an object, even if it has changed. You just need to click on "Generate new DOI" and tadaam! 🎉 a new DOI is assigned for the current revision of your model or dataset.
Why is there a "locked by DOI" message on the delete, rename, and change-visibility actions on my model or dataset? DOIs make it easier to find information about a model or dataset and to share it with the world via a permanent link that will never expire or change. As such, datasets and models with DOIs are intended to persist perpetually and may only be deleted, renamed, or have their visibility changed by filing a request with our support (website at huggingface.co). Further Reading Introducing DOI: the Digital Object Identifier to Datasets and Models
DDP_Communication_Hooks.txt DDP Communication Hooks Distributed Data Parallel (DDP) communication hooks provide a generic interface to control how gradients are communicated across workers by overriding the vanilla allreduce in DistributedDataParallel . A few built-in communication hooks are provided, and users can easily apply any of these hooks to optimize communication. FP16 Compression Hook : Compresses gradients by casting them to half-precision floating-point format ( torch.float16 ), reducing communication overhead. BF16 Compression Hook : Similar to FP16, but uses the Brain Floating Point format ( torch.bfloat16 ), which can be more efficient on certain hardware. PowerSGD Hook : An advanced gradient compression algorithm that provides high compression rates and can accelerate bandwidth-bound distributed training. In this tutorial, you will see how to quickly set up DDP communication hooks and perform training with the utilities provided in Accelerate, which can be as simple as adding just one new line of code!
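As a quick preview of what that one extra line looks like, the sketch below registers the FP16 compression hook through a kwargs handler; the model and optimizer are placeholders, and the full per-hook examples (in plain PyTorch and in Accelerate) follow in the next sections.

Copied
import torch
from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs

# The single extra line: ask Accelerate to register the FP16 compression hook on the DDP-wrapped model.
ddp_kwargs = DistributedDataParallelKwargs(comm_hook=DDPCommunicationHookType.FP16)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.Adam(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)
# ...then train as usual, calling accelerator.backward(loss) inside the loop.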
This demonstrates how to use DDP communication hooks to optimize gradient communication in distributed training with the Accelerate library. FP16 Compression Hook PyTorch Accelerate Copied import torch from torch.nn.parallel import DistributedDataParallel as DDP from torch.distributed.algorithms.ddp_comm_hooks import default_hooks class MyModel (torch.nn.Module): def __init__ ( self ): super ().__init__() self.layer = torch.nn.Linear( 10 , 10 ) def forward ( self, x ): return self.layer(x) model = MyModel() model = DDP(model, device_ids=[torch.cuda.current_device()]) model.register_comm_hook(state= None , hook=default_hooks.fp16_compress_hook) # Training loop for data, targets in data_loader: outputs = model(data) loss = criterion(outputs, targets) loss.backward() optimizer.step() optimizer.zero_grad() BF16 Compression Hook BF16 Compression Hook API is experimental, and it requires NCCL version later than 2.9.6. PyTorch Accelerate Copied import torch from torch.nn.parallel import DistributedDataParallel as DDP from torch.distributed.algorithms.ddp_comm_hooks import default_hooks class MyModel (torch.nn.Module): def __init__ ( self ): super ().__init__() self.layer = torch.nn.Linear( 10 , 10 ) def forward ( self, x ): return self.layer(x) model = MyModel() model = DDP(model, device_ids=[torch.cuda.current_device()]) model.register_comm_hook(state= None , hook=default_hooks.bf16_compress_hook) # Training loop for data, targets in data_loader: outputs = model(data) loss = criterion(outputs, targets) loss.backward() optimizer.step() optimizer.zero_grad() PowerSGD Hook PowerSGD typically requires extra memory of the same size as the model’s gradients to enable error feedback, which can compensate for biased compressed communication and improve accuracy. PyTorch Accelerate Copied import torch from torch.nn.parallel import DistributedDataParallel as DDP from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook class MyModel (torch.nn.Module): def __init__ ( self ): super ().__init__() self.layer = torch.nn.Linear( 10 , 10 ) def forward ( self, x ): return self.layer(x) model = MyModel() model = DDP(model, device_ids=[torch.cuda.current_device()]) state = powerSGD_hook.PowerSGDState(process_group= None ) model.register_comm_hook(state=state, hook=powerSGD_hook.powerSGD_hook) # Training loop for data, targets in data_loader: outputs = model(data) loss = criterion(outputs, targets) loss.backward() optimizer.step() optimizer.zero_grad() DDP Communication Hooks utilities There are two additional utilities for supporting optional functionalities with the communication hooks. comm_wrapper comm_wrapper is an option to wrap a communication hook with additional functionality. For example, it can be used to combine FP16 compression with other communication strategies. Currently supported wrappers are no , fp16 , and bf16 . 
Copied from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs import torch class MyModel (torch.nn.Module): def __init__ ( self ): super ().__init__() self.layer = torch.nn.Linear( 10 , 10 ) def forward ( self, x ): return self.layer(x) # DDP Communication Hook setup ddp_kwargs = DistributedDataParallelKwargs( comm_hook=DDPCommunicationHookType.POWER_SGD, comm_wrapper=DDPCommunicationHookType.FP16 ) accelerator = Accelerator(kwargs_handlers=[ddp_kwargs]) model = MyModel() optimizer = torch.optim.Adam(model.parameters()) data_loader = DataLoader(dataset, batch_size= 16 ) model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader) # Training loop for data, targets in data_loader: outputs = model(data) loss = criterion(outputs, targets) accelerator.backward(loss) optimizer.step() optimizer.zero_grad() comm_state_option comm_state_option allows you to pass additional state information required by certain communication hooks. This is particularly useful for stateful hooks like PowerSGD , which require maintaining hyperparameters and internal states across training steps. Below is an example showcasing the use of comm_state_option with the PowerSGD hook. Copied from accelerate import Accelerator, DDPCommunicationHookType, DistributedDataParallelKwargs import torch class MyModel (torch.nn.Module): def __init__ ( self ): super ().__init__() self.layer = torch.nn.Linear( 10 , 10 ) def forward ( self, x ): return self.layer(x) # DDP Communication Hook setup ddp_kwargs = DistributedDataParallelKwargs( comm_hook=DDPCommunicationHookType.POWER_SGD, comm_state_option={ "matrix_approximation_rank" : 2 } ) accelerator = Accelerator(kwargs_handlers=[ddp_kwargs]) model = MyModel() optimizer = torch.optim.Adam(model.parameters()) data_loader = DataLoader(dataset, batch_size= 16 ) model, optimizer, data_loader = accelerator.prepare(model, optimizer, data_loader) # Training loop for data, targets in data_loader: outputs = model(data) loss = criterion(outputs, targets) accelerator.backward(loss) optimizer.step() optimizer.zero_grad() For more advanced usage and additional hooks, refer to the PyTorch DDP Communication Hooks documentation . < > Update on GitHub ← Using multiple models with DeepSpeed Fully Sharded Data Parallel → DD P Communication Hooks F P16 Compression Hook B F16 Compression Hook PowerSG D Hook DD P Communication Hooks utilities comm_wrapper comm_state_option |
🤗_Hugging_Face_Inference_Endpoints.txt 🤗 Hugging Face Inference Endpoints A TypeScript-powered wrapper for the Hugging Face Inference API (serverless), Inference Endpoints (dedicated), and supported third-party Inference Providers. You can also try out a live interactive notebook , see some demos on hf.co/huggingfacejs , or watch a Scrimba tutorial that explains how Inference Endpoints works .
Getting Started Install Node Copied npm install @huggingface/inference pnpm add @huggingface/inference yarn add @huggingface/inference Deno Copied // esm.sh import { HfInference } from "https://esm.sh/@huggingface/inference" // or npm: import { HfInference } from "npm:@huggingface/inference" Initialize Copied import { HfInference } from '@huggingface/inference' const hf = new HfInference ( 'your access token' ) ❗ Important note: Using an access token is optional to get started, however you will be rate limited eventually. Join Hugging Face and then visit access tokens to generate your access token for free . Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token. Third-party inference providers You can send inference requests to third-party providers with the inference client. Currently, we support the following providers: Fal.ai , Replicate , Together and Sambanova . To send requests to a third-party provider, you have to pass the provider parameter to the inference function. Make sure your request is authenticated with an access token. Copied const accessToken = "hf_..." ; // Either a HF access token, or an API key from the third-party provider (Replicate in this example) const client = new HfInference (accessToken); await client. textToImage ({ provider : "replicate" , model : "black-forest-labs/Flux.1-dev" , inputs : "A black forest cake" }) When authenticated with a Hugging Face access token, the request is routed through https://huggingface.co. When authenticated with a third-party provider key, the request is made directly against that provider’s inference API. Only a subset of models are supported when requesting third-party providers. You can check the list of supported models per pipeline tasks here: Fal.ai supported models Replicate supported models Sambanova supported models Together supported models HF Inference API (serverless) ❗ Important note: To be compatible, the third-party API must adhere to the “standard” shape API we expect on HF model pages for each pipeline task type. This is not an issue for LLMs as everyone converged on the OpenAI API anyways, but can be more tricky for other tasks like “text-to-image” or “automatic-speech-recognition” where there exists no standard API. Let us know if any help is needed or if we can make things easier for you! 👋 Want to add another provider? Get in touch if you’d like to add support for another Inference provider, and/or request it on https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/49 Tree-shaking You can import the functions you need directly from the module instead of using the HfInference class. Copied import { textGeneration } from "@huggingface/inference" ; await textGeneration ({ accessToken : "hf_..." , model : "model_or_endpoint" , inputs : ..., parameters : ... }) This will enable tree-shaking by your bundler. Natural Language Processing Text Generation Generates text from an input prompt. Demo Copied await hf. textGeneration ({ model : 'gpt2' , inputs : 'The answer to the universe is' }) for await ( const output of hf. textGenerationStream ({ model : "google/flan-t5-xxl" , inputs : 'repeat "one two three four"' , parameters : { max_new_tokens : 250 } })) { console . log (output. token . text , output. generated_text ); } Text Generation (Chat Completion API Compatible) Using the chatCompletion method, you can generate text with models compatible with the OpenAI Chat Completion API. 
All models served by TGI on Hugging Face support Messages API. Demo Copied // Non-streaming API const out = await hf. chatCompletion ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [{ role : "user" , content : "Hello, nice to meet you!" }], max_tokens : 512 , temperature : 0.1 , }); // Streaming API let out = "" ; for await ( const chunk of hf. chatCompletionStream ({ model : "meta-llama/Llama-3.1-8B-Instruct" , messages : [ { role : "user" , content : "Can you help me solve an equation?" }, ], max_tokens : 512 , temperature : 0.1 , })) { if (chunk. choices && chunk. choices . length > 0 ) { out += chunk. choices [ 0 ]. delta . content ; } } It’s also possible to call Mistral or OpenAI endpoints directly: Copied const openai = new HfInference ( OPENAI_TOKEN ). endpoint ( "https://api.openai.com" ); let out = "" ; for await ( const chunk of openai. chatCompletionStream ({ model : "gpt-3.5-turbo" , messages : [ { role : "user" , content : "Complete the equation 1+1= ,just the answer" }, ], max_tokens : 500 , temperature : 0.1 , seed : 0 , })) { if (chunk. choices && chunk. choices . length > 0 ) { out += chunk. choices [ 0 ]. delta . content ; } } // For mistral AI: // endpointUrl: "https://api.mistral.ai" // model: "mistral-tiny" Fill Mask Tries to fill in a hole with a missing word (token to be precise). Copied await hf. fillMask ({ model : 'bert-base-uncased' , inputs : '[MASK] world!' }) Summarization Summarizes longer text into shorter text. Be careful, some models have a maximum length of input. Copied await hf. summarization ({ model : 'facebook/bart-large-cnn' , inputs : 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930.' , parameters : { max_length : 100 } }) Question Answering Answers questions based on the context you provide. Copied await hf. questionAnswering ({ model : 'deepset/roberta-base-squad2' , inputs : { question : 'What is the capital of France?' , context : 'The capital of France is Paris.' } }) Table Question Answering Copied await hf. tableQuestionAnswering ({ model : 'google/tapas-base-finetuned-wtq' , inputs : { query : 'How many stars does the transformers repository have?' , table : { Repository : [ 'Transformers' , 'Datasets' , 'Tokenizers' ], Stars : [ '36542' , '4512' , '3934' ], Contributors : [ '651' , '77' , '34' ], 'Programming language' : [ 'Python' , 'Python' , 'Rust, Python and NodeJS' ] } } }) Text Classification Often used for sentiment analysis, this method will assign labels to the given text along with a probability score of that label. Copied await hf. textClassification ({ model : 'distilbert-base-uncased-finetuned-sst-2-english' , inputs : 'I like you. I love you.' }) Token Classification Used for sentence parsing, either grammatical, or Named Entity Recognition (NER) to understand keywords contained within text. Copied await hf. tokenClassification ({ model : 'dbmdz/bert-large-cased-finetuned-conll03-english' , inputs : 'My name is Sarah Jessica Parker but you can call me Jessica' }) Translation Converts text from one language to another. Copied await hf. translation ({ model : 't5-base' , inputs : 'My name is Wolfgang and I live in Berlin' }) await hf. 
translation ({ model : 'facebook/mbart-large-50-many-to-many-mmt' , inputs : textToTranslate, parameters : { "src_lang" : "en_XX" , "tgt_lang" : "fr_XX" } }) Zero-Shot Classification Checks how well an input text fits into a set of labels you provide. Copied await hf. zeroShotClassification ({ model : 'facebook/bart-large-mnli' , inputs : [ 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!' ], parameters : { candidate_labels : [ 'refund' , 'legal' , 'faq' ] } }) Conversational This task corresponds to any chatbot-like structure. Models tend to have shorter max_length, so please check with caution when using a given model if you need long-range dependency or not. Copied await hf. conversational ({ model : 'microsoft/DialoGPT-large' , inputs : { past_user_inputs : [ 'Which movie is the best ?' ], generated_responses : [ 'It is Die Hard for sure.' ], text : 'Can you explain why ?' } }) Sentence Similarity Calculate the semantic similarity between one text and a list of other sentences. Copied await hf. sentenceSimilarity ({ model : 'sentence-transformers/paraphrase-xlm-r-multilingual-v1' , inputs : { source_sentence : 'That is a happy person' , sentences : [ 'That is a happy dog' , 'That is a very happy person' , 'Today is a sunny day' ] } }) Audio Automatic Speech Recognition Transcribes speech from an audio file. Demo Copied await hf. automaticSpeechRecognition ({ model : 'facebook/wav2vec2-large-960h-lv60-self' , data : readFileSync ( 'test/sample1.flac' ) }) Audio Classification Assigns labels to the given audio along with a probability score of that label. Demo Copied await hf. audioClassification ({ model : 'superb/hubert-large-superb-er' , data : readFileSync ( 'test/sample1.flac' ) }) Text To Speech Generates natural-sounding speech from text input. Interactive tutorial Copied await hf. textToSpeech ({ model : 'espnet/kan-bayashi_ljspeech_vits' , inputs : 'Hello world!' }) Audio To Audio Outputs one or multiple generated audios from an input audio, commonly used for speech enhancement and source separation. Copied await hf. audioToAudio ({ model : 'speechbrain/sepformer-wham' , data : readFileSync ( 'test/sample1.flac' ) }) Computer Vision Image Classification Assigns labels to a given image along with a probability score of that label. Demo Copied await hf. imageClassification ({ data : readFileSync ( 'test/cheetah.png' ), model : 'google/vit-base-patch16-224' }) Object Detection Detects objects within an image and returns labels with corresponding bounding boxes and probability scores. Demo Copied await hf. objectDetection ({ data : readFileSync ( 'test/cats.png' ), model : 'facebook/detr-resnet-50' }) Image Segmentation Detects segments within an image and returns labels with corresponding bounding boxes and probability scores. Copied await hf. imageSegmentation ({ data : readFileSync ( 'test/cats.png' ), model : 'facebook/detr-resnet-50-panoptic' }) Image To Text Outputs text from a given image, commonly used for captioning or optical character recognition. Copied await hf. imageToText ({ data : readFileSync ( 'test/cats.png' ), model : 'nlpconnect/vit-gpt2-image-captioning' }) Text To Image Creates an image from a text prompt. Demo Copied await hf. textToImage ({ model : 'black-forest-labs/FLUX.1-dev' , inputs : 'a picture of a green bird' }) Image To Image Image-to-image is the task of transforming a source image to match the characteristics of a target image or a target image domain. 
Interactive tutorial Copied await hf. imageToImage ({ inputs : new Blob ([ readFileSync ( "test/stormtrooper_depth.png" )]), parameters : { prompt : "elmo's lecture" , }, model : "lllyasviel/sd-controlnet-depth" , }); Zero Shot Image Classification Checks how well an input image fits into a set of labels you provide. Copied await hf. zeroShotImageClassification ({ model : 'openai/clip-vit-large-patch14-336' , inputs : { image : await ( await fetch ( 'https://placekitten.com/300/300' )). blob () }, parameters : { candidate_labels : [ 'cat' , 'dog' ] } }) Multimodal Feature Extraction This task reads some text and outputs raw float values, that are usually consumed as part of a semantic database/semantic search. Copied await hf. featureExtraction ({ model : "sentence-transformers/distilbert-base-nli-mean-tokens" , inputs : "That is a happy person" , }); Visual Question Answering Visual Question Answering is the task of answering open-ended questions based on an image. They output natural language responses to natural language questions. Demo Copied await hf. visualQuestionAnswering ({ model : 'dandelin/vilt-b32-finetuned-vqa' , inputs : { question : 'How many cats are lying down?' , image : await ( await fetch ( 'https://placekitten.com/300/300' )). blob () } }) Document Question Answering Document question answering models take a (document, question) pair as input and return an answer in natural language. Demo Copied await hf. documentQuestionAnswering ({ model : 'impira/layoutlm-document-qa' , inputs : { question : 'Invoice number?' , image : await ( await fetch ( 'https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png' )). blob (), } }) Tabular Tabular Regression Tabular regression is the task of predicting a numerical value given a set of attributes. Copied await hf. tabularRegression ({ model : "scikit-learn/Fish-Weight" , inputs : { data : { "Height" : [ "11.52" , "12.48" , "12.3778" ], "Length1" : [ "23.2" , "24" , "23.9" ], "Length2" : [ "25.4" , "26.3" , "26.5" ], "Length3" : [ "30" , "31.2" , "31.1" ], "Species" : [ "Bream" , "Bream" , "Bream" ], "Width" : [ "4.02" , "4.3056" , "4.6961" ] }, }, }) Tabular Classification Tabular classification is the task of classifying a target category (a group) based on set of attributes. Copied await hf. tabularClassification ({ model : "vvmnnnkv/wine-quality" , inputs : { data : { "fixed_acidity" : [ "7.4" , "7.8" , "10.3" ], "volatile_acidity" : [ "0.7" , "0.88" , "0.32" ], "citric_acid" : [ "0" , "0" , "0.45" ], "residual_sugar" : [ "1.9" , "2.6" , "6.4" ], "chlorides" : [ "0.076" , "0.098" , "0.073" ], "free_sulfur_dioxide" : [ "11" , "25" , "5" ], "total_sulfur_dioxide" : [ "34" , "67" , "13" ], "density" : [ "0.9978" , "0.9968" , "0.9976" ], "pH" : [ "3.51" , "3.2" , "3.23" ], "sulphates" : [ "0.56" , "0.68" , "0.82" ], "alcohol" : [ "9.4" , "9.8" , "12.6" ] }, }, }) Custom Calls For models with custom parameters / outputs. Copied await hf. request ({ model : 'my-custom-model' , inputs : 'hello world' , parameters : { custom_param : 'some magic' , } }) // Custom streaming call, for models with custom parameters / outputs for await ( const output of hf. streamingRequest ({ model : 'my-custom-model' , inputs : 'hello world' , parameters : { custom_param : 'some magic' , } })) { ... } You can use any Chat Completion API-compatible provider with the chatCompletion method. Copied // Chat Completion Example const MISTRAL_KEY = process. env . 
MISTRAL_KEY ; const hf = new HfInference ( MISTRAL_KEY ); const ep = hf. endpoint ( "https://api.mistral.ai" ); const stream = ep. chatCompletionStream ({ model : "mistral-tiny" , messages : [{ role : "user" , content : "Complete the equation one + one = , just the answer" }], }); let out = "" ; for await ( const chunk of stream) { if (chunk. choices && chunk. choices . length > 0 ) { out += chunk. choices [ 0 ]. delta . content ; console . log (out); } } Custom Inference Endpoints Learn more about using your own inference endpoints here Copied const gpt2 = hf. endpoint ( 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2' ); const { generated_text } = await gpt2. textGeneration ({ inputs : 'The answer to the universe is' }); // Chat Completion Example const ep = hf. endpoint ( "https://api-inference.huggingface.co/models/meta-llama/Llama-3.1-8B-Instruct" ); const stream = ep. chatCompletionStream ({ model : "tgi" , messages : [{ role : "user" , content : "Complete the equation 1+1= ,just the answer" }], max_tokens : 500 , temperature : 0.1 , seed : 0 , }); let out = "" ; for await ( const chunk of stream) { if (chunk. choices && chunk. choices . length > 0 ) { out += chunk. choices [ 0 ]. delta . content ; console . log (out); } } By default, all calls to the inference endpoint will wait until the model is loaded. When scaling to 0 is enabled on the endpoint, this can result in non-trivial waiting time. If you’d rather disable this behavior and handle the endpoint’s returned 500 HTTP errors yourself, you can do so like so: Copied const gpt2 = hf. endpoint ( 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2' ); const { generated_text } = await gpt2. textGeneration ( { inputs : 'The answer to the universe is' }, { retry_on_error : false }, ); Running tests Copied HF_TOKEN="your access token" pnpm run test Finding appropriate models We have an informative documentation project called Tasks to list available models for each task and explain how each task works in detail. It also contains demos, example outputs, and other resources should you want to dig deeper into the ML side of things. Dependencies @huggingface/tasks : Typings only < > Update on GitHub ← 🤗 Hugging Face JS Libraries API reference → 🤗 Hugging Face Inference Endpoints Getting Started Install Node Deno Initialize Third-party inference providers Tree-shaking Natural Language Processing Text Generation Text Generation ( Chat Completion AP I Compatible) Fill Mask Summarization Question Answering Table Question Answering Text Classification Token Classification Translation Zero- Shot Classification Conversational Sentence Similarity Audio Automatic Speech Recognition Audio Classification Text To Speech Audio To Audio Computer Vision Image Classification Object Detection Image Segmentation Image To Text Text To Image Image To Image Zero Shot Image Classification Multimodal Feature Extraction Visual Question Answering Document Question Answering Tabular Tabular Regression Tabular Classification Custom Calls Custom Inference Endpoints Running tests Finding appropriate models Dependencies |
Inpainting.txt Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class.
This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder-inpaint" , torch_dtype=torch.float16 ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention() , to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention . Load the base and mask images: Copied init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" negative_prompt = "bad anatomy, deformed, ugly, disfigured" image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image from PIL import Image pipeline = AutoPipelineForInpainting.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16).to( 'cuda' ) mask = load_image( "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png" ) blurred_mask = pipeline.mask_processor.blur(mask, blur_factor= 33 ) blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting , Stable Diffusion XL (SDXL) Inpainting , and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. 
It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) generator = torch.Generator( "cuda" ).manual_seed( 92 ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "diffusers/stable-diffusion-xl-1.0-inpainting-0.1" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) generator = torch.Generator( "cuda" ).manual_seed( 92 ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. 
Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder-inpaint" , torch_dtype=torch.float16 ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) generator = torch.Generator( "cuda" ).manual_seed( 92 ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as stable-diffusion-v1-5/stable-diffusion-inpainting . But you can also use regular checkpoints like stable-diffusion-v1-5/stable-diffusion-v1-5 . Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. stable-diffusion-v1-5/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) generator = torch.Generator( "cuda" ).manual_seed( 92 ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) stable-diffusion-v1-5/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. 
stable-diffusion-v1-5/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png" ) image = pipeline(prompt= "road" , image=init_image, mask_image=mask_image).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) stable-diffusion-v1-5/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. Copied import PIL import numpy as np import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid device = "cuda" pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, ) pipeline = pipeline.to(device) img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" init_image = load_image(img_url).resize(( 512 , 512 )) mask_image = load_image(mask_url).resize(( 512 , 512 )) prompt = "Face of a yellow cat, high resolution, sitting on a park bench" repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[ 0 ] repainted_image.save( "repainted_image.png" ) unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) unmasked_unchanged_image.save( "force_unmasked_unchanged.png" ) make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows= 2 , cols= 2 ) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 
📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength= 0.6 ).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale= 2.5 ).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. 
Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" negative_prompt = "bad architecture, unstable, poor details, blurry" image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline . Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image from PIL import Image generator = torch.Generator(device= 'cuda' ).manual_seed( 0 ) pipeline = AutoPipelineForInpainting.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16).to( 'cuda' ) base = load_image( "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png" ) mask = load_image( "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png" ) image = pipeline( "boat" , image=base, mask_image=mask, strength= 0.75 , generator=generator, padding_mask_crop= 32 ).images[ 0 ] image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. 
Start with the text-to-image pipeline to create a castle: Copied import torch from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() text2image = pipeline( "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" ).images[ 0 ] Load the mask image of the output from above: Copied mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png" ) And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder-inpaint" , torch_dtype=torch.float16 ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() prompt = "digital painting of a fantasy waterfall, cloudy" image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[ 0 ] make_image_grid([text2image, mask_image, image], rows= 1 , cols= 3 ) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. Begin by inpainting an image: Copied import torch from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[ 0 ] # resize image to 1024x1024 for SDXL image_inpainting = image_inpainting.resize(( 1024 , 1024 )) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type= "latent" ).images[ 0 ] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL . Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() image = pipeline(prompt=prompt, image=image).images[ 0 ] make_image_grid([init_image, mask_image, image_inpainting, image], rows= 2 , cols= 2 ) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. While you can control certain aspects of generation by configuring parameters like negative_prompt , there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting . The embeddings replace the prompt parameter: Copied import torch from diffusers import AutoPipelineForInpainting from diffusers.utils import make_image_grid pipeline = AutoPipelineForInpainting.from_pretrained( "runwayml/stable-diffusion-inpainting" , torch_dtype=torch.float16, ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel negative_prompt_embeds=negative_prompt_embeds, # generated from Compel image=init_image, mask_image=mask_image ).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch import numpy as np from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline from diffusers.utils import load_image, make_image_grid # load ControlNet controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11p_sd15_inpaint" , torch_dtype=torch.float16, variant= "fp16" ) # pass ControlNet to the pipeline pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( "runwayml/stable-diffusion-inpainting" , controlnet=controlnet, torch_dtype=torch.float16, variant= "fp16" ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # load base and mask image init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png" ) mask_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png" ) # prepare control image def make_inpaint_condition ( init_image, mask_image ): init_image = np.array(init_image.convert( "RGB" )).astype(np.float32) / 255.0 mask_image = np.array(mask_image.convert( "L" )).astype(np.float32) / 255.0 assert init_image.shape[ 0 : 1 ] == mask_image.shape[ 0 : 1 ], "image and image_mask must have the same image size" init_image[mask_image > 0.5 ] = - 1.0 # set as masked pixel init_image = np.expand_dims(init_image, 0 ).transpose( 0 , 3 , 1 , 2 ) init_image = torch.from_numpy(init_image) return init_image control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[ 0 ] make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[ 0 ][ 0 ])).convert( 'RGB' ), image], rows= 2 , cols= 2 ) You can take this a step further and chain it with an image-to-image pipeline to apply a new style : Copied from diffusers import AutoPipelineForImage2Image pipeline = AutoPipelineForImage2Image.from_pretrained( "nitrosocke/elden-ring-diffusion" , torch_dtype=torch.float16, ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() prompt = "elden ring style castle" # include the token "elden ring style" in the prompt negative_prompt = "bad architecture, deformed, disfigured, poor details" image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[ 0 ] make_image_grid([init_image, mask_image, image, image_elden_ring], rows= 2 , cols= 2 ) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory:

+ pipeline.enable_xformers_memory_efficient_attention()
+ pipeline.enable_model_cpu_offload()

To speed up your inference code even more, use torch.compile. Wrap it around the most intensive component in the pipeline, which is typically the UNet:

pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

Learn more in the Reduce memory usage and Torch 2.0 guides.
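The snippets throughout this guide note that enable_xformers_memory_efficient_attention() should be removed when xFormers is not installed or when PyTorch 2.0 or higher is available. A small guard makes that decision explicit. This is an illustrative sketch rather than part of the original guide, reusing the runwayml/stable-diffusion-inpainting checkpoint from the examples above:

import importlib.util

import torch
from diffusers import AutoPipelineForInpainting

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()

# Only enable xFormers when the package is importable and PyTorch is older than 2.0;
# newer PyTorch already uses scaled-dot product attention by default.
if int(torch.__version__.split(".")[0]) < 2 and importlib.util.find_spec("xformers") is not None:
    pipeline.enable_xformers_memory_efficient_attention()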
Consuming_Text_Generation_Inference.txt | Consuming Text Generation Inference Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Consuming Text Generation Inference text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Consuming Text Generation Inference There are many ways to consume Text Generation Inference (TGI) server in your applications. After launching the server, you can use the Messages API /v1/chat/completions route and make a POST request to get results from the server. You can also pass "stream": true to the call if you want TGI to return a stream of tokens. For more information on the API, consult the OpenAPI documentation of text-generation-inference available here . You can make the requests using any tool of your preference, such as curl, Python, or TypeScript. For an end-to-end experience, we’ve open-sourced ChatUI , a chat interface for open-access models. curl After a successful server launch, you can query the model using the v1/chat/completions route, to get responses that are compliant to the OpenAI Chat Completion spec: Copied curl localhost:8080/v1/chat/completions \ -X POST \ -d '{ "model": "tgi", "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "What is deep learning?" } ], "stream": true, "max_tokens": 20 }' \ -H 'Content-Type: application/json' For non-chat use-cases, you can also use the /generate and /generate_stream routes. Copied curl 127.0.0.1:8080/generate \ -X POST \ -d '{ "inputs":"What is Deep Learning?", "parameters":{ "max_new_tokens":20 } }' \ -H 'Content-Type: application/json' Python Inference Client huggingface_hub is a Python library to interact with the Hugging Face Hub, including its endpoints. 
It provides a high-level class, huggingface_hub.InferenceClient , which makes it easy to make calls to TGI’s Messages API. InferenceClient also takes care of parameter validation and provides a simple-to-use interface. Install huggingface_hub package via pip. Copied pip install huggingface_hub You can now use InferenceClient the exact same way you would use OpenAI client in Python Copied from huggingface_hub import InferenceClient client = InferenceClient( base_url= "http://localhost:8080/v1/" , ) output = client.chat.completions.create( model= "tgi" , messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "Count to 10" }, ], stream= True , max_tokens= 1024 , ) for chunk in output: print (chunk.choices[ 0 ].delta.content) You can check out more details about OpenAI compatibility here . There is also an async version of the client, AsyncInferenceClient , based on asyncio and aiohttp . You can find docs for it here OpenAI Client You can directly use the OpenAI Python or JS clients to interact with TGI. Install the OpenAI Python package via pip. Copied pip install openai Copied from openai import OpenAI # init the client but point it to TGI client = OpenAI( base_url= "http://localhost:8080/v1/" , api_key= "-" ) chat_completion = client.chat.completions.create( model= "tgi" , messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "What is deep learning?" } ], stream= True ) # iterate and print stream for message in chat_completion: print (message) UI Gradio Gradio is a Python library that helps you build web applications for your machine learning models with a few lines of code. It has a ChatInterface wrapper that helps create neat UIs for chatbots. Let’s take a look at how to create a chatbot with streaming mode using TGI and Gradio. Let’s install Gradio and Hub Python library first. Copied pip install huggingface-hub gradio Assume you are serving your model on port 8080, we will query through InferenceClient . Copied import gradio as gr from huggingface_hub import InferenceClient client = InferenceClient(base_url= "http://127.0.0.1:8080" ) def inference ( message, history ): partial_message = "" output = client.chat.completions.create( messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : message}, ], stream= True , max_tokens= 1024 , ) for chunk in output: partial_message += chunk.choices[ 0 ].delta.content yield partial_message gr.ChatInterface( inference, chatbot=gr.Chatbot(height= 300 ), textbox=gr.Textbox(placeholder= "Chat with me!" , container= False , scale= 7 ), description= "This is the demo for Gradio UI consuming TGI endpoint." , title= "Gradio 🤝 TGI" , examples=[ "Are tomatoes vegetables?" ], retry_btn= "Retry" , undo_btn= "Undo" , clear_btn= "Clear" , ).queue().launch() You can check out the UI and try the demo directly here 👇 You can read more about how to customize a ChatInterface here . ChatUI ChatUI is an open-source interface built for consuming LLMs. It offers many customization options, such as web search with SERP API and more. ChatUI can automatically consume the TGI server and even provides an option to switch between different TGI endpoints. You can try it out at Hugging Chat , or use the ChatUI Docker Space to deploy your own Hugging Chat to Spaces. To serve both ChatUI and TGI in same environment, simply add your own endpoints to the MODELS variable in .env.local file inside the chat-ui repository. 
Provide the endpoints pointing to where TGI is served:

{
  // rest of the model config here
  "endpoints": [{ "url": "https://HOST:PORT/generate_stream" }]
}
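To round out the examples above, here is a minimal sketch of calling the streaming Messages API with plain requests and no client library. It assumes TGI is served on localhost:8080 and that the stream uses OpenAI-style "data: ..." server-sent events, matching the curl and OpenAI client examples earlier on this page:

import json

import requests

payload = {
    "model": "tgi",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is deep learning?"},
    ],
    "stream": True,
    "max_tokens": 64,
}

with requests.post("http://localhost:8080/v1/chat/completions", json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue
        decoded = line.decode("utf-8")
        if not decoded.startswith("data:"):
            continue
        data = decoded[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        # Each chunk carries an incremental piece of the answer in choices[0].delta.content.
        print(chunk["choices"][0]["delta"].get("content") or "", end="", flush=True)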
TPU_training.txt | TPU training Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation TPU training Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started TPU training A TPU (Tensor Processing Unit) is a type of hardware specifically designed for training models efficiently. Accelerate supports TPU training, but there are a few things you should be aware of, namely graph compilation. This tutorial briefly discusses compilation, and for more details, take a look at the Training on TPUs with Accelerate guide. Compilation A TPU creates a graph of all the operations in the training step such as the forward pass, backward pass and optimizer step. This is why the first training step always takes a while because building and compiling this graph takes time. But once compilation is complete, it is cached and all subsequent steps are much faster. The key is to avoid compiling your code again or else training is super slow. 
This means all your operations must be exactly the same: all tensors in your batches must have the same length (for example, no dynamic padding for NLP tasks; see the padding sketch at the end of this page), and your code must be static (for example, no layers with for loops that have different lengths depending on the input, such as an LSTM).

Weight tying

A common language model design is to tie the weights of the embedding and softmax layers. However, moving the model to a TPU (either yourself or by passing it to the prepare() method) breaks the weight tying, and you’ll need to retie the weights. To add special behavior (like weight tying) in your script for TPUs, first check that distributed_type is DistributedType.TPU. Then you can use the tie_weights method to tie the weights.

if accelerator.distributed_type == DistributedType.TPU:
    model.tie_weights()
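To illustrate the fixed-shape constraint above, here is a minimal padding sketch using a transformers tokenizer. The checkpoint name is only an example and is not prescribed by this guide:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

texts = ["a short example", "a somewhat longer example sentence for the same batch"]

# Pad every batch to the same fixed length so tensor shapes never change between steps,
# which avoids triggering a new graph compilation on the TPU.
batch = tokenizer(
    texts,
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # torch.Size([2, 128]) for every batch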
Getting_Started_with_Repositories.txt | Getting Started with Repositories Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Getting Started with Repositories Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Getting Started with Repositories This beginner-friendly guide will help you get the basic skills you need to create and manage your repository on the Hub. Each section builds on the previous one, so feel free to choose where to start! Requirements This document shows how to handle repositories through the web interface as well as through the terminal. There are no requirements if working with the UI. If you want to work with the terminal, please follow these installation instructions. If you do not have git available as a CLI command yet, you will need to install Git for your platform. You will also need to install Git LFS , which will be used to handle large files such as images and model weights. To be able to push your code to the Hub, you’ll need to authenticate somehow. The easiest way to do this is by installing the huggingface_hub CLI and running the login command: Copied python -m pip install huggingface_hub huggingface-cli login The content in the Getting Started section of this document is also available as a video! Creating a repository Using the Hub’s web interface you can easily create repositories, add files (even large ones!), explore models, visualize diffs, and much more. There are three kinds of repositories on the Hub, and in this guide you’ll be creating a model repository for demonstration purposes. For information on creating and managing models, datasets, and Spaces, refer to their respective documentation. 
To create a new repository, visit huggingface.co/new : Specify the owner of the repository: this can be either you or any of the organizations you’re affiliated with. Enter your model’s name. This will also be the name of the repository. Specify whether you want your model to be public or private. Specify the license. You can leave the License field blank for now. To learn about licenses, visit the Licenses documentation. After creating your model repository, you should see a page like this: Note that the Hub prompts you to create a Model Card , which you can learn about in the Model Cards documentation . Including a Model Card in your model repo is best practice, but since we’re only making a test repo at the moment we can skip this. Adding files to a repository (Web UI) To add files to your repository via the web UI, start by selecting the Files tab, navigating to the desired directory, and then clicking Add file . You’ll be given the option to create a new file or upload a file directly from your computer. Creating a new file Choosing to create a new file will take you to the following editor screen, where you can choose a name for your file, add content, and save your file with a message that summarizes your changes. Instead of directly committing the new file to your repo’s main branch, you can select Open as a pull request to create a Pull Request . Uploading a file If you choose Upload file you’ll be able to choose a local file to upload, along with a message summarizing your changes to the repo. As with creating new files, you can select Open as a pull request to create a Pull Request instead of adding your changes directly to the main branch of your repo. Adding files to a repository (terminal) Cloning repositories Downloading repositories to your local machine is called cloning . You can use the following commands to load your repo and navigate to it: Copied git clone https://huggingface.co/<your-username>/<your-model-name> cd <your-model-name> You can clone over SSH with the following command: Copied git clone [email protected]:<your-username>/<your-model-name> cd <your-model-name> You’ll need to add your SSH public key to your user settings to push changes or access private repositories. Set up Now’s the time, you can add any files you want to the repository! 🔥 Do you have files larger than 10MB? Those files should be tracked with git-lfs , which you can initialize with: Copied git lfs install Note that if your files are larger than 5GB you’ll also need to run: Copied huggingface-cli lfs-enable-largefiles . When you use Hugging Face to create a repository, Hugging Face automatically provides a list of common file extensions for common Machine Learning large files in the .gitattributes file, which git-lfs uses to efficiently track changes to your large files. However, you might need to add new extensions if your file types are not already handled. You can do so with git lfs track "*.your_extension" . Pushing files You can use Git to save new files and any changes to already existing files as a bundle of changes called a commit , which can be thought of as a “revision” to your project. To create a commit, you have to add the files to let Git know that we’re planning on saving the changes and then commit those changes. In order to sync the new commit with the Hugging Face Hub, you then push the commit to the Hub. Copied # Create any files you like! Then... git add . git commit -m "First model version" # You can choose any descriptive message git push And you’re done! 
You can check your repository on Hugging Face with all the recently added files. For example, in the screenshot below the user added a number of files. Note that some files in this example have a size of 1.04 GB, so the repo uses Git LFS to track them. If you cloned the repository with HTTP, you might be asked to enter your username and password on every push operation. The simplest way to avoid this repetition is to switch to SSH instead of HTTP. Alternatively, if you have to use HTTP, you might find it helpful to set up a git credential helper to autofill your username and password. If you prefer to skip git entirely, a Python alternative using the huggingface_hub library is sketched at the end of this page.

Viewing a repo’s history

Every time you go through the add - commit - push cycle, the repo will keep track of every change you’ve made to your files. The UI allows you to explore the model files and commits and to see the difference (also known as a diff) introduced by each commit. To see the history, you can click on the History: X commits link. You can click on an individual commit to see what changes that commit introduced.
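As an alternative to the git-based workflow above, you can create a repository and upload files entirely from Python with the huggingface_hub library that was installed in the Requirements section. This is a minimal sketch; the repository id and file name are placeholders:

from huggingface_hub import HfApi

api = HfApi()  # assumes you have already authenticated with `huggingface-cli login`

repo_id = "your-username/your-model-name"  # placeholder, replace with your own namespace
api.create_repo(repo_id, private=True, exist_ok=True)

# Uploads go over HTTP, so there is no local Git LFS setup even for large files.
api.upload_file(
    path_or_fileobj="pytorch_model.bin",  # placeholder path to a local file
    path_in_repo="pytorch_model.bin",
    repo_id=repo_id,
    commit_message="First model version",
)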
Prompt_tuning.txt | Prompt tuning Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation Prompt tuning PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Prompt tuning Prompt tuning adds task-specific prompts to the input, and these prompt parameters are updated independently of the pretrained model parameters which are frozen. The abstract from the paper is: In this work, we explore “prompt tuning”, a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3’s “few-shot” learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method “closes the gap” and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant in that large models are costly to share and serve, and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed “prefix tuning” of Li and Liang (2021), and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning . PromptTuningConfig class peft. 
PromptTuningConfig < source > ( task_type : typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None peft_type : typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None auto_mapping : typing.Optional[dict] = None base_model_name_or_path : typing.Optional[str] = None revision : typing.Optional[str] = None inference_mode : bool = False num_virtual_tokens : int = None token_dim : int = None num_transformer_submodules : typing.Optional[int] = None num_attention_heads : typing.Optional[int] = None num_layers : typing.Optional[int] = None prompt_tuning_init : typing.Union[peft.tuners.prompt_tuning.config.PromptTuningInit, str] = <PromptTuningInit.RANDOM: 'RANDOM'> prompt_tuning_init_text : typing.Optional[str] = None tokenizer_name_or_path : typing.Optional[str] = None tokenizer_kwargs : typing.Optional[dict] = None ) Parameters prompt_tuning_init (Union[ PromptTuningInit , str ]) — The initialization of the prompt embedding. prompt_tuning_init_text ( str , optional ) — The text to initialize the prompt embedding. Only used if prompt_tuning_init is TEXT . tokenizer_name_or_path ( str , optional ) — The name or path of the tokenizer. Only used if prompt_tuning_init is TEXT . tokenizer_kwargs ( dict , optional ) — The keyword arguments to pass to AutoTokenizer.from_pretrained . Only used if prompt_tuning_init is TEXT . This is the configuration class to store the configuration of a PromptEmbedding . PromptEmbedding class peft. PromptEmbedding < source > ( config word_embeddings ) Parameters config ( PromptTuningConfig ) — The configuration of the prompt embedding. word_embeddings ( torch.nn.Module ) — The word embeddings of the base transformer model. The model to encode virtual tokens into prompt embeddings. Attributes : embedding ( torch.nn.Embedding ) — The embedding layer of the prompt embedding. Example: Copied >>> from peft import PromptEmbedding, PromptTuningConfig >>> config = PromptTuningConfig( ... peft_type= "PROMPT_TUNING" , ... task_type= "SEQ_2_SEQ_LM" , ... num_virtual_tokens= 20 , ... token_dim= 768 , ... num_transformer_submodules= 1 , ... num_attention_heads= 12 , ... num_layers= 12 , ... prompt_tuning_init= "TEXT" , ... prompt_tuning_init_text= "Predict if sentiment of this review is positive, negative or neutral" , ... tokenizer_name_or_path= "t5-base" , ... ) >>> # t5_model.shared is the word embeddings of the base model >>> prompt_embedding = PromptEmbedding(config, t5_model.shared) Input Shape: ( batch_size , total_virtual_tokens ) Output Shape: ( batch_size , total_virtual_tokens , token_dim ) < > Update on GitHub ← Prefix tuning Layernorm tuning → Prompt tuning Prompt Tuning Config Prompt Embedding |
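To connect this configuration reference to a training script, here is a minimal sketch of wrapping a causal language model with prompt tuning via get_peft_model. The model checkpoint and initialization text are illustrative assumptions rather than values prescribed by this page:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "bigscience/bloomz-560m"  # illustrative choice of base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=8,
    prompt_tuning_init_text="Classify if the tweet is a complaint or not:",
    tokenizer_name_or_path=model_name,
)

# Only the virtual prompt tokens are trainable; the base model weights stay frozen.
model = get_peft_model(model, config)
model.print_trainable_parameters()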
Interface__SafetensorsShardFileInfo.txt | Interface: SafetensorsShardFileInfo
Properties:
basePrefix : string. Defined in hub/src/lib/parse-safetensors-metadata.ts:20
prefix : string. Defined in hub/src/lib/parse-safetensors-metadata.ts:19
shard : string. Defined in hub/src/lib/parse-safetensors-metadata.ts:21
total : string. Defined in hub/src/lib/parse-safetensors-metadata.ts:22
Image-to-image.txt | Image-to-image
Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image.
With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import load_image, make_image_grid pipeline = AutoPipelineForImage2Image.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder" , torch_dtype=torch.float16, use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention() , to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention . Load an image to pass to the pipeline: Copied init_image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png" ) Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" image = pipeline(prompt, image=init_image).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5 , Stable Diffusion XL (SDXL) , and Kandinsky 2.2 . The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image = pipeline(prompt, image=init_image).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image = pipeline(prompt, image=init_image, strength= 0.5 ).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder" , torch_dtype=torch.float16, use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image = pipeline(prompt, image=init_image).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image = pipeline(prompt, image=init_image, strength= 0.8 ).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image = pipeline(prompt, image=init_image, guidance_scale= 8.0 ).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" # pass prompt and image to pipeline image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image import torch from diffusers.utils import make_image_grid pipeline = AutoPipelineForText2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() text2image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" ).images[ 0 ] text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder" , torch_dtype=torch.float16, use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() image2image = pipeline( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" , image=text2image).images[ 0 ] make_image_grid([text2image, image2image], rows= 1 , cols= 2 ) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image = pipeline(prompt, image=init_image, output_type= "latent" ).images[ 0 ] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style : Copied pipeline = AutoPipelineForImage2Image.from_pretrained( "ogkalu/Comic-Diffusion" , torch_dtype=torch.float16 ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # need to include the token "charliebo artstyle" in the prompt to use this checkpoint image = pipeline( "Astronaut in a jungle, charliebo artstyle" , image=image, output_type= "latent" ).images[ 0 ] Repeat one more time to generate the final image in a pixel art style : Copied pipeline = AutoPipelineForImage2Image.from_pretrained( "kohbanye/pixel-art-style" , torch_dtype=torch.float16 ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # need to include the token "pixelartstyle" in the prompt to use this checkpoint image = pipeline( "Astronaut in a jungle, pixelartstyle" , image=image).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # pass prompt and image to pipeline image_1 = pipeline(prompt, image=init_image, output_type= "latent" ).images[ 0 ] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( "stabilityai/sd-x2-latent-upscaler" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) upscaler.enable_model_cpu_offload() upscaler.enable_xformers_memory_efficient_attention() image_2 = upscaler(prompt, image=image_1, output_type= "latent" ).images[ 0 ] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline super_res = StableDiffusionUpscalePipeline.from_pretrained( "stabilityai/stable-diffusion-x4-upscaler" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) super_res.enable_model_cpu_offload() super_res.enable_xformers_memory_efficient_attention() image_3 = super_res(prompt, image=image_2).images[ 0 ] make_image_grid([init_image, image_3.resize(( 512 , 512 ))], rows= 1 , cols= 2 ) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image import torch pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel negative_prompt_embeds=negative_prompt_embeds, # generated from Compel image=init_image, ).images[ 0 ] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid # prepare image url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" init_image = load_image(url) init_image = init_image.resize(( 958 , 960 )) # resize to depth image dimensions depth_image = load_image( "https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png" ) make_image_grid([init_image, depth_image], rows= 1 , cols= 2 ) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image : Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image import torch controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11f1p_sd15_depth" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline = AutoPipelineForImage2Image.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , controlnet=controlnet, torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[ 0 ] make_image_grid([init_image, depth_image, image_control_net], rows= 1 , cols= 3 ) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( "nitrosocke/elden-ring-diffusion" , torch_dtype=torch.float16, ) pipeline.enable_model_cpu_offload() # remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed pipeline.enable_xformers_memory_efficient_attention() prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength= 0.45 , guidance_scale= 10.5 ).images[ 0 ] make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows= 2 , cols= 2 ) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() + pipeline.enable_xformers_memory_efficient_attention() With torch.compile , you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch. compile (pipeline.unet, mode= "reduce-overhead" , fullgraph= True ) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
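To make the effect of the strength parameter described earlier more concrete, the sketch below runs a small sweep with the same pipeline and initial image used throughout this guide; the particular strength values are arbitrary choices for illustration.

# Sketch: compare several strength values with the img2img pipeline from the examples above
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# Higher strength adds more noise, so the result drifts further from the initial image.
images = [pipeline(prompt, image=init_image, strength=s).images[0] for s in (0.4, 0.6, 0.8, 1.0)]
make_image_grid([init_image, *images], rows=1, cols=5)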
Text_Generation.txt | Text Generation
Generate text based on a prompt.
If you are interested in a Chat Completion task, which generates a response based on a list of messages, check out the chat-completion task.
For more details about the text-generation task, check out its dedicated page! You will find examples and related materials.
Recommended models
google/gemma-2-2b-it : A text-generation model trained to follow instructions.
meta-llama/Meta-Llama-3.1-8B-Instruct : Very powerful text generation model trained to follow instructions.
microsoft/Phi-3-mini-4k-instruct : Small yet powerful text generation model.
Qwen/Qwen2.5-7B-Instruct : Strong text generation model to follow instructions.
Explore all available models and find the one that suits you best here.
Using the API (Python)
import requests

API_URL = "https://api-inference.huggingface.co/models/google/gemma-2-2b-it"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "Can you please let us know more details about your ",
})
To use the Python client, see huggingface_hub’s package reference.
API specification
Request
Payload: inputs* string parameters object adapter_id string Lora adapter id best_of integer Generate best_of sequences and return the one with the highest token logprobs. decoder_input_details boolean Whether to return decoder input token logprobs and ids. details boolean Whether to return generation details. do_sample boolean Activate logits sampling. frequency_penalty number The parameter for frequency penalty. 1.0 means no penalty. Penalizes new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. grammar unknown One of the following: (#1) object type* enum Possible values: json. value* unknown A string that represents a JSON Schema. JSON Schema is a declarative language that allows you to annotate JSON documents with types and descriptions.
(#2) object type* enum Possible values: regex. value* string max_new_tokens integer Maximum number of tokens to generate. repetition_penalty number The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details. return_full_text boolean Whether to prepend the prompt to the generated text seed integer Random sampling seed. stop string[] Stop generating tokens if a member of stop is generated. temperature number The value used to module the logits distribution. top_k integer The number of highest probability vocabulary tokens to keep for top-k-filtering. top_n_tokens integer The number of highest probability vocabulary tokens to keep for top-n-filtering. top_p number Top-p value for nucleus sampling. truncate integer Truncate inputs tokens to the given size. typical_p number Typical Decoding mass See Typical Decoding for Natural Language Generation for more information. watermark boolean Watermarking with A Watermark for Large Language Models . stream boolean Some options can be configured by passing headers to the Inference API. Here are the available headers: Headers authorization string Authentication header in the form 'Bearer: hf_****' when hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page . x-use-cache boolean, default to true There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching here . x-wait-for-model boolean, default to false If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here . For more information about Inference API headers, check out the parameters guide . Response Output type depends on the stream input parameter. If stream is false (default), the response will be a JSON object with the following fields: Body details object best_of_sequences object[] finish_reason enum Possible values: length, eos_token, stop_sequence. generated_text string generated_tokens integer prefill object[] id integer logprob number text string seed integer tokens object[] id integer logprob number special boolean text string top_tokens array[] id integer logprob number special boolean text string finish_reason enum Possible values: length, eos_token, stop_sequence. generated_tokens integer prefill object[] id integer logprob number text string seed integer tokens object[] id integer logprob number special boolean text string top_tokens array[] id integer logprob number special boolean text string generated_text string If stream is true , generated tokens are returned as a stream, using Server-Sent Events (SSE). For more information about streaming, check out this guide . Body details object finish_reason enum Possible values: length, eos_token, stop_sequence. 
generated_tokens integer input_length integer seed integer generated_text string index integer token object id integer logprob number special boolean text string top_tokens object[] id integer logprob number special boolean text string
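To show how the request payload fields and headers described above fit together in practice, here is a sketch of a call that sets a few generation parameters; the specific values are arbitrary examples.

# Sketch: passing generation parameters and optional headers to the serverless Inference API
import requests

API_URL = "https://api-inference.huggingface.co/models/google/gemma-2-2b-it"
headers = {
    "Authorization": "Bearer hf_***",
    "x-wait-for-model": "true",  # only advisable after first receiving a 503
}

payload = {
    "inputs": "Can you please let us know more details about your ",
    "parameters": {
        "max_new_tokens": 64,
        "temperature": 0.7,
        "top_p": 0.9,
        "return_full_text": False,
    },
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())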
Datasets.txt | Datasets
🤗 Datasets is a library for easily accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP) tasks. Load a dataset in a single line of code, and use our powerful data processing methods to quickly get your dataset ready for training in a deep learning model. Backed by the Apache Arrow format, process large datasets with zero-copy reads without any memory constraints for optimal speed and efficiency. We also feature a deep integration with the Hugging Face Hub, allowing you to easily load and share a dataset with the wider machine learning community. Find your dataset today on the Hugging Face Hub, and take an in-depth look inside of it with the live viewer.
Tutorials: Learn the basics and become familiar with loading, accessing, and processing a dataset. Start here if you are using 🤗 Datasets for the first time!
How-to guides: Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Datasets to solve real-world problems.
Conceptual guides: High-level explanations for building a better understanding about important topics such as the underlying data format, the cache, and how datasets are generated.
Reference: Technical descriptions of how 🤗 Datasets classes and methods work.
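As a quick illustration of the single-line loading and the processing methods mentioned above, here is a minimal sketch; the dataset name and the toy map() function are illustrative assumptions.

# Sketch: load a dataset from the Hub and apply a processing function with map()
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")  # example dataset chosen for illustration
print(dataset[0])

# Batched map() is one of the processing methods used to get data ready for training.
dataset = dataset.map(lambda batch: {"n_chars": [len(t) for t in batch["text"]]}, batched=True)
print(dataset[0]["n_chars"])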
Multilingual_models_for_inference.txt | Multilingual models for inference
There are several multilingual models in 🤗 Transformers, and their inference usage differs from monolingual models. Not all multilingual model usage is different though. Some models, like google-bert/bert-base-multilingual-uncased, can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference.
XLM
XLM has ten different checkpoints, only one of which is monolingual. The nine remaining model checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don’t.
XLM with language embeddings
The following XLM models use language embeddings to specify the language used at inference:
FacebookAI/xlm-mlm-ende-1024 (Masked language modeling, English-German)
FacebookAI/xlm-mlm-enfr-1024 (Masked language modeling, English-French)
FacebookAI/xlm-mlm-enro-1024 (Masked language modeling, English-Romanian)
FacebookAI/xlm-mlm-xnli15-1024 (Masked language modeling, XNLI languages)
FacebookAI/xlm-mlm-tlm-xnli15-1024 (Masked language modeling + translation, XNLI languages)
FacebookAI/xlm-clm-enfr-1024 (Causal language modeling, English-French)
FacebookAI/xlm-clm-ende-1024 (Causal language modeling, English-German)
Language embeddings are represented as a tensor of the same shape as the input_ids passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer’s lang2id and id2lang attributes.
In this example, load the FacebookAI/xlm-clm-enfr-1024 checkpoint (Causal language modeling, English-French):
>>> import torch
>>> from transformers import XLMTokenizer, XLMWithLMHeadModel

>>> tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
>>> model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
The lang2id attribute of the tokenizer displays this model’s languages and their ids:
>>> print(tokenizer.lang2id)
{'en': 0, 'fr': 1}
Next, create an example input:
>>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])  # batch size of 1
Set the language id as "en" and use it to define the language embedding. The language embedding is a tensor filled with 0 since that is the language id for English.
This tensor should be the same size as input_ids . Copied >>> language_id = tokenizer.lang2id[ "en" ] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[ 1 ]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view( 1 , - 1 ) # is now of shape [1, sequence_length] (we have a batch size of 1) Now you can pass the input_ids and language embedding to the model: Copied >>> outputs = model(input_ids, langs=langs) The run_generation.py script can generate text with language embeddings using the xlm-clm checkpoints. XLM without language embeddings The following XLM models do not require language embeddings during inference: FacebookAI/xlm-mlm-17-1280 (Masked language modeling, 17 languages) FacebookAI/xlm-mlm-100-1280 (Masked language modeling, 100 languages) These models are used for generic sentence representations, unlike the previous XLM checkpoints. BERT The following BERT models can be used for multilingual tasks: google-bert/bert-base-multilingual-uncased (Masked language modeling + Next sentence prediction, 102 languages) google-bert/bert-base-multilingual-cased (Masked language modeling + Next sentence prediction, 104 languages) These models do not require language embeddings during inference. They should identify the language from the context and infer accordingly. XLM-RoBERTa The following XLM-RoBERTa models can be used for multilingual tasks: FacebookAI/xlm-roberta-base (Masked language modeling, 100 languages) FacebookAI/xlm-roberta-large (Masked language modeling, 100 languages) XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks like classification, sequence labeling, and question answering. M2M100 The following M2M100 models can be used for multilingual translation: facebook/m2m100_418M (Translation) facebook/m2m100_1.2B (Translation) In this example, load the facebook/m2m100_418M checkpoint to translate from Chinese to English. You can set the source language in the tokenizer: Copied >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "不要插手巫師的事務, 因為他們是微妙的, 很快就會發怒." >>> tokenizer = M2M100Tokenizer.from_pretrained( "facebook/m2m100_418M" , src_lang= "zh" ) >>> model = M2M100ForConditionalGeneration.from_pretrained( "facebook/m2m100_418M" ) Tokenize the text: Copied >>> encoded_zh = tokenizer(chinese_text, return_tensors= "pt" ) M2M100 forces the target language id as the first generated token to translate to the target language. Set the forced_bos_token_id to en in the generate method to translate to English: Copied >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id( "en" )) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens= True ) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' 
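The same M2M100 checkpoint can translate in the opposite direction by switching the source language and the forced target language id. The following is a short sketch reusing the tokenizer, model, and en_text from above; "zh" is assumed here as the Chinese language id.

# Sketch: English -> Chinese with the same facebook/m2m100_418M checkpoint
tokenizer.src_lang = "en"
encoded_en = tokenizer(en_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.get_lang_id("zh"))
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))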
MBart
The following MBart models can be used for multilingual translation:
facebook/mbart-large-50-one-to-many-mmt (One-to-many multilingual machine translation, 50 languages)
facebook/mbart-large-50-many-to-many-mmt (Many-to-many multilingual machine translation, 50 languages)
facebook/mbart-large-50-many-to-one-mmt (Many-to-one multilingual machine translation, 50 languages)
facebook/mbart-large-50 (Multilingual translation, 50 languages)
facebook/mbart-large-cc25
In this example, load the facebook/mbart-large-50-many-to-many-mmt checkpoint to translate Finnish to English. You can set the source language in the tokenizer:
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "Älä sekaannu velhojen asioihin, sillä ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
Tokenize the Finnish text:
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
MBart forces the target language id as the first generated token to translate to the target language. Set the forced_bos_token_id to en in the generate method to translate to English:
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
If you are using the facebook/mbart-large-50-many-to-one-mmt checkpoint, you don’t need to force the target language id as the first generated token; otherwise the usage is the same.
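For the reverse direction with the same many-to-many checkpoint, the source language becomes English and the forced BOS token becomes the Finnish code. Below is a sketch reusing en_text and the model above; the codes en_XX and fi_FI are the MBART-50 identifiers for English and Finnish.

# Sketch: English -> Finnish with facebook/mbart-large-50-many-to-many-mmt
tokenizer.src_lang = "en_XX"
encoded_en = tokenizer(en_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id["fi_FI"])
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))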
Using_fastai_at_Hugging_Face.txt | Using fastai at Hugging Face
fastai is an open-source Deep Learning library that leverages PyTorch and Python to provide high-level components to train fast and accurate neural networks with state-of-the-art outputs on text, vision, and tabular data.
Exploring fastai in the Hub
You can find fastai models by filtering at the left of the models page. All models on the Hub come with the following features:
An automatically generated model card with a brief description and metadata tags that help with discoverability.
An interactive widget you can use to play with the model directly in the browser (for Image Classification).
An Inference API that allows you to make inference requests (for Image Classification).
Using existing models
The huggingface_hub library is a lightweight Python client with utility functions to download models from the Hub.
pip install huggingface_hub["fastai"]
Once you have the library installed, you just need to use the from_pretrained_fastai method. This method not only loads the model, but also validates the fastai version that was used when the model was saved, which is important for reproducibility.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("espejelomar/identify-my-cat")

_, _, probs = learner.predict(img)
print(f"Probability it's a cat: {100 * probs[1].item():.2f}%")
# Probability it's a cat: 100.00%
If you want to see how to load a specific model, you can click Use in fastai and you will be given a working snippet that you can use to load it!
Sharing your models
You can share your fastai models by using the push_to_hub_fastai method.
from huggingface_hub import push_to_hub_fastai

push_to_hub_fastai(learner=learn, repo_id="espejelomar/identify-my-cat")
Additional resources
fastai course.
fastai website.
Integration with Hub docs.
Integration with Hub announcement.
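The img in the loading snippet above is not defined on this page; for an image-classification learner, predict typically accepts a local file path or a PIL image. A hedged sketch, with the filename being an assumption:

# Sketch: running the downloaded learner on a local image file (filename is a placeholder)
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("espejelomar/identify-my-cat")
label, label_idx, probs = learner.predict("some_cat_photo.jpg")
print(f"Predicted: {label} (p={probs[label_idx].item():.2f})")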
Create_a_custom_architecture.txt | Create a custom architecture Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Create a custom architecture Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Create a custom architecture An AutoClass automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an AutoClass to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone who is interested in studying, training or experimenting with a 🤗 Transformers model. In this guide, dive deeper into creating a custom model without an AutoClass . Learn how to: Load and customize a model configuration. Create a model architecture. Create a slow and fast tokenizer for text. Create an image processor for vision tasks. Create a feature extractor for audio tasks. Create a processor for multimodal tasks. Configuration A configuration refers to a model’s specific attributes. Each model configuration has different attributes; for instance, all NLP models have the hidden_size , num_attention_heads , num_hidden_layers and vocab_size attributes in common. These attributes specify the number of attention heads or hidden layers to construct a model with. Get a closer look at DistilBERT by accessing DistilBertConfig to inspect it’s attributes: Copied >>> from transformers import DistilBertConfig >>> config = DistilBertConfig() >>> print (config) DistilBertConfig { "activation" : "gelu" , "attention_dropout" : 0.1 , "dim" : 768 , "dropout" : 0.1 , "hidden_dim" : 3072 , "initializer_range" : 0.02 , "max_position_embeddings" : 512 , "model_type" : "distilbert" , "n_heads" : 12 , "n_layers" : 6 , "pad_token_id" : 0 , "qa_dropout" : 0.1 , "seq_classif_dropout" : 0.2 , "sinusoidal_pos_embds" : false, "transformers_version" : "4.16.2" , "vocab_size" : 30522 } DistilBertConfig displays all the default attributes used to build a base DistilBertModel . All attributes are customizable, creating space for experimentation. For example, you can customize a default model to: Try a different activation function with the activation parameter. Use a higher dropout ratio for the attention probabilities with the attention_dropout parameter. 
Copied >>> my_config = DistilBertConfig(activation= "relu" , attention_dropout= 0.4 ) >>> print (my_config) DistilBertConfig { "activation" : "relu" , "attention_dropout" : 0.4 , "dim" : 768 , "dropout" : 0.1 , "hidden_dim" : 3072 , "initializer_range" : 0.02 , "max_position_embeddings" : 512 , "model_type" : "distilbert" , "n_heads" : 12 , "n_layers" : 6 , "pad_token_id" : 0 , "qa_dropout" : 0.1 , "seq_classif_dropout" : 0.2 , "sinusoidal_pos_embds" : false, "transformers_version" : "4.16.2" , "vocab_size" : 30522 } Pretrained model attributes can be modified in the from_pretrained() function: Copied >>> my_config = DistilBertConfig.from_pretrained( "distilbert/distilbert-base-uncased" , activation= "relu" , attention_dropout= 0.4 ) Once you are satisfied with your model configuration, you can save it with save_pretrained() . Your configuration file is stored as a JSON file in the specified save directory: Copied >>> my_config.save_pretrained(save_directory= "./your_model_save_path" ) To reuse the configuration file, load it with from_pretrained() : Copied >>> my_config = DistilBertConfig.from_pretrained( "./your_model_save_path/config.json" ) You can also save your configuration file as a dictionary or even just the difference between your custom configuration attributes and the default configuration attributes! See the configuration documentation for more details. Model The next step is to create a model . The model - also loosely referred to as the architecture - defines what each layer is doing and what operations are happening. Attributes like num_hidden_layers from the configuration are used to define the architecture. Every model shares the base class PreTrainedModel and a few common methods like resizing input embeddings and pruning self-attention heads. In addition, all models are also either a torch.nn.Module , tf.keras.Model or flax.linen.Module subclass. This means models are compatible with each of their respective framework’s usage. Pytorch Hide Pytorch content Load your custom configuration attributes into the model: Copied >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained( "./your_model_save_path/config.json" ) >>> model = DistilBertModel(my_config) This creates a model with random values instead of pretrained weights. You won’t be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training. Create a pretrained model with from_pretrained() : Copied >>> model = DistilBertModel.from_pretrained( "distilbert/distilbert-base-uncased" ) When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you’d like: Copied >>> model = DistilBertModel.from_pretrained( "distilbert/distilbert-base-uncased" , config=my_config) TensorFlow Hide TensorFlow content Load your custom configuration attributes into the model: Copied >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained( "./your_model_save_path/my_config.json" ) >>> tf_model = TFDistilBertModel(my_config) This creates a model with random values instead of pretrained weights. You won’t be able to use this model for anything useful yet until you train it. 
Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training. Create a pretrained model with from_pretrained() : Copied >>> tf_model = TFDistilBertModel.from_pretrained( "distilbert/distilbert-base-uncased" ) When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you’d like: Copied >>> tf_model = TFDistilBertModel.from_pretrained( "distilbert/distilbert-base-uncased" , config=my_config) Model heads At this point, you have a base DistilBERT model which outputs the hidden states . The hidden states are passed as inputs to a model head to produce the final output. 🤗 Transformers provides a different model head for each task as long as a model supports the task (i.e., you can’t use DistilBERT for a sequence-to-sequence task like translation). Pytorch Hide Pytorch content For example, DistilBertForSequenceClassification is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. Copied >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained( "distilbert/distilbert-base-uncased" ) Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the DistilBertForQuestionAnswering model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. Copied >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained( "distilbert/distilbert-base-uncased" ) TensorFlow Hide TensorFlow content For example, TFDistilBertForSequenceClassification is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. Copied >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained( "distilbert/distilbert-base-uncased" ) Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the TFDistilBertForQuestionAnswering model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. Copied >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained( "distilbert/distilbert-base-uncased" ) Tokenizer The last base class you need before using a model for textual data is a tokenizer to convert raw text to tensors. There are two types of tokenizers you can use with 🤗 Transformers: PreTrainedTokenizer : a Python implementation of a tokenizer. PreTrainedTokenizerFast : a tokenizer from our Rust-based 🤗 Tokenizer library. This tokenizer type is significantly faster - especially during batch tokenization - due to its Rust implementation. The fast tokenizer also offers additional methods like offset mapping which maps tokens to their original words or characters. 
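For example, here is a small sketch of the offset mapping a fast tokenizer returns (using the same distilbert/distilbert-base-uncased checkpoint as the rest of this guide; the input sentence is arbitrary):

Copied
>>> from transformers import DistilBertTokenizerFast
>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased")
>>> encoding = fast_tokenizer("Hello Hugging Face!", return_offsets_mapping=True)
>>> encoding.tokens()  # the subword tokens produced by the fast tokenizer
>>> encoding["offset_mapping"]  # one (start, end) character span into the original string per token

Note that return_offsets_mapping=True is only available on fast tokenizers.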
Both tokenizers support common methods such as encoding and decoding, adding new tokens, and managing special tokens. Not every model supports a fast tokenizer. Take a look at this table to check if a model has fast tokenizer support. If you trained your own tokenizer, you can create one from your vocabulary file: Copied >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file= "my_vocab_file.txt" , do_lower_case= False , padding_side= "left" ) It is important to remember the vocabulary from a custom tokenizer will be different from the vocabulary generated by a pretrained model’s tokenizer. You need to use a pretrained model’s vocabulary if you are using a pretrained model, otherwise the inputs won’t make sense. Create a tokenizer with a pretrained model’s vocabulary with the DistilBertTokenizer class: Copied >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained( "distilbert/distilbert-base-uncased" ) Create a fast tokenizer with the DistilBertTokenizerFast class: Copied >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained( "distilbert/distilbert-base-uncased" ) By default, AutoTokenizer will try to load a fast tokenizer. You can disable this behavior by setting use_fast=False in from_pretrained . Image processor An image processor processes vision inputs. It inherits from the base ImageProcessingMixin class. To use, create an image processor associated with the model you’re using. For example, create a default ViTImageProcessor if you are using ViT for image classification: Copied >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print (vit_extractor) ViTImageProcessor { "do_normalize" : true, "do_resize" : true, "image_processor_type" : "ViTImageProcessor" , "image_mean" : [ 0.5 , 0.5 , 0.5 ], "image_std" : [ 0.5 , 0.5 , 0.5 ], "resample" : 2 , "size" : 224 } If you aren’t looking for any customization, just use the from_pretrained method to load a model’s default image processor parameters. Modify any of the ViTImageProcessor parameters to create your custom image processor: Copied >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample= "PIL.Image.BOX" , do_normalize= False , image_mean=[ 0.3 , 0.3 , 0.3 ]) >>> print (my_vit_extractor) ViTImageProcessor { "do_normalize" : false, "do_resize" : true, "image_processor_type" : "ViTImageProcessor" , "image_mean" : [ 0.3 , 0.3 , 0.3 ], "image_std" : [ 0.5 , 0.5 , 0.5 ], "resample" : "PIL.Image.BOX" , "size" : 224 } Backbone Computer vision models consist of a backbone, neck, and head. The backbone extracts features from an input image, the neck combines and enhances the extracted features, and the head is used for the main task (e.g., object detection). Start by initializing a backbone in the model config and specify whether you want to load pretrained weights or load randomly initialized weights. Then you can pass the model config to the model head. For example, to load a ResNet backbone into a MaskFormer model with an instance segmentation head: <hfoptions id="backbone"> <hfoption id="pretrained weights"> Set use_pretrained_backbone=True to load pretrained ResNet weights for the backbone. 
Copied from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation config = MaskFormerConfig(backbone= "microsoft/resnet-50" , use_pretrained_backbone= True ) # backbone and neck config model = MaskFormerForInstanceSegmentation(config) # head </hfoption> <hfoption id="random weights"> Set use_pretrained_backbone=False to randomly initialize a ResNet backbone. Copied from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation config = MaskFormerConfig(backbone= "microsoft/resnet-50" , use_pretrained_backbone= False ) # backbone and neck config model = MaskFormerForInstanceSegmentation(config) # head You could also load the backbone config separately and then pass it to the model config. Copied from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig backbone_config = ResNetConfig() config = MaskFormerConfig(backbone_config=backbone_config) model = MaskFormerForInstanceSegmentation(config) </hfoption> </hfoptions id="timm backbone"> timm models are loaded within a model with use_timm_backbone=True or with TimmBackbone and TimmBackboneConfig . Use use_timm_backbone=True and use_pretrained_backbone=True to load pretrained timm weights for the backbone. Copied from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation config = MaskFormerConfig(backbone= "resnet50" , use_pretrained_backbone= True , use_timm_backbone= True ) # backbone and neck config model = MaskFormerForInstanceSegmentation(config) # head Set use_timm_backbone=True and use_pretrained_backbone=False to load a randomly initialized timm backbone. Copied from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation config = MaskFormerConfig(backbone= "resnet50" , use_pretrained_backbone= False , use_timm_backbone= True ) # backbone and neck config model = MaskFormerForInstanceSegmentation(config) # head You could also load the backbone config and use it to create a TimmBackbone or pass it to the model config. Timm backbones will load pretrained weights by default. Set use_pretrained_backbone=False to load randomly initialized weights. Copied from transformers import TimmBackboneConfig, TimmBackbone backbone_config = TimmBackboneConfig( "resnet50" , use_pretrained_backbone= False ) # Create a backbone class backbone = TimmBackbone(config=backbone_config) # Create a model with a timm backbone from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation config = MaskFormerConfig(backbone_config=backbone_config) model = MaskFormerForInstanceSegmentation(config) Feature extractor A feature extractor processes audio inputs. It inherits from the base FeatureExtractionMixin class, and may also inherit from the SequenceFeatureExtractor class for processing audio inputs. To use, create a feature extractor associated with the model you’re using. For example, create a default Wav2Vec2FeatureExtractor if you are using Wav2Vec2 for audio classification: Copied >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print (w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize" : true, "feature_extractor_type" : "Wav2Vec2FeatureExtractor" , "feature_size" : 1 , "padding_side" : "right" , "padding_value" : 0.0 , "return_attention_mask" : false, "sampling_rate" : 16000 } If you aren’t looking for any customization, just use the from_pretrained method to load a model’s default feature extractor parameters. 
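For example, a minimal sketch of loading a checkpoint's default parameters (facebook/wav2vec2-base-960h is an assumed example checkpoint, not one used elsewhere in this guide):

Copied
>>> from transformers import Wav2Vec2FeatureExtractor
>>> w2v2_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

This loads the preprocessing settings (sampling rate, normalization, padding value) the checkpoint was trained with.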
Modify any of the Wav2Vec2FeatureExtractor parameters to create your custom feature extractor: Copied >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate= 8000 , do_normalize= False ) >>> print (w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize" : false, "feature_extractor_type" : "Wav2Vec2FeatureExtractor" , "feature_size" : 1 , "padding_side" : "right" , "padding_value" : 0.0 , "return_attention_mask" : false, "sampling_rate" : 8000 } Processor For models that support multimodal tasks, 🤗 Transformers offers a processor class that conveniently wraps processing classes such as a feature extractor and a tokenizer into a single object. For example, let’s use the Wav2Vec2Processor for an automatic speech recognition task (ASR). ASR transcribes audio to text, so you will need a feature extractor and a tokenizer. Create a feature extractor to handle the audio inputs: Copied >>> from transformers import Wav2Vec2FeatureExtractor >>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value= 1.0 , do_normalize= True ) Create a tokenizer to handle the text inputs: Copied >>> from transformers import Wav2Vec2CTCTokenizer >>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file= "my_vocab_file.txt" ) Combine the feature extractor and tokenizer in Wav2Vec2Processor : Copied >>> from transformers import Wav2Vec2Processor >>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) With two basic classes - configuration and model - and an additional preprocessing class (tokenizer, image processor, feature extractor, or processor), you can create any of the models supported by 🤗 Transformers. Each of these base classes are configurable, allowing you to use the specific attributes you want. You can easily setup a model for training or modify an existing pretrained model to fine-tune. < > Update on GitHub ← Run inference with multilingual models Share a custom model → Create a custom architecture Configuration Model Model heads Tokenizer Image processor Backbone Feature extractor Processor |
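As a follow-up to the Processor section, here is a short usage sketch of a combined processor. It loads a pretrained Wav2Vec2Processor (facebook/wav2vec2-base-960h is an assumed example checkpoint) and uses an array of zeros as a stand-in for one second of 16 kHz audio:

Copied
>>> import numpy as np
>>> from transformers import Wav2Vec2Processor
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> audio = np.zeros(16000, dtype=np.float32)  # placeholder audio; use a real waveform in practice
>>> inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
>>> inputs.input_values.shape  # a batch containing one normalized audio sequence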
Panel_on_Spaces.txt | Panel on Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Panel on Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Your first Docker Spaces Example Docker Spaces JupyterLab on Spaces Argilla on Spaces Livebook on Spaces Label Studio on Spaces Aim on Spaces Shiny on Spaces ZenML on Spaces ChatUI on Spaces Panel on Spaces Tabby on Spaces Giskard on Spaces Evidence on Spaces marimo on Spaces Langfuse on Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Panel on Spaces Panel is an open-source Python library that lets you easily build powerful tools, dashboards and complex applications entirely in Python. It has a batteries-included philosophy, putting the PyData ecosystem, powerful data tables and much more at your fingertips. High-level reactive APIs and lower-level callback based APIs ensure you can quickly build exploratory applications, but you aren’t limited if you build complex, multi-page apps with rich interactivity. Panel is a member of the HoloViz ecosystem, your gateway into a connected ecosystem of data exploration tools. Visit Panel documentation to learn more about making powerful applications. 🚀 Deploy Panel on Spaces You can deploy Panel on Spaces with just a few clicks: There are a few key parameters you need to define: the Owner (either your personal account or an organization), a Space name, and Visibility. In case you intend to execute computationally intensive deep learning models, consider upgrading to a GPU to boost performance. Once you have created the Space, it will start out in “Building” status, which will change to “Running” once your Space is ready to go. ⚡️ What will you see? 
When your Space is built and ready, you will see this image classification Panel app which will let you fetch a random image and run the OpenAI CLIP classifier model on it. Check out our blog post for a walkthrough of this app. 🛠️ How to customize and make your own app? The Space template will populate a few files to get your app started: Three files are important: 1. app.py This file defines your Panel application code. You can start by modifying the existing application or replace it entirely to build your own application. To learn more about writing your own Panel app, refer to the Panel documentation . 2. Dockerfile The Dockerfile contains a sequence of commands that Docker will execute to construct and launch an image as a container that your Panel app will run in. Typically, to serve a Panel app, we use the command panel serve app.py . In this specific file, we divide the command into a list of strings. Furthermore, we must define the address and port because Hugging Face will expect to serve your application on port 7860. Additionally, we need to specify the allow-websocket-origin flag to enable the connection to the server’s websocket. 3. requirements.txt This file defines the required packages for our Panel app. When using Space, dependencies listed in the requirements.txt file will be automatically installed. You have the freedom to modify this file by removing unnecessary packages or adding additional ones that are required for your application. Feel free to make the necessary changes to ensure your app has the appropriate packages installed. 🌐 Join Our Community The Panel community is vibrant and supportive, with experienced developers and data scientists eager to help and share their knowledge. Join us and connect with us: Discord Discourse Twitter LinkedIn Github < > Update on GitHub ← ChatUI on Spaces Tabby on Spaces → Panel on Spaces 🚀 Deploy Panel on Spaces ⚡️ What will you see? 🛠️ How to customize and make your own app? 1. app.py 2. Dockerfile 3. requirements.txt 🌐 Join Our Community |
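To make the app.py piece more concrete, here is a minimal Panel application sketch. It is not the Space template's actual code, just an illustrative example of something panel serve app.py could serve:

Copied
# app.py
import panel as pn

pn.extension()

slider = pn.widgets.IntSlider(name="Count", start=1, end=10, value=3)

def view(count):
    return pn.pane.Markdown("🎉 " * count)

# Mark the layout as servable so `panel serve app.py` picks it up
pn.Column(slider, pn.bind(view, slider)).servable()

In the Space, the Dockerfile would then launch it with a command along the lines of panel serve app.py --address 0.0.0.0 --port 7860 --allow-websocket-origin="*" (the exact flags in the template may differ).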
Filter_rows_in_a_dataset.txt | Filter rows in a dataset Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation Filter rows in a dataset Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Filter rows in a dataset The dataset viewer provides a /filter endpoint for filtering rows in a dataset. Currently, only datasets with Parquet exports are supported so the dataset viewer can index the contents and run the filter query without downloading the whole dataset. This guide shows you how to use the dataset viewer’s /filter endpoint to filter rows based on a query string. Feel free to also try it out with ReDoc . The /filter endpoint accepts the following query parameters: dataset : the dataset name, for example nyu-mll/glue or mozilla-foundation/common_voice_10_0 config : the subset name, for example cola split : the split name, for example train where : the filter condition orderby : the order-by clause offset : the offset of the slice, for example 150 length : the length of the slice, for example 10 (maximum: 100 ) The where parameter must be expressed as a comparison predicate, which can be: a simple predicate composed of a column name in double quotes, a comparison operator, and a value the comparison operators are: = , <> , > , >= , < , <= a composite predicate composed of two or more simple predicates (optionally grouped with parentheses to indicate the order of evaluation), combined with logical operators the logical operators are: AND , OR , NOT For example, the following where parameter value Copied where = "age" > 30 AND ( "name" = 'Simone' OR "children" = 0 ) will filter the data to select only those rows where the float “age” column is larger than 30 and, either the string “name” column is equal to ‘Simone’ or the integer “children” column is equal to 0. Note that, following SQL syntax, in comparison predicates, column names should be enclosed in double quotes ( "name" ), and string values must be enclosed in single quotes ( 'Simone' ). Additionally, if the string value contains a single quote, it must be escaped with another single quote, for example: 'O''Hara' . 
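Because the predicate mixes double quotes, single quotes and spaces, it must be URL-encoded when sent as a query-string parameter. Here is a small sketch using requests, which performs the encoding automatically; the dataset name is a placeholder for any dataset that actually has age, name and children columns:

Copied
import requests

params = {
    "dataset": "<USER_OR_ORG>/<DATASET>",  # placeholder dataset with "age", "name" and "children" columns
    "config": "default",
    "split": "train",
    "where": "\"age\" > 30 AND (\"name\" = 'Simone' OR \"children\" = 0)",
    "length": 10,
}
# For gated or private datasets, also pass headers={"Authorization": f"Bearer {API_TOKEN}"}
response = requests.get("https://datasets-server.huggingface.co/filter", params=params)
print(response.json())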
The orderby parameter must contain the column name (in double quotes) whose values will be sorted (in ascending order by default). To sort the rows in descending order, use the DESC keyword, like orderby="age" DESC . For example, let’s filter those rows with no_answer=false in the train split of the SelfRC subset of the ibm/duorc dataset restricting the results to the slice 150-151: Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://datasets-server.huggingface.co/filter?dataset=ibm/duorc&config=SelfRC&split=train&where=" no_answe r"=true&offset=150&length=2" def query (): response = requests.get(API_URL, headers=headers) return response.json() data = query() The endpoint response is a JSON containing two keys (same format as /rows ): The features of a dataset, including the column’s name and data type. The slice of rows of a dataset and the content contained in each column of a specific row. The rows are ordered by the row index. For example, here are the features and the slice 150-151 of matching rows of the ibm/duorc / SelfRC train split for the where condition no_answer=true : Copied { "features" : [ { "feature_idx" : 0 , "name" : "plot_id" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 1 , "name" : "plot" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 2 , "name" : "title" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 3 , "name" : "question_id" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 4 , "name" : "question" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 5 , "name" : "answers" , "type" : { "feature" : { "dtype" : "string" , "_type" : "Value" } , "_type" : "Sequence" } } , { "feature_idx" : 6 , "name" : "no_answer" , "type" : { "dtype" : "bool" , "_type" : "Value" } } ] , "rows" : [ { "row_idx" : 12825 , "row" : { "plot_id" : "/m/06qxsf" , "plot" : "Prologue\nA creepy-looking coroner introduces three different horror tales involving his current work on cadavers in \"body bags\".\n\"The Gas Station\"[edit]\nAnne is a young college student who arrives for her first job working the night shift at an all-night filling station near Haddonfield, Illinois (a reference to the setting of Carpenter's two Halloween films). The attending worker, Bill, tells her that a serial killer has broken out of a mental hospital, and cautions her not to leave the booth at the station without the keys because the door locks automatically. After Bill leaves, Anne is alone and the tension mounts as she deals with various late-night customers seeking to buy gas for a quick fill-up, purchase cigarettes or just use the restroom key, unsure whether any of them might be the escaped maniac. Eventually, when Anne suspects that the escaped killer is lurking around the gas station, she tries to call the police, only to find that the phone line is dead. Soon after that, she finds an elaborately grotesque drawing in the Restroom and then the dead body of a transient sitting in a pickup truck on the lift in one of the garage bays. She makes a phone call for help which results in her realization that \"Bill\", the attending worker she met earlier, is in fact the escaped killer, who has killed the real Bill and is killing numerous passers-by. She finds the real Bill's dead body in one of the lockers. 
Serial Killer \"Bill\" then reappears and attempts to kill Anne with a machete, breaking into the locked booth by smashing out the glass with a sledgehammer and then chasing her around the deserted garage. Just as he is about to kill her, a customer returns, having forgotten his credit card, and he wrestles the killer, giving Anne time to crush him under the vehicle lift.\n\"Hair\"[edit]\nRichard Coberts is a middle-aged businessman who is very self-conscious about his thinning hair. This obsession has caused a rift between him and his long-suffering girlfriend Megan. Richard answers a television ad about a \"miracle\" hair transplant operation, pays a visit to the office, and meets the shady Dr. Lock, who, for a very large fee, agrees to give Richard a surgical procedure to make his hair grow back. The next day, Richard wakes up and removes the bandage around his head, and is overjoyed to find that he has a full head of hair. But soon he becomes increasingly sick and fatigued, and finds his hair continuing to grow and, additionally, growing out of parts of his body, where hair does not normally grow. Trying to cut some of the hair off, he finds that it \"bleeds\", and, examining some of the hairs under a magnifying glass, sees that they are alive and resemble tiny serpents. He goes back to Dr. Lock for an explanation, but finds himself a prisoner as Dr. Lock explains that he and his entire staff are aliens from another planet, seeking out narcissistic human beings and planting seeds of \"hair\" to take over their bodies for consumption as part of their plan to spread their essence to Earth.\n\"Eye\"[edit]\nBrent Matthews is a baseball player whose life and career take a turn for the worse when he gets into a serious car accident in which his right eye is gouged out. Unwilling to admit that his career is over, he jumps at the chance to undergo an experimental surgical procedure to replace his eye with one from a recently deceased person. But soon after the surgery he begins to see things out of his new eye that others cannot see, and begins having nightmares of killing women and having sex with them. Brent seeks out the doctor who operated on him, and the doctor tells him that the donor of his new eye was a recently executed serial killer and necrophile who killed several young women, and then had sex with their dead bodies. Brent becomes convinced that the spirit of the dead killer is taking over his body so that he can resume killing women. He flees back to his house and tells his skeptical wife, Cathy, about what is happening. Just then the spirit of the killer emerges and attempts to kill Cathy as well. Cathy fights back, subduing him long enough for Brent to re-emerge. Realizing that it is only a matter of time before the killer emerges again, Brent cuts out his donated eye, severing his link with the killer, but then bleeds to death.\nEpilogue The coroner is finishing telling his last tale when he hears a noise from outside the morgue. He crawls back inside a body bag, revealing that he himself is a living cadaver, as two other morgue workers begin to go to work on his \"John Doe\" corpse." , "title" : "John Carpenter presents Body Bags" , "question_id" : "cf58489f-12ba-ace6-67a7-010d957b4ff4" , "question" : "What happens soon after the surgery?" 
, "answers" : [ ] , "no_answer" : true } , "truncated_cells" : [ ] } , { "row_idx" : 12836 , "row" : { "plot_id" : "/m/04z_3pm" , "plot" : "In 1976, eight-year-old Mary Daisy Dinkle (Bethany Whitmore) lives a lonely life in Mount Waverley, Australia. At school, she is teased by her classmates because of an unfortunate birthmark on her forehead; while at home, her distant father, Noel, and alcoholic, kleptomaniac mother, Vera, provide little support. Her only comforts are her pet rooster, Ethel; her favourite food, sweetened condensed milk; and a Smurfs-like cartoon show called The Noblets. One day, while at the post office with her mother, Mary spots a New York City telephone book and, becoming curious about Americans, decides to write to one. She randomly chooses Max Jerry Horowitz's name from the phone book and writes him a letter telling him about herself, sending it off in the hope that he will become her pen friend.\nMax Jerry Horowitz (Philip Seymour Hoffman) is a morbidly obese 44-year-old ex-Jewish atheist who has trouble forming close bonds with other people, due to various mental and social problems. Though Mary's letter initially gives him an anxiety attack, he decides to write back to her, and the two quickly become friends (partly due to their shared love of chocolate and The Noblets). Due to Vera's disapproval of Max, Mary tells him to send his letters to her agoraphobic neighbour, Len Hislop, whose mail she collects regularly. When Mary later asks Max about love, he suffers a severe anxiety attack and is institutionalized for eight months. After his release, he is hesitant to write to Mary again for some time. On his 48th birthday, he wins the New York lottery, using his winnings to buy a lifetime supply of chocolate and an entire collection of Noblet figurines. He gives the rest of his money to his elderly neighbour Ivy, who uses most of it to pamper herself before dying in an accident with a malfunctioning jet pack. Meanwhile, Mary becomes despondent, thinking Max has abandoned her.\nOn the advice of his therapist, Max finally writes back to Mary and explains he has been diagnosed with Asperger syndrome. Mary is thrilled to hear from him again, and the two continue their correspondence for the next several years. When Noel retires from his job at a tea bag factory, he takes up metal detecting, but is soon swept away (and presumably killed) by a big tidal bore while on a beach. Mary (Toni Colette) goes to university and has her birthmark surgically removed, and develops a crush on her Greek Australian neighbour, Damien Popodopoulos (Eric Bana). Drunk and guilt-ridden over her husband's death, Vera accidentally kills herself after she drinks embalming fluid (which she mistook for cooking sherry). Mary and Damien grow closer following Vera's death and are later married.\nInspired by her friendship with Max, Mary studies psychology at university, writing her doctoral dissertation on Asperger syndrome with Max as her test subject. She plans to have her dissertation published as a book; but when Max receives a copy from her, he is infuriated that she has taken advantage of his condition, which he sees as an integral part of his personality and not a disability that needs to be cured. He breaks off communication with Mary (by removing the letter \"M\" from his typewriter), who, heartbroken, has the entire run of her book pulped, effectively ending her budding career. She sinks into depression and begins drinking cooking sherry, as her mother had done. 
While searching through a cabinet, she finds a can of condensed milk, and sends it to Max as an apology. She checks the post daily for a response and one day finds a note from Damien, informing her that he has left her for his own pen friend, Desmond, a sheep farmer in New Zealand.\nMeanwhile, after an incident in which he nearly chokes a homeless man (Ian \"Molly\" Meldrum) in anger, after throwing a used cigarette, Max realizes Mary is an imperfect human being, like himself, and sends her a package containing his Noblet figurine collection as a sign of forgiveness. Mary, however, has sunken into despair after Damien's departure, and fails to find the package on her doorstep for several days. Finding some Valium that had belonged to her mother, and unaware that she is pregnant with Damien's child, Mary decides to commit suicide. As she takes the Valium and is on the verge of hanging herself, Len knocks on her door, having conquered his agoraphobia to alert her of Max's package. Inside, she finds the Noblet figurines and a letter from Max, in which he tells her of his realization that they are not perfect and expresses his forgiveness. He also states how much their friendship means to him, and that he hopes their paths will cross one day.\nOne year later, Mary travels to New York with her infant child to finally visit Max. Entering his apartment, Mary discovers Max on his couch, gazing upward with a smile on his face, having died earlier that morning. Looking around the apartment, Mary is awestruck to find all the letters she had sent to Max over the years, laminated and taped to the ceiling. Realizing Max had been gazing at the letters when he died, and seeing how much he had valued their friendship, Mary cries tears of joy and joins him on the couch." , "title" : "Mary and Max" , "question_id" : "1dc019ad-80cf-1d49-5a69-368f90fae2f8" , "question" : "Why was Mary Daisy Dinkle teased in school?" , "answers" : [ ] , "no_answer" : true } , "truncated_cells" : [ ] } ] , "num_rows_total" : 627 , "num_rows_per_page" : 100 , "partial" : false } If the result has partial: true it means that the filtering couldn’t be run on the full dataset because it’s too big. Indeed, the indexing for /filter can be partial if the dataset is bigger than 5GB. In that case, it only uses the first 5GB. < > Update on GitHub ← Search text in a dataset List Parquet files → Filter rows in a dataset |
🤗_Optimum.txt | 🤗 Optimum Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Optimum documentation 🤗 Optimum Optimum 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.23.3 v1.22.0 v1.21.4 v1.20.0 v1.19.0 v1.18.1 v1.17.1 v1.16.2 v1.15.0 v1.14.0 v1.13.2 v1.12.0 v1.11.2 v1.10.1 v1.9.0 v1.8.6 v1.7.3 v1.6.4 v1.5.2 v1.4.1 v1.3.0 v1.2.3 v1.0.0 EN Overview 🤗 Optimum Installation Quick tour Notebooks Conceptual guides Quantization Nvidia AMD Intel AWS Trainium/Inferentia Google TPUs Habana Furiosa ONNX Runtime Exporters BetterTransformer Torch FX LLM quantization Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started 🤗 Optimum 🤗 Optimum is an extension of Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency. The AI ecosystem evolves quickly, and more and more specialized hardware along with their own optimizations are emerging every day. As such, Optimum enables developers to efficiently use any of these platforms with the same ease inherent to Transformers. 🤗 Optimum is distributed as a collection of packages - check out the links below for an in-depth look at each one. Hardware partners The packages below enable you to get the best of the 🤗 Hugging Face ecosystem on various types of devices. NVIDIA Accelerate inference with NVIDIA TensorRT-LLM on the NVIDIA platform AMD Enable performance optimizations for AMD Instinct GPUs and AMD Ryzen AI NPUs Intel Optimize your model to speedup inference with OpenVINO , Neural Compressor and IPEX AWS Trainium/Inferentia Accelerate your training and inference workflows with AWS Trainium and AWS Inferentia Google TPUs Accelerate your training and inference workflows with Google TPUs Habana Maximize training throughput and efficiency with Habana's Gaudi processor FuriosaAI Fast and efficient inference on FuriosaAI WARBOY Some packages provide hardware-agnostic features (e.g. INC interface in Optimum Intel). Open-source integrations 🤗 Optimum also supports a variety of open-source frameworks to make model optimization very easy. ONNX Runtime Apply quantization and graph optimization to accelerate Transformers models training and inference with ONNX Runtime Exporters Export your PyTorch or TensorFlow model to different formats such as ONNX and TFLite BetterTransformer A one-liner integration to use PyTorch's BetterTransformer with Transformers models Torch FX Create and compose custom graph transformations to optimize PyTorch Transformers models with Torch FX < > Update on GitHub Installation → 🤗 Optimum Hardware partners Open-source integrations |
🤗_Hugging_Face_Hub_API.txt | 🤗 Hugging Face Hub API Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation 🤗 Hugging Face Hub API Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started 🤗 Hugging Face Hub API Official utilities to use the Hugging Face Hub API. Install Copied pnpm add @huggingface/hub npm add @huggingface/hub yarn add @huggingface/hub Deno Copied // esm.sh import { uploadFiles, listModels } from "https://esm.sh/@huggingface/hub" // or npm: import { uploadFiles, listModels } from "npm:@huggingface/hub" Usage For some of the calls, you need to create an account and generate an access token . Learn how to find free models using the hub package in this interactive tutorial . Copied import * as hub from "@huggingface/hub" ; import type { RepoDesignation } from "@huggingface/hub" ; const repo : RepoDesignation = { type : "model" , name : "myname/some-model" }; const { name : username} = await hub. whoAmI ({ accessToken : "hf_..." }); for await ( const model of hub. listModels ({ search : { owner : username}, accessToken : "hf_..." })) { console . log ( "My model:" , model); } const specificModel = await hub. 
modelInfo ({ name : "openai-community/gpt2" }); await hub. checkRepoAccess ({repo, accessToken : "hf_..." }); await hub. createRepo ({ repo, accessToken : "hf_..." , license : "mit" }); await hub. uploadFiles ({ repo, accessToken : "hf_..." , files : [ // path + blob content { path : "file.txt" , content : new Blob ([ "Hello World" ]), }, // Local file URL pathToFileURL ( "./pytorch-model.bin" ), // Web URL new URL ( "https://huggingface.co/xlm-roberta-base/resolve/main/tokenizer.json" ), // Path + Web URL { path : "myfile.bin" , content : new URL ( "https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin" ) } // Can also work with native File in browsers ], }); // or for await ( const progressEvent of await hub. uploadFilesWithProgress ({ repo, accessToken : "hf_..." , files : [ ... ], })) { console . log (progressEvent); } await hub. deleteFile ({repo, accessToken : "hf_..." , path : "myfile.bin" }); await ( await hub. downloadFile ({ repo, path : "README.md" })). text (); for await ( const fileInfo of hub. listFiles ({repo})) { console . log (fileInfo); } await hub. deleteRepo ({ repo, accessToken : "hf_..." }); OAuth Login It’s possible to login using OAuth ( “Sign in with HF” ). This will allow you get an access token to use some of the API, depending on the scopes set inside the Space or the OAuth App. Copied import { oauthLoginUrl, oauthHandleRedirectIfPresent } from "@huggingface/hub" ; const oauthResult = await oauthHandleRedirectIfPresent (); if (!oauthResult) { // If the user is not logged in, redirect to the login page window . location . href = await oauthLoginUrl (); } // You can use oauthResult.accessToken, oauthResult.accessTokenExpiresAt and oauthResult.userInfo console . log (oauthResult); Checkout the demo: https://huggingface.co/spaces/huggingfacejs/client-side-oauth Hugging face cache The @huggingface/hub package provide basic capabilities to scan the cache directory. Learn more about Manage huggingface_hub cache-system . scanCacheDir You can get the list of cached repositories using the scanCacheDir function. Copied import { scanCacheDir } from "@huggingface/hub" ; const result = await scanCacheDir (); console . log (result); Note: this does not work in the browser downloadFileToCacheDir You can cache a file of a repository using the downloadFileToCacheDir function. Copied import { downloadFileToCacheDir } from "@huggingface/hub" ; const file = await downloadFileToCacheDir ({ repo : 'foo/bar' , path : 'README.md' }); console . log (file); Note: this does not work in the browser snapshotDownload You can download an entire repository at a given revision in the cache directory using the snapshotDownload function. Copied import { snapshotDownload } from "@huggingface/hub" ; const directory = await snapshotDownload ({ repo : 'foo/bar' , }); console . log (directory); The code use internally the downloadFileToCacheDir function. Note: this does not work in the browser Performance considerations When uploading large files, you may want to run the commit calls inside a worker, to offload the sha256 computations. Remote resources and local files should be passed as URL whenever it’s possible so they can be lazy loaded in chunks to reduce RAM usage. Passing a File inside the browser’s context is fine, because it natively behaves as a Blob . Under the hood, @huggingface/hub uses a lazy blob implementation to load the file. 
Dependencies @huggingface/tasks : Typings only < > Update on GitHub ← ZeroShotImageClassificationOutputValue API Reference → 🤗 Hugging Face Hub API Install Deno Usage OAuth Login Hugging face cache scanCacheDir downloadFileToCacheDir snapshotDownload Performance considerations Dependencies |
Command_Line_Interfaces_(CLIs).txt | Command Line Interfaces (CLIs) Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation Command Line Interfaces (CLIs) TRL 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Command Line Interfaces (CLIs) You can use TRL to fine-tune your Language Model with Supervised Fine-Tuning (SFT) or Direct Policy Optimization (DPO) or even chat with your model using the TRL CLIs. Currently supported CLIs are: Training commands trl dpo : fine-tune a LLM with DPO trl kto : fine-tune a LLM with KTO trl sft : fine-tune a LLM with SFT Other commands trl chat : quickly spin up a LLM fine-tuned for chatting trl env : get the system information Fine-tuning with the CLI Before getting started, pick up a Language Model from Hugging Face Hub. Supported models can be found with the filter “text-generation” within models. Also make sure to pick up a relevant dataset for your task. Before using the sft or dpo commands make sure to run: Copied accelerate config and pick up the right configuration for your training setup (single / multi-GPU, DeepSpeed, etc.). Make sure to complete all steps of accelerate config before running any CLI command. We also recommend you passing a YAML config file to configure your training protocol. Below is a simple example of a YAML file that you can use for training your models with trl sft command. Copied model_name_or_path: Qwen/Qwen2.5-0.5B dataset_name: stanfordnlp/imdb report_to: none learning_rate: 0.0001 lr_scheduler_type: cosine Save that config in a .yaml and get started immediately! An example CLI config is available as examples/cli_configs/example_config.yaml . Note you can overwrite the arguments from the config file by explicitly passing them to the CLI, e.g. from the root folder: Copied trl sft --config examples/cli_configs/example_config.yaml --output_dir test-trl-cli --lr_scheduler_type cosine_with_restarts Will force-use cosine_with_restarts for lr_scheduler_type . 
Supported Arguments We do support all arguments from transformers.TrainingArguments , for loading your model, we support all arguments from ~trl.ModelConfig : class trl. ModelConfig < source > ( model_name_or_path : typing.Optional[str] = None model_revision : str = 'main' torch_dtype : typing.Optional[typing.Literal['auto', 'bfloat16', 'float16', 'float32']] = None trust_remote_code : bool = False attn_implementation : typing.Optional[str] = None use_peft : bool = False lora_r : int = 16 lora_alpha : int = 32 lora_dropout : float = 0.05 lora_target_modules : typing.Optional[list[str]] = None lora_modules_to_save : typing.Optional[list[str]] = None lora_task_type : str = 'CAUSAL_LM' use_rslora : bool = False load_in_8bit : bool = False load_in_4bit : bool = False bnb_4bit_quant_type : typing.Literal['fp4', 'nf4'] = 'nf4' use_bnb_nested_quant : bool = False ) Parameters model_name_or_path ( Optional[str] , optional , defaults to None ) — Model checkpoint for weights initialization. model_revision ( str , optional , defaults to "main" ) — Specific model version to use. It can be a branch name, a tag name, or a commit id. torch_dtype ( Optional[Literal["auto", "bfloat16", "float16", "float32"]] , optional , defaults to None ) — Override the default torch.dtype and load the model under this dtype. Possible values are "bfloat16" : torch.bfloat16 "float16" : torch.float16 "float32" : torch.float32 "auto" : Automatically derive the dtype from the model’s weights. trust_remote_code ( bool , optional , defaults to False ) — Whether to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. attn_implementation ( Optional[str] , optional , defaults to None ) — Which attention implementation to use. You can run --attn_implementation=flash_attention_2 , in which case you must install this manually by running pip install flash-attn --no-build-isolation . use_peft ( bool , optional , defaults to False ) — Whether to use PEFT for training. lora_r ( int , optional , defaults to 16 ) — LoRA R value. lora_alpha ( int , optional , defaults to 32 ) — LoRA alpha. lora_dropout ( float , optional , defaults to 0.05 ) — LoRA dropout. lora_target_modules ( Optional[Union[str, list[str]]] , optional , defaults to None ) — LoRA target modules. lora_modules_to_save ( Optional[list[str]] , optional , defaults to None ) — Model layers to unfreeze & train. lora_task_type ( str , optional , defaults to "CAUSAL_LM" ) — Task type to pass for LoRA (use "SEQ_CLS" for reward modeling). use_rslora ( bool , optional , defaults to False ) — Whether to use Rank-Stabilized LoRA, which sets the adapter scaling factor to lora_alpha/√r , instead of the original default value of lora_alpha/r . load_in_8bit ( bool , optional , defaults to False ) — Whether to use 8 bit precision for the base model. Works only with LoRA. load_in_4bit ( bool , optional , defaults to False ) — Whether to use 4 bit precision for the base model. Works only with LoRA. bnb_4bit_quant_type ( str , optional , defaults to "nf4" ) — Quantization type ( "fp4" or "nf4" ). use_bnb_nested_quant ( bool , optional , defaults to False ) — Whether to use nested quantization. Configuration class for the models. Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line. 
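For instance, a minimal sketch of that HfArgumentParser pattern might look like the following (the script name and the final print are only for illustration):

Copied

# cli_args_sketch.py - minimal illustration of exposing ModelConfig
# (plus TrainingArguments) as command-line flags via HfArgumentParser.
from transformers import HfArgumentParser, TrainingArguments
from trl import ModelConfig

parser = HfArgumentParser((TrainingArguments, ModelConfig))
training_args, model_args = parser.parse_args_into_dataclasses()

print(model_args.model_name_or_path, model_args.use_peft, model_args.lora_r)

Running something like python cli_args_sketch.py --output_dir out --model_name_or_path facebook/opt-125m --use_peft --lora_r 8 then populates both dataclasses from the command line.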
You can pass any of these arguments either to the CLI or the YAML file. Supervised Fine-tuning (SFT) Follow the basic instructions above and run trl sft --output_dir <output_dir> <*args> : Copied trl sft --model_name_or_path facebook/opt-125m --dataset_name stanfordnlp/imdb --output_dir opt-sft-imdb The SFT CLI is based on the trl/scripts/sft.py script. Direct Policy Optimization (DPO) To use the DPO CLI, you need to have a dataset in the TRL format such as TRL’s Anthropic HH dataset: https://huggingface.co/datasets/trl-internal-testing/hh-rlhf-helpful-base-trl-style TRL’s OpenAI TL;DR summarization dataset: https://huggingface.co/datasets/trl-internal-testing/tldr-preference-trl-style These datasets always have at least three columns prompt, chosen, rejected : prompt is a list of strings. chosen is the chosen response in chat format rejected is the rejected response chat format To do a quick start, you can run the following command: Copied trl dpo --model_name_or_path facebook/opt-125m --output_dir trl-hh-rlhf --dataset_name trl-internal-testing/hh-rlhf-helpful-base-trl-style The DPO CLI is based on the trl/scripts/dpo.py script. Custom preference dataset Format the dataset into TRL format (you can adapt the examples/datasets/anthropic_hh.py ): Copied python examples/datasets/anthropic_hh.py --push_to_hub --hf_entity your-hf-org Chat interface The chat CLI lets you quickly load the model and talk to it. Simply run the following: $ trl chat --model_name_or_path Qwen/Qwen1.5-0.5B-Chat <quentin_gallouedec>: What is the best programming language? <Qwen/Qwen1.5-0.5B-Chat>: There isn't a "best" programming language, as everyone has different style preferences, needs, and preferences. However, some people commonly use languages like Python, Java, C++, and JavaScript, which are popular among developers for a variety of reasons, including readability, flexibility, and scalability. Ultimately, it depends on personal preference, needs, and goals. Note that the chat interface relies on the tokenizer’s chat template to format the inputs for the model. Make sure your tokenizer has a chat template defined. Besides talking to the model there are a few commands you can use: clear : clears the current conversation and start a new one example {NAME} : load example named {NAME} from the config and use it as the user input set {SETTING_NAME}={SETTING_VALUE}; : change the system prompt or generation settings (multiple settings are separated by a ; ). reset : same as clear but also resets the generation configs to defaults if they have been changed by set save or save {SAVE_NAME} : save the current chat and settings to file by default to ./chat_history/{MODEL_NAME}/chat_{DATETIME}.yaml or {SAVE_NAME} if provided exit : closes the interface Getting the system information You can get the system information by running the following command: Copied trl env This will print out the system information including the GPU information, the CUDA version, the PyTorch version, the transformers version, and the TRL version, and any optional dependencies that are installed. 
Copied

Copy-paste the following information when reporting an issue:

- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.11.9
- PyTorch version: 2.4.1
- CUDA device: NVIDIA H100 80GB HBM3
- Transformers version: 4.45.0.dev0
- Accelerate version: 0.34.2
- Accelerate config:
  - compute_environment: LOCAL_MACHINE
  - distributed_type: DEEPSPEED
  - mixed_precision: no
  - use_cpu: False
  - debug: False
  - num_processes: 4
  - machine_rank: 0
  - num_machines: 1
  - rdzv_backend: static
  - same_network: True
  - main_training_function: main
  - enable_cpu_affinity: False
  - deepspeed_config: {'gradient_accumulation_steps': 4, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
  - downcast_bf16: no
  - tpu_use_cluster: False
  - tpu_use_sudo: False
  - tpu_env: []
- Datasets version: 3.0.0
- HF Hub version: 0.24.7
- TRL version: 0.12.0.dev0+acb4d70
- bitsandbytes version: 0.41.1
- DeepSpeed version: 0.15.1
- Diffusers version: 0.30.3
- Liger-Kernel version: 0.3.0
- LLM-Blender version: 0.0.2
- OpenAI version: 1.46.0
- PEFT version: 0.12.0

This information is required when reporting an issue.
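Going back to the preference-data format described in the DPO section above, a quick inspection sketch (assuming the dataset exposes a train split) can confirm that a dataset has the expected prompt, chosen, and rejected columns:

Copied

# Inspect one of the TRL-formatted preference datasets mentioned above.
from datasets import load_dataset

ds = load_dataset("trl-internal-testing/hh-rlhf-helpful-base-trl-style", split="train")
print(ds.column_names)   # should include: prompt, chosen, rejected
print(ds[0]["chosen"])   # chosen response in chat format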
Metal_Performance_Shaders_(MPS).txt | Metal Performance Shaders (MPS) Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Metal Performance Shaders (MPS) Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. 
You’ll need to have:

- a macOS computer with Apple silicon (M1/M2) hardware
- macOS 12.6 or later (13.0 or later recommended)
- an arm64 version of Python
- PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps)

The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline onto your M1 or M2 device:

Copied

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe = pipe.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image

Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching.

If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result.

Copied

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# First-time "warmup" pass if PyTorch version is 1.13
_ = pipe(prompt, num_inference_steps=1)

# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]

Troubleshoot

M1/M2 performance is very sensitive to memory pressure. When the system runs low on memory, it automatically swaps, which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels.

Call the enable_attention_slicing() function on your pipeline:

Copied

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("mps")
pipeline.enable_attention_slicing()

Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% on computers without universal memory, but we’ve observed better performance on most Apple silicon computers unless you have 64GB of RAM or more.
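As a concrete illustration of the “iterate instead of batching” advice above, a small sketch (the prompts are placeholders) simply loops over prompts one call at a time:

Copied

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()

prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a lighthouse at dusk",
]
# One prompt per call instead of a single batched call
images = [pipe(p).images[0] for p in prompts]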
Kandinsky.txt | Kandinsky Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Kandinsky Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model ( CLIP ) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. 
Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab #!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL . Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "" . For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline import torch prior_pipeline = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior" , torch_dtype=torch.float16).to( "cuda" ) pipeline = KandinskyPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1" , torch_dtype=torch.float16).to( "cuda" ) prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale= 1.0 ).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height= 768 , width= 768 ).images[ 0 ] image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline , meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained( "kandinsky-community/kandinsky-2-1" , torch_dtype=torch.float16) pipeline.enable_model_cpu_offload() prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" negative_prompt = "low quality, bad quality" image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale= 1.0 , guidance_scale= 4.0 , height= 768 , width= 768 ).images[ 0 ] image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline prior_pipeline = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipeline = KandinskyImg2ImgPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) Download an image to condition on: Copied from diffusers.utils import load_image # download image url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" original_image = load_image(url) original_image = original_image.resize(( 768 , 512 )) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" negative_prompt = "low quality, bad quality" image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height= 768 , width= 768 , strength= 0.3 ).images[ 0 ] make_image_grid([original_image.resize(( 512 , 512 )), image.resize(( 512 , 512 ))], rows= 1 , cols= 2 ) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline , meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image from diffusers.utils import make_image_grid, load_image import torch pipeline = AutoPipelineForImage2Image.from_pretrained( "kandinsky-community/kandinsky-2-1" , torch_dtype=torch.float16, use_safetensors= True ) pipeline.enable_model_cpu_offload() prompt = "A fantasy landscape, Cinematic lighting" negative_prompt = "low quality, bad quality" url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" original_image = load_image(url) original_image.thumbnail(( 768 , 768 )) image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength= 0.3 ).images[ 0 ] make_image_grid([original_image.resize(( 512 , 512 )), image.resize(( 512 , 512 ))], rows= 1 , cols= 2 ) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input import PIL.ImageOps mask = PIL.ImageOps.invert(mask) # For PyTorch and NumPy input mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline from diffusers.utils import load_image, make_image_grid import torch import numpy as np from PIL import Image prior_pipeline = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipeline = KandinskyInpaintPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-inpaint" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) Load an initial image and create a mask: Copied init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png" ) mask = np.zeros(( 768 , 768 ), dtype=np.float32) # mask area above cat's head mask[: 250 , 250 :- 250 ] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height= 768 , width= 768 , num_inference_steps= 150 ).images[ 0 ] mask = Image.fromarray((mask* 255 ).astype( 'uint8' ), 'L' ) make_image_grid([init_image, mask, output_image], rows= 1 , cols= 3 ) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. 
Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch import numpy as np from PIL import Image from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid pipe = AutoPipelineForInpainting.from_pretrained( "kandinsky-community/kandinsky-2-1-inpaint" , torch_dtype=torch.float16) pipe.enable_model_cpu_offload() init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png" ) mask = np.zeros(( 768 , 768 ), dtype=np.float32) # mask area above cat's head mask[: 250 , 250 :- 250 ] = 1 prompt = "a hat" output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[ 0 ] mask = Image.fromarray((mask* 255 ).astype( 'uint8' ), 'L' ) make_image_grid([init_image, mask, output_image], rows= 1 , cols= 3 ) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline from diffusers.utils import load_image, make_image_grid import torch prior_pipeline = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) img_1 = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png" ) img_2 = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg" ) make_image_grid([img_1.resize(( 512 , 512 )), img_2.resize(( 512 , 512 ))], rows= 1 , cols= 2 ) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! Copied images_texts = [ "a cat" , img_1, img_2] weights = [ 0.3 , 0.3 , 0.4 ] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty prompt = "" prior_out = prior_pipeline.interpolate(images_texts, weights) pipeline = KandinskyPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) image = pipeline(prompt, **prior_out, height= 768 , width= 768 ).images[ 0 ] image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image img = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" ).resize(( 768 , 768 )) img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch import numpy as np from transformers import pipeline def make_hint ( image, depth_estimator ): image = depth_estimator(image)[ "depth" ] image = np.array(image) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) detected_map = torch.from_numpy(image). 
float () / 255.0 hint = detected_map.permute( 2 , 0 , 1 ) return hint depth_estimator = pipeline( "depth-estimation" ) hint = make_hint(img, depth_estimator).unsqueeze( 0 ).half().to( "cuda" ) Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline : Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-2-prior" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipeline = KandinskyV22ControlnetPipeline.from_pretrained( "kandinsky-community/kandinsky-2-2-controlnet-depth" , torch_dtype=torch.float16 ).to( "cuda" ) Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" generator = torch.Generator(device= "cuda" ).manual_seed( 43 ) image_emb, zero_image_emb = prior_pipeline( prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator ).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps= 50 , generator=generator, height= 768 , width= 768 ).images[ 0 ] image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch import numpy as np from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline from diffusers.utils import load_image from transformers import pipeline img = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" ).resize(( 768 , 768 )) def make_hint ( image, depth_estimator ): image = depth_estimator(image)[ "depth" ] image = np.array(image) image = image[:, :, None ] image = np.concatenate([image, image, image], axis= 2 ) detected_map = torch.from_numpy(image). 
float () / 255.0 hint = detected_map.permute( 2 , 0 , 1 ) return hint depth_estimator = pipeline( "depth-estimation" ) hint = make_hint(img, depth_estimator).unsqueeze( 0 ).half().to( "cuda" ) Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline : Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( "kandinsky-community/kandinsky-2-2-prior" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( "kandinsky-community/kandinsky-2-2-controlnet-depth" , torch_dtype=torch.float16 ).to( "cuda" ) Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" generator = torch.Generator(device= "cuda" ).manual_seed( 43 ) img_emb = prior_pipeline(prompt=prompt, image=img, strength= 0.85 , generator=generator) negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength= 1 , generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength= 0.5 , image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps= 50 , generator=generator, height= 768 , width= 768 ).images[ 0 ] make_image_grid([img.resize(( 512 , 512 )), image.resize(( 512 , 512 ))], rows= 1 , cols= 2 ) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0:

Copied

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
+ pipe.enable_xformers_memory_efficient_attention()

Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA):

Copied

pipe.unet.to(memory_format=torch.channels_last)
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0:

Copied

from diffusers.models.attention_processor import AttnAddedKVProcessor2_0

pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0())

Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors:

Copied

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
+ pipe.enable_model_cpu_offload()

By default, the text-to-image pipeline uses the DDIMScheduler, but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality:

Copied

from diffusers import DDPMScheduler
from diffusers import DiffusionPipeline

scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler")
pipe = DiffusionPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
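As a final sketch, the scheduler swap and torch.compile tips above can be stacked on the decoder pipeline. Whether the combination pays off depends on your hardware, and generation itself still goes through the prior + decoder flow shown earlier in this guide.

Copied

import torch
from diffusers import DiffusionPipeline, DDPMScheduler

scheduler = DDPMScheduler.from_pretrained(
    "kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler"
)
pipe = DiffusionPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1",
    scheduler=scheduler,
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# torch.compile on the decoder UNet (PyTorch >= 2.0), as shown above
pipe.unet.to(memory_format=torch.channels_last)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)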
Monitoring_TGI_server_with_Prometheus_and_Grafana_.txt | Monitoring TGI server with Prometheus and Grafana dashboard Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Monitoring TGI server with Prometheus and Grafana dashboard text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Monitoring TGI server with Prometheus and Grafana dashboard TGI server deployment can easily be monitored through a Grafana dashboard, consuming a Prometheus data collection. Example of inspectable metrics are statistics on the effective batch sizes used by TGI, prefill/decode latencies, number of generated tokens, etc. In this tutorial, we look at how to set up a local Grafana dashboard to monitor TGI usage. Setup on the server machine First, on your server machine, TGI needs to be launched as usual. TGI exposes multiple metrics that can be collected by Prometheus monitoring server. In the rest of this tutorial, we assume that TGI was launched through Docker with --network host . On the server where TGI is hosted, a Prometheus server needs to be installed and launched. To do so, please follow Prometheus installation instructions . For example, at the time of writing on a Linux machine: Copied wget https://github.com/prometheus/prometheus/releases/download/v2. 52 . 0 /prometheus- 2 . 52 . 0 .linux-amd64.tar.gz tar -xvzf prometheus- 2 . 52 . 0 .linux-amd64.tar.gz cd prometheus Prometheus needs to be configured to listen on TGI’s port. To do so, in Prometheus configuration file prometheus.yml , one needs to edit the lines: Copied static_configs: - targets: [ "0.0.0.0:80" ] to use the correct IP address and port. 
We suggest to try curl 0.0.0.0:80/generate -X POST -d '{"inputs":"hey chatbot, how are","parameters":{"max_new_tokens":15}}' -H 'Content-Type: application/json' on the server side to make sure to configure the correct IP and port. Once Prometheus is configured, Prometheus server can be launched on the same machine where TGI is launched: Copied ./prometheus --config .file= "prometheus.yml" In this guide, Prometheus monitoring data will be consumed on a local computer. Hence, we need to forward Prometheus port (by default 9090) to the local computer. To do so, we can for example: Use ssh local port forwarding Use ngrok port tunneling For simplicity, we will use Ngrok in this guide to tunnel Prometheus port from the TGI server to the outside word. For that, you should follow the steps at https://dashboard.ngrok.com/get-started/setup/linux , and once Ngrok is installed, use: Copied ngrok http http://0.0.0.0:9090 As a sanity check, one can make sure that Prometheus server can be accessed at the URL given by Ngrok (in the style of https://d661-4-223-164-145.ngrok-free.app ) from a local machine. Setup on the monitoring machine Monitoring is typically done on an other machine than the server one. We use a Grafana dashboard to monitor TGI’s server usage. Two options are available: Use Grafana Cloud for an hosted dashboard solution ( https://grafana.com/products/cloud/ ). Self-host a grafana dashboard. In this tutorial, for simplicity, we will self host the dashbard. We recommend installing Grafana Open-source edition following the official install instructions , using the available Linux binaries. For example: Copied wget https://dl.grafana.com/oss/release/grafana-11.0.0.linux-amd64.tar.gz tar -zxvf grafana-11.0.0.linux-amd64.tar.gz cd grafana-11.0.0 ./bin/grafana-server Once the Grafana server is launched, the Grafana interface is available at http://localhost:3000. One needs to log in with the admin username and admin password. Once logged in, the Prometheus data source for Grafana needs to be configured, in the option Add your first data source . There, a Prometheus data source needs to be added with the Ngrok address we got earlier, that exposes Prometheus port (example: https://d661-4-223-164-145.ngrok-free.app ). Once Prometheus data source is configured, we can finally create our dashboard! From home, go to Create your first dashboard and then Import dashboard . There, we will use the recommended dashboard template tgi_grafana.json for a dashboard ready to be used, but you may configure your own dashboard as you like. Community contributed dashboard templates are also available, for example here or here . Load your dashboard configuration, and your TGI dashboard should be ready to go! < > Update on GitHub ← Visual Language Models Train Medusa → Monitoring TG I server with Prometheus and Grafana dashboard Setup on the server machine Setup on the monitoring machine |
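Before wiring up Prometheus, you can also sanity-check from Python that the TGI server is exposing its metrics endpoint. The snippet below assumes TGI is reachable on port 80, as in the configuration above, and that the exported metric names start with tgi_ (see the Exported Metrics reference).

Copied

import requests

# TGI exposes Prometheus-formatted metrics on its /metrics route.
resp = requests.get("http://0.0.0.0:80/metrics", timeout=5)
resp.raise_for_status()

for line in resp.text.splitlines():
    if line.startswith("tgi_"):
        print(line)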
Using_GPU_Spaces.txt | Using GPU Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using GPU Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using GPU Spaces You can upgrade your Space to use a GPU accelerator using the Settings button in the top navigation bar of the Space. You can even request a free upgrade if you are building a cool demo for a side project! Longer-term, we would also like to expose non-GPU hardware, like HPU, IPU or TPU. If you have a specific AI hardware you'd like to run on, please let us know (website at huggingface.co). As soon as your Space is running on GPU you can see which hardware it’s running on directly from this badge: Hardware Specs In the following tables, you can see the Specs for the different upgrade options. CPU Hardware CPU Memory GPU Memory Disk Hourly Price CPU Basic 2 vCPU 16 GB - 50 GB Free! 
CPU Upgrade 8 vCPU 32 GB - 50 GB $0.03 GPU Hardware CPU Memory GPU Memory Disk Hourly Price Nvidia T4 - small 4 vCPU 15 GB 16 GB 50 GB $0.40 Nvidia T4 - medium 8 vCPU 30 GB 16 GB 100 GB $0.60 Nvidia A10G - small 4 vCPU 15 GB 24 GB 110 GB $1.00 Nvidia A10G - large 12 vCPU 46 GB 24 GB 200 GB $1.50 2x Nvidia A10G - large 24 vCPU 92 GB 48 GB 1000 GB $3.00 4x Nvidia A10G - large 48 vCPU 184 GB 96 GB 2000 GB $5.00 Nvidia A100 - large 12 vCPU 142 GB 80 GB 1000 GB $4.00 1x Nvidia L40S 8 vCPU 62 GB 48 GB 380 GB $1.80 4x Nvidia L40S 48 vCPU 48 GB 192 GB 3200 GB $8.30 8x Nvidia L40S 192 vCPU 1534 GB 384 GB 6500 GB $23.50 Nvidia H100 24 vCPU 250 GB 80 GB 3000 GB $10.00 8x Nvidia H100 192 vCPU 2 TB 640 GB 3000 GB coming soon TPU Hardware Accelerators Accelerator Memory RAM Hourly Price Google TPU v5e - 1x1 1 16 GB 44 GB $1.20 Google TPU v5e - 2x2 4 64 GB 186 GB $4.75 Google TPU v5e - 2x4 8 128 GB 380 GB $9.50 Configure hardware programmatically You can programmatically configure your Space hardware using huggingface_hub . This allows for a wide range of use cases where you need to dynamically assign GPUs. Check out this guide for more details. Framework specific requirements Most Spaces should run out of the box after a GPU upgrade, but sometimes you’ll need to install CUDA versions of the machine learning frameworks you use. Please, follow this guide to ensure your Space takes advantage of the improved hardware. PyTorch You’ll need to install a version of PyTorch compatible with the built-in CUDA drivers. Adding the following two lines to your requirements.txt file should work: Copied --extra-index-url https: // download.pytorch.org /whl/ cu113 torch You can verify whether the installation was successful by running the following code in your app.py and checking the output in your Space logs: Copied import torch print ( f"Is CUDA available: {torch.cuda.is_available()} " ) # True print ( f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())} " ) # Tesla T4 Many frameworks automatically use the GPU if one is available. This is the case for the Pipelines in 🤗 transformers , fastai and many others. In other cases, or if you use PyTorch directly, you may need to move your models and data to the GPU to ensure computation is done on the accelerator and not on the CPU. You can use PyTorch’s .to() syntax, for example: Copied model = load_pytorch_model() model = model.to( "cuda" ) JAX If you use JAX, you need to specify the URL that contains CUDA compatible packages. Please, add the following lines to your requirements.txt file: Copied -f https: // storage.googleapis.com /jax-releases/ jax_cuda_releases.html jax[cuda11_pip] jaxlib After that, you can verify the installation by printing the output from the following code and checking it in your Space logs. Copied import jax print ( f"JAX devices: {jax.devices()} " ) # JAX devices: [StreamExecutorGpuDevice(id=0, process_index=0)] print ( f"JAX device type: {jax.devices()[ 0 ].device_kind} " ) # JAX device type: Tesla T4 Tensorflow The default tensorflow installation should recognize the CUDA device. Just add tensorflow to your requirements.txt file and use the following code in your app.py to verify in your Space logs. 
Copied import tensorflow as tf print (tf.config.list_physical_devices( 'GPU' )) # [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] Billing Billing on Spaces is based on hardware usage and is computed by the minute: you get charged for every minute the Space runs on the requested hardware, regardless of whether the Space is used. During a Space’s lifecycle, it is only billed when the Space is actually Running . This means that there is no cost during build or startup. If a running Space starts to fail, it will be automatically suspended and the billing will stop. Spaces running on free hardware are suspended automatically if they are not used for an extended period of time (e.g. two days). Upgraded Spaces run indefinitely by default, even if there is no usage. You can change this behavior by setting a custom “sleep time” in the Space’s settings. To interrupt the billing on your Space, you can change the Hardware to CPU basic, or pause it. Additional information about billing can be found in the dedicated Hub-wide section . Community GPU Grants Do you have an awesome Space but need help covering the GPU hardware upgrade costs? We love helping out those with an innovative Space so please feel free to apply for a community GPU grant and see if yours makes the cut! This application can be found in your Space hardware repo settings in the lower left corner under “sleep time settings”: Set a custom sleep time If your Space runs on the default cpu-basic hardware, it will go to sleep if inactive for more than a set time (currently, 48 hours). Anyone visiting your Space will restart it automatically. If you want your Space never to deactivate or if you want to set a custom sleep time, you need to upgrade to a paid Hardware. By default, an upgraded Space will never go to sleep. However, you can use this setting for your upgraded Space to become idle ( stopped stage) when it’s unused 😴. You are not going to be charged for the upgraded hardware while it is asleep. The Space will ‘wake up’ or get restarted once it receives a new visitor. The following interface will then be available in your Spaces hardware settings: The following options are available: Pausing a Space You can pause a Space from the repo settings. A “paused” Space means that the Space is on hold and will not use resources until manually restarted, and only the owner of a paused Space can restart it. Paused time is not billed. < > Update on GitHub ← Using Spaces for Organization Cards Spaces ZeroGPU → Using GP U Spaces Hardware Specs CPU GPU TPU Configure hardware programmatically Framework specific requirements Py Torch JAX Tensorflow Billing Community GP U Grants Set a custom sleep time Pausing a Space |
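As a quick illustration of the programmatic hardware configuration and Space pausing described above, a sketch with huggingface_hub might look like this (the Space id and hardware flavor are placeholders; see the linked guide for the full API):

Copied

from huggingface_hub import HfApi

api = HfApi(token="hf_***")  # a token with write access to the Space

# Upgrade the Space hardware (placeholder repo id)
api.request_space_hardware(repo_id="your-username/your-space", hardware="t4-small")

# Pause it later to interrupt billing
api.pause_space(repo_id="your-username/your-space")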
Process_text_data.txt | Process text data Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Process text data Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Process text data This guide shows specific methods for processing text datasets. Learn how to: Tokenize a dataset with map() . Align dataset labels with label ids for NLI datasets. For a guide on how to process any type of dataset, take a look at the general process guide . Map The map() function supports processing batches of examples at once which speeds up tokenization. Load a tokenizer from 🤗 Transformers : Copied >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "bert-base-cased" ) Set the batched parameter to True in the map() function to apply the tokenizer to batches of examples: Copied >>> dataset = dataset. map ( lambda examples: tokenizer(examples[ "text" ]), batched= True ) >>> dataset[ 0 ] { 'text' : 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .' 
, 'label' : 1 , 'input_ids' : [ 101 , 1996 , 2600 , 2003 , 16036 , 2000 , 2022 , 1996 , 7398 , 2301 , 1005 , 1055 , 2047 , 1000 , 16608 , 1000 , 1998 , 2008 , 2002 , 1005 , 1055 , 2183 , 2000 , 2191 , 1037 , 17624 , 2130 , 3618 , 2084 , 7779 , 29058 , 8625 , 13327 , 1010 , 3744 , 1011 , 18856 , 19513 , 3158 , 5477 , 4168 , 2030 , 7112 , 16562 , 2140 , 1012 , 102 ], 'token_type_ids' : [ 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ], 'attention_mask' : [ 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ]} The map() function converts the returned values to a PyArrow-supported format. But explicitly returning the tensors as NumPy arrays is faster because it is a natively supported PyArrow format. Set return_tensors="np" when you tokenize your text: Copied >>> dataset = dataset. map ( lambda examples: tokenizer(examples[ "text" ], return_tensors= "np" ), batched= True ) Align The align_labels_with_mapping() function aligns a dataset label id with the label name. Not all 🤗 Transformers models follow the prescribed label mapping of the original dataset, especially for NLI datasets. For example, the MNLI dataset uses the following label mapping: Copied >>> label2id = { "entailment" : 0 , "neutral" : 1 , "contradiction" : 2 } To align the dataset label mapping with the mapping used by a model, create a dictionary of the label name and id to align on: Copied >>> label2id = { "contradiction" : 0 , "neutral" : 1 , "entailment" : 2 } Pass the dictionary of the label mappings to the align_labels_with_mapping() function, and the column to align on: Copied >>> from datasets import load_dataset >>> mnli = load_dataset( "glue" , "mnli" , split= "train" ) >>> mnli_aligned = mnli.align_labels_with_mapping(label2id, "label" ) You can also use this function to assign a custom mapping of labels to ids. < > Update on GitHub ← Load text data Load tabular data → Process text data Map Align |
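To tie the Map and Align sections above together, here is a short end-to-end sketch (column names follow the GLUE MNLI schema; padding is added so the batched NumPy tensors are rectangular):

Copied

from datasets import load_dataset
from transformers import AutoTokenizer

mnli = load_dataset("glue", "mnli", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Batched tokenization returning NumPy arrays
mnli = mnli.map(
    lambda examples: tokenizer(
        examples["premise"], examples["hypothesis"],
        truncation=True, padding=True, return_tensors="np",
    ),
    batched=True,
)

# Align the dataset labels with a model-style mapping
label2id = {"contradiction": 0, "neutral": 1, "entailment": 2}
mnli = mnli.align_labels_with_mapping(label2id, "label")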
Gated_models.txt | Gated models Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Gated models Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Gated models To give more control over how models are used, the Hub allows model authors to enable access requests for their models. Users must agree to share their contact information (username and email address) with the model authors to access the model files when enabled. Model authors can configure this request with additional fields. A model with access requests enabled is called a gated model . Access requests are always granted to individual users rather than to entire organizations. A common use case of gated models is to provide access to early research models before the wider release. Manage gated models as a model author To enable access requests, go to the model settings page. By default, the model is not gated. Click on Enable Access request in the top-right corner. By default, access to the model is automatically granted to the user when requesting it. This is referred to as automatic approval . In this mode, any user can access your model once they’ve shared their personal information with you. If you want to manually approve which users can access your model, you must set it to manual approval . When this is the case, you will notice more options: Add access allows you to search for a user and grant them access even if they did not request it. Notification frequency lets you configure when to get notified if new users request access. It can be set to once a day or real-time. By default, an email is sent to your primary email address. For models hosted under an organization, emails are by default sent to the first 5 admins of the organization. 
In both cases (user or organization) you can set a different email address in the Notifications email field. Review access requests Once access requests are enabled, you have full control of who can access your model or not, whether the approval mode is manual or automatic. You can review and manage requests either from the UI or via the API. From the UI You can review who has access to your gated model from its settings page by clicking on the Review access requests button. This will open a modal with 3 lists of users: pending : the list of users waiting for approval to access your model. This list is empty unless you’ve selected manual approval . You can either Accept or Reject the demand. If the demand is rejected, the user cannot access your model and cannot request access again. accepted : the complete list of users with access to your model. You can choose to Reject access at any time for any user, whether the approval mode is manual or automatic. You can also Cancel the approval, which will move the user to the pending list. rejected : the list of users you’ve manually rejected. Those users cannot access your models. If they go to your model repository, they will see a message Your request to access this repo has been rejected by the repo’s authors . Via the API You can automate the approval of access requests by using the API. You must pass a token with write access to the gated repository. To generate a token, go to your user settings . Method URI Description Headers Payload GET /api/models/{repo_id}/user-access-request/pending Retrieve the list of pending requests. {"authorization": "Bearer $token"} GET /api/models/{repo_id}/user-access-request/accepted Retrieve the list of accepted requests. {"authorization": "Bearer $token"} GET /api/models/{repo_id}/user-access-request/rejected Retrieve the list of rejected requests. {"authorization": "Bearer $token"} POST /api/models/{repo_id}/user-access-request/handle Change the status of a given access request to status . {"authorization": "Bearer $token"} {"status": "accepted"/"rejected"/"pending", "user": "username", "rejectionReason": "Optional rejection reason that will be visible to the user (max 200 characters)."} POST /api/models/{repo_id}/user-access-request/grant Allow a specific user to access your repo. {"authorization": "Bearer $token"} {"user": "username"} The base URL for the HTTP endpoints above is https://huggingface.co . NEW! Those endpoints are now officially supported in our Python client huggingface_hub . List the access requests to your model with list_pending_access_requests , list_accepted_access_requests and list_rejected_access_requests . You can also accept, cancel and reject access requests with accept_access_request , cancel_access_request , reject_access_request . Finally, you can grant access to a user with grant_access . Download access report You can download a report of all access requests for a gated model with the download user access report button. Click on it to download a json file with a list of users. For each entry, you have: user : the user id. Example: julien-c . fullname : name of the user on the Hub. Example: Julien Chaumond . status : status of the request. Either "pending" , "accepted" or "rejected" . email : email of the user. time : datetime when the user initially made the request. Customize requested information By default, users landing on your gated model will be asked to share their contact information (email and username) by clicking the Agree and send request to access repo button. 
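The endpoints listed above map to helpers in huggingface_hub. As a hedged sketch of programmatic approval (the repo id is a placeholder, and the exact attributes of the returned request objects may differ slightly; check the huggingface_hub reference), it can look like this:

>>> from huggingface_hub import list_pending_access_requests, accept_access_request

>>> repo_id = "my-org/my-gated-model"  # hypothetical gated repo

>>> # List users waiting for approval (requires a token with write access to the repo)
>>> for request in list_pending_access_requests(repo_id):
...     print(request.username)

>>> # Accept a specific request
>>> accept_access_request(repo_id, user="some-username")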
If you want to collect more user information, you can configure additional fields. This information will be accessible from the Settings tab. To do so, add an extra_gated_fields property to your model card metadata containing a list of key/value pairs. The key is the name of the field and value its type or an object with a type field. The list of field types is: text : a single-line text field. checkbox : a checkbox field. date_picker : a date picker field. country : a country dropdown. The list of countries is based on the ISO 3166-1 alpha-2 standard. select : a dropdown with a list of options. The list of options is defined in the options field. Example: options: ["option 1", "option 2", {label: "option3", value: "opt3"}] . Finally, you can also personalize the message displayed to the user with the extra_gated_prompt extra field. Here is an example of customized request form where the user is asked to provide their company name and country and acknowledge that the model is for non-commercial use only. Copied --- extra_gated_prompt: "You agree to not use the model to conduct experiments that cause harm to human subjects." extra_gated_fields: Company: text Country: country Specific date: date_picker I want to use this model for: type: select options: - Research - Education - label: Other value: other I agree to use this model for non-commercial use ONLY: checkbox --- In some cases, you might also want to modify the default text in the gate heading, description, and button. For those use cases, you can modify extra_gated_heading , extra_gated_description and extra_gated_button_content like this: Copied --- extra_gated_heading: "Acknowledge license to accept the repository" extra_gated_description: "Our team may take 2-3 days to process your request" extra_gated_button_content: "Acknowledge license" --- Example use cases of programmatically managing access requests Here are a few interesting use cases of programmatically managing access requests for gated repos we’ve seen organically emerge in the community. As a reminder, the model repo needs to be set to manual approval, otherwise users get access to it automatically. Possible use cases of programmatic management include: If you have advanced user request screening requirements (for advanced compliance requirements, etc) or you wish to handle the user requests outside the Hub. An example for this was Meta’s Llama 2 initial release where users had to request access on a Meta website. You can ask users for their HF username in your access flow, and then use a script to programmatically accept user requests on the Hub based on your set of conditions. If you want to condition access to a model based on completing a payment flow (note that the actual payment flow happens outside of the Hub). Here’s an example repo from TrelisResearch that uses this use case. @RonanMcGovern has posted a video about the flow and tips on how to implement it. Manage gated models as an organization (Enterprise Hub) Enterprise Hub subscribers can create a Gating Group Collection to grant (or reject) access to all the models and datasets in a collection at once. More information about Gating Group Collections can be found in our dedicated doc . Access gated models as a user As a user, if you want to use a gated model, you will need to request access to it. This means that you must be logged in to a Hugging Face user account. Requesting access can only be done from your browser. 
Go to the model on the Hub and you will be prompted to share your information: By clicking on Agree , you agree to share your username and email address with the model authors. In some cases, additional fields might be requested. To help the model authors decide whether to grant you access, try to fill out the form as completely as possible. Once the access request is sent, there are two possibilities. If the approval mechanism is automatic, you immediately get access to the model files. Otherwise, the requests have to be approved manually by the authors, which can take more time. The model authors have complete control over model access. In particular, they can decide at any time to block your access to the model without prior notice, regardless of approval mechanism or if your request has already been approved. Download files To download files from a gated model you’ll need to be authenticated. In the browser, this is automatic as long as you are logged in with your account. If you are using a script, you will need to provide a user token . In the Hugging Face Python ecosystem ( transformers , diffusers , datasets , etc.), you can login your machine using the huggingface_hub library and running in your terminal: Copied huggingface-cli login Alternatively, you can programmatically login using login() in a notebook or a script: Copied >>> from huggingface_hub import login >>> login() You can also provide the token parameter to most loading methods in the libraries ( from_pretrained , hf_hub_download , load_dataset , etc.), directly from your scripts. For more details about how to login, check out the login guide . < > Update on GitHub ← Card Components Uploading Models → Gated models Manage gated models as a model author Review access requests From the UI Via the API Download access report Customize requested information Example use cases of programmatically managing access requests Manage gated models as an organization ( Enterprise Hub) Access gated models as a user Download files |
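As a small illustration of the token-based download described in the Download files section above, here is a sketch that fetches a single file from a gated repo once your access request has been approved. It relies on the cached login from huggingface-cli login; pass token=... explicitly if you prefer.

>>> from huggingface_hub import hf_hub_download

>>> # Works once your access request has been approved
>>> path = hf_hub_download(repo_id="meta-llama/Llama-2-7b-hf", filename="config.json")
>>> print(path)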
Model_Card_components.txt | Model Card components Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Model Card components Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Annotated Model Card Carbon Emissions Model Card Guidebook Landscape Analysis Card Components Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Model Card components Model Card Components are special elements that you can inject directly into your Model Card markdown to display powerful custom components in your model page. These components are authored by us, feel free to share ideas about new Model Card component in this discussion . The Gallery component The <Gallery /> component can be used in your model card to showcase your generated images and videos. How to use it? Update your Model Card widget metadata to add the media you want to showcase. Copied widget: - text: "drawing of tintin in a shop" output: url: "images/shop.png" - text: "drawing of tintin watching rugby" output: url: "images/rugby.png" parameters: negative_prompt: "blurry" - text: "tintin working at the office" output: url: "images/office.png" Add the <Gallery /> component to your card. The widget metadata will be used by the <Gallery /> component to display the media with each associated prompt. Copied < Gallery /> ## Model description TintinIA is fine-tuned version of Stable-Diffusion-xl trained on 125 comics panels from Tintin album. Hint: Support of Card Components through the GUI editor coming soon… < > Update on GitHub ← Landscape Analysis Gated Models → Model Card components The Gallery component How to use it? |
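If you maintain your model card programmatically, the widget metadata and the <Gallery /> component can also be pushed with huggingface_hub. This is a hedged sketch, not the only way to do it; the repo id and image paths are placeholders.

from huggingface_hub import ModelCard

content = """---
widget:
- text: "drawing of tintin in a shop"
  output:
    url: "images/shop.png"
---

<Gallery />

## Model description

TintinIA is a fine-tuned version of Stable-Diffusion-xl.
"""

card = ModelCard(content)
card.push_to_hub("my-user/tintin-lora")  # hypothetical repo id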
DDUF.txt | DDUF Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation DDUF Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Integrate a library with the Hub Tasks GGUF DDUF Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started DDUF Overview DDUF ( D DUF’s D iffusion U nified F ormat) is a single-file format for diffusion models that aims to unify the different model distribution methods and weight-saving formats by packaging all model components into a single file. It is language-agnostic and built to be parsable from a remote location without downloading the entire file. This work draws inspiration from the GGUF format. Check out the DDUF org to start using some of the most popular diffusion models in DDUF. We welcome contributions with open arms! To create a widely adopted file format, we need early feedback from the community. Nothing is set in stone, and we value everyone’s input. Is your use case not covered? Please let us know in the DDUF organization discussions . Its key features include the following. Single file packaging. Based on ZIP file format to leverage existing tooling. No compression, ensuring mmap compatibility for fast loading and saving. Language-agnostic : tooling can be implemented in Python, JavaScript, Rust, C++, etc. HTTP-friendly : metadata and file structure can be fetched remotely using HTTP Range requests. Flexible : each model component is stored in its own directory, following the current Diffusers structure. Safe : uses Safetensors as a weight-saving format and prohibits nested directories to prevent ZIP bombs. Technical specifications Technically, a .dduf file is a .zip archive . By building on a universally supported file format, we ensure robust tooling already exists. 
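Because a .dduf file is a plain ZIP archive, existing tooling can already inspect it. For example, a minimal sketch with Python's built-in zipfile module, assuming you have downloaded FLUX.1-dev.dduf locally:

import zipfile

with zipfile.ZipFile("FLUX.1-dev.dduf") as archive:
    for info in archive.infolist():
        # Entries are stored uncompressed, so they can be memory-mapped directly
        print(info.filename, info.file_size)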
However, some constraints are enforced to meet diffusion models’ requirements: Data must be stored uncompressed (flag 0 ), allowing lazy-loading using memory-mapping. Data must be stored using ZIP64 protocol, enabling saving files above 4GB. The archive can only contain .json , .safetensors , .model and .txt files. A model_index.json file must be present at the root of the archive. It must contain a key-value mapping with metadata about the model and its components. Each component must be stored in its own directory (e.g., vae/ , text_encoder/ ). Nested files must use UNIX-style path separators ( / ). Each directory must correspond to a component in the model_index.json index. Each directory must contain a json config file (one of config.json , tokenizer_config.json , preprocessor_config.json , scheduler_config.json ). Sub-directories are forbidden. Want to check if your file is valid? Check it out using this Space: https://huggingface.co/spaces/DDUF/dduf-check . Usage The huggingface_hub provides tooling to handle DDUF files in Python. It includes built-in rules to validate file integrity and helpers to read and export DDUF files. The goal is to see this tooling adopted in the Python ecosystem, such as in the diffusers integration. Similar tooling can be developed for other languages (JavaScript, Rust, C++, etc.). How to read a DDUF file? Pass a path to read_dduf_file to read a DDUF file. Only the metadata is read, meaning this is a lightweight call that won’t explode your memory. In the example below, we consider that you’ve already downloaded the FLUX.1-dev.dduf file locally. Copied >>> from huggingface_hub import read_dduf_file # Read DDUF metadata >>> dduf_entries = read_dduf_file( "FLUX.1-dev.dduf" ) read_dduf_file returns a mapping where each entry corresponds to a file in the DDUF archive. A file is represented by a DDUFEntry dataclass that contains the filename, offset, and length of the entry in the original DDUF file. This information is useful to read its content without loading the whole file. In practice, you won’t have to handle low-level reading but rely on helpers instead. For instance, here is how to load the model_index.json content: Copied >>> import json >>> json.loads(dduf_entries[ "model_index.json" ].read_text()) { '_class_name' : 'FluxPipeline' , '_diffusers_version' : '0.32.0.dev0' , '_name_or_path' : 'black-forest-labs/FLUX.1-dev' , ... For binary files, you’ll want to access the raw bytes using as_mmap . This returns bytes as a memory-mapping on the original file. The memory-mapping allows you to read only the bytes you need without loading everything in memory. For instance, here is how to load safetensors weights: Copied >>> import safetensors.torch >>> with dduf_entries[ "vae/diffusion_pytorch_model.safetensors" ].as_mmap() as mm: ... state_dict = safetensors.torch.load(mm) # `mm` is a bytes object as_mmap must be used in a context manager to benefit from the memory-mapping properties. How to write a DDUF file? Pass a folder path to export_folder_as_dduf to export a DDUF file. Copied # Export a folder as a DDUF file >>> from huggingface_hub import export_folder_as_dduf >>> export_folder_as_dduf( "FLUX.1-dev.dduf" , folder_path= "path/to/FLUX.1-dev" ) This tool scans the folder, adds the relevant entries and ensures the exported file is valid. If anything goes wrong during the process, a DDUFExportError is raised. 
For more flexibility, use [ export_entries_as_dduf ] to explicitly specify a list of files to include in the final DDUF file: Copied # Export specific files from the local disk. >>> from huggingface_hub import export_entries_as_dduf >>> export_entries_as_dduf( ... dduf_path= "stable-diffusion-v1-4-FP16.dduf" , ... entries=[ # List entries to add to the DDUF file (here, only FP16 weights) ... ( "model_index.json" , "path/to/model_index.json" ), ... ( "vae/config.json" , "path/to/vae/config.json" ), ... ( "vae/diffusion_pytorch_model.fp16.safetensors" , "path/to/vae/diffusion_pytorch_model.fp16.safetensors" ), ... ( "text_encoder/config.json" , "path/to/text_encoder/config.json" ), ... ( "text_encoder/model.fp16.safetensors" , "path/to/text_encoder/model.fp16.safetensors" ), ... # ... add more entries here ... ] ... ) export_entries_as_dduf works well if you’ve already saved your model on the disk. But what if you have a model loaded in memory and want to serialize it directly into a DDUF file? export_entries_as_dduf lets you do that by providing a Python generator that tells how to serialize the data iteratively: Copied (...) # Export state_dicts one by one from a loaded pipeline >>> def as_entries ( pipe: DiffusionPipeline ) -> Generator[ Tuple [ str , bytes ], None , None ]: ... # Build a generator that yields the entries to add to the DDUF file. ... # The first element of the tuple is the filename in the DDUF archive. The second element is the content of the file. ... # Entries will be evaluated lazily when the DDUF file is created (only 1 entry is loaded in memory at a time) ... yield "vae/config.json" , pipe.vae.to_json_string().encode() ... yield "vae/diffusion_pytorch_model.safetensors" , safetensors.torch.save(pipe.vae.state_dict()) ... yield "text_encoder/config.json" , pipe.text_encoder.config.to_json_string().encode() ... yield "text_encoder/model.safetensors" , safetensors.torch.save(pipe.text_encoder.state_dict()) ... # ... add more entries here >>> export_entries_as_dduf(dduf_path= "my-cool-diffusion-model.dduf" , entries=as_entries(pipe)) Loading a DDUF file with Diffusers Diffusers has a built-in integration for DDUF files. Here is an example on how to load a pipeline from a stored checkpoint on the Hub: Copied from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained( "DDUF/FLUX.1-dev-DDUF" , dduf_file= "FLUX.1-dev.dduf" , torch_dtype=torch.bfloat16 ).to( "cuda" ) image = pipe( "photo a cat holding a sign that says Diffusers" , num_inference_steps= 50 , guidance_scale= 3.5 ).images[ 0 ] image.save( "cat.png" ) F.A.Q. Why build on top of ZIP? ZIP provides several advantages: Universally supported file format No additional dependencies for reading Built-in file indexing Wide language support Why not use a TAR with a table of contents at the beginning of the archive? See the explanation in this comment . Why no compression? Enables direct memory mapping of large files Ensures consistent and predictable remote file access Prevents CPU overhead during file reading Maintains compatibility with safetensors Can I modify a DDUF file? No. For now, DDUF files are designed to be immutable. To update a model, create a new DDUF file. Which frameworks/apps support DDUFs? Diffusers We are constantly reaching out to other libraries and frameworks. If you are interested in adding support to your project, open a Discussion in the DDUF org . 
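To round off the usage section, here is a small end-to-end sketch (paths are placeholders, not from the official docs): export a Diffusers-style folder with export_folder_as_dduf, then verify the result by reading the index back with read_dduf_file.

import json
from huggingface_hub import export_folder_as_dduf, read_dduf_file

# Export a local Diffusers-style folder into a single DDUF file
export_folder_as_dduf("my-model.dduf", folder_path="path/to/my-model")

# Read only the metadata back and check the component index
entries = read_dduf_file("my-model.dduf")
print(json.loads(entries["model_index.json"].read_text()).keys())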
Tensorflow_API.txt | Tensorflow API Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Safetensors documentation Tensorflow API Safetensors 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.5.0-rc.0 v0.3.2 v0.2.9 EN Getting started 🤗 Safetensors Speed Comparison Tensor Sharing in Pytorch Metadata Parsing Convert weights to safetensors API Torch API Tensorflow API PaddlePaddle API Flax API Numpy API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.5.0-rc.0 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Tensorflow API safetensors.tensorflow.load_file < source > ( filename : typing.Union[str, os.PathLike] ) → Dict[str, tf.Tensor] Parameters filename ( str , or os.PathLike )) — The name of the file which contains the tensors Returns Dict[str, tf.Tensor] dictionary that contains name as key, value as tf.Tensor Loads a safetensors file into tensorflow format. Example: Copied from safetensors.tensorflow import load_file file_path = "./my_folder/bert.safetensors" loaded = load_file(file_path) safetensors.tensorflow.load < source > ( data : bytes ) → Dict[str, tf.Tensor] Parameters data ( bytes ) — The content of a safetensors file Returns Dict[str, tf.Tensor] dictionary that contains name as key, value as tf.Tensor on cpu Loads a safetensors file into tensorflow format from pure bytes. Example: Copied from safetensors.tensorflow import load file_path = "./my_folder/bert.safetensors" with open (file_path, "rb" ) as f: data = f.read() loaded = load(data) safetensors.tensorflow.save_file < source > ( tensors : typing.Dict[str, tensorflow.python.framework.tensor.Tensor] filename : typing.Union[str, os.PathLike] metadata : typing.Optional[typing.Dict[str, str]] = None ) → None Parameters tensors ( Dict[str, tf.Tensor] ) — The incoming tensors. Tensors need to be contiguous and dense. filename ( str , or os.PathLike )) — The filename we’re saving into. metadata ( Dict[str, str] , optional , defaults to None ) — Optional text only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns None Saves a dictionary of tensors into raw bytes in safetensors format. Example: Copied from safetensors.tensorflow import save_file import tensorflow as tf tensors = { "embedding" : tf.zeros(( 512 , 1024 )), "attention" : tf.zeros(( 256 , 256 ))} save_file(tensors, "model.safetensors" ) safetensors.tensorflow.save < source > ( tensors : typing.Dict[str, tensorflow.python.framework.tensor.Tensor] metadata : typing.Optional[typing.Dict[str, str]] = None ) → bytes Parameters tensors ( Dict[str, tf.Tensor] ) — The incoming tensors. Tensors need to be contiguous and dense. 
metadata ( Dict[str, str] , optional , defaults to None ) — Optional text-only metadata you might want to save in your header. For instance it can be useful to specify more about the underlying tensors. This is purely informative and does not affect tensor loading. Returns bytes — the raw bytes representing the tensors in safetensors format. Saves a dictionary of tensors into raw bytes in safetensors format. Example:

from safetensors.tensorflow import save
import tensorflow as tf

tensors = {"embedding": tf.zeros((512, 1024)), "attention": tf.zeros((256, 256))}
byte_data = save(tensors)
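To close the loop on the functions documented above, a short round-trip sketch: save tensors to disk with save_file and read them back with load_file (the file path is a placeholder).

import tensorflow as tf
from safetensors.tensorflow import save_file, load_file

tensors = {"embedding": tf.zeros((512, 1024)), "attention": tf.zeros((256, 256))}
save_file(tensors, "model.safetensors")

loaded = load_file("model.safetensors")
print({name: tuple(t.shape) for name, t in loaded.items()})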
Search_text_in_a_dataset.txt | Search text in a dataset Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation Search text in a dataset Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Search text in a dataset The dataset viewer provides a /search endpoint for searching words in a dataset. Currently, only datasets with Parquet exports are supported so the dataset viewer can index the contents and run the search without downloading the whole dataset. This guide shows you how to use the dataset viewer’s /search endpoint to search for a query string. Feel free to also try it out with ReDoc . The text is searched in the columns of type string , even if the values are nested in a dictionary. We use DuckDB for full text search with the BM25 (Best Match 25) algorithm. BM25 is a ranking algorithm for information retrieval and search engines that determines a document’s relevance to a given query and ranks documents based on their relevance scores. Porter stemmer (which assumes English text) is used to reduce words to their root or base form, known as the stem. This process, called stemming, involves removing suffixes and prefixes from words to identify their core meaning. The purpose of a stemmer is to improve search accuracy and efficiency by ensuring that different forms of a word are recognized as the same term. 
The /search endpoint accepts five query parameters: dataset : the dataset name, for example nyu-mll/glue or mozilla-foundation/common_voice_10_0 config : the subset name, for example cola split : the split name, for example train query : the text to search offset : the offset of the slice, for example 150 length : the length of the slice, for example 10 (maximum: 100 ) For example, let’s search for the text "dog" in the train split of the SelfRC subset of the ibm/duorc dataset, restricting the results to the slice 150-151: Python JavaScript cURL Copied import requests headers = { "Authorization" : f"Bearer {API_TOKEN} " } API_URL = "https://datasets-server.huggingface.co/search?dataset=ibm/duorc&config=SelfRC&split=train&query=dog&offset=150&length=2" def query (): response = requests.get(API_URL, headers=headers) return response.json() data = query() The endpoint response is a JSON containing two keys (same format as /rows ): The features of a dataset, including the column’s name and data type. The slice of rows of a dataset and the content contained in each column of a specific row. The rows are ordered by the row index, and the text strings matching the query are not highlighted. For example, here are the features and the slice 150-151 of matching rows of the ibm/duorc / SelfRC train split for the query dog : Copied { "features" : [ { "feature_idx" : 0 , "name" : "plot_id" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 1 , "name" : "plot" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 2 , "name" : "title" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 3 , "name" : "question_id" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 4 , "name" : "question" , "type" : { "dtype" : "string" , "_type" : "Value" } } , { "feature_idx" : 5 , "name" : "answers" , "type" : { "feature" : { "dtype" : "string" , "_type" : "Value" } , "_type" : "Sequence" } } , { "feature_idx" : 6 , "name" : "no_answer" , "type" : { "dtype" : "bool" , "_type" : "Value" } } ] , "rows" : [ { "row_idx" : 1561 , "row" : { "plot_id" : "/m/014bjk" , "plot" : "The film begins with clips that track a telephone call between London and Geneva, where a university student and part-time model, Valentine Dussault (Irène Jacob), is talking to her emotionally infantile and possessive boyfriend. During her work as a model she poses for a chewing-gum campaign and during the photo shoot the photographer asks her to look very sad. While walking back home, Auguste, a neighbour of Valentine's, drops a set of books, notices that a particular chapter of the Criminal Code opened at random, and concentrates on that passage. As she drives back to her apartment, Valentine is distracted while adjusting the radio and accidentally hits a dog. She tracks down the owner, a reclusive retired judge, Joseph Kern (Jean-Louis Trintignant). He seems unconcerned by the accident or the injuries sustained by Rita, his dog. Valentine takes Rita to a veterinarian, where she learns that Rita is pregnant. Valentine takes the dog home. Later, money is delivered to her apartment from an unnamed sender.\nWhilst Valentine is walking Rita the next day the dog runs away and Valentine eventually finds her back at Kern's house. She asks and he confirms that the money sent to her came from him, for the vet bill. He then tells Valentine she can have the dog. A short time later Valentine finds Kern eavesdropping on his neighbours' private telephone conversations. 
The judge challenges Valentine to go tell the neighbours and initially she goes to do so. She visits the neighbours' house, which appears, on the surface, to contain a contented nuclear family, causing her to change her mind about exposing their secrets. She returns to Kern's house and Kern tells her that it would make no difference if she denounced him for his spying because the people's lives he listens to would eventually turn into hell anyway. She leaves saying that she feels nothing but pity for him.\nWhilst visiting Kern, Valentine hears a phone conversation between her (unbeknownst to her) neighbour, Auguste, and his girlfriend, Karin (Frederique Feder). They discuss if they should go bowling. Valentine covers her ears but from the very little she hears she concludes that they love each other. Kern disagrees. That evening Valentine is alone at home and hopes that her boyfriend will call, but it is the photographer who calls, saying that her billboard was set up that evening and asks her to join them bowling to celebrate. Later, Auguste takes his exam and passes it and becomes a judge. Karin asks if he was asked any questions regarding the article that was open when he dropped his books. Auguste says yes. Karin gives him a fancy fountain pen as a gift and he wonders what the first judgment he signs with it will be. That evening, Kern writes a series of letters to his neighbours and the court confessing his activities, and the community files a class action. Later, at the law courts, he sees Karin make the acquaintance of and begin to flirt with another man. Earlier, Auguste had missed a call from Karin and tried to call her back but got no answer.\nValentine reads the news about a retired judge who spied on his neighbours and rushes to Kern's house to tell him that she did not report on him. He confesses that he turned himself in, just to see what she would do. He asks her in and shows her that Rita has had seven puppies. He tells her that in their last conversation when she spoke about pity he later realized that she really meant disgust. He ponders about the reasons why people obey laws and concludes that often it is more on selfish grounds and from fear than about obeying the law or being decent. It is his birthday and he offers her pear brandy for a toast. During their conversation he reminisces about a sailor he acquitted a long time ago, only later realizing he had made a mistake, and that the man was guilty. However, the man later married, had children and grandchildren and lives peacefully and happy. Valentine says that he did what he had to do, but Kern wonders how many other people that he acquitted or condemned might have seen a different life had he decided otherwise. Valentine tells Kern about her intended trip to England for a modeling job and to visit her boyfriend. Kern suggests that she take the ferry.\nAuguste has been unable to reach Karin since graduation so he goes to her place and sees her having sex with another man. Distraught, he leaves. Later, Auguste sees Karin and her new boyfriend in a restaurant. He gets her attention by tapping on the restaurant window with the pen she gave him. But when she rushes outside, he hides from her. In a temper, he ties his dog by a quayside and abandons him.\nKarin runs a service providing personalised weather information to travelers by telephone. Kern calls and enquires about the weather in the English Channel for the time when Valentine will be traveling to England. 
Karin states that she expects the weather to be perfect and reveals that she is about to take a trip there (with her new boyfriend who owns a yacht).\nThe day before Valentine leaves, she invites Kern to a fashion show where she is modeling. After the show they speak about the dream Kern had about her, where he saw her at the age of 50 and happy with an unidentified man. The conversation then turns to Kern and the reasons why he disliked Karin. Kern reveals that before becoming a judge, he was in love with a woman very much like Karin, who betrayed him for another man. While preparing for his exam, he once went to the same theatre where the fashion show took place and he accidentally dropped one of his books. When he picked it up, Kern studied the chapter where the book accidentally opened, which turned out to be the crucial question at his examination. After his girlfriend left him, he followed her across the English Channel but never saw her again, because she died in an accident. Later, he was assigned to judge a case where the defendant was the same man who took his girlfriend from him. Despite this connection, Kern did not recuse himself from the case and found the man guilty. He tells Valentine the judgment was entirely legal but also that he subsequently requested early retirement.\nValentine boards the ferry to England. Auguste is also on the ferry, clutching the dog he had temporarily abandoned. Although living in the same neighborhood and nearly crossing paths many times, the two have still never met. Suddenly a storm rises and sinks both the ferry and the boat with Karin and her boyfriend. Only seven survivors are pulled from the ferry: the main characters from the first two films of the trilogy, Julie and Olivier from Blue, Karol and Dominique from White, and Valentine and Auguste, who meet for the first time, as well as an English bartender named Stephen Killian. As in the previous films, the film's final sequence shows a character crying - in this case, the judge - but the final image replicates the iconic chewing-gum poster of Valentine, but this time with real emotion showing on her face." , "title" : "Three Colors: Red" , "question_id" : "7c583513-0b7f-ddb3-be43-64befc7e90cc" , "question" : "Where is Valentine going on her trip?" , "answers" : [ "England." ] , "no_answer" : false } , "truncated_cells" : [ ] } , { "row_idx" : 1562 , "row" : { "plot_id" : "/m/014bjk" , "plot" : "The film begins with clips that track a telephone call between London and Geneva, where a university student and part-time model, Valentine Dussault (Irène Jacob), is talking to her emotionally infantile and possessive boyfriend. During her work as a model she poses for a chewing-gum campaign and during the photo shoot the photographer asks her to look very sad. While walking back home, Auguste, a neighbour of Valentine's, drops a set of books, notices that a particular chapter of the Criminal Code opened at random, and concentrates on that passage. As she drives back to her apartment, Valentine is distracted while adjusting the radio and accidentally hits a dog. She tracks down the owner, a reclusive retired judge, Joseph Kern (Jean-Louis Trintignant). He seems unconcerned by the accident or the injuries sustained by Rita, his dog. Valentine takes Rita to a veterinarian, where she learns that Rita is pregnant. Valentine takes the dog home. 
Later, money is delivered to her apartment from an unnamed sender.\nWhilst Valentine is walking Rita the next day the dog runs away and Valentine eventually finds her back at Kern's house. She asks and he confirms that the money sent to her came from him, for the vet bill. He then tells Valentine she can have the dog. A short time later Valentine finds Kern eavesdropping on his neighbours' private telephone conversations. The judge challenges Valentine to go tell the neighbours and initially she goes to do so. She visits the neighbours' house, which appears, on the surface, to contain a contented nuclear family, causing her to change her mind about exposing their secrets. She returns to Kern's house and Kern tells her that it would make no difference if she denounced him for his spying because the people's lives he listens to would eventually turn into hell anyway. She leaves saying that she feels nothing but pity for him.\nWhilst visiting Kern, Valentine hears a phone conversation between her (unbeknownst to her) neighbour, Auguste, and his girlfriend, Karin (Frederique Feder). They discuss if they should go bowling. Valentine covers her ears but from the very little she hears she concludes that they love each other. Kern disagrees. That evening Valentine is alone at home and hopes that her boyfriend will call, but it is the photographer who calls, saying that her billboard was set up that evening and asks her to join them bowling to celebrate. Later, Auguste takes his exam and passes it and becomes a judge. Karin asks if he was asked any questions regarding the article that was open when he dropped his books. Auguste says yes. Karin gives him a fancy fountain pen as a gift and he wonders what the first judgment he signs with it will be. That evening, Kern writes a series of letters to his neighbours and the court confessing his activities, and the community files a class action. Later, at the law courts, he sees Karin make the acquaintance of and begin to flirt with another man. Earlier, Auguste had missed a call from Karin and tried to call her back but got no answer.\nValentine reads the news about a retired judge who spied on his neighbours and rushes to Kern's house to tell him that she did not report on him. He confesses that he turned himself in, just to see what she would do. He asks her in and shows her that Rita has had seven puppies. He tells her that in their last conversation when she spoke about pity he later realized that she really meant disgust. He ponders about the reasons why people obey laws and concludes that often it is more on selfish grounds and from fear than about obeying the law or being decent. It is his birthday and he offers her pear brandy for a toast. During their conversation he reminisces about a sailor he acquitted a long time ago, only later realizing he had made a mistake, and that the man was guilty. However, the man later married, had children and grandchildren and lives peacefully and happy. Valentine says that he did what he had to do, but Kern wonders how many other people that he acquitted or condemned might have seen a different life had he decided otherwise. Valentine tells Kern about her intended trip to England for a modeling job and to visit her boyfriend. Kern suggests that she take the ferry.\nAuguste has been unable to reach Karin since graduation so he goes to her place and sees her having sex with another man. Distraught, he leaves. Later, Auguste sees Karin and her new boyfriend in a restaurant. 
He gets her attention by tapping on the restaurant window with the pen she gave him. But when she rushes outside, he hides from her. In a temper, he ties his dog by a quayside and abandons him.\nKarin runs a service providing personalised weather information to travelers by telephone. Kern calls and enquires about the weather in the English Channel for the time when Valentine will be traveling to England. Karin states that she expects the weather to be perfect and reveals that she is about to take a trip there (with her new boyfriend who owns a yacht).\nThe day before Valentine leaves, she invites Kern to a fashion show where she is modeling. After the show they speak about the dream Kern had about her, where he saw her at the age of 50 and happy with an unidentified man. The conversation then turns to Kern and the reasons why he disliked Karin. Kern reveals that before becoming a judge, he was in love with a woman very much like Karin, who betrayed him for another man. While preparing for his exam, he once went to the same theatre where the fashion show took place and he accidentally dropped one of his books. When he picked it up, Kern studied the chapter where the book accidentally opened, which turned out to be the crucial question at his examination. After his girlfriend left him, he followed her across the English Channel but never saw her again, because she died in an accident. Later, he was assigned to judge a case where the defendant was the same man who took his girlfriend from him. Despite this connection, Kern did not recuse himself from the case and found the man guilty. He tells Valentine the judgment was entirely legal but also that he subsequently requested early retirement.\nValentine boards the ferry to England. Auguste is also on the ferry, clutching the dog he had temporarily abandoned. Although living in the same neighborhood and nearly crossing paths many times, the two have still never met. Suddenly a storm rises and sinks both the ferry and the boat with Karin and her boyfriend. Only seven survivors are pulled from the ferry: the main characters from the first two films of the trilogy, Julie and Olivier from Blue, Karol and Dominique from White, and Valentine and Auguste, who meet for the first time, as well as an English bartender named Stephen Killian. As in the previous films, the film's final sequence shows a character crying - in this case, the judge - but the final image replicates the iconic chewing-gum poster of Valentine, but this time with real emotion showing on her face." , "title" : "Three Colors: Red" , "question_id" : "80becb22-908d-84bc-3a5f-00b620d551bc" , "question" : "What was the profession of the dog's owner?" , "answers" : [ "Retired Judge" ] , "no_answer" : false } , "truncated_cells" : [ ] } ] , "num_rows_total" : 5247 , "num_rows_per_page" : 100 , "partial" : false } If the result has partial: true it means that the search couldn’t be run on the full dataset because it’s too big. Indeed, the indexing for /search can be partial if the dataset is bigger than 5GB. In that case, it only uses the first 5GB. Truncated responses Unlike /first-rows , there is currently no truncation in /search . The truncated_cells field is still there but is always empty. < > Update on GitHub ← Download slices of rows Filter rows in a dataset → Search text in a dataset Truncated responses |
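Building on the /search query parameters described above, here is a hedged sketch that pages through all matches for a query by advancing offset until num_rows_total is reached (the dataset is the same as in the example; API_TOKEN is assumed to be defined, as in the request example earlier).

import requests

API_URL = "https://datasets-server.huggingface.co/search"
headers = {"Authorization": f"Bearer {API_TOKEN}"}  # API_TOKEN as in the example above
params = {"dataset": "ibm/duorc", "config": "SelfRC", "split": "train", "query": "dog", "length": 100}

offset, total = 0, None
while total is None or offset < total:
    response = requests.get(API_URL, params={**params, "offset": offset}, headers=headers)
    data = response.json()
    total = data["num_rows_total"]
    for row in data["rows"]:
        print(row["row_idx"])
    offset += params["length"]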
Metadata_Parsing.txt | Metadata Parsing Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Safetensors documentation Metadata Parsing Safetensors 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.5.0-rc.0 v0.3.2 v0.2.9 EN Getting started 🤗 Safetensors Speed Comparison Tensor Sharing in Pytorch Metadata Parsing Convert weights to safetensors API Torch API Tensorflow API PaddlePaddle API Flax API Numpy API You are viewing main version, which requires installation from source . If you'd like regular pip install, checkout the latest stable version ( v0.5.0-rc.0 ). Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Metadata Parsing Given the simplicity of the format, it’s very simple and efficient to fetch and parse metadata about Safetensors weights – i.e. the list of tensors, their types, and their shapes or numbers of parameters – using small (Range) HTTP requests . This parsing has been implemented in JS in huggingface.js (sample code follows below), but it would be similar in any language. Example use case There can be many potential use cases. For instance, we use it on the HuggingFace Hub to display info about models which have safetensors weights: Usage http javascript python From 🤗 Hub , you can get metadata of a model with HTTP range requests instead of downloading the entire safetensors file with all the weights. In this example python script below (you can use any language that has HTTP requests support), we are parsing metadata of gpt2 . Copied import requests # pip install requests import struct def parse_single_file ( url ): # Fetch the first 8 bytes of the file headers = { 'Range' : 'bytes=0-7' } response = requests.get(url, headers=headers) # Interpret the bytes as a little-endian unsigned 64-bit integer length_of_header = struct.unpack( '<Q' , response.content)[ 0 ] # Fetch length_of_header bytes starting from the 9th byte headers = { 'Range' : f'bytes=8- { 7 + length_of_header} ' } response = requests.get(url, headers=headers) # Interpret the response as a JSON object header = response.json() return header url = "https://huggingface.co/gpt2/resolve/main/model.safetensors" header = parse_single_file(url) print (header) # { # "__metadata__": { "format": "pt" }, # "h.10.ln_1.weight": { # "dtype": "F32", # "shape": [768], # "data_offsets": [223154176, 223157248] # }, # ... # } Example output For instance, here are the number of params per dtype for a few models on the HuggingFace Hub. Also see this issue for more examples of usage. 
model                           safetensors   params
gpt2                            single-file   { 'F32' => 137022720 }
roberta-base                    single-file   { 'F32' => 124697433, 'I64' => 514 }
Jean-Baptiste/camembert-ner     single-file   { 'F32' => 110035205, 'I64' => 514 }
roberta-large                   single-file   { 'F32' => 355412057, 'I64' => 514 }
distilbert-base-german-cased    single-file   { 'F32' => 67431550 }
EleutherAI/gpt-neox-20b         sharded       { 'F16' => 20554568208, 'U8' => 184549376 }
bigscience/bloom-560m           single-file   { 'F16' => 559214592 }
bigscience/bloom                sharded       { 'BF16' => 176247271424 }
bigscience/bloom-3b             single-file   { 'F16' => 3002557440 }
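The per-dtype parameter counts in the table above can be reproduced from the parsed header: skip the __metadata__ entry and sum the product of each tensor's shape, grouped by dtype. A minimal sketch reusing the parse_single_file helper defined earlier:

import math
from collections import defaultdict

def params_per_dtype(header):
    counts = defaultdict(int)
    for name, info in header.items():
        if name == "__metadata__":
            continue
        # Number of elements of a tensor is the product of its shape
        counts[info["dtype"]] += math.prod(info["shape"])
    return dict(counts)

header = parse_single_file("https://huggingface.co/gpt2/resolve/main/model.safetensors")
print(params_per_dtype(header))  # {'F32': 137022720}, matching the table above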
Differences_between_Dataset_and_IterableDataset_2e.txt | Differences between Dataset and IterableDataset Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Differences between Dataset and IterableDataset Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Differences between Dataset and IterableDataset There are two types of dataset objects, a Dataset and an IterableDataset . Whichever type of dataset you choose to use or create depends on the size of the dataset. In general, an IterableDataset is ideal for big datasets (think hundreds of GBs!) due to its lazy behavior and speed advantages, while a Dataset is great for everything else. This page will compare the differences between a Dataset and an IterableDataset to help you pick the right dataset object for you. Downloading and streaming When you have a regular Dataset , you can access it using my_dataset[0] . This provides random access to the rows. Such datasets are also called “map-style” datasets. For example you can download ImageNet-1k like this and access any row: Copied from datasets import load_dataset imagenet = load_dataset( "imagenet-1k" , split= "train" ) # downloads the full dataset print (imagenet[ 0 ]) But one caveat is that you must have the entire dataset stored on your disk or in memory, which blocks you from accessing datasets bigger than the disk. 
Because it can become inconvenient for big datasets, there exists another type of dataset, the IterableDataset . When you have an IterableDataset , you can access it using a for loop to load the data progressively as you iterate over the dataset. This way, only a small fraction of examples is loaded in memory, and you don’t write anything on disk. For example, you can stream the ImageNet-1k dataset without downloading it on disk: Copied from datasets import load_dataset imagenet = load_dataset( "imagenet-1k" , split= "train" , streaming= True ) # will start loading the data when iterated over for example in imagenet: print (example) break Streaming can read online data without writing any file to disk. For example, you can stream datasets made out of multiple shards, each of which is hundreds of gigabytes like C4 , OSCAR or LAION-2B . Learn more about how to stream a dataset in the Dataset Streaming Guide . This is not the only difference though, because the “lazy” behavior of an IterableDataset is also present when it comes to dataset creation and processing. Creating map-style datasets and iterable datasets You can create a Dataset using lists or dictionaries, and the data is entirely converted to Arrow so you can easily access any row: Copied my_dataset = Dataset.from_dict({ "col_1" : [ 0 , 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 ]}) print (my_dataset[ 0 ]) To create an IterableDataset on the other hand, you must provide a “lazy” way to load the data. In Python, we generally use generator functions. These functions yield one example at a time, which means you can’t access a row by slicing it like a regular Dataset : Copied def my_generator ( n ): for i in range (n): yield { "col_1" : i} my_iterable_dataset = IterableDataset.from_generator(my_generator, gen_kwargs={ "n" : 10 }) for example in my_iterable_dataset: print (example) break Loading local files entirely and progressively It is possible to convert local or remote data files to an Arrow Dataset using load_dataset() : Copied data_files = { "train" : [ "path/to/data.csv" ]} my_dataset = load_dataset( "csv" , data_files=data_files, split= "train" ) print (my_dataset[ 0 ]) However, this requires a conversion step from CSV to Arrow format, which takes time and disk space if your dataset is big. To save disk space and skip the conversion step, you can define an IterableDataset by streaming from the local files directly. This way, the data is read progressively from the local files as you iterate over the dataset: Copied data_files = { "train" : [ "path/to/data.csv" ]} my_iterable_dataset = load_dataset( "csv" , data_files=data_files, split= "train" , streaming= True ) for example in my_iterable_dataset: # this reads the CSV file progressively as you iterate over the dataset print (example) break Many file formats are supported, like CSV, JSONL, and Parquet, as well as image and audio files. You can find more information in the corresponding guides for loading tabular , text , vision , and audio datasets. Eager data processing and lazy data processing When you process a Dataset object using Dataset.map() , the entire dataset is processed immediately and returned. This is similar to how pandas works for example. Copied my_dataset = my_dataset. map (process_fn) # process_fn is applied on all the examples of the dataset print (my_dataset[ 0 ]) On the other hand, due to the “lazy” nature of an IterableDataset , calling IterableDataset.map() does not apply your map function over the full dataset. Instead, your map function is applied on-the-fly. 
Because of that, you can chain multiple processing steps and they will all run at once when you start iterating over the dataset: Copied my_iterable_dataset = my_iterable_dataset. map (process_fn_1) my_iterable_dataset = my_iterable_dataset. filter (filter_fn) my_iterable_dataset = my_iterable_dataset. map (process_fn_2) # process_fn_1, filter_fn and process_fn_2 are applied on-the-fly when iterating over the dataset for example in my_iterable_dataset: print (example) break Exact and fast approximate shuffling When you shuffle a Dataset using Dataset.shuffle() , you apply an exact shuffling of the dataset. It works by taking a list of indices [0, 1, 2, ... len(my_dataset) - 1] and shuffling this list. Then, accessing my_dataset[0] returns the row and index defined by the first element of the indices mapping that has been shuffled: Copied my_dataset = my_dataset.shuffle(seed= 42 ) print (my_dataset[ 0 ]) Since we don’t have random access to the rows in the case of an IterableDataset , we can’t use a shuffled list of indices and access a row at an arbitrary position. This prevents the use of exact shuffling. Instead, a fast approximate shuffling is used in IterableDataset.shuffle() . It uses a shuffle buffer to sample random examples iteratively from the dataset. Since the dataset is still read iteratively, it provides excellent speed performance: Copied my_iterable_dataset = my_iterable_dataset.shuffle(seed= 42 , buffer_size= 100 ) for example in my_iterable_dataset: print (example) break But using a shuffle buffer is not enough to provide a satisfactory shuffling for machine learning model training. So IterableDataset.shuffle() also shuffles the dataset shards if your dataset is made of multiple files or sources: Copied # Stream from the internet my_iterable_dataset = load_dataset( "deepmind/code_contests" , split= "train" , streaming= True ) my_iterable_dataset.num_shards # 39 # Stream from local files data_files = { "train" : [ f"path/to/data_ {i} .csv" for i in range ( 1024 )]} my_iterable_dataset = load_dataset( "csv" , data_files=data_files, split= "train" , streaming= True ) my_iterable_dataset.num_shards # 1024 # From a generator function def my_generator ( n, sources ): for source in sources: for example_id_for_current_source in range (n): yield { "example_id" : f" {source} _ {example_id_for_current_source} " } gen_kwargs = { "n" : 10 , "sources" : [ f"path/to/data_ {i} " for i in range ( 1024 )]} my_iterable_dataset = IterableDataset.from_generator(my_generator, gen_kwargs=gen_kwargs) my_iterable_dataset.num_shards # 1024 Speed differences Regular Dataset objects are based on Arrow which provides fast random access to the rows. Thanks to memory mapping and the fact that Arrow is an in-memory format, reading data from disk doesn’t do expensive system calls and deserialization. It provides even faster data loading when iterating using a for loop by iterating on contiguous Arrow record batches. However as soon as your Dataset has an indices mapping (via Dataset.shuffle() for example), the speed can become 10x slower. This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren’t reading contiguous chunks of data anymore. To restore the speed, you’d need to rewrite the entire dataset on your disk again using Dataset.flatten_indices() , which removes the indices mapping. 
This may take a lot of time depending on the size of your dataset though: Copied my_dataset[ 0 ] # fast my_dataset = my_dataset.shuffle(seed= 42 ) my_dataset[ 0 ] # up to 10x slower my_dataset = my_dataset.flatten_indices() # rewrite the shuffled dataset on disk as contiguous chunks of data my_dataset[ 0 ] # fast again In this case, we recommend switching to an IterableDataset and leveraging its fast approximate shuffling method IterableDataset.shuffle() . It only shuffles the shards order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal. You can also reshuffle the dataset easily: Copied for example in enumerate (my_iterable_dataset): # fast pass shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed= 42 , buffer_size= 100 ) for example in enumerate (shuffled_iterable_dataset): # as fast as before pass shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed= 1337 , buffer_size= 100 ) # reshuffling using another seed is instantaneous for example in enumerate (shuffled_iterable_dataset): # still as fast as before pass If you’re using your dataset on multiple epochs, the effective seed to shuffle the shards order in the shuffle buffer is seed + epoch . It makes it easy to reshuffle a dataset between epochs: Copied for epoch in range (n_epochs): my_iterable_dataset.set_epoch(epoch) for example in my_iterable_dataset: # fast + reshuffled at each epoch using `effective_seed = seed + epoch` pass To restart the iteration of a map-style dataset, you can simply skip the first examples: Copied my_dataset = my_dataset.select( range (start_index, len (dataset))) But if you use a DataLoader with a Sampler , you should instead save the state of your sampler (you might have written a custom sampler that allows resuming). On the other hand, iterable datasets don’t provide random access to a specific example index to resume from. But you can use IterableDataset.state_dict() and IterableDataset.load_state_dict() to resume from a checkpoint instead, similarly to what you can do for models and optimizers: Copied >>> iterable_dataset = Dataset.from_dict({ "a" : range ( 6 )}).to_iterable_dataset(num_shards= 3 ) >>> # save in the middle of training >>> state_dict = iterable_dataset.state_dict() >>> # and resume later >>> iterable_dataset.load_state_dict(state_dict) Under the hood, the iterable dataset keeps track of the current shard being read and the example index in the current shard and it stores this info in the state_dict . To resume from a checkpoint, the dataset skips all the shards that were previously read to restart from the current shard. Then it reads the shard and skips examples until it reaches the exact example from the checkpoint. Therefore restarting a dataset is quite fast, since it will not re-read the shards that have already been iterated on. Still, resuming a dataset is generally not instantaneous since it has to restart reading from the beginning of the current shard and skip examples until it reaches the checkpoint location. This can be used with the StatefulDataLoader from torchdata , see streaming with a PyTorch DataLoader . 
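For completeness, here is a minimal sketch of the StatefulDataLoader pattern mentioned above, assuming torchdata is installed; the batch size and dataset contents are arbitrary.

```python
# Checkpoint and resume iteration through the dataloader rather than the dataset.
from datasets import Dataset
from torchdata.stateful_dataloader import StatefulDataLoader

iterable_dataset = Dataset.from_dict({"a": range(64)}).to_iterable_dataset(num_shards=4)
dataloader = StatefulDataLoader(iterable_dataset, batch_size=8)

for step, batch in enumerate(dataloader):
    if step == 2:
        state_dict = dataloader.state_dict()  # save in the middle of an epoch
        break

# Later: rebuild the dataloader and resume where iteration stopped.
dataloader = StatefulDataLoader(iterable_dataset, batch_size=8)
dataloader.load_state_dict(state_dict)
for batch in dataloader:  # continues from the checkpoint
    pass
```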
Switch from map-style to iterable If you want to benefit from the “lazy” behavior of an IterableDataset or their speed advantages, you can switch your map-style Dataset to an IterableDataset : Copied my_iterable_dataset = my_dataset.to_iterable_dataset() If you want to shuffle your dataset or use it with a PyTorch DataLoader , we recommend generating a sharded IterableDataset : Copied my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards= 1024 ) my_iterable_dataset.num_shards # 1024 < > Update on GitHub ← The cache Dataset features → Differences between Dataset and Iterable Dataset Downloading and streaming Creating map-style datasets and iterable datasets Loading local files entirely and progressively Eager data processing and lazy data processing Exact and fast approximate shuffling Speed differences Switch from map-style to iterable |
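As a small illustration of why sharding matters here, the sketch below (assuming PyTorch is installed, with arbitrary sizes) passes the sharded IterableDataset to a DataLoader with several workers, so each worker streams from its own subset of shards.

```python
from datasets import Dataset
from torch.utils.data import DataLoader

my_dataset = Dataset.from_dict({"a": range(4096)})
my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards=1024)
# Each of the 4 workers is assigned a disjoint subset of the 1024 shards,
# so examples are read in parallel without duplication.
dataloader = DataLoader(my_iterable_dataset, batch_size=32, num_workers=4)
for batch in dataloader:
    break
```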
Inferentia_Exporter.txt | Inferentia Exporter Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up AWS Trainium & Inferentia documentation Inferentia Exporter AWS Trainium & Inferentia 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Optimum Neuron 🤗 Optimum Neuron Installation Quickstart Optimum Containers Training Tutorials Notebooks Fine-tune BERT for Text Classification on AWS Trainium Fine-tune Llama 3 8B on AWS Trainium Fine-tune Llama 3 8B on with LoRA and the SFTTrainer Inference Tutorials Notebooks Create your own chatbot with llama-2-13B on AWS Inferentia Sentence Transformers on AWS Inferentia Generate images with Stable Diffusion models on AWS Inferentia How-To Guides Set up AWS Trainium instance Training and Deployment using Amazon Sagemaker Neuron model cache Fine-tune Transformers with AWS Trainium Distributed Training Export a model to Inferentia Inference pipelines with AWS Neuron NeuronX Text-generation-inference for AWS inferentia2 Benchmarks Mistral Small on AWS Inferentia2 Llama-3.1 8B on AWS Inferentia2 Contribute Add support for a new model architecture Reference Neuron Trainer Neuron Distributed Supported Architectures Neuron Exporter Neuron Models Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Inferentia Exporter You can export a PyTorch model to Neuron with 🤗 Optimum to run inference on AWS Inferntia 1 and Inferentia 2 . Export functions There is an export function for each generation of the Inferentia accelerator, export_neuron for INF1 and export_neuronx on INF2, but you will be able to use directly the export function export , which will select the proper exporting function according to the environment. Besides, you can check if the exported model is valid via validate_model_outputs , which compares the compiled model’s output on Neuron devices to the PyTorch model’s output on CPU. ← Supported Architectures Neuron Models → Inferentia Exporter Export functions |
User_access_tokens.txt | User access tokens Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation User access tokens Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security User Access Tokens Two-Factor Authentication Git over SSH Signing Commits with GPG Single Sign-On (SSO) Advanced Access Control (Resource Groups) Malware Scanning Pickle Scanning Secrets Scanning Protect AI Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started User access tokens What are User Access Tokens? User Access Tokens are the preferred way to authenticate an application or notebook to Hugging Face services. You can manage your access tokens in your settings . Access tokens allow applications and notebooks to perform specific actions specified by the scope of the roles shown in the following: fine-grained : tokens with this role can be used to provide fine-grained access to specific resources, such as a specific model or models in a specific organization. This type of token is useful in production environments, as you can use your own token without sharing access to all your resources. read : tokens with this role can only be used to provide read access to repositories you could read. That includes public and private repositories that you, or an organization you’re a member of, own. Use this role if you only need to read content from the Hugging Face Hub (e.g. when downloading private models or doing inference). write : tokens with this role additionally grant write access to the repositories you have write access to. Use this token if you need to create or push content to a repository (e.g., when training a model or modifying a model card). 
Note that Organization API Tokens have been deprecated: If you are a member of an organization with read/write/admin role, then your User Access Tokens will be able to read/write the resources according to the token permission (read/write) and organization membership (read/write/admin). How to manage User Access Tokens? To create an access token, go to your settings, then click on the Access Tokens tab . Click on the New token button to create a new User Access Token. Select a role and a name for your token and voilà - you’re ready to go! You can delete and refresh User Access Tokens by clicking on the Manage button. How to use User Access Tokens? There are plenty of ways to use a User Access Token to access the Hugging Face Hub, granting you the flexibility you need to build awesome apps on top of it. User Access Tokens can be: used in place of a password to access the Hugging Face Hub with git or with basic authentication. passed as a bearer token when calling the Inference API . used in the Hugging Face Python libraries, such as transformers or datasets : Copied from transformers import AutoModel access_token = "hf_..." model = AutoModel.from_pretrained( "private/model" , token=access_token) Try not to leak your token! Though you can always rotate it, anyone will be able to read or write your private repos in the meantime which is 💩 Best practices We recommend you create one access token per app or usage. For instance, you could have a separate token for: A local machine. A Colab notebook. An awesome custom inference server. This way, you can invalidate one token without impacting your other usages. We also recommend using only fine-grained tokens for production usage. The impact, if leaked, will be reduced, and they can be shared among your organization without impacting your account. For example, if your production application needs read access to a gated model, a member of your organization can request access to the model and then create a fine-grained token with read access to that model. This token can then be used in your production application without giving it access to all your private models. < > Update on GitHub ← Security Two-Factor Authentication → User access tokens What are User Access Tokens? How to manage User Access Tokens? How to use User Access Tokens? Best practices |
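As a small addition to the usage examples above, here is a sketch of the same idea with the huggingface_hub library; the token values are placeholders.

```python
# Authenticate with a User Access Token instead of a password.
from huggingface_hub import HfApi, login

# Option 1: store the token locally so libraries like transformers pick it up.
login(token="hf_...")

# Option 2: pass the token explicitly to a single client.
api = HfApi(token="hf_...")
print(api.whoami())  # confirms which account the token resolves to
```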
Interface__WhoAmIApp.txt | Interface: WhoAmIApp Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: WhoAmIApp Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: WhoAmIApp Properties id • id : string Defined in hub/src/lib/who-am-i.ts:41 name • name : string Defined in hub/src/lib/who-am-i.ts:43 scope • Optional scope : Object Type declaration Name Type entities string [] role "admin" | "write" | "contributor" | "read" Defined in hub/src/lib/who-am-i.ts:44 type • type : "app" Defined in hub/src/lib/who-am-i.ts:42 < > Update on GitHub ← UserInfo WhoAmIOrg → Interface: Who AmI App Properties id Defined in name Defined in scope Type declaration Defined in type Defined in |
Generation_with_LLMs.txt | Generation with LLMs Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Generation with LLMs Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Generation with LLMs LLMs, or Large Language Models, are the key component behind text generation. In a nutshell, they consist of large pretrained transformer models trained to predict the next word (or, more precisely, token) given some input text. Since they predict one token at a time, you need to do something more elaborate to generate new sentences other than just calling the model — you need to do autoregressive generation. Autoregressive generation is the inference-time procedure of iteratively calling a model with its own generated outputs, given a few initial inputs. In 🤗 Transformers, this is handled by the generate() method, which is available to all models with generative capabilities. This tutorial will show you how to: Generate text with an LLM Avoid common pitfalls Next steps to help you get the most out of your LLM Before you begin, make sure you have all the necessary libraries installed: Copied pip install transformers bitsandbytes>=0.39.0 -q Generate text A language model trained for causal language modeling takes a sequence of text tokens as input and returns the probability distribution for the next token. "Forward pass of an LLM" A critical aspect of autoregressive generation with LLMs is how to select the next token from this probability distribution. Anything goes in this step as long as you end up with a token for the next iteration. This means it can be as simple as selecting the most likely token from the probability distribution or as complex as applying a dozen transformations before sampling from the resulting distribution. "Autoregressive generation iteratively selects the next token from a probability distribution to generate text" The process depicted above is repeated iteratively until some stopping condition is reached. Ideally, the stopping condition is dictated by the model, which should learn when to output an end-of-sequence ( EOS ) token. If this is not the case, generation stops when some predefined maximum length is reached. Properly setting up the token selection step and the stopping condition is essential to make your model behave as you’d expect on your task. 
That is why we have a GenerationConfig file associated with each model, which contains a good default generative parameterization and is loaded alongside your model. Let’s talk code! If you’re interested in basic LLM usage, our high-level Pipeline interface is a great starting point. However, LLMs often require advanced features like quantization and fine control of the token selection step, which is best done through generate() . Autoregressive generation with LLMs is also resource-intensive and should be executed on a GPU for adequate throughput. First, you need to load the model. Copied >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-v0.1" , device_map= "auto" , load_in_4bit= True ... ) You’ll notice two flags in the from_pretrained call: device_map ensures the model is moved to your GPU(s) load_in_4bit applies 4-bit dynamic quantization to massively reduce the resource requirements There are other ways to initialize a model, but this is a good baseline to begin with an LLM. Next, you need to preprocess your text input with a tokenizer . Copied >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "mistralai/Mistral-7B-v0.1" , padding_side= "left" ) >>> model_inputs = tokenizer([ "A list of colors: red, blue" ], return_tensors= "pt" ).to( "cuda" ) The model_inputs variable holds the tokenized text input, as well as the attention mask. While generate() does its best effort to infer the attention mask when it is not passed, we recommend passing it whenever possible for optimal results. After tokenizing the inputs, you can call the generate() method to returns the generated tokens. The generated tokens then should be converted to text before printing. Copied >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] 'A list of colors: red, blue, green, yellow, orange, purple, pink,' Finally, you don’t need to do it one sequence at a time! You can batch your inputs, which will greatly improve the throughput at a small latency and memory cost. All you need to do is to make sure you pad your inputs properly (more on that below). Copied >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default >>> model_inputs = tokenizer( ... [ "A list of colors: red, blue" , "Portugal is" ], return_tensors= "pt" , padding= True ... ).to( "cuda" ) >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True ) [ 'A list of colors: red, blue, green, yellow, orange, purple, pink,' , 'Portugal is a country in southwestern Europe, on the Iber' ] And that’s it! In a few lines of code, you can harness the power of an LLM. Common pitfalls There are many generation strategies , and sometimes the default values may not be appropriate for your use case. If your outputs aren’t aligned with what you’re expecting, we’ve created a list of the most common pitfalls and how to avoid them. Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "mistralai/Mistral-7B-v0.1" ) >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-v0.1" , device_map= "auto" , load_in_4bit= True ... 
) Generated output is too short/long If not specified in the GenerationConfig file, generate returns up to 20 tokens by default. We highly recommend manually setting max_new_tokens in your generate call to control the maximum number of new tokens it can return. Keep in mind LLMs (more precisely, decoder-only models ) also return the input prompt as part of the output. Copied >>> model_inputs = tokenizer([ "A sequence of numbers: 1, 2" ], return_tensors= "pt" ).to( "cuda" ) >>> # By default, the output will contain up to 20 tokens >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] 'A sequence of numbers: 1, 2, 3, 4, 5' >>> # Setting `max_new_tokens` allows you to control the maximum length >>> generated_ids = model.generate(**model_inputs, max_new_tokens= 50 ) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] 'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,' Incorrect generation mode By default, and unless specified in the GenerationConfig file, generate selects the most likely token at each iteration (greedy decoding). Depending on your task, this may be undesirable; creative tasks like chatbots or writing an essay benefit from sampling. On the other hand, input-grounded tasks like audio transcription or translation benefit from greedy decoding. Enable sampling with do_sample=True , and you can learn more about this topic in this blog post . Copied >>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility >>> from transformers import set_seed >>> set_seed( 42 ) >>> model_inputs = tokenizer([ "I am a cat." ], return_tensors= "pt" ).to( "cuda" ) >>> # LLM + greedy decoding = repetitive, boring output >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] 'I am a cat. I am a cat. I am a cat. I am a cat' >>> # With sampling, the output becomes more creative! >>> generated_ids = model.generate(**model_inputs, do_sample= True ) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] 'I am a cat. Specifically, I am an indoor-only cat. I' Wrong padding side LLMs are decoder-only architectures, meaning they continue to iterate on your input prompt. If your inputs do not have the same length, they need to be padded. Since LLMs are not trained to continue from pad tokens, your input needs to be left-padded. Make sure you also don’t forget to pass the attention mask to generate! Copied >>> # The tokenizer initialized above has right-padding active by default: the 1st sequence, >>> # which is shorter, has padding on the right side. Generation fails to capture the logic. >>> model_inputs = tokenizer( ... [ "1, 2, 3" , "A, B, C, D, E" ], padding= True , return_tensors= "pt" ... ).to( "cuda" ) >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] '1, 2, 33333333333' >>> # With left-padding, it works as expected! >>> tokenizer = AutoTokenizer.from_pretrained( "mistralai/Mistral-7B-v0.1" , padding_side= "left" ) >>> tokenizer.pad_token = tokenizer.eos_token # Most LLMs don't have a pad token by default >>> model_inputs = tokenizer( ... [ "1, 2, 3" , "A, B, C, D, E" ], padding= True , return_tensors= "pt" ... 
).to( "cuda" ) >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens= True )[ 0 ] '1, 2, 3, 4, 5, 6,' Wrong prompt Some models and tasks expect a certain input prompt format to work properly. When this format is not applied, you will get a silent performance degradation: the model kinda works, but not as well as if you were following the expected prompt. More information about prompting, including which models and tasks need to be careful, is available in this guide . Let’s see an example with a chat LLM, which makes use of chat templating : Copied >>> tokenizer = AutoTokenizer.from_pretrained( "HuggingFaceH4/zephyr-7b-alpha" ) >>> model = AutoModelForCausalLM.from_pretrained( ... "HuggingFaceH4/zephyr-7b-alpha" , device_map= "auto" , load_in_4bit= True ... ) >>> set_seed( 0 ) >>> prompt = """How many helicopters can a human eat in one sitting? Reply as a thug.""" >>> model_inputs = tokenizer([prompt], return_tensors= "pt" ).to( "cuda" ) >>> input_length = model_inputs.input_ids.shape[ 1 ] >>> generated_ids = model.generate(**model_inputs, max_new_tokens= 20 ) >>> print (tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens= True )[ 0 ]) "I'm not a thug, but i can tell you that a human cannot eat" >>> # Oh no, it did not follow our instruction to reply as a thug! Let's see what happens when we write >>> # a better prompt and use the right template for this model (through `tokenizer.apply_chat_template`) >>> set_seed( 0 ) >>> messages = [ ... { ... "role" : "system" , ... "content" : "You are a friendly chatbot who always responds in the style of a thug" , ... }, ... { "role" : "user" , "content" : "How many helicopters can a human eat in one sitting?" }, ... ] >>> model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt= True , return_tensors= "pt" ).to( "cuda" ) >>> input_length = model_inputs.shape[ 1 ] >>> generated_ids = model.generate(model_inputs, do_sample= True , max_new_tokens= 20 ) >>> print (tokenizer.batch_decode(generated_ids[:, input_length:], skip_special_tokens= True )[ 0 ]) 'None, you thug. How bout you try to focus on more useful questions?' >>> # As we can see, it followed a proper thug style 😎 Further resources While the autoregressive generation process is relatively straightforward, making the most out of your LLM can be a challenging endeavor because there are many moving parts. For your next steps to help you dive deeper into LLM usage and understanding: Advanced generate usage Guide on how to control different generation methods , how to set up the generation configuration file, and how to stream the output; Accelerating text generation ; Prompt templates for chat LLMs ; Prompt design guide ; API reference on GenerationConfig , generate() , and generate-related classes . Most of the classes, including the logits processors, have usage examples! LLM leaderboards Open LLM Leaderboard , which focuses on the quality of the open-source models; Open LLM-Perf Leaderboard , which focuses on LLM throughput. Latency, throughput and memory utilization Guide on how to optimize LLMs for speed and memory ; Guide on quantization such as bitsandbytes and autogptq, which shows you how to drastically reduce your memory requirements. Related libraries optimum , an extension of 🤗 Transformers that optimizes for specific hardware devices; outlines , a library where you can constrain text generation (e.g. 
to generate JSON files); SynCode , a library for context-free grammar guided generation (e.g. JSON, SQL, Python); text-generation-inference , a production-ready server for LLMs; text-generation-webui , a UI for text generation; logits-processor-zoo , containing additional options to control text generation with 🤗 Transformers. See our related blog post . < > Update on GitHub ← Agents, supercharged - Multi-agents, External tools, and more Chatting with Transformers → Generation with LL Ms Generate text Common pitfalls Generated output is too short/long Incorrect generation mode Wrong padding side Wrong prompt Further resources Advanced generate usage LL M leaderboards Latency, throughput and memory utilization Related libraries |
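Tying the earlier examples together, here is a minimal sketch of bundling decoding settings in a GenerationConfig instead of passing individual flags on every call. It reuses the model, tokenizer and model_inputs from the snippets above, and the specific values are arbitrary.

```python
from transformers import GenerationConfig

# Bundle the decoding settings once and reuse them across generate() calls.
generation_config = GenerationConfig(
    max_new_tokens=50,          # avoid the too-short default of 20 tokens
    do_sample=True,             # sampling for more creative outputs
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
generated_ids = model.generate(**model_inputs, generation_config=generation_config)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```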
Adding_a_Sign-In_with_HF_button_to_your_Space.txt | Adding a Sign-In with HF button to your Space Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Adding a Sign-In with HF button to your Space Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Adding a Sign-In with HF button to your Space You can enable a built-in sign-in flow in your Space by seamlessly creating and associating an OAuth/OpenID connect app so users can log in with their HF account. This enables new use cases for your Space. For instance, when combined with Persistent Storage , a generative AI Space could allow users to log in to access their previous generations, only accessible to them. This guide will take you through the process of integrating a Sign-In with HF button into any Space. If you’re seeking a fast and simple method to implement this in a Gradio Space, take a look at its built-in integration . You can also use the HF OAuth flow to create a “Sign in with HF” flow in any website or App, outside of Spaces. Read our general OAuth page . Create an OAuth app All you need to do is add hf_oauth: true to your Space’s metadata inside your README.md file. Here’s an example of metadata for a Gradio Space: Copied title: Gradio Oauth Test emoji: 🏆 colorFrom: pink colorTo: pink sdk: gradio sdk_version: 3.40 .0 python_version: 3.10 .6 app_file: app.py hf_oauth: true # optional, default duration is 8 hours/480 minutes. Max duration is 30 days/43200 minutes. hf_oauth_expiration_minutes: 480 # optional, see "Scopes" below. "openid profile" is always included. 
hf_oauth_scopes: - read-repos - write-repos - manage-repos - inference-api # optional, restrict access to members of specific organizations hf_oauth_authorized_org: ORG_NAME hf_oauth_authorized_org: - ORG_NAME1 - ORG_NAME2 You can check out the configuration reference docs for more information. This will add the following environment variables to your space: OAUTH_CLIENT_ID : the client ID of your OAuth app (public) OAUTH_CLIENT_SECRET : the client secret of your OAuth app OAUTH_SCOPES : scopes accessible by your OAuth app. OPENID_PROVIDER_URL : The URL of the OpenID provider. The OpenID metadata will be available at {OPENID_PROVIDER_URL}/.well-known/openid-configuration . As for any other environment variable, you can use them in your code by using os.getenv("OAUTH_CLIENT_ID") , for example. Redirect URLs You can use any redirect URL you want, as long as it targets your Space. Note that SPACE_HOST is available as an environment variable. For example, you can use https://{SPACE_HOST}/login/callback as a redirect URI. Scopes The following scopes are always included for Spaces: openid : Get the ID token in addition to the access token. profile : Get the user’s profile information (username, avatar, etc.) Those scopes are optional and can be added by setting hf_oauth_scopes in your Space’s metadata: email : Get the user’s email address. read-billing : Know whether the user has a payment method set up. read-repos : Get read access to the user’s personal repos. write-repos : Get write/read access to the user’s personal repos. manage-repos : Get full access to the user’s personal repos. Also grants repo creation and deletion. inference-api : Get access to the Inference API , you will be able to make inference requests on behalf of the user. write-discussions : Open discussions and Pull Requests on behalf of the user as well as interact with discussions (including reactions, posting/editing comments, closing discussions, …). To open Pull Requests on private repos, you need to request the read-repos scope as well. Accessing organization resources By default, the oauth app does not need to access organization resources. But some scopes like read-repos or read-billing apply to organizations as well. The user can select which organizations to grant access to when authorizing the app. If you require access to a specific organization, you can add orgIds=ORG_ID as a query parameter to the OAuth authorization URL. You have to replace ORG_ID with the organization ID, which is available in the organizations.sub field of the userinfo response. Adding the button to your Space You now have all the information to add a “Sign-in with HF” button to your Space. Some libraries ( Python , NodeJS ) can help you implement the OpenID/OAuth protocol. Gradio and huggingface.js also provide built-in support , making implementing the Sign-in with HF button a breeze; you can check out the associated guides with gradio and with huggingface.js . Basically, you need to: Redirect the user to https://huggingface.co/oauth/authorize?redirect_uri={REDIRECT_URI}&scope=openid%20profile&client_id={CLIENT_ID}&state={STATE} , where STATE is a random string that you will need to verify later. Handle the callback on /auth/callback or /login/callback (or your own custom callback URL) and verify the state parameter. 
Use the code query parameter to get an access token and id token from https://huggingface.co/oauth/token (POST request with client_id , code , grant_type=authorization_code and redirect_uri as form data, and with Authorization: Basic {base64(client_id:client_secret)} as a header). You should use target=_blank on the button to open the sign-in page in a new tab, unless you run the space outside its iframe . Otherwise, you might encounter issues with cookies on some browsers. Examples: Gradio test app Hugging Chat (NodeJS/SvelteKit) Inference Widgets (Auth.js/SvelteKit) , uses the inference-api scope to make inference requests on behalf of the user. Client-Side in a Static Space (huggingface.js) - very simple JavaScript example. JS Code example: Copied import { oauthLoginUrl, oauthHandleRedirectIfPresent } from "@huggingface/hub" ; const oauthResult = await oauthHandleRedirectIfPresent (); if (!oauthResult) { // If the user is not logged in, redirect to the login page window . location . href = await oauthLoginUrl (); } // You can use oauthResult.accessToken, oauthResult.userInfo among other things console . log (oauthResult); < > Update on GitHub ← Spaces Configuration Reference Spaces Changelog → Adding a Sign- In with H F button to your Space Create an O Auth app Redirect UR Ls Scopes Accessing organization resources Adding the button to your Space Examples: |
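Complementing the JS example above, here is a minimal Python sketch of the token-exchange step described earlier, using the requests library; the redirect URI and the state verification are assumed to be handled elsewhere in your app.

```python
import base64
import os
import requests

client_id = os.environ["OAUTH_CLIENT_ID"]
client_secret = os.environ["OAUTH_CLIENT_SECRET"]
basic_auth = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()

def exchange_code_for_tokens(code: str, redirect_uri: str) -> dict:
    # POST the authorization code to get the access token and id token.
    response = requests.post(
        "https://huggingface.co/oauth/token",
        headers={"Authorization": f"Basic {basic_auth}"},
        data={
            "client_id": client_id,
            "code": code,
            "grant_type": "authorization_code",
            "redirect_uri": redirect_uri,
        },
    )
    response.raise_for_status()
    return response.json()  # contains the access token and id token
```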
Interface__Credentials.txt | Interface: Credentials Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Huggingface.js documentation Interface: Credentials Huggingface.js 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN 🤗 Hugging Face JS Libraries @huggingface/inference Use Inference Endpoints API reference Classes HfInference HfInferenceEndpoint InferenceOutputError Interfaces AudioClassificationOutputValue AudioToAudioOutputValue AutomaticSpeechRecognitionOutput BaseArgs DocumentQuestionAnsweringOutput ImageClassificationOutputValue ImageSegmentationOutputValue ImageToTextOutput ObjectDetectionOutputValue Options QuestionAnsweringOutput SummarizationOutput TableQuestionAnsweringOutput TextGenerationInput TextGenerationOutput TextGenerationStreamBestOfSequence TextGenerationStreamDetails TextGenerationStreamOutput TextGenerationStreamPrefillToken TextGenerationStreamToken TokenClassificationOutputValue TranslationOutputValue VisualQuestionAnsweringOutput ZeroShotClassificationOutputValue ZeroShotImageClassificationOutputValue @huggingface/hub Interact with the Hub API Reference Classes HubApiError InvalidApiResponseFormatError Interfaces AuthInfo CachedFileInfo CachedRepoInfo CachedRevisionInfo CommitData CommitDeletedEntry CommitFile CommitInfo CommitOutput Credentials DatasetEntry FileDownloadInfoOutput HFCacheInfo LfsPathInfo ListFileEntry ModelEntry OAuthResult PathInfo RepoId SafetensorsIndexJson SafetensorsShardFileInfo SecurityFileStatus SpaceEntry SpaceResourceConfig SpaceResourceRequirement SpaceRuntime TensorInfo UserInfo WhoAmIApp WhoAmIOrg WhoAmIUser @huggingface/agent Use Agents to run multi-modal workflows from a natural language API API Reference Classes HfAgent @huggingface/space-header Use Space mini_header in your app @huggingface/gguf Parse local and remote GGUF files Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Interface: Credentials Deprecated Use AccessToken instead. Pass { accessToken: “hf …” } instead of { credentials: { accessToken: “hf …” } } Properties accessToken • accessToken : string Defined in hub/src/types/public.ts:21 < > Update on GitHub ← CommitOutput DatasetEntry → Interface: Credentials Properties access Token Defined in |
Data_Utilities.txt | Data Utilities Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up TRL documentation Data Utilities TRL 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.13.0 v0.12.2 v0.11.4 v0.10.1 v0.9.6 v0.8.6 v0.7.11 v0.6.0 v0.5.0 v0.4.7 v0.3.1 v0.2.1 v0.1.1 EN Get started TRL Installation Quickstart Get started with Command Line Interfaces (CLIs) Dataset Formats PPO Training FAQ Use Trained Models Customize the Training Understanding Logs API Trainers AlignProp BCO CPO DDPO DPO Online DPO GKD KTO Nash-MD ORPO PPO PRM Reward RLOO SFT Iterative SFT XPO Model Classes Best of N Sampling Judges Callbacks Data Utilities Text Environments Script Utilities Examples Community Tutorials Example Overview Sentiment Tuning Training with PEFT Detoxifying a Language Model Training StackLlama Learning to Use Tools Multi Adapter RLHF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Data Utilities is_conversational trl.is_conversational < source > ( example : dict ) → bool Parameters example ( dict[str, Any] ) — A single data entry of a dataset. The example can have different keys depending on the dataset type. Returns bool True if the data is in a conversational format, False otherwise. Check if the example is in a conversational format. Examples: Copied >>> example = { "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }]} >>> is_conversational(example) True >>> example = { "prompt" : "The sky is" }) >>> is_conversational(example) False apply_chat_template trl.apply_chat_template < source > ( example : dict tokenizer : PreTrainedTokenizer tools : typing.Optional[list[typing.Union[dict, typing.Callable]]] = None ) Apply a chat template to a conversational example along with the schema for a list of functions in tools . For more details, see maybe_apply_chat_template() . maybe_apply_chat_template trl.maybe_apply_chat_template < source > ( example : dict tokenizer : PreTrainedTokenizer tools : typing.Optional[list[typing.Union[dict, typing.Callable]]] = None ) → dict[str, str] Parameters example ( dict[str, list[dict[str, str]] ) — Dictionary representing a single data entry of a conversational dataset. Each data entry can have different keys depending on the dataset type. The supported dataset types are: Language modeling dataset: "messages" . Prompt-only dataset: "prompt" . Prompt-completion dataset: "prompt" and "completion" . Preference dataset: "prompt" , "chosen" , and "rejected" . Preference dataset with implicit prompt: "chosen" and "rejected" . Unpaired preference dataset: "prompt" , "completion" , and "label" . For keys "messages" , "prompt" , "chosen" , "rejected" , and "completion" , the values are lists of messages, where each message is a dictionary with keys "role" and "content" . 
tokenizer ( PreTrainedTokenizer ) — The tokenizer to apply the chat template with. tools ( Optional[list[Union[dict, Callable]]] , optional , defaults to None ) — A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect Returns dict[str, str] The formatted example with the chat template applied. If the example is in a conversational format, apply a chat template to it. Note: This function does not alter the keys, except for Language modeling dataset, where "messages" is replaced by "text" . Example: Copied >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "microsoft/Phi-3-mini-128k-instruct" ) >>> example = { ... "prompt" : [{ "role" : "user" , "content" : "What color is the sky?" }], ... "completion" : [{ "role" : "assistant" , "content" : "It is blue." }] ... } >>> apply_chat_template(example, tokenizer) { 'prompt' : '<|user|>\nWhat color is the sky?<|end|>\n<|assistant|>\n' , 'completion' : 'It is blue.<|end|>\n<|endoftext|>' } extract_prompt trl.extract_prompt < source > ( example : dict ) Extracts the shared prompt from a preference data example, where the prompt is implicit within both the chosen and rejected completions. For more details, see maybe_extract_prompt() . maybe_extract_prompt trl.maybe_extract_prompt < source > ( example : dict ) → dict[str, list] Parameters example ( dict[str, list] ) — A dictionary representing a single data entry in the preference dataset. It must contain the keys "chosen" and "rejected" , where each value is either conversational or standard ( str ). Returns dict[str, list] A dictionary containing: "prompt" : The longest common prefix between the “chosen” and “rejected” completions. "chosen" : The remainder of the “chosen” completion, with the prompt removed. "rejected" : The remainder of the “rejected” completion, with the prompt removed. Extracts the shared prompt from a preference data example, where the prompt is implicit within both the chosen and rejected completions. If the example already contains a "prompt" key, the function returns the example as is. Else, the function identifies the longest common sequence (prefix) of conversation turns between the “chosen” and “rejected” completions and extracts this as the prompt. It then removes this prompt from the respective “chosen” and “rejected” completions. Examples: Copied >>> example = { ... "chosen" : [ ... { "role" : "user" , "content" : "What color is the sky?" }, ... { "role" : "assistant" , "content" : "It is blue." } ... ], ... "rejected" : [ ... { "role" : "user" , "content" : "What color is the sky?" }, ... { "role" : "assistant" , "content" : "It is green." } ... ] ... } >>> extract_prompt(example) { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }], 'chosen' : [{ 'role' : 'assistant' , 'content' : 'It is blue.' }], 'rejected' : [{ 'role' : 'assistant' , 'content' : 'It is green.' }]} Or, with the map method of datasets.Dataset : Copied >>> from trl import extract_prompt >>> from datasets import Dataset >>> dataset_dict = { ... "chosen" : [ ... [ ... { "role" : "user" , "content" : "What color is the sky?" }, ... { "role" : "assistant" , "content" : "It is blue." }, ... ], ... [ ... { "role" : "user" , "content" : "Where is the sun?" }, ... { "role" : "assistant" , "content" : "In the sky." }, ... ], ... ], ... "rejected" : [ ... [ ... { "role" : "user" , "content" : "What color is the sky?" }, ... 
{ "role" : "assistant" , "content" : "It is green." }, ... ], ... [ ... { "role" : "user" , "content" : "Where is the sun?" }, ... { "role" : "assistant" , "content" : "In the sea." }, ... ], ... ], ... } >>> dataset = Dataset.from_dict(dataset_dict) >>> dataset = dataset. map (extract_prompt) >>> dataset[ 0 ] { 'prompt' : [{ 'role' : 'user' , 'content' : 'What color is the sky?' }], 'chosen' : [{ 'role' : 'assistant' , 'content' : 'It is blue.' }], 'rejected' : [{ 'role' : 'assistant' , 'content' : 'It is green.' }]} unpair_preference_dataset trl.unpair_preference_dataset < source > ( dataset : ~DatasetType num_proc : typing.Optional[int] = None desc : typing.Optional[str] = None ) → Dataset Parameters dataset ( Dataset or DatasetDict ) — Preference dataset to unpair. The dataset must have columns "chosen" , "rejected" and optionally "prompt" . num_proc ( Optional[int] , optional , defaults to None ) — Number of processes to use for processing the dataset. desc ( str or None , optional , defaults to None ) — Meaningful description to be displayed alongside with the progress bar while mapping examples. Returns Dataset The unpaired preference dataset. Unpair a preference dataset. Example: Copied >>> from datasets import Dataset >>> dataset_dict = { ... "prompt" : [ "The sky is" , "The sun is" ] ... "chosen" : [ " blue." , "in the sky." ], ... "rejected" : [ " green." , " in the sea." ] ... } >>> dataset = Dataset.from_dict(dataset_dict) >>> dataset = unpair_preference_dataset(dataset) >>> dataset Dataset({ features: [ 'prompt' , 'completion' , 'label' ], num_rows: 4 }) >>> dataset[ 0 ] { 'prompt' : 'The sky is' , 'completion' : ' blue.' , 'label' : True } maybe_unpair_preference_dataset trl.maybe_unpair_preference_dataset < source > ( dataset : ~DatasetType num_proc : typing.Optional[int] = None desc : typing.Optional[str] = None ) → Dataset or DatasetDict Parameters dataset ( Dataset or DatasetDict ) — Preference dataset to unpair. The dataset must have columns "chosen" , "rejected" and optionally "prompt" . num_proc ( Optional[int] , optional , defaults to None ) — Number of processes to use for processing the dataset. desc ( str or None , optional , defaults to None ) — Meaningful description to be displayed alongside with the progress bar while mapping examples. Returns Dataset or DatasetDict The unpaired preference dataset if it was paired, otherwise the original dataset. Unpair a preference dataset if it is paired. Example: Copied >>> from datasets import Dataset >>> dataset_dict = { ... "prompt" : [ "The sky is" , "The sun is" ] ... "chosen" : [ " blue." , "in the sky." ], ... "rejected" : [ " green." , " in the sea." ] ... } >>> dataset = Dataset.from_dict(dataset_dict) >>> dataset = unpair_preference_dataset(dataset) >>> dataset Dataset({ features: [ 'prompt' , 'completion' , 'label' ], num_rows: 4 }) >>> dataset[ 0 ] { 'prompt' : 'The sky is' , 'completion' : ' blue.' , 'label' : True } < > Update on GitHub ← Callbacks Text Environments → Data Utilities is_conversational apply_chat_template maybe_apply_chat_template extract_prompt maybe_extract_prompt unpair_preference_dataset maybe_unpair_preference_dataset |
Text_Generation_Inference_Architecture.txt | Text Generation Inference Architecture This document describes the architecture of Text Generation Inference (TGI) by walking through the call flow between its separate components. A high-level architecture diagram can be seen here: The diagram shows the following separate components: The router , also called the webserver , which receives the client requests, buffers them, creates batches, and prepares gRPC calls to a model server. The launcher is a helper that launches one or several model servers (if the model is sharded) and starts the router with compatible arguments. The model server , responsible for receiving the gRPC requests and running inference on the model. If the model is sharded across multiple accelerators (e.g. multiple GPUs), the model server shards might be synchronized via NCCL or equivalent. Note that for other backends (e.g. TRTLLM) the model server and launcher are specific to the backend. The router and the model server can run on two different machines; they do not need to be deployed together. The Router This component is a Rust web server binary that accepts HTTP requests using the custom HTTP API , as well as OpenAI's Messages API . The router receives the API calls and handles the batching logic (an introduction to batching can be found here ). It uses different strategies to reduce latency between requests and responses, especially oriented to decoding latency, relying on queues, schedulers, and block allocators to produce batched requests that are then sent to the model server.
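To make the router's role concrete, here is a hedged client-side sketch that sends requests to a running TGI instance over both the native HTTP API and the OpenAI-style Messages API. The local URL and port are assumptions (port 3000 is the router default listed below), and the payload fields shown are the common ones rather than an exhaustive schema.

```python
# Client-side sketch, assuming a TGI router is listening on localhost:3000.
import requests

base_url = "http://localhost:3000"

# Native TGI API: /generate
resp = requests.post(
    f"{base_url}/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
)
print(resp.json()["generated_text"])

# OpenAI-style Messages API: /v1/chat/completions
resp = requests.post(
    f"{base_url}/v1/chat/completions",
    json={
        "model": "tgi",
        "messages": [{"role": "user", "content": "What is Deep Learning?"}],
        "max_tokens": 20,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```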
Router’s command line The router command line will be the way to pass parameters to it (it does not rely on configuration file): Copied Text Generation Webserver Usage: text-generation-router [OPTIONS] Options: --max-concurrent-requests <MAX_CONCURRENT_REQUESTS> [env: MAX_CONCURRENT_REQUESTS=] [default: 128] --max-best-of <MAX_BEST_OF> [env: MAX_BEST_OF=] [default: 2] --max-stop-sequences <MAX_STOP_SEQUENCES> [env: MAX_STOP_SEQUENCES=] [default: 4] --max-top-n-tokens <MAX_TOP_N_TOKENS> [env: MAX_TOP_N_TOKENS=] [default: 5] --max-input-tokens <MAX_INPUT_TOKENS> [env: MAX_INPUT_TOKENS=] [default: 1024] --max-total-tokens <MAX_TOTAL_TOKENS> [env: MAX_TOTAL_TOKENS=] [default: 2048] --waiting-served-ratio <WAITING_SERVED_RATIO> [env: WAITING_SERVED_RATIO=] [default: 1.2] --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS> [env: MAX_BATCH_PREFILL_TOKENS=] [default: 4096] --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS> [env: MAX_BATCH_TOTAL_TOKENS=] --max-waiting-tokens <MAX_WAITING_TOKENS> [env: MAX_WAITING_TOKENS=] [default: 20] --max-batch-size <MAX_BATCH_SIZE> [env: MAX_BATCH_SIZE=] --hostname <HOSTNAME> [env: HOSTNAME=] [default: 0.0.0.0] - p , --port <PORT> [env: PORT=] [default: 3000] --master-shard-uds-path <MASTER_SHARD_UDS_PATH> [env: MASTER_SHARD_UDS_PATH=] [default: /tmp/text-generation-server-0] --tokenizer-name <TOKENIZER_NAME> [env: TOKENIZER_NAME=] [default: bigscience/bloom] --tokenizer-config-path <TOKENIZER_CONFIG_PATH> [env: TOKENIZER_CONFIG_PATH=] --revision <REVISION> [env: REVISION=] --validation-workers <VALIDATION_WORKERS> [env: VALIDATION_WORKERS=] [default: 2] --json-output [env: JSON_OUTPUT=] --otlp-endpoint <OTLP_ENDPOINT> [env: OTLP_ENDPOINT=] --otlp-service-name <OTLP_SERVICE_NAME> [env: OTLP_SERVICE_NAME=] --cors-allow-origin <CORS_ALLOW_ORIGIN> [env: CORS_ALLOW_ORIGIN=] --ngrok [env: NGROK=] --ngrok-authtoken <NGROK_AUTHTOKEN> [env: NGROK_AUTHTOKEN=] --ngrok-edge <NGROK_EDGE> [env: NGROK_EDGE=] --messages-api-enabled [env: MESSAGES_API_ENABLED=] --disable-grammar-support [env: DISABLE_GRAMMAR_SUPPORT=] --max-client-batch-size <MAX_CLIENT_BATCH_SIZE> [env: MAX_CLIENT_BATCH_SIZE=] [default: 4] -h, --help Print help -V, --version Print version The Model Server The model server is a python server, capable of starting a server waiting for gRPC requests, loads a given model, perform sharding to provide tensor parallelism , and stays alive while waiting for new requests. The model server supports models instantiated using Pytorch and optimized for inference mainly on CUDA/ROCM. Model Server Variants Several variants of the model server exist that are actively supported by Hugging Face: By default, the model server will attempt building a server optimized for Nvidia GPUs with CUDA . The code for this version is hosted in the main TGI repository . A version optimized for AMD with ROCm is hosted in the main TGI repository. Some model features differ. A version optimized for Intel GPUs is hosted in the main TGI repository. Some model features differ. The version for Intel Gaudi is maintained on a forked repository, often resynchronized with the main TGI repository . A version for Neuron (AWS Inferentia2) is maintained as part of Optimum Neuron . A version for Google TPUs is maintained as part of Optimum TPU . Not all variants provide the same features, as hardware and middleware capabilities do not provide the same optimizations. 
Command Line Interface The official command line interface (CLI) for the server supports three subcommands, download-weights , quantize and serve : download-weights will download weights from the hub and, in some variants it will convert weights to a format that is adapted to the given implementation; quantize will allow to quantize a model using the qptq package. This feature is not available nor supported on all variants; serve will start the server that load a model (or a model shard), receives gRPC calls from the router, performs an inference and provides a formatted response to the given request. Serve’s command line parameters on the TGI repository are these: Copied Usage: cli.py serve [OPTIONS] MODEL_ID ╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────────────╮ │ * model_id TEXT [default: None] [required] │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────╮ │ --revision TEXT [default: None] │ │ --sharded --no-sharded [default: no-sharded] │ │ --quantize [bitsandbytes|bitsandbytes [default: None] │ │ -nf4|bitsandbytes-fp4|gptq │ │ |awq|eetq|exl2|fp8] │ │ --speculate INTEGER [default: None] │ │ --dtype [float16|bfloat16] [default: None] │ │ --trust-remote-code --no-trust-remote-code [default: │ │ no-trust-remote-code] │ │ --uds-path PATH [default: │ │ /tmp/text-generation-serve … │ │ --logger-level TEXT [default: INFO] │ │ --json-output --no-json-output [default: no-json-output] │ │ --otlp-endpoint TEXT [default: None] │ │ --otlp-service-name TEXT [default: │ │ text-generation-inference. .. │ │ --help Show this message and exit. │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯ Note that some variants might support different parameters, and they could possibly accept more options that can be passed on using environment variables. Call Flow Once both components are initialized, weights downloaded and model server is up and running, router and model server exchange data and info through the gRPC call. There are currently two supported schemas, v2 and v3 . These two versions are almost identical, except for: input chunks support, for text and image data, paged attention support Here’s a diagram that displays the exchanges that follow the router and model server startup. Copied sequenceDiagram R outer -> >Model Server: service discovery M odel Server--> >Router: urls for other shards R outer -> >Model Server: get model info M odel Server--> >Router: shard info R outer -> >Model Server: health check M odel Server--> >Router: health OK R outer -> >Model Server: warmup(max_input_tokens, max_batch_prefill_tokens, max_total_tokens, max_batch_size) M odel Server--> >Router: warmup result After these are done, the router is ready to receive generate calls from multiple clients. Here’s an example. 
Copied sequenceDiagram participant Client 1 participant Client 2 participant Client 3 participant Router participant Model Server Client 1 ->>Router: generate_stream Router->>Model Server: prefill(batch1) Model Server-->>Router: generations, cached_batch1, timings Router-->>Client 1 : token 1 Router->>Model Server: decode(cached_batch1) Model Server-->>Router: generations, cached_batch1, timings Router-->>Client 1 : token 2 Router->>Model Server: decode(cached_batch1) Model Server-->>Router: generations, cached_batch1, timings Router-->>Client 1 : token 3 Client 2 ->>Router: generate_stream Router->>Model Server: prefill(batch2) Note right of Model Server: This stops previous batch, that is restarted Model Server-->>Router: generations, cached_batch2, timings Router-->>Client 2 : token 1 ' Router->>Model Server: decode(cached_batch1, cached_batch2) Model Server-->>Router: generations, cached_batch1, timings Router-->>Client 1 : token 4 Router-->>Client 2 : token 2 ' Note left of Client 1 : Client 1 leaves Router->>Model Server: filter_batch(cached_batch1, request_ids_to_keep=batch2) Model Server-->>Router: filtered batch Router->>Model Server: decode(cached_batch2) Model Server-->>Router: generations, cached_batch2, timings Router-->>Client 2 : token 3 ' Client 3 ->>Router: generate_stream Note right of Model Server: This stops previous batch, that is restarted Router->>Model Server: prefill(batch3) Note left of Client 1 : Client 3 leaves without receiving any batch Router->>Model Server: clear_cache(batch3) Note right of Model Server: This stops previous batch, that is restarted Router->>Model Server: decode(cached_batch3) Note right of Model Server: Last token (stopping criteria) Model Server-->>Router: generations, cached_batch3, timings Router-->>Client 2 : token 4 ' < > Update on GitHub ← Multi-backend support Usage Statistics → Text Generation Inference Architecture The Router Router’s command line The Model Server Model Server Variants Command Line Interface Call Flow |
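The diagram above is driven by clients calling generate_stream. As a rough illustration of that client side, the sketch below streams tokens from a running TGI instance with huggingface_hub's InferenceClient; the local URL is an assumption about where the router is listening.

```python
# A hedged sketch of the client side of the call flow above: the router
# prefills a batch for the request, then streams decoded tokens back one by one.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:3000")

for token in client.text_generation(
    "What is Deep Learning?", max_new_tokens=20, stream=True
):
    print(token, end="", flush=True)
```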
SQL_Console__Query_Hugging_Face_datasets_in_your_b.txt | SQL Console: Query Hugging Face datasets in your browser You can run SQL queries on a dataset directly in the browser using the SQL Console. The SQL Console is powered by DuckDB WASM and runs entirely in the browser. You can access the SQL Console from the dataset page by clicking on the SQL Console badge. To learn more about the SQL Console, see the SQL Console blog post . Through the SQL Console, you can: Run DuckDB SQL queries on the dataset ( check out SQL Snippets for useful queries ) Share results of the query with others via a link ( check out this example ) Download the results of the query to a parquet file Embed the results of the query in your own webpage using an iframe You can also use DuckDB locally through the CLI to query the dataset via the `hf://` protocol. See the DuckDB Datasets documentation for more information. The SQL Console provides a convenient `Copy to DuckDB CLI` button that generates the SQL query for creating views and executing your query in the DuckDB CLI. Examples Filtering The SQL Console makes it easy to filter datasets.
For example, if you want to filter the SkunkworksAI/reasoning-0.01 dataset for instructions and responses with a reasoning length of at least 10, you can use the following query: In the query, we can use the len function to get the length of the reasoning_chains column and the bar function to create a bar chart of the reasoning lengths. Copied SELECT len(reasoning_chains) AS reason_len, bar(reason_len, 0 , 100 ), * FROM train WHERE reason_len > 10 ORDER BY reason_len DESC The bar function is a neat built-in DuckDB function that creates a bar chart of the reasoning lengths. Histogram Many dataset authors choose to include statistics about the distribution of the data in the dataset. Using the DuckDB histogram function, we can plot a histogram of a column’s values. For example, to plot a histogram of the reason_len column in the SkunkworksAI/reasoning-0.01 dataset, you can use the following query: Learn more about the `histogram` function and parameters here . Copied FROM histogram(train, len(reasoning_chains)) Regex Matching One of the most powerful features of DuckDB is the deep support for regular expressions. You can use the regexp function to match patterns in your data. Using the regexp_matches function, we can filter the SkunkworksAI/reasoning-0.01 dataset for instructions that contain markdown code blocks. Learn more about the DuckDB regex functions here . Copied SELECT * FROM train WHERE regexp_matches(instruction, '```[a-z]*\n' ) limit 100 Leakage Detection Leakage detection is the process of identifying whether data in a dataset is present in multiple splits, for example, whether the test set is present in the training set. Learn more about leakage detection here . Copied WITH overlapping_rows AS ( SELECT COALESCE ( ( SELECT COUNT ( * ) AS overlap_count FROM train INTERSECT SELECT COUNT ( * ) AS overlap_count FROM test), 0 ) AS overlap_count ), total_unique_rows AS ( SELECT COUNT ( * ) AS total_count FROM ( SELECT * FROM train UNION SELECT * FROM test ) combined ) SELECT overlap_count, total_count, CASE WHEN total_count > 0 THEN (overlap_count * 100.0 / total_count) ELSE 0 END AS overlap_percentage FROM overlapping_rows, total_unique_rows; < > Update on GitHub ← Embed the Dataset Viewer in a webpage Datasets Download Stats → SQ L Console: Query Hugging Face datasets in your browser Examples Filtering Histogram Regex Matching Leakage Detection |
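The same queries can be run outside the browser. The sketch below uses the DuckDB Python client (rather than the DuckDB CLI mentioned above) to run the filtering query over the hf:// protocol; the parquet glob is an assumption about the repository layout, and in the SQL Console you would query the auto-created train view instead.

```python
# A hedged local-equivalent of the filtering query above, via the DuckDB Python API.
import duckdb

con = duckdb.connect()
# httpfs provides hf:// support; recent DuckDB versions may auto-load it.
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

df = con.sql("""
    SELECT len(reasoning_chains) AS reason_len, *
    FROM 'hf://datasets/SkunkworksAI/reasoning-0.01/**/*.parquet'
    WHERE reason_len > 10
    ORDER BY reason_len DESC
    LIMIT 10
""").df()
print(df.head())
```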
Perform_vector_similarity_search.txt | Perform vector similarity search The Fixed-Length Arrays feature was added in DuckDB version 0.10.0. This lets you use vector embeddings in DuckDB tables, making your data analysis even more powerful. Additionally, the array_cosine_similarity function was introduced. This function measures the cosine of the angle between two vectors, indicating their similarity: a value of 1 means they are perfectly aligned, 0 means they are perpendicular, and -1 means they are completely opposite. In this section, we'll show you how to perform similarity searches with DuckDB using the asoria/awesome-chatgpt-prompts-embeddings dataset.
First, let’s preview a few records from the dataset: Copied FROM 'hf://datasets/asoria/awesome-chatgpt-prompts-embeddings/data/*.parquet' SELECT act, prompt, len(embedding) as embed_len LIMIT 3; ┌──────────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┐ │ act │ prompt │ embed_len │ │ varchar │ varchar │ int64 │ ├──────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┤ │ Linux Terminal │ I want you to act as a linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output insid… │ 384 │ │ English Translator… │ I want you to act as an English translator, spelling corrector and improver. I will speak to you in any language and you will detect the language, translate it and answer… │ 384 │ │ `position` Intervi… │ I want you to act as an interviewer. I will be the candidate and you will ask me the interview questions for the `position` position. I want you to only reply as the inte… │ 384 │ └──────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┘ Next, let’s choose an embedding to use for the similarity search: Copied FROM 'hf://datasets/asoria/awesome-chatgpt-prompts-embeddings/data/*.parquet' SELECT embedding WHERE act = 'Linux Terminal' ; ┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ embedding │ │ float [] │ ├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ │ [-0.020781303, -0.029143505, -0.0660217, -0.00932716, -0.02601602, -0.011426172, 0.06627567, 0.11941507, 0.0013917526, 0.012889079, 0.053234346, -0.07380514, 0.04871567, -0.043601237, -0.0025319182, 0.0448… │ └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ Now, let’s use the selected embedding to find similar records: Copied SELECT act, prompt, array_cosine_similarity(embedding:: float [384], (SELECT embedding FROM 'hf://datasets/asoria/awesome-chatgpt-prompts-embeddings/data/*.parquet' WHERE act = 'Linux Terminal' ):: float [384]) AS similarity FROM 'hf://datasets/asoria/awesome-chatgpt-prompts-embeddings/data/*.parquet' ORDER BY similarity DESC LIMIT 3; ┌──────────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────┐ │ act │ prompt │ similarity │ │ varchar │ varchar │ float │ ├──────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────┤ │ Linux Terminal │ I want you to act as a linux terminal. 
I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output insi… │ 1.0 │ │ JavaScript Console │ I want you to act as a javascript console. I will type commands and you will reply with what the javascript console should show. I want you to only reply with the termin… │ 0.7599728 │ │ R programming Inte… │ I want you to act as a R interpreter. I 'll type commands and you' ll reply with what the terminal should show. I want you to only reply with the terminal output inside on… │ 0.7303775 │ └──────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────┘ That’s it! You have successfully performed a vector similarity search using DuckDB. < > Update on GitHub ← Combine datasets and export FiftyOne → Perform vector similarity search |
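If you prefer to script this search, the sketch below runs the same query through the DuckDB Python client; it mirrors the SQL above, and loading the httpfs extension (needed for hf:// reads) is an assumption since recent DuckDB versions may auto-load it.

```python
# The similarity search above, run from Python rather than the DuckDB CLI.
import duckdb

duckdb.execute("INSTALL httpfs")
duckdb.execute("LOAD httpfs")

source = "hf://datasets/asoria/awesome-chatgpt-prompts-embeddings/data/*.parquet"
query = f"""
    SELECT
        act,
        array_cosine_similarity(
            embedding::FLOAT[384],
            (SELECT embedding FROM '{source}' WHERE act = 'Linux Terminal')::FLOAT[384]
        ) AS similarity
    FROM '{source}'
    ORDER BY similarity DESC
    LIMIT 3
"""
print(duckdb.sql(query).df())
```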
ExecuTorch.txt | ExecuTorch
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started ExecuTorch ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch ecosystem and supports the deployment of PyTorch models with a focus on portability, productivity, and performance. ExecuTorch introduces well defined entry points to perform model, device, and/or use-case specific optimizations such as backend delegation, user-defined compiler transformations, memory planning, and more. The first step in preparing a PyTorch model for execution on an edge device using ExecuTorch is to export the model. This is achieved through the use of a PyTorch API called torch.export . ExecuTorch Integration An integration point is being developed to ensure that 🤗 Transformers can be exported using torch.export . The goal of this integration is not only to enable export but also to ensure that the exported artifact can be further lowered and optimized to run efficiently in ExecuTorch , particularly for mobile and edge use cases. class transformers. TorchExportableModuleWithStaticCache < source > ( model : PreTrainedModel ) A wrapper module designed to make a PreTrainedModel exportable with torch.export , specifically for use with static caching. This module ensures that the exported model is compatible with further lowering and execution in ExecuTorch . Note: This class is specifically designed to support export process using torch.export in a way that ensures the model can be further lowered and run efficiently in ExecuTorch . forward < source > ( input_ids : Tensor cache_position : Tensor ) → torch.Tensor Parameters input_ids ( torch.Tensor ) — Tensor representing current input token id to the module. cache_position ( torch.Tensor ) — Tensor representing current input position in the cache. Returns torch.Tensor Logits output from the model. Forward pass of the module, which is compatible with the ExecuTorch runtime. This forward adapter serves two primary purposes: Making the Model torch.export -Compatible : The adapter hides unsupported objects, such as the Cache , from the graph inputs and outputs, enabling the model to be exportable using torch.export without encountering issues. 
Ensuring Compatibility with ExecuTorch runtime : The adapter matches the model’s forward signature with that in executorch/extension/llm/runner , ensuring that the exported model can be executed in ExecuTorch out-of-the-box. transformers.convert_and_export_with_cache < source > ( model : PreTrainedModel example_input_ids : Tensor = None example_cache_position : Tensor = None ) → Exported program ( torch.export.ExportedProgram ) Parameters model ( PreTrainedModel ) — The pretrained model to be exported. example_input_ids ( torch.Tensor ) — Example input token id used by torch.export . example_cache_position ( torch.Tensor ) — Example current cache position used by torch.export . Returns Exported program ( torch.export.ExportedProgram ) The exported program generated via torch.export . Convert a PreTrainedModel into an exportable module and export it using torch.export , ensuring the exported model is compatible with ExecuTorch . < > Update on GitHub ← DeepSpeed Feature Extractor → Execu Torch Execu Torch Integration |
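As a rough usage sketch of the entry point documented above: the checkpoint name, the static-cache generation settings, and the example tensors below are all illustrative assumptions rather than a prescribed recipe. convert_and_export_with_cache wraps the model and returns a torch.export.ExportedProgram that can then be lowered with the ExecuTorch toolchain.

```python
# A hedged sketch of exporting a decoder-only model with a static cache.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from transformers import convert_and_export_with_cache

model_id = "HuggingFaceTB/SmolLM-135M"  # illustrative small decoder-only checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The static-cache wrapper reads these settings from the generation config;
# the exact values (batch size 1, short cache) are assumptions for the sketch.
model.generation_config = GenerationConfig(
    use_cache=True,
    cache_implementation="static",
    cache_config={"batch_size": 1, "max_cache_len": 64},
)

example_input_ids = tokenizer("Hello", return_tensors="pt").input_ids[:, :1]
example_cache_position = torch.tensor([0], dtype=torch.long)

exported_program = convert_and_export_with_cache(
    model,
    example_input_ids=example_input_ids,
    example_cache_position=example_cache_position,
)
print(exported_program)
```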
Fine-tune_a_pretrained_model.txt | Fine-tune a pretrained model
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Fine-tune a pretrained model There are significant benefits to using a pretrained model. It reduces computation costs, your carbon footprint, and allows you to use state-of-the-art models without having to train one from scratch. 🤗 Transformers provides access to thousands of pretrained models for a wide range of tasks. When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly powerful training technique. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice: Fine-tune a pretrained model with 🤗 Transformers Trainer . Fine-tune a pretrained model in TensorFlow with Keras. Fine-tune a pretrained model in native PyTorch. Prepare a dataset Before you can fine-tune a pretrained model, download a dataset and prepare it for training. The previous tutorial showed you how to process data for training, and now you get an opportunity to put those skills to the test! Begin by loading the Yelp Reviews dataset: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "yelp_review_full" ) >>> dataset[ "train" ][ 100 ] { 'label' : 0 , 'text' : 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. 
But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!' } As you now know, you need a tokenizer to process the text and include a padding and truncation strategy to handle any variable sequence lengths. To process your dataset in one step, use 🤗 Datasets map method to apply a preprocessing function over the entire dataset: Copied >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "google-bert/bert-base-cased" ) >>> def tokenize_function ( examples ): ... return tokenizer(examples[ "text" ], padding= "max_length" , truncation= True ) >>> tokenized_datasets = dataset. map (tokenize_function, batched= True ) If you like, you can create a smaller subset of the full dataset to fine-tune on to reduce the time it takes: Copied >>> small_train_dataset = tokenized_datasets[ "train" ].shuffle(seed= 42 ).select( range ( 1000 )) >>> small_eval_dataset = tokenized_datasets[ "test" ].shuffle(seed= 42 ).select( range ( 1000 )) Train At this point, you should follow the section corresponding to the framework you want to use. You can use the links in the right sidebar to jump to the one you want - and if you want to hide all of the content for a given framework, just use the button at the top-right of that framework’s block! Pytorch Hide Pytorch content Train with PyTorch Trainer 🤗 Transformers provides a Trainer class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The Trainer API supports a wide range of training options and features such as logging, gradient accumulation, and mixed precision. Start by loading your model and specify the number of expected labels. From the Yelp Review dataset card , you know there are five labels. By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in such as torch.float16. Set torch_dtype="auto" to load the weights in the data type defined in a model’s config.json file to automatically load the most memory-optimal data type. Copied >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" , num_labels= 5 , torch_dtype= "auto" ) You will see a warning about some of the pretrained weights not being used and some weights being randomly initialized. Don’t worry, this is completely normal! The pretrained head of the BERT model is discarded, and replaced with a randomly initialized classification head. You will fine-tune this new model head on your sequence classification task, transferring the knowledge of the pretrained model to it. Training hyperparameters Next, create a TrainingArguments class which contains all the hyperparameters you can tune as well as flags for activating different training options. For this tutorial you can start with the default training hyperparameters , but feel free to experiment with these to find your optimal settings. Specify where to save the checkpoints from your training: Copied >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir= "test_trainer" ) Evaluate Trainer does not automatically evaluate model performance during training. You’ll need to pass Trainer a function to compute and report metrics. 
The 🤗 Evaluate library provides a simple accuracy function you can load with the evaluate.load (see this quicktour for more information) function: Copied >>> import numpy as np >>> import evaluate >>> metric = evaluate.load( "accuracy" ) Call compute on metric to calculate the accuracy of your predictions. Before passing your predictions to compute , you need to convert the logits to predictions (remember all 🤗 Transformers models return logits): Copied >>> def compute_metrics ( eval_pred ): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=- 1 ) ... return metric.compute(predictions=predictions, references=labels) If you’d like to monitor your evaluation metrics during fine-tuning, specify the eval_strategy parameter in your training arguments to report the evaluation metric at the end of each epoch: Copied >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir= "test_trainer" , eval_strategy= "epoch" ) Trainer Create a Trainer object with your model, training arguments, training and test datasets, and evaluation function: Copied >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) Then fine-tune your model by calling train() : Copied >>> trainer.train() TensorFlow Hide TensorFlow content Train a TensorFlow model with Keras You can also train 🤗 Transformers models in TensorFlow with the Keras API! Loading data for Keras When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras. Let’s try that first before we do anything more complicated. First, load a dataset. We’ll use the CoLA dataset from the GLUE benchmark , since it’s a simple binary text classification task, and just take the training split for now. Copied from datasets import load_dataset dataset = load_dataset( "glue" , "cola" ) dataset = dataset[ "train" ] # Just take the training split for now Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0 and 1s, so we can just convert that directly to a NumPy array without tokenization! Copied from transformers import AutoTokenizer import numpy as np tokenizer = AutoTokenizer.from_pretrained( "google-bert/bert-base-cased" ) tokenized_data = tokenizer(dataset[ "sentence" ], return_tensors= "np" , padding= True ) # Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras tokenized_data = dict (tokenized_data) labels = np.array(dataset[ "label" ]) # Label is already an array of 0 and 1 Finally, load, compile , and fit the model. Note that Transformers models all have a default task-relevant loss function, so you don’t need to specify one unless you want to: Copied from transformers import TFAutoModelForSequenceClassification from tensorflow.keras.optimizers import Adam # Load and compile our model model = TFAutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" ) # Lower learning rates are often better for fine-tuning transformers model. compile (optimizer=Adam( 3e-5 )) # No loss argument! model.fit(tokenized_data, labels) You don’t have to pass a loss argument to your models when you compile() them! 
Hugging Face models automatically choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always override this by specifying a loss yourself if you want to! This approach works great for smaller datasets, but for larger datasets, you might find it starts to become a problem. Why? Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn’t handle “jagged” arrays, so every tokenized sample would have to be padded to the length of the longest sample in the whole dataset. That’s going to make your array even bigger, and all those padding tokens will slow down training too! Loading data as a tf.data.Dataset If you want to avoid slowing down training, you can load your data as a tf.data.Dataset instead. Although you can write your own tf.data pipeline if you want, we have two convenience methods for doing this: prepare_tf_dataset() : This is the method we recommend in most cases. Because it is a method on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and discard the others to make a simpler, more performant dataset. to_tf_dataset : This method is more low-level, and is useful when you want to exactly control how your dataset is created, by specifying exactly which columns and label_cols to include. Before you can use prepare_tf_dataset() , you will need to add the tokenizer outputs to your dataset as columns, as shown in the following code sample: Copied def tokenize_dataset ( data ): # Keys of the returned dictionary will be added to the dataset as columns return tokenizer(data[ "text" ]) dataset = dataset. map (tokenize_dataset) Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage! Once the columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly reduces the number of padding tokens compared to padding the entire dataset. Copied >>> tf_dataset = model.prepare_tf_dataset(dataset[ "train" ], batch_size= 16 , shuffle= True , tokenizer=tokenizer) Note that in the code sample above, you need to pass the tokenizer to prepare_tf_dataset so it can correctly pad batches as they’re loaded. If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument. If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language modelling), you can use the collate_fn argument instead to pass a function that will be called to transform the list of samples into a batch and apply any preprocessing you want. See our examples or notebooks to see this approach in action. Once you’ve created a tf.data.Dataset , you can compile and fit the model as before: Copied model. compile (optimizer=Adam( 3e-5 )) # No loss argument! model.fit(tf_dataset) Train in native PyTorch Pytorch Hide Pytorch content Trainer takes care of the training loop and allows you to fine-tune a model in a single line of code. For users who prefer to write their own training loop, you can also fine-tune a 🤗 Transformers model in native PyTorch. At this point, you may need to restart your notebook or execute the following code to free some memory: Copied from accelerate.utils.memory import clear_device_cache del model del trainer clear_device_cache() Next, manually postprocess tokenized_dataset to prepare it for training. 
Remove the text column because the model does not accept raw text as an input: Copied >>> tokenized_datasets = tokenized_datasets.remove_columns([ "text" ]) Rename the label column to labels because the model expects the argument to be named labels : Copied >>> tokenized_datasets = tokenized_datasets.rename_column( "label" , "labels" ) Set the format of the dataset to return PyTorch tensors instead of lists: Copied >>> tokenized_datasets.set_format( "torch" ) Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning: Copied >>> small_train_dataset = tokenized_datasets[ "train" ].shuffle(seed= 42 ).select( range ( 1000 )) >>> small_eval_dataset = tokenized_datasets[ "test" ].shuffle(seed= 42 ).select( range ( 1000 )) DataLoader Create a DataLoader for your training and test datasets so you can iterate over batches of data: Copied >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle= True , batch_size= 8 ) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size= 8 ) Load your model with the number of expected labels: Copied >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained( "google-bert/bert-base-cased" , num_labels= 5 ) Optimizer and learning rate scheduler Create an optimizer and learning rate scheduler to fine-tune the model. Let’s use the AdamW optimizer from PyTorch: Copied >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr= 5e-5 ) Create the default learning rate scheduler from Trainer : Copied >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len (train_dataloader) >>> lr_scheduler = get_scheduler( ... name= "linear" , optimizer=optimizer, num_warmup_steps= 0 , num_training_steps=num_training_steps ... ) Lastly, specify device to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes. Copied >>> import torch >>> from accelerate.test_utils.testing import get_backend >>> device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) >>> model.to(device) Get free access to a cloud GPU if you don’t have one with a hosted notebook like Colaboratory or SageMaker StudioLab . Great, now you are ready to train! 🥳 Training loop To keep track of your training progress, use the tqdm library to add a progress bar over the number of training steps: Copied >>> from tqdm.auto import tqdm >>> progress_bar = tqdm( range (num_training_steps)) >>> model.train() >>> for epoch in range (num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update( 1 ) Evaluate Just like how you added an evaluation function to Trainer , you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you’ll accumulate all the batches with add_batch and calculate the metric at the very end. Copied >>> import evaluate >>> metric = evaluate.load( "accuracy" ) >>> model. eval () >>> for batch in eval_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... with torch.no_grad(): ... outputs = model(**batch) ... logits = outputs.logits ... 
predictions = torch.argmax(logits, dim=- 1 ) ... metric.add_batch(predictions=predictions, references=batch[ "labels" ]) >>> metric.compute() Additional resources For more fine-tuning examples, refer to: 🤗 Transformers Examples includes scripts to train common NLP tasks in PyTorch and TensorFlow. 🤗 Transformers Notebooks contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow. < > Update on GitHub ← Preprocess data Train with a script → Fine-tune a pretrained model Prepare a dataset Train Train with Py Torch Trainer Training hyperparameters Evaluate Trainer Train a Tensor Flow model with Keras Loading data for Keras Loading data as a tf.data. Dataset Train in native Py Torch Data Loader Optimizer and learning rate scheduler Training loop Evaluate Additional resources |
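For reference, here is a compact end-to-end sketch that assembles the Trainer pieces from this tutorial into a single script and finishes with an explicit trainer.evaluate() call; the hyperparameters and subset sizes match the illustrative values used above.

```python
# End-to-end Trainer sketch: Yelp subset, BERT classifier, accuracy metric.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize_function, batched=True)
small_train = tokenized["train"].shuffle(seed=42).select(range(1000))
small_eval = tokenized["test"].shuffle(seed=42).select(range(1000))

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return metric.compute(predictions=np.argmax(logits, axis=-1), references=labels)

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=5
)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="test_trainer", eval_strategy="epoch"),
    train_dataset=small_train,
    eval_dataset=small_eval,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # final accuracy on the held-out subset
```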
Stable_Diffusion_XL.txt | Stable Diffusion XL Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Stable Diffusion XL Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? 
Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab #!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker= False ) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline import torch pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , torch_dtype=torch.float16, use_safetensors= True , variant= "fp16" ).to( "cuda" ) You can also use the from_single_file() method to load a model checkpoint stored in a single file format ( .ckpt or .safetensors ) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline import torch pipeline = StableDiffusionXLPipeline.from_single_file( "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors" , torch_dtype=torch.float16 ).to( "cuda" ) refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors" , torch_dtype=torch.float16 ).to( "cuda" ) Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. 
Copied from diffusers import AutoPipelineForText2Image import torch pipeline_text2image = AutoPipelineForText2Image.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipeline_text2image(prompt=prompt).images[ 0 ] image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image from diffusers.utils import load_image, make_image_grid # use from_pipe to avoid consuming additional memory when loading a checkpoint pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to( "cuda" ) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" init_image = load_image(url) prompt = "a dog catching a frisbee in the jungle" image = pipeline(prompt, image=init_image, strength= 0.8 , guidance_scale= 10.5 ).images[ 0 ] make_image_grid([init_image, image], rows= 1 , cols= 2 ) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. Copied from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image, make_image_grid # use from_pipe to avoid consuming additional memory when loading a checkpoint pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to( "cuda" ) img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" init_image = load_image(img_url) mask_image = load_image(mask_url) prompt = "A deep sea diver floating" image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength= 0.85 , guidance_scale= 12.5 ).images[ 0 ] make_image_grid([init_image, mask_image, image], rows= 1 , cols= 3 ) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers . The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. 
Load the base and refiner model: Copied from diffusers import DiffusionPipeline import torch base = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) refiner = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16, use_safetensors= True , variant= "fp16" , ).to( "cuda" ) To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. Copied prompt = "A majestic lion jumping from a big stone at night" image = base( prompt=prompt, num_inference_steps= 40 , denoising_end= 0.8 , output_type= "latent" , ).images image = refiner( prompt=prompt, num_inference_steps= 40 , denoising_start= 0.8 , image=image, ).images[ 0 ] image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline : Copied from diffusers import StableDiffusionXLInpaintPipeline from diffusers.utils import load_image, make_image_grid import torch base = StableDiffusionXLInpaintPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) refiner = StableDiffusionXLInpaintPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16, use_safetensors= True , variant= "fp16" , ).to( "cuda" ) img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" init_image = load_image(img_url) mask_image = load_image(mask_url) prompt = "A majestic tiger sitting on a bench" num_inference_steps = 75 high_noise_frac = 0.7 image = base( prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=num_inference_steps, denoising_end=high_noise_frac, output_type= "latent" , ).images image = refiner( prompt=prompt, image=image, mask_image=mask_image, num_inference_steps=num_inference_steps, denoising_start=high_noise_frac, ).images[ 0 ] make_image_grid([init_image, mask_image, image.resize(( 512 , 512 ))], rows= 1 , cols= 3 ) This ensemble of expert denoisers method works well for all available schedulers! 
Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline import torch base = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) refiner = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0" , text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16, use_safetensors= True , variant= "fp16" , ).to( "cuda" ) You can use SDXL refiner with a different base model. For example, you can use the Hunyuan-DiT or PixArt-Sigma pipelines to generate images with better prompt adherence. Once you have generated an image, you can pass it to the SDXL refiner model to enhance final generation quality. Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = base(prompt=prompt, output_type= "latent" ).images[ 0 ] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[ None , :]).images[ 0 ] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline , remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning . These include original image size, target image size, and cropping parameters. The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline , StableDiffusionXLImg2ImgPipeline , StableDiffusionXLInpaintPipeline , and StableDiffusionXLControlNetPipeline . Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256) , the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024) , you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size , but feel free to experiment with other options! 
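To make the size-conditioning discussion above concrete, here is a minimal sketch that passes original_size and target_size explicitly at inference time (the values and prompt are illustrative):
Copied
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# condition on a large "original" resolution and a 1024x1024 target composition
image = pipe(prompt=prompt, original_size=(4096, 4096), target_size=(1024, 1024)).images[0]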
🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline import torch pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe( prompt=prompt, negative_original_size=( 512 , 512 ), negative_target_size=( 1024 , 1024 ), ).images[ 0 ] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! Copied from diffusers import StableDiffusionXLPipeline import torch pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipeline(prompt=prompt, crops_coords_top_left=( 256 , 0 )).images[ 0 ] image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline import torch pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe( prompt=prompt, negative_original_size=( 512 , 512 ), negative_crops_coords_top_left=( 0 , 0 ), negative_target_size=( 1024 , 1024 ), ).images[ 0 ] image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality . Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline import torch pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" , torch_dtype=torch.float16, variant= "fp16" , use_safetensors= True ).to( "cuda" ) # prompt is passed to OAI CLIP-ViT/L-14 prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # prompt_2 is passed to OpenCLIP-ViT/bigG-14 prompt_2 = "Van Gogh painting" image = pipeline(prompt=prompt, prompt_2=prompt_2).images[ 0 ] image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. 
Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied - base.to("cuda") - refiner.to("cuda") + base.enable_model_cpu_offload() + refiner.enable_model_cpu_offload() Use torch.compile for a ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) + refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() + refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers.
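Putting these optimization tips together, a minimal sketch of a memory- and speed-optimized generation might look like this (assumes a CUDA GPU, torch>=2.0, and that accelerate is installed for CPU offloading; the prompt is illustrative):
Copied
from diffusers import DiffusionPipeline
import torch

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
# offloading replaces .to("cuda"); submodules are moved to the GPU only when needed
base.enable_model_cpu_offload()
# compile the UNet for a modest speed-up on torch>=2.0
base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = base(prompt=prompt).images[0]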
Interface__TextGenerationStreamToken.txt Interface: TextGenerationStreamToken Properties id • id : number Token ID from the model tokenizer Defined in inference/src/tasks/nlp/textGenerationStream.ts:7 logprob • logprob : number Logprob Defined in inference/src/tasks/nlp/textGenerationStream.ts:11 special • special : boolean Is the token a special token Can be used to ignore tokens when concatenating Defined in inference/src/tasks/nlp/textGenerationStream.ts:16 text • text : string Token text Defined in inference/src/tasks/nlp/textGenerationStream.ts:9
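For orientation, the same token fields (text, id, logprob, special) are surfaced when streaming text generation from Python as well; below is a rough sketch using huggingface_hub’s InferenceClient rather than the JS client (the model name is illustrative, and the streaming text_generation API with details=True is assumed):
Copied
from huggingface_hub import InferenceClient

client = InferenceClient()  # optionally pass token="hf_***"
# stream=True with details=True yields stream outputs whose .token carries
# the id, text, logprob and special fields described above
for output in client.text_generation(
    "The answer to the universe is",
    model="HuggingFaceH4/zephyr-7b-beta",
    max_new_tokens=20,
    stream=True,
    details=True,
):
    token = output.token
    if not token.special:
        print(token.text, end="")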
ClickHouse.txt | ClickHouse Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Dataset viewer documentation ClickHouse Dataset viewer 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Get Started 🤗 Dataset viewer Quickstart Analyze a dataset on the Hub Guides Check dataset validity List splits and subsets Get dataset information Preview a dataset Download slices of rows Search text in a dataset Filter rows in a dataset List Parquet files Get the number of rows and the bytes size Explore dataset statistics Get Croissant metadata Query datasets from dataset viewer API Overview ClickHouse cuDF DuckDB Pandas Polars PostgreSQL mlcroissant PySpark Conceptual Guides Splits and subsets Data types Server infrastructure Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started ClickHouse ClickHouse is a fast and efficient column-oriented database for analytical workloads, making it easy to analyze Hub-hosted datasets with SQL. To get started quickly, use clickhouse-local to run SQL queries from the command line and avoid the need to fully install ClickHouse. Check this blog for more details about how to analyze datasets on the Hub with ClickHouse. To start, download and install clickhouse-local : Copied curl https://clickhouse.com/ | sh For this example, you’ll analyze the maharshipandya/spotify-tracks-dataset which contains information about Spotify tracks. Datasets on the Hub are stored as Parquet files and you can access it with the /parquet endpoint: Copied import requests r = requests.get( "https://datasets-server.huggingface.co/parquet?dataset=maharshipandya/spotify-tracks-dataset" ) j = r.json() url = [f[ 'url' ] for f in j[ 'parquet_files' ]] url [ 'https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet' ] Aggregate functions Now you can begin to analyze the dataset. Use the -q argument to specify the query to execute, and the url function to create a table from the data in the Parquet file. You should set enable_url_encoding to 0 to ensure the escape characters in the URL are preserved as intended, and max_https_get_redirects to 1 to redirect to the path of the Parquet file. Let’s start by identifying the most popular artists: Copied ./clickhouse local -q " SELECT count() AS c, artists FROM url('https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet') GROUP BY artists ORDER BY c DESC LIMIT 5 SETTINGS enable_url_encoding=0, max_http_get_redirects=1" ┌───c─┬─artists─────────┐ │ 279 │ The Beatles │ │ 271 │ George Jones │ │ 236 │ Stevie Wonder │ │ 224 │ Linkin Park │ │ 222 │ Ella Fitzgerald │ └─────┴─────────────────┘ ClickHouse also provides functions for visualizing your queries. 
For example, you can use the bar function to create a bar chart of the danceability of songs: Copied ./clickhouse local -q " SELECT round(danceability, 1) AS danceability, bar(count(), 0, max(count()) OVER ()) AS dist FROM url('https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet') GROUP BY danceability ORDER BY danceability ASC SETTINGS enable_url_encoding=0, max_http_get_redirects=1" ┌─danceability─┬─dist─────────────────────────────────────────────────────────────────────────────────┐ │ 0 │ ▍ │ │ 0.1 │ ████▎ │ │ 0.2 │ █████████████▍ │ │ 0.3 │ ████████████████████████ │ │ 0.4 │ ████████████████████████████████████████████▋ │ │ 0.5 │ ████████████████████████████████████████████████████████████████████▊ │ │ 0.6 │ ████████████████████████████████████████████████████████████████████████████████ │ │ 0.7 │ ██████████████████████████████████████████████████████████████████████ │ │ 0.8 │ ██████████████████████████████████████████ │ │ 0.9 │ ██████████▋ │ │ 1 │ ▌ │ └──────────────┴──────────────────────────────────────────────────────────────────────────────────────┘ To get a deeper understanding about a dataset, ClickHouse provides statistical analysis functions for determining how your data is correlated, calculating statistical hypothesis tests, and more. Take a look at ClickHouse’s List of Aggregate Functions for a complete list of available aggregate functions. User-defined function (UDFs) A user-defined function (UDF) allows you to reuse custom logic. Many Hub datasets are often sharded into more than one Parquet file, so it can be easier and more efficient to create a UDF to list and query all the Parquet files of a given dataset from just the dataset name. For this example, you’ll need to run clickhouse-local in console mode so the UDF persists between queries: Copied ./clickhouse local Remember to set enable_url_encoding to 0 and max_https_get_redirects to 1 to redirect to the path of the Parquet files: Copied SET max_http_get_redirects = 1, enable_url_encoding = 0 Let’s create a function to return a list of Parquet files from the tasksource/blog_authorship_corpus : Copied CREATE OR REPLACE FUNCTION hugging_paths AS dataset -> ( SELECT arrayMap(x -> (x.1), JSONExtract(json, 'parquet_files' , 'Array(Tuple(url String))' )) FROM url( 'https://datasets-server.huggingface.co/parquet?dataset=' || dataset, 'JSONAsString' ) ); SELECT hugging_paths( 'tasksource/blog_authorship_corpus' ) AS paths [ 'https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet' , 'https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet' ] You can make this even easier by creating another function that calls hugging_paths and outputs all the files based on the dataset name: Copied CREATE OR REPLACE FUNCTION hf AS dataset -> ( WITH hugging_paths(dataset) as urls SELECT multiIf(length(urls) = 0, '' , length(urls) = 1, urls[1], 'https://huggingface.co/datasets/{' || arrayStringConcat(arrayMap(x -> replaceRegexpOne(replaceOne(x, 'https://huggingface.co/datasets/' , '' ), '\\.parquet$' , '' ), urls), ',' ) || '}.parquet' ) ); SELECT hf( 'tasksource/blog_authorship_corpus' ) AS pattern https://huggingface.co/datasets/{tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000,tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001}.parquet Now use the hf 
function to query any dataset by passing the dataset name: Copied SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM url(hf( 'tasksource/blog_authorship_corpus' )) GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5) ┌───────────┬────────┬────────────────────┐ │ sign │ count │ avg_blog_length │ ├───────────┼────────┼────────────────────┤ │ Aquarius │ 49687 │ 1193.9523819107615 │ │ Leo │ 53811 │ 1186.0665291483153 │ │ Cancer │ 65048 │ 1160.8010392325666 │ │ Gemini │ 51985 │ 1158.4132922958545 │ │ Vurgi │ 60399 │ 1142.9977648636566 │ └───────────┴────────┴────────────────────┘ < > Update on GitHub ← Overview cuDF → Click House Aggregate functions User-defined function (UD Fs) |
PEFT_types.txt | PEFT types Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation PEFT types PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started PEFT types PeftType includes the supported adapters in PEFT, and TaskType includes PEFT-supported tasks. PeftType class peft. PeftType < source > ( value names = None module = None qualname = None type = None start = 1 ) Enum class for the different types of adapters in PEFT. Supported PEFT types: PROMPT_TUNING MULTITASK_PROMPT_TUNING P_TUNING PREFIX_TUNING LORA ADALORA BOFT ADAPTION_PROMPT IA3 LOHA LOKR OFT XLORA POLY LN_TUNING VERA FOURIERFT HRA BONE TaskType class peft. TaskType < source > ( value names = None module = None qualname = None type = None start = 1 ) Enum class for the different types of tasks supported by PEFT. Overview of the supported task types: SEQ_CLS: Text classification. SEQ_2_SEQ_LM: Sequence-to-sequence language modeling. CAUSAL_LM: Causal language modeling. TOKEN_CLS: Token classification. QUESTION_ANS: Question answering. FEATURE_EXTRACTION: Feature extraction. Provides the hidden states which can be used as embeddings or features for downstream tasks. < > Update on GitHub ← PEFT model Configuration → PEF T types Peft Type Task Type |
Using_🤗_Datasets.txt | Using 🤗 Datasets Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Using 🤗 Datasets Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Argilla Dask Datasets Distilabel DuckDB FiftyOne Pandas Polars Spark WebDataset Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using 🤗 Datasets Once you’ve found an interesting dataset on the Hugging Face Hub, you can load the dataset using 🤗 Datasets. You can click on the Use this dataset button to copy the code to load a dataset. First you need to Login with your Hugging Face account , for example using: Copied huggingface- cli login And then you can load a dataset from the Hugging Face Hub using Copied from datasets import load_dataset dataset = load_dataset( "username/my_dataset" ) # or load the separate splits if the dataset has train/validation/test splits train_dataset = load_dataset( "username/my_dataset" , split= "train" ) valid_dataset = load_dataset( "username/my_dataset" , split= "validation" ) test_dataset = load_dataset( "username/my_dataset" , split= "test" ) You can also upload datasets to the Hugging Face Hub: Copied my_new_dataset.push_to_hub( "username/my_new_dataset" ) This creates a dataset repository username/my_new_dataset containing your Dataset in Parquet format, that you can reload later. For more information about using 🤗 Datasets, check out the tutorials and how-to guides available in the 🤗 Datasets documentation. < > Update on GitHub ← Dask Distilabel → Using 🤗 Datasets |
GGUF_usage_with_llama.cpp.txt GGUF usage with llama.cpp You can now deploy any llama.cpp compatible GGUF on Hugging Face Endpoints; read more about it here . Llama.cpp allows you to download and run inference on a GGUF simply by providing the Hugging Face repo path and the file name. llama.cpp downloads the model checkpoint and automatically caches it. The location of the cache is defined by the LLAMA_CACHE environment variable; read more about it here . You can install llama.cpp through brew (works on Mac and Linux), or you can build it from source. There are also pre-built binaries and Docker images that you can check in the official documentation . Option 1: Install with brew Copied brew install llama.cpp Option 2: build from source Step 1: Clone llama.cpp from GitHub. Copied git clone https://github.com/ggerganov/llama.cpp Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). Copied cd llama.cpp && LLAMA_CURL=1 make Once installed, you can use llama-cli or llama-server as follows: Copied llama-cli -hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0 Note: You can remove -cnv to run the CLI in chat completion mode.
Additionally, you can expose an OpenAI-spec chat completions endpoint by running the llama.cpp server: Copied llama-server -hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0 After running the server, you can simply utilise the endpoint as below: Copied curl http://localhost:8080/v1/chat/completions \ -H "Content-Type: application/json" \ -H "Authorization: Bearer no-key" \ -d '{ "messages": [ { "role": "system", "content": "You are an AI assistant. Your top priority is achieving user fulfilment via helping them with their requests." }, { "role": "user", "content": "Write a limerick about Python exceptions" } ] }' Replace the -hf value with any valid Hugging Face hub repo name - off you go! 🦙 Note: Remember to build llama.cpp with LLAMA_CURL=1 :)
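Because the endpoint follows the OpenAI chat completions spec, it can also be called from Python; a minimal sketch assuming the openai client package is installed (the model name below is a placeholder for whatever model the server was started with):
Copied
from openai import OpenAI

# point the client at the local llama.cpp server started above
client = OpenAI(base_url="http://localhost:8080/v1", api_key="no-key")

response = client.chat.completions.create(
    model="Llama-3.2-3B-Instruct",  # placeholder; the server runs the model it was launched with
    messages=[
        {"role": "system", "content": "You are an AI assistant."},
        {"role": "user", "content": "Write a limerick about Python exceptions"},
    ],
)
print(response.choices[0].message.content)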
Notifications.txt | Notifications Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Notifications Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Notifications Notifications allow you to know when new activities ( Pull Requests or discussions ) happen on models, datasets, and Spaces belonging to users or organizations you are watching. By default, you’ll receive a notification if: Someone mentions you in a discussion/PR. A new comment is posted in a discussion/PR you participated in. A new discussion/PR or comment is posted in one of the repositories of an organization or user you are watching. You’ll get new notifications by email and directly on the website , you can change this in your notifications settings . Filtering and managing notifications On the notifications page , you have several options for filtering and managing your notifications more effectively: Filter by Repository: Choose to display notifications from a specific repository only. Filter by Read Status: Display only unread notifications or all notifications. Filter by Participation: Show notifications you have participated in or those which you have been directly mentioned. Additionally, you can take the following actions to manage your notifications: Mark as Read/Unread: Change the status of notifications to mark them as read or unread. Mark as Done: Once marked as done, notifications will no longer appear in the notification center (they are deleted). By default, changes made to notifications will only apply to the selected notifications on the screen. However, you can also apply changes to all matching notifications (like in Gmail for instance) for greater convenience. 
Watching users and organizations By default, you’ll be watching all the organizations you are a member of and will be notified of any new activity on those. You can also choose to get notified on arbitrary users or organizations. To do so, use the “Watch repos” button on their HF profiles. Note that you can also quickly watch/unwatch users and organizations directly from your notifications settings . Unlike GitHub or similar services, you cannot watch a specific repository. You must watch users/organizations to get notified about any new activity on any of their repositories. The goal is to simplify this functionality for users as much as possible and to make sure you don’t miss anything you might be interested in. Notifications settings In your notifications settings page, you can choose specific channels to get notified on depending on the type of activity, for example, receiving an email for direct mentions but only a web notification for new activity on watched users and organizations. By default, you’ll get an email and a web notification for any new activity but feel free to adjust your settings depending on your needs. Note that clicking the unsubscribe link in an email will unsubscribe you for the type of activity, eg direct mentions. You can quickly add any user/organization to your watch list by searching them by name using the dedicated search bar. Unsubscribe from a specific user/organization simply by unticking the corresponding checkbox. Mute notifications for a specific repository It’s possible to mute notifications for a particular repository by using the “Mute notifications” action in the repository’s contextual menu. This will prevent you from receiving any new notifications for that particular repository. You can unmute the repository at any time by clicking the “Unmute notifications” action in the same repository menu. Note, if a repository is muted, you won’t receive any new notification unless you’re directly mentioned or participating to a discussion. The list of muted repositories is available from the notifications settings page: < > Update on GitHub ← Pull Requests & Discussions Collections → Notifications Filtering and managing notifications Watching users and organizations Notifications settings Mute notifications for a specific repository |
PEFT_checkpoint_format.txt | PEFT checkpoint format Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up PEFT documentation PEFT checkpoint format PEFT 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.2 v0.7.1 v0.6.2 EN Get started 🤗 PEFT Quicktour Installation Tutorial Configurations and models Integrations PEFT method guides Prompt-based methods LoRA methods IA3 Developer guides Model merging Quantization LoRA Custom models Adapter injection Mixed adapter types torch.compile Contribute to PEFT Troubleshooting PEFT checkpoint format 🤗 Accelerate integrations DeepSpeed Fully Sharded Data Parallel Conceptual guides Adapters Soft prompts IA3 OFT/BOFT API reference Main classes AutoPeftModel PEFT model PEFT types Configuration Tuner Adapters AdaLoRA IA3 Llama-Adapter LoHa LoKr LoRA X-LoRA LyCORIS Multitask Prompt Tuning OFT BOFT Polytropon P-tuning Prefix tuning Prompt tuning Layernorm tuning VeRA FourierFT VB-LoRA HRA CPT Bone Utilities Model merge Helpers Hotswapping adapters Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started PEFT checkpoint format This document describes how PEFT’s checkpoint files are structured and how to convert between the PEFT format and other formats. PEFT files PEFT (parameter-efficient fine-tuning) methods only update a small subset of a model’s parameters rather than all of them. This is nice because checkpoint files can generally be much smaller than the original model files and are easier to store and share. However, this also means that to load a PEFT model, you need to have the original model available as well. When you call save_pretrained() on a PEFT model, the PEFT model saves three files, described below: adapter_model.safetensors or adapter_model.bin By default, the model is saved in the safetensors format, a secure alternative to the bin format, which is known to be susceptible to security vulnerabilities because it uses the pickle utility under the hood. Both formats store the same state_dict though, and are interchangeable. The state_dict only contains the parameters of the adapter module, not the base model. To illustrate the difference in size, a normal BERT model requires ~420MB of disk space, whereas an IA³ adapter on top of this BERT model only requires ~260KB. adapter_config.json The adapter_config.json file contains the configuration of the adapter module, which is necessary to load the model. 
Below is an example of an adapter_config.json for an IA³ adapter with standard settings applied to a BERT model: Copied { "auto_mapping" : { "base_model_class" : "BertModel" , "parent_library" : "transformers.models.bert.modeling_bert" } , "base_model_name_or_path" : "bert-base-uncased" , "fan_in_fan_out" : false , "feedforward_modules" : [ "output.dense" ] , "inference_mode" : true , "init_ia3_weights" : true , "modules_to_save" : null , "peft_type" : "IA3" , "revision" : null , "target_modules" : [ "key" , "value" , "output.dense" ] , "task_type" : null } The configuration file contains: the adapter module type stored, "peft_type": "IA3" information about the base model like "base_model_name_or_path": "bert-base-uncased" the revision of the model (if any), "revision": null If the base model is not a pretrained Transformers model, the latter two entries will be null . Other than that, the settings are all related to the specific IA³ adapter that was used to fine-tune the model. README.md The generated README.md is the model card of a PEFT model and contains a few pre-filled entries. The intent of this is to make it easier to share the model with others and to provide some basic information about the model. This file is not needed to load the model. Convert to PEFT format When converting from another format to the PEFT format, we require both the adapter_model.safetensors (or adapter_model.bin ) file and the adapter_config.json file. adapter_model For the model weights, it is important to use the correct mapping from parameter name to value for PEFT to load the file. Getting this mapping right is an exercise in checking the implementation details, as there is no generally agreed upon format for PEFT adapters. Fortunately, figuring out this mapping is not overly complicated for common base cases. Let’s look at a concrete example, the LoraLayer : Copied # showing only part of the code class LoraLayer ( BaseTunerLayer ): # All names of layers that may contain (trainable) adapter weights adapter_layer_names = ( "lora_A" , "lora_B" , "lora_embedding_A" , "lora_embedding_B" ) # All names of other parameters that may contain adapter-related parameters other_param_names = ( "r" , "lora_alpha" , "scaling" , "lora_dropout" ) def __init__ ( self, base_layer: nn.Module, **kwargs ) -> None : self.base_layer = base_layer self.r = {} self.lora_alpha = {} self.scaling = {} self.lora_dropout = nn.ModuleDict({}) self.lora_A = nn.ModuleDict({}) self.lora_B = nn.ModuleDict({}) # For Embedding layer self.lora_embedding_A = nn.ParameterDict({}) self.lora_embedding_B = nn.ParameterDict({}) # Mark the weight as unmerged self._disable_adapters = False self.merged_adapters = [] self.use_dora: dict [ str , bool ] = {} self.lora_magnitude_vector: Optional [torch.nn.ParameterDict] = None # for DoRA self._caches: dict [ str , Any ] = {} self.kwargs = kwargs In the __init__ code used by all LoraLayer classes in PEFT, there are a bunch of parameters used to initialize the model, but only a few are relevant for the checkpoint file: lora_A , lora_B , lora_embedding_A , and lora_embedding_B . These parameters are listed in the class attribute adapter_layer_names and contain the learnable parameters, so they must be included in the checkpoint file. All the other parameters, like the rank r , are derived from the adapter_config.json and must be included there (unless the default value is used). Let’s check the state_dict of a PEFT LoRA model applied to BERT. 
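One way to produce such a state_dict is sketched below (assumes peft and transformers are installed; get_peft_model_state_dict returns only the adapter parameters, i.e. what ends up in adapter_model.safetensors):
Copied
from peft import LoraConfig, get_peft_model, get_peft_model_state_dict
from transformers import AutoModel

base_model = AutoModel.from_pretrained("bert-base-uncased")
peft_model = get_peft_model(base_model, LoraConfig())  # default LoRA settings

# only the adapter weights appear here, with the adapter name already stripped from the keys
for key in list(get_peft_model_state_dict(peft_model))[:5]:
    print(key)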
When printing the first five keys using the default LoRA settings (the remaining keys are the same, just with different layer numbers), we get: base_model.model.encoder.layer.0.attention.self.query.lora_A.weight base_model.model.encoder.layer.0.attention.self.query.lora_B.weight base_model.model.encoder.layer.0.attention.self.value.lora_A.weight base_model.model.encoder.layer.0.attention.self.value.lora_B.weight base_model.model.encoder.layer.1.attention.self.query.lora_A.weight etc. Let’s break this down: By default, for BERT models, LoRA is applied to the query and value layers of the attention module. This is why you see attention.self.query and attention.self.value in the key names for each layer. LoRA decomposes the weights into two low-rank matrices, lora_A and lora_B . This is where lora_A and lora_B come from in the key names. These LoRA matrices are implemented as nn.Linear layers, so the parameters are stored in the .weight attribute ( lora_A.weight , lora_B.weight ). By default, LoRA isn’t applied to BERT’s embedding layer, so there are no entries for lora_A_embedding and lora_B_embedding . The keys of the state_dict always start with "base_model.model." . The reason is that, in PEFT, we wrap the base model inside a tuner-specific model ( LoraModel in this case), which itself is wrapped in a general PEFT model ( PeftModel ). For this reason, these two prefixes are added to the keys. When converting to the PEFT format, it is required to add these prefixes. This last point is not true for prefix tuning techniques like prompt tuning. There, the extra embeddings are directly stored in the state_dict without any prefixes added to the keys. When inspecting the parameter names in the loaded model, you might be surprised to find that they look a bit different, e.g. base_model.model.encoder.layer.0.attention.self.query.lora_A.default.weight . The difference is the .default part in the second to last segment. This part exists because PEFT generally allows the addition of multiple adapters at once (using an nn.ModuleDict or nn.ParameterDict to store them). For example, if you add another adapter called “other”, the key for that adapter would be base_model.model.encoder.layer.0.attention.self.query.lora_A.other.weight . When you call save_pretrained() , the adapter name is stripped from the keys. The reason is that the adapter name is not an important part of the model architecture; it is just an arbitrary name. When loading the adapter, you could choose a totally different name, and the model would still work the same way. This is why the adapter name is not stored in the checkpoint file. If you call save_pretrained("some/path") and the adapter name is not "default" , the adapter is stored in a sub-directory with the same name as the adapter. So if the name is “other”, it would be stored inside of some/path/other . In some circumstances, deciding which values to add to the checkpoint file can become a bit more complicated. For example, in PEFT, DoRA is implemented as a special case of LoRA. If you want to convert a DoRA model to PEFT, you should create a LoRA checkpoint with extra entries for DoRA. You can see this in the __init__ of the previous LoraLayer code: Copied self.lora_magnitude_vector: Optional [torch.nn.ParameterDict] = None # for DoRA This indicates that there is an optional extra parameter per layer for DoRA. adapter_config All the other information needed to load a PEFT model is contained in the adapter_config.json file. 
Let’s check this file for a LoRA model applied to BERT: Copied { "alpha_pattern" : { } , "auto_mapping" : { "base_model_class" : "BertModel" , "parent_library" : "transformers.models.bert.modeling_bert" } , "base_model_name_or_path" : "bert-base-uncased" , "bias" : "none" , "fan_in_fan_out" : false , "inference_mode" : true , "init_lora_weights" : true , "layer_replication" : null , "layers_pattern" : null , "layers_to_transform" : null , "loftq_config" : { } , "lora_alpha" : 8 , "lora_dropout" : 0.0 , "megatron_config" : null , "megatron_core" : "megatron.core" , "modules_to_save" : null , "peft_type" : "LORA" , "r" : 8 , "rank_pattern" : { } , "revision" : null , "target_modules" : [ "query" , "value" ] , "task_type" : null , "use_dora" : false , "use_rslora" : false } This contains a lot of entries, and at first glance, it could feel overwhelming to figure out all the right values to put in there. However, most of the entries are not necessary to load the model. This is either because they use the default values and don’t need to be added or because they only affect the initialization of the LoRA weights, which is irrelevant when it comes to loading the model. If you find that you don’t know what a specific parameter does, e.g., "use_rslora", don’t add it, and you should be fine. Also note that as more options are added, this file will get more entries in the future, but it should be backward compatible. At the minimum, you should include the following entries: Copied { "target_modules" : [ "query" , "value" ] , "peft_type" : "LORA" } However, adding as many entries as possible, like the rank r or the base_model_name_or_path (if it’s a Transformers model) is recommended. This information can help others understand the model better and share it more easily. To check which keys and values are expected, check out the config.py file (as an example, this is the config file for LoRA) in the PEFT source code. Model storage In some circumstances, you might want to store the whole PEFT model, including the base weights. This can be necessary if, for instance, the base model is not available to the users trying to load the PEFT model. You can merge the weights first or convert it into a Transformer model. Merge the weights The most straightforward way to store the whole PEFT model is to merge the adapter weights into the base weights: Copied merged_model = model.merge_and_unload() merged_model.save_pretrained(...) There are some disadvantages to this approach, though: Once merge_and_unload() is called, you get a basic model without any PEFT-specific functionality. This means you can’t use any of the PEFT-specific methods anymore. You cannot unmerge the weights, load multiple adapters at once, disable the adapter, etc. Not all PEFT methods support merging weights. Some PEFT methods may generally allow merging, but not with specific settings (e.g. when using certain quantization techniques). The whole model will be much larger than the PEFT model, as it will contain all the base weights as well. But inference with a merged model should be a bit faster. Convert to a Transformers model Another way to save the whole model, assuming the base model is a Transformers model, is to use this hacky approach to directly insert the PEFT weights into the base model and save it, which only works if you “trick” Transformers into believing the PEFT model is not a PEFT model. This only works with LoRA because other adapters are not implemented in Transformers. Copied model = ... # the PEFT model ... 
# after you finish training the model, save it in a temporary location model.save_pretrained(<temp_location>) # now load this model directly into a transformers model, without the PEFT wrapper # the PEFT weights are directly injected into the base model model_loaded = AutoModel.from_pretrained(<temp_location>) # now make the loaded model believe that it is _not_ a PEFT model model_loaded._hf_peft_config_loaded = False # now when we save it, it will save the whole model model_loaded.save_pretrained(<final_location>) # or upload to Hugging Face Hub model_loaded.push_to_hub(<final_location>) < > Update on GitHub ← Troubleshooting DeepSpeed → PEF T checkpoint format PEF T files Convert to PEF T format adapter_model adapter_config Model storage Merge the weights Convert to a Transformers model |
Using_MLX_at_Hugging_Face.txt
Using MLX at Hugging Face
MLX is a model training and serving framework for Apple silicon made by Apple Machine Learning Research. It comes with a variety of examples:
Generate text with MLX-LM and generating text with MLX-LM for models in GGUF format.
Large-scale text generation with LLaMA.
Fine-tuning with LoRA.
Generating images with Stable Diffusion.
Speech recognition with OpenAI’s Whisper.
Exploring MLX on the Hub
You can find MLX models by filtering at the left of the models page. There’s also an open MLX community of contributors converting and publishing weights for MLX format. Thanks to MLX Hugging Face Hub integration, you can load MLX models with a few lines of code.
Installation
MLX comes as a standalone package, and there’s a subpackage called MLX-LM with Hugging Face integration for Large Language Models. To install MLX-LM, you can use the following one-line install through pip:
pip install mlx-lm
You can get more information about it here. If you install mlx-lm, you don’t need to install mlx. If you don’t want to use mlx-lm but only MLX, you can install MLX itself as follows.
With pip:
pip install mlx
With conda:
conda install -c conda-forge mlx
Using Existing Models
MLX-LM has useful utilities to generate text. The following line directly downloads and loads the model and starts generating text.
python -m mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.2 --prompt "hello"
For a full list of generation options, run
python -m mlx_lm.generate --help
You can also load a model and start generating text through Python like below:
from mlx_lm import load, generate

model, tokenizer = load("mistralai/Mistral-7B-Instruct-v0.2")
response = generate(model, tokenizer, prompt="hello", verbose=True)
MLX-LM supports popular LLM architectures including LLaMA, Phi-2, Mistral, and Qwen. Models other than supported ones can easily be downloaded as follows:
pip install huggingface_hub hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir <LOCAL FOLDER PATH> <USER_ID>/<MODEL_NAME>
Converting and Sharing Models
You can convert, and optionally quantize, LLMs from the Hugging Face Hub as follows:
python -m mlx_lm.convert --hf-path mistralai/Mistral-7B-v0.1 -q
If you want to directly push the model after the conversion, you can do it like below.
python -m mlx_lm.convert \
    --hf-path mistralai/Mistral-7B-v0.1 \
    -q \
    --upload-repo <USER_ID>/<MODEL_NAME>
Additional Resources
MLX Repository
MLX Docs
MLX Examples
MLX-LM
All MLX models on Hub
Using_timm_at_Hugging_Face.txt
Using timm at Hugging Face
timm, also known as pytorch-image-models, is an open-source collection of state-of-the-art PyTorch image models, pretrained weights, and utility scripts for training, inference, and validation.
This documentation focuses on timm functionality in the Hugging Face Hub instead of the timm library itself. For detailed information about the timm library, visit its documentation.
You can find a number of timm models on the Hub using the filters on the left of the models page.
All models on the Hub come with several useful features:
An automatically generated model card, which model authors can complete with information about their model.
Metadata tags help users discover the relevant timm models.
An interactive widget you can use to play with the model directly in the browser.
An Inference API that allows users to make inference requests.
Using existing models from the Hub
Any timm model from the Hugging Face Hub can be loaded with a single line of code as long as you have timm installed! Once you’ve selected a model from the Hub, pass the model’s ID prefixed with hf-hub: to timm’s create_model method to download and instantiate the model.
Copied import timm # Loading https://huggingface.co/timm/eca_nfnet_l0 model = timm.create_model( "hf-hub:timm/eca_nfnet_l0" , pretrained= True ) If you want to see how to load a specific model, you can click Use in timm and you will be given a working snippet to load it! Inference The snippet below shows how you can perform inference on a timm model loaded from the Hub: Copied import timm import torch from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform # Load from Hub 🔥 model = timm.create_model( 'hf-hub:nateraw/resnet50-oxford-iiit-pet' , pretrained= True ) # Set model to eval mode for inference model. eval () # Create Transform transform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model)) # Get the labels from the model config labels = model.pretrained_cfg[ 'label_names' ] top_k = min ( len (labels), 5 ) # Use your own image file here... image = Image. open ( 'boxer.jpg' ).convert( 'RGB' ) # Process PIL image with transforms and add a batch dimension x = transform(image).unsqueeze( 0 ) # Pass inputs to model forward function to get outputs out = model(x) # Apply softmax to get predicted probabilities for each class probabilities = torch.nn.functional.softmax(out[ 0 ], dim= 0 ) # Grab the values and indices of top 5 predicted classes values, indices = torch.topk(probabilities, top_k) # Prepare a nice dict of top k predictions predictions = [ { "label" : labels[i], "score" : v.item()} for i, v in zip (indices, values) ] print (predictions) This should leave you with a list of predictions, like this: Copied [ { 'label' : 'american_pit_bull_terrier' , 'score' : 0.9999998807907104 }, { 'label' : 'staffordshire_bull_terrier' , 'score' : 1.0000000149011612e-07 }, { 'label' : 'miniature_pinscher' , 'score' : 1.0000000149011612e-07 }, { 'label' : 'chihuahua' , 'score' : 1.0000000149011612e-07 }, { 'label' : 'beagle' , 'score' : 1.0000000149011612e-07 } ] Sharing your models You can share your timm models directly to the Hugging Face Hub. This will publish a new version of your model to the Hugging Face Hub, creating a model repo for you if it doesn’t already exist. Before pushing a model, make sure that you’ve logged in to Hugging Face: Copied python -m pip install huggingface_hub huggingface-cli login Alternatively, if you prefer working from a Jupyter or Colaboratory notebook, once you’ve installed huggingface_hub you can log in with: Copied from huggingface_hub import notebook_login notebook_login() Then, push your model using the push_to_hf_hub method: Copied import timm # Build or load a model, e.g. timm's pretrained resnet18 model = timm.create_model( 'resnet18' , pretrained= True , num_classes= 4 ) ########################### # [Fine tune your model...] ########################### # Push it to the 🤗 Hub timm.models.hub.push_to_hf_hub( model, 'resnet18-random-classifier' , model_config={ 'labels' : [ 'a' , 'b' , 'c' , 'd' ]} ) # Load your model from the Hub model_reloaded = timm.create_model( 'hf-hub:<your-username>/resnet18-random-classifier' , pretrained= True ) Inference Widget and API All timm models on the Hub are automatically equipped with an inference widget , pictured below for nateraw/timm-resnet50-beans . Additionally, timm models are available through the Inference API , which you can access through HTTP with cURL, Python’s requests library, or your preferred method for making network requests. 
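For example, the same kind of request can be made from Python with the requests library. The sketch below mirrors the cURL call that follows and assumes your token is available in the HF_API_TOKEN environment variable:
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/nateraw/timm-resnet50-beans"
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

# Send the raw image bytes, mirroring cURL's --data-binary flag
with open("beans.jpeg", "rb") as f:
    response = requests.post(API_URL, headers=headers, data=f.read())

print(response.json())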
curl https://api-inference.huggingface.co/models/nateraw/timm-resnet50-beans \
    -X POST \
    --data-binary '@beans.jpeg' \
    -H "Authorization: Bearer ${HF_API_TOKEN}"
# [{"label":"angular_leaf_spot","score":0.9845947027206421},{"label":"bean_rust","score":0.01368315052241087},{"label":"healthy","score":0.001722085871733725}]
Additional resources
timm (pytorch-image-models) GitHub Repo.
timm documentation.
Additional documentation at timmdocs by Aman Arora.
Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide by Chris Hughes.
Multi-GPU_inference.txt
Multi-GPU inference
Built-in Tensor Parallelism (TP) is now available with certain models using PyTorch. Tensor parallelism shards a model onto multiple GPUs, enabling larger model sizes, and parallelizes computations such as matrix multiplication.
To enable tensor parallel, pass the argument tp_plan="auto" to from_pretrained():
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Initialize distributed
rank = int(os.environ["RANK"])
device = torch.device(f"cuda:{rank}")
torch.distributed.init_process_group("nccl", device_id=device)

# Retrieve tensor parallel model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    tp_plan="auto",
)

# Prepare input tokens
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Distributed run
outputs = model(inputs)
You can use torchrun to launch the above script with multiple processes, each mapping to a GPU:
torchrun --nproc-per-node 4 demo.py
PyTorch tensor parallel is currently supported for the following models: Llama.
You can request to add tensor parallel support for another model by opening a GitHub Issue or Pull Request.
Expected speedups
You can benefit from considerable speedups for inference, especially for inputs with large batch size or long sequences. For a single forward pass on Llama with a sequence length of 512 and various batch sizes, the expected speedup is as follows:
Livebook_on_Spaces.txt | Livebook on Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Livebook on Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Your first Docker Spaces Example Docker Spaces JupyterLab on Spaces Argilla on Spaces Livebook on Spaces Label Studio on Spaces Aim on Spaces Shiny on Spaces ZenML on Spaces ChatUI on Spaces Panel on Spaces Tabby on Spaces Giskard on Spaces Evidence on Spaces marimo on Spaces Langfuse on Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Livebook on Spaces Livebook is an open-source tool for writing interactive code notebooks in Elixir . It’s part of a growing collection of Elixir tools for numerical computing , data science , and Machine Learning . Some of Livebook’s most exciting features are: Reproducible workflows : Livebook runs your code in a predictable order, all the way down to package management Smart cells : perform complex tasks, such as data manipulation and running machine learning models, with a few clicks using Livebook’s extensible notebook cells Elixir powered : use the power of the Elixir programming language to write concurrent and distributed notebooks that scale beyond your machine To learn more about it, watch this 15-minute video . Or visit Livebook’s website . Or follow its Twitter and blog to keep up with new features and updates. Your first Livebook Space You can get Livebook up and running in a Space with just a few clicks. Click the button below to start creating a new Space using Livebook’s Docker template: Then: Give your Space a name Set the password of your Livebook Set its visibility to public Create your Space This will start building your Space using Livebook’s Docker image. 
The visibility of the Space must be set to public for the Smart cells feature in Livebook to function properly. However, your Livebook instance will be protected by Livebook authentication. Smart cell is a type of Livebook cell that provides a UI component for accomplishing a specific task. The code for the task is generated automatically based on the user's interactions with the UI, allowing for faster completion of high-level tasks without writing code from scratch. Once the app build is finished, go to the “App” tab in your Space and log in to your Livebook using the password you previously set: That’s it! Now you can start using Livebook inside your Space. If this is your first time using Livebook, you can learn how to use it with its interactive notebooks within Livebook itself: Livebook integration with Hugging Face Models Livebook has an official integration with Hugging Face models . With this feature, you can run various Machine Learning models within Livebook with just a few clicks. Here’s a quick video showing how to do that: How to update Livebook’s version To update Livebook to its latest version, go to the Settings page of your Space and click on “Factory reboot this Space”: Caveats The following caveats apply to running Livebook inside a Space: The Space’s visibility setting must be public. Otherwise, Smart cells won’t work. That said, your Livebook instance will still be behind Livebook authentication since you’ve set the LIVEBOOK_PASSWORD secret. Livebook global configurations will be lost once the Space restarts. Consider using the desktop app if you find yourself in need of persisting configuration across deployments. Feedback and support If you have improvement suggestions or need specific support, please join the Livebook community on GitHub . < > Update on GitHub ← Argilla on Spaces Label Studio on Spaces → Livebook on Spaces Your first Livebook Space Livebook integration with Hugging Face Models How to update Livebook’s version Caveats Feedback and support |
Add_custom_Dependencies.txt
Add custom Dependencies
Inference Endpoints’ base image includes all required libraries to run inference on Transformers models, but it also supports custom dependencies. This is useful if you want to:
customize your inference pipeline and need additional Python dependencies
run a model which requires special dependencies like the newest or a fixed version of a library (for example, tapas (torch-scatter)).
To add custom dependencies, add a requirements.txt file with the Python dependencies you want to install in your model repository on the Hugging Face Hub. When your Endpoint and Image artifacts are created, Inference Endpoints checks if the model repository contains a requirements.txt file and installs the dependencies listed within.
optimum[onnxruntime]==1.2.3
mkl-include
mkl
Check out the requirements.txt files in the following model repositories for examples:
Optimum and onnxruntime
diffusers
For more information, take a look at how you can create and install dependencies when you use your own custom container for inference.
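As a convenience, the requirements.txt file can also be added to the model repository programmatically with the huggingface_hub library. This is only a sketch, and the repository name below is a placeholder for your own repository:
from huggingface_hub import HfApi

api = HfApi()

# Upload a local requirements.txt into the root of the model repository
# ("your-username/your-model" is a placeholder)
api.upload_file(
    path_or_fileobj="requirements.txt",
    path_in_repo="requirements.txt",
    repo_id="your-username/your-model",
    repo_type="model",
)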
Fully_Sharded_Data_Parallel_utilities.txt
Fully Sharded Data Parallel utilities
enable_fsdp_ram_efficient_loading
accelerate.utils.enable_fsdp_ram_efficient_loading()
Enables RAM efficient loading of Hugging Face models for FSDP in the environment.
disable_fsdp_ram_efficient_loading
accelerate.utils.disable_fsdp_ram_efficient_loading()
Disables RAM efficient loading of Hugging Face models for FSDP in the environment.
merge_fsdp_weights
accelerate.utils.merge_fsdp_weights(checkpoint_dir: str, output_path: str, safe_serialization: bool = True, remove_checkpoint_dir: bool = False)
Parameters
checkpoint_dir (str) — The directory containing the FSDP checkpoints (can be either the model or optimizer).
output_path (str) — The path to save the merged checkpoint.
safe_serialization (bool, optional, defaults to True) — Whether to save the merged weights with safetensors (recommended).
remove_checkpoint_dir (bool, optional, defaults to False) — Whether to remove the checkpoint directory after merging.
Merge the weights from sharded FSDP model checkpoints into a single combined checkpoint. Should be used if SHARDED_STATE_DICT was used for the model. Weights will be saved to {output_path}/model.safetensors if safe_serialization else pytorch_model.bin . Note: this is a CPU-bound process. FullyShardedDataParallelPlugin class accelerate. FullyShardedDataParallelPlugin < source > ( sharding_strategy : typing.Union[str, ForwardRef('torch.distributed.fsdp.ShardingStrategy')] = None backward_prefetch : typing.Union[str, ForwardRef('torch.distributed.fsdp.BackwardPrefetch')] = None mixed_precision_policy : typing.Union[dict, ForwardRef('torch.distributed.fsdp.MixedPrecision'), NoneType] = None auto_wrap_policy : typing.Union[typing.Callable, typing.Literal['transformer_based_wrap', 'size_based_wrap', 'no_wrap'], NoneType] = None cpu_offload : typing.Union[bool, ForwardRef('torch.distributed.fsdp.CPUOffload')] = None ignored_modules : typing.Optional[typing.Iterable[torch.nn.modules.module.Module]] = None state_dict_type : typing.Union[str, ForwardRef('torch.distributed.fsdp.StateDictType')] = None state_dict_config : typing.Union[ForwardRef('torch.distributed.fsdp.FullStateDictConfig'), ForwardRef('torch.distributed.fsdp.ShardedStateDictConfig'), NoneType] = None optim_state_dict_config : typing.Union[ForwardRef('torch.distributed.fsdp.FullOptimStateDictConfig'), ForwardRef('torch.distributed.fsdp.ShardedOptimStateDictConfig'), NoneType] = None limit_all_gathers : bool = True use_orig_params : bool = None param_init_fn : typing.Optional[typing.Callable[[torch.nn.modules.module.Module], NoneType]] = None sync_module_states : bool = None forward_prefetch : bool = None activation_checkpointing : bool = None cpu_ram_efficient_loading : bool = None transformer_cls_names_to_wrap : typing.Optional[typing.List[str]] = None min_num_params : typing.Optional[int] = None ) Parameters sharding_strategy ( Union[str, torch.distributed.fsdp.ShardingStrategy] , defaults to 'FULL_SHARD' ) — Sharding strategy to use. Should be either a str or an instance of torch.distributed.fsdp.fully_sharded_data_parallel.ShardingStrategy . backward_prefetch ( Union[str, torch.distributed.fsdp.BackwardPrefetch] , defaults to 'NO_PREFETCH' ) — Backward prefetch strategy to use. Should be either a str or an instance of torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch . mixed_precision_policy ( Optional[Union[dict, torch.distributed.fsdp.MixedPrecision]] , defaults to None ) — A config to enable mixed precision training with FullyShardedDataParallel. If passing in a dict , it should have the following keys: param_dtype , reduce_dtype , and buffer_dtype . auto_wrap_policy ( Optional(Union[Callable, Literal["transformer_based_wrap", "size_based_wrap", "no_wrap"]]), defaults to NO_WRAP ) -- A callable or string specifying a policy to recursively wrap layers with FSDP. If a string, it must be one of transformer_based_wrap , size_based_wrap , or no_wrap . See torch.distributed.fsdp.wrap.size_based_wrap_policy` for a direction on what it should look like. cpu_offload ( Union[bool, torch.distributed.fsdp.CPUOffload] , defaults to False ) — Whether to offload parameters to CPU. Should be either a bool or an instance of torch.distributed.fsdp.fully_sharded_data_parallel.CPUOffload . ignored_modules ( Optional[Iterable[torch.nn.Module]] , defaults to None ) — A list of modules to ignore when wrapping with FSDP. 
state_dict_type ( Union[str, torch.distributed.fsdp.StateDictType] , defaults to 'FULL_STATE_DICT' ) — State dict type to use. If a string, it must be one of full_state_dict , local_state_dict , or sharded_state_dict . state_dict_config ( Optional[Union[torch.distributed.fsdp.FullStateDictConfig, torch.distributed.fsdp.ShardedStateDictConfig] , defaults to None ) — State dict config to use. Is determined based on the state_dict_type if not passed in. optim_state_dict_config ( Optional[Union[torch.distributed.fsdp.FullOptimStateDictConfig, torch.distributed.fsdp.ShardedOptimStateDictConfig] , defaults to None ) — Optim state dict config to use. Is determined based on the state_dict_type if not passed in. limit_all_gathers ( bool , defaults to True ) — Whether to have FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers. This bool only affects the sharded strategies that schedule all-gathers. Enabling this can help lower the number of CUDA malloc retries. use_orig_params ( bool , defaults to False ) — Whether to use the original parameters for the optimizer. param_init_fn ( Optional[Callable[[torch.nn.Module], None] , defaults to None ) — A Callable[torch.nn.Module] -> None that specifies how modules that are currently on the meta device should be initialized onto an actual device. Only applicable when sync_module_states is True . By default is a lambda which calls to_empty on the module. sync_module_states ( bool , defaults to False ) — Whether each individually wrapped FSDP unit should broadcast module parameters from rank 0 to ensure they are the same across all ranks after initialization. Defaults to False unless cpu_ram_efficient_loading is True , then will be forcibly enabled. forward_prefetch ( bool , defaults to False ) — Whether to have FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. only use with Static graphs. activation_checkpointing ( bool , defaults to False ) — A technique to reduce memory usage by clearing activations of certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time for reduced memory usage. cpu_ram_efficient_loading ( bool , defaults to None ) — If True, only the first process loads the pretrained model checkoint while all other processes have empty weights. Only applicable for Transformers. When using this, sync_module_states needs to be True . transformer_cls_names_to_wrap ( Optional[List[str]] , defaults to None ) — A list of transformer layer class names to wrap. Only applicable when auto_wrap_policy is transformer_based_wrap . min_num_params ( Optional[int] , defaults to None ) — The minimum number of parameters a module must have to be wrapped. Only applicable when auto_wrap_policy is size_based_wrap . This plugin is used to enable fully sharded data parallelism. set_auto_wrap_policy < source > ( model ) Given model , creates an auto_wrap_policy baesd on the passed in policy and if we can use the transformer_cls_to_wrap set_mixed_precision < source > ( mixed_precision buffer_autocast = False override = False ) Sets the mixed precision policy for FSDP set_state_dict_type < source > ( state_dict_type = None ) Set the state dict config based on the StateDictType . < > Update on GitHub ← Megatron-LM utilities Fully Sharded Data Parallel utilities enable_fsdp_ram_efficient_loading disable_fsdp_ram_efficient_loading merge_fsdp_weights Fully Sharded Data Parallel Plugin |
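To tie the utilities above together, here is a hedged sketch of how the plugin and merge_fsdp_weights might be used around an Accelerate training script. The transformer class name to wrap and the checkpoint paths are placeholders, not values prescribed by the library:
from accelerate import Accelerator, FullyShardedDataParallelPlugin
from accelerate.utils import merge_fsdp_weights

# Configure FSDP with sharded state dicts (the wrap class name is a placeholder)
fsdp_plugin = FullyShardedDataParallelPlugin(
    sharding_strategy="FULL_SHARD",
    auto_wrap_policy="transformer_based_wrap",
    transformer_cls_names_to_wrap=["LlamaDecoderLayer"],
    state_dict_type="SHARDED_STATE_DICT",
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

# ... prepare the model/optimizer/dataloaders, train, and save sharded checkpoints,
# e.g. with accelerator.save_state("checkpoints/step_1000") ...

# Afterwards, merge the sharded model checkpoint into a single safetensors file
# (the checkpoint sub-directory name depends on how the state was saved)
merge_fsdp_weights(
    checkpoint_dir="checkpoints/step_1000/pytorch_model_fsdp_0",
    output_path="merged_checkpoint",
    safe_serialization=True,
)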
Sentiment_Tuning_Examples.txt
Sentiment Tuning Examples
The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as lvwerra/distilbert-imdb).
Here’s an overview of the notebooks and scripts in the trl repository:
examples/scripts/ppo.py: This script shows how to use the PPOTrainer to fine-tune a sentiment analysis model using the IMDB dataset.
examples/notebooks/gpt2-sentiment.ipynb: This notebook demonstrates how to reproduce the GPT2 IMDB sentiment tuning example on a Jupyter notebook.
examples/notebooks/gpt2-control.ipynb: This notebook demonstrates how to reproduce the GPT2 sentiment control example on a Jupyter notebook.
Usage
# 1. run directly
python examples/scripts/ppo.py
# 2. run via `accelerate` (recommended), enabling more features (e.g., multiple GPUs, deepspeed)
accelerate config # will prompt you to define the training configuration
accelerate launch examples/scripts/ppo.py # launches training
# 3. get help text and documentation
python examples/scripts/ppo.py --help
# 4. configure logging with wandb and, say, mini_batch_size=1 and gradient_accumulation_steps=16
python examples/scripts/ppo.py --log_with wandb --mini_batch_size 1 --gradient_accumulation_steps 16
Note: if you don’t want to log with wandb remove log_with="wandb" in the scripts/notebooks. You can also replace it with your favourite experiment tracker that’s supported by accelerate.
Few notes on multi-GPU
To run in a multi-GPU setup with DDP (Distributed Data Parallel), change the device_map value to device_map={"": Accelerator().process_index} and make sure to run your script with accelerate launch yourscript.py. If you want to apply naive pipeline parallelism you can use device_map="auto".
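As a concrete illustration of the DDP note above, this is a sketch of what the model loading line might look like. The model name is only an example, and the exact way the example script constructs its model may differ between trl versions:
from accelerate import Accelerator
from trl import AutoModelForCausalLMWithValueHead

# One process per GPU under DDP: pin the whole model to this process's device
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "lvwerra/gpt2-imdb",  # illustrative checkpoint
    device_map={"": Accelerator().process_index},
)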
Using_SetFit_with_Hugging_Face.txt
Using SetFit with Hugging Face
SetFit is an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers. It achieves high accuracy with little labeled data - for instance, with only 8 labeled examples per class on the Customer Reviews sentiment dataset, SetFit is competitive with fine-tuning RoBERTa Large on the full training set of 3k examples 🤯!
Compared to other few-shot learning methods, SetFit has several unique features:
🗣 No prompts or verbalizers: Current techniques for few-shot fine-tuning require handcrafted prompts or verbalizers to convert examples into a format suitable for the underlying language model. SetFit dispenses with prompts altogether by generating rich embeddings directly from text examples.
🏎 Fast to train: SetFit doesn’t require large-scale models like T0 or GPT-3 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.
🌎 Multilingual support: SetFit can be used with any Sentence Transformer on the Hub, which means you can classify text in multiple languages by simply fine-tuning a multilingual checkpoint.
Exploring SetFit on the Hub You can find SetFit models by filtering at the left of the models page . All models on the Hub come with these useful features: An automatically generated model card with a brief description. An interactive widget you can use to play with the model directly in the browser. An Inference API that allows you to make inference requests. Installation To get started, you can follow the SetFit installation guide . You can also use the following one-line install through pip: Copied pip install -U setfit Using existing models All setfit models can easily be loaded from the Hub. Copied from setfit import SetFitModel model = SetFitModel.from_pretrained( "tomaarsen/setfit-paraphrase-mpnet-base-v2-sst2-8-shot" ) Once loaded, you can use SetFitModel.predict to perform inference. Copied model.predict( "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris." ) Copied [ 'positive' , 'negative' ] If you want to load a specific SetFit model, you can click Use in SetFit and you will be given a working snippet! Additional resources All SetFit models available on the Hub SetFit repository SetFit docs SetFit paper < > Update on GitHub ← Sentence Transformers spaCy → Using Set Fit with Hugging Face Exploring Set Fit on the Hub Installation Using existing models Additional resources |
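For completeness, here is a hedged sketch of the few-shot fine-tuning workflow that produces checkpoints like the one loaded above. It follows the SetFitTrainer interface from earlier setfit releases (newer releases expose a Trainer/TrainingArguments API instead), and the dataset, base checkpoint, and sample count are only illustrative:
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# Simulate a few-shot setting: 8 labeled examples per class from SST-2
dataset = load_dataset("sst2")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_dataset = dataset["validation"]

# Start from any Sentence Transformer checkpoint on the Hub
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    column_mapping={"sentence": "text", "label": "label"},  # SST-2 stores text in "sentence"
)
trainer.train()
print(trainer.evaluate())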
Load_image_data.txt
Load image data
Image datasets have Image type columns, which contain PIL objects.
To work with image datasets, you need to have the vision dependency installed. Check out the installation guide to learn how to install it.
When you load an image dataset and call the image column, the images are decoded as PIL Images:
>>> from datasets import load_dataset, Image
>>> dataset = load_dataset("beans", split="train")
>>> dataset[0]["image"]
Index into an image dataset using the row index first and then the image column - dataset[0]["image"] - to avoid decoding and resampling all the image objects in the dataset. Otherwise, this can be a slow and time-consuming process if you have a large dataset.
For a guide on how to load any type of dataset, take a look at the general loading guide.
Local files
You can load a dataset from the image path.
Use the cast_column() function to accept a column of image file paths, and decode it into a PIL image with the Image feature: Copied >>> from datasets import Dataset, Image >>> dataset = Dataset.from_dict({ "image" : [ "path/to/image_1" , "path/to/image_2" , ..., "path/to/image_n" ]}).cast_column( "image" , Image()) >>> dataset[ 0 ][ "image" ] <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x215 at 0x15E6D7160 >] If you only want to load the underlying path to the image dataset without decoding the image object, set decode=False in the Image feature: Copied >>> dataset = load_dataset( "beans" , split= "train" ).cast_column( "image" , Image(decode= False )) >>> dataset[ 0 ][ "image" ] { 'bytes' : None , 'path' : '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/bean_rust/bean_rust_train.29.jpg' } ImageFolder You can also load a dataset with an ImageFolder dataset builder which does not require writing a custom dataloader. This makes ImageFolder ideal for quickly creating and loading image datasets with several thousand images for different vision tasks. Your image dataset structure should look like this: Copied folder /train/ dog/golden_retriever.png folder /train/ dog/german_shepherd.png folder /train/ dog/chihuahua.png folder /train/ cat/maine_coon.png folder /train/ cat/bengal.png folder /train/ cat/birman.png Load your dataset by specifying imagefolder and the directory of your dataset in data_dir : Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "imagefolder" , data_dir= "/path/to/folder" ) >>> dataset[ "train" ][ 0 ] { "image" : <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x215 at 0x15E6D7160 >, "label" : 0 } >>> dataset[ "train" ][- 1 ] { "image" : <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x215 at 0x15E8DAD30 >, "label" : 1 } Load remote datasets from their URLs with the data_files parameter: Copied >>> dataset = load_dataset( "imagefolder" , data_files= "https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_5340.zip" , split= "train" ) Some datasets have a metadata file ( metadata.csv / metadata.jsonl ) associated with it, containing other information about the data like bounding boxes, text captions, and labels. The metadata is automatically loaded when you call load_dataset() and specify imagefolder . To ignore the information in the metadata file, set drop_labels=False in load_dataset() , and allow ImageFolder to automatically infer the label name from the directory name: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "imagefolder" , data_dir= "/path/to/folder" , drop_labels= False ) For more information about creating your own ImageFolder dataset, take a look at the Create an image dataset guide. WebDataset The WebDataset format is based on a folder of TAR archives and is suitable for big image datasets. Because of their size, WebDatasets are generally loaded in streaming mode (using streaming=True ). You can load a WebDataset like this: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "webdataset" , data_dir= "/path/to/folder" , streaming= True ) < > Update on GitHub ← Create an audio dataset Process image data → Load image data Local files Image Folder Web Dataset |
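Since a streamed WebDataset behaves like an iterable dataset, a short sketch of how you might peek at the first few examples follows; the folder path is a placeholder:
from datasets import load_dataset

# Streaming avoids downloading the full TAR archives up front (path is a placeholder)
dataset = load_dataset("webdataset", data_dir="/path/to/folder", split="train", streaming=True)

# Iterate lazily over the first few examples and inspect their fields
for example in dataset.take(3):
    print(example.keys())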
What_🤗_Transformers_can_do.txt | What 🤗 Transformers can do
🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind it. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the following sections!). This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code! Audio Audio and speech processing tasks are a little different from the other modalities mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher, and the audio more closely resembles the original audio source. Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio and speech processing tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features. Audio classification Audio classification is a task that labels audio data from a predefined set of classes.
It is a broad category with many specific applications, some of which include: acoustic scene classification: label audio with a scene label (“office”, “beach”, “stadium”) acoustic event detection: label audio with a sound event label (“car horn”, “whale calling”, “glass breaking”) tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting) music classification: label music with a genre label (“metal”, “hip-hop”, “country”) Copied >>> from transformers import pipeline >>> classifier = pipeline(task= "audio-classification" , model= "superb/hubert-base-superb-er" ) >>> preds = classifier( "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac" ) >>> preds = [{ "score" : round (pred[ "score" ], 4 ), "label" : pred[ "label" ]} for pred in preds] >>> preds [{ 'score' : 0.4532 , 'label' : 'hap' }, { 'score' : 0.3622 , 'label' : 'sad' }, { 'score' : 0.0943 , 'label' : 'neu' }, { 'score' : 0.0903 , 'label' : 'ang' }] Automatic speech recognition Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in “smart” technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather. But one of the key challenges Transformer architectures have helped with is in low-resource languages. By pretraining on large amounts of speech data, finetuning the model on only one hour of labeled speech data in a low-resource language can still produce high-quality results compared to previous ASR systems trained on 100x more labeled data. Copied >>> from transformers import pipeline >>> transcriber = pipeline(task= "automatic-speech-recognition" , model= "openai/whisper-small" ) >>> transcriber( "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac" ) { 'text' : ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.' } Computer vision One of the first and earliest successful computer vision tasks was recognizing images of zip code numbers using a convolutional neural network (CNN) . An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image. Two general ways computer vision tasks can be solved are: Use convolutions to learn the hierarchical features of an image from low-level features to high-level abstract things. Split an image into patches and use a Transformer to gradually learn how each image patch is related to each other to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like starting out with a blurry image and then gradually bringing it into focus. Image classification Image classification labels an entire image from a predefined set of classes. 
Like most classification tasks, there are many practical use cases for image classification, some of which include: healthcare: label medical images to detect disease or monitor patient health environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires agriculture: label images of crops to monitor plant health or satellite images for land use monitoring ecology: label images of animal or plant species to monitor wildlife populations or track endangered species Copied >>> from transformers import pipeline >>> classifier = pipeline(task= "image-classification" ) >>> preds = classifier( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{ "score" : round (pred[ "score" ], 4 ), "label" : pred[ "label" ]} for pred in preds] >>> print (*preds, sep= "\n" ) { 'score' : 0.4335 , 'label' : 'lynx, catamount' } { 'score' : 0.0348 , 'label' : 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor' } { 'score' : 0.0324 , 'label' : 'snow leopard, ounce, Panthera uncia' } { 'score' : 0.0239 , 'label' : 'Egyptian cat' } { 'score' : 0.0229 , 'label' : 'tiger cat' } Object detection Unlike image classification, object detection identifies multiple objects within an image and the objects’ positions in an image (defined by the bounding box). Some example applications of object detection include: self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights remote sensing: disaster monitoring, urban planning, and weather forecasting defect detection: detect cracks or structural damage in buildings, and manufacturing defects Copied >>> from transformers import pipeline >>> detector = pipeline(task= "object-detection" ) >>> preds = detector( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{ "score" : round (pred[ "score" ], 4 ), "label" : pred[ "label" ], "box" : pred[ "box" ]} for pred in preds] >>> preds [{ 'score' : 0.9865 , 'label' : 'cat' , 'box' : { 'xmin' : 178 , 'ymin' : 154 , 'xmax' : 882 , 'ymax' : 598 }}] Image segmentation Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image because segmentation is more granular. Segmentation can detect objects at a pixel-level. There are several types of image segmentation: instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object (“dog-1”, “dog-2”) panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class and each distinct instance of an object Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. It is also useful for medical imaging, where the task’s finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes or create augmented reality experiences by overlaying objects in the real world through your camera. Copied >>> from transformers import pipeline >>> segmenter = pipeline(task= "image-segmentation" ) >>> preds = segmenter( ... 
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{ "score" : round (pred[ "score" ], 4 ), "label" : pred[ "label" ]} for pred in preds] >>> print (*preds, sep= "\n" ) { 'score' : 0.9879 , 'label' : 'LABEL_184' } { 'score' : 0.9973 , 'label' : 'snow' } { 'score' : 0.9972 , 'label' : 'cat' } Depth estimation Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles are to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings. There are two approaches to depth estimation: stereo: depths are estimated by comparing two images of the same image from slightly different angles monocular: depths are estimated from a single image Copied >>> from transformers import pipeline >>> depth_estimator = pipeline(task= "depth-estimation" ) >>> preds = depth_estimator( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) Natural language processing NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, you can represent a sequence of text as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks! Text classification Like classification tasks in any modality, text classification labels a sequence of text (it can be sentence-level, a paragraph, or a document) from a predefined set of classes. There are many practical applications for text classification, some of which include: sentiment analysis: label text according to some polarity like positive or negative which can inform and support decision-making in fields like politics, finance, and marketing content classification: label text according to some topic to help organize and filter information in news and social media feeds ( weather , sports , finance , etc.) Copied >>> from transformers import pipeline >>> classifier = pipeline(task= "sentiment-analysis" ) >>> preds = classifier( "Hugging Face is the best thing since sliced bread!" ) >>> preds = [{ "score" : round (pred[ "score" ], 4 ), "label" : pred[ "label" ]} for pred in preds] >>> preds [{ 'score' : 0.9991 , 'label' : 'POSITIVE' }] Token classification In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as tokens . Token classification assigns each token a label from a predefined set of classes. Two common types of token classification are: named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names. part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or adjective. 
POS is useful for helping translation systems understand how two identical words are grammatically different (bank as a noun versus bank as a verb). Copied >>> from transformers import pipeline >>> classifier = pipeline(task= "ner" ) >>> preds = classifier( "Hugging Face is a French company based in New York City." ) >>> preds = [ ... { ... "entity" : pred[ "entity" ], ... "score" : round (pred[ "score" ], 4 ), ... "index" : pred[ "index" ], ... "word" : pred[ "word" ], ... "start" : pred[ "start" ], ... "end" : pred[ "end" ], ... } ... for pred in preds ... ] >>> print (*preds, sep= "\n" ) { 'entity' : 'I-ORG' , 'score' : 0.9968 , 'index' : 1 , 'word' : 'Hu' , 'start' : 0 , 'end' : 2 } { 'entity' : 'I-ORG' , 'score' : 0.9293 , 'index' : 2 , 'word' : '##gging' , 'start' : 2 , 'end' : 7 } { 'entity' : 'I-ORG' , 'score' : 0.9763 , 'index' : 3 , 'word' : 'Face' , 'start' : 8 , 'end' : 12 } { 'entity' : 'I-MISC' , 'score' : 0.9983 , 'index' : 6 , 'word' : 'French' , 'start' : 18 , 'end' : 24 } { 'entity' : 'I-LOC' , 'score' : 0.999 , 'index' : 10 , 'word' : 'New' , 'start' : 42 , 'end' : 45 } { 'entity' : 'I-LOC' , 'score' : 0.9987 , 'index' : 11 , 'word' : 'York' , 'start' : 46 , 'end' : 50 } { 'entity' : 'I-LOC' , 'score' : 0.9992 , 'index' : 12 , 'word' : 'City' , 'start' : 51 , 'end' : 55 } Question answering Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support and help search engines retrieve the relevant information you’re asking for. There are two common types of question answering: extractive: given a question and some context, the answer is a span of text from the context the model must extract abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the Text2TextGenerationPipeline instead of the QuestionAnsweringPipeline shown below Copied >>> from transformers import pipeline >>> question_answerer = pipeline(task= "question-answering" ) >>> preds = question_answerer( ... question= "What is the name of the repository?" , ... context= "The name of the repository is huggingface/transformers" , ... ) >>> print ( ... f"score: { round (preds[ 'score' ], 4 )} , start: {preds[ 'start' ]} , end: {preds[ 'end' ]} , answer: {preds[ 'answer' ]} " ... ) score: 0.9327 , start: 30 , end: 54 , answer: huggingface/transformers Summarization Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and serve as a reading aid. 
Like question answering, there are two types of summarization: extractive: identify and extract the most important sentences from the original text abstractive: generate the target summary (which may include new words not in the input document) from the original text; the SummarizationPipeline uses the abstractive approach Copied >>> from transformers import pipeline >>> summarizer = pipeline(task= "summarization" ) >>> summarizer( ... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles." ... ) [{ 'summary_text' : ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .' }] Translation Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, help translate content to reach wider audiences, and even be a learning tool to help people learn a new language. Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence. In the early days, translation models were mostly monolingual, but recently, there has been increasing interest in multilingual models that can translate between many pairs of languages. Copied >>> from transformers import pipeline >>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning." >>> translator = pipeline(task= "translation" , model= "google-t5/t5-small" ) >>> translator(text) [{ 'translation_text' : "Hugging Face est une tribune communautaire de l'apprentissage des machines." }] Language modeling Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot learning. This means the model can solve tasks it wasn’t explicitly trained to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not always be accurate. There are two types of language modeling: causal: the model’s objective is to predict the next token in a sequence, and future tokens are masked Copied >>> from transformers import pipeline >>> prompt = "Hugging Face is a community-based open-source platform for machine learning." >>> generator = pipeline(task= "text-generation" ) >>> generator(prompt) # doctest: +SKIP masked: the model’s objective is to predict a masked token in a sequence with full access to the tokens in the sequence Copied >>> text = "Hugging Face is a community-based open-source <mask> for machine learning." 
>>> fill_mask = pipeline(task= "fill-mask" ) >>> preds = fill_mask(text, top_k= 1 ) >>> preds = [ ... { ... "score" : round (pred[ "score" ], 4 ), ... "token" : pred[ "token" ], ... "token_str" : pred[ "token_str" ], ... "sequence" : pred[ "sequence" ], ... } ... for pred in preds ... ] >>> preds [{ 'score' : 0.2236 , 'token' : 1761 , 'token_str' : ' platform' , 'sequence' : 'Hugging Face is a community-based open-source platform for machine learning.' }] Multimodal Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image. Although multimodal models work with different data types or modalities, internally, the preprocessing steps help the model convert all the data types into embeddings (vectors or list of numbers that holds meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings. Document question answering Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task which takes text as input, document question answering takes an image of a document as input along with a question about the document and returns an answer. Document question answering can be used to parse structured documents and extract key information from it. In the example below, the total amount and change due can be extracted from a receipt. Copied >>> from transformers import pipeline >>> from PIL import Image >>> import requests >>> url = "https://huggingface.co/datasets/hf-internal-testing/example-documents/resolve/main/jpeg_images/2.jpg" >>> image = Image. open (requests.get(url, stream= True ).raw) >>> doc_question_answerer = pipeline( "document-question-answering" , model= "magorshunov/layoutlm-invoices" ) >>> preds = doc_question_answerer( ... question= "What is the total amount?" , ... image=image, ... ) >>> preds [{ 'score' : 0.8531 , 'answer' : '17,000' , 'start' : 4 , 'end' : 4 }] Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next section , you’ll learn how 🤗 Transformers work to solve these tasks. < > Update on GitHub ← Glossary How 🤗 Transformers solve tasks → What 🤗 Transformers can do Audio Audio classification Automatic speech recognition Computer vision Image classification Object detection Image segmentation Depth estimation Natural language processing Text classification Token classification Question answering Summarization Translation Language modeling Multimodal Document question answering |
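As a small addition to the Multimodal section above, image captioning itself can be tried through the image-to-text pipeline. This is a minimal sketch: the pipeline falls back to a default captioning model unless you pass one explicitly, and the image URL is simply reused from the earlier examples.

from transformers import pipeline

# Generate a short natural-language caption for an image.
captioner = pipeline(task="image-to-text")
captioner("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
# [{'generated_text': '...'}]  # the exact caption depends on the default model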
GGUF.txt | GGUF The GGUF file format is typically used to store models for inference with GGML and supports a variety of block-wise quantization options. Diffusers supports loading checkpoints prequantized and saved in the GGUF format via from_single_file loading with Model classes. Loading GGUF checkpoints via Pipelines is currently not supported. The following example will load the FLUX.1 DEV transformer model using the GGUF Q2_K quantization variant.
Before starting, please install gguf in your environment: Copied pip install -U gguf Since GGUF is a single-file format, use ~FromSingleFileMixin.from_single_file to load the model and pass in the GGUFQuantizationConfig . When using GGUF checkpoints, the quantized weights remain in a low-memory dtype (typically torch.uint8 ) and are dynamically dequantized and cast to the configured compute_dtype during each module's forward pass through the model. The GGUFQuantizationConfig allows you to set the compute_dtype . The functions used for dynamic dequantization are based on the great work done by city96 , who created the PyTorch ports of the original numpy implementation by compilade . Copied import torch from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig ckpt_path = ( "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf" ) transformer = FluxTransformer2DModel.from_single_file( ckpt_path, quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16), torch_dtype=torch.bfloat16, ) pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev" , transformer=transformer, torch_dtype=torch.bfloat16, ) pipe.enable_model_cpu_offload() prompt = "A cat holding a sign that says hello world" image = pipe(prompt, generator=torch.manual_seed( 0 )).images[ 0 ] image.save( "flux-gguf.png" ) Supported Quantization Types BF16 Q4_0 Q4_1 Q5_0 Q5_1 Q8_0 Q2_K Q3_K Q4_K Q5_K Q6_K |
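As a quick sanity check on the example above, you can inspect how much memory the GGUF-quantized transformer actually occupies. This sketch assumes the get_memory_footprint helper is available on the loaded Diffusers model class; treat that as an assumption rather than something this page documents.

# Reuses the `transformer` object created in the example above.
footprint_gib = transformer.get_memory_footprint() / 1024**3  # assumed helper, returns bytes
print(f"Quantized transformer weights: {footprint_gib:.2f} GiB")

Because the weights stay in their low-memory dtype until each forward pass, the reported footprint is much smaller than an unquantized bfloat16 checkpoint of the same model.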
Preprocess.txt | Preprocess In addition to loading datasets, 🤗 Datasets' other main goal is to offer a diverse set of preprocessing functions to get a dataset into an appropriate format for training with your machine learning framework. There are many possible ways to preprocess a dataset, and it all depends on your specific dataset. Sometimes you may need to rename a column, and other times you might need to unflatten nested fields. 🤗 Datasets provides a way to do most of these things. But in nearly all preprocessing cases, depending on your dataset modality, you'll need to: Tokenize a text dataset. Resample an audio dataset. Apply transforms to an image dataset. The last preprocessing step is usually setting your dataset format to be compatible with your machine learning framework's expected input format. In this tutorial, you'll also need to install the 🤗 Transformers library: Copied pip install transformers Grab a dataset of your choice and follow along! Tokenize text Models cannot process raw text, so you'll need to convert the text into numbers. Tokenization provides a way to do this by dividing text into individual words called tokens . Tokens are finally converted to numbers.
Check out the Tokenizers section in Chapter 2 of the Hugging Face course to learn more about tokenization and different tokenization algorithms. 1 . Start by loading the rotten_tomatoes dataset and the tokenizer corresponding to a pretrained BERT model. Using the same tokenizer as the pretrained model is important because you want to make sure the text is split in the same way. Copied >>> from transformers import AutoTokenizer >>> from datasets import load_dataset >>> tokenizer = AutoTokenizer.from_pretrained( "bert-base-uncased" ) >>> dataset = load_dataset( "rotten_tomatoes" , split= "train" ) 2 . Call your tokenizer on the first row of text in the dataset: Copied >>> tokenizer(dataset[ 0 ][ "text" ]) { 'input_ids' : [ 101 , 1103 , 2067 , 1110 , 17348 , 1106 , 1129 , 1103 , 6880 , 1432 , 112 , 188 , 1207 , 107 , 14255 , 1389 , 107 , 1105 , 1115 , 1119 , 112 , 188 , 1280 , 1106 , 1294 , 170 , 24194 , 1256 , 3407 , 1190 , 170 , 11791 , 5253 , 188 , 1732 , 7200 , 10947 , 12606 , 2895 , 117 , 179 , 7766 , 118 , 172 , 15554 , 1181 , 3498 , 6961 , 3263 , 1137 , 188 , 1566 , 7912 , 14516 , 6997 , 119 , 102 ], 'token_type_ids' : [ 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ], 'attention_mask' : [ 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ]} The tokenizer returns a dictionary with three items: input_ids : the numbers representing the tokens in the text. token_type_ids : indicates which sequence a token belongs to if there is more than one sequence. attention_mask : indicates whether a token should be masked or not. These values are actually the model inputs. 3 . The fastest way to tokenize your entire dataset is to use the map() function. This function speeds up tokenization by applying the tokenizer to batches of examples instead of individual examples. Set the batched parameter to True : Copied >>> def tokenization ( example ): ... return tokenizer(example[ "text" ]) >>> dataset = dataset. map (tokenization, batched= True ) 4 . Set the format of your dataset to be compatible with your machine learning framework: Pytorch Hide Pytorch content Use the set_format() function to set the dataset format to be compatible with PyTorch: Copied >>> dataset.set_format( type = "torch" , columns=[ "input_ids" , "token_type_ids" , "attention_mask" , "label" ]) >>> dataset. format [ 'type' ] 'torch' TensorFlow Hide TensorFlow content Use the to_tf_dataset() function to set the dataset format to be compatible with TensorFlow. You’ll also need to import a data collator from 🤗 Transformers to combine the varying sequence lengths into a single batch of equal lengths: Copied >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors= "tf" ) >>> tf_dataset = dataset.to_tf_dataset( ... columns=[ "input_ids" , "token_type_ids" , "attention_mask" ], ... label_cols=[ "label" ], ... batch_size= 2 , ... collate_fn=data_collator, ... shuffle= True ... ) 5 . The dataset is now ready for training with your machine learning framework! Resample audio signals Audio inputs like text datasets need to be divided into discrete data points. 
This is known as sampling ; the sampling rate tells you how much of the speech signal is captured per second. It is important to make sure the sampling rate of your dataset matches the sampling rate of the data used to pretrain the model you’re using. If the sampling rates are different, the pretrained model may perform poorly on your dataset because it doesn’t recognize the differences in the sampling rate. 1 . Start by loading the MInDS-14 dataset, the Audio feature, and the feature extractor corresponding to a pretrained Wav2Vec2 model: Copied >>> from transformers import AutoFeatureExtractor >>> from datasets import load_dataset, Audio >>> feature_extractor = AutoFeatureExtractor.from_pretrained( "facebook/wav2vec2-base-960h" ) >>> dataset = load_dataset( "PolyAI/minds14" , "en-US" , split= "train" ) 2 . Index into the first row of the dataset. When you call the audio column of the dataset, it is automatically decoded and resampled: Copied >>> dataset[ 0 ][ "audio" ] { 'array' : array([ 0. , 0.00024414 , - 0.00024414 , ..., - 0.00024414 , 0. , 0. ], dtype=float32), 'path' : '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav' , 'sampling_rate' : 8000 } 3 . Reading a dataset card is incredibly useful and can give you a lot of information about the dataset. A quick look at the MInDS-14 dataset card tells you the sampling rate is 8kHz. Likewise, you can get many details about a model from its model card. The Wav2Vec2 model card says it was sampled on 16kHz speech audio. This means you’ll need to upsample the MInDS-14 dataset to match the sampling rate of the model. Use the cast_column() function and set the sampling_rate parameter in the Audio feature to upsample the audio signal. When you call the audio column now, it is decoded and resampled to 16kHz: Copied >>> dataset = dataset.cast_column( "audio" , Audio(sampling_rate= 16_000 )) >>> dataset[ 0 ][ "audio" ] { 'array' : array([ 2.3443763e-05 , 2.1729663e-04 , 2.2145823e-04 , ..., 3.8356509e-05 , - 7.3497440e-06 , - 2.1754686e-05 ], dtype=float32), 'path' : '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav' , 'sampling_rate' : 16000 } 4 . Use the map() function to resample the entire dataset to 16kHz. This function speeds up resampling by applying the feature extractor to batches of examples instead of individual examples. Set the batched parameter to True : Copied >>> def preprocess_function ( examples ): ... audio_arrays = [x[ "array" ] for x in examples[ "audio" ]] ... inputs = feature_extractor( ... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length= 16000 , truncation= True ... ) ... return inputs >>> dataset = dataset. map (preprocess_function, batched= True ) 5 . The dataset is now ready for training with your machine learning framework! Apply data augmentations The most common preprocessing you’ll do with image datasets is data augmentation , a process that introduces random variations to an image without changing the meaning of the data. This can mean changing the color properties of an image or randomly cropping an image. You are free to use any data augmentation library you like, and 🤗 Datasets will help you apply your data augmentations to your dataset. 1 . 
Start by loading the Beans dataset, the Image feature, and the feature extractor corresponding to a pretrained ViT model: Copied >>> from transformers import AutoFeatureExtractor >>> from datasets import load_dataset, Image >>> feature_extractor = AutoFeatureExtractor.from_pretrained( "google/vit-base-patch16-224-in21k" ) >>> dataset = load_dataset( "beans" , split= "train" ) 2 . Index into the first row of the dataset. When you call the image column of the dataset, the underlying PIL object is automatically decoded into an image. Copied >>> dataset[ 0 ][ "image" ] <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x7FE5A047CC70 > Most image models expect the image to be in the RGB mode. The Beans images are already in the RGB mode, but if your dataset contains images in a different mode, you can use the cast_column() function to set the mode to RGB: Copied >>> dataset = dataset.cast_column( "image" , Image(mode= "RGB" )) 3 . Now, you can apply some transforms to the image. Feel free to take a look at the various transforms available in torchvision and choose one you’d like to experiment with. This example applies a transform that randomly rotates the image: Copied >>> from torchvision.transforms import RandomRotation >>> rotate = RandomRotation(degrees=( 0 , 90 )) >>> def transforms ( examples ): ... examples[ "pixel_values" ] = [rotate(image) for image in examples[ "image" ]] ... return examples 4 . Use the set_transform() function to apply the transform on-the-fly. When you index into the image pixel_values , the transform is applied, and your image gets rotated. Copied >>> dataset.set_transform(transforms) >>> dataset[ 0 ][ "pixel_values" ] 5 . The dataset is now ready for training with your machine learning framework! < > Update on GitHub ← Know your dataset Create a dataset → Preprocess Tokenize text Resample audio signals Apply data augmentations |
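The rotation example above leaves the transformed images as PIL objects. If your training loop expects tensors instead, a minimal variation (purely a sketch; the choice and order of transforms is up to you) chains the rotation with ToTensor:

from torchvision.transforms import Compose, RandomRotation, ToTensor

# Chain the augmentation with tensor conversion so "pixel_values" are ready for a PyTorch model.
transform = Compose([RandomRotation(degrees=(0, 90)), ToTensor()])

def transforms(examples):
    examples["pixel_values"] = [transform(image.convert("RGB")) for image in examples["image"]]
    return examples

dataset.set_transform(transforms)
print(dataset[0]["pixel_values"].shape)  # torch.Size([3, height, width])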
Annotated_Model_Card_Template.txt | Annotated Model Card Template Template modelcard_template.md file Directions Fully filling out a model card requires input from a few different roles. (One person may have more than one role.) We'll refer to these roles as the developer , who writes the code and runs training; the sociotechnic , who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates); and the project organizer , who understands the overall scope and reach of the model, can roughly fill out each part of the card, and who serves as a contact person for model card updates. The developer is necessary for filling out Training Procedure and Technical Specifications . They are also particularly useful for the "Limitations" section of Bias, Risks, and Limitations . They are responsible for providing Results for the Evaluation, and ideally work with the other roles to define the rest of the Evaluation: Testing Data, Factors & Metrics . The sociotechnic is necessary for filling out "Bias" and "Risks" within Bias, Risks, and Limitations , and particularly useful for "Out of Scope Use" within Uses . The project organizer is necessary for filling out Model Details and Uses . They might also fill out Training Data . Project organizers could also be in charge of Citation , Glossary , Model Card Contact , Model Card Authors , and More Information .
Instructions are provided below, in italics. Template variable names appear in monospace . Model Name Section Overview: Provide the model name and a 1-2 sentence summary of what the model is. model_id model_summary Table of Contents Section Overview: Provide this with links to each section, to enable people to easily jump around/use the file in other locations with the preserved TOC/print out the content/etc. Model Details Section Overview: This section provides basic information about what the model is, its current status, and where it came from. It should be useful for anyone who wants to reference the model. Model Description model_description Provide basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, and the creators. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section. Developed by: developers List (and ideally link to) the people who built the model. Funded by: funded_by List (and ideally link to) the funding sources that financially, computationally, or otherwise supported or enabled this model. Shared by [optional]: shared_by List (and ideally link to) the people/organization making the model available online. Model type: model_type You can name the “type” as: 1. Supervision/Learning Method 2. Machine Learning Type 3. Modality Language(s) [NLP]: language Use this field when the system uses or processes natural (human) language. License: license Name and link to the license being used. Finetuned From Model [optional]: base_model If this model has another model as its base, link to that model here. Model Sources optional Repository: repo Paper [optional]: paper Demo [optional]: demo Provide sources for the user to directly see the model and its details. Additional kinds of resources – training logs, lessons learned, etc. – belong in the More Information section. If you include one thing for this section, link to the repository. Uses Section Overview: This section addresses questions around how the model is intended to be used in different applied contexts, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. Note this section is not intended to include the license usage details. For that, link directly to the license. Direct Use direct_use Explain how the model can be used without fine-tuning, post-processing, or plugging into a pipeline. An example code snippet is recommended. Downstream Use optional downstream_use Explain how this model can be used when fine-tuned for a task or when plugged into a larger ecosystem or app. An example code snippet is recommended. Out-of-Scope Use out_of_scope_use List how the model may foreseeably be misused (used in a way it will not work for) and address what users ought not do with the model. Bias, Risks, and Limitations Section Overview: This section identifies foreseeable harms, misunderstandings, and technical and sociotechnical limitations. It also provides information on warnings and potential mitigations. Bias, risks, and limitations can sometimes be inseparable/refer to the same issues. Generally, bias and risks are sociotechnical, while limitations are technical: A bias is a stereotype or disproportionate performance (skew) for some subpopulations. A risk is a socially-relevant issue that the model might cause. 
A limitation is a likely failure mode that can be addressed following the listed Recommendations. bias_risks_limitations What are the known or foreseeable issues stemming from this model? Recommendations bias_recommendations What are recommendations with respect to the foreseeable issues? This can include everything from “downsample your image” to filtering explicit content. Training Details Section Overview: This section provides information to describe and replicate training, including the training data, the speed and size of training elements, and the environmental impact of training. This relates heavily to the Technical Specifications as well, and content here should link to that section when it is relevant to the training procedure. It is useful for people who want to learn more about the model inputs and training footprint. It is relevant for anyone who wants to know the basics of what the model is learning. Training Data training_data Write 1-2 sentences on what the training data is. Ideally this links to a Dataset Card for further information. Links to documentation related to data pre-processing or additional filtering may go here as well as in More Information . Training Procedure optional Preprocessing preprocessing Detail tokenization, resizing/rewriting (depending on the modality), etc. Speeds, Sizes, Times speeds_sizes_times Detail throughput, start/end time, checkpoint sizes, etc. Evaluation Section Overview: This section describes the evaluation protocols, what is being measured in the evaluation, and provides the results. Evaluation ideally has at least two parts, with one part looking at quantitative measurement of general performance ( Testing Data, Factors & Metrics ), such as may be done with benchmarking; and another looking at performance with respect to specific social safety issues ( Societal Impact Assessment ), such as may be done with red-teaming. You can also specify your model’s evaluation results in a structured way in the model card metadata. Results are parsed by the Hub and displayed in a widget on the model page. See https://huggingface.co/docs/hub/model-cards#evaluation-results . Testing Data, Factors & Metrics Evaluation is ideally disaggregated with respect to different factors, such as task, domain and population subgroup; and calculated with metrics that are most meaningful for foreseeable contexts of use. Equal evaluation performance across different subgroups is said to be “fair” across those subgroups; target fairness metrics should be decided based on which errors are more likely to be problematic in light of the model use. However, this section is most commonly used to report aggregate evaluation performance on different task benchmarks. Testing Data testing_data Describe testing data or link to its Dataset Card. Factors testing_factors What are the foreseeable characteristics that will influence how the model behaves? Evaluation should ideally be disaggregated across these factors in order to uncover disparities in performance. Metrics testing_metrics What metrics will be used for evaluation? Results results Results should be based on the Factors and Metrics defined above. Summary results_summary What do the results say? This can function as a kind of tl;dr for general audiences. Societal Impact Assessment optional Use this free text section to explain how this model has been evaluated for risk of societal harm, such as for child safety, NCII, privacy, and violence. 
This might take the form of answers to the following questions: Is this model safe for kids to use? Why or why not? Has this model been tested to evaluate risks pertaining to non-consensual intimate imagery (including CSEM)? Has this model been tested to evaluate risks pertaining to violent activities, or depictions of violence? What were the results? Quantitative numbers on each issue may also be provided. Model Examination optional Section Overview: This is an experimental section some developers are beginning to add, where work on explainability/interpretability may go. model_examination Environmental Impact Section Overview: Summarizes the information necessary to calculate environmental impacts such as electricity usage and carbon emissions. Hardware Type: hardware_type Hours used: hours_used Cloud Provider: cloud_provider Compute Region: cloud_region Carbon Emitted: co2_emitted Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019) . Technical Specifications optional Section Overview: This section includes details about the model objective and architecture, and the compute infrastructure. It is useful for people interested in model development. Writing this section usually requires the model developer to be directly involved. Model Architecture and Objective model_specs Compute Infrastructure compute_infrastructure Hardware hardware_requirements What are the minimum hardware requirements, e.g. processing, storage, and memory requirements? Software software Citation optional Section Overview: The developers’ preferred citation for this model. This is often a paper. BibTeX citation_bibtex APA citation_apa Glossary optional Section Overview: This section defines common terms and how metrics are calculated. glossary Clearly define terms in order to be accessible across audiences. More Information optional Section Overview: This section provides links to writing on dataset creation, technical specifications, lessons learned, and initial results. more_information Model Card Authors optional Section Overview: This section lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction. model_card_authors Model Card Contact Section Overview: Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors model_card_contact How to Get Started with the Model Section Overview: Provides a code snippet to show how to use the model. get_started_code Please cite as: Ozoani, Ezi and Gerchick, Marissa and Mitchell, Margaret. Model Card Guidebook. Hugging Face, 2022. https://huggingface.co/docs/hub/en/model-card-guidebook < > Update on GitHub ← Model Cards Carbon Emissions → Annotated Model Card Template Template Directions |
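To make the get_started_code field above a little more concrete, the snippet below shows the shape such a section often takes. The repository id is a placeholder, not part of the template, and the right Auto classes depend on the model being documented.

from transformers import AutoModel, AutoTokenizer

# Placeholder repo id: replace with the model this card documents.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")
model = AutoModel.from_pretrained("your-org/your-model")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)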
Using_Spaces_for_Organization_Cards.txt | Using Spaces for Organization Cards Organization cards are a way to describe your organization to other users. They take the form of a README.md static file, inside a Space repo named README . Please read more in the dedicated doc section . |
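If you prefer to set the card up from code rather than through the website, a minimal sketch with the huggingface_hub client looks like this. The organization name my-org and the local README.md path are placeholders, you need write access to the organization, and the static Space SDK choice is an assumption here.

from huggingface_hub import HfApi

api = HfApi()  # assumes you are already logged in, e.g. via `huggingface-cli login`

# The organization card lives in a Space repo literally named "README".
api.create_repo("my-org/README", repo_type="space", space_sdk="static", exist_ok=True)
api.upload_file(
    path_or_fileobj="README.md",   # your local organization card
    path_in_repo="README.md",
    repo_id="my-org/README",
    repo_type="space",
)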
Saving_methods.txt | Saving methods Methods for saving evaluation results: Save evaluate.save < source > ( path_or_file **data ) Parameters path_or_file ( str ) — Path or file in which to store the results. If only a folder is provided, the results file will be saved in the format "result-%Y_%m_%d-%H_%M_%S.json" . Saves results to a JSON file. Also saves system information such as the current time, the current commit hash if inside a repository, and Python system information. Example: >>> import evaluate >>> result = { "bleu" : 0.7 } >>> params = { "model" : "gpt-2" } >>> evaluate.save( "./results/" , **result, **params) |
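To read such a results file back, plain json is enough. A minimal sketch, assuming the example above was run and that the keys passed to evaluate.save() appear at the top level of the JSON:

import glob
import json

# evaluate.save() names the file with a timestamp, so pick the most recent one.
result_files = sorted(glob.glob("./results/result-*.json"))
with open(result_files[-1]) as f:
    saved = json.load(f)
print(saved["bleu"], saved["model"])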
Working_with_large_models.txt | Working with large models Dispatch and offload init_empty_weights accelerate.init_empty_weights < source > ( include_buffers : bool = None ) Parameters include_buffers ( bool , optional ) — Whether or not to also put all buffers on the meta device while initializing. A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM. Example: import torch.nn as nn from accelerate import init_empty_weights # Initialize a model with 100 billion parameters in no time and without using any RAM. with init_empty_weights(): tst = nn.Sequential(*[nn.Linear( 10000 , 10000 ) for _ in range ( 1000 )]) Any model created under this context manager has no weights. As such you can’t do something like model.to(some_device) with it. To load weights inside your empty model, see load_checkpoint_and_dispatch() . 
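A quick way to see what "empty" means here is to inspect the parameters: they exist, but they live on PyTorch's meta device and hold no data. A minimal sketch:

import torch.nn as nn
from accelerate import init_empty_weights

with init_empty_weights():
    tiny = nn.Linear(10, 10)

# Parameters are allocated on the meta device, so no real memory is used.
print(next(tiny.parameters()).device)   # meta
print(next(tiny.parameters()).is_meta)  # True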
Make sure to overwrite the default device_map param for load_checkpoint_and_dispatch() , otherwise dispatch is not called. cpu_offload accelerate.cpu_offload < source > ( model : Module execution_device : typing.Optional[torch.device] = None offload_buffers : bool = False state_dict : typing.Optional[typing.Dict[str, torch.Tensor]] = None preload_module_classes : typing.Optional[typing.List[str]] = None ) Parameters model ( torch.nn.Module ) — The model to offload. execution_device ( torch.device , optional ) — The device on which the forward pass of the model will be executed (should be a GPU). Will default to the model's first parameter device. offload_buffers ( bool , optional , defaults to False ) — Whether or not to offload the buffers with the model parameters. state_dict ( Dict[str, torch.Tensor] , optional ) — The state dict of the model that will be kept on CPU. preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. Activates full CPU offload for a model. As a result, all parameters of the model will be offloaded and only one copy of the state dict of the model will be kept. During the forward pass, parameters will be extracted from that state dict and put on the execution device passed as they are needed, then offloaded again. cpu_offload_with_hook accelerate.cpu_offload_with_hook < source > ( model : Module execution_device : typing.Union[str, torch.device, int, NoneType] = None prev_module_hook : typing.Optional[accelerate.hooks.UserCpuOffloadHook] = None ) Parameters model ( torch.nn.Module ) — The model to offload. execution_device ( str , int or torch.device , optional ) — The device on which the model should be executed. Will default to the MPS device if it’s available, then GPU 0 if there is a GPU, and finally to the CPU. prev_module_hook ( UserCpuOffloadHook , optional ) — The hook sent back by this function for a previous model in the pipeline you are running. If passed, its offload method will be called just before the forward of the model to which this hook is attached. Offloads a model on the CPU and puts it back to an execution device when executed. The difference with cpu_offload() is that the model stays on the execution device after the forward and is only offloaded again when the offload method of the returned hook is called. Useful for pipelines running a model in a loop. Example: model_1, hook_1 = cpu_offload_with_hook(model_1, cuda_device) model_2, hook_2 = cpu_offload_with_hook(model_2, cuda_device, prev_module_hook=hook_1) model_3, hook_3 = cpu_offload_with_hook(model_3, cuda_device, prev_module_hook=hook_2) hid_1 = model_1( input ) for i in range ( 50 ): # model1 is offloaded on the CPU at the first iteration, model 2 stays on the GPU for this whole loop. hid_2 = model_2(hid_1) # model2 is offloaded to the CPU just before this forward. hid_3 = model_3(hid_2) # For model3, you need to manually call the hook offload method. 
hook_3.offload() disk_offload accelerate.disk_offload < source > ( model : Module offload_dir : typing.Union[str, os.PathLike] execution_device : typing.Optional[torch.device] = None offload_buffers : bool = False preload_module_classes : typing.Optional[typing.List[str]] = None ) Parameters model ( torch.nn.Module ) — The model to offload. offload_dir ( str or os.PathLike ) — The folder in which to offload the model weights (or where the model weights are already offloaded). execution_device ( torch.device , optional ) — The device on which the forward pass of the model will be executed (should be a GPU). Will default to the model’s first parameter device. offload_buffers ( bool , optional , defaults to False ) — Whether or not to offload the buffers with the model parameters. preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. Activates full disk offload for a model. As a result, all parameters of the model will be offloaded as memory-mapped array in a given folder. During the forward pass, parameters will be accessed from that folder and put on the execution device passed as they are needed, then offloaded again. dispatch_model accelerate.dispatch_model < source > ( model : Module device_map : typing.Dict[str, typing.Union[int, str, torch.device]] main_device : typing.Optional[torch.device] = None state_dict : typing.Optional[typing.Dict[str, torch.Tensor]] = None offload_dir : typing.Union[str, os.PathLike, NoneType] = None offload_index : typing.Optional[typing.Dict[str, str]] = None offload_buffers : bool = False skip_keys : typing.Union[str, typing.List[str], NoneType] = None preload_module_classes : typing.Optional[typing.List[str]] = None force_hooks : bool = False ) Parameters model ( torch.nn.Module ) — The model to dispatch. device_map ( Dict[str, Union[str, int, torch.device]] ) — A dictionary mapping module names in the models state_dict to the device they should go to. Note that "disk" is accepted even if it’s not a proper value for torch.device . main_device ( str , int or torch.device , optional ) — The main execution device. Will default to the first device in the device_map different from "cpu" or "disk" . state_dict ( Dict[str, torch.Tensor] , optional ) — The state dict of the part of the model that will be kept on CPU. offload_dir ( str or os.PathLike ) — The folder in which to offload the model weights (or where the model weights are already offloaded). offload_index ( Dict , optional ) — A dictionary from weight name to their information ( dtype / shape or safetensors filename). Will default to the index saved in save_folder . offload_buffers ( bool , optional , defaults to False ) — Whether or not to offload the buffers with the model parameters. skip_keys ( str or List[str] , optional ) — A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. 
This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. force_hooks ( bool , optional , defaults to False ) — Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a single device. Dispatches a model according to a given device map. Layers of the model might be spread across GPUs, offloaded on the CPU or even the disk. load_checkpoint_and_dispatch accelerate.load_checkpoint_and_dispatch < source > ( model : Module checkpoint : typing.Union[str, os.PathLike] device_map : typing.Union[str, typing.Dict[str, typing.Union[int, str, torch.device]], NoneType] = None max_memory : typing.Optional[typing.Dict[typing.Union[int, str], typing.Union[int, str]]] = None no_split_module_classes : typing.Optional[typing.List[str]] = None offload_folder : typing.Union[str, os.PathLike, NoneType] = None offload_buffers : bool = False dtype : typing.Union[str, torch.dtype, NoneType] = None offload_state_dict : typing.Optional[bool] = None skip_keys : typing.Union[str, typing.List[str], NoneType] = None preload_module_classes : typing.Optional[typing.List[str]] = None force_hooks : bool = False strict : bool = False ) Parameters model ( torch.nn.Module ) — The model in which we want to load a checkpoint. checkpoint ( str or os.PathLike ) — The folder checkpoint to load. It can be: a path to a file containing a whole model state dict a path to a .json file containing the index to a sharded checkpoint a path to a folder containing a unique .index.json file and the shards of a checkpoint. device_map ( Dict[str, Union[int, str, torch.device]] , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. To have Accelerate compute the most optimized device_map automatically, set device_map="auto" . For more information about each option see here . Defaults to None, which means dispatch_model() will not be called. max_memory ( Dict , optional ) — A dictionary device identifier to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. no_split_module_classes ( List[str] , optional ) — A list of layer class names that should never be split across device (for instance any layer that has a residual connection). offload_folder ( str or os.PathLike , optional ) — If the device_map contains any value "disk" , the folder where we will offload weights. offload_buffers ( bool , optional , defaults to False ) — In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters. dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. offload_state_dict ( bool , optional ) — If True , will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. Will default to True if the device map picked contains "disk" values. skip_keys ( str or List[str] , optional ) — A list of keys to ignore when moving inputs or outputs between devices. 
preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. force_hooks ( bool , optional , defaults to False ) — Whether or not to force device hooks to be attached to the model even if all layers are dispatched to a single device. strict ( bool , optional , defaults to False ) — Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model’s state_dict. Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded and adds the various hooks that will make this model run properly (even if split across devices). Example: Copied >>> from accelerate import init_empty_weights, load_checkpoint_and_dispatch >>> from huggingface_hub import hf_hub_download >>> from transformers import AutoConfig, AutoModelForCausalLM >>> # Download the Weights >>> checkpoint = "EleutherAI/gpt-j-6B" >>> weights_location = hf_hub_download(checkpoint, "pytorch_model.bin" ) >>> # Create a model and initialize it with empty weights >>> config = AutoConfig.from_pretrained(checkpoint) >>> with init_empty_weights(): ... model = AutoModelForCausalLM.from_config(config) >>> # Load the checkpoint and dispatch it to the right devices >>> model = load_checkpoint_and_dispatch( ... model, weights_location, device_map= "auto" , no_split_module_classes=[ "GPTJBlock" ] ... ) load_checkpoint_in_model accelerate.load_checkpoint_in_model < source > ( model : Module checkpoint : typing.Union[str, os.PathLike] device_map : typing.Optional[typing.Dict[str, typing.Union[int, str, torch.device]]] = None offload_folder : typing.Union[str, os.PathLike, NoneType] = None dtype : typing.Union[str, torch.dtype, NoneType] = None offload_state_dict : bool = False offload_buffers : bool = False keep_in_fp32_modules : typing.List[str] = None offload_8bit_bnb : bool = False strict : bool = False ) Parameters model ( torch.nn.Module ) — The model in which we want to load a checkpoint. checkpoint ( str or os.PathLike ) — The folder checkpoint to load. It can be: a path to a file containing a whole model state dict a path to a .json file containing the index to a sharded checkpoint a path to a folder containing a unique .index.json file and the shards of a checkpoint. a path to a folder containing a unique pytorch_model.bin or a model.safetensors file. device_map ( Dict[str, Union[int, str, torch.device]] , optional ) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. offload_folder ( str or os.PathLike , optional ) — If the device_map contains any value "disk" , the folder where we will offload weights. dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. offload_state_dict ( bool , optional , defaults to False ) — If True , will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. 
offload_buffers ( bool , optional , defaults to False ) — Whether or not to include the buffers in the weights offloaded to disk. keep_in_fp32_modules( List[str] , optional ) — A list of the modules that we keep in torch.float32 dtype. offload_8bit_bnb ( bool , optional ) — Whether or not to enable offload of 8-bit modules on cpu/disk. strict ( bool , optional , defaults to False ) — Whether to strictly enforce that the keys in the checkpoint state_dict match the keys of the model’s state_dict. Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded. Once loaded across devices, you still need to call dispatch_model() on your model to make it able to run. To group the checkpoint loading and dispatch in one single call, use load_checkpoint_and_dispatch() . infer_auto_device_map accelerate.infer_auto_device_map < source > ( model : Module max_memory : typing.Optional[typing.Dict[typing.Union[int, str], typing.Union[int, str]]] = None no_split_module_classes : typing.Optional[typing.List[str]] = None dtype : typing.Union[str, torch.dtype, NoneType] = None special_dtypes : typing.Optional[typing.Dict[str, typing.Union[str, torch.dtype]]] = None verbose : bool = False clean_result : bool = True offload_buffers : bool = False fallback_allocation : bool = False ) Parameters model ( torch.nn.Module ) — The model to analyze. max_memory ( Dict , optional ) — A dictionary device identifier to maximum memory. Will default to the maximum memory available if unset. Example: max_memory={0: "1GB"} . no_split_module_classes ( List[str] , optional ) — A list of layer class names that should never be split across device (for instance any layer that has a residual connection). dtype ( str or torch.dtype , optional ) — If provided, the weights will be converted to that type when loaded. special_dtypes ( Dict[str, Union[str, torch.device]] , optional ) — If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights). verbose ( bool , optional , defaults to False ) — Whether or not to provide debugging statements as the function builds the device_map. clean_result ( bool , optional , defaults to True ) — Clean the resulting device_map by grouping all submodules that go on the same device together. offload_buffers ( bool , optional , defaults to False ) — In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters. fallback_allocation ( bool , optional , defaults to False ) — When regular allocation fails, try to allocate a module that fits in the size limit using BFS. Compute a device map for a given model giving priority to GPUs, then offload on CPU and finally offload to disk, such that: we don’t exceed the memory available of any of the GPU. if offload to the CPU is needed, there is always room left on GPU 0 to put back the layer offloaded on CPU that has the largest size. if offload to the CPU is needed,we don’t exceed the RAM available on the CPU. if offload to the disk is needed, there is always room left on the CPU to put back the layer offloaded on disk that has the largest size. All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the meta device (as it would if initialized within the init_empty_weights context manager). Hooks ModelHook class accelerate.hooks. 
ModelHook < source > ( ) A hook that contains callbacks to be executed just before and after the forward method of a model. The difference with PyTorch existing hooks is that they get passed along the kwargs. Class attribute: no_grad ( bool , optional , defaults to False ) — Whether or not to execute the actual forward pass under the torch.no_grad() context manager. detach_hook < source > ( module ) Parameters module ( torch.nn.Module ) — The module detached from this hook. To be executed when the hook is detached from a module. init_hook < source > ( module ) Parameters module ( torch.nn.Module ) — The module attached to this hook. To be executed when the hook is attached to the module. post_forward < source > ( module output ) → Any Parameters module ( torch.nn.Module ) — The module whose forward pass been executed just before this event. output ( Any ) — The output of the module. Returns Any The processed output . To be executed just after the forward method of the model. pre_forward < source > ( module *args **kwargs ) → Tuple[Tuple[Any], Dict[Str, Any]] Parameters module ( torch.nn.Module ) — The module whose forward pass will be executed just after this event. args ( Tuple[Any] ) — The positional arguments passed to the module. kwargs ( Dict[Str, Any] ) — The keyword arguments passed to the module. Returns Tuple[Tuple[Any], Dict[Str, Any]] A tuple with the treated args and kwargs . To be executed just before the forward method of the model. AlignDevicesHook class accelerate.hooks. AlignDevicesHook < source > ( execution_device : typing.Union[str, torch.device, int, NoneType] = None offload : bool = False io_same_device : bool = False weights_map : typing.Optional[typing.Mapping] = None offload_buffers : bool = False place_submodules : bool = False skip_keys : typing.Union[str, typing.List[str], NoneType] = None tied_params_map : typing.Optional[typing.Dict[int, typing.Dict[torch.device, torch.Tensor]]] = None ) Parameters execution_device ( torch.device , optional ) — The device on which inputs and model weights should be placed before the forward pass. offload ( bool , optional , defaults to False ) — Whether or not the weights should be offloaded after the forward pass. io_same_device ( bool , optional , defaults to False ) — Whether or not the output should be placed on the same device as the input was. weights_map ( Mapping[str, torch.Tensor] , optional ) — When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers ( bool , optional , defaults to False ) — Whether or not to include the associated module’s buffers when offloading. place_submodules ( bool , optional , defaults to False ) — Whether to place the submodules on execution_device during the init_hook event. A generic ModelHook that ensures inputs and model weights are on the same device for the forward pass of the associated module, potentially offloading the weights after the forward pass. SequentialHook class accelerate.hooks. SequentialHook < source > ( *hooks ) A hook that can contain several hooks and iterates through them at each event. Adding Hooks add_hook_to_module accelerate.hooks.add_hook_to_module < source > ( module : Module hook : ModelHook append : bool = False ) → torch.nn.Module Parameters module ( torch.nn.Module ) — The module to attach a hook to. hook ( ModelHook ) — The hook to attach. append ( bool , optional , defaults to False ) — Whether the hook should be chained with an existing one (if module already contains a hook) or not. 
Returns torch.nn.Module The same module, with the hook attached (the module is modified in place, so the result can be discarded). Adds a hook to a given module. This will rewrite the forward method of the module to include the hook, to remove this behavior and restore the original forward method, use remove_hook_from_module . If the module already contains a hook, this will replace it with the new hook passed by default. To chain two hooks together, pass append=True , so it chains the current and new hook into an instance of the SequentialHook class. attach_execution_device_hook accelerate.hooks.attach_execution_device_hook < source > ( module : Module execution_device : typing.Union[int, str, torch.device] skip_keys : typing.Union[str, typing.List[str], NoneType] = None preload_module_classes : typing.Optional[typing.List[str]] = None tied_params_map : typing.Optional[typing.Dict[int, typing.Dict[torch.device, torch.Tensor]]] = None ) Parameters module ( torch.nn.Module ) — The module where we want to attach the hooks. execution_device ( int , str or torch.device ) — The device on which inputs and model weights should be placed before the forward pass. skip_keys ( str or List[str] , optional ) — A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], optional , defaults to None ) — A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory. Recursively attaches AlignDevicesHook to all submodules of a given model to make sure they have the right execution device attach_align_device_hook accelerate.hooks.attach_align_device_hook < source > ( module : Module execution_device : typing.Optional[torch.device] = None offload : bool = False weights_map : typing.Optional[typing.Mapping] = None offload_buffers : bool = False module_name : str = '' skip_keys : typing.Union[str, typing.List[str], NoneType] = None preload_module_classes : typing.Optional[typing.List[str]] = None tied_params_map : typing.Optional[typing.Dict[int, typing.Dict[torch.device, torch.Tensor]]] = None ) Parameters module ( torch.nn.Module ) — The module where we want to attach the hooks. execution_device ( torch.device , optional ) — The device on which inputs and model weights should be placed before the forward pass. offload ( bool , optional , defaults to False ) — Whether or not the weights should be offloaded after the forward pass. weights_map ( Mapping[str, torch.Tensor] , optional ) — When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers ( bool , optional , defaults to False ) — Whether or not to include the associated module’s buffers when offloading. module_name ( str , optional , defaults to "" ) — The name of the module. skip_keys ( str or List[str] , optional ) — A list of keys to ignore when moving inputs or outputs between devices. 
preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], optional , defaults to None ) — A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory. Recursively attaches AlignDevicesHook to all submodules of a given model that have direct parameters and/or buffers. attach_align_device_hook_on_blocks accelerate.hooks.attach_align_device_hook_on_blocks < source > ( module : Module execution_device : typing.Union[torch.device, typing.Dict[str, torch.device], NoneType] = None offload : typing.Union[bool, typing.Dict[str, bool]] = False weights_map : typing.Mapping = None offload_buffers : bool = False module_name : str = '' skip_keys : typing.Union[str, typing.List[str], NoneType] = None preload_module_classes : typing.Optional[typing.List[str]] = None tied_params_map : typing.Optional[typing.Dict[int, typing.Dict[torch.device, torch.Tensor]]] = None ) Parameters module ( torch.nn.Module ) — The module where we want to attach the hooks. execution_device ( torch.device or Dict[str, torch.device] , optional ) — The device on which inputs and model weights should be placed before the forward pass. It can be one device for the whole module, or a dictionary mapping module name to device. offload ( bool , optional , defaults to False ) — Whether or not the weights should be offloaded after the forward pass. It can be one boolean for the whole module, or a dictionary mapping module name to boolean. weights_map ( Mapping[str, torch.Tensor] , optional ) — When the model weights are offloaded, a (potentially lazy) map from param names to the tensor values. offload_buffers ( bool , optional , defaults to False ) — Whether or not to include the associated module’s buffers when offloading. module_name ( str , optional , defaults to "" ) — The name of the module. skip_keys ( str or List[str] , optional ) — A list of keys to ignore when moving inputs or outputs between devices. preload_module_classes ( List[str] , optional ) — A list of classes whose instances should load all their weights (even in the submodules) at the beginning of the forward. This should only be used for classes that have submodules which are registered but not called directly during the forward, for instance if a dense linear layer is registered, but at forward, dense.weight and dense.bias are used in some operations instead of calling dense directly. tied_params_map (Optional[Dict[int, Dict[torch.device, torch.Tensor]]], optional , defaults to None ) — A map of data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight for all others, instead of duplicating memory. Attaches AlignDevicesHook to all blocks of a given model as needed. 
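The hook classes above can also be used directly. Below is a minimal sketch of a custom ModelHook attached with add_hook_to_module; the LoggingHook name and the printed message are invented for illustration, everything else follows the API documented here.

import torch
import torch.nn as nn
from accelerate.hooks import ModelHook, add_hook_to_module

class LoggingHook(ModelHook):
    # Hypothetical example hook: report the device of the first input before each forward.
    def pre_forward(self, module, *args, **kwargs):
        print(f"{module.__class__.__name__} forward, input on {args[0].device}")
        return args, kwargs

    def post_forward(self, module, output):
        # Outputs are returned unchanged here; this is where they could be moved or modified.
        return output

layer = nn.Linear(4, 4)
layer = add_hook_to_module(layer, LoggingHook())
_ = layer(torch.randn(2, 4))  # prints "Linear forward, input on cpu"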
Removing Hooks remove_hook_from_module accelerate.hooks.remove_hook_from_module < source > ( module : Module recurse = False ) → torch.nn.Module Parameters module ( torch.nn.Module ) — The module to attach a hook to. recurse ( bool , optional ) — Whether to remove the hooks recursively Returns torch.nn.Module The same module, with the hook detached (the module is modified in place, so the result can be discarded). Removes any hook attached to a module via add_hook_to_module . remove_hook_from_submodules accelerate.hooks.remove_hook_from_submodules < source > ( module : Module ) Parameters module ( torch.nn.Module ) — The module on which to remove all hooks. Recursively removes all hooks attached on the submodules of a given model. Utilities has_offloaded_params accelerate.utils.has_offloaded_params < source > ( module : Module ) → bool Parameters module ( torch.nn.Module ) — The module to check for an offload hook. Returns bool True if the module has an offload hook and offloading is enabled, False otherwise. Checks if a module has offloaded parameters by checking if the given module has a AlignDevicesHook attached with offloading enabled align_module_device accelerate.utils.align_module_device < source > ( module : Module execution_device : typing.Optional[torch.device] = None ) Parameters module ( torch.nn.Module ) — Module with parameters to align. execution_device ( torch.device , optional ) — If provided, overrides the module’s execution device within the context. Otherwise, use hook execution device or pass Context manager that moves a module’s parameters to the specified execution device. < > Update on GitHub ← Logging Pipeline parallelism → Working with large models Dispatch and offload init_empty_weights cpu_offload cpu_offload_with_hook disk_offload dispatch_model load_checkpoint_and_dispatch load_checkpoint_in_model infer_auto_device_map Hooks Model Hook Align Devices Hook Sequential Hook Adding Hooks add_hook_to_module attach_execution_device_hook attach_align_device_hook attach_align_device_hook_on_blocks Removing Hooks remove_hook_from_module remove_hook_from_submodules Utilities has_offloaded_params align_module_device |
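infer_auto_device_map is documented above without an example, so here is a minimal sketch of how it is typically combined with init_empty_weights. The max_memory numbers are arbitrary, and the sketch assumes a machine with at least one CUDA GPU (index 0).

import torch.nn as nn
from accelerate import infer_auto_device_map, init_empty_weights

# Build the model skeleton on the meta device so no real memory is allocated.
with init_empty_weights():
    model = nn.Sequential(*[nn.Linear(2048, 2048) for _ in range(8)])

# Cap GPU 0 and CPU memory; whatever does not fit is assigned to "disk".
device_map = infer_auto_device_map(model, max_memory={0: "40MiB", "cpu": "60MiB"})
print(device_map)  # e.g. {'0': 0, '1': 'cpu', ..., '7': 'disk'}

The resulting map can then be passed to load_checkpoint_in_model() and dispatch_model(), or to load_checkpoint_and_dispatch() directly.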
Load_pretrained_instances_with_an_AutoClass.txt | Load pretrained instances with an AutoClass
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Load pretrained instances with an AutoClass With so many different Transformer architectures, it can be challenging to create one for your checkpoint. As a part of 🤗 Transformers core philosophy to make the library easy, simple and flexible to use, an AutoClass automatically infers and loads the correct architecture from a given checkpoint. The from_pretrained() method lets you quickly load a pretrained model for any architecture so you don’t have to devote time and resources to train a model from scratch. Producing this type of checkpoint-agnostic code means if your code works for one checkpoint, it will work with another checkpoint - as long as it was trained for a similar task - even if the architecture is different. Remember, architecture refers to the skeleton of the model and checkpoints are the weights for a given architecture. For example, BERT is an architecture, while google-bert/bert-base-uncased is a checkpoint. Model is a general term that can mean either architecture or checkpoint. In this tutorial, learn to: Load a pretrained tokenizer. Load a pretrained image processor Load a pretrained feature extractor. Load a pretrained processor. Load a pretrained model. Load a model as a backbone. AutoTokenizer Nearly every NLP task begins with a tokenizer. A tokenizer converts your input into a format that can be processed by the model. Load a tokenizer with AutoTokenizer.from_pretrained() : Copied >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained( "google-bert/bert-base-uncased" ) Then tokenize your input as shown below: Copied >>> sequence = "In a hole in the ground there lived a hobbit." >>> print (tokenizer(sequence)) { 'input_ids' : [ 101 , 1999 , 1037 , 4920 , 1999 , 1996 , 2598 , 2045 , 2973 , 1037 , 7570 , 10322 , 4183 , 1012 , 102 ], 'token_type_ids' : [ 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ], 'attention_mask' : [ 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ]} AutoImageProcessor For vision tasks, an image processor processes the image into the correct input format. 
Copied >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained( "google/vit-base-patch16-224" ) AutoBackbone A Swin backbone with multiple stages for outputting a feature map. The AutoBackbone lets you use pretrained models as backbones to get feature maps from different stages of the backbone. You should specify one of the following parameters in from_pretrained() : out_indices is the index of the layer you’d like to get the feature map from out_features is the name of the layer you’d like to get the feature map from These parameters can be used interchangeably, but if you use both, make sure they’re aligned with each other! If you don’t pass any of these parameters, the backbone returns the feature map from the last layer. A feature map from the first stage of the backbone. The patch partition refers to the model stem. For example, in the above diagram, to return the feature map from the first stage of the Swin backbone, you can set out_indices=(1,) : Copied >>> from transformers import AutoImageProcessor, AutoBackbone >>> import torch >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image. open (requests.get(url, stream= True ).raw) >>> processor = AutoImageProcessor.from_pretrained( "microsoft/swin-tiny-patch4-window7-224" ) >>> model = AutoBackbone.from_pretrained( "microsoft/swin-tiny-patch4-window7-224" , out_indices=( 1 ,)) >>> inputs = processor(image, return_tensors= "pt" ) >>> outputs = model(**inputs) >>> feature_maps = outputs.feature_maps Now you can access the feature_maps object from the first stage of the backbone: Copied >>> list (feature_maps[ 0 ].shape) [ 1 , 96 , 56 , 56 ] AutoFeatureExtractor For audio tasks, a feature extractor processes the audio signal into the correct input format. Load a feature extractor with AutoFeatureExtractor.from_pretrained() : Copied >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) AutoProcessor Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the LayoutLMV2 model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them. Load a processor with AutoProcessor.from_pretrained() : Copied >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained( "microsoft/layoutlmv2-base-uncased" ) AutoModel Pytorch Hide Pytorch content The AutoModelFor classes let you load a pretrained model for a given task (see here for a complete list of available tasks). For example, load a model for sequence classification with AutoModelForSequenceClassification.from_pretrained() . By default, the weights are loaded in full precision (torch.float32) regardless of the actual data type the weights are stored in such as torch.float16. Set torch_dtype="auto" to load the weights in the data type defined in a model’s config.json file to automatically load the most memory-optimal data type. 
Copied >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained( "distilbert/distilbert-base-uncased" , torch_dtype= "auto" ) Easily reuse the same checkpoint to load an architecture for a different task: Copied >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained( "distilbert/distilbert-base-uncased" , torch_dtype= "auto" ) For PyTorch models, the from_pretrained() method uses torch.load() which internally uses pickle and is known to be insecure. In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are scanned for malware at each commit. See the Hub documentation for best practices like signed commit verification with GPG. TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the from_tf and from_flax kwargs for the from_pretrained method to circumvent this issue. Generally, we recommend using the AutoTokenizer class and the AutoModelFor class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next tutorial , learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning. TensorFlow Hide TensorFlow content Finally, the TFAutoModelFor classes let you load a pretrained model for a given task (see here for a complete list of available tasks). For example, load a model for sequence classification with TFAutoModelForSequenceClassification.from_pretrained() : Copied >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained( "distilbert/distilbert-base-uncased" ) Easily reuse the same checkpoint to load an architecture for a different task: Copied >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained( "distilbert/distilbert-base-uncased" ) Generally, we recommend using the AutoTokenizer class and the TFAutoModelFor class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next tutorial , learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning. < > Update on GitHub ← Run inference with pipelines Preprocess data → Load pretrained instances with an Auto Class Auto Tokenizer Auto Image Processor Auto Backbone Auto Feature Extractor Auto Processor Auto Model |
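Putting the pieces together, here is a hedged end-to-end sketch: the tokenizer and model are loaded from the same checkpoint and used for a single classification step. The sentiment checkpoint below is only an example; any sequence-classification checkpoint works the same way.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert/distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torch_dtype="auto")

inputs = tokenizer("In a hole in the ground there lived a hobbit.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])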
Multi_Adapter_RL__MARL____a_single_base_model_for_.txt | Multi Adapter RL (MARL) - a single base model for everything Here we present an approach that uses a single base model for the entire PPO algorithm - which includes retrieving the reference logits, computing the active logits and the rewards. This feature is experimental, as we have not tested the convergence of the approach. We encourage the community to let us know if they face any issues. Requirements You just need to install peft , and optionally install bitsandbytes if you want to use 8-bit base models for more memory-efficient fine-tuning. Summary You need to address this approach in three stages, which we summarize as follows: 1- Train a base model on the target domain (e.g. the IMDB dataset ) - this is the Supervised Fine-Tuning stage - it can leverage the SFTTrainer from TRL. 2- Train a reward model using peft . This is required in order to re-use the adapter during the RL optimisation process (step 3 below). We show an example of leveraging the RewardTrainer from TRL in this example . 3- Fine-tune new adapters on the base model using PPO and the reward adapter. ("0 abstraction RL") Make sure to use the same model (i.e. same architecture and same weights) for stages 2 & 3. Quickstart Let us assume you have trained your reward adapter on the llama-7b model using RewardTrainer and pushed the weights to the Hub under trl-lib/llama-7b-hh-rm-adapter . 
When doing PPO, before passing the model to PPOTrainer create your model as follows: Copied model_name = "huggyllama/llama-7b" rm_adapter_id = "trl-lib/llama-7b-hh-rm-adapter" # PPO adapter lora_config = LoraConfig( r= 16 , lora_alpha= 32 , lora_dropout= 0.05 , bias= "none" , task_type= "CAUSAL_LM" , ) model = AutoModelForCausalLMWithValueHead.from_pretrained( model_name, peft_config=lora_config, reward_adapter=rm_adapter_id, ) ... trainer = PPOTrainer( model=model, ... ) ... Then inside your PPO training loop, call the compute_reward_score method by accessing the model attribute from PPOTrainer . Copied rewards = trainer.model.compute_reward_score(**inputs) Advanced usage Control on the adapter name If you are familiar with the peft library, you know that you can use multiple adapters inside the same model. What you can do is train multiple adapters on the same base model to fine-tune on different policies. In this case, you want to be able to control the adapter name you want to activate back, after retrieving the reward. For that, simply pass the appropriate adapter_name to ppo_adapter_name argument when calling compute_reward_score . Copied adapter_name_policy_1 = "policy_1" rewards = trainer.model.compute_reward_score(**inputs, ppo_adapter_name=adapter_name_policy_1) ... Using 4-bit and 8-bit base models For more memory efficient fine-tuning, you can load your base model in 8-bit or 4-bit while keeping the adapters in the default precision (float32). Just pass the appropriate arguments (i.e. load_in_8bit=True or load_in_4bit=True ) to AutoModelForCausalLMWithValueHead.from_pretrained as follows (assuming you have installed bitsandbytes ): Copied model_name = "llama-7b" rm_adapter_id = "trl-lib/llama-7b-hh-rm-adapter" # PPO adapter lora_config = LoraConfig( r= 16 , lora_alpha= 32 , lora_dropout= 0.05 , bias= "none" , task_type= "CAUSAL_LM" , ) model = AutoModelForCausalLMWithValueHead.from_pretrained( model_name, peft_config=lora_config, reward_adapter=rm_adapter_id, load_in_8bit= True , ) ... trainer = PPOTrainer( model=model, ... ) ... < > Update on GitHub ← Learning to Use Tools Multi Adapter R L (MAR L) - a single base model for everything Requirements Summary Quickstart Advanced usage Control on the adapter name Using 4-bit and 8-bit base models |
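To make the compute_reward_score call above more concrete, here is a hedged sketch of how the inputs dict might be produced. The query and response strings are invented, and model and trainer are assumed to have been created as in the Quickstart above; exact behavior may vary across TRL versions.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

query = "What is the capital of France?"        # invented example text
response = "The capital of France is Paris."    # invented example text
inputs = tokenizer(query + response, return_tensors="pt").to(model.pretrained_model.device)

# The reward adapter is activated internally to score the (query, response) text.
rewards = trainer.model.compute_reward_score(**inputs)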
🤗_Optimum_Neuron.txt | 🤗 Optimum Neuron 🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS Accelerators including AWS Trainium and AWS Inferentia . It provides a set of tools enabling easy model loading, training and inference on single- and multi-Accelerator settings for different downstream tasks. The list of officially validated models and tasks is available here . Tutorials Learn the basics and become familiar with training & deploying transformers on AWS Trainium and AWS Inferentia. Start here if you are using 🤗 Optimum Neuron for the first time! How-to guides Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum Neuron to solve real-world problems. Reference Technical descriptions of how the classes and methods of 🤗 Optimum Neuron work. |
GGUF_and_interaction_with_Transformers.txt | GGUF and interaction with Transformers
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started GGUF and interaction with Transformers The GGUF file format is used to store models for inference with GGML and other libraries that depend on it, like the very popular llama.cpp or whisper.cpp . It is a file format supported by the Hugging Face Hub with features allowing for quick inspection of tensors and metadata within the file. This file format is designed as a “single-file-format” where a single file usually contains both the configuration attributes, the tokenizer vocabulary and other attributes, as well as all tensors to be loaded in the model. These files come in different formats according to the quantization type of the file. We briefly go over some of them here . Support within Transformers We have added the ability to load gguf files within transformers in order to offer further training/fine-tuning capabilities to gguf models, before converting back those models to gguf to use within the ggml ecosystem. When loading a model, we first dequantize it to fp32, before loading the weights to be used in PyTorch. [!NOTE] The support is still very exploratory and we welcome contributions in order to solidify it across quantization types and model architectures. For now, here are the supported model architectures and quantization types: Supported quantization types The initial supported quantization types are decided according to the popular quantized files that have been shared on the Hub. F32 F16 BF16 Q4_0 Q4_1 Q5_0 Q5_1 Q8_0 Q2_K Q3_K Q4_K Q5_K Q6_K IQ1_S IQ1_M IQ2_XXS IQ2_XS IQ2_S IQ3_XXS IQ3_S IQ4_XS IQ4_NL [!NOTE] To support gguf dequantization, gguf>=0.10.0 installation is required. Supported model architectures For now the supported model architectures are the architectures that have been very popular on the Hub, namely: LLaMa Mistral Qwen2 Qwen2Moe Phi3 Bloom Falcon StableLM GPT2 Starcoder2 T5 Mamba Nemotron Gemma2 Example usage In order to load gguf files in transformers , you should specify the gguf_file argument to the from_pretrained methods of both tokenizers and models. 
Here is how one would load a tokenizer and a model, which can be loaded from the exact same file: Copied from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF" filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf" tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename) model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename) Now you have access to the full, unquantized version of the model in the PyTorch ecosystem, where you can combine it with a plethora of other tools. In order to convert back to a gguf file, we recommend using the convert-hf-to-gguf.py file from llama.cpp. Here’s how you would complete the script above to save the model and export it back to gguf : Copied tokenizer.save_pretrained( 'directory' ) model.save_pretrained( 'directory' ) !python ${path_to_llama_cpp}/convert-hf-to-gguf.py ${directory} < > Update on GitHub ← Troubleshoot Interoperability with TikToken files → GGUF and interaction with Transformers Support within Transformers Supported quantization types Supported model architectures Example usage |
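Before saving the dequantized model and converting it back to gguf, it can be useful to sanity-check that it behaves like a regular PyTorch model. A minimal sketch, reusing the model and file from the example above (the prompt and generation length are arbitrary): Copied
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
filename = "tinyllama-1.1b-chat-v1.0.Q6_K.gguf"

tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)

# quick sanity check: the dequantized weights generate text like any other PyTorch model
inputs = tokenizer("GGUF files can be loaded in transformers by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))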
Advanced_Access_Control_in_Organizations_with_Reso.txt | Advanced Access Control in Organizations with Resource Groups Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Advanced Access Control in Organizations with Resource Groups Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security User Access Tokens Two-Factor Authentication Git over SSH Signing Commits with GPG Single Sign-On (SSO) Advanced Access Control (Resource Groups) Malware Scanning Pickle Scanning Secrets Scanning Protect AI Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Advanced Access Control in Organizations with Resource Groups This feature is part of the Enterprise Hub . In your Hugging Face organization, you can use Resource Groups to control which members have access to specific repositories. How does it work? Resource Groups allow organizations administrators to group related repositories together, and manage access to those repos. Resource Groups allow different teams to work on their respective repositories within the same organization. A repository can belong to only one Resource Group. Organizations members need to be added to the Resource Group to access its repositories. An Organization Member can belong to several Resource Groups. Members are assigned a role in each Resource Group that determines their permissions for the group’s repositories. Four distinct roles exist for Resource Groups: read : Grants read access to repositories within the Resource Group. contributor : Provides extra write rights to the subset of the Organization’s repositories created by the user (i.e., users can create repos and then modify only those repos). Similar to the ‘Write’ role, but limited to repos created by the user. write : Offers write access to all repositories in the Resource Group. 
Users can create, delete, or rename any repository in the Resource Group. admin : In addition to write permissions on repositories, admin members can administer the Resource Group — add, remove, and alter the roles of other members. They can also transfer repositories in and out of the Resource Group. In addition, Organization admins can manage all resource groups inside the organization. Resource Groups also affect the visibility of private repositories inside the organization. A private repository that is part of a Resource Group will only be visible to members of that Resource Group. Public repositories, on the other hand, are visible to anyone, inside and outside the organization. Getting started Head to your Organization’s settings, then navigate to the “Resource Group” tab in the left menu. If you are an admin of the organization, you can create and manage Resource Groups from that page. After creating a resource group and giving it a meaningful name, you can start adding repositories and users to it. Remember that a repository can be part of only one Resource Group. You’ll be warned when trying to add a repository that already belongs to another Resource Group. Programmatic management (API) See Resource Groups API Section < > Update on GitHub ← How to configure OIDC with Azure in the Hub Malware Scanning → Advanced Access Control in Organizations with Resource Groups How does it work? Getting started Programmatic management (AP I) |
Push_files_to_the_Hub.txt | Push files to the Hub Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation Push files to the Hub Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin : creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. 
You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel controlnet = ControlNetModel( block_out_channels=( 32 , 64 ), layers_per_block= 2 , in_channels= 4 , down_block_types=( "DownBlock2D" , "CrossAttnDownBlock2D" ), cross_attention_dim= 32 , conditioning_embedding_out_channels=( 16 , 32 ), ) controlnet.push_to_hub( "my-controlnet-model" ) For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub( "my-controlnet-model" , variant= "fp16" ) The push_to_hub() function saves the model’s config.json file and automatically saves the weights in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained( "your-namespace/my-controlnet-model" ) Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler scheduler = DDIMScheduler( beta_start= 0.00085 , beta_end= 0.012 , beta_schedule= "scaled_linear" , clip_sample= False , set_alpha_to_one= False , ) scheduler.push_to_hub( "my-controlnet-scheduler" ) The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained( "your-namespace/my-controlnet-scheduler" ) Pipeline You can also push an entire pipeline with all its components to the Hub.
For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( UNet2DConditionModel, AutoencoderKL, DDIMScheduler, StableDiffusionPipeline, ) from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer unet = UNet2DConditionModel( block_out_channels=( 32 , 64 ), layers_per_block= 2 , sample_size= 32 , in_channels= 4 , out_channels= 4 , down_block_types=( "DownBlock2D" , "CrossAttnDownBlock2D" ), up_block_types=( "CrossAttnUpBlock2D" , "UpBlock2D" ), cross_attention_dim= 32 , ) scheduler = DDIMScheduler( beta_start= 0.00085 , beta_end= 0.012 , beta_schedule= "scaled_linear" , clip_sample= False , set_alpha_to_one= False , ) vae = AutoencoderKL( block_out_channels=[ 32 , 64 ], in_channels= 3 , out_channels= 3 , down_block_types=[ "DownEncoderBlock2D" , "DownEncoderBlock2D" ], up_block_types=[ "UpDecoderBlock2D" , "UpDecoderBlock2D" ], latent_channels= 4 , ) text_encoder_config = CLIPTextConfig( bos_token_id= 0 , eos_token_id= 2 , hidden_size= 32 , intermediate_size= 37 , layer_norm_eps= 1e-05 , num_attention_heads= 4 , num_hidden_layers= 5 , pad_token_id= 1 , vocab_size= 1000 , ) text_encoder = CLIPTextModel(text_encoder_config) tokenizer = CLIPTokenizer.from_pretrained( "hf-internal-testing/tiny-random-clip" ) Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { "unet" : unet, "scheduler" : scheduler, "vae" : vae, "text_encoder" : text_encoder, "tokenizer" : tokenizer, "safety_checker" : None , "feature_extractor" : None , } pipeline = StableDiffusionPipeline(**components) pipeline.push_to_hub( "my-pipeline" ) The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained( "your-namespace/my-pipeline" ) Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub( "my-controlnet-model-private" , private= True ) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for . You must be logged in to load a model from a private repository. < > Update on GitHub ← Load adapters Unconditional image generation → Push files to the Hub Models Scheduler Pipeline Privacy |
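As a follow-up to the privacy note above, loading from a private repository only requires authenticating first. A minimal sketch, using the illustrative repository id from earlier in this guide: Copied
from huggingface_hub import login
from diffusers import ControlNetModel

# authenticate with your access token (or run `huggingface-cli login` in a terminal)
login()

# once logged in, a private repository you have access to loads like any other
controlnet = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model-private")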
A_quick_tour.txt | A quick tour Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Evaluate documentation A quick tour Evaluate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.4.0 v0.3.0 v0.2.3 v0.1.2 EN Get started 🤗 Evaluate Tutorials Installation A quick tour How-to guides Choosing the right metric Adding new evaluations Using the evaluator Using the evaluator with custom pipelines Creating an EvaluationSuite Using 🤗 Evaluate with other ML frameworks Transformers Keras and Tensorflow scikit-learn Conceptual guides Types of evaluations Considerations for model evaluation Reference Main classes Loading methods Saving methods Hub methods Evaluator classes Visualization methods Logging methods Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started A quick tour 🤗 Evaluate provides access to a wide range of evaluation tools. It covers a range of modalities such as text, computer vision, audio, etc. as well as tools to evaluate models or datasets. These tools are split into three categories. Types of evaluations There are different aspects of a typical machine learning pipeline that can be evaluated and for each aspect 🤗 Evaluate provides a tool: Metric : A metric is used to evaluate a model’s performance and usually involves the model’s predictions as well as some ground truth labels. You can find all integrated metrics at evaluate-metric . Comparison : A comparison is used to compare two models. This can for example be done by comparing their predictions to ground truth labels and computing their agreement. You can find all integrated comparisons at evaluate-comparison . Measurement : The dataset is as important as the model trained on it. With measurements one can investigate a dataset’s properties. You can find all integrated measurements at evaluate-measurement . Each of these evaluation modules live on Hugging Face Hub as a Space. They come with an interactive widget and a documentation card documenting its use and limitations. For example accuracy : Each metric, comparison, and measurement is a separate Python module, but for using any of them, there is a single entry point: evaluate.load() ! 
Load Any metric, comparison, or measurement is loaded with the evaluate.load function: Copied >>> import evaluate >>> accuracy = evaluate.load( "accuracy" ) If you want to make sure you are loading the right type of evaluation (especially if there are name clashes) you can explicitly pass the type: Copied >>> word_length = evaluate.load( "word_length" , module_type= "measurement" ) Community modules Besides the modules implemented in 🤗 Evaluate you can also load any community module by specifying the repository ID of the metric implementation: Copied >>> element_count = evaluate.load( "lvwerra/element_count" , module_type= "measurement" ) See the Creating and Sharing Guide for information about uploading custom metrics. List available modules With list_evaluation_modules() you can check what modules are available on the hub. You can also filter for a specific modules and skip community metrics if you want. You can also see additional information such as likes: Copied >>> evaluate.list_evaluation_modules( ... module_type= "comparison" , ... include_community= False , ... with_details= True ) [{ 'name' : 'mcnemar' , 'type' : 'comparison' , 'community' : False , 'likes' : 1 }, { 'name' : 'exact_match' , 'type' : 'comparison' , 'community' : False , 'likes' : 0 }] Module attributes All evalution modules come with a range of useful attributes that help to use a module stored in a EvaluationModuleInfo object. Attribute Description description A short description of the evaluation module. citation A BibTex string for citation when available. features A Features object defining the input format. inputs_description This is equivalent to the modules docstring. homepage The homepage of the module. license The license of the module. codebase_urls Link to the code behind the module. reference_urls Additional reference URLs. Let’s have a look at a few examples. First, let’s look at the description attribute of the accuracy metric: Copied >>> accuracy = evaluate.load( "accuracy" ) >>> accuracy.description Accuracy is the proportion of correct predictions among the total number of cases processed. It can be computed with : Accuracy = (TP + TN) / (TP + TN + FP + FN) Where: TP: True positive TN: True negative FP: False positive FN: False negative You can see that it describes how the metric works in theory. If you use this metric for your work, especially if it is an academic publication you want to reference it properly. For that you can look at the citation attribute: Copied >>> accuracy.citation @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={ 12 }, pages={ 2825 -- 2830 }, year={ 2011 } } Before we can apply a metric or other evaluation module to a use-case, we need to know what the input format of the metric is: Copied >>> accuracy.features { 'predictions' : Value(dtype= 'int32' , id = None ), 'references' : Value(dtype= 'int32' , id = None ) } Note that features always describe the type of a single input element. In general we will add lists of elements so you can always think of a list around the types in features . Evaluate accepts various input formats (Python lists, NumPy arrays, PyTorch tensors, etc.) 
and converts them to an appropriate format for storage and computation. Compute Now that we know how the evaluation module works and what should go in there we want to actually use it! When it comes to computing the actual score there are two main ways to do it: All-in-one Incremental In the incremental approach the necessary inputs are added to the module with EvaluationModule.add() or EvaluationModule.add_batch() and the score is calculated at the end with EvaluationModule.compute() . Alternatively, one can pass all the inputs at once to compute() . Let’s have a look at the two approaches. How to compute The simplest way to calculate the score of an evaluation module is by calling compute() directly with the necessary inputs. Simply pass the inputs as seen in features to the compute() method. Copied >>> accuracy.compute(references=[ 0 , 1 , 0 , 1 ], predictions=[ 1 , 0 , 0 , 1 ]) { 'accuracy' : 0.5 } Evaluation modules return the results in a dictionary. However, in some instances you build up the predictions iteratively or in a distributed fashion in which case add() or add_batch() are useful. Calculate a single metric or a batch of metrics In many evaluation pipelines you build the predictions iteratively such as in a for-loop. In that case you could store the predictions in a list and at the end pass them to compute() . With add() and add_batch() you can circumvent the step of storing the predictions separately. If you are only creating single predictions at a time you can use add() : Copied >>> for ref, pred in zip ([ 0 , 1 , 0 , 1 ], [ 1 , 0 , 0 , 1 ]): >>> accuracy.add(references=ref, predictions=pred) >>> accuracy.compute() { 'accuracy' : 0.5 } Once you have gathered all predictions you can call compute() to compute the score based on all stored values. When getting predictions and references in batches you can use add_batch() which adds a list elements for later processing. The rest works as with add() : Copied >>> for refs, preds in zip ([[ 0 , 1 ],[ 0 , 1 ]], [[ 1 , 0 ],[ 0 , 1 ]]): >>> accuracy.add_batch(references=refs, predictions=preds) >>> accuracy.compute() { 'accuracy' : 0.5 } This is especially useful when you need to get the predictions from your model in batches: Copied >>> for model_inputs, gold_standards in evaluation_dataset: >>> predictions = model(model_inputs) >>> metric.add_batch(references=gold_standards, predictions=predictions) >>> metric.compute() Distributed evaluation Computing metrics in a distributed environment can be tricky. Metric evaluation is executed in separate Python processes, or nodes, on different subsets of a dataset. Typically, when a metric score is additive ( f(AuB) = f(A) + f(B) ), you can use distributed reduce operations to gather the scores for each subset of the dataset. But when a metric is non-additive ( f(AuB) ≠ f(A) + f(B) ), it’s not that simple. For example, you can’t take the sum of the F1 scores of each data subset as your final metric . A common way to overcome this issue is to fallback on single process evaluation. The metrics are evaluated on a single GPU, which becomes inefficient. 🤗 Evaluate solves this issue by only computing the final metric on the first node. The predictions and references are computed and provided to the metric separately for each node. These are temporarily stored in an Apache Arrow table, avoiding cluttering the GPU or CPU memory. When you are ready to compute() the final metric, the first node is able to access the predictions and references stored on all the other nodes. 
Once it has gathered all the predictions and references, compute() will perform the final metric evaluation. This solution allows 🤗 Evaluate to perform distributed predictions, which is important for evaluation speed in distributed settings. At the same time, you can also use complex non-additive metrics without wasting valuable GPU or CPU memory. Combining several evaluations Often one wants to not only evaluate a single metric but a range of different metrics capturing different aspects of a model. E.g. for classification it is usually a good idea to compute F1-score, recall, and precision in addition to accuracy to get a better picture of model performance. Naturally, you can load a bunch of metrics and call them sequentially. However, a more convenient way is to use the combine() function to bundle them together: Copied >>> clf_metrics = evaluate.combine([ "accuracy" , "f1" , "precision" , "recall" ]) The combine function accepts both a list of metric names and instantiated modules. The compute call then computes each metric: Copied >>> clf_metrics.compute(predictions=[ 0 , 1 , 0 ], references=[ 0 , 1 , 1 ]) { 'accuracy' : 0.667 , 'f1' : 0.667 , 'precision' : 1.0 , 'recall' : 0.5 } Save and push to the Hub Saving and sharing evaluation results is an important step. We provide the evaluate.save() function to easily save metric results. You can either pass a specific filename or a directory. In the latter case, the results are saved in a file with an automatically created file name. Besides the directory or file name, the function takes any key-value pairs as inputs and stores them in a JSON file. Copied >>> result = accuracy.compute(references=[ 0 , 1 , 0 , 1 ], predictions=[ 1 , 0 , 0 , 1 ]) >>> hyperparams = { "model" : "bert-base-uncased" } >>> evaluate.save( "./results/" , experiment= "run 42" , **result, **hyperparams) PosixPath( 'results/result-2022_05_30-22_09_11.json' ) The content of the JSON file looks like the following: Copied { "experiment" : "run 42" , "accuracy" : 0.5 , "model" : "bert-base-uncased" , "_timestamp" : "2022-05-30T22:09:11.959469" , "_git_commit_hash" : "123456789abcdefghijkl" , "_evaluate_version" : "0.1.0" , "_python_version" : "3.9.12 (main, Mar 26 2022, 15:51:15) \n[Clang 13.1.6 (clang-1316.0.21.2)]" , "_interpreter_path" : "/Users/leandro/git/evaluate/env/bin/python" } In addition to the specified fields, it also contains useful system information for reproducing the results. Besides storing the results locally, you should report them on the model’s repository on the Hub. With the evaluate.push_to_hub() function, you can easily report evaluation results to the model’s repository: Copied evaluate.push_to_hub( model_id= "huggingface/gpt2-wikitext2" , # model repository on hub metric_value= 0.5 , # metric value metric_type= "bleu" , # metric name, e.g. accuracy.name metric_name= "BLEU" , # pretty name which is displayed dataset_type= "wikitext" , # dataset name on the hub dataset_name= "WikiText" , # pretty name dataset_split= "test" , # dataset split used task_type= "text-generation" , # task id, see https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json task_name= "Text Generation" # pretty name for task ) Evaluator The evaluate.evaluator() provides automated evaluation and only requires a model, a dataset, and a metric, in contrast to the metrics in EvaluationModule s, which require the model’s predictions.
As such, it is easier to evaluate a model on a dataset with a given metric, as the inference is handled internally. To make that possible, it uses the pipeline abstraction from transformers . However, you can use your own framework as long as it follows the pipeline interface. To run an evaluation with the evaluator, let’s load a transformers pipeline (but you can pass your own custom inference class for any framework as long as it follows the pipeline call API) with a model trained on IMDb, the IMDb test split, and the accuracy metric. Copied from transformers import pipeline from datasets import load_dataset from evaluate import evaluator import evaluate pipe = pipeline( "text-classification" , model= "lvwerra/distilbert-imdb" , device= 0 ) data = load_dataset( "imdb" , split= "test" ).shuffle().select( range ( 1000 )) metric = evaluate.load( "accuracy" ) Then you can create an evaluator for text classification and pass the three objects to the compute() method. With the label_mapping argument, evaluate aligns the pipeline outputs with the label column in the dataset: Copied >>> task_evaluator = evaluator( "text-classification" ) >>> results = task_evaluator.compute(model_or_pipeline=pipe, data=data, metric=metric, ... label_mapping={ "NEGATIVE" : 0 , "POSITIVE" : 1 },) >>> print (results) { 'accuracy' : 0.934 } Calculating the value of the metric alone is often not enough to know if a model performs significantly better than another one. With bootstrapping, evaluate computes confidence intervals and the standard error, which help estimate how stable a score is: Copied >>> results = task_evaluator.compute(model_or_pipeline=pipe, data=data, metric=metric, ... label_mapping={ "NEGATIVE" : 0 , "POSITIVE" : 1 }, ... strategy= "bootstrap" , n_resamples= 200 ) >>> print (results) { 'accuracy' : { 'confidence_interval' : ( 0.906 , 0.9406749892841922 ), 'standard_error' : 0.00865213251082787 , 'score' : 0.923 } } The evaluator expects a "text" and "label" column for the data input. If your dataset differs, you can provide the columns with the keywords input_column="text" and label_column="label" . Currently, only "text-classification" is supported, with more tasks being added in the future. Visualization When comparing several models, sometimes it’s hard to spot the differences in their performance simply by looking at their scores. Often there is also not a single best model; instead, there are trade-offs between e.g. latency and accuracy, as larger models might have better performance but are also slower. We are gradually adding different visualization approaches, such as plots, to make choosing the best model for a use-case easier.
For instance, if you have a list of results from multiple models (as dictionaries), you can feed them into the radar_plot() function: Copied import evaluate from evaluate.visualization import radar_plot >>> data = [ { "accuracy" : 0.99 , "precision" : 0.8 , "f1" : 0.95 , "latency_in_seconds" : 33.6 }, { "accuracy" : 0.98 , "precision" : 0.87 , "f1" : 0.91 , "latency_in_seconds" : 11.2 }, { "accuracy" : 0.98 , "precision" : 0.78 , "f1" : 0.88 , "latency_in_seconds" : 87.6 }, { "accuracy" : 0.88 , "precision" : 0.78 , "f1" : 0.81 , "latency_in_seconds" : 101.6 } ] >>> model_names = [ "Model 1" , "Model 2" , "Model 3" , "Model 4" ] >>> plot = radar_plot(data=data, model_names=model_names) >>> plot.show() Which lets you visually compare the 4 models and choose the optimal one for you, based on one or several metrics: Running evaluation on a suite of tasks It can be useful to evaluate models on a variety of different tasks to understand their downstream performance. The EvaluationSuite enables evaluation of models on a collection of tasks. Tasks can be constructed as ( evaluator , dataset, metric) tuples and passed to an EvaluationSuite stored on the Hugging Face Hub as a Space, or locally as a Python script. See the evaluator documentation for a list of currently supported tasks. EvaluationSuite scripts can be defined as follows, and supports Python code for data preprocessing. Copied import evaluate from evaluate.evaluation_suite import SubTask class Suite (evaluate.EvaluationSuite): def __init__ ( self, name ): super ().__init__(name) self.suite = [ SubTask( task_type= "text-classification" , data= "imdb" , split= "test[:1]" , args_for_task={ "metric" : "accuracy" , "input_column" : "text" , "label_column" : "label" , "label_mapping" : { "LABEL_0" : 0.0 , "LABEL_1" : 1.0 } } ), SubTask( task_type= "text-classification" , data= "sst2" , split= "test[:1]" , args_for_task={ "metric" : "accuracy" , "input_column" : "sentence" , "label_column" : "label" , "label_mapping" : { "LABEL_0" : 0.0 , "LABEL_1" : 1.0 } } ) ] Evaluation can be run by loading the EvaluationSuite and calling run() method with a model or pipeline. Copied >>> from evaluate import EvaluationSuite >>> suite = EvaluationSuite.load( 'mathemakitten/sentiment-evaluation-suite' ) >>> results = suite.run( "huggingface/prunebert-base-uncased-6-finepruned-w-distil-mnli" ) accuracy total_time_in_seconds samples_per_second latency_in_seconds task_name 0.3 4.62804 2.16074 0.462804 imdb 0 0.686388 14.569 0.0686388 sst2 ← Installation Choosing the right metric → A quick tour Types of evaluations Load Community modules List available modules Module attributes Compute How to compute Calculate a single metric or a batch of metrics Distributed evaluation Combining several evaluations Save and push to the Hub Evaluator Visualization Running evaluation on a suite of tasks |
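To complement the Distributed evaluation section above, which describes the mechanism without code, here is a minimal sketch of how each process in a multi-process job could set up the same metric. The use of the RANK and WORLD_SIZE environment variables is an assumption about how your launcher exposes the process topology. Copied
import os
import evaluate

rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# every process loads the same metric with its process id and the total number of processes
metric = evaluate.load("accuracy", num_process=world_size, process_id=rank, experiment_id="distributed_run")

# each process only adds the predictions and references for its own shard of the data
metric.add_batch(predictions=[0, 1], references=[0, 1])

# the final score is gathered and computed on the main process; other processes typically get None back
score = metric.compute()
if rank == 0:
    print(score)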
Using_multiple_models_with_DeepSpeed.txt | Using multiple models with DeepSpeed Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Accelerate documentation Using multiple models with DeepSpeed Accelerate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v1.3.0 v1.2.1 v1.1.0 v1.0.1 v0.34.2 v0.33.0 v0.32.0 v0.31.0 v0.30.1 v0.29.3 v0.28.0 v0.27.2 v0.26.1 v0.25.0 v0.24.0 v0.23.0 v0.22.0 v0.21.0 v0.20.3 v0.19.0 v0.18.0 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.2 v0.12.0 v0.11.0 v0.10.0 v0.9.0 v0.8.0 v0.7.1 v0.6.0 v0.5.1 v0.4.0 v0.3.0 v0.2.1 v0.1.0 EN Getting started 🤗 Accelerate Installation Quicktour Tutorials Overview Add Accelerate to your code Execution process TPU training Launching Accelerate scripts Launching distributed training from Jupyter Notebooks How to guides Accelerate Start Here! Model memory estimator Model quantization Experiment trackers Profiler Checkpointing Troubleshoot Example Zoo Training Gradient accumulation Local SGD Low precision (FP8) training DeepSpeed Using multiple models with DeepSpeed DDP Communication Hooks Fully Sharded Data Parallel Megatron-LM Amazon SageMaker Apple M1 GPUs IPEX training with CPU Inference Big Model Inference Distributed inference Concepts and fundamentals Accelerate's internal mechanism Loading big models into memory Comparing performance across distributed setups Executing and deferring jobs Gradient synchronization FSDP vs DeepSpeed Low precision training methods Training on TPUs Reference Accelerator Stateful classes The Command Line DataLoaders, Optimizers, Schedulers Experiment trackers Launchers DeepSpeed utilities Logging Working with large models Pipeline parallelism Kwargs handlers FP8 Utility functions and classes Megatron-LM utilities Fully Sharded Data Parallel utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using multiple models with DeepSpeed This guide assumes that you have read and understood the DeepSpeed usage guide . Running multiple models with Accelerate and DeepSpeed is useful for: Knowledge distillation Post-training techniques like RLHF (see the TRL library for more examples) Training multiple models at once Currently, Accelerate has a very experimental API to help you use multiple models. This tutorial will focus on two common use cases: Knowledge distillation, where a smaller student model is trained to mimic a larger, better-performing teacher. If the student model fits on a single GPU, we can use ZeRO-2 for training and ZeRO-3 to shard the teacher for inference. This is significantly faster than using ZeRO-3 for both models. Training multiple disjoint models at once. Knowledge distillation Knowledge distillation is a good example of using multiple models, but only training one of them. Normally, you would use a single utils.DeepSpeedPlugin for both models. 
However, in this case, there are two separate configurations. Accelerate allows you to create and use multiple plugins if and only if they are in a dict so that you can reference and enable the proper plugin when needed. Copied from accelerate.utils import DeepSpeedPlugin zero2_plugin = DeepSpeedPlugin(hf_ds_config= "zero2_config.json" ) zero3_plugin = DeepSpeedPlugin(hf_ds_config= "zero3_config.json" ) deepspeed_plugins = { "student" : zero2_plugin, "teacher" : zero3_plugin} The zero2_config.json should be configured for full training (so specify scheduler and optimizer if you are not utilizing your own), while zero3_config.json should only be configured for the inference model, as shown in the example below. Copied { "bf16" : { "enabled" : "auto" } , "zero_optimization" : { "stage" : 3 , "overlap_comm" : true , "reduce_bucket_size" : "auto" , "stage3_prefetch_bucket_size" : "auto" , "stage3_param_persistence_threshold" : "auto" , "stage3_max_live_parameters" : "auto" , "stage3_max_reuse_distance" : "auto" , } , "train_micro_batch_size_per_gpu" : 1 } An example zero2_config.json configuration is shown below. Copied { "bf16" : { "enabled" : "auto" } , "optimizer" : { "type" : "AdamW" , "params" : { "lr" : "auto" , "weight_decay" : "auto" , "torch_adam" : true , "adam_w_mode" : true } } , "scheduler" : { "type" : "WarmupLR" , "params" : { "warmup_min_lr" : "auto" , "warmup_max_lr" : "auto" , "warmup_num_steps" : "auto" } } , "zero_optimization" : { "stage" : 2 , "offload_optimizer" : { "device" : "cpu" , "pin_memory" : true } , } , "gradient_accumulation_steps" : 1 , "gradient_clipping" : "auto" , "train_batch_size" : "auto" , "train_micro_batch_size_per_gpu" : "auto" , } DeepSpeed will raise an error if train_micro_batch_size_per_gpu isn’t specified, even if this particular model isn’t being trained. From here, create a single Accelerator and pass in both configurations. Copied from accelerate import Accelerator accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins) Now let’s see how to use them. Student model By default, Accelerate sets the first item in the dict as the default or enabled plugin ( "student" plugin). Verify this by using the utils.deepspeed.get_active_deepspeed_plugin() function to see which plugin is enabled. Copied active_plugin = get_active_deepspeed_plugin(accelerator.state) assert active_plugin is deepspeed_plugins[ "student" ] AcceleratorState also keeps the active DeepSpeed plugin saved in state.deepspeed_plugin . Copied assert active_plugin is accelerator.deepspeed_plugin Since student is the currently active plugin, let’s go ahead and prepare the model, optimizer, and scheduler. Copied student_model, optimizer, scheduler = ... student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader) Now it’s time to deal with the teacher model. Teacher model First, you need to specify in Accelerator that the zero3_config.json configuration should be used. Copied accelerator.state.select_deepspeed_plugin( "teacher" ) This disables the "student" plugin and enables the "teacher" plugin instead. The DeepSpeed stateful config inside of Transformers is updated, and it changes which plugin configuration gets called when using deepspeed.initialize() . This allows you to use the automatic deepspeed.zero.Init context manager integration Transformers provides. Copied teacher_model = AutoModel.from_pretrained(...) 
teacher_model = accelerator.prepare(teacher_model) Otherwise, you should manually initialize the model with deepspeed.zero.Init . Copied with deepspeed.zero.Init(accelerator.deepspeed_plugin.config): model = MyModel(...) Training From here, your training loop can be whatever you like, as long as teacher_model is never trained. Copied teacher_model. eval () student_model.train() for batch in train_dataloader: with torch.no_grad(): output_teacher = teacher_model(**batch) output_student = student_model(**batch) # Combine the losses or modify them in some way loss = output_teacher.loss + output_student.loss accelerator.backward(loss) optimizer.step() scheduler.step() optimizer.zero_grad() Train multiple disjoint models Training multiple models is a more complicated scenario. In its current state, we assume each model is completely disjoint from the other during training. This scenario still requires two utils.DeepSpeedPlugin ’s to be made. However, you also need a second Accelerator , since different deepspeed engines are being called at different times. A single Accelerator can only carry one instance at a time. Since the state.AcceleratorState is a stateful object though, it is already aware of both utils.DeepSpeedPlugin ’s available. You can just instantiate a second Accelerator with no extra arguments. Copied first_accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins) second_accelerator = Accelerator() You can call select_deepspeed_plugin() on either accelerator’s state to enable a particular plugin (which disables the other), and then call prepare . Copied # can be called on `first_accelerator` or `second_accelerator`, or by calling `AcceleratorState().select_deepspeed_plugin(...)` first_accelerator.state.select_deepspeed_plugin( "first_model" ) first_model = AutoModel.from_pretrained(...) # For this example, `get_training_items` is a nonexistent function that gets the setup we need for training first_optimizer, first_scheduler, train_dl, eval_dl = get_training_items(first_model) first_model, first_optimizer, first_scheduler, train_dl, eval_dl = first_accelerator.prepare( first_model, first_optimizer, first_scheduler, train_dl, eval_dl ) second_accelerator.state.select_deepspeed_plugin( "second_model" ) second_model = AutoModel.from_pretrained(...) # For this example, `get_training_items` is a nonexistent function that gets the setup we need for training second_optimizer, second_scheduler, _, _ = get_training_items(second_model) second_model, second_optimizer, second_scheduler = second_accelerator.prepare( second_model, second_optimizer, second_scheduler ) And now you can train: Copied for batch in dl: outputs1 = first_model(**batch) first_accelerator.backward(outputs1.loss) first_optimizer.step() first_scheduler.step() first_optimizer.zero_grad() outputs2 = second_model(**batch) second_accelerator.backward(outputs2.loss) second_optimizer.step() second_scheduler.step() second_optimizer.zero_grad() Resources To see more examples, please check out the related tests in the Accelerate repository. < > Update on GitHub ← DeepSpeed DDP Communication Hooks → Using multiple models with DeepSpeed Knowledge distillation Student model Teacher model Training Train multiple disjoint models Resources |
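The knowledge-distillation loop above adds the two losses together as a placeholder for "combine the losses or modify them in some way". A common concrete choice is a temperature-scaled KL divergence between the teacher and student logits; here is a minimal sketch, assuming both models return logits of the same shape (the temperature value is illustrative): Copied
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # soften both distributions, then measure how far the student is from the teacher
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # scale by T^2 so gradient magnitudes stay comparable across temperatures
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (temperature ** 2)

# inside the loop shown earlier, the combined loss could then become:
# loss = output_student.loss + distillation_loss(output_student.logits, output_teacher.logits)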
Semantic_segmentation.txt | Semantic segmentation Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Datasets documentation Semantic segmentation Datasets 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v3.2.0 v3.1.0 v3.0.2 v2.21.0 v2.20.0 v2.19.0 v2.18.0 v2.17.1 v2.16.1 v2.15.0 v2.14.7 v2.13.2 v2.12.0 v2.11.0 v2.10.0 v2.9.0 v2.8.0 v2.7.1 v2.6.2 v2.5.2 v2.4.0 v2.3.2 v2.2.1 v2.1.0 v2.0.0 v1.18.3 v1.17.0 v1.16.1 v1.15.1 v1.14.0 v1.13.3 v1.12.1 v1.11.0 v1.10.2 v1.9.0 v1.8.0 v1.7.0 v1.6.2 v1.5.0 v1.4.1 v1.3.0 v1.2.1 v1.1.3 v1.0.2 v0.4.0 v0.3.0 EN Get started 🤗 Datasets Quickstart Installation Tutorials Overview Load a dataset from the Hub Know your dataset Preprocess Create a dataset Share a dataset to the Hub How-to guides Overview General usage Load Process Stream Use with TensorFlow Use with PyTorch Use with JAX Use with Spark Cache management Cloud storage Search index CLI Troubleshooting Audio Load audio data Process audio data Create an audio dataset Vision Load image data Process image data Create an image dataset Depth estimation Image classification Semantic segmentation Object detection Load video data Create a video dataset Text Load text data Process text data Tabular Load tabular data Dataset repository Share Create a dataset card Structure your repository Create a dataset loading script Conceptual guides Datasets 🤝 Arrow The cache Dataset or IterableDataset Dataset features Build and load Batch mapping Reference Main classes Builder classes Loading methods Table Classes Utilities Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Semantic segmentation Semantic segmentation datasets are used to train a model to classify every pixel in an image. There are a wide variety of applications enabled by these datasets such as background removal from images, stylizing images, or scene understanding for autonomous driving. This guide will show you how to apply transformations to an image segmentation dataset. Before you start, make sure you have up-to-date versions of albumentations and cv2 installed: Copied pip install -U albumentations opencv-python Albumentations is a Python library for performing data augmentation for computer vision. It supports various computer vision tasks such as image classification, object detection, segmentation, and keypoint estimation. This guide uses the Scene Parsing dataset for segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed. 
Load the train split of the dataset and take a look at an example: Copied >>> from datasets import load_dataset >>> dataset = load_dataset( "scene_parse_150" , split= "train" ) >>> index = 10 >>> dataset[index] { 'image' : <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=683x512 at 0x7FB37B0EC810 >, 'annotation' : <PIL.PngImagePlugin.PngImageFile image mode=L size=683x512 at 0x7FB37B0EC9D0 >, 'scene_category' : 927 } The dataset has three fields: image : a PIL image object. annotation : segmentation mask of the image. scene_category : the label or scene category of the image (like “kitchen” or “office”). Next, check out an image with: Copied >>> dataset[index][ "image" ] Similarly, you can check out the respective segmentation mask: Copied >>> dataset[index][ "annotation" ] We can also add a color palette on the segmentation mask and overlay it on top of the original image to visualize the dataset: After defining the color palette, you should be ready to visualize some overlays. Copied >>> import matplotlib.pyplot as plt >>> def visualize_seg_mask ( image: np.ndarray, mask: np.ndarray ): ... color_seg = np.zeros((mask.shape[ 0 ], mask.shape[ 1 ], 3 ), dtype=np.uint8) ... palette = np.array(create_ade20k_label_colormap()) ... for label, color in enumerate (palette): ... color_seg[mask == label, :] = color ... color_seg = color_seg[..., ::- 1 ] # convert to BGR ... img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map ... img = img.astype(np.uint8) ... plt.figure(figsize=( 15 , 10 )) ... plt.imshow(img) ... plt.axis( "off" ) ... plt.show() >>> visualize_seg_mask( ... np.array(dataset[index][ "image" ]), ... np.array(dataset[index][ "annotation" ]) ... ) Now apply some augmentations with albumentations . You’ll first resize the image and adjust its brightness. Copied >>> import albumentations >>> transform = albumentations.Compose( ... [ ... albumentations.Resize( 256 , 256 ), ... albumentations.RandomBrightnessContrast(brightness_limit= 0.3 , contrast_limit= 0.3 , p= 0.5 ), ... ] ... ) Create a function to apply the transformation to the images: Copied >>> def transforms ( examples ): ... transformed_images, transformed_masks = [], [] ... ... for image, seg_mask in zip (examples[ "image" ], examples[ "annotation" ]): ... image, seg_mask = np.array(image), np.array(seg_mask) ... transformed = transform(image=image, mask=seg_mask) ... transformed_images.append(transformed[ "image" ]) ... transformed_masks.append(transformed[ "mask" ]) ... ... examples[ "pixel_values" ] = transformed_images ... examples[ "label" ] = transformed_masks ... return examples Use the set_transform() function to apply the transformation on-the-fly to batches of the dataset to consume less disk space: Copied >>> dataset.set_transform(transforms) You can verify the transformation worked by indexing into the pixel_values and label of an example: Copied >>> image = np.array(dataset[index][ "pixel_values" ]) >>> mask = np.array(dataset[index][ "label" ]) >>> visualize_seg_mask(image, mask) In this guide, you have used albumentations for augmenting the dataset. It’s also possible to use torchvision to apply some similar transforms. Copied >>> from torchvision.transforms import Resize, ColorJitter, Compose >>> transformation_chain = Compose([ ... Resize(( 256 , 256 )), ... ColorJitter(brightness= 0.25 , contrast= 0.25 , saturation= 0.25 , hue= 0.1 ) ... ]) >>> resize = Resize(( 256 , 256 )) >>> def train_transforms ( example_batch ): ... 
example_batch[ "pixel_values" ] = [transformation_chain(x) for x in example_batch[ "image" ]] ... example_batch[ "label" ] = [resize(x) for x in example_batch[ "annotation" ]] ... return example_batch >>> dataset.set_transform(train_transforms) >>> image = np.array(dataset[index][ "pixel_values" ]) >>> mask = np.array(dataset[index][ "label" ]) >>> visualize_seg_mask(image, mask) Now that you know how to process a dataset for semantic segmentation, learn how to train a semantic segmentation model and use it for inference. < > Update on GitHub ← Image classification Object detection → Semantic segmentation |
Agents_&_Tools.txt | Agents & Tools Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Agents & Tools Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. To learn more about agents and tools make sure to read the introductory guide . This page contains the API docs for the underlying classes.
Agents
We provide two types of agents, based on the main Agent class: CodeAgent acts in one shot, generating code to solve the task, then executes it at once. ReactAgent acts step by step, each step consisting of one thought, then one tool call and execution. It has two classes: ReactJsonAgent writes its tool calls in JSON. ReactCodeAgent writes its tool calls in Python code.
Agent
class transformers. Agent < source > ( tools : typing.Union[typing.List[transformers.agents.tools.Tool], transformers.agents.agents.Toolbox] llm_engine : typing.Callable = None system_prompt : typing.Optional[str] = None tool_description_template : typing.Optional[str] = None additional_args : typing.Dict = {} max_iterations : int = 6 tool_parser : typing.Optional[typing.Callable] = None add_base_tools : bool = False verbose : int = 0 grammar : typing.Optional[typing.Dict[str, str]] = None managed_agents : typing.Optional[typing.List] = None step_callbacks : typing.Optional[typing.List[typing.Callable]] = None monitor_metrics : bool = True )
execute_tool_call < source > ( tool_name : str arguments : typing.Dict[str, str] )
Parameters tool_name ( str ) — Name of the Tool to execute (should be one from self.toolbox). arguments (Dict[str, str]) — Arguments passed to the Tool.
Execute tool with the provided input and returns the result. This method replaces arguments with the actual values from the state if they refer to state variables.
extract_action < source > ( llm_output : str split_token : str )
Parameters llm_output ( str ) — Output of the LLM split_token ( str ) — Separator for the action. Should match the example in the system prompt.
Parse action from the LLM output run < source > ( **kwargs ) To be implemented in the child class write_inner_memory_from_logs < source > ( summary_mode : typing.Optional[bool] = False ) Reads past llm_outputs, actions, and observations or errors from the logs into a series of messages that can be used as input to the LLM. CodeAgent class transformers. CodeAgent < source > ( tools : typing.List[transformers.agents.tools.Tool] llm_engine : typing.Optional[typing.Callable] = None system_prompt : typing.Optional[str] = None tool_description_template : typing.Optional[str] = None grammar : typing.Optional[typing.Dict[str, str]] = None additional_authorized_imports : typing.Optional[typing.List[str]] = None **kwargs ) A class for an agent that solves the given task using a single block of code. It plans all its actions, then executes all in one shot. parse_code_blob < source > ( result : str ) Override this method if you want to change the way the code is cleaned in the run method. run < source > ( task : str return_generated_code : bool = False **kwargs ) Parameters task ( str ) — The task to perform return_generated_code ( bool , optional , defaults to False ) — Whether to return the generated code instead of running it kwargs (additional keyword arguments, optional ) — Any keyword argument to send to the agent when evaluating the code. Runs the agent for the given task. Example: Copied from transformers.agents import CodeAgent agent = CodeAgent(tools=[]) agent.run( "What is the result of 2 power 3.7384?" ) React agents class transformers. ReactAgent < source > ( tools : typing.List[transformers.agents.tools.Tool] llm_engine : typing.Optional[typing.Callable] = None system_prompt : typing.Optional[str] = None tool_description_template : typing.Optional[str] = None grammar : typing.Optional[typing.Dict[str, str]] = None plan_type : typing.Optional[str] = None planning_interval : typing.Optional[int] = None **kwargs ) This agent that solves the given task step by step, using the ReAct framework: While the objective is not reached, the agent will perform a cycle of thinking and acting. The action will be parsed from the LLM output: it consists in calls to tools from the toolbox, with arguments chosen by the LLM engine. direct_run < source > ( task : str ) Runs the agent in direct mode, returning outputs only at the end: should be launched only in the run method. planning_step < source > ( task is_first_step : bool = False iteration : int = None ) Parameters task ( str ) — The task to perform is_first_step ( bool ) — If this step is not the first one, the plan should be an update over a previous plan. iteration ( int ) — The number of the current step, used as an indication for the LLM. Used periodically by the agent to plan the next steps to reach the objective. provide_final_answer < source > ( task ) This method provides a final answer to the task, based on the logs of the agent’s interactions. run < source > ( task : str stream : bool = False reset : bool = True **kwargs ) Parameters task ( str ) — The task to perform Runs the agent for the given task. Example: Copied from transformers.agents import ReactCodeAgent agent = ReactCodeAgent(tools=[]) agent.run( "What is the result of 2 power 3.7384?" ) stream_run < source > ( task : str ) Runs the agent in streaming mode, yielding steps as they are executed: should be launched only in the run method. class transformers. 
ReactJsonAgent < source > ( tools : typing.List[transformers.agents.tools.Tool] llm_engine : typing.Optional[typing.Callable] = None system_prompt : typing.Optional[str] = None tool_description_template : typing.Optional[str] = None grammar : typing.Optional[typing.Dict[str, str]] = None planning_interval : typing.Optional[int] = None **kwargs ) This agent that solves the given task step by step, using the ReAct framework: While the objective is not reached, the agent will perform a cycle of thinking and acting. The tool calls will be formulated by the LLM in JSON format, then parsed and executed. step < source > ( log_entry : typing.Dict[str, typing.Any] ) Perform one step in the ReAct framework: the agent thinks, acts, and observes the result. The errors are raised here, they are caught and logged in the run() method. class transformers. ReactCodeAgent < source > ( tools : typing.List[transformers.agents.tools.Tool] llm_engine : typing.Optional[typing.Callable] = None system_prompt : typing.Optional[str] = None tool_description_template : typing.Optional[str] = None grammar : typing.Optional[typing.Dict[str, str]] = None additional_authorized_imports : typing.Optional[typing.List[str]] = None planning_interval : typing.Optional[int] = None **kwargs ) This agent that solves the given task step by step, using the ReAct framework: While the objective is not reached, the agent will perform a cycle of thinking and acting. The tool calls will be formulated by the LLM in code format, then parsed and executed. step < source > ( log_entry : typing.Dict[str, typing.Any] ) Perform one step in the ReAct framework: the agent thinks, acts, and observes the result. The errors are raised here, they are caught and logged in the run() method. ManagedAgent class transformers. ManagedAgent < source > ( agent name description additional_prompting = None provide_run_summary = False ) Tools load_tool transformers.load_tool < source > ( task_or_repo_id model_repo_id = None token = None **kwargs ) Parameters task_or_repo_id ( str ) — The task for which to load the tool or a repo ID of a tool on the Hub. Tasks implemented in Transformers are: "document_question_answering" "image_question_answering" "speech_to_text" "text_to_speech" "translation" model_repo_id ( str , optional ) — Use this argument to use a different model than the default one for the tool you selected. token ( str , optional ) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). kwargs (additional keyword arguments, optional ) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir , revision , subfolder ) will be used when downloading the files for your tool, and the others will be passed along to its init. Main function to quickly load a tool, be it on the Hub or in the Transformers library. Loading a tool means that you’ll download the tool and execute it locally. ALWAYS inspect the tool you’re downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt. tool transformers.tool < source > ( tool_function : typing.Callable ) Parameters tool_function — Your function. Should have type hints for each input and a type hint for the output. Should also have a docstring description including an ‘Args —’ part where each argument is described. Converts a function into an instance of a Tool subclass. Tool class transformers. 
Tool < source > ( *args **kwargs ) A base class for the functions used by the agent. Subclass this and implement the __call__ method as well as the following class attributes: description ( str ) — A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance ‘This is a tool that downloads a file from a url . It takes the url as input, and returns the text contained in the file’. name ( str ) — A performative name that will be used for your tool in the prompt to the agent. For instance "text-classifier" or "image_generator" . inputs ( Dict[str, Dict[str, Union[str, type]]] ) — The dict of modalities expected for the inputs. It has one type key and a description key. This is used by launch_gradio_demo or to make a nice space from your tool, and also can be used in the generated description for your tool. output_type ( type ) — The type of the tool output. This is used by launch_gradio_demo or to make a nice space from your tool, and also can be used in the generated description for your tool. You can also override the method setup() if your tool as an expensive operation to perform before being usable (such as loading a model). setup() will be called the first time you use your tool, but not at instantiation. from_gradio < source > ( gradio_tool ) Creates a Tool from a gradio tool. from_hub < source > ( repo_id : str token : typing.Optional[str] = None **kwargs ) Parameters repo_id ( str ) — The name of the repo on the Hub where your tool is defined. token ( str , optional ) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). kwargs (additional keyword arguments, optional ) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir , revision , subfolder ) will be used when downloading the files for your tool, and the others will be passed along to its init. Loads a tool defined on the Hub. Loading a tool from the Hub means that you’ll download the tool and execute it locally. ALWAYS inspect the tool you’re downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt. from_langchain < source > ( langchain_tool ) Creates a Tool from a langchain tool. from_space < source > ( space_id : str name : str description : str api_name : typing.Optional[str] = None token : typing.Optional[str] = None ) → Tool Parameters space_id ( str ) — The id of the Space on the Hub. name ( str ) — The name of the tool. description ( str ) — The description of the tool. api_name ( str , optional ) — The specific api_name to use, if the space has several tabs. If not precised, will default to the first available api. token ( str , optional ) — Add your token to access private spaces or increase your GPU quotas. Returns Tool The Space, as a tool. Creates a Tool from a Space given its id on the Hub. Examples: Copied image_generator = Tool.from_space( space_id = "black-forest-labs/FLUX.1-schnell" , name = "image-generator" , description = "Generate an image from a prompt" ) image = image_generator( "Generate an image of a cool surfer in Tahiti" ) Copied face_swapper = Tool .from_space( "tuan2308/face-swap" , "face_swapper" , "Tool that puts the face shown on the first image on the second image. You can give it paths to images." 
, ) image = face_swapper( './aymeric.jpeg' , './ruth.jpg' ) push_to_hub < source > ( repo_id : str commit_message : str = 'Upload tool' private : typing.Optional[bool] = None token : typing.Union[bool, str, NoneType] = None create_pr : bool = False ) Parameters repo_id ( str ) — The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization. commit_message ( str , optional , defaults to "Upload tool" ) — Message to commit while pushing. private ( bool , optional ) — Whether to make the repo private. If None (default), the repo will be public unless the organization’s default is private. This value is ignored if the repo already exists. token ( bool or str , optional ) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). create_pr ( bool , optional , defaults to False ) — Whether or not to create a PR with the uploaded files or directly commit. Upload the tool to the Hub. For this method to work properly, your tool must have been defined in a separate module (not __main__ ). For instance: Copied from my_tool_module import MyTool my_tool = MyTool() my_tool.push_to_hub( "my-username/my-space" ) save < source > ( output_dir ) Parameters output_dir ( str ) — The folder in which you want to save your tool. Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in output_dir as well as autogenerate: a config file named tool_config.json an app.py file so that your tool can be converted to a space a requirements.txt containing the names of the module used by your tool (as detected when inspecting its code) You should only use this method to save tools that are defined in a separate module (not __main__ ). setup < source > ( ) Overwrite this method here for any operation that is expensive and needs to be executed before you start using your tool. Such as loading a big model. Toolbox class transformers. Toolbox < source > ( tools : typing.List[transformers.agents.tools.Tool] add_base_tools : bool = False ) Parameters tools ( List[Tool] ) — The list of tools to instantiate the toolbox with add_base_tools ( bool , defaults to False , optional , defaults to False ) — Whether to add the tools available within transformers to the toolbox. The toolbox contains all tools that the agent can perform operations with, as well as a few methods to manage them. add_tool < source > ( tool : Tool ) Parameters tool ( Tool ) — The tool to add to the toolbox. Adds a tool to the toolbox clear_toolbox < source > ( ) Clears the toolbox remove_tool < source > ( tool_name : str ) Parameters tool_name ( str ) — The tool to remove from the toolbox. Removes a tool from the toolbox show_tool_descriptions < source > ( tool_description_template : str = None ) Parameters tool_description_template ( str , optional ) — The template to use to describe the tools. If not provided, the default template will be used. Returns the description of all tools in the toolbox update_tool < source > ( tool : Tool ) Parameters tool ( Tool ) — The tool to update to the toolbox. Updates a tool in the toolbox according to its name. PipelineTool class transformers. 
PipelineTool < source > ( model = None pre_processor = None post_processor = None device = None device_map = None model_kwargs = None token = None **hub_kwargs ) Parameters model ( str or PreTrainedModel , optional ) — The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute default_checkpoint . pre_processor ( str or Any , optional ) — The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of model if unset. post_processor ( str or Any , optional ) — The name of the checkpoint to use for the post-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the pre_processor if unset. device ( int , str or torch.device , optional ) — The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc…), the CPU otherwise. device_map ( str or dict , optional ) — If passed along, will be used to instantiate the model. model_kwargs ( dict , optional ) — Any keyword argument to send to the model instantiation. token ( str , optional ) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface ). hub_kwargs (additional keyword arguments, optional ) — Any additional keyword argument to send to the methods that will load the data from the Hub. A Tool tailored towards Transformer models. On top of the class attributes of the base class Tool , you will need to specify: model_class ( type ) — The class to use to load the model in this tool. default_checkpoint ( str ) — The default checkpoint that should be used when the user doesn’t specify one. pre_processor_class ( type , optional , defaults to AutoProcessor ) — The class to use to load the pre-processor post_processor_class ( type , optional , defaults to AutoProcessor ) — The class to use to load the post-processor (when different from the pre-processor). decode < source > ( outputs ) Uses the post_processor to decode the model output. encode < source > ( raw_inputs ) Uses the pre_processor to prepare the inputs for the model . forward < source > ( inputs ) Sends the inputs through the model . setup < source > ( ) Instantiates the pre_processor , model and post_processor if necessary. launch_gradio_demo transformers.launch_gradio_demo < source > ( tool_class : Tool ) Parameters tool_class ( type ) — The class of the tool for which to launch the demo. Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes inputs and output_type . stream_to_gradio transformers.stream_to_gradio < source > ( agent task : str test_mode : bool = False **kwargs ) Runs an agent with the given task and streams the messages from the agent as gradio ChatMessages. ToolCollection class transformers. ToolCollection < source > ( collection_slug : str token : typing.Optional[str] = None ) Parameters collection_slug (str) — The collection slug referencing the collection. token (str, optional ) — The authentication token if the collection is private. Tool collections enable loading all Spaces from a collection in order to be added to the agent’s toolbox. 
[!NOTE] Only Spaces will be fetched, so you can feel free to add models and datasets to your collection if you’d like for this collection to showcase them. Example: Copied >>> from transformers import ToolCollection, ReactCodeAgent >>> image_tool_collection = ToolCollection(collection_slug= "huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f" ) >>> agent = ReactCodeAgent(tools=[*image_tool_collection.tools], add_base_tools= True ) >>> agent.run( "Please draw me a picture of rivers and lakes." ) Engines You’re free to create and use your own engines to be usable by the Agents framework. These engines have the following specification: Follow the messages format for its input ( List[Dict[str, str]] ) and return a string. Stop generating outputs before the sequences passed in the argument stop_sequences TransformersEngine For convenience, we have added a TransformersEngine that implements the points above, taking a pre-initialized Pipeline as input. Copied >>> from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TransformersEngine >>> model_name = "HuggingFaceTB/SmolLM-135M-Instruct" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> model = AutoModelForCausalLM.from_pretrained(model_name) >>> pipe = pipeline( "text-generation" , model=model, tokenizer=tokenizer) >>> engine = TransformersEngine(pipe) >>> engine([{ "role" : "user" , "content" : "Ok!" }], stop_sequences=[ "great" ]) "What a " class transformers. TransformersEngine < source > ( pipeline : Pipeline model_id : typing.Optional[str] = None ) This engine uses a pre-initialized local text-generation pipeline. HfApiEngine The HfApiEngine is an engine that wraps an HF Inference API client for the execution of the LLM. Copied >>> from transformers import HfApiEngine >>> messages = [ ... { "role" : "user" , "content" : "Hello, how are you?" }, ... { "role" : "assistant" , "content" : "I'm doing great. How can I help you today?" }, ... { "role" : "user" , "content" : "No need to help, take it easy." }, ... ] >>> HfApiEngine()(messages, stop_sequences=[ "conversation" ]) "That's very kind of you to say! It's always nice to have a relaxed " class transformers. HfApiEngine < source > ( model : str = 'meta-llama/Meta-Llama-3.1-8B-Instruct' token : typing.Optional[str] = None max_tokens : typing.Optional[int] = 1500 timeout : typing.Optional[int] = 120 ) Parameters model ( str , optional , defaults to "meta-llama/Meta-Llama-3.1-8B-Instruct" ) — The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub. token ( str , optional ) — Token used by the Hugging Face API for authentication. If not provided, the class will use the token stored in the Hugging Face CLI configuration. max_tokens ( int , optional , defaults to 1500) — The maximum number of tokens allowed in the output. timeout ( int , optional , defaults to 120) — Timeout for the API request, in seconds. Raises ValueError ValueError — If the model name is not provided. A class to interact with Hugging Face’s Inference API for language model interaction. This engine allows you to communicate with Hugging Face’s models using the Inference API. It can be used in both serverless mode or with a dedicated endpoint, supporting features like stop sequences and grammar customization. Agent Types Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. 
In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, …), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a PIL.Image . These types have three specific purposes: Calling to_raw on the type should return the underlying object. Calling to_string on the type should return the object as a string: that can be the string in case of an AgentText but will be the path of the serialized version of the object in other instances. Displaying it in an ipython kernel should display the object correctly.
AgentText class transformers.agents.agent_types. AgentText < source > ( value ) Text type returned by the agent. Behaves as a string.
AgentImage class transformers.agents.agent_types. AgentImage < source > ( value ) Image type returned by the agent. Behaves as a PIL.Image. save < source > ( output_bytes format **params ) Parameters output_bytes (bytes) — The output bytes to save the image to. format (str) — The format to use for the output image. The format is the same as in PIL.Image.save. **params — Additional parameters to pass to PIL.Image.save. Saves the image to a file. to_raw < source > ( ) Returns the "raw" version of that object. In the case of an AgentImage, it is a PIL.Image. to_string < source > ( ) Returns the stringified version of that object. In the case of an AgentImage, it is a path to the serialized version of the image.
AgentAudio class transformers.agents.agent_types. AgentAudio < source > ( value samplerate = 16000 ) Audio type returned by the agent. to_raw < source > ( ) Returns the "raw" version of that object. It is a torch.Tensor object. to_string < source > ( ) Returns the stringified version of that object. In the case of an AgentAudio, it is a path to the serialized version of the audio. |
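To tie together several of the pieces documented above (the tool decorator, an LLM engine, and a ReAct agent), here is a minimal, hypothetical sketch. The get_weather tool, its placeholder return value, and the prompt are invented for illustration; the imports follow the examples on this page, and HfApiEngine assumes a valid Hugging Face token is available for the Inference API.
Copied
from transformers import ReactCodeAgent, HfApiEngine, tool

@tool
def get_weather(city: str) -> str:
    """
    Returns a short weather description for a city.

    Args:
        city: The name of the city to look up.
    """
    return f"The weather in {city} is sunny."  # placeholder logic for this sketch

engine = HfApiEngine()  # defaults to the Inference API model documented above
agent = ReactCodeAgent(tools=[get_weather], llm_engine=engine)
agent.run("What is the weather like in Paris?")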
Gated_datasets.txt | Gated datasets
To give more control over how datasets are used, the Hub allows dataset authors to enable access requests for their datasets. Users must agree to share their contact information (username and email address) with the dataset authors in order to access the dataset files once gating is enabled. Dataset authors can configure this request with additional fields. A dataset with access requests enabled is called a gated dataset . Access requests are always granted to individual users rather than to entire organizations. A common use case of gated datasets is to provide access to early research datasets before the wider release.
Manage gated datasets as a dataset author
To enable access requests, go to the dataset settings page. By default, the dataset is not gated. Click on Enable Access request in the top-right corner. By default, access to the dataset is automatically granted to the user when requesting it. This is referred to as automatic approval . In this mode, any user can access your dataset once they've shared their personal information with you. If you want to manually approve which users can access your dataset, you must set it to manual approval . When this is the case, you will notice more options:
Add access allows you to search for a user and grant them access even if they did not request it.
Notification frequency lets you configure when to get notified if new users request access. It can be set to once a day or real-time. By default, an email is sent to your primary email address.
For datasets hosted under an organization, emails are by default sent to the first 5 admins of the organization. In both cases (user or organization) you can set a different email address in the Notifications email field.
Review access requests
Once access requests are enabled, you have full control of who can access your dataset, whether the approval mode is manual or automatic. You can review and manage requests either from the UI or via the API.
From the UI
You can review who has access to your gated dataset from its settings page by clicking on the Review access requests button. This will open a modal with 3 lists of users:
pending : the list of users waiting for approval to access your dataset. This list is empty unless you've selected manual approval . You can either Accept or Reject the request. If the request is rejected, the user cannot access your dataset and cannot request access again.
accepted : the complete list of users with access to your dataset. You can choose to Reject access at any time for any user, whether the approval mode is manual or automatic. You can also Cancel the approval, which will move the user to the pending list.
rejected : the list of users you've manually rejected. Those users cannot access your datasets. If they go to your dataset repository, they will see a message Your request to access this repo has been rejected by the repo's authors .
Via the API
You can automate the approval of access requests by using the API. You must pass a token with write access to the gated repository. To generate a token, go to your user settings .
Method | URI | Description | Headers | Payload
GET | /api/datasets/{repo_id}/user-access-request/pending | Retrieve the list of pending requests. | {"authorization": "Bearer $token"} | (none)
GET | /api/datasets/{repo_id}/user-access-request/accepted | Retrieve the list of accepted requests. | {"authorization": "Bearer $token"} | (none)
GET | /api/datasets/{repo_id}/user-access-request/rejected | Retrieve the list of rejected requests. | {"authorization": "Bearer $token"} | (none)
POST | /api/datasets/{repo_id}/user-access-request/handle | Change the status of a given access request to status . | {"authorization": "Bearer $token"} | {"status": "accepted"/"rejected"/"pending", "user": "username", "rejectionReason": "Optional rejection reason that will be visible to the user (max 200 characters)."}
POST | /api/datasets/{repo_id}/user-access-request/grant | Allow a specific user to access your repo. | {"authorization": "Bearer $token"} | {"user": "username"}
The base URL for the HTTP endpoints above is https://huggingface.co .
NEW! Those endpoints are now officially supported in our Python client huggingface_hub . List the access requests to your dataset with list_pending_access_requests , list_accepted_access_requests and list_rejected_access_requests . You can also accept, cancel and reject access requests with accept_access_request , cancel_access_request , reject_access_request . Finally, you can grant access to a user with grant_access . A short Python example of calling these endpoints is included below, after the Download access report section.
Download access report
You can download a report of all access requests for a gated dataset with the download user access report button. Click on it to download a json file with a list of users. For each entry, you have:
user : the user id. Example: julien-c .
fullname : name of the user on the Hub. Example: Julien Chaumond .
status : status of the request. Either "pending" , "accepted" or "rejected" .
email : email of the user.
time : datetime when the user initially made the request.
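Following the endpoint table in the Via the API section above, here is a short requests-based sketch of the manual-approval flow, in the same style as the other Python examples in these docs. The repo id, username, and token are placeholders; the URLs, headers, and payloads are exactly the ones documented above.
Copied
import requests

API_BASE = "https://huggingface.co/api/datasets/username/my-gated-dataset/user-access-request"
headers = {"authorization": "Bearer hf_***"}  # token with write access to the gated repo

# List pending requests
pending = requests.get(f"{API_BASE}/pending", headers=headers).json()
print(pending)

# Accept a specific request
resp = requests.post(
    f"{API_BASE}/handle",
    headers=headers,
    json={"status": "accepted", "user": "some-username"},
)
print(resp.status_code)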
Customize requested information By default, users landing on your gated dataset will be asked to share their contact information (email and username) by clicking the Agree and send request to access repo button. If you want to request more user information to provide access, you can configure additional fields. This information will be accessible from the Settings tab. To do so, add an extra_gated_fields property to your dataset card metadata containing a list of key/value pairs. The key is the name of the field and value its type or an object with a type field. The list of field types is: text : a single-line text field. checkbox : a checkbox field. date_picker : a date picker field. country : a country dropdown. The list of countries is based on the ISO 3166-1 alpha-2 standard. select : a dropdown with a list of options. The list of options is defined in the options field. Example: options: ["option 1", "option 2", {label: "option3", value: "opt3"}] . Finally, you can also personalize the message displayed to the user with the extra_gated_prompt extra field. Here is an example of customized request form where the user is asked to provide their company name and country and acknowledge that the dataset is for non-commercial use only. Copied --- extra_gated_prompt: "You agree to not use the dataset to conduct experiments that cause harm to human subjects." extra_gated_fields: Company: text Country: country Specific date: date_picker I want to use this dataset for: type: select options: - Research - Education - label: Other value: other I agree to use this dataset for non-commercial use ONLY: checkbox --- In some cases, you might also want to modify the default text in the gate heading, description, and button. For those use cases, you can modify extra_gated_heading , extra_gated_description and extra_gated_button_content like this: Copied --- extra_gated_heading: "Acknowledge license to accept the repository" extra_gated_description: "Our team may take 2-3 days to process your request" extra_gated_button_content: "Acknowledge license" --- Manage gated datasets as an organization (Enterprise Hub) Enterprise Hub subscribers can create a Gating Group Collection to grant (or reject) access to all the models and datasets in a collection at once. More information about Gating Group Collections can be found in our dedicated doc . Access gated datasets as a user As a user, if you want to use a gated dataset, you will need to request access to it. This means that you must be logged in to a Hugging Face user account. Requesting access can only be done from your browser. Go to the dataset on the Hub and you will be prompted to share your information: By clicking on Agree , you agree to share your username and email address with the dataset authors. In some cases, additional fields might be requested. To help the dataset authors decide whether to grant you access, try to fill out the form as completely as possible. Once the access request is sent, there are two possibilities. If the approval mechanism is automatic, you immediately get access to the dataset files. Otherwise, the requests have to be approved manually by the authors, which can take more time. The dataset authors have complete control over dataset access. In particular, they can decide at any time to block your access to the dataset without prior notice, regardless of approval mechanism or if your request has already been approved. Download files To download files from a gated dataset you’ll need to be authenticated. 
In the browser, this is automatic as long as you are logged in with your account. If you are using a script, you will need to provide a user token . In the Hugging Face Python ecosystem ( transformers , diffusers , datasets , etc.), you can log in from your machine using the huggingface_hub library by running the following in your terminal:
Copied
huggingface-cli login
Alternatively, you can programmatically log in using login() in a notebook or a script:
Copied
>>> from huggingface_hub import login
>>> login()
You can also provide the token parameter to most loading methods in the libraries ( from_pretrained , hf_hub_download , load_dataset , etc.), directly from your scripts. For more details about how to login, check out the login guide . |
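As a quick, hypothetical illustration of the token parameter mentioned above: the repo id and filename below are placeholders, and token=True reuses the token stored by huggingface-cli login.
Copied
from huggingface_hub import hf_hub_download

# Hypothetical gated dataset repo and file
path = hf_hub_download(
    repo_id="username/my-gated-dataset",
    filename="data/train.csv",
    repo_type="dataset",
    token=True,  # reuse the token saved by `huggingface-cli login`
)
print(path)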
How_to_Hack_Any_Transformers_Model.txt | How to Hack Any Transformers Model
The 🤗 Transformers library offers a collection of pre-trained models and tools for natural language processing, vision, and beyond. While these models cover a wide range of applications, you might encounter use cases that aren't supported out of the box. Customizing models can unlock new possibilities, such as adding new layers, altering architectures, or optimizing attention mechanisms. This guide will show you how to modify existing Transformers models to fit your specific needs. The great thing is, you don't have to step away from the Transformers framework to make these changes. You can actually modify models directly in Transformers and still take advantage of features like the Trainer API , PreTrainedModel , and efficient fine-tuning with tools like PEFT .
In this guide, we'll walk you through how to customize existing Transformers models to meet your requirements without losing the benefits of the ecosystem. You'll learn how to:
Modify a model's architecture by changing its attention mechanism.
Apply techniques like Low-Rank Adaptation (LoRA) to specific model components.
We encourage you to contribute your own hacks and share them here with the community!
Example: Modifying the Attention Mechanism in the Segment Anything Model (SAM)
The Segment Anything Model (SAM) is a state-of-the-art model for image segmentation. In its default implementation, SAM uses a combined query-key-value ( qkv ) projection in its attention mechanism. However, you might want to fine-tune only specific components of the attention mechanism, such as the query ( q ) and value ( v ) projections, to reduce the number of trainable parameters and computational resources required.
Motivation
By splitting the combined qkv projection into separate q , k , and v projections, you can apply techniques like LoRA (Low-Rank Adaptation) to only the q and v projections. This approach allows you to:
Fine-tune fewer parameters, reducing computational overhead (a rough parameter count follows this list).
Potentially achieve better performance by focusing on specific components.
Experiment with different adaptation strategies in the attention mechanism.
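To make the "fewer parameters" point concrete, here is a rough back-of-the-envelope count for a single projection layer. It assumes the ViT-B hidden size of 768 used by facebook/sam-vit-base and the LoRA rank of 16 used later in this guide; both values are stated assumptions, not measurements from the code below.
Copied
# Standard LoRA replaces the update of a d x k weight with two low-rank factors (r x k and d x r)
d = k = 768          # hidden size of the SAM ViT-B vision encoder (assumption)
r = 16               # LoRA rank used in Step 3 below
full_finetune = d * k            # trainable parameters when tuning one projection directly
lora = r * (d + k)               # trainable parameters added by LoRA for that projection
print(full_finetune, lora, round(lora / full_finetune, 4))  # 589824 24576 0.0417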
Implementation
Step 1: Create a Custom Attention Class
Subclass the original SamVisionAttention class and modify it to have separate q , k , and v projections.
Copied
import torch
import torch.nn as nn
from transformers.models.sam.modeling_sam import SamVisionAttention

class SamVisionAttentionSplit(SamVisionAttention, nn.Module):
    def __init__(self, config, window_size):
        super().__init__(config, window_size)
        del self.qkv
        # Separate q, k, v projections
        self.q = nn.Linear(config.hidden_size, config.hidden_size, bias=config.qkv_bias)
        self.k = nn.Linear(config.hidden_size, config.hidden_size, bias=config.qkv_bias)
        self.v = nn.Linear(config.hidden_size, config.hidden_size, bias=config.qkv_bias)
        self._register_load_state_dict_pre_hook(self.split_q_k_v_load_hook)

    def split_q_k_v_load_hook(self, state_dict, prefix, *args):
        keys_to_delete = []
        for key in list(state_dict.keys()):
            if "qkv." in key:
                # Split q, k, v from the combined projection
                q, k, v = state_dict[key].chunk(3, dim=0)
                # Replace with individual q, k, v projections
                state_dict[key.replace("qkv.", "q.")] = q
                state_dict[key.replace("qkv.", "k.")] = k
                state_dict[key.replace("qkv.", "v.")] = v
                # Mark the old qkv key for deletion
                keys_to_delete.append(key)
        # Remove old qkv keys
        for key in keys_to_delete:
            del state_dict[key]

    def forward(self, hidden_states: torch.Tensor, output_attentions=False) -> torch.Tensor:
        batch_size, height, width, _ = hidden_states.shape
        qkv_shapes = (batch_size * self.num_attention_heads, height * width, -1)
        query = self.q(hidden_states).reshape((batch_size, height * width, self.num_attention_heads, -1)).permute(0, 2, 1, 3).reshape(qkv_shapes)
        key = self.k(hidden_states).reshape((batch_size, height * width, self.num_attention_heads, -1)).permute(0, 2, 1, 3).reshape(qkv_shapes)
        value = self.v(hidden_states).reshape((batch_size, height * width, self.num_attention_heads, -1)).permute(0, 2, 1, 3).reshape(qkv_shapes)

        attn_weights = (query * self.scale) @ key.transpose(-2, -1)

        if self.use_rel_pos:
            attn_weights = self.add_decomposed_rel_pos(
                attn_weights, query, self.rel_pos_h, self.rel_pos_w, (height, width), (height, width)
            )

        attn_weights = torch.nn.functional.softmax(attn_weights, dtype=torch.float32, dim=-1).to(query.dtype)
        attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
        attn_output = (attn_probs @ value).reshape(batch_size, self.num_attention_heads, height, width, -1)
        attn_output = attn_output.permute(0, 2, 3, 1, 4).reshape(batch_size, height, width, -1)
        attn_output = self.proj(attn_output)

        if output_attentions:
            outputs = (attn_output, attn_weights)
        else:
            outputs = (attn_output, None)
        return outputs
Explanation:
Separate Projections: The combined qkv projection is removed, and separate q , k , and v linear layers are created.
Weight Loading Hook: The split_q_k_v_load_hook method splits the pre-trained qkv weights into separate q , k , and v weights when loading the model. This ensures compatibility with any pre-trained model.
Forward Pass: Queries, keys, and values are computed separately, and the attention mechanism proceeds as usual.
Step 2: Replace the Original Attention Class
Replace the original SamVisionAttention class with your custom class so that the model uses the modified attention mechanism.
Copied
from transformers import SamModel
from transformers.models.sam import modeling_sam

# Replace the attention class in the modeling_sam module
modeling_sam.SamVisionAttention = SamVisionAttentionSplit

# Load the pre-trained SAM model
model = SamModel.from_pretrained("facebook/sam-vit-base")
Explanation:
Class Replacement: By assigning your custom class to modeling_sam.SamVisionAttention , any instances of SamVisionAttention in the model will use the modified version. Thus when you call SamModel , it will use the newly defined SamVisionAttentionSplit .
Model Loading: The model is loaded using from_pretrained , and the custom attention mechanism is integrated.
Step 3: Apply LoRA to Specific Projections
With separate q , k , and v projections, you can now apply LoRA to specific components, such as the q and v projections.
Copied
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],  # Apply LoRA to q and v projections
    lora_dropout=0.1,
    task_type="mask-generation"
)

# Apply LoRA to the model
model = get_peft_model(model, config)
Explanation:
LoRA Configuration: The LoraConfig specifies the rank r , scaling factor lora_alpha , target modules ( "q" and "v" ), dropout, and task type.
Applying LoRA: The get_peft_model function applies LoRA to the specified modules in the model.
Parameter Reduction: By focusing on q and v , you reduce the number of trainable parameters, leading to faster training and lower memory usage.
Step 4: Verify the Number of Trainable Parameters
It's simple to verify the number of trainable parameters and see what impact your modification had.
Copied
model.print_trainable_parameters()
Expected Output:
Copied
trainable params: 608,256 || all params: 94,343,728 || trainable%: 0.6447
trainable params: 912,384 || all params: 94,647,856 || trainable%: 0.9640  # with k
Contributing Your Own Hacks
Modifying pre-trained models can open up new avenues for research and application. By understanding and adjusting the internal mechanisms of models like SAM, you can tailor them to your specific needs, optimize performance, and experiment with new ideas. If you've developed your own hacks for Transformers models and would like to share them, consider contributing to this doc.
Open a Pull Request: Share your code changes and improvements directly in the repository.
Write Documentation: Provide clear explanations and examples of your modifications.
Engage with the Community: Discuss your ideas and get feedback from other developers and researchers by opening an issue. |
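As an optional follow-up to Step 4 (not part of the original guide), one possible next step is to persist only the LoRA weights. A PEFT-wrapped model's save_pretrained stores just the adapter, which can later be re-attached to a freshly patched base model; the output directory name below is arbitrary.
Copied
from transformers import SamModel
from peft import PeftModel

# Save only the LoRA adapter weights (small compared to the full SAM checkpoint)
model.save_pretrained("sam-lora-q-v-adapter")

# Later: rebuild the patched base model as in Step 2, then re-attach the adapter
base_model = SamModel.from_pretrained("facebook/sam-vit-base")
model = PeftModel.from_pretrained(base_model, "sam-lora-q-v-adapter")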
Diffusers.txt | Diffusers
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance , simple over easy , and customizability over abstractions . The library has three main components:
State-of-the-art diffusion pipelines for inference with just a few lines of code.
There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve; a short text-to-image example is also shown at the end of this page.
Interchangeable noise schedulers for balancing trade-offs between generation speed and quality.
Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time!
How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and apply different training techniques.
Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library.
Reference Technical descriptions of how 🤗 Diffusers classes and methods work. |
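To give a concrete sense of the "few lines of code" point made for pipelines above, here is a minimal, hedged sketch. The checkpoint name and prompt are only examples, and the .to("cuda") call assumes a GPU is available.
Copied
import torch
from diffusers import DiffusionPipeline

# Example checkpoint; any text-to-image checkpoint on the Hub works the same way
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("An astronaut riding a green horse").images[0]
image.save("astronaut.png")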
Embed_your_Space_in_another_website.txt | Embed your Space in another website
Once your Space is up and running you might wish to embed it in a website or in your blog. Embedding or sharing your Space is a great way to allow your audience to interact with your work and demonstrations without requiring any setup on their side. To embed a Space, its visibility needs to be public.
Direct URL
A Space is assigned a unique URL you can use to share your Space or embed it in a website. This URL is of the form: "https://<space-subdomain>.hf.space" . For instance, the Space NimaBoscarino/hotdog-gradio has the corresponding URL of "https://nimaboscarino-hotdog-gradio.hf.space" . The subdomain is unique and only changes if you move or rename your Space. Your Space is always served from the root of this subdomain. You can find the Space URL along with example snippets of how to embed it directly from the options menu:
Embedding with IFrames
The default embedding method for a Space is using IFrames. Add the following element to your HTML at the location where you want to embed your Space:
Copied
<iframe src="https://<space-subdomain>.hf.space" frameborder="0" width="850" height="450"></iframe>
For instance, using the NimaBoscarino/hotdog-gradio Space:
Embedding with WebComponents
If the Space you wish to embed is Gradio-based, you can use Web Components to embed your Space. WebComponents are faster than IFrames and automatically adjust to your web page so that you do not need to configure width or height for your element.
First, you need to import the Gradio JS library that corresponds to the Gradio version in the Space by adding the following script to your HTML. Then, add a gradio-app element where you want to embed your Space. Copied < gradio-app src = "https://<space-subdomain>.hf.space" > </ gradio-app > Check out the Gradio documentation for more details. < > Update on GitHub ← Langfuse on Spaces Run Spaces with Docker → Embed your Space in another website Direct URL Embedding with I Frames Embedding with Web Components |
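Purely as an illustration (this is not an official API), the small Python helper below builds the direct URL and both embed snippets from an owner and Space name, assuming the simple lowercased "<owner>-<space>.hf.space" pattern shown above for NimaBoscarino/hotdog-gradio; for a real Space, the options menu remains the authoritative source of the subdomain.

def embed_snippets(owner: str, space: str) -> dict:
    """Build the direct URL, IFrame, and Web Component snippets for a public Space.

    Assumes the simple lowercased "<owner>-<space>.hf.space" subdomain pattern
    described above; check the Space's embed options for the authoritative URL.
    """
    url = f"https://{owner.lower()}-{space.lower()}.hf.space"
    return {
        "direct_url": url,
        "iframe": f'<iframe src="{url}" frameborder="0" width="850" height="450"></iframe>',
        "web_component": f'<gradio-app src="{url}"></gradio-app>',
    }

print(embed_snippets("NimaBoscarino", "hotdog-gradio")["direct_url"])
# https://nimaboscarino-hotdog-gradio.hf.space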
Change_Organization_or_Account.txt | Change Organization or Account Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Inference Endpoints (dedicated) documentation Change Organization or Account Inference Endpoints (dedicated) 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Overview 🤗 Inference Endpoints Security & Compliance Supported Tasks API Reference (Swagger) Autoscaling Pricing Help & Support FAQ Guides Access the solution (UI) Create your first Endpoint Send Requests to Endpoints Update your Endpoint Advanced Setup (Instance Types, Auto Scaling, Versioning) Create a Private Endpoint with AWS PrivateLink Add custom Dependencies Create custom Inference Handler Use a custom Container Image Access and read Logs Access and view Metrics Change Organization or Account Pause and Resume your Endpoint Deploying a llama.cpp Container Others Inference Endpoints Version Serialization & Deserialization for Requests Inference Endpoints Container Types Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Change Organization or Account Inference Endpoints uses your Hugging Face account - which can be either personal or an organization - to authenticate and deploy an Endpoint. You can switch between the two by clicking on the “organization/user” dropdown in the top right corner of the page. The dropdown will only show organizations and accounts that have an active plan. Check out the Access the solution guide to learn more about setting up a plan. Select the organization you want to switch to. If you don’t have access to any organization, you can create a new one by clicking on the “Create Organization” button. < > Update on GitHub ← Access and view Metrics Pause and Resume your Endpoint → Change Organization or Account |
Model_Cards.txt | Model Cards Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation Model Cards Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Annotated Model Card Carbon Emissions Model Card Guidebook Landscape Analysis Card Components Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Model Cards What are Model Cards? Model cards are files that accompany the models and provide handy information. Under the hood, model cards are simple Markdown files with additional metadata. Model cards are essential for discoverability, reproducibility, and sharing! You can find a model card as the README.md file in any model repo. The model card should describe: the model its intended uses & potential limitations, including biases and ethical considerations as detailed in Mitchell, 2018 the training params and experimental info (you can embed or link to an experiment tracking platform for reference) which datasets were used to train your model the model’s evaluation results The model card template is available here . How to fill out each section of the model card is described in the Annotated Model Card . Model Cards on the Hub have two key parts, with overlapping information: Metadata Text descriptions Model card metadata A model repo will render its README.md as a model card. The model card is a Markdown file, with a YAML section at the top that contains metadata about the model. The metadata you add to the model card supports discovery and easier use of your model. For example: Allowing users to filter models at https://huggingface.co/models . Displaying the model’s license. Adding datasets to the metadata will add a message reading Datasets used to train: to your model page and link the relevant datasets, if they’re available on the Hub. 
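To see both parts of a card (metadata and text) programmatically, the short sketch below loads an existing card with the huggingface_hub library; the repo id is only an example, and the printed fields may be None if the card does not set them.

from huggingface_hub import ModelCard

# Load the README.md of a model repo as a structured card object (repo id is an example).
card = ModelCard.load("HuggingFaceH4/zephyr-7b-beta")

print(card.data.license)   # metadata parsed from the YAML header
print(card.data.datasets)  # may be None if the field is not set
print(card.text[:200])     # the free-form Markdown body that follows the header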
Dataset, metric, and language identifiers are those listed on the Datasets , Metrics and Languages pages. Adding metadata to your model card There are a few different ways to add metadata to your model card including: Using the metadata UI Directly editing the YAML section of the README.md file Via the huggingface_hub Python library, see the docs for more details. Many libraries with Hub integration will automatically add metadata to the model card when you upload a model. Using the metadata UI You can add metadata to your model card using the metadata UI. To access the metadata UI, go to the model page and click on the Edit model card button in the top right corner of the model card. This will open an editor showing the model card README.md file, as well as a UI for editing the metadata. This UI will allow you to add key metadata to your model card and many of the fields will autocomplete based on the information you provide. Using the UI is the easiest way to add metadata to your model card, but it doesn’t support all of the metadata fields. If you want to add metadata that isn’t supported by the UI, you can edit the YAML section of the README.md file directly. Editing the YAML section of the README.md file You can also directly edit the YAML section of the README.md file. If the model card doesn’t already have a YAML section, you can add one by adding three --- at the top of the file, then include all of the relevant metadata, and close the section with another group of --- like the example below: Copied --- language: - "List of ISO 639-1 code for your language" - lang1 - lang2 thumbnail: "url to a thumbnail used in social sharing" tags: - tag1 - tag2 license: "any valid license identifier" datasets: - dataset1 - dataset2 metrics: - metric1 - metric2 base_model: "base model Hub identifier" --- You can find the detailed model card metadata specification here . Specifying a library You can specify the supported libraries in the model card metadata section. Find more about our supported libraries here . The library will be specified in the following order of priority: Specifying library_name in the model card (recommended if your model is not a transformers model). This information can be added via the metadata UI or directly in the model card YAML section: Copied library_name: flair Having a tag with the name of a library that is supported Copied tags: - flair If it’s not specified, the Hub will try to automatically detect the library type. However, this approach is discouraged, and repo creators should use the explicit library_name as much as possible. By looking into the presence of files such as *.nemo or *.mlmodel , the Hub can determine if a model is from NeMo or CoreML. In the past, if nothing was detected and there was a config.json file, it was assumed the library was transformers . For model repos created after August 2024, this is not the case anymore – so you need to library_name: transformers explicitly. Specifying a base model If your model is a fine-tune, an adapter, or a quantized version of a base model, you can specify the base model in the model card metadata section. This information can also be used to indicate if your model is a merge of multiple existing models. Hence, the base_model field can either be a single model ID, or a list of one or more base_models (specified by their Hub identifiers). Copied base_model: HuggingFaceH4/zephyr-7b-beta This metadata will be used to display the base model on the model page. 
Users can also use this information to filter models by base model or find models that are derived from a specific base model: For a fine-tuned model: For an adapter (LoRA, PEFT, etc): For a quantized version of another model: For a merge of two or more models: In the merge case, you specify a list of two or more base_models: Copied base_model: - Endevor/InfinityRP-v1-7B - l3utterfly/mistral-7b-v0.1-layla-v4 The Hub will infer the type of relationship from the current model to the base model ( "adapter", "merge", "quantized", "finetune" ) but you can also set it explicitly if needed: base_model_relation: quantized for instance. Specifying a new version If a new version of your model is available in the Hub, you can specify it in a new_version field. For example, on l3utterfly/mistral-7b-v0.1-layla-v3 : Copied new_version: l3utterfly/mistral-7b-v0.1-layla-v4 This metadata will be used to display a link to the latest version of a model on the model page. If the model linked in new_version also has a new_version field, the very latest version will always be displayed. Specifying a dataset You can specify the datasets used to train your model in the model card metadata section. The datasets will be displayed on the model page and users will be able to filter models by dataset. You should use the Hub dataset identifier, which is the same as the dataset’s repo name as the identifier: Copied datasets: - imdb - HuggingFaceH4/no_robots Specifying a task ( pipeline_tag ) You can specify the pipeline_tag in the model card metadata. The pipeline_tag indicates the type of task the model is intended for. This tag will be displayed on the model page and users can filter models on the Hub by task. This tag is also used to determine which widget to use for the model and which APIs to use under the hood. For transformers models, the pipeline tag is automatically inferred from the model’s config.json file but you can override it in the model card metadata if required. Editing this field in the metadata UI will ensure that the pipeline tag is valid. Some other libraries with Hub integration will also automatically add the pipeline tag to the model card metadata. Specifying a license You can specify the license in the model card metadata section. The license will be displayed on the model page and users will be able to filter models by license. Using the metadata UI, you will see a dropdown of the most common licenses. If required, you can also specify a custom license by adding other as the license value and specifying the name and a link to the license in the metadata. Copied # Example from https://huggingface.co/coqui/XTTS-v1 --- license: other license_name: coqui-public-model-license license_link: https://coqui.ai/cpml --- If the license is not available via a URL you can link to a LICENSE stored in the model repo. Evaluation Results You can specify your model’s evaluation results in a structured way in the model card metadata. Results are parsed by the Hub and displayed in a widget on the model page. Here is an example on how it looks like for the bigcode/starcoder model: The metadata spec was based on Papers with code’s model-index specification . This allow us to directly index the results into Papers with code’s leaderboards when appropriate. You can also link the source from where the eval results has been computed. Here is a partial example to describe 01-ai/Yi-34B ’s score on the ARC benchmark. 
The result comes from the Open LLM Leaderboard which is defined as the source : Copied --- model-index: - name: Yi-34B results: - task: type: text-generation dataset: name: ai2_arc type: ai2_arc metrics: - name: AI2 Reasoning Challenge (25-Shot) type: AI2 Reasoning Challenge (25-Shot) value: 64.59 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard --- For more details on how to format this data, check out the Model Card specifications . CO2 Emissions The model card is also a great place to show information about the CO 2 impact of your model. Visit our guide on tracking and reporting CO 2 emissions to learn more. Linking a Paper If the model card includes a link to a paper on arXiv, the Hugging Face Hub will extract the arXiv ID and include it in the model tags with the format arxiv:<PAPER ID> . Clicking on the tag will let you: Visit the Paper page Filter for other models on the Hub that cite the same paper. Read more about Paper pages here . Model Card text Details on how to fill out a human-readable model card without Hub-specific metadata (so that it may be printed out, cut+pasted, etc.) is available in the Annotated Model Card . FAQ How are model tags determined? Each model page lists all the model’s tags in the page header, below the model name. These are primarily computed from the model card metadata, although some are added automatically, as described in Enabling a Widget . Can I add custom tags to my model? Yes, you can add custom tags to your model by adding them to the tags field in the model card metadata. The metadata UI will suggest some popular tags, but you can add any tag you want. For example, you could indicate that your model is focused on finance by adding a finance tag. How can I indicate that my model is not suitable for all audiences You can add a not-for-all-audience tag to your model card metadata. When this tag is present, a message will be displayed on the model page indicating that the model is not for all audiences. Users can click through this message to view the model card. Can I write LaTeX in my model card? Yes! The Hub uses the KaTeX math typesetting library to render math formulas server-side before parsing the Markdown. You have to use the following delimiters: $$ ... $$ for display mode \\(...\\) for inline mode (no space between the slashes and the parenthesis). Then you’ll be able to write: LaTeX \LaTeX L A T E X M S E = ( 1 n ) ∑ i = 1 n ( y i − x i ) 2 \mathrm{MSE} = \left(\frac{1}{n}\right)\sum_{i=1}^{n}(y_{i} - x_{i})^{2} MSE = ( n 1 ) i = 1 ∑ n ( y i − x i ) 2 E = m c 2 E=mc^2 E = m c 2 < > Update on GitHub ← The Model Hub Annotated Model Card → Model Cards What are Model Cards? Model card metadata Adding metadata to your model card Using the metadata UI Editing the YAM L section of the READM E.md file Specifying a library Specifying a base model Specifying a new version Specifying a dataset Specifying a task ( pipeline_tag ) Specifying a license Evaluation Results C O2 Emissions Linking a Paper Model Card text FAQ How are model tags determined? Can I add custom tags to my model? How can I indicate that my model is not suitable for all audiences Can I write La Te X in my model card? |
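Following up on the metadata options described on this page, if you prefer not to edit the YAML header by hand, a minimal sketch using the huggingface_hub library is shown below; all field values and the repo id are illustrative.

from huggingface_hub import ModelCard, ModelCardData

# Structured metadata for the YAML header (all values are examples).
card_data = ModelCardData(
    language="en",
    license="mit",
    library_name="transformers",
    pipeline_tag="text-classification",
    datasets=["imdb"],
    base_model="google-bert/bert-base-uncased",
    tags=["finance"],
)

# Assemble a README.md: YAML header followed by the free-form Markdown body.
card = ModelCard(
    f"---\n{card_data.to_yaml()}\n---\n\n"
    "# My model\n\nDescribe the model, its intended uses and limitations here.\n"
)
card.save("README.md")
# card.push_to_hub("your-username/your-model")  # push to a model repo you own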
Streaming.txt | Streaming Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up text-generation-inference documentation Streaming text-generation-inference 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main EN Getting started Text Generation Inference Quick Tour Supported Models Using TGI with Nvidia GPUs Using TGI with AMD GPUs Using TGI with Intel Gaudi Using TGI with AWS Inferentia Using TGI with Google TPUs Using TGI with Intel GPUs Installation from source Multi-backend support Internal Architecture Usage Statistics Tutorials Consuming TGI Preparing Model for Serving Serving Private & Gated Models Using TGI CLI Non-core Model Serving Safety Using Guidance, JSON, tools Visual Language Models Monitoring TGI with Prometheus and Grafana Train Medusa Backends TensorRT-LLM Reference All TGI CLI options Exported Metrics API Reference Conceptual Guides V3 update, caching and chunking Streaming Quantization Tensor Parallelism PagedAttention Safetensors Flash Attention Speculation (Medusa, ngram) How Guidance Works (via outlines) LoRA (Low-Rank Adaptation) External Resources Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Streaming What is Streaming? Token streaming is the mode in which the server returns the tokens one by one as the model generates them. This enables showing progressive generations to the user rather than waiting for the whole generation. Streaming is an essential aspect of the end-user experience as it reduces latency, one of the most critical aspects of a smooth experience. With token streaming, the server can start returning the tokens one by one before having to generate the whole response. Users can have a sense of the generation’s quality before the end of the generation. This has different positive effects: Users can get results orders of magnitude earlier for extremely long queries. Seeing something in progress allows users to stop the generation if it’s not going in the direction they expect. Perceived latency is lower when results are shown in the early stages. When used in conversational UIs, the experience feels more natural. For example, a system can generate 100 tokens per second. If the system generates 1000 tokens, with the non-streaming setup, users need to wait 10 seconds to get results. On the other hand, with the streaming setup, users get initial results immediately, and although end-to-end latency will be the same, they can see half of the generation after five seconds. Below you can see an interactive demo that shows non-streaming vs streaming side-by-side. Click generate below. How to use Streaming? Streaming with Python To stream tokens with InferenceClient , simply pass stream=True and iterate over the response. 
Copied from huggingface_hub import InferenceClient client = InferenceClient(base_url= "http://127.0.0.1:8080" ) output = client.chat.completions.create( messages=[ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "Count to 10" }, ], stream= True , max_tokens= 1024 , ) for chunk in output: print (chunk.choices[ 0 ].delta.content) # 1 # 2 # 3 # 4 # 5 # 6 # 7 # 8 # 9 # 10 The huggingface_hub library also comes with an AsyncInferenceClient in case you need to handle the requests concurrently. Copied from huggingface_hub import AsyncInferenceClient client = AsyncInferenceClient(base_url= "http://127.0.0.1:8080" ) async def main (): stream = await client.chat.completions.create( messages=[{ "role" : "user" , "content" : "Say this is a test" }], stream= True , ) async for chunk in stream: print (chunk.choices[ 0 ].delta.content or "" , end= "" ) asyncio.run(main()) # This # is # a # test #. Streaming with cURL To use the OpenAI Chat Completions compatible Messages API v1/chat/completions endpoint with curl, you can add the -N flag, which disables curl default buffering and shows data as it arrives from the server Copied curl localhost: 8080 /v1/chat/completions \ -X POST \ -d '{ "model" : "tgi" , "messages" : [ { "role" : "system" , "content" : "You are a helpful assistant." }, { "role" : "user" , "content" : "What is deep learning?" } ], "stream" : true , "max_tokens" : 20 }' \ -H 'Content - Type : application/json' Streaming with JavaScript First, we need to install the @huggingface/inference library. npm install @huggingface/inference If you’re using the free Inference API, you can use HfInference . If you’re using inference endpoints, you can use HfInferenceEndpoint . We can create a HfInferenceEndpoint providing our endpoint URL and credential. Copied import { HfInferenceEndpoint } from '@huggingface/inference' const hf = new HfInferenceEndpoint ( 'https://YOUR_ENDPOINT.endpoints.huggingface.cloud' , 'hf_YOUR_TOKEN' ) // prompt const prompt = 'What can you do in Nuremberg, Germany? Give me 3 Tips' const stream = hf. textGenerationStream ({ inputs : prompt }) for await ( const r of stream) { // yield the generated token process. stdout . write (r. token . text ) } How does Streaming work under the hood? Under the hood, TGI uses Server-Sent Events (SSE). In an SSE Setup, a client sends a request with the data, opening an HTTP connection and subscribing to updates. Afterward, the server sends data to the client. There is no need for further requests; the server will keep sending the data. SSEs are unidirectional, meaning the client does not send other requests to the server. SSE sends data over HTTP, making it easy to use. SSEs are different than: Polling: where the client keeps calling the server to get data. This means that the server might return empty responses and cause overhead. Webhooks: where there is a bi-directional connection. The server can send information to the client, but the client can also send data to the server after the first request. Webhooks are more complex to operate as they don’t only use HTTP. If there are too many requests at the same time, TGI returns an HTTP Error with an overloaded error type ( huggingface_hub returns OverloadedError ). This allows the client to manage the overloaded server (e.g., it could display a busy error to the user or retry with a new request). To configure the maximum number of concurrent requests, you can specify --max_concurrent_requests , allowing clients to handle backpressure. 
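As a sketch of how a client might handle the overloaded error described above (assuming a recent huggingface_hub version that exposes OverloadedError in huggingface_hub.errors; older releases keep it elsewhere), the following retries a streaming chat completion with exponential backoff:

import time

from huggingface_hub import InferenceClient
from huggingface_hub.errors import OverloadedError  # import location assumed, see note above

client = InferenceClient(base_url="http://127.0.0.1:8080")

def chat_with_retry(messages, max_retries=5, backoff=1.0):
    """Stream a chat completion, backing off when the server reports it is overloaded."""
    for attempt in range(max_retries):
        try:
            stream = client.chat.completions.create(messages=messages, stream=True, max_tokens=64)
            # Consume the stream inside the try block so an overload raised mid-stream is retried too.
            return "".join(chunk.choices[0].delta.content or "" for chunk in stream)
        except OverloadedError:
            # The server hit its --max_concurrent_requests limit; wait and try again.
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("Server still overloaded after retries")

print(chat_with_retry([{"role": "user", "content": "What is deep learning?"}]))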
Export_to_TFLite.txt | Export to TFLite Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Transformers documentation Export to TFLite Transformers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v4.48.0 v4.47.1 v4.46.3 v4.45.2 v4.44.2 v4.43.4 v4.42.4 v4.41.2 v4.40.2 v4.39.3 v4.38.2 v4.37.2 v4.36.1 v4.35.2 v4.34.1 v4.33.3 v4.32.1 v4.31.0 v4.30.0 v4.29.1 v4.28.1 v4.27.2 v4.26.1 v4.25.1 v4.24.0 v4.23.1 v4.22.2 v4.21.3 v4.20.1 v4.19.4 v4.18.0 v4.17.0 v4.16.2 v4.15.0 v4.14.1 v4.13.0 v4.12.5 v4.11.3 v4.10.1 v4.9.2 v4.8.2 v4.7.0 v4.6.0 v4.5.1 v4.4.2 v4.3.3 v4.2.2 v4.1.1 v4.0.1 v3.5.1 v3.4.0 v3.3.1 v3.2.0 v3.1.0 v3.0.2 v2.11.0 v2.10.0 v2.9.1 v2.8.0 v2.7.0 v2.6.0 v2.5.1 v2.4.1 v2.3.0 v2.2.2 v2.1.1 v2.0.0 v1.2.0 v1.1.0 v1.0.0 doc-builder-html AR DE EN ES FR HI IT JA KO PT TE TR ZH Get started 🤗 Transformers Quick tour Installation Adding a new model to `transformers` Tutorials Run inference with pipelines Write portable code with AutoClass Preprocess data Fine-tune a pretrained model Train with a script Set up distributed training with 🤗 Accelerate Load and train adapters with 🤗 PEFT Share your model Agents 101 Agents, supercharged - Multi-agents, External tools, and more Generation with LLMs Chatting with Transformers Task Guides Natural Language Processing Audio Computer Vision Multimodal Generation Prompting Developer guides Use fast tokenizers from 🤗 Tokenizers Run inference with multilingual models Use model-specific APIs Share a custom model Chat templates Trainer Run training on Amazon SageMaker Export to ONNX Export to TFLite Export to TorchScript Benchmarks Notebooks with examples Community resources Troubleshoot Interoperability with GGUF files Interoperability with TikToken files Modularity in `transformers` Model Hacking (overwriting a class to your usage) Quantization Methods Getting started bitsandbytes GPTQ AWQ AQLM VPTQ Quanto EETQ HIGGS HQQ FBGEMM_FP8 Optimum TorchAO BitNet compressed-tensors Contribute new quantization method Performance and scalability Overview LLM inference optimization Efficient training techniques Methods and tools for efficient training on a single GPU Multiple GPUs and parallelism Fully Sharded Data Parallel DeepSpeed Efficient training on CPU Distributed CPU training Training on TPU with TensorFlow PyTorch training on Apple silicon Custom hardware for training Hyperparameter Search using Trainer API Optimizing inference CPU inference GPU inference Multi-GPU inference Instantiate a big model Debugging XLA Integration for TensorFlow Models Optimize inference using `torch.compile()` Contribute How to contribute to 🤗 Transformers? How to add a model to 🤗 Transformers? How to add a pipeline to 🤗 Transformers? 
Testing Checks on a Pull Request Conceptual guides Philosophy Glossary What 🤗 Transformers can do How 🤗 Transformers solve tasks The Transformer model family Summary of the tokenizers Attention mechanisms Padding and truncation BERTology Perplexity of fixed-length models Pipelines for webserver inference Model training anatomy Getting the most out of LLMs API Main Classes Agents and Tools Auto Classes Backbones Callbacks Configuration Data Collator Keras callbacks Logging Models Text Generation ONNX Optimization Model outputs Pipelines Processors Quantization Tokenizer Trainer DeepSpeed ExecuTorch Feature Extractor Image Processor Models Text models Vision models Audio models Video models Multimodal models Reinforcement learning models Time series models Graph models Internal Helpers Custom Layers and Utilities Utilities for pipelines Utilities for Tokenizers Utilities for Trainer Utilities for Generation Utilities for Image Processors Utilities for Audio processing General Utilities Utilities for Time Series Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Export to TFLite TensorFlow Lite is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the .tflite file extension. 🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the exporters.tflite module. For the list of supported model architectures, please refer to 🤗 Optimum documentation . To export a model to TFLite, install the required dependencies: Copied pip install optimum[exporters-tf] To check out all available arguments, refer to the 🤗 Optimum docs , or view help in command line: Copied optimum-cli export tflite -- help To export a model’s checkpoint from the 🤗 Hub, for example, google-bert/bert-base-uncased , run the following command: Copied optimum-cli export tflite --model google-bert/bert-base-uncased --sequence_length 128 bert_tflite/ You should see the logs indicating progress and showing where the resulting model.tflite is saved, like this: Copied Validating TFLite model... -[✓] TFLite model output names match reference model (logits) - Validating TFLite Model output "logits" : -[✓] (1, 128, 30522) matches (1, 128, 30522) -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05) The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05: - logits: max diff = 5.817413330078125e-05. The exported model was saved at: bert_tflite The example above illustrates exporting a checkpoint from 🤗 Hub. When exporting a local model, first make sure that you saved both the model’s weights and tokenizer files in the same directory ( local_path ). When using CLI, pass the local_path to the model argument instead of the checkpoint name on 🤗 Hub. < > Update on GitHub ← Export to ONNX Export to TorchScript → Export to TF Lite |
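Once exported, the model.tflite file can be loaded with TensorFlow's built-in interpreter. The sketch below (assuming TensorFlow is installed and the export command above was run) feeds dummy inputs just to check the graph; real inputs would come from the model's tokenizer.

import numpy as np
import tensorflow as tf

# Path follows the export command above; adjust it to your output directory.
interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fill every input with zeros of the right shape and dtype (dummy data only).
for detail in input_details:
    interpreter.set_tensor(detail["index"], np.zeros(detail["shape"], dtype=detail["dtype"]))

interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # e.g. (1, 128, 30522) for the bert-base-uncased export above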
marimo_on_Spaces.txt | marimo on Spaces Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Hub documentation marimo on Spaces Hub 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation EN 🤗 Hugging Face Hub Repositories Getting Started with Repositories Repository Settings Pull Requests & Discussions Notifications Collections Webhooks Notebooks Storage Limits Next Steps Licenses Models The Model Hub Model Cards Gated Models Uploading Models Downloading Models Integrated Libraries Model Widgets Inference API docs Models Download Stats Frequently Asked Questions Advanced Topics Datasets Datasets Overview Dataset Cards Gated Datasets Uploading Datasets Downloading Datasets Integrated Libraries Dataset Viewer Datasets Download Stats Data files Configuration Spaces Spaces Overview Spaces GPU Upgrades Spaces ZeroGPU Spaces Dev Mode Spaces Persistent Storage Gradio Spaces Streamlit Spaces Static HTML Spaces Docker Spaces Your first Docker Spaces Example Docker Spaces JupyterLab on Spaces Argilla on Spaces Livebook on Spaces Label Studio on Spaces Aim on Spaces Shiny on Spaces ZenML on Spaces ChatUI on Spaces Panel on Spaces Tabby on Spaces Giskard on Spaces Evidence on Spaces marimo on Spaces Langfuse on Spaces Embed your Space Run Spaces with Docker Spaces Configuration Reference Sign-In with HF button Spaces Changelog Advanced Topics Other Organizations Enterprise Hub Billing Security Moderation Paper Pages Search Digital Object Identifier (DOI) Hub API Endpoints Sign-In with HF Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started marimo on Spaces marimo is a reactive notebook for Python that models notebooks as dataflow graphs. When you run a cell or interact with a UI element, marimo automatically runs affected cells (or marks them as stale), keeping code and outputs consistent and preventing bugs before they happen. Every marimo notebook is stored as pure Python, executable as a script, and deployable as an app. Key features: ⚡️ reactive: run a cell, and marimo reactively runs all dependent cells or marks them as stale 🖐️ interactive: bind sliders, tables, plots, and more to Python — no callbacks required 🔬 reproducible: no hidden state, deterministic execution, built-in package management 🏃 executable: execute as a Python script, parametrized by CLI args 🛜 shareable: deploy as an interactive web app or slides, run in the browser via WASM 🛢️ designed for data: query dataframes and databases with SQL, filter and search dataframes Deploying marimo apps on Spaces To get started with marimo on Spaces, click the button below: This will start building your Space using marimo’s Docker template. If successful, you should see a similar application to the marimo introduction notebook . Customizing your marimo app When you create a marimo Space, you’ll get a few key files to help you get started: 1. 
app.py This is your main marimo notebook file that defines your app’s logic. marimo notebooks are pure Python files that use the @app.cell decorator to define cells. To learn more about building notebooks and apps, see the marimo documentation . As your app grows, you can organize your code into modules and import them into your main notebook. 2. Dockerfile The Dockerfile for a marimo app is minimal since marimo has few system dependencies. The key requirements are: It installs the dependencies listed in requirements.txt (using uv ) It creates a non-root user for security It runs the app using marimo run app.py You may need to modify this file if your application requires additional system dependencies, permissions, or other CLI flags. 3. requirements.txt The Space will automatically install dependencies listed in the requirements.txt file. At minimum, you must include marimo in this file. You will want to add any other required packages your app needs. The marimo Space template provides a basic setup that you can extend based on your needs. When deployed, your notebook will run in “app mode” which hides the code cells and only shows the interactive outputs - perfect for sharing with end users. You can opt to include the code cells in your app by adding --include-code to the marimo run command in the Dockerfile. Additional Resources and Support marimo documentation marimo GitHub repository marimo Discord marimo template Space Troubleshooting If you encounter issues: Make sure your notebook runs locally in app mode using marimo run app.py Check that all required packages are listed in requirements.txt Verify the port configuration matches (7860 is the default for Spaces) Check Space logs for any Python errors For more help, visit the marimo Discord or open an issue . < > Update on GitHub ← Evidence on Spaces Langfuse on Spaces → marimo on Spaces Deploying marimo apps on Spaces Customizing your marimo app 1. app.py 2. Dockerfile 3. requirements.txt Additional Resources and Support Troubleshooting
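For reference, a minimal app.py might look like the sketch below (a hand-written illustration, not the file generated by the template; exact cell naming can vary between marimo versions). It defines a slider and a Markdown cell that reactively re-renders whenever the slider moves.

import marimo

app = marimo.App()

@app.cell
def _():
    import marimo as mo
    return (mo,)

@app.cell
def _(mo):
    # A UI element; interacting with it reruns every cell that reads slider.value.
    slider = mo.ui.slider(1, 10, value=3, label="Repetitions")
    slider
    return (slider,)

@app.cell
def _(mo, slider):
    mo.md("Hello from marimo! " * slider.value)
    return

if __name__ == "__main__":
    app.run()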
PyTorch_2.0.txt | PyTorch 2.0 Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Diffusers documentation PyTorch 2.0 Diffusers 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.32.2 v0.31.0 v0.30.3 v0.29.2 v0.28.2 v0.27.2 v0.26.3 v0.25.1 v0.24.0 v0.23.1 v0.22.3 v0.21.0 v0.20.0 v0.19.3 v0.18.2 v0.17.1 v0.16.0 v0.15.0 v0.14.0 v0.13.0 v0.12.0 v0.11.0 v0.10.2 v0.9.0 v0.8.0 v0.7.0 v0.6.0 v0.5.1 v0.4.1 v0.3.0 v0.2.4 EN JA KO PT ZH Get started 🧨 Diffusers Quicktour Effective and efficient diffusion Installation Tutorials Overview Understanding pipelines, models and schedulers AutoPipeline Train a diffusion model Load LoRAs for inference Accelerate inference of text-to-image diffusion models Working with big models Load pipelines and adapters Load pipelines Load community pipelines and components Load schedulers and models Model files and layouts Load adapters Push files to the Hub Generative tasks Unconditional image generation Text-to-image Image-to-image Inpainting Text or image-to-video Depth-to-image Inference techniques Overview Create a server Distributed inference Merge LoRAs Scheduler features Pipeline callbacks Reproducible pipelines Controlling image quality Prompt techniques Advanced inference Outpainting Specific pipeline examples CogVideoX Stable Diffusion XL SDXL Turbo Kandinsky IP-Adapter PAG ControlNet T2I-Adapter Latent Consistency Model Textual inversion Shap-E DiffEdit Trajectory Consistency Distillation-LoRA Stable Video Diffusion Marigold Computer Vision Training Overview Create a dataset for training Adapt a model to a new task Models Methods Quantization Methods Getting Started bitsandbytes gguf torchao Accelerate inference and reduce memory Speed up inference Reduce memory usage PyTorch 2.0 xFormers Token merging DeepCache TGATE xDiT Optimized model formats JAX/Flax ONNX OpenVINO Core ML Optimized hardware Metal Performance Shaders (MPS) Habana Gaudi AWS Neuron Conceptual Guides Philosophy Controlled generation How to contribute? Diffusers' Ethical Guidelines Evaluating Diffusion Models Community Projects Projects built with Diffusers API Main Classes Loaders Models Pipelines Schedulers Internal classes Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile , a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. 
Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0 : Copied import torch from diffusers import DiffusionPipeline + from diffusers.models.attention_processor import AttnProcessor2_0 pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + pipe.unet.set_attn_processor(AttnProcessor2_0()) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers ; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor . To revert to AttnProcessor , call the set_default_attn_processor() function on the pipeline: Copied import torch from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + pipe.unet.set_default_attn_processor() prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5" , torch_dtype=torch.float16, use_safetensors= True ).to( "cuda" ) pipe.unet = torch. compile (pipe.unet, mode= "reduce-overhead" , fullgraph= True ) images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[ 0 ] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile , refer to the torch_compile tutorial. Learn more about other ways PyTorch 2.0 can help optimize your model in the Accelerate inference of text-to-image diffusion models tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). 
Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline import torch path = "stable-diffusion-v1-5/stable-diffusion-v1-5" run_compile = True # Set True / False pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors= True ) pipe = pipe.to( "cuda" ) pipe.unet.to(memory_format=torch.channels_last) if run_compile: print ( "Run torch compile" ) pipe.unet = torch. compile (pipe.unet, mode= "reduce-overhead" , fullgraph= True ) prompt = "ghibli style, a fantasy landscape with castles" for _ in range ( 3 ): images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline from diffusers.utils import load_image import torch url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" init_image = load_image(url) init_image = init_image.resize(( 512 , 512 )) path = "stable-diffusion-v1-5/stable-diffusion-v1-5" run_compile = True # Set True / False pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors= True ) pipe = pipe.to( "cuda" ) pipe.unet.to(memory_format=torch.channels_last) if run_compile: print ( "Run torch compile" ) pipe.unet = torch. compile (pipe.unet, mode= "reduce-overhead" , fullgraph= True ) prompt = "ghibli style, a fantasy landscape with castles" for _ in range ( 3 ): image = pipe(prompt=prompt, image=init_image).images[ 0 ] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline from diffusers.utils import load_image import torch img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" init_image = load_image(img_url).resize(( 512 , 512 )) mask_image = load_image(mask_url).resize(( 512 , 512 )) path = "runwayml/stable-diffusion-inpainting" run_compile = True # Set True / False pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors= True ) pipe = pipe.to( "cuda" ) pipe.unet.to(memory_format=torch.channels_last) if run_compile: print ( "Run torch compile" ) pipe.unet = torch. compile (pipe.unet, mode= "reduce-overhead" , fullgraph= True ) prompt = "ghibli style, a fantasy landscape with castles" for _ in range ( 3 ): image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[ 0 ] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel from diffusers.utils import load_image import torch url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" init_image = load_image(url) init_image = init_image.resize(( 512 , 512 )) path = "stable-diffusion-v1-5/stable-diffusion-v1-5" run_compile = True # Set True / False controlnet = ControlNetModel.from_pretrained( "lllyasviel/sd-controlnet-canny" , torch_dtype=torch.float16, use_safetensors= True ) pipe = StableDiffusionControlNetPipeline.from_pretrained( path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors= True ) pipe = pipe.to( "cuda" ) pipe.unet.to(memory_format=torch.channels_last) pipe.controlnet.to(memory_format=torch.channels_last) if run_compile: print ( "Run torch compile" ) pipe.unet = torch. 
compile (pipe.unet, mode= "reduce-overhead" , fullgraph= True ) pipe.controlnet = torch. compile (pipe.controlnet, mode= "reduce-overhead" , fullgraph= True ) prompt = "ghibli style, a fantasy landscape with castles" for _ in range ( 3 ): image = pipe(prompt=prompt, image=init_image).images[ 0 ] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline import torch run_compile = True # Set True / False pipe_1 = DiffusionPipeline.from_pretrained( "DeepFloyd/IF-I-M-v1.0" , variant= "fp16" , text_encoder= None , torch_dtype=torch.float16, use_safetensors= True ) pipe_1.to( "cuda" ) pipe_2 = DiffusionPipeline.from_pretrained( "DeepFloyd/IF-II-M-v1.0" , variant= "fp16" , text_encoder= None , torch_dtype=torch.float16, use_safetensors= True ) pipe_2.to( "cuda" ) pipe_3 = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-x4-upscaler" , torch_dtype=torch.float16, use_safetensors= True ) pipe_3.to( "cuda" ) pipe_1.unet.to(memory_format=torch.channels_last) pipe_2.unet.to(memory_format=torch.channels_last) pipe_3.unet.to(memory_format=torch.channels_last) if run_compile: pipe_1.unet = torch. compile (pipe_1.unet, mode= "reduce-overhead" , fullgraph= True ) pipe_2.unet = torch. compile (pipe_2.unet, mode= "reduce-overhead" , fullgraph= True ) pipe_3.unet = torch. compile (pipe_3.unet, mode= "reduce-overhead" , fullgraph= True ) prompt = "the blue hulk" prompt_embeds = torch.randn(( 1 , 2 , 4096 ), dtype=torch.float16) neg_prompt_embeds = torch.randn(( 1 , 2 , 4096 ), dtype=torch.float16) for _ in range ( 3 ): image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type= "pt" ).images image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type= "pt" ).images image_3 = pipe_3(prompt=prompt, image=image_1, noise_level= 100 ).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second . To give you an even better idea of how this speed-up holds for the other pipelines, consider the following graph for an A100 with PyTorch 2.0 and torch.compile : In the following tables, we report our findings in terms of the number of iterations/second . 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. < > Update on GitHub ← Reduce memory usage xFormers → Py Torch 2.0 Scaled dot product attention torch.compile Benchmark Stable Diffusion text-to-image Stable Diffusion image-to-image Stable Diffusion inpainting Control Net Deep Floyd I F text-to-image + upscaling A100 (batch size: 1) A100 (batch size: 4) A100 (batch size: 16) V100 (batch size: 1) V100 (batch size: 4) V100 (batch size: 16) T4 (batch size: 1) T4 (batch size: 4) T4 (batch size: 16) RT X 3090 (batch size: 1) RT X 3090 (batch size: 4) RT X 3090 (batch size: 16) RT X 4090 (batch size: 1) RT X 4090 (batch size: 4) RT X 4090 (batch size: 16) Notes |
Using_the_`evaluator`.txt | Using the `evaluator` Hugging Face Models Datasets Spaces Posts Docs Enterprise Pricing Log In Sign Up Evaluate documentation Using the `evaluator` Evaluate 🏡 View all docs AWS Trainium & Inferentia Accelerate Amazon SageMaker Argilla AutoTrain Bitsandbytes Chat UI Competitions Dataset viewer Datasets Diffusers Distilabel Evaluate Google Cloud Google TPUs Gradio Hub Hub Python Library Hugging Face Generative AI Services (HUGS) Huggingface.js Inference API (serverless) Inference Endpoints (dedicated) Leaderboards Lighteval Optimum PEFT Safetensors Sentence Transformers TRL Tasks Text Embeddings Inference Text Generation Inference Tokenizers Transformers Transformers.js smolagents timm Search documentation main v0.4.0 v0.3.0 v0.2.3 v0.1.2 EN Get started 🤗 Evaluate Tutorials Installation A quick tour How-to guides Choosing the right metric Adding new evaluations Using the evaluator Using the evaluator with custom pipelines Creating an EvaluationSuite Using 🤗 Evaluate with other ML frameworks Transformers Keras and Tensorflow scikit-learn Conceptual guides Types of evaluations Considerations for model evaluation Reference Main classes Loading methods Saving methods Hub methods Evaluator classes Visualization methods Logging methods Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes Sign Up to get started Using the evaluator The Evaluator classes allow to evaluate a triplet of model, dataset, and metric. The models wrapped in a pipeline, responsible for handling all preprocessing and post-processing and out-of-the-box, Evaluator s support transformers pipelines for the supported tasks, but custom pipelines can be passed, as showcased in the section Using the evaluator with custom pipelines . Currently supported tasks are: "text-classification" : will use the TextClassificationEvaluator . "token-classification" : will use the TokenClassificationEvaluator . "question-answering" : will use the QuestionAnsweringEvaluator . "image-classification" : will use the ImageClassificationEvaluator . "text-generation" : will use the TextGenerationEvaluator . "text2text-generation" : will use the Text2TextGenerationEvaluator . "summarization" : will use the SummarizationEvaluator . "translation" : will use the TranslationEvaluator . "automatic-speech-recognition" : will use the AutomaticSpeechRecognitionEvaluator . To run an Evaluator with several tasks in a single call, use the EvaluationSuite , which runs evaluations on a collection of SubTask s. Each task has its own set of requirements for the dataset format and pipeline output, make sure to check them out for your custom use case. Let’s have a look at some of them and see how you can use the evaluator to evalute a single or multiple of models, datasets, and metrics at the same time. Text classification The text classification evaluator can be used to evaluate text models on classification datasets such as IMDb. Beside the model, data, and metric inputs it takes the following optional inputs: input_column="text" : with this argument the column with the data for the pipeline can be specified. label_column="label" : with this argument the column with the labels for the evaluation can be specified. label_mapping=None : the label mapping aligns the labels in the pipeline output with the labels need for evaluation. E.g. 
the labels in label_column can be integers ( 0 / 1 ) whereas the pipeline can produce label names such as "positive" / "negative" . With that dictionary the pipeline outputs are mapped to the labels. By default the "accuracy" metric is computed.

Evaluate models on the Hub

There are several ways to pass a model to the evaluator: you can pass the name of a model on the Hub, you can load a transformers model and pass it to the evaluator, or you can pass an initialized transformers.Pipeline . Alternatively, you can pass any callable function that behaves like a pipeline call for the task, in any framework. So any of the following works:

from datasets import load_dataset
from evaluate import evaluator
from transformers import AutoModelForSequenceClassification, pipeline

data = load_dataset("imdb", split="test").shuffle(seed=42).select(range(1000))
task_evaluator = evaluator("text-classification")

# 1. Pass a model name or path
eval_results = task_evaluator.compute(
    model_or_pipeline="lvwerra/distilbert-imdb",
    data=data,
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1}
)

# 2. Pass an instantiated model
model = AutoModelForSequenceClassification.from_pretrained("lvwerra/distilbert-imdb")
eval_results = task_evaluator.compute(
    model_or_pipeline=model,
    data=data,
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1}
)

# 3. Pass an instantiated pipeline
pipe = pipeline("text-classification", model="lvwerra/distilbert-imdb")
eval_results = task_evaluator.compute(
    model_or_pipeline=pipe,
    data=data,
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1}
)
print(eval_results)

Without specifying a device, the default for model inference will be the first GPU on the machine if one is available, and the CPU otherwise. If you want to use a specific device you can pass device to compute , where -1 will use the CPU and a non-negative integer (starting at 0) will use the associated CUDA device.

The results will look as follows:

{
    'accuracy': 0.918,
    'latency_in_seconds': 0.013,
    'samples_per_second': 78.887,
    'total_time_in_seconds': 12.676
}

Note that evaluation results include both the requested metric and information about the time it took to obtain predictions through the pipeline. The time measurements can give a useful indication of model speed for inference, but should be taken with a grain of salt: they include all the processing that goes on in the pipeline, such as tokenization and post-processing, which may differ depending on the model. Furthermore, they depend a lot on the hardware you are running the evaluation on, and you may be able to improve the performance by optimizing things like the batch size.

Evaluate multiple metrics

With the combine() function one can bundle several metrics into an object that behaves like a single metric. We can use this to evaluate several metrics at once with the evaluator:

import evaluate

eval_results = task_evaluator.compute(
    model_or_pipeline="lvwerra/distilbert-imdb",
    data=data,
    metric=evaluate.combine(["accuracy", "recall", "precision", "f1"]),
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1}
)
print(eval_results)

The results will look as follows:

{
    'accuracy': 0.918,
    'f1': 0.916,
    'precision': 0.9147,
    'recall': 0.9187,
    'latency_in_seconds': 0.013,
    'samples_per_second': 78.887,
    'total_time_in_seconds': 12.676
}

Next let’s have a look at token classification.

Token Classification

With the token classification evaluator one can evaluate models for tasks such as NER or POS tagging.
It has the following specific arguments:

input_column="text" : with this argument the column with the data for the pipeline can be specified.
label_column="label" : with this argument the column with the labels for the evaluation can be specified.
label_mapping=None : the label mapping aligns the labels in the pipeline output with the labels needed for evaluation. E.g. the labels in label_column can be integers ( 0 / 1 ) whereas the pipeline can produce label names such as "positive" / "negative" . With that dictionary the pipeline outputs are mapped to the labels.
join_by=" " : While most datasets are already tokenized, the pipeline expects a string. Thus the tokens need to be joined before being passed to the pipeline. By default they are joined with a whitespace.

Let’s have a look at how we can use the evaluator to benchmark several models.

Benchmarking several models

Here is an example where several models can be compared thanks to the evaluator in only a few lines of code, abstracting away the preprocessing, inference, postprocessing, and metric computation:

import pandas as pd
from datasets import load_dataset
from evaluate import evaluator
from transformers import pipeline

models = [
    "xlm-roberta-large-finetuned-conll03-english",
    "dbmdz/bert-large-cased-finetuned-conll03-english",
    "elastic/distilbert-base-uncased-finetuned-conll03-english",
    "dbmdz/electra-large-discriminator-finetuned-conll03-english",
    "gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner",
    "philschmid/distilroberta-base-ner-conll2003",
    "Jorgeutd/albert-base-v2-finetuned-ner",
]

data = load_dataset("conll2003", split="validation").shuffle().select(range(1000))
task_evaluator = evaluator("token-classification")

results = []
for model in models:
    results.append(
        task_evaluator.compute(
            model_or_pipeline=model, data=data, metric="seqeval"
        )
    )

df = pd.DataFrame(results, index=models)
df[["overall_f1", "overall_accuracy", "total_time_in_seconds", "samples_per_second", "latency_in_seconds"]]

The result is a table that looks like this:

| model | overall_f1 | overall_accuracy | total_time_in_seconds | samples_per_second | latency_in_seconds |
| Jorgeutd/albert-base-v2-finetuned-ner | 0.941 | 0.989 | 4.515 | 221.468 | 0.005 |
| dbmdz/bert-large-cased-finetuned-conll03-english | 0.962 | 0.881 | 11.648 | 85.850 | 0.012 |
| dbmdz/electra-large-discriminator-finetuned-conll03-english | 0.965 | 0.881 | 11.456 | 87.292 | 0.011 |
| elastic/distilbert-base-uncased-finetuned-conll03-english | 0.940 | 0.989 | 2.318 | 431.378 | 0.002 |
| gunghio/distilbert-base-multilingual-cased-finetuned-conll2003-ner | 0.947 | 0.991 | 2.376 | 420.873 | 0.002 |
| philschmid/distilroberta-base-ner-conll2003 | 0.961 | 0.994 | 2.436 | 410.579 | 0.002 |
| xlm-roberta-large-finetuned-conll03-english | 0.969 | 0.882 | 11.996 | 83.359 | 0.012 |

Visualizing results

You can feed the results list above into the radar_plot() function to visualize different aspects of their performance and choose the model that is the best fit, depending on the metric(s) that are relevant to your use case:

import evaluate
from evaluate.visualization import radar_plot

plot = radar_plot(data=results, model_names=models, invert_range=["latency_in_seconds"])
plot.show()

Don’t forget to specify invert_range for metrics for which smaller is better (as is the case for latency in seconds).

If you want to save the plot locally, you can use the plot.savefig() function with the option bbox_inches='tight' , to make sure no part of the image gets cut off.
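For completeness, here is that saving step spelled out; the file name is an arbitrary choice, and results and models come from the benchmarking snippet above.

from evaluate.visualization import radar_plot

# `results` and `models` are the outputs of the benchmarking loop above.
plot = radar_plot(data=results, model_names=models, invert_range=["latency_in_seconds"])
plot.savefig("model_comparison.png", bbox_inches="tight")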
Question Answering

With the question-answering evaluator one can evaluate models for QA without needing to worry about the complicated pre- and post-processing that’s required for these models. It has the following specific arguments:

question_column="question" : the name of the column containing the question in the dataset
context_column="context" : the name of the column containing the context
id_column="id" : the name of the column containing the identification field of the question and answer pair
label_column="answers" : the name of the column containing the answers
squad_v2_format=None : whether the dataset follows the format of the squad_v2 dataset, where a question may have no answer in the context. If this parameter is not provided, the format will be automatically inferred.

Let’s have a look at how we can evaluate QA models and compute confidence intervals at the same time.

Confidence intervals

Every evaluator comes with the option to compute confidence intervals using bootstrapping . Simply pass strategy="bootstrap" and set the number of resamples with n_resamples .

from datasets import load_dataset
from evaluate import evaluator

task_evaluator = evaluator("question-answering")

data = load_dataset("squad", split="validation[:1000]")
eval_results = task_evaluator.compute(
    model_or_pipeline="distilbert-base-uncased-distilled-squad",
    data=data,
    metric="squad",
    strategy="bootstrap",
    n_resamples=30
)

Results include confidence intervals as well as error estimates, as follows:

{
    'exact_match': {'confidence_interval': (79.67, 84.54), 'score': 82.30, 'standard_error': 1.28},
    'f1': {'confidence_interval': (85.30, 88.88), 'score': 87.23, 'standard_error': 0.97},
    'latency_in_seconds': 0.0085,
    'samples_per_second': 117.31,
    'total_time_in_seconds': 8.52
}

Image classification

With the image classification evaluator we can evaluate any image classifier. It uses the same keyword arguments as the text classifier:

input_column="image" : the name of the column containing the images as PIL ImageFile
label_column="label" : the name of the column containing the labels
label_mapping=None : We want to map class labels defined by the model in the pipeline to values consistent with those defined in the label_column

Let’s have a look at how we can evaluate image classification models on large datasets.

Handling large datasets

The evaluator can be used on large datasets! Below, an example shows how to use it on ImageNet-1k for image classification. Beware that this example requires downloading roughly 150 GB of data.

from datasets import load_dataset
from evaluate import evaluator
from transformers import pipeline

data = load_dataset("imagenet-1k", split="validation", use_auth_token=True)

pipe = pipeline(
    task="image-classification",
    model="facebook/deit-small-distilled-patch16-224"
)

task_evaluator = evaluator("image-classification")
eval_results = task_evaluator.compute(
    model_or_pipeline=pipe,
    data=data,
    metric="accuracy",
    label_mapping=pipe.model.config.label2id
)

Since we are using datasets to store the data, we make use of a technique called memory mapping. This means that the dataset is never fully loaded into memory, which saves a lot of RAM. Running the above code only uses roughly 1.5 GB of RAM while the validation split is more than 30 GB in size.
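The memory-mapping claim above can be checked directly. The sketch below is an illustration, assuming psutil is installed and that Dataset.dataset_size reports the size of the underlying Arrow data; exact numbers will vary by machine, and loading the split still triggers the full download described above.

import os

import psutil
from datasets import load_dataset

process = psutil.Process(os.getpid())
rss_before = process.memory_info().rss

# Same split as in the snippet above; the data is memory-mapped from disk.
data = load_dataset("imagenet-1k", split="validation", use_auth_token=True)

rss_after = process.memory_info().rss
print(f"Arrow data on disk: {data.dataset_size / 1e9:.1f} GB")
print(f"Additional RAM used by this process: {(rss_after - rss_before) / 1e9:.1f} GB")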
Models.txt | models

Definitions of all models available in Transformers.js.

Example: Load and run an AutoModel .

import { AutoModel, AutoTokenizer } from '@huggingface/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
let model = await AutoModel.from_pretrained('Xenova/bert-base-uncased');

let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
// Tensor {
//     data: Float32Array(183132) [-7.117443084716797, -7.107812881469727, -7.092104911804199, ...]
//     dims: (3) [1, 6, 30522],
//     type: "float32",
//     size: 183132,
// }

We also provide other AutoModel classes (listed below), which you can use in the same way as the Python library. For example:

Example: Load and run an AutoModelForSeq2SeqLM .

import { AutoModelForSeq2SeqLM, AutoTokenizer } from '@huggingface/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/t5-small');
let model = await AutoModelForSeq2SeqLM.from_pretrained('Xenova/t5-small');

let { input_ids } = await tokenizer('translate English to German: I love transformers!');
let outputs = await model.generate(input_ids);
let decoded = tokenizer.decode(outputs[0], { skip_special_tokens: true });
// 'Ich liebe Transformatoren!'
models static .PreTrainedModel new PreTrainedModel(config, sessions, configs) instance .custom_config : * .generation_config ⇒ GenerationConfig | null .dispose() ⇒ Promise.<Array<unknown>> ._call(model_inputs) ⇒ Promise.<Object> .forward(model_inputs) ⇒ Promise.<Object> ._get_logits_warper(generation_config) ⇒ LogitsProcessorList ._prepare_generation_config(generation_config, kwargs) ⇒ GenerationConfig ._get_stopping_criteria(generation_config, [stopping_criteria]) ._validate_model_class() ._update_model_kwargs_for_generation(inputs) ⇒ Object ._prepare_model_inputs(params) ⇒ Object ._prepare_decoder_input_ids_for_generation(param0) .generate(options) ⇒ Promise.<(ModelOutput|Tensor)> .getPastKeyValues(decoderResults, pastKeyValues) ⇒ Object .getAttentions(model_output) ⇒ * .addPastKeyValues(decoderFeeds, pastKeyValues) static .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedModel> .BaseModelOutput new BaseModelOutput(output) .BertForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .BertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .BertForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .BertForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .RoFormerModel .RoFormerForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .RoFormerForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .RoFormerForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .RoFormerForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .ConvBertModel .ConvBertForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .ConvBertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .ConvBertForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .ConvBertForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .ElectraModel .ElectraForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .ElectraForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .ElectraForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .ElectraForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .CamembertModel .CamembertForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .CamembertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .CamembertForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .CamembertForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .DebertaModel .DebertaForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .DebertaForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .DebertaForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .DebertaForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .DebertaV2Model .DebertaV2ForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .DebertaV2ForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .DebertaV2ForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .DebertaV2ForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .DistilBertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> 
.DistilBertForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .DistilBertForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .DistilBertForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .EsmModel .EsmForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .EsmForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .EsmForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .MobileBertForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .MobileBertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .MobileBertForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .MPNetModel .MPNetForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .MPNetForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .MPNetForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .MPNetForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .T5ForConditionalGeneration .LongT5PreTrainedModel .LongT5Model .LongT5ForConditionalGeneration .MT5ForConditionalGeneration .BartModel .BartForConditionalGeneration .BartForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .MBartModel .MBartForConditionalGeneration .MBartForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .BlenderbotModel .BlenderbotForConditionalGeneration .BlenderbotSmallModel .BlenderbotSmallForConditionalGeneration .RobertaForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .RobertaForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .RobertaForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .RobertaForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .XLMPreTrainedModel .XLMModel .XLMWithLMHeadModel ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .XLMForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .XLMForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .XLMForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .XLMRobertaForMaskedLM ._call(model_inputs) ⇒ Promise.<MaskedLMOutput> .XLMRobertaForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .XLMRobertaForTokenClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .XLMRobertaForQuestionAnswering ._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput> .ASTModel .ASTForAudioClassification .WhisperModel .WhisperForConditionalGeneration ._retrieve_init_tokens(generation_config) .generate(options) ⇒ Promise.<(ModelOutput|Tensor)> ._extract_token_timestamps(generate_outputs, alignment_heads, [num_frames], [time_precision]) ⇒ Tensor .VisionEncoderDecoderModel .LlavaForConditionalGeneration .CLIPModel .CLIPTextModel .from_pretrained() : PreTrainedModel.from_pretrained .CLIPTextModelWithProjection .from_pretrained() : PreTrainedModel.from_pretrained .CLIPVisionModel .from_pretrained() : PreTrainedModel.from_pretrained .CLIPVisionModelWithProjection .from_pretrained() : PreTrainedModel.from_pretrained .SiglipModel .SiglipTextModel .from_pretrained() : PreTrainedModel.from_pretrained .SiglipVisionModel .from_pretrained() : PreTrainedModel.from_pretrained .CLIPSegForImageSegmentation .GPT2LMHeadModel .JAISModel .JAISLMHeadModel .CodeGenModel 
.CodeGenForCausalLM .LlamaPreTrainedModel .LlamaModel .CoherePreTrainedModel .GemmaPreTrainedModel .GemmaModel .Gemma2PreTrainedModel .Gemma2Model .Qwen2PreTrainedModel .Qwen2Model .PhiModel .Phi3Model .BloomPreTrainedModel .BloomModel .BloomForCausalLM .MptModel .MptForCausalLM .OPTModel .OPTForCausalLM .VitMatteForImageMatting ._call(model_inputs) .DetrObjectDetectionOutput new DetrObjectDetectionOutput(output) .DetrSegmentationOutput new DetrSegmentationOutput(output) .RTDetrObjectDetectionOutput new RTDetrObjectDetectionOutput(output) .TableTransformerModel .TableTransformerForObjectDetection ._call(model_inputs) .ResNetPreTrainedModel .ResNetModel .ResNetForImageClassification ._call(model_inputs) .Swin2SRModel .Swin2SRForImageSuperResolution .DPTModel .DPTForDepthEstimation .DepthAnythingForDepthEstimation .GLPNModel .GLPNForDepthEstimation .DonutSwinModel .ConvNextModel .ConvNextForImageClassification ._call(model_inputs) .ConvNextV2Model .ConvNextV2ForImageClassification ._call(model_inputs) .Dinov2Model .Dinov2ForImageClassification ._call(model_inputs) .YolosObjectDetectionOutput new YolosObjectDetectionOutput(output) .SamModel .get_image_embeddings(model_inputs) ⇒ Promise.<{image_embeddings: Tensor, image_positional_embeddings: Tensor}> .forward(model_inputs) ⇒ Promise.<Object> ._call(model_inputs) ⇒ Promise.<SamImageSegmentationOutput> .SamImageSegmentationOutput new SamImageSegmentationOutput(output) .Wav2Vec2Model .Wav2Vec2ForAudioFrameClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .PyAnnoteModel .PyAnnoteForAudioFrameClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .UniSpeechModel .UniSpeechForCTC ._call(model_inputs) .UniSpeechForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .UniSpeechSatModel .UniSpeechSatForCTC ._call(model_inputs) .UniSpeechSatForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .UniSpeechSatForAudioFrameClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .Wav2Vec2BertModel .Wav2Vec2BertForCTC ._call(model_inputs) .Wav2Vec2BertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .HubertModel .HubertForCTC ._call(model_inputs) .HubertForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .WavLMPreTrainedModel .WavLMModel .WavLMForCTC ._call(model_inputs) .WavLMForSequenceClassification ._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput> .WavLMForXVector ._call(model_inputs) ⇒ Promise.<XVectorOutput> .WavLMForAudioFrameClassification ._call(model_inputs) ⇒ Promise.<TokenClassifierOutput> .SpeechT5PreTrainedModel .SpeechT5Model .SpeechT5ForSpeechToText .SpeechT5ForTextToSpeech .generate_speech(input_values, speaker_embeddings, options) ⇒ Promise.<SpeechOutput> .SpeechT5HifiGan .TrOCRForCausalLM .MistralPreTrainedModel .Starcoder2PreTrainedModel .FalconPreTrainedModel .ClapTextModelWithProjection .from_pretrained() : PreTrainedModel.from_pretrained .ClapAudioModelWithProjection .from_pretrained() : PreTrainedModel.from_pretrained .VitsModel ._call(model_inputs) ⇒ Promise.<VitsModelOutput> .SegformerModel .SegformerForImageClassification .SegformerForSemanticSegmentation .StableLmModel .StableLmForCausalLM .EfficientNetModel .EfficientNetForImageClassification ._call(model_inputs) .MusicgenModel .MusicgenForCausalLM .MusicgenForConditionalGeneration ._apply_and_filter_by_delay_pattern_mask(outputs) ⇒ Tensor .generate(options) ⇒ Promise.<(ModelOutput|Tensor)> 
.MobileNetV1Model .MobileNetV1ForImageClassification ._call(model_inputs) .MobileNetV2Model .MobileNetV2ForImageClassification ._call(model_inputs) .MobileNetV3Model .MobileNetV3ForImageClassification ._call(model_inputs) .MobileNetV4Model .MobileNetV4ForImageClassification ._call(model_inputs) .DecisionTransformerModel .PretrainedMixin instance .MODEL_CLASS_MAPPINGS : * .BASE_IF_FAIL static .from_pretrained() : * .AutoModel .MODEL_CLASS_MAPPINGS : * .AutoModelForSequenceClassification .AutoModelForTokenClassification .AutoModelForSeq2SeqLM .AutoModelForSpeechSeq2Seq .AutoModelForTextToSpectrogram .AutoModelForTextToWaveform .AutoModelForCausalLM .AutoModelForMaskedLM .AutoModelForQuestionAnswering .AutoModelForVision2Seq .AutoModelForImageClassification .AutoModelForImageSegmentation .AutoModelForSemanticSegmentation .AutoModelForUniversalSegmentation .AutoModelForObjectDetection .AutoModelForMaskGeneration .Seq2SeqLMOutput new Seq2SeqLMOutput(output) .SequenceClassifierOutput new SequenceClassifierOutput(output) .XVectorOutput new XVectorOutput(output) .TokenClassifierOutput new TokenClassifierOutput(output) .MaskedLMOutput new MaskedLMOutput(output) .QuestionAnsweringModelOutput new QuestionAnsweringModelOutput(output) .CausalLMOutput new CausalLMOutput(output) .CausalLMOutputWithPast new CausalLMOutputWithPast(output) .ImageMattingOutput new ImageMattingOutput(output) .VitsModelOutput new VitsModelOutput(output) inner ~SamModelInputs : Object ~SpeechOutput : Object models.PreTrainedModel A base class for pre-trained models that provides the model configuration and an ONNX session. Kind : static class of models .PreTrainedModel new PreTrainedModel(config, sessions, configs) instance .custom_config : * .generation_config ⇒ GenerationConfig | null .dispose() ⇒ Promise.<Array<unknown>> ._call(model_inputs) ⇒ Promise.<Object> .forward(model_inputs) ⇒ Promise.<Object> ._get_logits_warper(generation_config) ⇒ LogitsProcessorList ._prepare_generation_config(generation_config, kwargs) ⇒ GenerationConfig ._get_stopping_criteria(generation_config, [stopping_criteria]) ._validate_model_class() ._update_model_kwargs_for_generation(inputs) ⇒ Object ._prepare_model_inputs(params) ⇒ Object ._prepare_decoder_input_ids_for_generation(param0) .generate(options) ⇒ Promise.<(ModelOutput|Tensor)> .getPastKeyValues(decoderResults, pastKeyValues) ⇒ Object .getAttentions(model_output) ⇒ * .addPastKeyValues(decoderFeeds, pastKeyValues) static .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedModel> new PreTrainedModel(config, sessions, configs) Creates a new instance of the PreTrainedModel class. Param Type Description config * The model configuration. sessions Record.<string, any> The inference sessions for the model. configs Record.<string, Object> Additional configuration files (e.g., generation_config.json). preTrainedModel.custom_config : <code> * </code> Kind : instance property of PreTrainedModel preTrainedModel.generation_config ⇒ <code> GenerationConfig </code> | <code> null </code> Get the model’s generation config, if it exists. Kind : instance property of PreTrainedModel Returns : GenerationConfig | null - The model’s generation config if it exists, otherwise null . preTrainedModel.dispose() ⇒ <code> Promise. < Array < unknown > > </code> Disposes of all the ONNX sessions that were created during inference. Kind : instance method of PreTrainedModel Returns : Promise.<Array<unknown>> - An array of promises, one for each ONNX session that is being disposed. 
Todo Use https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry preTrainedModel._call(model_inputs) ⇒ <code> Promise. < Object > </code> Runs the model with the provided inputs Kind : instance method of PreTrainedModel Returns : Promise.<Object> - Object containing output tensors Param Type Description model_inputs Object Object containing input tensors preTrainedModel.forward(model_inputs) ⇒ <code> Promise. < Object > </code> Forward method for a pretrained model. If not overridden by a subclass, the correct forward method will be chosen based on the model type. Kind : instance method of PreTrainedModel Returns : Promise.<Object> - The output data from the model in the format specified in the ONNX model. Throws : Error This method must be implemented in subclasses. Param Type Description model_inputs Object The input data to the model in the format specified in the ONNX model. preTrainedModel._get_logits_warper(generation_config) ⇒ <code> LogitsProcessorList </code> This function returns a [ LogitsProcessorList ] list object that contains all relevant [ LogitsWarper ] instances used for multinomial sampling. Kind : instance method of PreTrainedModel Returns : LogitsProcessorList - generation_config Param Type Description generation_config GenerationConfig The generation config. preTrainedModel._prepare_generation_config(generation_config, kwargs) ⇒ <code> GenerationConfig </code> This function merges multiple generation configs together to form a final generation config to be used by the model for text generation. It first creates an empty GenerationConfig object, then it applies the model’s own generation_config property to it. Finally, if a generation_config object was passed in the arguments, it overwrites the corresponding properties in the final config with those of the passed config object. Kind : instance method of PreTrainedModel Returns : GenerationConfig - The final generation config object to be used by the model for text generation. Param Type Description generation_config GenerationConfig | null A GenerationConfig object containing generation parameters. kwargs Object Additional generation parameters to be used in place of those in the generation_config object. preTrainedModel._get_stopping_criteria(generation_config, [stopping_criteria]) Kind : instance method of PreTrainedModel Param Type Default generation_config GenerationConfig [stopping_criteria] StoppingCriteriaList preTrainedModel._validate_model_class() Confirms that the model class is compatible with generation. If not, raises an exception that points to the right class to use. Kind : instance method of PreTrainedModel preTrainedModel._update_model_kwargs_for_generation(inputs) ⇒ <code> Object </code> Kind : instance method of PreTrainedModel Returns : Object - The updated model inputs for the next generation iteration. Param Type inputs Object inputs.generated_input_ids Array.<Array<bigint>> inputs.outputs Object inputs.model_inputs Object inputs.is_encoder_decoder boolean preTrainedModel._prepare_model_inputs(params) ⇒ <code> Object </code> This function extracts the model-specific inputs for generation. Kind : instance method of PreTrainedModel Returns : Object - The model-specific inputs for generation. 
Param Type Default params Object [params.inputs] Tensor [params.bos_token_id] number [params.model_kwargs] Record.<string, (Tensor|Array<number>)> preTrainedModel._prepare_decoder_input_ids_for_generation(param0) Prepares decoder_input_ids for generation with encoder-decoder models Kind : instance method of PreTrainedModel Param Type param0 * preTrainedModel.generate(options) ⇒ <code> Promise. < (ModelOutput|Tensor) > </code> Generates sequences of token ids for models with a language modeling head. Kind : instance method of PreTrainedModel Returns : Promise.<(ModelOutput|Tensor)> - The output of the model, which can contain the generated token ids, attentions, and scores. Param Type options * preTrainedModel.getPastKeyValues(decoderResults, pastKeyValues) ⇒ <code> Object </code> Returns an object containing past key values from the given decoder results object. Kind : instance method of PreTrainedModel Returns : Object - An object containing past key values. Param Type Description decoderResults Object The decoder results object. pastKeyValues Object The previous past key values. preTrainedModel.getAttentions(model_output) ⇒ <code> * </code> Returns an object containing attentions from the given model output object. Kind : instance method of PreTrainedModel Returns : * - An object containing attentions. Param Type Description model_output Object The output of the model. preTrainedModel.addPastKeyValues(decoderFeeds, pastKeyValues) Adds past key values to the decoder feeds object. If pastKeyValues is null, creates new tensors for past key values. Kind : instance method of PreTrainedModel Param Type Description decoderFeeds Object The decoder feeds object to add past key values to. pastKeyValues Object An object containing past key values. PreTrainedModel.from_pretrained(pretrained_model_name_or_path, options) ⇒ <code> Promise. < PreTrainedModel > </code> Instantiate one of the model classes of the library from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible) Kind : static method of PreTrainedModel Returns : Promise.<PreTrainedModel> - A new instance of the PreTrainedModel class. Param Type Description pretrained_model_name_or_path string The name or path of the pretrained model. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased , or namespaced under a user or organization name, like dbmdz/bert-base-german-cased . A path to a directory containing model weights, e.g., ./my_model_directory/ . options * Additional options for loading the model. models.BaseModelOutput Base class for model’s outputs, with potential hidden states and attentions. Kind : static class of models new BaseModelOutput(output) Param Type Description output Object The output of the model. output.last_hidden_state Tensor Sequence of hidden-states at the output of the last layer of the model. [output.hidden_states] Tensor Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. [output.attentions] Tensor Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. models.BertForMaskedLM BertForMaskedLM is a class representing a BERT model for masked language modeling. Kind : static class of models bertForMaskedLM._call(model_inputs) ⇒ <code> Promise. 
< MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of BertForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.BertForSequenceClassification BertForSequenceClassification is a class representing a BERT model for sequence classification. Kind : static class of models bertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of BertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.BertForTokenClassification BertForTokenClassification is a class representing a BERT model for token classification. Kind : static class of models bertForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of BertForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.BertForQuestionAnswering BertForQuestionAnswering is a class representing a BERT model for question answering. Kind : static class of models bertForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of BertForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.RoFormerModel The bare RoFormer Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.RoFormerForMaskedLM RoFormer Model with a language modeling head on top. Kind : static class of models roFormerForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of RoFormerForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.RoFormerForSequenceClassification RoFormer Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models roFormerForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of RoFormerForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.RoFormerForTokenClassification RoFormer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Kind : static class of models roFormerForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. 
Kind : instance method of RoFormerForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.RoFormerForQuestionAnswering RoFormer Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits ). Kind : static class of models roFormerForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of RoFormerForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.ConvBertModel The bare ConvBERT Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.ConvBertForMaskedLM ConvBERT Model with a language modeling head on top. Kind : static class of models convBertForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of ConvBertForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.ConvBertForSequenceClassification ConvBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models convBertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of ConvBertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.ConvBertForTokenClassification ConvBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Kind : static class of models convBertForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of ConvBertForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.ConvBertForQuestionAnswering ConvBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits ) Kind : static class of models convBertForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of ConvBertForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.ElectraModel The bare Electra Model transformer outputting raw hidden-states without any specific head on top. 
Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Kind : static class of models models.ElectraForMaskedLM Electra model with a language modeling head on top. Kind : static class of models electraForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of ElectraForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.ElectraForSequenceClassification ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models electraForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of ElectraForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.ElectraForTokenClassification Electra model with a token classification head on top. Kind : static class of models electraForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of ElectraForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.ElectraForQuestionAnswering ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits ). Kind : static class of models electraForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of ElectraForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.CamembertModel The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.CamembertForMaskedLM CamemBERT Model with a language modeling head on top. Kind : static class of models camembertForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of CamembertForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.CamembertForSequenceClassification CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. Kind : static class of models camembertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of CamembertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.
Param Type Description model_inputs Object The inputs to the model. models.CamembertForTokenClassification CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Kind : static class of models camembertForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of CamembertForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.CamembertForQuestionAnswering CamemBERT Model with a span classification head on top for extractive question-answering tasks Kind : static class of models camembertForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of CamembertForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.DebertaModel The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.DebertaForMaskedLM DeBERTa Model with a language modeling head on top. Kind : static class of models debertaForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.DebertaForSequenceClassification DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models debertaForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.DebertaForTokenClassification DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Kind : static class of models debertaForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.DebertaForQuestionAnswering DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits ). Kind : static class of models debertaForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. 
Kind : instance method of DebertaForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.DebertaV2Model The bare DeBERTa-V2 Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.DebertaV2ForMaskedLM DeBERTa-V2 Model with a language modeling head on top. Kind : static class of models debertaV2ForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaV2ForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.DebertaV2ForSequenceClassification DeBERTa-V2 Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models debertaV2ForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaV2ForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.DebertaV2ForTokenClassification DeBERTa-V2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Kind : static class of models debertaV2ForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaV2ForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.DebertaV2ForQuestionAnswering DeBERTa-V2 Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits ). Kind : static class of models debertaV2ForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of DebertaV2ForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.DistilBertForSequenceClassification DistilBertForSequenceClassification is a class representing a DistilBERT model for sequence classification. Kind : static class of models distilBertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of DistilBertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.DistilBertForTokenClassification DistilBertForTokenClassification is a class representing a DistilBERT model for token classification. Kind : static class of models distilBertForTokenClassification._call(model_inputs) ⇒ <code> Promise. 
< TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of DistilBertForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.DistilBertForQuestionAnswering DistilBertForQuestionAnswering is a class representing a DistilBERT model for question answering. Kind : static class of models distilBertForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of DistilBertForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.DistilBertForMaskedLM DistilBertForMaskedLM is a class representing a DistilBERT model for masking task. Kind : static class of models distilBertForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of DistilBertForMaskedLM Returns : Promise.<MaskedLMOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.EsmModel The bare ESM Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.EsmForMaskedLM ESM Model with a language modeling head on top. Kind : static class of models esmForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of EsmForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.EsmForSequenceClassification ESM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models esmForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of EsmForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.EsmForTokenClassification ESM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Kind : static class of models esmForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of EsmForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.MobileBertForMaskedLM MobileBertForMaskedLM is a class representing a MobileBERT model for masking task. Kind : static class of models mobileBertForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of MobileBertForMaskedLM Returns : Promise.<MaskedLMOutput> - returned object Param Type Description model_inputs Object The inputs to the model. 
models.MobileBertForSequenceClassification MobileBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models mobileBertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of MobileBertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.MobileBertForQuestionAnswering MobileBert Model with a span classification head on top for extractive question-answering tasks Kind : static class of models mobileBertForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of MobileBertForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.MPNetModel The bare MPNet Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.MPNetForMaskedLM MPNetForMaskedLM is a class representing a MPNet model for masked language modeling. Kind : static class of models mpNetForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of MPNetForMaskedLM Returns : Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling. Param Type Description model_inputs Object The inputs to the model. models.MPNetForSequenceClassification MPNetForSequenceClassification is a class representing a MPNet model for sequence classification. Kind : static class of models mpNetForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of MPNetForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.MPNetForTokenClassification MPNetForTokenClassification is a class representing a MPNet model for token classification. Kind : static class of models mpNetForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of MPNetForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.MPNetForQuestionAnswering MPNetForQuestionAnswering is a class representing a MPNet model for question answering. Kind : static class of models mpNetForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of MPNetForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering. Param Type Description model_inputs Object The inputs to the model. models.T5ForConditionalGeneration T5Model is a class representing a T5 model for conditional generation. Kind : static class of models models.LongT5PreTrainedModel An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
Kind : static class of models models.LongT5Model The bare LONGT5 Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.LongT5ForConditionalGeneration LONGT5 Model with a language modeling head on top. Kind : static class of models models.MT5ForConditionalGeneration A class representing a conditional sequence-to-sequence model based on the MT5 architecture. Kind : static class of models models.BartModel The bare BART Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.BartForConditionalGeneration The BART Model with a language modeling head. Can be used for summarization. Kind : static class of models models.BartForSequenceClassification Bart model with a sequence classification/head on top (a linear layer on top of the pooled output) Kind : static class of models bartForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of BartForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.MBartModel The bare MBART Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.MBartForConditionalGeneration The MBART Model with a language modeling head. Can be used for summarization, after fine-tuning the pretrained models. Kind : static class of models models.MBartForSequenceClassification MBart model with a sequence classification/head on top (a linear layer on top of the pooled output). Kind : static class of models mBartForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of MBartForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.BlenderbotModel The bare Blenderbot Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.BlenderbotForConditionalGeneration The Blenderbot Model with a language modeling head. Can be used for summarization. Kind : static class of models models.BlenderbotSmallModel The bare BlenderbotSmall Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.BlenderbotSmallForConditionalGeneration The BlenderbotSmall Model with a language modeling head. Can be used for summarization. Kind : static class of models models.RobertaForMaskedLM RobertaForMaskedLM class for performing masked language modeling on Roberta models. Kind : static class of models robertaForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of RobertaForMaskedLM Returns : Promise.<MaskedLMOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.RobertaForSequenceClassification RobertaForSequenceClassification class for performing sequence classification on Roberta models. Kind : static class of models robertaForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. 
Kind : instance method of RobertaForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.RobertaForTokenClassification RobertaForTokenClassification class for performing token classification on Roberta models. Kind : static class of models robertaForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of RobertaForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.RobertaForQuestionAnswering RobertaForQuestionAnswering class for performing question answering on Roberta models. Kind : static class of models robertaForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of RobertaForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.XLMPreTrainedModel An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. Kind : static class of models models.XLMModel The bare XLM Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.XLMWithLMHeadModel The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). Kind : static class of models xlmWithLMHeadModel._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of XLMWithLMHeadModel Returns : Promise.<MaskedLMOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.XLMForSequenceClassification XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) Kind : static class of models xlmForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of XLMForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.XLMForTokenClassification XLM Model with a token classification head on top (a linear layer on top of the hidden-states output) Kind : static class of models xlmForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of XLMForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.XLMForQuestionAnswering XLM Model with a span classification head on top for extractive question-answering tasks Kind : static class of models xlmForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of XLMForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - returned object Param Type Description model_inputs Object The inputs to the model. 
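Example (sketch): extractive question answering with one of the span-classification heads documented above (e.g. DistilBertForQuestionAnswering, MobileBertForQuestionAnswering, RobertaForQuestionAnswering, XLMForQuestionAnswering), driven through the Auto classes. The model id 'Xenova/distilbert-base-uncased-distilled-squad' and the question/context strings are illustrative assumptions.

import { AutoTokenizer, AutoModelForQuestionAnswering } from '@huggingface/transformers';

// Load tokenizer and question-answering model (model id is an illustrative assumption)
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/distilbert-base-uncased-distilled-squad');
const model = await AutoModelForQuestionAnswering.from_pretrained('Xenova/distilbert-base-uncased-distilled-squad');

// Encode the question together with its context as a text pair
const question = 'Who was Jim Henson?';
const context = 'Jim Henson was a nice puppet.';
const inputs = tokenizer(question, { text_pair: context });

// Call the model; the result has the QuestionAnsweringModelOutput shape documented above
const { start_logits, end_logits } = await model(inputs);
// start_logits / end_logits: Tensors of dims [ batch_size, sequence_length ];
// the answer span is recovered by taking the argmax over start_logits and end_logits.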
models.XLMRobertaForMaskedLM XLMRobertaForMaskedLM class for performing masked language modeling on XLMRoberta models. Kind : static class of models xlmRobertaForMaskedLM._call(model_inputs) ⇒ <code> Promise. < MaskedLMOutput > </code> Calls the model on new inputs. Kind : instance method of XLMRobertaForMaskedLM Returns : Promise.<MaskedLMOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.XLMRobertaForSequenceClassification XLMRobertaForSequenceClassification class for performing sequence classification on XLMRoberta models. Kind : static class of models xlmRobertaForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of XLMRobertaForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.XLMRobertaForTokenClassification XLMRobertaForTokenClassification class for performing token classification on XLMRoberta models. Kind : static class of models xlmRobertaForTokenClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of XLMRobertaForTokenClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification. Param Type Description model_inputs Object The inputs to the model. models.XLMRobertaForQuestionAnswering XLMRobertaForQuestionAnswering class for performing question answering on XLMRoberta models. Kind : static class of models xlmRobertaForQuestionAnswering._call(model_inputs) ⇒ <code> Promise. < QuestionAnsweringModelOutput > </code> Calls the model on new inputs. Kind : instance method of XLMRobertaForQuestionAnswering Returns : Promise.<QuestionAnsweringModelOutput> - returned object Param Type Description model_inputs Object The inputs to the model. models.ASTModel The bare AST Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.ASTForAudioClassification Audio Spectrogram Transformer model with an audio classification head on top (a linear layer on top of the pooled output) e.g. for datasets like AudioSet, Speech Commands v2. Kind : static class of models models.WhisperModel WhisperModel class for training Whisper models without a language model head. Kind : static class of models models.WhisperForConditionalGeneration WhisperForConditionalGeneration class for generating conditional outputs from Whisper models. Kind : static class of models .WhisperForConditionalGeneration ._retrieve_init_tokens(generation_config) .generate(options) ⇒ Promise.<(ModelOutput|Tensor)> ._extract_token_timestamps(generate_outputs, alignment_heads, [num_frames], [time_precision]) ⇒ Tensor whisperForConditionalGeneration._retrieve_init_tokens(generation_config) Kind : instance method of WhisperForConditionalGeneration Param Type generation_config WhisperGenerationConfig whisperForConditionalGeneration.generate(options) ⇒ <code> Promise. < (ModelOutput|Tensor) > </code> Transcribes or translates log-mel input features to a sequence of auto-regressively generated token ids. Kind : instance method of WhisperForConditionalGeneration Returns : Promise.<(ModelOutput|Tensor)> - The output of the model, which can contain the generated token ids, attentions, and scores. 
Param Type options * whisperForConditionalGeneration._extract_token_timestamps(generate_outputs, alignment_heads, [num_frames], [time_precision]) ⇒ <code> Tensor </code> Calculates token-level timestamps using the encoder-decoder cross-attentions and dynamic time-warping (DTW) to map each output token to a position in the input audio. If num_frames is specified, the encoder-decoder cross-attentions will be cropped before applying DTW. Kind : instance method of WhisperForConditionalGeneration Returns : Tensor - tensor containing the timestamps in seconds for each predicted token Param Type Default Description generate_outputs Object Outputs generated by the model generate_outputs.cross_attentions Array.<Array<Tensor>> The cross attentions output by the model generate_outputs.sequences Tensor The sequences output by the model alignment_heads Array.<Array<number>> Alignment heads of the model [num_frames] number Number of frames in the input audio. [time_precision] number 0.02 Precision of the timestamps in seconds models.VisionEncoderDecoderModel Vision Encoder-Decoder model based on OpenAI’s GPT architecture for image captioning and other vision tasks Kind : static class of models models.LlavaForConditionalGeneration The LLAVA model which consists of a vision backbone and a language model. Kind : static class of models models.CLIPModel CLIP Text and Vision Model with a projection layers on top Example: Perform zero-shot image classification with a CLIPModel . Copied import { AutoTokenizer , AutoProcessor , CLIPModel , RawImage } from '@huggingface/transformers' ; // Load tokenizer, processor, and model let tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/clip-vit-base-patch16' ); let processor = await AutoProcessor . from_pretrained ( 'Xenova/clip-vit-base-patch16' ); let model = await CLIPModel . from_pretrained ( 'Xenova/clip-vit-base-patch16' ); // Run tokenization let texts = [ 'a photo of a car' , 'a photo of a football match' ] let text_inputs = tokenizer (texts, { padding : true , truncation : true }); // Read image and run processor let image = await RawImage . read ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/football-match.jpg' ); let image_inputs = await processor (image); // Run model with both text and pixel inputs let output = await model ({ ...text_inputs, ...image_inputs }); // { // logits_per_image: Tensor { // dims: [ 1, 2 ], // data: Float32Array(2) [ 18.579734802246094, 24.31830596923828 ], // }, // logits_per_text: Tensor { // dims: [ 2, 1 ], // data: Float32Array(2) [ 18.579734802246094, 24.31830596923828 ], // }, // text_embeds: Tensor { // dims: [ 2, 512 ], // data: Float32Array(1024) [ ... ], // }, // image_embeds: Tensor { // dims: [ 1, 512 ], // data: Float32Array(512) [ ... ], // } // } Kind : static class of models models.CLIPTextModel The text model from CLIP without any head or projection on top. Kind : static class of models CLIPTextModel.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of CLIPTextModel models.CLIPTextModelWithProjection CLIP Text Model with a projection layer on top (a linear layer on top of the pooled output) Example: Compute text embeddings with CLIPTextModelWithProjection . Copied import { AutoTokenizer , CLIPTextModelWithProjection } from '@huggingface/transformers' ; // Load tokenizer and text model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/clip-vit-base-patch16' ); const text_model = await CLIPTextModelWithProjection . 
from_pretrained ( 'Xenova/clip-vit-base-patch16' ); // Run tokenization let texts = [ 'a photo of a car' , 'a photo of a football match' ]; let text_inputs = tokenizer (texts, { padding : true , truncation : true }); // Compute embeddings const { text_embeds } = await text_model (text_inputs); // Tensor { // dims: [ 2, 512 ], // type: 'float32', // data: Float32Array(1024) [ ... ], // size: 1024 // } Kind : static class of models CLIPTextModelWithProjection.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of CLIPTextModelWithProjection models.CLIPVisionModel The vision model from CLIP without any head or projection on top. Kind : static class of models CLIPVisionModel.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of CLIPVisionModel models.CLIPVisionModelWithProjection CLIP Vision Model with a projection layer on top (a linear layer on top of the pooled output) Example: Compute vision embeddings with CLIPVisionModelWithProjection . Copied import { AutoProcessor , CLIPVisionModelWithProjection , RawImage } from '@huggingface/transformers' ; // Load processor and vision model const processor = await AutoProcessor . from_pretrained ( 'Xenova/clip-vit-base-patch16' ); const vision_model = await CLIPVisionModelWithProjection . from_pretrained ( 'Xenova/clip-vit-base-patch16' ); // Read image and run processor let image = await RawImage . read ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/football-match.jpg' ); let image_inputs = await processor (image); // Compute embeddings const { image_embeds } = await vision_model (image_inputs); // Tensor { // dims: [ 1, 512 ], // type: 'float32', // data: Float32Array(512) [ ... ], // size: 512 // } Kind : static class of models CLIPVisionModelWithProjection.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of CLIPVisionModelWithProjection models.SiglipModel SigLIP Text and Vision Model with a projection layers on top Example: Perform zero-shot image classification with a SiglipModel . Copied import { AutoTokenizer , AutoProcessor , SiglipModel , RawImage } from '@huggingface/transformers' ; // Load tokenizer, processor, and model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); const processor = await AutoProcessor . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); const model = await SiglipModel . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); // Run tokenization const texts = [ 'a photo of 2 cats' , 'a photo of 2 dogs' ]; const text_inputs = tokenizer (texts, { padding : 'max_length' , truncation : true }); // Read image and run processor const image = await RawImage . read ( 'http://images.cocodataset.org/val2017/000000039769.jpg' ); const image_inputs = await processor (image); // Run model with both text and pixel inputs const output = await model ({ ...text_inputs, ...image_inputs }); // { // logits_per_image: Tensor { // dims: [ 1, 2 ], // data: Float32Array(2) [ -1.6019744873046875, -10.720091819763184 ], // }, // logits_per_text: Tensor { // dims: [ 2, 1 ], // data: Float32Array(2) [ -1.6019744873046875, -10.720091819763184 ], // }, // text_embeds: Tensor { // dims: [ 2, 768 ], // data: Float32Array(1536) [ ... ], // }, // image_embeds: Tensor { // dims: [ 1, 768 ], // data: Float32Array(768) [ ... ], // } // } Kind : static class of models models.SiglipTextModel The text model from SigLIP without any head or projection on top. 
Example: Compute text embeddings with SiglipTextModel . Copied import { AutoTokenizer , SiglipTextModel } from '@huggingface/transformers' ; // Load tokenizer and text model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); const text_model = await SiglipTextModel . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); // Run tokenization const texts = [ 'a photo of 2 cats' , 'a photo of 2 dogs' ]; const text_inputs = tokenizer (texts, { padding : 'max_length' , truncation : true }); // Compute embeddings const { pooler_output } = await text_model (text_inputs); // Tensor { // dims: [ 2, 768 ], // type: 'float32', // data: Float32Array(1536) [ ... ], // size: 1536 // } Kind : static class of models SiglipTextModel.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of SiglipTextModel models.SiglipVisionModel The vision model from SigLIP without any head or projection on top. Example: Compute vision embeddings with SiglipVisionModel . Copied import { AutoProcessor , SiglipVisionModel , RawImage } from '@huggingface/transformers' ; // Load processor and vision model const processor = await AutoProcessor . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); const vision_model = await SiglipVisionModel . from_pretrained ( 'Xenova/siglip-base-patch16-224' ); // Read image and run processor const image = await RawImage . read ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/football-match.jpg' ); const image_inputs = await processor (image); // Compute embeddings const { pooler_output } = await vision_model (image_inputs); // Tensor { // dims: [ 1, 768 ], // type: 'float32', // data: Float32Array(768) [ ... ], // size: 768 // } Kind : static class of models SiglipVisionModel.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of SiglipVisionModel models.CLIPSegForImageSegmentation CLIPSeg model with a Transformer-based decoder on top for zero-shot and one-shot image segmentation. Example: Perform zero-shot image segmentation with a CLIPSegForImageSegmentation model. Copied import { AutoTokenizer , AutoProcessor , CLIPSegForImageSegmentation , RawImage } from '@huggingface/transformers' ; // Load tokenizer, processor, and model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/clipseg-rd64-refined' ); const processor = await AutoProcessor . from_pretrained ( 'Xenova/clipseg-rd64-refined' ); const model = await CLIPSegForImageSegmentation . from_pretrained ( 'Xenova/clipseg-rd64-refined' ); // Run tokenization const texts = [ 'a glass' , 'something to fill' , 'wood' , 'a jar' ]; const text_inputs = tokenizer (texts, { padding : true , truncation : true }); // Read image and run processor const image = await RawImage . read ( 'https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true' ); const image_inputs = await processor (image); // Run model with both text and pixel inputs const { logits } = await model ({ ...text_inputs, ...image_inputs }); // logits: Tensor { // dims: [4, 352, 352], // type: 'float32', // data: Float32Array(495616) [ ... ], // size: 495616 // } You can visualize the predictions as follows: Copied const preds = logits . unsqueeze_ ( 1 ) . sigmoid_ () . mul_ ( 255 ) . round_ () . to ( 'uint8' ); for ( let i = 0 ; i < preds. dims [ 0 ]; ++i) { const img = RawImage . fromTensor (preds[i]); img. 
save ( `prediction_ ${i} .png` ); } Kind : static class of models models.GPT2LMHeadModel GPT-2 language model head on top of the GPT-2 base model. This model is suitable for text generation tasks. Kind : static class of models models.JAISModel The bare JAIS Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.JAISLMHeadModel The JAIS Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). Kind : static class of models models.CodeGenModel CodeGenModel is a class representing a code generation model without a language model head. Kind : static class of models models.CodeGenForCausalLM CodeGenForCausalLM is a class that represents a code generation model based on the GPT-2 architecture. It extends the CodeGenPreTrainedModel class. Kind : static class of models models.LlamaPreTrainedModel The bare LLama Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.LlamaModel The bare LLaMA Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.CoherePreTrainedModel The bare Cohere Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.GemmaPreTrainedModel The bare Gemma Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.GemmaModel The bare Gemma Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.Gemma2PreTrainedModel The bare Gemma2 Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.Gemma2Model The bare Gemma2 Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.Qwen2PreTrainedModel The bare Qwen2 Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.Qwen2Model The bare Qwen2 Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.PhiModel The bare Phi Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.Phi3Model The bare Phi3 Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.BloomPreTrainedModel The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). Kind : static class of models models.BloomModel The bare Bloom Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.BloomForCausalLM The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). Kind : static class of models models.MptModel The bare Mpt Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.MptForCausalLM The MPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). Kind : static class of models models.OPTModel The bare OPT Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.OPTForCausalLM The OPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). 
Kind : static class of models models.VitMatteForImageMatting ViTMatte framework leveraging any vision backbone e.g. for ADE20k, CityScapes. Example: Perform image matting with a VitMatteForImageMatting model. Copied import { AutoProcessor , VitMatteForImageMatting , RawImage } from '@huggingface/transformers' ; // Load processor and model const processor = await AutoProcessor . from_pretrained ( 'Xenova/vitmatte-small-distinctions-646' ); const model = await VitMatteForImageMatting . from_pretrained ( 'Xenova/vitmatte-small-distinctions-646' ); // Load image and trimap const image = await RawImage . fromURL ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/vitmatte_image.png' ); const trimap = await RawImage . fromURL ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/vitmatte_trimap.png' ); // Prepare image + trimap for the model const inputs = await processor (image, trimap); // Predict alpha matte const { alphas } = await model (inputs); // Tensor { // dims: [ 1, 1, 640, 960 ], // type: 'float32', // size: 614400, // data: Float32Array(614400) [ 0.9894027709960938, 0.9970508813858032, ... ] // } You can visualize the alpha matte as follows: Copied import { Tensor , cat } from '@huggingface/transformers' ; // Visualize predicted alpha matte const imageTensor = image. toTensor (); // Convert float (0-1) alpha matte to uint8 (0-255) const alphaChannel = alphas . squeeze ( 0 ) . mul_ ( 255 ) . clamp_ ( 0 , 255 ) . round_ () . to ( 'uint8' ); // Concatenate original image with predicted alpha const imageData = cat ([imageTensor, alphaChannel], 0 ); // Save output image const outputImage = RawImage . fromTensor (imageData); outputImage. save ( 'output.png' ); Kind : static class of models vitMatteForImageMatting._call(model_inputs) Kind : instance method of VitMatteForImageMatting Param Type model_inputs any models.DetrObjectDetectionOutput Kind : static class of models new DetrObjectDetectionOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Classification logits (including no-object) for all queries. output.pred_boxes Tensor Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). models.DetrSegmentationOutput Kind : static class of models new DetrSegmentationOutput(output) Param Type Description output Object The output of the model. output.logits Tensor The output logits of the model. output.pred_boxes Tensor Predicted boxes. output.pred_masks Tensor Predicted masks. models.RTDetrObjectDetectionOutput Kind : static class of models new RTDetrObjectDetectionOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Classification logits (including no-object) for all queries. output.pred_boxes Tensor Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). models.TableTransformerModel The bare Table Transformer Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. 
Kind : static class of models models.TableTransformerForObjectDetection Table Transformer Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. Kind : static class of models tableTransformerForObjectDetection._call(model_inputs) Kind : instance method of TableTransformerForObjectDetection Param Type model_inputs any models.ResNetPreTrainedModel An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. Kind : static class of models models.ResNetModel The bare ResNet model outputting raw features without any specific head on top. Kind : static class of models models.ResNetForImageClassification ResNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. Kind : static class of models resNetForImageClassification._call(model_inputs) Kind : instance method of ResNetForImageClassification Param Type model_inputs any models.Swin2SRModel The bare Swin2SR Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.Swin2SRForImageSuperResolution Swin2SR Model transformer with an upsampler head on top for image super resolution and restoration. Example: Super-resolution w/ Xenova/swin2SR-classical-sr-x2-64 . Copied import { AutoProcessor , Swin2SRForImageSuperResolution , RawImage } from '@huggingface/transformers' ; // Load processor and model const model_id = 'Xenova/swin2SR-classical-sr-x2-64' ; const processor = await AutoProcessor . from_pretrained (model_id); const model = await Swin2SRForImageSuperResolution . from_pretrained (model_id); // Prepare model inputs const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/butterfly.jpg' ; const image = await RawImage . fromURL (url); const inputs = await processor (image); // Run model const outputs = await model (inputs); // Convert Tensor to RawImage const output = outputs. reconstruction . squeeze (). clamp_ ( 0 , 1 ). mul_ ( 255 ). round_ (). to ( 'uint8' ); const outputImage = RawImage . fromTensor (output); // RawImage { // data: Uint8Array(786432) [ 41, 31, 24, ... ], // width: 512, // height: 512, // channels: 3 // } Kind : static class of models models.DPTModel The bare DPT Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.DPTForDepthEstimation DPT Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2. Example: Depth estimation w/ Xenova/dpt-hybrid-midas . Copied import { DPTForDepthEstimation , AutoProcessor , RawImage , interpolate, max } from '@huggingface/transformers' ; // Load model and processor const model_id = 'Xenova/dpt-hybrid-midas' ; const model = await DPTForDepthEstimation . from_pretrained (model_id); const processor = await AutoProcessor . from_pretrained (model_id); // Load image from URL const url = 'http://images.cocodataset.org/val2017/000000039769.jpg' ; const image = await RawImage . fromURL (url); // Prepare image for the model const inputs = await processor (image); // Run model const { predicted_depth } = await model (inputs); // Interpolate to original size const prediction = interpolate (predicted_depth, image. size . reverse (), 'bilinear' , false ); // Visualize the prediction const formatted = prediction. mul_ ( 255 / max (prediction. data )[ 0 ]). to ( 'uint8' ); const depth = RawImage . 
fromTensor (formatted); // RawImage { // data: Uint8Array(307200) [ 85, 85, 84, ... ], // width: 640, // height: 480, // channels: 1 // } Kind : static class of models models.DepthAnythingForDepthEstimation Depth Anything Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2. Kind : static class of models models.GLPNModel The bare GLPN encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top. Kind : static class of models models.GLPNForDepthEstimation GLPN Model transformer with a lightweight depth estimation head on top e.g. for KITTI, NYUv2. Example: Depth estimation w/ Xenova/glpn-kitti . Copied import { GLPNForDepthEstimation , AutoProcessor , RawImage , interpolate, max } from '@huggingface/transformers' ; // Load model and processor const model_id = 'Xenova/glpn-kitti' ; const model = await GLPNForDepthEstimation . from_pretrained (model_id); const processor = await AutoProcessor . from_pretrained (model_id); // Load image from URL const url = 'http://images.cocodataset.org/val2017/000000039769.jpg' ; const image = await RawImage . fromURL (url); // Prepare image for the model const inputs = await processor (image); // Run model const { predicted_depth } = await model (inputs); // Interpolate to original size const prediction = interpolate (predicted_depth, image. size . reverse (), 'bilinear' , false ); // Visualize the prediction const formatted = prediction. mul_ ( 255 / max (prediction. data )[ 0 ]). to ( 'uint8' ); const depth = RawImage . fromTensor (formatted); // RawImage { // data: Uint8Array(307200) [ 207, 169, 154, ... ], // width: 640, // height: 480, // channels: 1 // } Kind : static class of models models.DonutSwinModel The bare Donut Swin Model transformer outputting raw hidden-states without any specific head on top. Example: Step-by-step Document Parsing. Copied import { AutoProcessor , AutoTokenizer , AutoModelForVision2Seq , RawImage } from '@huggingface/transformers' ; // Choose model to use const model_id = 'Xenova/donut-base-finetuned-cord-v2' ; // Prepare image inputs const processor = await AutoProcessor . from_pretrained (model_id); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/receipt.png' ; const image = await RawImage . read (url); const image_inputs = await processor (image); // Prepare decoder inputs const tokenizer = await AutoTokenizer . from_pretrained (model_id); const task_prompt = '<s_cord-v2>' ; const decoder_input_ids = tokenizer (task_prompt, { add_special_tokens : false , }). input_ids ; // Create the model const model = await AutoModelForVision2Seq . from_pretrained (model_id); // Run inference const output = await model. generate (image_inputs. pixel_values , { decoder_input_ids, max_length : model. config . decoder . max_position_embeddings , }); // Decode output const decoded = tokenizer. 
batch_decode (output)[ 0 ]; // <s_cord-v2><s_menu><s_nm> CINNAMON SUGAR</s_nm><s_unitprice> 17,000</s_unitprice><s_cnt> 1 x</s_cnt><s_price> 17,000</s_price></s_menu><s_sub_total><s_subtotal_price> 17,000</s_subtotal_price></s_sub_total><s_total><s_total_price> 17,000</s_total_price><s_cashprice> 20,000</s_cashprice><s_changeprice> 3,000</s_changeprice></s_total></s> Example: Step-by-step Document Visual Question Answering (DocVQA) Copied import { AutoProcessor , AutoTokenizer , AutoModelForVision2Seq , RawImage } from '@huggingface/transformers' ; // Choose model to use const model_id = 'Xenova/donut-base-finetuned-docvqa' ; // Prepare image inputs const processor = await AutoProcessor . from_pretrained (model_id); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/invoice.png' ; const image = await RawImage . read (url); const image_inputs = await processor (image); // Prepare decoder inputs const tokenizer = await AutoTokenizer . from_pretrained (model_id); const question = 'What is the invoice number?' ; const task_prompt = `<s_docvqa><s_question> ${question} </s_question><s_answer>` ; const decoder_input_ids = tokenizer (task_prompt, { add_special_tokens : false , }). input_ids ; // Create the model const model = await AutoModelForVision2Seq . from_pretrained (model_id); // Run inference const output = await model. generate (image_inputs. pixel_values , { decoder_input_ids, max_length : model. config . decoder . max_position_embeddings , }); // Decode output const decoded = tokenizer. batch_decode (output)[ 0 ]; // <s_docvqa><s_question> What is the invoice number?</s_question><s_answer> us-001</s_answer></s> Kind : static class of models models.ConvNextModel The bare ConvNext model outputting raw features without any specific head on top. Kind : static class of models models.ConvNextForImageClassification ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. Kind : static class of models convNextForImageClassification._call(model_inputs) Kind : instance method of ConvNextForImageClassification Param Type model_inputs any models.ConvNextV2Model The bare ConvNextV2 model outputting raw features without any specific head on top. Kind : static class of models models.ConvNextV2ForImageClassification ConvNextV2 Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. Kind : static class of models convNextV2ForImageClassification._call(model_inputs) Kind : instance method of ConvNextV2ForImageClassification Param Type model_inputs any models.Dinov2Model The bare DINOv2 Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.Dinov2ForImageClassification Dinov2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. Kind : static class of models dinov2ForImageClassification._call(model_inputs) Kind : instance method of Dinov2ForImageClassification Param Type model_inputs any models.YolosObjectDetectionOutput Kind : static class of models new YolosObjectDetectionOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Classification logits (including no-object) for all queries. output.pred_boxes Tensor Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). 
These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). models.SamModel Segment Anything Model (SAM) for generating segmentation masks, given an input image and optional 2D location and bounding boxes. Example: Perform mask generation w/ Xenova/sam-vit-base . Copied import { SamModel , AutoProcessor , RawImage } from '@huggingface/transformers' ; const model = await SamModel . from_pretrained ( 'Xenova/sam-vit-base' ); const processor = await AutoProcessor . from_pretrained ( 'Xenova/sam-vit-base' ); const img_url = 'https://huggingface.co/ybelkada/segment-anything/resolve/v3.0.0/assets/car.png' ; const raw_image = await RawImage . read (img_url); const input_points = [[[ 450 , 600 ]]] // 2D localization of a window const inputs = await processor (raw_image, { input_points }); const outputs = await model (inputs); const masks = await processor. post_process_masks (outputs. pred_masks , inputs. original_sizes , inputs. reshaped_input_sizes ); // [ // Tensor { // dims: [ 1, 3, 1764, 2646 ], // type: 'bool', // data: Uint8Array(14002632) [ ... ], // size: 14002632 // } // ] const scores = outputs. iou_scores ; // Tensor { // dims: [ 1, 1, 3 ], // type: 'float32', // data: Float32Array(3) [ // 0.8892380595207214, // 0.9311248064041138, // 0.983696699142456 // ], // size: 3 // } Kind : static class of models .SamModel .get_image_embeddings(model_inputs) ⇒ Promise.<{image_embeddings: Tensor, image_positional_embeddings: Tensor}> .forward(model_inputs) ⇒ Promise.<Object> ._call(model_inputs) ⇒ Promise.<SamImageSegmentationOutput> samModel.get_image_embeddings(model_inputs) ⇒ <code> Promise. < {image_embeddings: Tensor, image_positional_embeddings: Tensor} > </code> Compute image embeddings and positional image embeddings, given the pixel values of an image. Kind : instance method of SamModel Returns : Promise.<{image_embeddings: Tensor, image_positional_embeddings: Tensor}> - The image embeddings and positional image embeddings. Param Type Description model_inputs Object Object containing the model inputs. model_inputs.pixel_values Tensor Pixel values obtained using a SamProcessor . samModel.forward(model_inputs) ⇒ <code> Promise. < Object > </code> Kind : instance method of SamModel Returns : Promise.<Object> - The output of the model. Param Type Description model_inputs SamModelInputs Object containing the model inputs. samModel._call(model_inputs) ⇒ <code> Promise. < SamImageSegmentationOutput > </code> Runs the model with the provided inputs Kind : instance method of SamModel Returns : Promise.<SamImageSegmentationOutput> - Object containing segmentation outputs Param Type Description model_inputs Object Model inputs models.SamImageSegmentationOutput Base class for Segment-Anything model’s output. Kind : static class of models new SamImageSegmentationOutput(output) Param Type Description output Object The output of the model. output.iou_scores Tensor The output logits of the model. output.pred_masks Tensor Predicted boxes. models.Wav2Vec2Model The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top. Example: Load and run a Wav2Vec2Model for feature extraction. Copied import { AutoProcessor , AutoModel , read_audio } from '@huggingface/transformers' ; // Read and preprocess audio const processor = await AutoProcessor . 
from_pretrained ( 'Xenova/mms-300m' ); const audio = await read_audio ( 'https://huggingface.co/datasets/Narsil/asr_dummy/resolve/v3.0.0/mlk.flac' , 16000 ); const inputs = await processor (audio); // Run model with inputs const model = await AutoModel . from_pretrained ( 'Xenova/mms-300m' ); const output = await model (inputs); // { // last_hidden_state: Tensor { // dims: [ 1, 1144, 1024 ], // type: 'float32', // data: Float32Array(1171456) [ ... ], // size: 1171456 // } // } Kind : static class of models models.Wav2Vec2ForAudioFrameClassification Wav2Vec2 Model with a frame classification head on top for tasks like Speaker Diarization. Kind : static class of models wav2Vec2ForAudioFrameClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of Wav2Vec2ForAudioFrameClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.PyAnnoteModel The bare PyAnnote Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.PyAnnoteForAudioFrameClassification PyAnnote Model with a frame classification head on top for tasks like Speaker Diarization. Example: Load and run a PyAnnoteForAudioFrameClassification for speaker diarization. Copied import { AutoProcessor , AutoModelForAudioFrameClassification , read_audio } from '@huggingface/transformers' ; // Load model and processor const model_id = 'onnx-community/pyannote-segmentation-3.0' ; const model = await AutoModelForAudioFrameClassification . from_pretrained (model_id); const processor = await AutoProcessor . from_pretrained (model_id); // Read and preprocess audio const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/mlk.wav' ; const audio = await read_audio (url, processor. feature_extractor . config . sampling_rate ); const inputs = await processor (audio); // Run model with inputs const { logits } = await model (inputs); // { // logits: Tensor { // dims: [ 1, 767, 7 ], // [batch_size, num_frames, num_classes] // type: 'float32', // data: Float32Array(5369) [ ... ], // size: 5369 // } // } const result = processor. post_process_speaker_diarization (logits, audio. length ); // [ // [ // { id: 0, start: 0, end: 1.0512535626298245, confidence: 0.8220156481664611 }, // { id: 2, start: 1.0512535626298245, end: 2.3398869619825127, confidence: 0.9008811707860472 }, // ... // ] // ] // Display result console . 
table (result[ 0 ], [ 'start' , 'end' , 'id' , 'confidence' ]); // ┌─────────┬────────────────────┬────────────────────┬────┬─────────────────────┐ // │ (index) │ start │ end │ id │ confidence │ // ├─────────┼────────────────────┼────────────────────┼────┼─────────────────────┤ // │ 0 │ 0 │ 1.0512535626298245 │ 0 │ 0.8220156481664611 │ // │ 1 │ 1.0512535626298245 │ 2.3398869619825127 │ 2 │ 0.9008811707860472 │ // │ 2 │ 2.3398869619825127 │ 3.5946089560890773 │ 0 │ 0.7521651315796233 │ // │ 3 │ 3.5946089560890773 │ 4.578039708226655 │ 2 │ 0.8491978128022479 │ // │ 4 │ 4.578039708226655 │ 4.594995410849717 │ 0 │ 0.2935352600416393 │ // │ 5 │ 4.594995410849717 │ 6.121008646925269 │ 3 │ 0.6788051309866024 │ // │ 6 │ 6.121008646925269 │ 6.256654267909762 │ 0 │ 0.37125512393851134 │ // │ 7 │ 6.256654267909762 │ 8.630452635138397 │ 2 │ 0.7467035186353542 │ // │ 8 │ 8.630452635138397 │ 10.088643060721703 │ 0 │ 0.7689364814666032 │ // │ 9 │ 10.088643060721703 │ 12.58113134631177 │ 2 │ 0.9123324509131324 │ // │ 10 │ 12.58113134631177 │ 13.005023911888312 │ 0 │ 0.4828358177572041 │ // └─────────┴────────────────────┴────────────────────┴────┴─────────────────────┘ Kind : static class of models pyAnnoteForAudioFrameClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of PyAnnoteForAudioFrameClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.UniSpeechModel The bare UniSpeech Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.UniSpeechForCTC UniSpeech Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Kind : static class of models uniSpeechForCTC._call(model_inputs) Kind : instance method of UniSpeechForCTC Param Type Description model_inputs Object model_inputs.input_values Tensor Float values of input raw speech waveform. model_inputs.attention_mask Tensor Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1] models.UniSpeechForSequenceClassification UniSpeech Model with a sequence classification head on top (a linear layer over the pooled output). Kind : static class of models uniSpeechForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of UniSpeechForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.UniSpeechSatModel The bare UniSpeechSat Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.UniSpeechSatForCTC UniSpeechSat Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Kind : static class of models uniSpeechSatForCTC._call(model_inputs) Kind : instance method of UniSpeechSatForCTC Param Type Description model_inputs Object model_inputs.input_values Tensor Float values of input raw speech waveform. model_inputs.attention_mask Tensor Mask to avoid performing convolution and attention on padding token indices. 
Mask values selected in [0, 1] models.UniSpeechSatForSequenceClassification UniSpeechSat Model with a sequence classification head on top (a linear layer over the pooled output). Kind : static class of models uniSpeechSatForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of UniSpeechSatForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.UniSpeechSatForAudioFrameClassification UniSpeechSat Model with a frame classification head on top for tasks like Speaker Diarization. Kind : static class of models uniSpeechSatForAudioFrameClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of UniSpeechSatForAudioFrameClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.Wav2Vec2BertModel The bare Wav2Vec2Bert Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.Wav2Vec2BertForCTC Wav2Vec2Bert Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Kind : static class of models wav2Vec2BertForCTC._call(model_inputs) Kind : instance method of Wav2Vec2BertForCTC Param Type Description model_inputs Object model_inputs.input_features Tensor Float values of input mel-spectrogram. model_inputs.attention_mask Tensor Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1] models.Wav2Vec2BertForSequenceClassification Wav2Vec2Bert Model with a sequence classification head on top (a linear layer over the pooled output). Kind : static class of models wav2Vec2BertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of Wav2Vec2BertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.HubertModel The bare Hubert Model transformer outputting raw hidden-states without any specific head on top. Example: Load and run a HubertModel for feature extraction. Copied import { AutoProcessor , AutoModel , read_audio } from '@huggingface/transformers' ; // Read and preprocess audio const processor = await AutoProcessor . from_pretrained ( 'Xenova/hubert-base-ls960' ); const audio = await read_audio ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' , 16000 ); const inputs = await processor (audio); // Load and run model with inputs const model = await AutoModel . from_pretrained ( 'Xenova/hubert-base-ls960' ); const output = await model (inputs); // { // last_hidden_state: Tensor { // dims: [ 1, 549, 768 ], // type: 'float32', // data: Float32Array(421632) [0.0682469978928566, 0.08104046434164047, -0.4975186586380005, ...], // size: 421632 // } // } Kind : static class of models models.HubertForCTC Hubert Model with a language modeling head on top for Connectionist Temporal Classification (CTC). 
Kind : static class of models hubertForCTC._call(model_inputs) Kind : instance method of HubertForCTC Param Type Description model_inputs Object model_inputs.input_values Tensor Float values of input raw speech waveform. model_inputs.attention_mask Tensor Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1] models.HubertForSequenceClassification Hubert Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. Kind : static class of models hubertForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of HubertForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.WavLMPreTrainedModel An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. Kind : static class of models models.WavLMModel The bare WavLM Model transformer outputting raw hidden-states without any specific head on top. Example: Load and run a WavLMModel for feature extraction. Copied import { AutoProcessor , AutoModel , read_audio } from '@huggingface/transformers' ; // Read and preprocess audio const processor = await AutoProcessor . from_pretrained ( 'Xenova/wavlm-base' ); const audio = await read_audio ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' , 16000 ); const inputs = await processor (audio); // Run model with inputs const model = await AutoModel . from_pretrained ( 'Xenova/wavlm-base' ); const output = await model (inputs); // { // last_hidden_state: Tensor { // dims: [ 1, 549, 768 ], // type: 'float32', // data: Float32Array(421632) [-0.349443256855011, -0.39341306686401367, 0.022836603224277496, ...], // size: 421632 // } // } Kind : static class of models models.WavLMForCTC WavLM Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Kind : static class of models wavLMForCTC._call(model_inputs) Kind : instance method of WavLMForCTC Param Type Description model_inputs Object model_inputs.input_values Tensor Float values of input raw speech waveform. model_inputs.attention_mask Tensor Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1] models.WavLMForSequenceClassification WavLM Model with a sequence classification head on top (a linear layer over the pooled output). Kind : static class of models wavLMForSequenceClassification._call(model_inputs) ⇒ <code> Promise. < SequenceClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of WavLMForSequenceClassification Returns : Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.WavLMForXVector WavLM Model with an XVector feature extraction head on top for tasks like Speaker Verification. Example: Extract speaker embeddings with WavLMForXVector . Copied import { AutoProcessor , AutoModel , read_audio } from '@huggingface/transformers' ; // Read and preprocess audio const processor = await AutoProcessor . 
from_pretrained ( 'Xenova/wavlm-base-plus-sv' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' ; const audio = await read_audio (url, 16000 ); const inputs = await processor (audio); // Run model with inputs const model = await AutoModel . from_pretrained ( 'Xenova/wavlm-base-plus-sv' ); const outputs = await model (inputs); // { // logits: Tensor { // dims: [ 1, 512 ], // type: 'float32', // data: Float32Array(512) [0.5847219228744507, ...], // size: 512 // }, // embeddings: Tensor { // dims: [ 1, 512 ], // type: 'float32', // data: Float32Array(512) [-0.09079201519489288, ...], // size: 512 // } // } Kind : static class of models wavLMForXVector._call(model_inputs) ⇒ <code> Promise. < XVectorOutput > </code> Calls the model on new inputs. Kind : instance method of WavLMForXVector Returns : Promise.<XVectorOutput> - An object containing the model’s output logits and speaker embeddings. Param Type Description model_inputs Object The inputs to the model. models.WavLMForAudioFrameClassification WavLM Model with a frame classification head on top for tasks like Speaker Diarization. Example: Perform speaker diarization with WavLMForAudioFrameClassification . Copied import { AutoProcessor , AutoModelForAudioFrameClassification , read_audio } from '@huggingface/transformers' ; // Read and preprocess audio const processor = await AutoProcessor . from_pretrained ( 'Xenova/wavlm-base-plus-sd' ); const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/jfk.wav' ; const audio = await read_audio (url, 16000 ); const inputs = await processor (audio); // Run model with inputs const model = await AutoModelForAudioFrameClassification . from_pretrained ( 'Xenova/wavlm-base-plus-sd' ); const { logits } = await model (inputs); // { // logits: Tensor { // dims: [ 1, 549, 2 ], // [batch_size, num_frames, num_speakers] // type: 'float32', // data: Float32Array(1098) [-3.5301010608673096, ...], // size: 1098 // } // } const labels = logits[ 0 ]. sigmoid (). tolist (). map ( frames => frames. map ( speaker => speaker > 0.5 ? 1 : 0 ) ); console . log (labels); // labels is a one-hot array of shape (num_frames, num_speakers) // [ // [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], // [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], // [0, 0], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], // ... // ] Kind : static class of models wavLMForAudioFrameClassification._call(model_inputs) ⇒ <code> Promise. < TokenClassifierOutput > </code> Calls the model on new inputs. Kind : instance method of WavLMForAudioFrameClassification Returns : Promise.<TokenClassifierOutput> - An object containing the model’s output logits for sequence classification. Param Type Description model_inputs Object The inputs to the model. models.SpeechT5PreTrainedModel An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. Kind : static class of models models.SpeechT5Model The bare SpeechT5 Encoder-Decoder Model outputting raw hidden-states without any specific pre- or post-nets. Kind : static class of models models.SpeechT5ForSpeechToText SpeechT5 Model with a speech encoder and a text decoder. Example: Generate speech from text with SpeechT5ForSpeechToText . Copied import { AutoTokenizer , AutoProcessor , SpeechT5ForTextToSpeech , SpeechT5HifiGan , Tensor } from '@huggingface/transformers' ; // Load the tokenizer and processor const tokenizer = await AutoTokenizer . 
from_pretrained ( 'Xenova/speecht5_tts' ); const processor = await AutoProcessor . from_pretrained ( 'Xenova/speecht5_tts' ); // Load the models // NOTE: We use the full-precision versions as they are more accurate const model = await SpeechT5ForTextToSpeech . from_pretrained ( 'Xenova/speecht5_tts' , { dtype : 'fp32' }); const vocoder = await SpeechT5HifiGan . from_pretrained ( 'Xenova/speecht5_hifigan' , { dtype : 'fp32' }); // Load speaker embeddings from URL const speaker_embeddings_data = new Float32Array ( await ( await fetch ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/speaker_embeddings.bin' )). arrayBuffer () ); const speaker_embeddings = new Tensor ( 'float32' , speaker_embeddings_data, [ 1 , speaker_embeddings_data. length ] ) // Run tokenization const { input_ids } = tokenizer ( 'Hello, my dog is cute' ); // Generate waveform const { waveform } = await model. generate_speech (input_ids, speaker_embeddings, { vocoder }); console . log (waveform) // Tensor { // dims: [ 26112 ], // type: 'float32', // size: 26112, // data: Float32Array(26112) [ -0.00043630177970044315, -0.00018082228780258447, ... ], // } Kind : static class of models models.SpeechT5ForTextToSpeech SpeechT5 Model with a text encoder and a speech decoder. Kind : static class of models speechT5ForTextToSpeech.generate_speech(input_values, speaker_embeddings, options) ⇒ <code> Promise. < SpeechOutput > </code> Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a speech waveform using a vocoder. Kind : instance method of SpeechT5ForTextToSpeech Returns : Promise.<SpeechOutput> - A promise which resolves to an object containing the spectrogram, waveform, and cross-attention tensors. Param Type Default Description input_values Tensor Indices of input sequence tokens in the vocabulary. speaker_embeddings Tensor Tensor containing the speaker embeddings. options Object Optional parameters for generating speech. [options.threshold] number 0.5 The generated sequence ends when the predicted stop token probability exceeds this value. [options.minlenratio] number 0.0 Used to calculate the minimum required length for the output sequence. [options.maxlenratio] number 20.0 Used to calculate the maximum allowed length for the output sequence. [options.vocoder] Object The vocoder that converts the mel spectrogram into a speech waveform. If null , the output is the mel spectrogram. [options.output_cross_attentions] boolean false Whether or not to return the attention tensors of the decoder's cross-attention layers. models.SpeechT5HifiGan HiFi-GAN vocoder. See the SpeechT5ForTextToSpeech example above for usage. Kind : static class of models models.TrOCRForCausalLM The TrOCR Decoder with a language modeling head. Kind : static class of models models.MistralPreTrainedModel The bare Mistral Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.Starcoder2PreTrainedModel The bare Starcoder2 Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.FalconPreTrainedModel The bare Falcon Model outputting raw hidden-states without any specific head on top. Kind : static class of models models.ClapTextModelWithProjection CLAP Text Model with a projection layer on top (a linear layer on top of the pooled output). Example: Compute text embeddings with ClapTextModelWithProjection .
Copied import { AutoTokenizer , ClapTextModelWithProjection } from '@huggingface/transformers' ; // Load tokenizer and text model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/clap-htsat-unfused' ); const text_model = await ClapTextModelWithProjection . from_pretrained ( 'Xenova/clap-htsat-unfused' ); // Run tokenization const texts = [ 'a sound of a cat' , 'a sound of a dog' ]; const text_inputs = tokenizer (texts, { padding : true , truncation : true }); // Compute embeddings const { text_embeds } = await text_model (text_inputs); // Tensor { // dims: [ 2, 512 ], // type: 'float32', // data: Float32Array(1024) [ ... ], // size: 1024 // } Kind : static class of models ClapTextModelWithProjection.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of ClapTextModelWithProjection models.ClapAudioModelWithProjection CLAP Audio Model with a projection layer on top (a linear layer on top of the pooled output). Example: Compute audio embeddings with ClapAudioModelWithProjection . Copied import { AutoProcessor , ClapAudioModelWithProjection , read_audio } from '@huggingface/transformers' ; // Load processor and audio model const processor = await AutoProcessor . from_pretrained ( 'Xenova/clap-htsat-unfused' ); const audio_model = await ClapAudioModelWithProjection . from_pretrained ( 'Xenova/clap-htsat-unfused' ); // Read audio and run processor const audio = await read_audio ( 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/cat_meow.wav' ); const audio_inputs = await processor (audio); // Compute embeddings const { audio_embeds } = await audio_model (audio_inputs); // Tensor { // dims: [ 1, 512 ], // type: 'float32', // data: Float32Array(512) [ ... ], // size: 512 // } Kind : static class of models ClapAudioModelWithProjection.from_pretrained() : <code> PreTrainedModel.from_pretrained </code> Kind : static method of ClapAudioModelWithProjection models.VitsModel The complete VITS model, for text-to-speech synthesis. Example: Generate speech from text with VitsModel . Copied import { AutoTokenizer , VitsModel } from '@huggingface/transformers' ; // Load the tokenizer and model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/mms-tts-eng' ); const model = await VitsModel . from_pretrained ( 'Xenova/mms-tts-eng' ); // Run tokenization const inputs = tokenizer ( 'I love transformers' ); // Generate waveform const { waveform } = await model (inputs); // Tensor { // dims: [ 1, 35328 ], // type: 'float32', // data: Float32Array(35328) [ ... ], // size: 35328, // } Kind : static class of models vitsModel._call(model_inputs) ⇒ <code> Promise. < VitsModelOutput > </code> Calls the model on new inputs. Kind : instance method of VitsModel Returns : Promise.<VitsModelOutput> - The outputs for the VITS model. Param Type Description model_inputs Object The inputs to the model. models.SegformerModel The bare SegFormer encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top. Kind : static class of models models.SegformerForImageClassification SegFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden states) e.g. for ImageNet. Kind : static class of models models.SegformerForSemanticSegmentation SegFormer Model transformer with an all-MLP decode head on top e.g. for ADE20k, CityScapes. 
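Example (illustrative sketch): run semantic segmentation with a SegFormer checkpoint. The checkpoint ID 'Xenova/segformer-b0-finetuned-ade-512-512' and the example image URL are assumptions for illustration, and AutoModelForSemanticSegmentation is expected to resolve to SegformerForSemanticSegmentation from the model config.
Copied
import { AutoProcessor, AutoModelForSemanticSegmentation, RawImage } from '@huggingface/transformers';

// Load processor and model (checkpoint ID assumed for illustration)
const processor = await AutoProcessor.from_pretrained('Xenova/segformer-b0-finetuned-ade-512-512');
const model = await AutoModelForSemanticSegmentation.from_pretrained('Xenova/segformer-b0-finetuned-ade-512-512');

// Read and preprocess an image (URL assumed)
const image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg');
const inputs = await processor(image);

// Run the model; `logits` holds per-pixel class scores of shape
// [batch_size, num_labels, height / 4, width / 4] for SegFormer
const { logits } = await model(inputs);
console.log(logits.dims);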
Kind : static class of models models.StableLmModel The bare StableLm Model transformer outputting raw hidden-states without any specific head on top. Kind : static class of models models.StableLmForCausalLM StableLm Model with a language modeling head on top for Causal Language Modeling (with past). Kind : static class of models models.EfficientNetModel The bare EfficientNet model outputting raw features without any specific head on top. Kind : static class of models models.EfficientNetForImageClassification EfficientNet Model with an image classification head on top (a linear layer on top of the pooled features). Kind : static class of models efficientNetForImageClassification._call(model_inputs) Kind : instance method of EfficientNetForImageClassification Param Type model_inputs any models.MusicgenModel The bare Musicgen decoder model outputting raw hidden-states without any specific head on top. Kind : static class of models models.MusicgenForCausalLM The MusicGen decoder model with a language modelling head on top. Kind : static class of models models.MusicgenForConditionalGeneration The composite MusicGen model with a text encoder, audio encoder and Musicgen decoder, for music generation tasks with one or both of text and audio prompts. Example: Generate music from text with Xenova/musicgen-small . Copied import { AutoTokenizer , MusicgenForConditionalGeneration } from '@huggingface/transformers' ; // Load tokenizer and model const tokenizer = await AutoTokenizer . from_pretrained ( 'Xenova/musicgen-small' ); const model = await MusicgenForConditionalGeneration . from_pretrained ( 'Xenova/musicgen-small' , { dtype : 'fp32' } ); // Prepare text input const prompt = '80s pop track with bassy drums and synth' ; const inputs = tokenizer (prompt); // Generate audio const audio_values = await model. generate ({ ...inputs, max_new_tokens : 512 , do_sample : true , guidance_scale : 3 , }); // (Optional) Write the output to a WAV file import wavefile from 'wavefile' ; import fs from 'fs' ; const wav = new wavefile. WaveFile (); wav. fromScratch ( 1 , model. config . audio_encoder . sampling_rate , '32f' , audio_values. data ); fs. writeFileSync ( 'musicgen_out.wav' , wav. toBuffer ()); Kind : static class of models .MusicgenForConditionalGeneration ._apply_and_filter_by_delay_pattern_mask(outputs) ⇒ Tensor .generate(options) ⇒ Promise.<(ModelOutput|Tensor)> musicgenForConditionalGeneration._apply_and_filter_by_delay_pattern_mask(outputs) ⇒ <code> Tensor </code> Apply the pattern mask to the final ids, then revert the pattern delay mask by filtering the pad token id in a single step. Kind : instance method of MusicgenForConditionalGeneration Returns : Tensor - The filtered output tensor. Param Type Description outputs Tensor The output tensor from the model. musicgenForConditionalGeneration.generate(options) ⇒ <code> Promise. < (ModelOutput|Tensor) > </code> Generates sequences of token ids for models with a language modeling head. Kind : instance method of MusicgenForConditionalGeneration Returns : Promise.<(ModelOutput|Tensor)> - The output of the model, which can contain the generated token ids, attentions, and scores. Param Type options * models.MobileNetV1Model The bare MobileNetV1 model outputting raw hidden-states without any specific head on top. Kind : static class of models models.MobileNetV1ForImageClassification MobileNetV1 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. 
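Example (illustrative sketch): classify an image with a MobileNetV1 checkpoint. The checkpoint ID below is a placeholder (any MobileNetV1 image-classification export should follow the same pattern), the image URL is assumed, and the id2label lookup assumes the exported config carries label names.
Copied
import { AutoProcessor, AutoModelForImageClassification, RawImage } from '@huggingface/transformers';

// Load processor and model (placeholder checkpoint ID — substitute a real MobileNetV1 export)
const processor = await AutoProcessor.from_pretrained('your-org/mobilenet_v1-imagenet');
const model = await AutoModelForImageClassification.from_pretrained('your-org/mobilenet_v1-imagenet');

// Read and preprocess an image (URL assumed)
const image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg');
const inputs = await processor(image);

// Run the model and pick the highest-scoring class by scanning the raw logits
const { logits } = await model(inputs);
const scores = logits.data;
let best = 0;
for (let i = 1; i < scores.length; ++i) {
  if (scores[i] > scores[best]) best = i;
}
// id2label is assumed to be present in the exported config
console.log(model.config.id2label?.[best], scores[best]);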
Kind : static class of models mobileNetV1ForImageClassification._call(model_inputs) Kind : instance method of MobileNetV1ForImageClassification Param Type model_inputs any models.MobileNetV2Model The bare MobileNetV2 model outputting raw hidden-states without any specific head on top. Kind : static class of models models.MobileNetV2ForImageClassification MobileNetV2 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. Kind : static class of models mobileNetV2ForImageClassification._call(model_inputs) Kind : instance method of MobileNetV2ForImageClassification Param Type model_inputs any models.MobileNetV3Model The bare MobileNetV3 model outputting raw hidden-states without any specific head on top. Kind : static class of models models.MobileNetV3ForImageClassification MobileNetV3 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. Kind : static class of models mobileNetV3ForImageClassification._call(model_inputs) Kind : instance method of MobileNetV3ForImageClassification Param Type model_inputs any models.MobileNetV4Model The bare MobileNetV4 model outputting raw hidden-states without any specific head on top. Kind : static class of models models.MobileNetV4ForImageClassification MobileNetV4 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. Kind : static class of models mobileNetV4ForImageClassification._call(model_inputs) Kind : instance method of MobileNetV4ForImageClassification Param Type model_inputs any models.DecisionTransformerModel The model builds upon the GPT2 architecture to perform autoregressive prediction of actions in an offline RL setting. Refer to the paper for more details: https://arxiv.org/abs/2106.01345 Kind : static class of models models.PretrainedMixin Base class of all AutoModels. Contains the from_pretrained function which is used to instantiate pretrained models. Kind : static class of models .PretrainedMixin instance .MODEL_CLASS_MAPPINGS : * .BASE_IF_FAIL static .from_pretrained() : * pretrainedMixin.MODEL_CLASS_MAPPINGS : <code> * </code> Mapping from model type to model class. Kind : instance property of PretrainedMixin pretrainedMixin.BASE_IF_FAIL Whether to attempt to instantiate the base class ( PretrainedModel ) if the model type is not found in the mapping. Kind : instance property of PretrainedMixin PretrainedMixin.from_pretrained() : <code> * </code> Kind : static method of PretrainedMixin models.AutoModel Helper class which is used to instantiate pretrained models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models autoModel.MODEL_CLASS_MAPPINGS : <code> * </code> Kind : instance property of AutoModel models.AutoModelForSequenceClassification Helper class which is used to instantiate pretrained sequence classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForTokenClassification Helper class which is used to instantiate pretrained token classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForSeq2SeqLM Helper class which is used to instantiate pretrained sequence-to-sequence models with the from_pretrained function. 
The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForSpeechSeq2Seq Helper class which is used to instantiate pretrained sequence-to-sequence speech-to-text models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForTextToSpectrogram Helper class which is used to instantiate pretrained sequence-to-sequence text-to-spectrogram models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForTextToWaveform Helper class which is used to instantiate pretrained text-to-waveform models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForCausalLM Helper class which is used to instantiate pretrained causal language models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForMaskedLM Helper class which is used to instantiate pretrained masked language models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForQuestionAnswering Helper class which is used to instantiate pretrained question answering models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForVision2Seq Helper class which is used to instantiate pretrained vision-to-sequence models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForImageClassification Helper class which is used to instantiate pretrained image classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForImageSegmentation Helper class which is used to instantiate pretrained image segmentation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForSemanticSegmentation Helper class which is used to instantiate pretrained image segmentation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForUniversalSegmentation Helper class which is used to instantiate pretrained universal image segmentation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForObjectDetection Helper class which is used to instantiate pretrained object detection models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. Kind : static class of models models.AutoModelForMaskGeneration Helper class which is used to instantiate pretrained mask generation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config. 
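Example (illustrative sketch): for mask generation this helper typically resolves to a SAM-style model. The checkpoint ID ('Xenova/sam-vit-base'), the image URL, and the exact way prompt points are passed to the processor are assumptions — consult the SamProcessor documentation; the accepted model inputs are listed in the SamModelInputs typedef below.
Copied
import { AutoProcessor, AutoModelForMaskGeneration, RawImage } from '@huggingface/transformers';

// Load processor and model (checkpoint ID assumed for illustration)
const processor = await AutoProcessor.from_pretrained('Xenova/sam-vit-base');
const model = await AutoModelForMaskGeneration.from_pretrained('Xenova/sam-vit-base');

// Read an image and encode it together with a single 2D prompt point
// (the { input_points } argument shape is an assumption — see SamProcessor)
const image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/corgi.jpg');
const inputs = await processor(image, { input_points: [[[340, 250]]] });

// Run the model; the output is expected to expose `pred_masks` and `iou_scores`
const outputs = await model(inputs);
console.log(outputs);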
Kind : static class of models models.Seq2SeqLMOutput Kind : static class of models new Seq2SeqLMOutput(output) Param Type Description output Object The output of the model. output.logits Tensor The output logits of the model. output.past_key_values Tensor A tensor of key/value pairs that represents the previous state of the model. output.encoder_outputs Tensor The output of the encoder in a sequence-to-sequence model. [output.decoder_attentions] Tensor Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. [output.cross_attentions] Tensor Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. models.SequenceClassifierOutput Base class for outputs of sentence classification models. Kind : static class of models new SequenceClassifierOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Classification (or regression if config.num_labels==1) scores (before SoftMax). models.XVectorOutput Base class for outputs of XVector models. Kind : static class of models new XVectorOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Classification hidden states before AMSoftmax, of shape (batch_size, config.xvector_output_dim) . output.embeddings Tensor Utterance embeddings used for vector similarity-based retrieval, of shape (batch_size, config.xvector_output_dim) . models.TokenClassifierOutput Base class for outputs of token classification models. Kind : static class of models new TokenClassifierOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Classification scores (before SoftMax). models.MaskedLMOutput Base class for masked language model outputs. Kind : static class of models new MaskedLMOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). models.QuestionAnsweringModelOutput Base class for outputs of question answering models. Kind : static class of models new QuestionAnsweringModelOutput(output) Param Type Description output Object The output of the model. output.start_logits Tensor Span-start scores (before SoftMax). output.end_logits Tensor Span-end scores (before SoftMax). models.CausalLMOutput Base class for causal language model (or autoregressive) outputs. Kind : static class of models new CausalLMOutput(output) Param Type Description output Object The output of the model. output.logits Tensor Prediction scores of the language modeling head (scores for each vocabulary token before softmax). models.CausalLMOutputWithPast Base class for causal language model (or autoregressive) outputs. Kind : static class of models new CausalLMOutputWithPast(output) Param Type Description output Object The output of the model. output.logits Tensor Prediction scores of the language modeling head (scores for each vocabulary token before softmax). output.past_key_values Tensor Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. models.ImageMattingOutput Kind : static class of models new ImageMattingOutput(output) Param Type Description output Object The output of the model.
output.alphas Tensor Estimated alpha values, of shape (batch_size, num_channels, height, width) . models.VitsModelOutput Describes the outputs for the VITS model. Kind : static class of models new VitsModelOutput(output) Param Type Description output Object The output of the model. output.waveform Tensor The final audio waveform predicted by the model, of shape (batch_size, sequence_length) . output.spectrogram Tensor The log-mel spectrogram predicted at the output of the flow model. This spectrogram is passed to the Hi-Fi GAN decoder model to obtain the final audio waveform. models~SamModelInputs : <code> Object </code> Object containing the model inputs. Kind : inner typedef of models Properties Name Type Description pixel_values Tensor Pixel values as a Tensor with shape (batch_size, num_channels, height, width) . These can be obtained using a SamProcessor . [input_points] Tensor Input 2D spatial points with shape (batch_size, num_points, 2) . This is used by the prompt encoder to encode the prompt. [input_labels] Tensor Input labels for the points, as a Tensor of shape (batch_size, point_batch_size, num_points) . This is used by the prompt encoder to encode the prompt. There are 4 types of labels: 1 : the point is a point that contains the object of interest 0 : the point is a point that does not contain the object of interest -1 : the point corresponds to the background -10 : the point is a padding point, thus should be ignored by the prompt encoder [input_boxes] Tensor Input bounding boxes with shape (batch_size, num_boxes, 4) . [image_embeddings] Tensor Image embeddings used by the mask decoder. [image_positional_embeddings] Tensor Image positional embeddings used by the mask decoder. models~SpeechOutput : <code> Object </code> Kind : inner typedef of models Properties Name Type Description [spectrogram] Tensor The predicted log-mel spectrogram of shape (output_sequence_length, config.num_mel_bins) . Returned when no vocoder is provided [waveform] Tensor The predicted waveform of shape (num_frames,) . Returned when a vocoder is provided. [cross_attentions] Tensor The outputs of the decoder's cross-attention layers of shape (config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) . returned when output_cross_attentions is true .
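To make the vocoder-optional behaviour above concrete, here is a minimal sketch that reuses the checkpoints and speaker-embeddings file from the SpeechT5 example earlier on this page; per the SpeechOutput description, omitting the vocoder should yield a spectrogram rather than a waveform.
Copied
import { AutoTokenizer, SpeechT5ForTextToSpeech, Tensor } from '@huggingface/transformers';

// Load the tokenizer and text-to-speech model (same checkpoints as the SpeechT5 example above)
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/speecht5_tts');
const model = await SpeechT5ForTextToSpeech.from_pretrained('Xenova/speecht5_tts', { dtype: 'fp32' });

// Load speaker embeddings (same file as the SpeechT5 example above)
const speaker_embeddings_data = new Float32Array(
  await (await fetch('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/v3.0.0/speaker_embeddings.bin')).arrayBuffer()
);
const speaker_embeddings = new Tensor('float32', speaker_embeddings_data, [1, speaker_embeddings_data.length]);

// Generate WITHOUT a vocoder: per SpeechOutput, the result is expected to contain the
// log-mel spectrogram (and no waveform)
const { input_ids } = tokenizer('Hello, my dog is cute');
const { spectrogram, waveform } = await model.generate_speech(input_ids, speaker_embeddings, {});
console.log(spectrogram?.dims); // e.g. [output_sequence_length, config.num_mel_bins]
console.log(waveform);          // undefined when no vocoder is provided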