Hub Python Library documentation

Inference types


This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub. Each task is specified using a JSON schema, and the types are generated from these schemas, with some customization to meet Python requirements. Visit @huggingface.js/tasks to find the JSON schemas for each task.

This part of the library is still under development and will be improved in future releases.

audio_classification

class huggingface_hub.AudioClassificationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.audio_classification.AudioClassificationParameters] = None )

Inputs for Audio Classification inference

class huggingface_hub.AudioClassificationOutputElement

( label: str, score: float )

Outputs for Audio Classification inference

class huggingface_hub.AudioClassificationParameters

( function_to_apply: typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None )

Additional inference parameters for Audio Classification
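As a minimal sketch of how these dataclasses fit together (assuming `huggingface_hub` is installed; the audio payload below is a placeholder string, not real audio data):

```python
from huggingface_hub import AudioClassificationInput, AudioClassificationParameters

# Restrict the output to the 3 highest-scoring labels.
params = AudioClassificationParameters(top_k=3)

# `inputs` carries the audio payload, e.g. a base64-encoded string.
request = AudioClassificationInput(inputs="<base64-audio>", parameters=params)
```

Since these are plain dataclasses, all fields are accessible as attributes (e.g. `request.parameters.top_k`).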

audio_to_audio

class huggingface_hub.AudioToAudioInput

( inputs: typing.Any )

Inputs for Audio to Audio inference

class huggingface_hub.AudioToAudioOutputElement

( blob: typing.Any, content_type: str, label: str )

Outputs of inference for the Audio To Audio task: a generated audio file with its label.

automatic_speech_recognition

class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters

( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('AutomaticSpeechRecognitionEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )

Parametrization of the text generation process

class huggingface_hub.AutomaticSpeechRecognitionInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionParameters] = None )

Inputs for Automatic Speech Recognition inference

class huggingface_hub.AutomaticSpeechRecognitionOutput

( text: str, chunks: typing.Optional[typing.List[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionOutputChunk]] = None )

Outputs of inference for the Automatic Speech Recognition task

class huggingface_hub.AutomaticSpeechRecognitionOutputChunk

( text: str, timestamp: typing.List[float] )

class huggingface_hub.AutomaticSpeechRecognitionParameters

( return_timestamps: typing.Optional[bool] = None, generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionGenerationParameters] = None )

Additional inference parameters for Automatic Speech Recognition

chat_completion

class huggingface_hub.ChatCompletionInput

( messages: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessage], frequency_penalty: typing.Optional[float] = None, logit_bias: typing.Optional[typing.List[float]] = None, logprobs: typing.Optional[bool] = None, max_tokens: typing.Optional[int] = None, model: typing.Optional[str] = None, n: typing.Optional[int] = None, presence_penalty: typing.Optional[float] = None, response_format: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputGrammarType] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, stream: typing.Optional[bool] = None, stream_options: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None, temperature: typing.Optional[float] = None, tool_choice: typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None, tool_prompt: typing.Optional[str] = None, tools: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None, top_logprobs: typing.Optional[int] = None, top_p: typing.Optional[float] = None )

Chat Completion Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionInputFunctionDefinition

( arguments: typing.Any, name: str, description: typing.Optional[str] = None )

class huggingface_hub.ChatCompletionInputFunctionName

( name: str )

class huggingface_hub.ChatCompletionInputGrammarType

( type: ChatCompletionInputGrammarTypeType, value: typing.Any )

class huggingface_hub.ChatCompletionInputMessage

( role: str, content: typing.Union[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str, NoneType] = None, name: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolCall]] = None )

class huggingface_hub.ChatCompletionInputMessageChunk

( type: ChatCompletionInputMessageChunkType, image_url: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None, text: typing.Optional[str] = None )

class huggingface_hub.ChatCompletionInputStreamOptions

( include_usage: typing.Optional[bool] = None )

class huggingface_hub.ChatCompletionInputTool

( function: ChatCompletionInputFunctionDefinition, type: str )

class huggingface_hub.ChatCompletionInputToolCall

( function: ChatCompletionInputFunctionDefinition, id: str, type: str )

class huggingface_hub.ChatCompletionInputToolChoiceClass

( function: ChatCompletionInputFunctionName )

class huggingface_hub.ChatCompletionInputURL

( url: str )

class huggingface_hub.ChatCompletionOutput

( choices: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputComplete], created: int, id: str, model: str, system_fingerprint: str, usage: ChatCompletionOutputUsage )

Chat Completion Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionOutputComplete

( finish_reason: str, index: int, message: ChatCompletionOutputMessage, logprobs: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None )

class huggingface_hub.ChatCompletionOutputFunctionDefinition

( arguments: typing.Any, name: str, description: typing.Optional[str] = None )

class huggingface_hub.ChatCompletionOutputLogprob

( logprob: float, token: str, top_logprobs: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputTopLogprob] )

class huggingface_hub.ChatCompletionOutputLogprobs

( content: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprob] )

class huggingface_hub.ChatCompletionOutputMessage

( role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall]] = None )

class huggingface_hub.ChatCompletionOutputToolCall

( function: ChatCompletionOutputFunctionDefinition, id: str, type: str )

class huggingface_hub.ChatCompletionOutputTopLogprob

( logprob: float, token: str )

class huggingface_hub.ChatCompletionOutputUsage

( completion_tokens: int, prompt_tokens: int, total_tokens: int )

class huggingface_hub.ChatCompletionStreamOutput

( choices: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputChoice], created: int, id: str, model: str, system_fingerprint: str, usage: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputUsage] = None )

Chat Completion Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionStreamOutputChoice

( delta: ChatCompletionStreamOutputDelta, index: int, finish_reason: typing.Optional[str] = None, logprobs: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs] = None )

class huggingface_hub.ChatCompletionStreamOutputDelta

( role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall]] = None )

class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall

( function: ChatCompletionStreamOutputFunction, id: str, index: int, type: str )

class huggingface_hub.ChatCompletionStreamOutputFunction

( arguments: str, name: typing.Optional[str] = None )

class huggingface_hub.ChatCompletionStreamOutputLogprob

( logprob: float, token: str, top_logprobs: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputTopLogprob] )

class huggingface_hub.ChatCompletionStreamOutputLogprobs

( content: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprob] )

class huggingface_hub.ChatCompletionStreamOutputTopLogprob

( logprob: float, token: str )

class huggingface_hub.ChatCompletionStreamOutputUsage

( completion_tokens: int, prompt_tokens: int, total_tokens: int )
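A minimal sketch of building a chat completion request from these dataclasses (assuming `huggingface_hub` is installed; the message contents are illustrative):

```python
from huggingface_hub import ChatCompletionInput, ChatCompletionInputMessage

# A conversation is a list of role/content messages.
messages = [
    ChatCompletionInputMessage(role="system", content="You are a helpful assistant."),
    ChatCompletionInputMessage(role="user", content="What is the capital of France?"),
]

# Sampling and length settings are flat fields on the input, mirroring the OpenAI-style schema.
request = ChatCompletionInput(
    messages=messages,
    max_tokens=128,
    temperature=0.7,
    stream=False,
)
```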

depth_estimation

class huggingface_hub.DepthEstimationInput

( inputs: typing.Any, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None )

Inputs for Depth Estimation inference

class huggingface_hub.DepthEstimationOutput

( depth: typing.Any, predicted_depth: typing.Any )

Outputs of inference for the Depth Estimation task

document_question_answering

class huggingface_hub.DocumentQuestionAnsweringInput

( inputs: DocumentQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.document_question_answering.DocumentQuestionAnsweringParameters] = None )

Inputs for Document Question Answering inference

class huggingface_hub.DocumentQuestionAnsweringInputData

( image: typing.Any, question: str )

One (document, question) pair to answer

class huggingface_hub.DocumentQuestionAnsweringOutputElement

( answer: str, end: int, score: float, start: int )

Outputs of inference for the Document Question Answering task

class huggingface_hub.DocumentQuestionAnsweringParameters

( doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, lang: typing.Optional[str] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None, word_boxes: typing.Optional[typing.List[typing.Union[typing.List[float], str]]] = None )

Additional inference parameters for Document Question Answering

feature_extraction

class huggingface_hub.FeatureExtractionInput

( inputs: typing.Union[typing.List[str], str], normalize: typing.Optional[bool] = None, prompt_name: typing.Optional[str] = None, truncate: typing.Optional[bool] = None, truncation_direction: typing.Optional[ForwardRef('FeatureExtractionInputTruncationDirection')] = None )

Feature Extraction Input. Auto-generated from TEI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.
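Unlike most tasks, feature extraction has no separate parameters dataclass: options sit directly on the input, and `inputs` accepts either one string or a batch. A sketch (assuming `huggingface_hub` is installed):

```python
from huggingface_hub import FeatureExtractionInput

# A batched request with L2-normalized embeddings and truncation enabled.
request = FeatureExtractionInput(
    inputs=["first sentence", "second sentence"],
    normalize=True,
    truncate=True,
)
```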

fill_mask

class huggingface_hub.FillMaskInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.fill_mask.FillMaskParameters] = None )

Inputs for Fill Mask inference

class huggingface_hub.FillMaskOutputElement

( score: float, sequence: str, token: int, token_str: typing.Any, fill_mask_output_token_str: typing.Optional[str] = None )

Outputs of inference for the Fill Mask task

class huggingface_hub.FillMaskParameters

( targets: typing.Optional[typing.List[str]] = None, top_k: typing.Optional[int] = None )

Additional inference parameters for Fill Mask
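A sketch of a fill-mask request (assuming `huggingface_hub` is installed; the `[MASK]` token is model-dependent and used here illustratively):

```python
from huggingface_hub import FillMaskInput, FillMaskParameters

# `targets` restricts scoring to specific candidate tokens; `top_k` caps the result count.
params = FillMaskParameters(top_k=5, targets=["Paris", "London"])

request = FillMaskInput(inputs="The capital of France is [MASK].", parameters=params)
```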

image_classification

class huggingface_hub.ImageClassificationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_classification.ImageClassificationParameters] = None )

Inputs for Image Classification inference

class huggingface_hub.ImageClassificationOutputElement

( label: str, score: float )

Outputs of inference for the Image Classification task

class huggingface_hub.ImageClassificationParameters

( function_to_apply: typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None )

Additional inference parameters for Image Classification

image_segmentation

class huggingface_hub.ImageSegmentationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_segmentation.ImageSegmentationParameters] = None )

Inputs for Image Segmentation inference

class huggingface_hub.ImageSegmentationOutputElement

( label: str, mask: str, score: typing.Optional[float] = None )

Outputs of inference for the Image Segmentation task: a predicted mask / segment

class huggingface_hub.ImageSegmentationParameters

( mask_threshold: typing.Optional[float] = None, overlap_mask_area_threshold: typing.Optional[float] = None, subtask: typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None, threshold: typing.Optional[float] = None )

Additional inference parameters for Image Segmentation

image_to_image

class huggingface_hub.ImageToImageInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageParameters] = None )

Inputs for Image To Image inference

class huggingface_hub.ImageToImageOutput

( image: typing.Any )

Outputs of inference for the Image To Image task

class huggingface_hub.ImageToImageParameters

( guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, target_size: typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None )

Additional inference parameters for Image To Image

class huggingface_hub.ImageToImageTargetSize

( height: int, width: int )

The size in pixels of the output image.

image_to_text

class huggingface_hub.ImageToTextGenerationParameters

( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('ImageToTextEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )

Parametrization of the text generation process

class huggingface_hub.ImageToTextInput

( inputs: typing.Any, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextParameters] = None )

Inputs for Image To Text inference

class huggingface_hub.ImageToTextOutput

( generated_text: typing.Any, image_to_text_output_generated_text: typing.Optional[str] = None )

Outputs of inference for the Image To Text task

class huggingface_hub.ImageToTextParameters

( max_new_tokens: typing.Optional[int] = None, generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextGenerationParameters] = None )

Additional inference parameters for Image To Text

object_detection

class huggingface_hub.ObjectDetectionBoundingBox

( xmax: int, xmin: int, ymax: int, ymin: int )

The predicted bounding box. Coordinates are relative to the top left corner of the input image.

class huggingface_hub.ObjectDetectionInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.object_detection.ObjectDetectionParameters] = None )

Inputs for Object Detection inference

class huggingface_hub.ObjectDetectionOutputElement

( box: ObjectDetectionBoundingBox, label: str, score: float )

Outputs of inference for the Object Detection task

class huggingface_hub.ObjectDetectionParameters

( threshold: typing.Optional[float] = None )

Additional inference parameters for Object Detection
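The output types can also be constructed directly, which is useful when mocking or post-processing detections. A sketch (assuming `huggingface_hub` is installed; the values are illustrative):

```python
from huggingface_hub import ObjectDetectionBoundingBox, ObjectDetectionOutputElement

# A bounding box in pixel coordinates, relative to the image's top-left corner.
box = ObjectDetectionBoundingBox(xmin=10, ymin=20, xmax=110, ymax=220)

# One detection: a box, a class label, and a confidence score.
detection = ObjectDetectionOutputElement(box=box, label="cat", score=0.98)
```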

question_answering

class huggingface_hub.QuestionAnsweringInput

( inputs: QuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.question_answering.QuestionAnsweringParameters] = None )

Inputs for Question Answering inference

class huggingface_hub.QuestionAnsweringInputData

( context: str, question: str )

One (context, question) pair to answer

class huggingface_hub.QuestionAnsweringOutputElement

( answer: str, end: int, score: float, start: int )

Outputs of inference for the Question Answering task

class huggingface_hub.QuestionAnsweringParameters

( align_to_words: typing.Optional[bool] = None, doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None )

Additional inference parameters for Question Answering
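Here, `inputs` is itself a dataclass holding the (context, question) pair rather than a bare string. A sketch (assuming `huggingface_hub` is installed; the texts are illustrative):

```python
from huggingface_hub import (
    QuestionAnsweringInput,
    QuestionAnsweringInputData,
    QuestionAnsweringParameters,
)

data = QuestionAnsweringInputData(
    context="The Eiffel Tower was completed in 1889.",
    question="When was the Eiffel Tower completed?",
)

# Ask for the two best answer spans.
request = QuestionAnsweringInput(inputs=data, parameters=QuestionAnsweringParameters(top_k=2))
```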

sentence_similarity

class huggingface_hub.SentenceSimilarityInput

( inputs: SentenceSimilarityInputData, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None )

Inputs for Sentence similarity inference

class huggingface_hub.SentenceSimilarityInputData

( sentences: typing.List[str], source_sentence: str )

summarization

class huggingface_hub.SummarizationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.summarization.SummarizationParameters] = None )

Inputs for Summarization inference

class huggingface_hub.SummarizationOutput

( summary_text: str )

Outputs of inference for the Summarization task

class huggingface_hub.SummarizationParameters

( clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None )

Additional inference parameters for Summarization

table_question_answering

class huggingface_hub.TableQuestionAnsweringInput

( inputs: TableQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.table_question_answering.TableQuestionAnsweringParameters] = None )

Inputs for Table Question Answering inference

class huggingface_hub.TableQuestionAnsweringInputData

( question: str, table: typing.Dict[str, typing.List[str]] )

One (table, question) pair to answer

class huggingface_hub.TableQuestionAnsweringOutputElement

( answer: str, cells: typing.List[str], coordinates: typing.List[typing.List[int]], aggregator: typing.Optional[str] = None )

Outputs of inference for the Table Question Answering task

class huggingface_hub.TableQuestionAnsweringParameters

( padding: typing.Optional[ForwardRef('Padding')] = None, sequential: typing.Optional[bool] = None, truncation: typing.Optional[bool] = None )

Additional inference parameters for Table Question Answering
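The table is passed as a dict mapping column names to string-valued columns (note the values are strings even for numeric data). A sketch (assuming `huggingface_hub` is installed; the data is illustrative):

```python
from huggingface_hub import TableQuestionAnsweringInput, TableQuestionAnsweringInputData

# Columns are lists of strings keyed by column name.
table = {
    "City": ["Paris", "Berlin"],
    "Population": ["2100000", "3700000"],
}

data = TableQuestionAnsweringInputData(question="Which city is larger?", table=table)
request = TableQuestionAnsweringInput(inputs=data)
```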

text2text_generation

class huggingface_hub.Text2TextGenerationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text2text_generation.Text2TextGenerationParameters] = None )

Inputs for Text2text Generation inference

class huggingface_hub.Text2TextGenerationOutput

( generated_text: typing.Any, text2_text_generation_output_generated_text: typing.Optional[str] = None )

Outputs of inference for the Text2text Generation task

class huggingface_hub.Text2TextGenerationParameters

( clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional[ForwardRef('Text2TextGenerationTruncationStrategy')] = None )

Additional inference parameters for Text2text Generation

text_classification

class huggingface_hub.TextClassificationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_classification.TextClassificationParameters] = None )

Inputs for Text Classification inference

class huggingface_hub.TextClassificationOutputElement

( label: str, score: float )

Outputs of inference for the Text Classification task

class huggingface_hub.TextClassificationParameters

( function_to_apply: typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None )

Additional inference parameters for Text Classification

text_generation

class huggingface_hub.TextGenerationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGenerateParameters] = None, stream: typing.Optional[bool] = None )

Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationInputGenerateParameters

( adapter_id: typing.Optional[str] = None, best_of: typing.Optional[int] = None, decoder_input_details: typing.Optional[bool] = None, details: typing.Optional[bool] = None, do_sample: typing.Optional[bool] = None, frequency_penalty: typing.Optional[float] = None, grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None, max_new_tokens: typing.Optional[int] = None, repetition_penalty: typing.Optional[float] = None, return_full_text: typing.Optional[bool] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_n_tokens: typing.Optional[int] = None, top_p: typing.Optional[float] = None, truncate: typing.Optional[int] = None, typical_p: typing.Optional[float] = None, watermark: typing.Optional[bool] = None )

class huggingface_hub.TextGenerationInputGrammarType

( type: TypeEnum, value: typing.Any )

class huggingface_hub.TextGenerationOutput

( generated_text: str, details: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputDetails] = None )

Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationOutputBestOfSequence

( finish_reason: TextGenerationOutputFinishReason, generated_text: str, generated_tokens: int, prefill: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputPrefillToken], tokens: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken], seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None )

class huggingface_hub.TextGenerationOutputDetails

( finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, prefill: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputPrefillToken], tokens: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken], best_of_sequences: typing.Optional[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputBestOfSequence]] = None, seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None )

class huggingface_hub.TextGenerationOutputPrefillToken

( id: int, logprob: float, text: str )

class huggingface_hub.TextGenerationOutputToken

( id: int, logprob: float, special: bool, text: str )

class huggingface_hub.TextGenerationStreamOutput

( index: int, token: TextGenerationStreamOutputToken, details: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputStreamDetails] = None, generated_text: typing.Optional[str] = None, top_tokens: typing.Optional[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputToken]] = None )

Text Generation Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationStreamOutputStreamDetails

( finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, input_length: int, seed: typing.Optional[int] = None )

class huggingface_hub.TextGenerationStreamOutputToken

( id: int, logprob: float, special: bool, text: str )
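A sketch of a text generation request built from these types (assuming `huggingface_hub` is installed; the prompt and settings are illustrative):

```python
from huggingface_hub import TextGenerationInput, TextGenerationInputGenerateParameters

# Sampling/length settings live in the nested GenerateParameters dataclass.
params = TextGenerationInputGenerateParameters(
    max_new_tokens=50,
    temperature=0.8,
    stop=["\n\n"],  # stop sequences halt generation when emitted
)

request = TextGenerationInput(inputs="Once upon a time", parameters=params, stream=False)
```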

text_to_audio

class huggingface_hub.TextToAudioGenerationParameters

( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('TextToAudioEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )

Parametrization of the text generation process

class huggingface_hub.TextToAudioInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioParameters] = None )

Inputs for Text To Audio inference

class huggingface_hub.TextToAudioOutput

( audio: typing.Any, sampling_rate: float )

Outputs of inference for the Text To Audio task

class huggingface_hub.TextToAudioParameters

( generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioGenerationParameters] = None )

Additional inference parameters for Text To Audio

text_to_image

class huggingface_hub.TextToImageInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_image.TextToImageParameters] = None )

Inputs for Text To Image inference

class huggingface_hub.TextToImageOutput

( image: typing.Any )

Outputs of inference for the Text To Image task

class huggingface_hub.TextToImageParameters

( guidance_scale: typing.Optional[float] = None, height: typing.Optional[int] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, scheduler: typing.Optional[str] = None, seed: typing.Optional[int] = None, width: typing.Optional[int] = None )

Additional inference parameters for Text To Image

text_to_speech

class huggingface_hub.TextToSpeechGenerationParameters

( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )

Parametrization of the text generation process

class huggingface_hub.TextToSpeechInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechParameters] = None )

Inputs for Text To Speech inference

class huggingface_hub.TextToSpeechOutput

( audio: typing.Any, sampling_rate: typing.Optional[float] = None )

Outputs of inference for the Text To Speech task

class huggingface_hub.TextToSpeechParameters

( generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechGenerationParameters] = None )

Additional inference parameters for Text To Speech

text_to_video

class huggingface_hub.TextToVideoInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_video.TextToVideoParameters] = None )

Inputs for Text To Video inference

class huggingface_hub.TextToVideoOutput

( video: typing.Any )

Outputs of inference for the Text To Video task

class huggingface_hub.TextToVideoParameters

( guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[typing.List[str]] = None, num_frames: typing.Optional[float] = None, num_inference_steps: typing.Optional[int] = None, seed: typing.Optional[int] = None )

Additional inference parameters for Text To Video

token_classification

class huggingface_hub.TokenClassificationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.token_classification.TokenClassificationParameters] = None )

Inputs for Token Classification inference

class huggingface_hub.TokenClassificationOutputElement

( end: int, score: float, start: int, word: str, entity: typing.Optional[str] = None, entity_group: typing.Optional[str] = None )

Outputs of inference for the Token Classification task

class huggingface_hub.TokenClassificationParameters

( aggregation_strategy: typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None, ignore_labels: typing.Optional[typing.List[str]] = None, stride: typing.Optional[int] = None )

Additional inference parameters for Token Classification

translation

class huggingface_hub.TranslationInput

( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.translation.TranslationParameters] = None )

Inputs for Translation inference

class huggingface_hub.TranslationOutput

( translation_text: str )

Outputs of inference for the Translation task

class huggingface_hub.TranslationParameters

( clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, src_lang: typing.Optional[str] = None, tgt_lang: typing.Optional[str] = None, truncation: typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None )

Additional inference parameters for Translation

video_classification

class huggingface_hub.VideoClassificationInput

( inputs: typing.Any, parameters: typing.Optional[huggingface_hub.inference._generated.types.video_classification.VideoClassificationParameters] = None )

Inputs for Video Classification inference

class huggingface_hub.VideoClassificationOutputElement

( label: str, score: float )

Outputs of inference for the Video Classification task

class huggingface_hub.VideoClassificationParameters

( frame_sampling_rate: typing.Optional[int] = None, function_to_apply: typing.Optional[ForwardRef('VideoClassificationOutputTransform')] = None, num_frames: typing.Optional[int] = None, top_k: typing.Optional[int] = None )

Additional inference parameters for Video Classification

visual_question_answering

class huggingface_hub.VisualQuestionAnsweringInput

( inputs: VisualQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.visual_question_answering.VisualQuestionAnsweringParameters] = None )

Inputs for Visual Question Answering inference

class huggingface_hub.VisualQuestionAnsweringInputData

( image: typing.Any, question: str )

One (image, question) pair to answer

class huggingface_hub.VisualQuestionAnsweringOutputElement

( score: float, answer: typing.Optional[str] = None )

Outputs of inference for the Visual Question Answering task

class huggingface_hub.VisualQuestionAnsweringParameters

( top_k: typing.Optional[int] = None )

Additional inference parameters for Visual Question Answering

zero_shot_classification

class huggingface_hub.ZeroShotClassificationInput

( inputs: str, parameters: ZeroShotClassificationParameters )

Inputs for Zero Shot Classification inference

class huggingface_hub.ZeroShotClassificationOutputElement

( label: str, score: float )

Outputs of inference for the Zero Shot Classification task

class huggingface_hub.ZeroShotClassificationParameters

( candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None, multi_label: typing.Optional[bool] = None )

Additional inference parameters for Zero Shot Classification

zero_shot_image_classification

class huggingface_hub.ZeroShotImageClassificationInput

( inputs: str, parameters: ZeroShotImageClassificationParameters )

Inputs for Zero Shot Image Classification inference

class huggingface_hub.ZeroShotImageClassificationOutputElement

( label: str, score: float )

Outputs of inference for the Zero Shot Image Classification task

class huggingface_hub.ZeroShotImageClassificationParameters

( candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None )

Additional inference parameters for Zero Shot Image Classification

zero_shot_object_detection

class huggingface_hub.ZeroShotObjectDetectionBoundingBox

( xmax: int, xmin: int, ymax: int, ymin: int )

The predicted bounding box. Coordinates are relative to the top left corner of the input image.

class huggingface_hub.ZeroShotObjectDetectionInput

( inputs: str, parameters: ZeroShotObjectDetectionParameters )

Inputs for Zero Shot Object Detection inference

class huggingface_hub.ZeroShotObjectDetectionOutputElement

( box: ZeroShotObjectDetectionBoundingBox, label: str, score: float )

Outputs of inference for the Zero Shot Object Detection task

class huggingface_hub.ZeroShotObjectDetectionParameters

( candidate_labels: typing.List[str] )

Additional inference parameters for Zero Shot Object Detection
