Hub Python Library documentation
Inference types
This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub. Each task is specified using a JSON schema, and the types are generated from these schemas, with some customization to meet Python requirements. Visit @huggingface.js/tasks to find the JSON schema for each task.
This part of the library is still under development and will be improved in future releases.
audio_classification
class huggingface_hub.AudioClassificationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.audio_classification.AudioClassificationParameters] = None )
Inputs for Audio Classification inference
Outputs for Audio Classification inference
class huggingface_hub.AudioClassificationParameters
< source >( function_to_apply: typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None )
Additional inference parameters for Audio Classification
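As a minimal sketch of how these dataclasses fit together (the file name is a hypothetical placeholder; this assumes `huggingface_hub` is installed and exports the generated types as documented above):

```python
from huggingface_hub import AudioClassificationInput, AudioClassificationParameters

# Restrict the output to the three highest-scoring labels.
params = AudioClassificationParameters(top_k=3)

# `inputs` is the audio payload; a hypothetical file name stands in here.
payload = AudioClassificationInput(inputs="sample.flac", parameters=params)

print(payload.parameters.top_k)  # 3
```

Fields left unset keep their `None` defaults.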
audio_to_audio
Inputs for Audio to Audio inference
class huggingface_hub.AudioToAudioOutputElement
< source >( blob: typing.Any, content_type: str, label: str )
Outputs of inference for the Audio To Audio task: a generated audio file with its label.
automatic_speech_recognition
class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters
< source >( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('AutomaticSpeechRecognitionEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )
Parametrization of the text generation process
class huggingface_hub.AutomaticSpeechRecognitionInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionParameters] = None )
Inputs for Automatic Speech Recognition inference
class huggingface_hub.AutomaticSpeechRecognitionOutput
< source >( text: str, chunks: typing.Optional[typing.List[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionOutputChunk]] = None )
Outputs of inference for the Automatic Speech Recognition task
class huggingface_hub.AutomaticSpeechRecognitionOutputChunk
< source >( text: str, timestamp: typing.List[float] )
class huggingface_hub.AutomaticSpeechRecognitionParameters
< source >( return_timestamps: typing.Optional[bool] = None, generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionGenerationParameters] = None )
Additional inference parameters for Automatic Speech Recognition
chat_completion
class huggingface_hub.ChatCompletionInput
< source >( messages: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessage], frequency_penalty: typing.Optional[float] = None, logit_bias: typing.Optional[typing.List[float]] = None, logprobs: typing.Optional[bool] = None, max_tokens: typing.Optional[int] = None, model: typing.Optional[str] = None, n: typing.Optional[int] = None, presence_penalty: typing.Optional[float] = None, response_format: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputGrammarType] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, stream: typing.Optional[bool] = None, stream_options: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions] = None, temperature: typing.Optional[float] = None, tool_choice: typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None, tool_prompt: typing.Optional[str] = None, tools: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool]] = None, top_logprobs: typing.Optional[int] = None, top_p: typing.Optional[float] = None )
Chat Completion Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.ChatCompletionInputFunctionDefinition
< source >( arguments: typing.Any, name: str, description: typing.Optional[str] = None )
class huggingface_hub.ChatCompletionInputGrammarType
< source >( type: ChatCompletionInputGrammarTypeType, value: typing.Any )
class huggingface_hub.ChatCompletionInputMessage
< source >( role: str, content: typing.Union[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str, NoneType] = None, name: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolCall]] = None )
class huggingface_hub.ChatCompletionInputMessageChunk
< source >( type: ChatCompletionInputMessageChunkType, image_url: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None, text: typing.Optional[str] = None )
class huggingface_hub.ChatCompletionInputStreamOptions
< source >( include_usage: typing.Optional[bool] = None )
class huggingface_hub.ChatCompletionInputTool
< source >( function: ChatCompletionInputFunctionDefinition, type: str )
class huggingface_hub.ChatCompletionInputToolCall
< source >( function: ChatCompletionInputFunctionDefinition, id: str, type: str )
class huggingface_hub.ChatCompletionInputToolChoiceClass
< source >( function: ChatCompletionInputFunctionName )
class huggingface_hub.ChatCompletionOutput
< source >( choices: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputComplete], created: int, id: str, model: str, system_fingerprint: str, usage: ChatCompletionOutputUsage )
Chat Completion Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.ChatCompletionOutputComplete
< source >( finish_reason: str, index: int, message: ChatCompletionOutputMessage, logprobs: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None )
class huggingface_hub.ChatCompletionOutputFunctionDefinition
< source >( arguments: typing.Any, name: str, description: typing.Optional[str] = None )
class huggingface_hub.ChatCompletionOutputLogprob
< source >( logprob: float, token: str, top_logprobs: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputTopLogprob] )
class huggingface_hub.ChatCompletionOutputLogprobs
< source >( content: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprob] )
class huggingface_hub.ChatCompletionOutputMessage
< source >( role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall]] = None )
class huggingface_hub.ChatCompletionOutputToolCall
< source >( function: ChatCompletionOutputFunctionDefinition, id: str, type: str )
class huggingface_hub.ChatCompletionOutputUsage
< source >( completion_tokens: int, prompt_tokens: int, total_tokens: int )
class huggingface_hub.ChatCompletionStreamOutput
< source >( choices: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputChoice], created: int, id: str, model: str, system_fingerprint: str, usage: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputUsage] = None )
Chat Completion Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.ChatCompletionStreamOutputChoice
< source >( delta: ChatCompletionStreamOutputDelta, index: int, finish_reason: typing.Optional[str] = None, logprobs: typing.Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs] = None )
class huggingface_hub.ChatCompletionStreamOutputDelta
< source >( role: str, content: typing.Optional[str] = None, tool_call_id: typing.Optional[str] = None, tool_calls: typing.Optional[typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall]] = None )
class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall
< source >( function: ChatCompletionStreamOutputFunction, id: str, index: int, type: str )
class huggingface_hub.ChatCompletionStreamOutputFunction
< source >( arguments: str, name: typing.Optional[str] = None )
class huggingface_hub.ChatCompletionStreamOutputLogprob
< source >( logprob: float, token: str, top_logprobs: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputTopLogprob] )
class huggingface_hub.ChatCompletionStreamOutputLogprobs
< source >( content: typing.List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprob] )
class huggingface_hub.ChatCompletionStreamOutputUsage
< source >( completion_tokens: int, prompt_tokens: int, total_tokens: int )
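To illustrate how a chat completion request is assembled from these input types (a minimal sketch; the conversation content and sampling values are illustrative, and this assumes the documented `huggingface_hub` exports are available):

```python
from huggingface_hub import ChatCompletionInput, ChatCompletionInputMessage

# A two-turn conversation; `content` accepts a plain string or a list of chunks.
request = ChatCompletionInput(
    messages=[
        ChatCompletionInputMessage(role="system", content="You are a helpful assistant."),
        ChatCompletionInputMessage(role="user", content="What is the capital of France?"),
    ],
    max_tokens=64,
    temperature=0.7,
    stream=False,
)

print(len(request.messages))  # 2
```

Streaming responses use ChatCompletionStreamOutput instead of ChatCompletionOutput when `stream=True`.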
depth_estimation
class huggingface_hub.DepthEstimationInput
< source >( inputs: typing.Any, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None )
Inputs for Depth Estimation inference
class huggingface_hub.DepthEstimationOutput
< source >( depth: typing.Any, predicted_depth: typing.Any )
Outputs of inference for the Depth Estimation task
document_question_answering
class huggingface_hub.DocumentQuestionAnsweringInput
< source >( inputs: DocumentQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.document_question_answering.DocumentQuestionAnsweringParameters] = None )
Inputs for Document Question Answering inference
class huggingface_hub.DocumentQuestionAnsweringInputData
< source >( image: typing.Any, question: str )
One (document, question) pair to answer
class huggingface_hub.DocumentQuestionAnsweringOutputElement
< source >( answer: str, end: int, score: float, start: int )
Outputs of inference for the Document Question Answering task
class huggingface_hub.DocumentQuestionAnsweringParameters
< source >( doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, lang: typing.Optional[str] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None, word_boxes: typing.Optional[typing.List[typing.Union[typing.List[float], str]]] = None )
Additional inference parameters for Document Question Answering
feature_extraction
class huggingface_hub.FeatureExtractionInput
< source >( inputs: typing.Union[typing.List[str], str], normalize: typing.Optional[bool] = None, prompt_name: typing.Optional[str] = None, truncate: typing.Optional[bool] = None, truncation_direction: typing.Optional[ForwardRef('FeatureExtractionInputTruncationDirection')] = None )
Feature Extraction Input. Auto-generated from TEI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.
fill_mask
class huggingface_hub.FillMaskInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.fill_mask.FillMaskParameters] = None )
Inputs for Fill Mask inference
class huggingface_hub.FillMaskOutputElement
< source >( score: float, sequence: str, token: int, token_str: typing.Any, fill_mask_output_token_str: typing.Optional[str] = None )
Outputs of inference for the Fill Mask task
class huggingface_hub.FillMaskParameters
< source >( targets: typing.Optional[typing.List[str]] = None, top_k: typing.Optional[int] = None )
Additional inference parameters for Fill Mask
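A minimal sketch of a fill-mask payload (the `[MASK]` placeholder is a BERT-style convention and an assumption here, since the mask token depends on the model; this assumes the documented `huggingface_hub` exports are available):

```python
from huggingface_hub import FillMaskInput, FillMaskParameters

# Request the five most likely fillings for the masked token.
payload = FillMaskInput(
    inputs="Paris is the [MASK] of France.",
    parameters=FillMaskParameters(top_k=5),
)

print(payload.parameters.top_k)  # 5
```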
image_classification
class huggingface_hub.ImageClassificationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_classification.ImageClassificationParameters] = None )
Inputs for Image Classification inference
Outputs of inference for the Image Classification task
class huggingface_hub.ImageClassificationParameters
< source >( function_to_apply: typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None )
Additional inference parameters for Image Classification
image_segmentation
class huggingface_hub.ImageSegmentationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_segmentation.ImageSegmentationParameters] = None )
Inputs for Image Segmentation inference
class huggingface_hub.ImageSegmentationOutputElement
< source >( label: str, mask: str, score: typing.Optional[float] = None )
Outputs of inference for the Image Segmentation task: a predicted mask / segment.
class huggingface_hub.ImageSegmentationParameters
< source >( mask_threshold: typing.Optional[float] = None, overlap_mask_area_threshold: typing.Optional[float] = None, subtask: typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None, threshold: typing.Optional[float] = None )
Additional inference parameters for Image Segmentation
image_to_image
class huggingface_hub.ImageToImageInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageParameters] = None )
Inputs for Image To Image inference
Outputs of inference for the Image To Image task
class huggingface_hub.ImageToImageParameters
< source >( guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, target_size: typing.Optional[huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize] = None )
Additional inference parameters for Image To Image
The size in pixels of the output image.
image_to_text
class huggingface_hub.ImageToTextGenerationParameters
< source >( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('ImageToTextEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )
Parametrization of the text generation process
class huggingface_hub.ImageToTextInput
< source >( inputs: typing.Any, parameters: typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextParameters] = None )
Inputs for Image To Text inference
class huggingface_hub.ImageToTextOutput
< source >( generated_text: typing.Any, image_to_text_output_generated_text: typing.Optional[str] = None )
Outputs of inference for the Image To Text task
class huggingface_hub.ImageToTextParameters
< source >( max_new_tokens: typing.Optional[int] = None, generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.image_to_text.ImageToTextGenerationParameters] = None )
Additional inference parameters for Image To Text
object_detection
class huggingface_hub.ObjectDetectionBoundingBox
< source >( xmax: int, xmin: int, ymax: int, ymin: int )
The predicted bounding box. Coordinates are relative to the top left corner of the input image.
class huggingface_hub.ObjectDetectionInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.object_detection.ObjectDetectionParameters] = None )
Inputs for Object Detection inference
class huggingface_hub.ObjectDetectionOutputElement
< source >( box: ObjectDetectionBoundingBox, label: str, score: float )
Outputs of inference for the Object Detection task
class huggingface_hub.ObjectDetectionParameters
< source >( threshold: typing.Optional[float] = None )
Additional inference parameters for Object Detection
question_answering
class huggingface_hub.QuestionAnsweringInput
< source >( inputs: QuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.question_answering.QuestionAnsweringParameters] = None )
Inputs for Question Answering inference
One (context, question) pair to answer
class huggingface_hub.QuestionAnsweringOutputElement
< source >( answer: str, end: int, score: float, start: int )
Outputs of inference for the Question Answering task
class huggingface_hub.QuestionAnsweringParameters
< source >( align_to_words: typing.Optional[bool] = None, doc_stride: typing.Optional[int] = None, handle_impossible_answer: typing.Optional[bool] = None, max_answer_len: typing.Optional[int] = None, max_question_len: typing.Optional[int] = None, max_seq_len: typing.Optional[int] = None, top_k: typing.Optional[int] = None )
Additional inference parameters for Question Answering
sentence_similarity
class huggingface_hub.SentenceSimilarityInput
< source >( inputs: SentenceSimilarityInputData, parameters: typing.Optional[typing.Dict[str, typing.Any]] = None )
Inputs for Sentence similarity inference
class huggingface_hub.SentenceSimilarityInputData
< source >( sentences: typing.List[str], source_sentence: str )
summarization
class huggingface_hub.SummarizationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.summarization.SummarizationParameters] = None )
Inputs for Summarization inference
Outputs of inference for the Summarization task
class huggingface_hub.SummarizationParameters
< source >( clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None )
Additional inference parameters for Summarization
table_question_answering
class huggingface_hub.TableQuestionAnsweringInput
< source >( inputs: TableQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.table_question_answering.TableQuestionAnsweringParameters] = None )
Inputs for Table Question Answering inference
class huggingface_hub.TableQuestionAnsweringInputData
< source >( question: str, table: typing.Dict[str, typing.List[str]] )
One (table, question) pair to answer
class huggingface_hub.TableQuestionAnsweringOutputElement
< source >( answer: str, cells: typing.List[str], coordinates: typing.List[typing.List[int]], aggregator: typing.Optional[str] = None )
Outputs of inference for the Table Question Answering task
class huggingface_hub.TableQuestionAnsweringParameters
< source >( padding: typing.Optional[ForwardRef('Padding')] = None, sequential: typing.Optional[bool] = None, truncation: typing.Optional[bool] = None )
Additional inference parameters for Table Question Answering
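A sketch of a table question answering payload, showing the table-as-dict convention described above (column name mapped to a list of string cell values; the table contents are invented for illustration, and this assumes the documented `huggingface_hub` exports are available):

```python
from huggingface_hub import (
    TableQuestionAnsweringInput,
    TableQuestionAnsweringInputData,
)

# One (table, question) pair; cell values are strings even when numeric.
data = TableQuestionAnsweringInputData(
    question="How many employees does the Paris office have?",
    table={
        "Office": ["Paris", "Lyon"],
        "Employees": ["12", "7"],
    },
)
payload = TableQuestionAnsweringInput(inputs=data)

print(list(payload.inputs.table))  # ['Office', 'Employees']
```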
text2text_generation
class huggingface_hub.Text2TextGenerationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text2text_generation.Text2TextGenerationParameters] = None )
Inputs for Text2text Generation inference
class huggingface_hub.Text2TextGenerationOutput
< source >( generated_text: typing.Any, text2_text_generation_output_generated_text: typing.Optional[str] = None )
Outputs of inference for the Text2text Generation task
class huggingface_hub.Text2TextGenerationParameters
< source >( clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, truncation: typing.Optional[ForwardRef('Text2TextGenerationTruncationStrategy')] = None )
Additional inference parameters for Text2text Generation
text_classification
class huggingface_hub.TextClassificationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_classification.TextClassificationParameters] = None )
Inputs for Text Classification inference
Outputs of inference for the Text Classification task
class huggingface_hub.TextClassificationParameters
< source >( function_to_apply: typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None, top_k: typing.Optional[int] = None )
Additional inference parameters for Text Classification
text_generation
class huggingface_hub.TextGenerationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGenerateParameters] = None, stream: typing.Optional[bool] = None )
Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.TextGenerationInputGenerateParameters
< source >( adapter_id: typing.Optional[str] = None, best_of: typing.Optional[int] = None, decoder_input_details: typing.Optional[bool] = None, details: typing.Optional[bool] = None, do_sample: typing.Optional[bool] = None, frequency_penalty: typing.Optional[float] = None, grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None, max_new_tokens: typing.Optional[int] = None, repetition_penalty: typing.Optional[float] = None, return_full_text: typing.Optional[bool] = None, seed: typing.Optional[int] = None, stop: typing.Optional[typing.List[str]] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_n_tokens: typing.Optional[int] = None, top_p: typing.Optional[float] = None, truncate: typing.Optional[int] = None, typical_p: typing.Optional[float] = None, watermark: typing.Optional[bool] = None )
class huggingface_hub.TextGenerationOutput
< source >( generated_text: str, details: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputDetails] = None )
Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.TextGenerationOutputBestOfSequence
< source >( finish_reason: TextGenerationOutputFinishReason, generated_text: str, generated_tokens: int, prefill: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputPrefillToken], tokens: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken], seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None )
class huggingface_hub.TextGenerationOutputDetails
< source >( finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, prefill: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputPrefillToken], tokens: typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken], best_of_sequences: typing.Optional[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputBestOfSequence]] = None, seed: typing.Optional[int] = None, top_tokens: typing.Optional[typing.List[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]]] = None )
class huggingface_hub.TextGenerationOutputPrefillToken
< source >( id: int, logprob: float, text: str )
class huggingface_hub.TextGenerationOutputToken
< source >( id: int, logprob: float, special: bool, text: str )
class huggingface_hub.TextGenerationStreamOutput
< source >( index: int, token: TextGenerationStreamOutputToken, details: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputStreamDetails] = None, generated_text: typing.Optional[str] = None, top_tokens: typing.Optional[typing.List[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputToken]] = None )
Text Generation Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
class huggingface_hub.TextGenerationStreamOutputStreamDetails
< source >( finish_reason: TextGenerationOutputFinishReason, generated_tokens: int, input_length: int, seed: typing.Optional[int] = None )
class huggingface_hub.TextGenerationStreamOutputToken
< source >( id: int, logprob: float, special: bool, text: str )
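A sketch of a text generation request assembled from the input types above (the prompt and sampling values are illustrative; unset fields keep their `None` defaults; this assumes the documented `huggingface_hub` exports are available):

```python
from huggingface_hub import (
    TextGenerationInput,
    TextGenerationInputGenerateParameters,
)

# Sampling-based generation capped at 50 new tokens, stopping at a blank line.
params = TextGenerationInputGenerateParameters(
    max_new_tokens=50,
    temperature=0.8,
    do_sample=True,
    stop=["\n\n"],
)
request = TextGenerationInput(
    inputs="Once upon a time",
    parameters=params,
    stream=False,
)

print(request.parameters.max_new_tokens)  # 50
```

With `stream=True`, responses arrive as TextGenerationStreamOutput items instead of a single TextGenerationOutput.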
text_to_audio
class huggingface_hub.TextToAudioGenerationParameters
< source >( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('TextToAudioEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )
Parametrization of the text generation process
class huggingface_hub.TextToAudioInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioParameters] = None )
Inputs for Text To Audio inference
Outputs of inference for the Text To Audio task
class huggingface_hub.TextToAudioParameters
< source >( generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.text_to_audio.TextToAudioGenerationParameters] = None )
Additional inference parameters for Text To Audio
text_to_image
class huggingface_hub.TextToImageInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_image.TextToImageParameters] = None )
Inputs for Text To Image inference
Outputs of inference for the Text To Image task
class huggingface_hub.TextToImageParameters
< source >( guidance_scale: typing.Optional[float] = None, height: typing.Optional[int] = None, negative_prompt: typing.Optional[str] = None, num_inference_steps: typing.Optional[int] = None, scheduler: typing.Optional[str] = None, seed: typing.Optional[int] = None, width: typing.Optional[int] = None )
Additional inference parameters for Text To Image
text_to_speech
class huggingface_hub.TextToSpeechGenerationParameters
< source >( do_sample: typing.Optional[bool] = None, early_stopping: typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None, epsilon_cutoff: typing.Optional[float] = None, eta_cutoff: typing.Optional[float] = None, max_length: typing.Optional[int] = None, max_new_tokens: typing.Optional[int] = None, min_length: typing.Optional[int] = None, min_new_tokens: typing.Optional[int] = None, num_beam_groups: typing.Optional[int] = None, num_beams: typing.Optional[int] = None, penalty_alpha: typing.Optional[float] = None, temperature: typing.Optional[float] = None, top_k: typing.Optional[int] = None, top_p: typing.Optional[float] = None, typical_p: typing.Optional[float] = None, use_cache: typing.Optional[bool] = None )
Parametrization of the text generation process
class huggingface_hub.TextToSpeechInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechParameters] = None )
Inputs for Text To Speech inference
class huggingface_hub.TextToSpeechOutput
< source >( audio: typing.Any, sampling_rate: typing.Optional[float] = None )
Outputs of inference for the Text To Speech task
class huggingface_hub.TextToSpeechParameters
< source >( generate_kwargs: typing.Optional[huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechGenerationParameters] = None )
Additional inference parameters for Text To Speech
text_to_video
class huggingface_hub.TextToVideoInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.text_to_video.TextToVideoParameters] = None )
Inputs for Text To Video inference
Outputs of inference for the Text To Video task
class huggingface_hub.TextToVideoParameters
< source >( guidance_scale: typing.Optional[float] = None, negative_prompt: typing.Optional[typing.List[str]] = None, num_frames: typing.Optional[float] = None, num_inference_steps: typing.Optional[int] = None, seed: typing.Optional[int] = None )
Additional inference parameters for Text To Video
token_classification
class huggingface_hub.TokenClassificationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.token_classification.TokenClassificationParameters] = None )
Inputs for Token Classification inference
class huggingface_hub.TokenClassificationOutputElement
< source >( end: int, score: float, start: int, word: str, entity: typing.Optional[str] = None, entity_group: typing.Optional[str] = None )
Outputs of inference for the Token Classification task
class huggingface_hub.TokenClassificationParameters
< source >( aggregation_strategy: typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None, ignore_labels: typing.Optional[typing.List[str]] = None, stride: typing.Optional[int] = None )
Additional inference parameters for Token Classification
translation
class huggingface_hub.TranslationInput
< source >( inputs: str, parameters: typing.Optional[huggingface_hub.inference._generated.types.translation.TranslationParameters] = None )
Inputs for Translation inference
Outputs of inference for the Translation task
class huggingface_hub.TranslationParameters
< source >( clean_up_tokenization_spaces: typing.Optional[bool] = None, generate_parameters: typing.Optional[typing.Dict[str, typing.Any]] = None, src_lang: typing.Optional[str] = None, tgt_lang: typing.Optional[str] = None, truncation: typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None )
Additional inference parameters for Translation
video_classification
class huggingface_hub.VideoClassificationInput
< source >( inputs: typing.Any, parameters: typing.Optional[huggingface_hub.inference._generated.types.video_classification.VideoClassificationParameters] = None )
Inputs for Video Classification inference
Outputs of inference for the Video Classification task
class huggingface_hub.VideoClassificationParameters
< source >( frame_sampling_rate: typing.Optional[int] = None, function_to_apply: typing.Optional[ForwardRef('VideoClassificationOutputTransform')] = None, num_frames: typing.Optional[int] = None, top_k: typing.Optional[int] = None )
Additional inference parameters for Video Classification
visual_question_answering
class huggingface_hub.VisualQuestionAnsweringInput
< source >( inputs: VisualQuestionAnsweringInputData, parameters: typing.Optional[huggingface_hub.inference._generated.types.visual_question_answering.VisualQuestionAnsweringParameters] = None )
Inputs for Visual Question Answering inference
class huggingface_hub.VisualQuestionAnsweringInputData
< source >( image: typing.Any, question: str )
One (image, question) pair to answer
class huggingface_hub.VisualQuestionAnsweringOutputElement
< source >( score: float, answer: typing.Optional[str] = None )
Outputs of inference for the Visual Question Answering task
class huggingface_hub.VisualQuestionAnsweringParameters
< source >( top_k: typing.Optional[int] = None )
Additional inference parameters for Visual Question Answering
zero_shot_classification
class huggingface_hub.ZeroShotClassificationInput
< source >( inputs: str, parameters: ZeroShotClassificationParameters )
Inputs for Zero Shot Classification inference
Outputs of inference for the Zero Shot Classification task
class huggingface_hub.ZeroShotClassificationParameters
< source >( candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None, multi_label: typing.Optional[bool] = None )
Additional inference parameters for Zero Shot Classification
zero_shot_image_classification
class huggingface_hub.ZeroShotImageClassificationInput
< source >( inputs: str, parameters: ZeroShotImageClassificationParameters )
Inputs for Zero Shot Image Classification inference
class huggingface_hub.ZeroShotImageClassificationOutputElement
< source >( label: str, score: float )
Outputs of inference for the Zero Shot Image Classification task
class huggingface_hub.ZeroShotImageClassificationParameters
< source >( candidate_labels: typing.List[str], hypothesis_template: typing.Optional[str] = None )
Additional inference parameters for Zero Shot Image Classification
zero_shot_object_detection
class huggingface_hub.ZeroShotObjectDetectionBoundingBox
< source >( xmax: int, xmin: int, ymax: int, ymin: int )
The predicted bounding box. Coordinates are relative to the top left corner of the input image.
class huggingface_hub.ZeroShotObjectDetectionInput
< source >( inputs: str, parameters: ZeroShotObjectDetectionParameters )
Inputs for Zero Shot Object Detection inference
class huggingface_hub.ZeroShotObjectDetectionOutputElement
< source >( box: ZeroShotObjectDetectionBoundingBox, label: str, score: float )
Outputs of inference for the Zero Shot Object Detection task
class huggingface_hub.ZeroShotObjectDetectionParameters
< source >( candidate_labels: typing.List[str] )
Additional inference parameters for Zero Shot Object Detection