Llama4
Llama 4, developed by Meta, introduces a new auto-regressive Mixture-of-Experts (MoE) architecture. This generation includes two models:
- The highly capable Llama 4 Maverick with 17B active parameters out of ~400B total, with 128 experts.
- The efficient Llama 4 Scout also has 17B active parameters out of ~109B total, using just 16 experts.
Both models leverage early fusion for native multimodality, enabling them to process text and image inputs. Maverick and Scout are both trained on up to 40 trillion tokens of data encompassing 200 languages (with specific fine-tuning support for 12 languages including Arabic, Spanish, German, and Hindi).
For deployment, Llama 4 Scout is designed for accessibility, fitting on a single server-grade GPU via on-the-fly 4-bit or 8-bit quantization, while Maverick is available in BF16 and FP8 formats. These models are released under the custom Llama 4 Community License Agreement, available on the model repositories.
You can find all the original Llama checkpoints under the meta-llama organization.
The Llama 4 family of models comes in two flavors: 109B and 402B parameters. Both of these flavors are extremely large and won’t fit on your run-of-the-mill device. See below for some examples to reduce the memory usage of the model.
For faster and more resilient downloads, we recommend installing the hf_xet dependency as follows:
pip install transformers[hf_xet]
The examples below demonstrate how to generate with Pipeline or the AutoModel classes. We additionally add an example showcasing how to toggle the right attributes to enable very long-context generations, as some flavors of Llama 4 have context lengths going up to 10 million tokens.
from transformers import pipeline
import torch
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
messages = [
{"role": "user", "content": "what is the recipe of mayonnaise?"},
]
pipe = pipeline(
"text-generation",
model=model_id,
device_map="auto",
torch_dtype=torch.bfloat16
)
output = pipe(messages, do_sample=False, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
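The same generation can also be run through the model classes directly instead of Pipeline. The sketch below is a minimal text-only variant that mirrors the quantized example further down (tokenizer, apply_chat_template, generate); it assumes the bfloat16 weights fit across your available devices.

from transformers import AutoTokenizer, Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "user", "content": "what is the recipe of mayonnaise?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True)

# Generate and decode only the newly produced tokens
outputs = model.generate(**inputs.to(model.device), do_sample=False, max_new_tokens=200)
print(tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])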
Efficiency: how to get the best out of Llama 4
The Attention methods
Updating the default attention function can significantly improve compute performance as well as memory usage. Refer to the Attention Interface overview for an in-depth explanation of our interface.
As of release, the Llama 4 model supports the following attention methods: eager, flex_attention, sdpa. We recommend using flex_attention for best results.
Switching attention mechanism is done at the model initialization step:
Setting Flex Attention ensures the best results with the very long context the model can handle.
Beware: the example below uses both device_map="auto" and flex-attention. Please use torchrun to run this example in tensor-parallel mode. We will work to enable running with device_map="auto" and flex-attention without tensor-parallel in the future.
from transformers import Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="flex_attention",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
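As noted above, this snippet is meant to be launched with torchrun so the model is sharded across GPUs in tensor-parallel mode. A typical invocation, assuming the code is saved as run_llama4.py and 8 GPUs are available (file name and GPU count are placeholders), would be:

torchrun --nproc-per-node=8 run_llama4.py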
Quantization
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for available quantization backends. At time of release, both FBGEMM and LLM-Compressor are supported; more quantization methods will be supported in the days that follow the release.
See below for examples using both.
Here is an example loading a BF16 model in FP8 using the FBGEMM approach:
from transformers import AutoTokenizer, Llama4ForConditionalGeneration, FbgemmFp8Config
import torch
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True)
model = Llama4ForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=FbgemmFp8Config()
)
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
outputs = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])
print(outputs[0])
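For checkpoints that are already quantized, for example the FP8 weights mentioned above for Maverick (produced with the LLM-Compressor / compressed-tensors stack), no quantization_config is needed at load time: the quantization configuration stored with the checkpoint is picked up automatically. A minimal sketch; the FP8 repository name below is an assumption, check the meta-llama organization for the exact name:

from transformers import AutoTokenizer, Llama4ForConditionalGeneration

# Assumed repository name for the pre-quantized FP8 release
model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",  # dtype and quantization settings come from the checkpoint config
)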
Offloading
Enabling CPU offloading means that components of the model may be placed on the CPU instead of the GPU when the available GPU memory isn’t sufficient to hold the entire model. At inference, the different components are loaded/unloaded from/to the GPU on the fly. This ensures the model can run on smaller machines as long as the CPU memory is sufficient, at the cost of slower inference due to the added communication overhead.
In order to enable CPU offloading, you simply need to set device_map="auto" at model load time:
from transformers import Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
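If you want finer control over how much of the model ends up on the GPU versus the CPU, device_map="auto" can be combined with a per-device max_memory budget. A minimal sketch; the memory values below are placeholders, size them to your hardware:

from transformers import Llama4ForConditionalGeneration
import torch

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    max_memory={0: "70GiB", "cpu": "200GiB"},  # placeholder budgets for GPU 0 and CPU RAM
)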
Llama4Config
class transformers.Llama4Config
< source >( vision_config = None text_config = None boi_token_index = 200080 eoi_token_index = 200081 image_token_index = 200092 tie_word_embeddings = False **kwargs )
Parameters
- vision_config (Llama4VisionConfig, optional) — The Llama4 Vision config.
- text_config (Llama4TextConfig, optional) — The Llama4 Text config.
- boi_token_index (int, optional, defaults to 200080) — The begin-of-image token index to wrap the image prompt.
- eoi_token_index (int, optional, defaults to 200081) — The end-of-image token index to wrap the image prompt.
- image_token_index (int, optional, defaults to 200092) — The image token index to encode the image prompt.
- tie_word_embeddings (bool, optional, defaults to False) — Whether the model’s input and output word embeddings should be tied.
This is the configuration class to store the configuration of a Llama4Model. It is used to instantiate a Llama4 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Llama4 109B, e.g. meta-llama/Llama-4-Scout-17B-16E.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
>>> from transformers import Llama4Model, Llama4Config
>>> # Initializing a Llama4 109B style configuration
>>> configuration = Llama4Config()
>>> # Initializing a model from the Llama4 109B style configuration
>>> model = Llama4Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
Llama4TextConfig
class transformers.Llama4TextConfig
< source >( vocab_size = 202048 hidden_size = 5120 intermediate_size = 8192 intermediate_size_mlp = 16384 num_hidden_layers = 48 num_attention_heads = 40 num_key_value_heads = 8 head_dim = 128 hidden_act = 'silu' max_position_embeddings = 131072 initializer_range = 0.02 rms_norm_eps = 1e-05 use_cache = True pad_token_id = None bos_token_id = 1 eos_token_id = 2 tie_word_embeddings = False rope_theta = 500000 attention_dropout = 0.0 num_experts_per_tok = 1 num_local_experts = 16 moe_layers = None interleave_moe_layer_step = 1 use_qk_norm = True output_router_logits = False router_aux_loss_coef = 0.001 router_jitter_noise = 0.0 rope_scaling = None no_rope_layers = None no_rope_layer_interval = 4 attention_chunk_size = 8192 attn_temperature_tuning = 4 floor_scale = 8192 attn_scale = 0.1 **kwargs )
Parameters
- vocab_size (int, optional, defaults to 202048) — Vocabulary size of the Llama4 text model. Defines the maximum number of different tokens that can be represented by the inputs_ids passed when calling Llama4TextModel.
- hidden_size (int, optional, defaults to 5120) — Dimensionality of the embeddings and hidden states.
- intermediate_size (int, optional, defaults to 8192) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
- intermediate_size_mlp (int, optional, defaults to 16384) — TODO
- num_hidden_layers (int, optional, defaults to 48) — Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 40) — Number of attention heads for each attention layer in the Transformer encoder.
- num_key_value_heads (int, optional, defaults to 8) — The number of key_value heads that should be used to implement Grouped Query Attention. If not specified, will default to num_attention_heads.
- head_dim (int, optional, defaults to 128) — TODO
- hidden_act (str or Callable, optional, defaults to "silu") — The non-linear activation function (function or string) in the encoder and pooler.
- max_position_embeddings (int, optional, defaults to 131072) — The maximum sequence length that this model might ever be used with.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
- use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions.
- pad_token_id (int, optional, defaults to 128004) — The id of the padding token.
- bos_token_id (int, optional, defaults to 1) — The id of the beginning of sentence token.
- eos_token_id (int, optional, defaults to 2) — The id of the end of sentence token.
- tie_word_embeddings (bool, optional, defaults to False) — Whether to tie weight embeddings.
- rope_theta (float, optional, defaults to 500000.0) — The base period of the RoPE embeddings.
- attention_dropout (float, optional, defaults to 0.0) — TODO
- num_experts_per_tok (int, optional, defaults to 1) — TODO
- num_local_experts (int, optional, defaults to 16) — TODO
- moe_layers (int, optional) — TODO
- interleave_moe_layer_step (int, optional, defaults to 1) — TODO
- use_qk_norm (bool, optional, defaults to True) — TODO
- output_router_logits (bool, optional, defaults to False) — TODO
- router_aux_loss_coef (float, optional, defaults to 0.001) — TODO
- router_jitter_noise (float, optional, defaults to 0.0) — TODO
- rope_scaling (Dict, optional) — Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on a longer max_position_embeddings, we recommend you update this value accordingly. Expected contents:
  - rope_type (str): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'llama3'], with 'default' being the original RoPE implementation.
  - factor (float, optional): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a factor of x will enable the model to handle sequences of length x * original maximum pre-trained length.
  - original_max_position_embeddings (int, optional): Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during pretraining.
  - attention_factor (float, optional): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the factor field to infer the suggested value.
  - beta_fast (float, optional): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32.
  - beta_slow (float, optional): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1.
  - short_factor (List[float], optional): Only used with 'longrope'. The scaling factor to be applied to short contexts (< original_max_position_embeddings). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2.
  - long_factor (List[float], optional): Only used with 'longrope'. The scaling factor to be applied to long contexts (> original_max_position_embeddings). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2.
  - low_freq_factor (float, optional): Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE.
  - high_freq_factor (float, optional): Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE.
- no_rope_layers (int, optional) — TODO
- no_rope_layer_interval (int, optional, defaults to 4) — TODO
- attention_chunk_size (int, optional, defaults to 8192) —
- attn_temperature_tuning (int, optional, defaults to 4) — TODO
- floor_scale (int, optional, defaults to 8192) — TODO
- attn_scale (float, optional, defaults to 0.1) — TODO
This is the configuration class to store the configuration of a Llama4TextModel. It is used to instantiate a Llama4 text model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Llama4 109B, e.g. meta-llama/Llama-4-Scout-17B-16E.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
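A minimal instantiation sketch, following the Llama4Config example above:

>>> from transformers import Llama4TextModel, Llama4TextConfig

>>> # Initializing a Llama4 text configuration (defaults are close to the 109B Scout text model)
>>> configuration = Llama4TextConfig()

>>> # Initializing a text model from that configuration
>>> model = Llama4TextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config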
Llama4VisionConfig
class transformers.Llama4VisionConfig
< source >( hidden_size: int = 768 hidden_act: str = 'gelu' num_hidden_layers: int = 34 num_attention_heads: int = 16 num_channels: int = 3 intermediate_size: int = 5632 vision_output_dim: int = 7680 image_size: int = 448 patch_size: int = 14 norm_eps: float = 1e-05 vision_feature_layer = -1 vision_feature_select_strategy = 'default' initializer_range: float = 0.02 pixel_shuffle_ratio = 0.5 projector_input_dim = 4096 projector_output_dim = 4096 multi_modal_projector_bias = False projector_dropout = 0.0 attention_dropout = 0.0 rope_theta = 10000 **kwargs )
Parameters
- hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
- hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
- num_hidden_layers (int, optional, defaults to 34) — Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
- num_channels (int, optional, defaults to 3) — Number of channels in the input image.
- intermediate_size (int, optional, defaults to 5632) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
- vision_output_dim (int, optional, defaults to 7680) — Dimensionality of the vision model output. Includes output of transformer encoder with intermediate layers and global transformer encoder.
- image_size (int, optional, defaults to 448) — The size (resolution) of each image tile.
- patch_size (int, optional, defaults to 14) — The size (resolution) of each patch.
- norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
- vision_feature_layer (int, optional, defaults to -1) — TODO
- vision_feature_select_strategy (str, optional, defaults to "default") — TODO
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- pixel_shuffle_ratio (float, optional, defaults to 0.5) — TODO
- projector_input_dim (int, optional, defaults to 4096) — TODO
- projector_output_dim (int, optional, defaults to 4096) — TODO
- multi_modal_projector_bias (bool, optional, defaults to False) — TODO
- projector_dropout (float, optional, defaults to 0.0) — TODO
- attention_dropout (float, optional, defaults to 0.0) — TODO
- rope_theta (int, optional, defaults to 10000) — TODO
This is the configuration class to store the configuration of a Llama4VisionModel. It is used to instantiate a Llama4 vision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Llama4 109B, e.g. meta-llama/Llama-4-Scout-17B-16E.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
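Example (a minimal instantiation sketch, analogous to the text configuration above):

>>> from transformers import Llama4VisionConfig, Llama4VisionModel

>>> # Initializing a Llama4 vision configuration with default values
>>> configuration = Llama4VisionConfig()

>>> # Initializing a vision model from that configuration
>>> model = Llama4VisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config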
Llama4Processor
class transformers.Llama4Processor
< source >( image_processor = None tokenizer = None patch_size: int = 14 pixel_shuffle_ratio: float = 0.5 fake_image_token = '<|image|>' image_token = '<|image|>' start_of_image_token = '<|image_start|>' end_of_image_token = '<|image_end|>' patch_token = '<|patch|>' tile_x_separator_token = '<|tile_x_separator|>' tile_y_separator_token = '<|tile_y_separator|>' chat_template = '{{- bos_token }}\n{%- if custom_tools is defined %}\n {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n {%- if strftime_now is defined %}\n {%- set date_string = strftime_now("%d %b %Y") %}\n {%- else %}\n {%- set date_string = "26 Jul 2024" %}\n {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0][\'role\'] == \'system\' %} \n {%- if messages[0][\'content\'] is string %}\n {%- set system_message = messages[0][\'content\']|trim %}\n {%- else %}\n {#- FIXME: The processor requires an array, always. #}\n {%- set system_message = messages[0][\'content\'][0][\'text\']|trim %}\n {%- endif %}\n {%- set messages = messages[1:] %}\n {%- set user_supplied_system_message = true %}\n{%- else %}\n {%- set system_message = "" %}\n {%- set user_supplied_system_message = false %}\n{%- endif %}\n\n{#- System message if the user supplied one #}\n{%- if user_supplied_system_message %}\n {{- "<|header_start|>system<|header_end|>\n\n" }}\n {%- if tools is not none %}\n {{- "Environment: ipython\n" }}\n {%- endif %}\n {%- if tools is not none and not tools_in_user_message %}\n {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." 
}}\n {{- \'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.\' }}\n {{- "Do not use variables.\n\n" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- "\n\n" }}\n {%- endfor %}\n {%- endif %}\n {{- system_message }}\n {{- "<|eot|>" }}\n{%- endif %}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n {#- Extract the first user message so we can plug it in here #}\n {%- if messages | length != 0 %}\n {%- set first_user_message = messages[0][\'content\']|trim %}\n {%- set messages = messages[1:] %}\n {%- else %}\n {{- raise_exception("Cannot put tools in the first user message when there\'s no first user message!") }}\n{%- endif %}\n {{- \'<|header_start|>user<|header_end|>\n\n\' -}}\n {{- "Given the following functions, please respond with a JSON for a function call " }}\n {{- "with its proper arguments that best answers the given prompt.\n\n" }}\n {{- \'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.\' }}\n {{- "Do not use variables.\n\n" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- "\n\n" }}\n {%- endfor %}\n {{- first_user_message + "<|eot|>"}}\n{%- endif %}\n\n{%- for message in messages %}\n {%- if not (message.role == \'ipython\' or message.role == \'tool\' or \'tool_calls\' in message) %}\n {{- \'<|header_start|>\' + message[\'role\'] + \'<|header_end|>\n\n\' }}\n {%- if message[\'content\'] is string %}\n {{- message[\'content\'] }}\n {%- else %}\n {%- for content in message[\'content\'] %}\n {%- if content[\'type\'] == \'image\' %}\n {{- \'<|image|>\' }}\n {%- elif content[\'type\'] == \'text\' %}\n {{- content[\'text\'] }}\n {%- endif %}\n {%- endfor %}\n {%- endif %}\n {{- "<|eot|>" }}\n {%- elif \'tool_calls\' in message and message.tool_calls|length > 0 %}\n {{- \'<|header_start|>assistant<|header_end|>\n\n\' -}}\n {{- \'<|python_start|>\' }}\n {%- if message[\'content\'] is string %}\n {{- message[\'content\'] }}\n {%- else %}\n {%- for content in message[\'content\'] %}\n {%- if content[\'type\'] == \'image\' %}\n {{- \'<|image|>\' }}\n {%- elif content[\'type\'] == \'text\' %}\n {{- content[\'text\'] }}\n {%- endif %}\n {%- endfor %}\n {%- endif %}\n {{- \'<|python_end|>\' }}\n {%- for tool_call in message.tool_calls %}\n {{- \'{"name": "\' + tool_call.function.name + \'", \' }}\n {{- \'"parameters": \' }}\n {{- tool_call.function.arguments | tojson }}\n {{- "}" }}\n {%- endfor %}\n {{- "<|eot|>" }}\n {%- elif message.role == "tool" or message.role == "ipython" %}\n {{- "<|header_start|>ipython<|header_end|>\n\n" }}\n {%- if message.content is mapping or message.content is iterable %}\n {{- message.content | tojson }}\n {%- else %}\n {{- message.content }}\n {%- endif %}\n {{- "<|eot|>" }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- \'<|header_start|>assistant<|header_end|>\n\n\' }}\n{%- endif %}\n' **kwargs )
Parameters
- image_processor (AutoImageProcessor, optional) — The image processor is a required input.
- tokenizer ([PreTrainedTokenizer, PreTrainedTokenizerFast], optional) — The tokenizer is a required input.
- patch_size (int, optional, defaults to 14) — The size of image patches for tokenization.
- pixel_shuffle_ratio (float, optional, defaults to 0.5) — The pixel shuffle ratio; should match pixel_shuffle_ratio in the vision config.
- fake_image_token (str, optional, defaults to "<|image|>") — The placeholder token used in the prompt to mark where an image goes before it is expanded into image tokens.
- image_token (str, optional, defaults to "<|image|>") — The token to be used to represent an image in the text.
- start_of_image_token (str, optional, defaults to "<|image_start|>") — The token to be used to represent the start of an image in the text.
- end_of_image_token (str, optional, defaults to "<|image_end|>") — The token to be used to represent the end of an image in the text.
- patch_token (str, optional, defaults to "<|patch|>") — The token to be used to represent an image patch in the text.
- tile_x_separator_token (str, optional, defaults to "<|tile_x_separator|>") — The token inserted between image tiles along the horizontal axis.
- tile_y_separator_token (str, optional, defaults to "<|tile_y_separator|>") — The token inserted between rows of image tiles.
- chat_template (str, optional) — A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
Constructs a Llama4 processor which wraps an AutoImageProcessor and a PreTrainedTokenizerFast tokenizer into a single processor that inherits both the image processor and tokenizer functionalities. See the __call__() and decode() methods for more information.
batch_decode
This method forwards all its arguments to PreTrainedTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.
decode
This method forwards all its arguments to PreTrainedTokenizerFast’s decode(). Please refer to the docstring of this method for more information.
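A short usage sketch for the processor: build a chat prompt that contains an image placeholder, then tokenize text and image together. The image URL is the same sample image used in the examples further down; any RGB image works.

from transformers import AutoProcessor
from PIL import Image
import requests

processor = AutoProcessor.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]

# Render the chat template (inserts the <|image|> placeholder), then process text + image together
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")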
Llama4ImageProcessorFast
class transformers.Llama4ImageProcessorFast
< source >( **kwargs: typing_extensions.Unpack[transformers.models.llama4.image_processing_llama4_fast.Llama4ImageProcessorKwargs] )
Parameters
- do_resize (
bool
, optional, defaults toself.do_resize
) — Whether to resize the image’s (height, width) dimensions to the specifiedsize
. Can be overridden by thedo_resize
parameter in thepreprocess
method. - size (
dict
, optional, defaults toself.size
) — Size of the output image after resizing. Can be overridden by thesize
parameter in thepreprocess
method. - default_to_square (
bool
, optional, defaults toself.default_to_square
) — Whether to default to a square image when resizing, if size is an int. - resample (
PILImageResampling
, optional, defaults toself.resample
) — Resampling filter to use if resizing the image. Only has an effect ifdo_resize
is set toTrue
. Can be overridden by theresample
parameter in thepreprocess
method. - do_center_crop (
bool
, optional, defaults toself.do_center_crop
) — Whether to center crop the image to the specifiedcrop_size
. Can be overridden bydo_center_crop
in thepreprocess
method. - crop_size (
Dict[str, int]
optional, defaults toself.crop_size
) — Size of the output image after applyingcenter_crop
. Can be overridden bycrop_size
in thepreprocess
method. - do_rescale (
bool
, optional, defaults toself.do_rescale
) — Whether to rescale the image by the specified scalerescale_factor
. Can be overridden by thedo_rescale
parameter in thepreprocess
method. - rescale_factor (
int
orfloat
, optional, defaults toself.rescale_factor
) — Scale factor to use if rescaling the image. Only has an effect ifdo_rescale
is set toTrue
. Can be overridden by therescale_factor
parameter in thepreprocess
method. - do_normalize (
bool
, optional, defaults toself.do_normalize
) — Whether to normalize the image. Can be overridden by thedo_normalize
parameter in thepreprocess
method. Can be overridden by thedo_normalize
parameter in thepreprocess
method. - image_mean (
float
orList[float]
, optional, defaults toself.image_mean
) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by theimage_mean
parameter in thepreprocess
method. Can be overridden by theimage_mean
parameter in thepreprocess
method. - image_std (
float
orList[float]
, optional, defaults toself.image_std
) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by theimage_std
parameter in thepreprocess
method. Can be overridden by theimage_std
parameter in thepreprocess
method. - do_convert_rgb (
bool
, optional, defaults toself.do_convert_rgb
) — Whether to convert the image to RGB. - return_tensors (
str
orTensorType
, optional, defaults toself.return_tensors
) — Returns stacked tensors if set to `pt, otherwise returns a list of tensors. - data_format (
ChannelDimension
orstr
, optional, defaults toself.data_format
) — OnlyChannelDimension.FIRST
is supported. Added for compatibility with slow processors. - input_data_format (
ChannelDimension
orstr
, optional, defaults toself.input_data_format
) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:"channels_first"
orChannelDimension.FIRST
: image in (num_channels, height, width) format."channels_last"
orChannelDimension.LAST
: image in (height, width, num_channels) format."none"
orChannelDimension.NONE
: image in (height, width) format.
- device (
torch.device
, optional, defaults toself.device
) — The device to process the images on. If unset, the device is inferred from the input images. - max_patches (
int
, optional, defaults to 16) — The maximum number of patches to be extracted from the image. Can be overridden by themax_patches
parameter in thepreprocess
method. - resize_to_max_canvas (
bool
, optional, defaults to False) — Whether to resize the image to the maximum canvas size. If True, picks the canvas the allows the largest resizing without distortion. If False, downsample as little as possible, including no resizing at all, but never upsample, unless the image is smaller than the patch size.
Constructs a fast Llama4 image processor.
preprocess
< source >( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] **kwargs: typing_extensions.Unpack[transformers.models.llama4.image_processing_llama4_fast.Llama4ImageProcessorKwargs] )
Parameters
- images (
ImageInput
) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, setdo_rescale=False
. - do_resize (
bool
, optional, defaults toself.do_resize
) — Whether to resize the image. - size (
Dict[str, int]
, optional, defaults toself.size
) — Describes the maximum input dimensions to the model. - resample (
PILImageResampling
orInterpolationMode
, optional, defaults toself.resample
) — Resampling filter to use if resizing the image. This can be one of the enumPILImageResampling
. Only has an effect ifdo_resize
is set toTrue
. - do_center_crop (
bool
, optional, defaults toself.do_center_crop
) — Whether to center crop the image. - crop_size (
Dict[str, int]
, optional, defaults toself.crop_size
) — Size of the output image after applyingcenter_crop
. - do_rescale (
bool
, optional, defaults toself.do_rescale
) — Whether to rescale the image. - rescale_factor (
float
, optional, defaults toself.rescale_factor
) — Rescale factor to rescale the image by ifdo_rescale
is set toTrue
. - do_normalize (
bool
, optional, defaults toself.do_normalize
) — Whether to normalize the image. - image_mean (
float
orList[float]
, optional, defaults toself.image_mean
) — Image mean to use for normalization. Only has an effect ifdo_normalize
is set toTrue
. - image_std (
float
orList[float]
, optional, defaults toself.image_std
) — Image standard deviation to use for normalization. Only has an effect ifdo_normalize
is set toTrue
. - do_convert_rgb (
bool
, optional, defaults toself.do_convert_rgb
) — Whether to convert the image to RGB. - return_tensors (
str
orTensorType
, optional, defaults toself.return_tensors
) — Returns stacked tensors if set to `pt, otherwise returns a list of tensors. - data_format (
ChannelDimension
orstr
, optional, defaults toself.data_format
) — OnlyChannelDimension.FIRST
is supported. Added for compatibility with slow processors. - input_data_format (
ChannelDimension
orstr
, optional, defaults toself.input_data_format
) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:"channels_first"
orChannelDimension.FIRST
: image in (num_channels, height, width) format."channels_last"
orChannelDimension.LAST
: image in (height, width, num_channels) format."none"
orChannelDimension.NONE
: image in (height, width) format.
- device (
torch.device
, optional, defaults toself.device
) — The device to process the images on. If unset, the device is inferred from the input images. - max_patches (
int
, optional, defaults to 16) — The maximum number of patches to be extracted from the image. Can be overridden by themax_patches
parameter in thepreprocess
method. - resize_to_max_canvas (
bool
, optional, defaults to False) — Whether to resize the image to the maximum canvas size. If True, picks the canvas the allows the largest resizing without distortion. If False, downsample as little as possible, including no resizing at all, but never upsample, unless the image is smaller than the patch size.
Preprocess an image or batch of images.
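A minimal preprocessing sketch; the checkpoint name is the Scout repository used elsewhere on this page, and max_patches is left at its default of 16:

from transformers import AutoImageProcessor
from PIL import Image
import requests

image_processor = AutoImageProcessor.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Returns stacked pixel values ready to be passed to the vision tower
batch = image_processor(images=image, return_tensors="pt", max_patches=16)
print(batch["pixel_values"].shape)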
rescale_and_normalize
< source >( images: torch.Tensor do_rescale: bool rescale_factor: float do_normalize: bool image_mean: typing.Union[float, typing.List[float]] image_std: typing.Union[float, typing.List[float]] )
Rescale and normalize images. Overridden to rescale and normalize the images in torch.bfloat16, as in the original implementation.
Llama4ForConditionalGeneration
forward
< source >( input_ids: LongTensor = None pixel_values: FloatTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None vision_feature_layer: typing.Union[int, typing.List[int], NoneType] = None vision_feature_select_strategy: typing.Optional[str] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None logits_to_keep: typing.Union[int, torch.Tensor] = 0 image_sizes: Tensor = None **lm_kwargs ) → transformers.models.llama4.modeling_llama4.Llama4CausalLMOutputWithPast
or tuple(torch.FloatTensor)
Returns
transformers.models.llama4.modeling_llama4.Llama4CausalLMOutputWithPast
or tuple(torch.FloatTensor)
A transformers.models.llama4.modeling_llama4.Llama4CausalLMOutputWithPast
or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Llama4Config) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Language modeling loss (for next-token prediction). -
logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
past_key_values (
tuple(tuple(torch.FloatTensor))
, optional, returned whenuse_cache=True
is passed or whenconfig.use_cache=True
) — Tuple oftuple(torch.FloatTensor)
of lengthconfig.n_layers
, with each tuple having 2 tensors of shape(batch_size, num_heads, sequence_length, embed_size_per_head)
)Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values
input) to speed up sequential decoding. -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
image_hidden_states (torch.FloatTensor, optional) — A torch.FloatTensor of size (batch_size, num_images, sequence_length, hidden_size). image_hidden_states of the model produced by the vision encoder and after projecting the last hidden state.
labels (torch.LongTensor
of shape (batch_size, sequence_length)
, optional):
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids
docstring). Tokens with indices set to -100
are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
.
logits_to_keep (int
or torch.Tensor
, optional):
If an int
, compute logits for the last logits_to_keep
tokens. If 0
, calculate logits for all
input_ids
(special case). Only last token logits are needed for generation, and calculating them only for that
token can save memory, which becomes pretty significant for long sequences or large vocabulary size.
If a torch.Tensor
, must be 1D corresponding to the indices to keep in the sequence length dimension.
This is useful when using packed tensor format (single dimension for batch and sequence length).
Example:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, LlavaForConditionalGeneration
>>> model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
>>> processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
>>> prompt = "USER: <image>\nWhat's the content of the image? ASSISTANT:"
>>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, text=prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(**inputs, max_new_tokens=15)
>>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"USER: \nWhat's the content of the image? ASSISTANT: The image features a busy city street with a stop sign prominently displayed"
get_image_features
< source >( pixel_values: FloatTensor vision_feature_layer: typing.Union[int, typing.List[int]] vision_feature_select_strategy: str **kwargs ) → image_features (torch.Tensor
)
Parameters
- pixel_values (torch.FloatTensor of shape (batch_size, channels, height, width)) — The tensors corresponding to the input images.
- vision_feature_layer (Union[int, List[int]]) — The index of the layer to select the vision feature. If multiple indices are provided, the vision feature of the corresponding indices will be concatenated to form the vision features.
- vision_feature_select_strategy (str) — The feature selection strategy used to select the vision feature from the vision backbone. Can be one of "default" or "full".
Returns
image_features (torch.Tensor)
Image feature tensor of shape (num_images, image_length, embed_dim).
Obtains image last hidden states from the vision tower and applies the multimodal projection.
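A hedged sketch of calling get_image_features directly, assuming model is a loaded Llama4ForConditionalGeneration and inputs contains pixel_values produced by the Llama4 processor:

# Extract projected image features from the vision tower
image_features = model.get_image_features(
    pixel_values=inputs["pixel_values"],
    vision_feature_layer=-1,
    vision_feature_select_strategy="default",
)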
- forward
Llama4ForCausalLM
forward
< source >( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[transformers.cache_utils.Cache, typing.List[torch.FloatTensor], NoneType] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None logits_to_keep: typing.Union[int, torch.Tensor] = 0 **kwargs ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
If
past_key_values
is used, optionally only the lastinput_ids
have to be input (seepast_key_values
).If you want to change padding behavior, you should read
modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more information on the default strategy.- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]
. - past_key_values (
Cache
ortuple(tuple(torch.FloatTensor))
, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in thepast_key_values
returned by the model at a previous stage of decoding, whenuse_cache=True
orconfig.use_cache=True
.Two formats are allowed:
- a Cache instance, see our kv cache guide;
- Tuple of
tuple(torch.FloatTensor)
of lengthconfig.n_layers
, with each tuple having 2 tensors of shape(batch_size, num_heads, sequence_length, embed_size_per_head)
). This is also known as the legacy cache format.
The model will output the same cache format that is fed as input. If no
past_key_values
are passed, the legacy cache format will be returned.If
past_key_values
are used, the user can optionally input only the lastinput_ids
(those that don’t have their past key value states given to this model) of shape(batch_size, 1)
instead of allinput_ids
of shape(batch_size, sequence_length)
. - inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - cache_position (
torch.LongTensor
of shape(sequence_length)
, optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily toposition_ids
, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length. - Args —
labels (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional): Labels for computing the masked language modeling loss. Indices should either be in[0, ..., config.vocab_size]
or -100 (seeinput_ids
docstring). Tokens with indices set to-100
are ignored (masked), the loss is only computed for the tokens with labels in[0, ..., config.vocab_size]
.logits_to_keep (
int
ortorch.Tensor
, optional): If anint
, compute logits for the lastlogits_to_keep
tokens. If0
, calculate logits for allinput_ids
(special case). Only last token logits are needed for generation, and calculating them only for that token can save memory, which becomes pretty significant for long sequences or large vocabulary size. If atorch.Tensor
, must be 1D corresponding to the indices to keep in the sequence length dimension. This is useful when using packed tensor format (single dimension for batch and sequence length).
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (Llama4Config) and inputs.
-
loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) — Language modeling loss (for next-token prediction). -
logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). -
past_key_values (
tuple(tuple(torch.FloatTensor))
, optional, returned whenuse_cache=True
is passed or whenconfig.use_cache=True
) — Tuple oftuple(torch.FloatTensor)
of lengthconfig.n_layers
, with each tuple having 2 tensors of shape(batch_size, num_heads, sequence_length, embed_size_per_head)
)Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values
input) to speed up sequential decoding. -
hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) — Tuple oftorch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) — Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Llama4ForCausalLM forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, Llama4ForCausalLM
>>> model = Llama4ForCausalLM.from_pretrained("meta-llama4/Llama4-2-7b-hf")
>>> tokenizer = AutoTokenizer.from_pretrained("meta-llama4/Llama4-2-7b-hf")
>>> prompt = "Hey, are you conscious? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
- forward
Llama4TextModel
class transformers.Llama4TextModel
< source >( config: Llama4TextConfig )
Parameters
- config (Llama4Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Llama4 Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
create_chunked_attention_mask
< source >( attention_chunk_size: int start: int end: int device: device )
Generate a chunked attention mask. For example, with a chunk size of 3:

‘What’      : 0 ■ ⬚ ⬚ ⬚ ⬚ ⬚
‘▁is’       : 1 ■ ■ ⬚ ⬚ ⬚ ⬚
‘▁ch’       : 2 ■ ■ ■ ⬚ ⬚ ⬚
‘unked’     : 3 ⬚ ⬚ ⬚ ■ ⬚ ⬚
‘▁attention’: 4 ⬚ ⬚ ⬚ ■ ■ ⬚
‘?’         : 5 ⬚ ⬚ ⬚ ■ ■ ■

This can simply be applied over the already-created attention mask.
forward
< source >( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None **flash_attn_kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] )
Parameters
- input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
- attention_mask (
torch.Tensor
of shape(batch_size, sequence_length)
, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]
:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
If
past_key_values
is used, optionally only the lastinput_ids
have to be input (seepast_key_values
).If you want to change padding behavior, you should read
modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more information on the default strategy.- 1 indicates the head is not masked,
- 0 indicates the head is masked.
- position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]
. - past_key_values (
Cache
ortuple(tuple(torch.FloatTensor))
, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in thepast_key_values
returned by the model at a previous stage of decoding, whenuse_cache=True
orconfig.use_cache=True
.Two formats are allowed:
- a Cache instance, see our kv cache guide;
- Tuple of
tuple(torch.FloatTensor)
of lengthconfig.n_layers
, with each tuple having 2 tensors of shape(batch_size, num_heads, sequence_length, embed_size_per_head)
). This is also known as the legacy cache format.
The model will output the same cache format that is fed as input. If no
past_key_values
are passed, the legacy cache format will be returned.If
past_key_values
are used, the user can optionally input only the lastinput_ids
(those that don’t have their past key value states given to this model) of shape(batch_size, 1)
instead of allinput_ids
of shape(batch_size, sequence_length)
. - inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) — Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix. - use_cache (
bool
, optional) — If set toTrue
,past_key_values
key value states are returned and can be used to speed up decoding (seepast_key_values
). - output_attentions (
bool
, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. - output_hidden_states (
bool
, optional) — Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. - return_dict (
bool
, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - cache_position (
torch.LongTensor
of shape(sequence_length)
, optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily toposition_ids
, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
The Llama4TextModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
- forward
Llama4VisionModel
forward
< source >( pixel_values: Tensor attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Example:
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, MllamaVisionModel
>>> checkpoint = "meta-llama/Llama-3.2-11B-Vision"
>>> model = MllamaVisionModel.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt")
>>> output = model(**inputs)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 1, 4, 1025, 7680])
get_input_embeddings
This function is used to fetch the first embedding layer to activate grads on inputs.
- forward