Model Overview

Gemma is Google's family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Gemma models are available with and without instruction tuning and come in two sizes: 2 billion and 7 billion parameters. Gemma 1.1 is the latest weights refresh. See the model card below for benchmarks, data sources, and intended use cases.

Weights are released under the Gemma License. Keras model code is released under the Apache 2.0 License.

Installation

Keras and KerasHub can be installed with:

pip install -U -q keras-hub
pip install -U -q "keras>=3"

JAX, TensorFlow, and PyTorch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the Keras Getting Started page.
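Keras 3 can run on any of these backends. The backend is selected with the KERAS_BACKEND environment variable, which must be set before Keras is imported:

import os
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

import keras  # the chosen backend is fixed once Keras is imported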

Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name | Parameters | Description |
|---|---|---|
| gemma_2b_en | 2.51B | 2 billion parameter, 18-layer, base Gemma model. |
| gemma_instruct_2b_en | 2.51B | 2 billion parameter, 18-layer, instruction tuned Gemma model. |
| gemma_1.1_instruct_2b_en | 2.51B | 2 billion parameter, 18-layer, instruction tuned Gemma model. The 1.1 update improves model quality. |
| gemma_7b_en | 8.54B | 7 billion parameter, 28-layer, base Gemma model. |
| gemma_instruct_7b_en | 8.54B | 7 billion parameter, 28-layer, instruction tuned Gemma model. |
| gemma_1.1_instruct_7b_en | 8.54B | 7 billion parameter, 28-layer, instruction tuned Gemma model. The 1.1 update improves model quality. |

Prompts

Gemma models are made available both as pretrained models and as models instruction tuned on turn-by-turn conversations. Base pretrained models (gemma_2b_en, gemma_7b_en) will complete sentences. The following are some example prompts:

  • "My favorite brownie recipe is "
  • "Why is the sky blue?"

Instruction tuned versions (with instruct in the preset name) should be prompted with input that precisely matches the training data format. Specifically, user and model turns must alternate, and each turn must begin and end with special tokens. Newlines matter. See the following for an example:

start_of_turn_user = "<start_of_turn>user\n"
start_of_turn_model = "<start_of_turn>model\n"
end_of_turn = "<end_of_turn>\n"
prompt = start_of_turn_user + "You are a friendly assistant. Say hi." + \
    end_of_turn + start_of_turn_model
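Assuming an instruction tuned preset has been loaded as gemma_lm (see the examples below), this prompt can be passed directly to generate(); a minimal sketch:

# Minimal sketch; assumes `gemma_lm` is an instruction tuned GemmaCausalLM
# loaded with `from_preset()` as in the examples below.
response = gemma_lm.generate(prompt, max_length=64)
# `response` contains the prompt followed by the model's turn. Multi-turn
# conversations repeat the pattern: append the model's reply, close it with
# `end_of_turn`, then start the next user turn with `start_of_turn_user`.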

Example Usage

!pip install -U keras-hub
!pip install -U keras
import keras
import keras_hub
import numpy as np

Use generate() to do text generation.

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)

# Generate with batched prompts.
gemma_lm.generate(["Keras is a", "I want to say"], max_length=30)

Compile the generate() function with a custom sampler.

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.compile(sampler="top_k")
gemma_lm.generate("I want to say", max_length=30)

gemma_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gemma_lm.generate("I want to say", max_length=30)

Use generate() without preprocessing.

prompt = {
    # `2, 214064, 603` maps to the start token followed by "Keras is".
    "token_ids": np.array([[2, 214064, 603, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 0, 0, 0, 0]] * 2),
}

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.generate(prompt)
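With preprocessing disabled, generate() also returns raw token ids. A hedged sketch of round-tripping through the matching tokenizer (assuming keras_hub.models.GemmaTokenizer, which backs the preset's default preprocessor):

# Sketch: tokenize a prompt by hand and decode generated ids back to text.
tokenizer = keras_hub.models.GemmaTokenizer.from_preset("gemma_1.1_instruct_7b_en")
token_ids = tokenizer("Keras is")                  # integer ids for the prompt
output = gemma_lm.generate(prompt)                 # dict of token ids + mask
text = tokenizer.detokenize(output["token_ids"])   # decode ids back to strings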

Call fit() on a single batch.

features = ["The quick brown fox jumped.", "I forgot my homework."]
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.fit(x=features, batch_size=2)
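For more than a single batch, fit() also accepts a tf.data.Dataset of raw strings when the default preprocessor is attached; a sketch:

import tensorflow as tf

# Sketch: stream raw strings through the model's built-in preprocessor.
ds = tf.data.Dataset.from_tensor_slices(features).batch(2)
gemma_lm.fit(ds, epochs=1)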

Call fit() without preprocessing.

x = {
    # Token ids for the full training sequence, starting with the start token.
    "token_ids": np.array([[2, 214064, 603, 5271, 6044, 9581, 3, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
# Labels are the input token ids shifted one position to the left.
y = np.array([[214064, 603, 5271, 6044, 9581, 3, 0, 0]] * 2)
# Sample weights zero out padding positions so they do not contribute to the loss.
sw = np.array([[1, 1, 1, 1, 1, 1, 0, 0]] * 2)

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)

Example Usage with Hugging Face URI

The same checkpoints can be loaded directly from the Hugging Face Hub by prefixing the preset name with hf://keras/. All of the examples above work unchanged with this URI, for example:

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)