
This is a Gemma model uploaded with the KerasHub library. It can be used with the JAX, TensorFlow, and PyTorch backends, and is intended for the CausalLM task.
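
A minimal loading sketch (not part of the original card): KerasHub can load checkpoints from the Hugging Face Hub via the `hf://` prefix, and the backend is selected with the `KERAS_BACKEND` environment variable before Keras is imported. The prompt below is only illustrative.

```python
import os

# Pick a backend before importing Keras: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras_hub

# Load this fine-tuned checkpoint directly from the Hugging Face Hub.
causal_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://mgbam/finetune_gemma2_2b_en_medical_qa"
)

# Illustrative prompt; any medical QA-style question works the same way.
print(causal_lm.generate("What are the common symptoms of anemia?", max_length=128))
```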

Model config:

  • name: gemma_backbone
  • trainable: True
  • vocabulary_size: 256000
  • num_layers: 26
  • num_query_heads: 8
  • num_key_value_heads: 4
  • hidden_dim: 2304
  • intermediate_dim: 18432
  • head_dim: 256
  • layer_norm_epsilon: 1e-06
  • dropout: 0
  • query_head_dim_normalize: True
  • use_post_ffw_norm: True
  • use_post_attention_norm: True
  • final_logit_soft_cap: 30.0
  • attention_logit_soft_cap: 50.0
  • sliding_window_size: 4096
  • use_sliding_window_attention: True
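
As a sketch of how the config above maps to code, the same architecture can be instantiated as a randomly initialized `keras_hub.models.GemmaBackbone` (loading the trained weights via `from_preset`, as shown earlier, is the usual path):

```python
import keras_hub

# Randomly initialized backbone built from this card's config values.
backbone = keras_hub.models.GemmaBackbone(
    vocabulary_size=256000,
    num_layers=26,
    num_query_heads=8,
    num_key_value_heads=4,
    hidden_dim=2304,
    intermediate_dim=18432,
    head_dim=256,
    layer_norm_epsilon=1e-6,
    dropout=0,
    query_head_dim_normalize=True,
    use_post_ffw_norm=True,
    use_post_attention_norm=True,
    final_logit_soft_cap=30.0,
    attention_logit_soft_cap=50.0,
    sliding_window_size=4096,
    use_sliding_window_attention=True,
)
```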

This model card has been generated automatically and should be completed by the model author. See Model Cards documentation for more information.


