---
license: gemma
library_name: gemma.cpp
extra_gated_heading: Access RecurrentGemma on Hugging Face
extra_gated_prompt: >-
  To access RecurrentGemma on Hugging Face, you’re required to review and agree
  to Google’s usage license. To do this, please ensure you’re logged in to
  Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
tags:
- recurrent_gemma
---
|
|
|
# RecurrentGemma Model Card
|
|
|
**Model Page**: [RecurrentGemma](https://ai.google.dev/gemma/docs/recurrentgemma/model_card)
|
|
|
This model card corresponds to the 2B base version of the RecurrentGemma model for use with gemma.cpp, the C++ implementation of the Gemma model family.
|
|
|
SFP checkpoints are pre-quantized to 8-bit using Switching Floating Point (SFP), a method that switches between eXmY floating-point representations depending on the values stored. gemma.cpp uses the [Highway](https://github.com/google/highway) SIMD library for portable, high-performance inference across most CPU architectures.
|
|
|
Visit [https://github.com/google/gemma.cpp](https://github.com/google/gemma.cpp) for more information and usage examples!
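As a quick orientation, here is a minimal build-and-run sketch for gemma.cpp with an SFP checkpoint. The file names (`tokenizer.spm`, the `.sbs` weights) and the `--model` identifier are illustrative placeholders, and flag names have varied between gemma.cpp releases (older versions use `--compressed_weights`), so follow the gemma.cpp README for the exact invocation matching the files you download.

```shell
# Minimal sketch: build gemma.cpp and run a RecurrentGemma SFP checkpoint.
# File names and the --model value below are placeholders; see the gemma.cpp
# README for the exact flags matching your downloaded checkpoint.
git clone https://github.com/google/gemma.cpp
cd gemma.cpp
cmake -B build
cd build
make -j8 gemma

# Point the binary at the tokenizer and SFP weights from this repository.
./gemma \
  --tokenizer tokenizer.spm \
  --weights 2b-pt-sfp.sbs \
  --model gr2b-pt
```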
|
|
|
**Resources and technical documentation:**
|
|
|
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [RecurrentGemma on Kaggle](https://www.kaggle.com/models/google/recurrentgemma)
|
|
|
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
|
|
|
**Authors:** Google |