
Quantization made by Richard Erkhov.

Github | Discord | Request more models

gemma-2-baku-2b - GGUF

Original model description:

thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: gemma
datasets:
- mc4
- wikipedia
- EleutherAI/pile
- oscar-corpus/colossal-oscar-1.0
- cc100
language:
- ja
- en
tags:
- gemma2
inference: false
base_model: google/gemma-2-2b
pipeline_tag: text-generation
library_name: transformers

Gemma 2 Baku 2B (rinna/gemma-2-baku-2b)


Overview

We conduct continual pre-training of google/gemma-2-2b on 80B tokens from a mixture of Japanese and English datasets. The continual pre-training improves the model's performance on Japanese tasks.

The name baku comes from the Japanese word 獏/ばく/Baku, a kind of Japanese mythical creature (妖怪/ようかい/Youkai).

| Size | Continual Pre-Training | Instruction-Tuning |
|------|------------------------|--------------------|
| 2B   | Gemma 2 Baku 2B [HF]   | Gemma 2 Baku 2B Instruct [HF] |

Benchmarking

Please refer to rinna's LM benchmark page.


How to use the model

```python
import transformers
import torch

model_id = "rinna/gemma-2-baku-2b"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "attn_implementation": "eager"},
    device_map="auto"
)
output = pipeline(
    "西田幾多郎は、",  # "Nishida Kitaro is/was ..."
    max_new_tokens=256,
    do_sample=True
)
print(output[0]["generated_text"])
```

It is recommended to use eager attention when conducting batch inference under bfloat16 precision. Currently, Gemma 2 yields NaN values for input sequences with padding when the default attention mechanism (torch.scaled_dot_product_attention) is employed in conjunction with bfloat16.
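The failure mode described above can be reproduced in miniature without loading the model. The sketch below (shapes and mask are illustrative, not the model's actual configuration) shows how a fully-masked padding row drives `torch.nn.functional.scaled_dot_product_attention` to NaN: the softmax over an all-`-inf` row is 0/0.

```python
import torch
import torch.nn.functional as F

# Toy shapes: (batch, heads, seq_len, head_dim)
q = torch.randn(1, 1, 4, 8, dtype=torch.bfloat16)
k = torch.randn(1, 1, 4, 8, dtype=torch.bfloat16)
v = torch.randn(1, 1, 4, 8, dtype=torch.bfloat16)

# Boolean mask: True = attend. The last query position attends to
# nothing, mimicking a fully-padded position in a batched input.
mask = torch.ones(1, 1, 4, 4, dtype=torch.bool)
mask[..., 3, :] = False

out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
print(out[..., 3, :].isnan().any().item())  # the padded row is NaN
print(out[..., 0, :].isnan().any().item())  # unmasked rows are fine
```

Eager attention sidesteps this because `transformers`' eager implementation handles the degenerate row differently, which is why the pipeline above passes `attn_implementation="eager"`.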


Tokenization

The model uses the original google/gemma-2-2b tokenizer.


How to cite

@misc{rinna-gemma-2-baku-2b,
    title = {rinna/gemma-2-baku-2b},
    author = {Wakatsuki, Toshiaki and Chen, Xinqi and Sawada, Kei},
    url = {https://huggingface.co/rinna/gemma-2-baku-2b}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}

References

@article{gemma-2-2024,
    title = {Gemma 2},
    url = {https://www.kaggle.com/models/google/gemma-2},
    publisher = {Kaggle},
    author = {Gemma Team},
    year = {2024}
}

@misc{litgpt-2023,
    author = {Lightning AI},
    title = {LitGPT},
    howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
    year = {2023}
}

License

Gemma Terms of Use

Format: GGUF · Model size: 2.61B params · Architecture: gemma2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

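A GGUF file from this repo can be run directly with llama.cpp. The sketch below is a hypothetical invocation: the exact `.gguf` filename depends on which quantization you download, and `llama-cli` is llama.cpp's example binary.

```shell
# Run a 4-bit quant with llama.cpp (filename is illustrative).
./llama-cli -m gemma-2-baku-2b.Q4_K_M.gguf \
    -p "西田幾多郎は、" \
    -n 256
```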