Overview
The purpose of this model is to serve as a base model for building Nurture Intelligence. It was created by merging a model that is strong in ASEAN languages, a model that is strong in Japanese, and a model with high general intelligence; instruct models are then trained on this merged base with our dataset.
We would like to thank all those who created the original models!
How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "Nurture-intelligence/kEy-llama3.1-8b-v0.1"

# Load the tokenizer and the model in bfloat16, spreading weights across available devices
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Please list five important things to keep in mind when creating an LLM that can improve itself."},
]

# Build the prompt with the model's chat template
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.2,  # a temperature of 0.6 or lower is recommended
    max_new_tokens=1024,
)

# Decode only the newly generated tokens
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
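For interactive use, the reply can also be streamed token by token instead of printed only at the end. The snippet below is a minimal sketch building on the variables above (`model`, `tokenizer`, `input_ids`); it uses transformers' `TextStreamer` with the same recommended sampling settings, and is not part of the original usage example.

```python
from transformers import TextStreamer

# Print the reply to stdout as it is generated; skip the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    do_sample=True,
    temperature=0.2,  # keep within the recommended range (0.6 or lower)
    max_new_tokens=1024,
    streamer=streamer,
)
```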
Used models
- meditsolutions/Llama-3.1-MedIT-SUN-8B
- elyza/Llama-3-ELYZA-JP-8B
- aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
Configuration
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: meditsolutions/Llama-3.1-MedIT-SUN-8B
    parameters:
      weight: 1.0
  - model: elyza/Llama-3-ELYZA-JP-8B
    parameters:
      weight: 0.3
  - model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
    parameters:
      weight: 0.5
merge_method: breadcrumbs
base_model: meditsolutions/Llama-3.1-MedIT-SUN-8B
dtype: bfloat16
```
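A configuration of this kind can be passed to mergekit, either through its `mergekit-yaml` CLI or its Python API. The snippet below is a rough sketch of the latter, assuming the YAML above is saved as `config.yaml` and the merged weights are written to `./merged`; it is not the exact command used to build this model.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumed paths: the YAML above saved as config.yaml, output written to ./merged
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(copy_tokenizer=True),  # keep the base model's tokenizer
)
```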
Model tree for Nurture-intelligence/kEy-llama3.1-8b-v0.1
- Base model: meta-llama/Llama-3.1-8B
- Finetuned: meta-llama/Llama-3.1-8B-Instruct
- Finetuned: arcee-ai/Llama-3.1-SuperNova-Lite
- Finetuned: meditsolutions/Llama-3.1-MedIT-SUN-8B