---
license: llama3
tags:
- moe
- merge
language:
- en
---
# Megatron_llama3_2x8B
Megatron_llama3_2x8B is a Mixture of Experts (MoE) model that combines two Llama 3 8B models.
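
The card does not document how the two experts were combined. Purely as an illustration, a two-expert Llama 3 MoE like this is commonly assembled with `mergekit-moe` using a config along the following lines; the base model, expert names, and routing prompts below are placeholders, not the actual recipe behind Megatron_llama3_2x8B.

```yaml
# Hypothetical mergekit-moe config for a 2x8B Llama 3 MoE (illustrative only)
base_model: meta-llama/Meta-Llama-3-8B-Instruct        # placeholder base model
gate_mode: hidden        # initialize router gates from hidden-state activations
dtype: bfloat16
experts:
  - source_model: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder expert 1
    positive_prompts:
      - "chat"
      - "explain"
  - source_model: another-llama-3-8b-finetune          # placeholder expert 2
    positive_prompts:
      - "math"
      - "code"
```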
## 💻 Usage
```shell
pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Eurdem/Megatron_llama3_2x8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the model in 4-bit (via bitsandbytes) and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit=True)

messages = [
    {"role": "system", "content": "You are a helpful chatbot who always responds in a friendly way."},
    {"role": "user", "content": "f(x)=3x^2+4x+12 so what is f(3)?"},
]

# Build the chat prompt and move it to the GPU
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids,
                         max_new_tokens=1024,
                         do_sample=True,
                         temperature=0.7,
                         top_p=0.7,
                         top_k=500,
                         eos_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (everything after the prompt)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
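
For reference when sanity-checking the output, the example prompt has a single correct answer: f(3) = 3·3² + 4·3 + 12 = 27 + 12 + 12 = 51.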