---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
  - en
pipeline_tag: text-generation
tags:
  - nlp
  - code
---

# MobiLlama-05B


## Model Summary

MobiLlama-05B is a small language model with 0.5 billion parameters. It was trained on the Amber data sources (the Amber-Dataset).

## Model Description

## How to Use

MobiLlama-05B has been integrated into the development version (4.37.0.dev) of `transformers`. Until the official version is released through pip, ensure that you are doing one of the following:

- When loading the model, ensure that `trust_remote_code=True` is passed as an argument to the `from_pretrained()` function.

- Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. This command is an alternative to cloning and installing from source.

The current `transformers` version can be verified with `pip list | grep transformers`.

To load a specific checkpoint, simply pass a revision with a value between `"ckpt_000"` and `"ckpt_358"`. If no revision is provided, it will load `"ckpt_359"`, which is the final checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# trust_remote_code=True is required until the model code ships in a released transformers version
model = AutoModelForCausalLM.from_pretrained("MBZUAI/MobiLlama-05B", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-05B", trust_remote_code=True)

text = "Write a C language program to find the Fibonacci series."
input_ids = tokenizer(text, return_tensors="pt").to("cuda").input_ids

# Generate up to 1000 tokens; repetition_penalty discourages repeated phrases
outputs = model.generate(input_ids, max_length=1000, repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, dropping the prompt and the trailing EOS token
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
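
To pin an intermediate checkpoint, the same `from_pretrained()` call accepts the `revision` argument described above. A minimal sketch follows; the checkpoint name `"ckpt_100"` is an arbitrary illustration, not a recommended revision:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "ckpt_100" is an arbitrary example; any revision from "ckpt_000" to "ckpt_359" should work
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI/MobiLlama-05B",
    revision="ckpt_100",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-05B", revision="ckpt_100", trust_remote_code=True)
```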

## Intended Uses

Given the nature of the training data, the MobiLlama-05B model is best suited for prompts using the QA format, the chat format, and the code format.
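
The model card does not prescribe exact prompt templates, so the following sketches are only assumptions about what these three formats might look like for a base model of this kind:

```python
# Hypothetical prompt sketches for each format; the exact templates are not
# specified by the model card, so treat these as assumptions.
qa_prompt = "Question: What is the capital of France?\nAnswer:"
chat_prompt = "User: Can you suggest a quick breakfast recipe?\nAssistant:"
code_prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""'
```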