Jais-590m-merged / README.md
Solshine's picture
Update README.md
7165054 verified
|
raw
history blame
3.39 kB
metadata
base_model:
  - inceptionai/jais-family-590m
  - inceptionai/jais-family-590m
tags:
  - merge
  - mergekit
  - lazymergekit
  - inceptionai/jais-family-590m

Jais-590m-merged

Jais-590m-merged is a merge of the following models using LazyMergekit:

🧩 Configuration

slices:
  - sources:
      - model: inceptionai/jais-family-590m
        layer_range: [0, 18]
      - model: inceptionai/jais-family-590m
        layer_range: [0, 18]
merge_method: slerp
base_model: inceptionai/jais-family-590m
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

💻 Usage

/Due to the jais family tokenizer deployment with trust remote code, especially if handling Arabic, the following implementation is suggested for inferencing this merge model/

!pip install -qU transformers accelerate

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Model and message setup
model_name = "Solshine/Jais-590m-merged"
user_message = "Explain how transformers work in machine learning"  # This can be any user input

# Structure the message with role-content pairing for compatibility with Jais-chat format
messages = [{"role": "user", "content": user_message}]

# Initialize tokenizer with trust_remote_code for custom Arabic-English handling
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Check if tokenizer is valid
if tokenizer is None:
    raise ValueError("Tokenizer initialization failed!")

# Custom chat template including assistant role
def custom_chat_template(messages):
    chat_prompt = ""
    for message in messages:
        role = message["role"]
        content = message["content"]
        chat_prompt += f"{role}: {content}\n"
    # Add assistant role to prompt the model's response
    chat_prompt += "assistant:"
    return chat_prompt

# Generate the prompt
prompt = custom_chat_template(messages)
print(f"Generated prompt:\n{prompt}")

# Initialize the model
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
if model is None:
    raise ValueError("Model initialization failed!")

# Move model to the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Initialize the text generation pipeline
text_gen_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device,
    torch_dtype=torch.float16,
    trust_remote_code=True
)

# Generate text
try:
    outputs = text_gen_pipeline(
        prompt,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id  # Ensure proper stopping
    )
    # Extract and print the assistant's response
    generated_text = outputs[0]["generated_text"]
    assistant_response = generated_text.split("assistant:")[1].strip()
    print(f"Assistant's response:\n{assistant_response}")
except Exception as e:
    print(f"Error during text generation: {e}")