
# Neuronx model for Zephyr 7B β

This repository contains AWS Inferentia2- and neuronx-compatible checkpoints for `HuggingFaceH4/zephyr-7b-beta`. You can find detailed information about the base model on its Model Card.

This model has been exported to the neuron format using the specific `input_shapes` and compiler parameters detailed in the sections below.

Please refer to the 🤗 optimum-neuron documentation for an explanation of these parameters.

## Usage on Amazon SageMaker

*Coming soon.*

## Usage with 🤗 optimum-neuron

```python
from optimum.neuron import pipeline

pipe = pipeline('text-generation', 'aws-neuron/zephyr-7b-seqlen-2048-bs-4-cores-2')
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
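The `do_sample`, `temperature`, `top_k`, and `top_p` arguments control how the next token is chosen at each decoding step. As a rough sketch of the idea (plain Python for illustration, not the optimum-neuron internals), the filtering works like this:

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=50, top_p=0.95, rng=None):
    """Pick a token id from raw logits using temperature, top-k, and top-p (nucleus) filtering."""
    rng = rng or random.Random(0)
    # Temperature: values below 1 sharpen the distribution, values above 1 flatten it.
    scaled = [l / temperature for l in logits]
    # Softmax over the scaled logits (shift by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-k: keep only the k most probable tokens.
    probs.sort(key=lambda p: p[1], reverse=True)
    probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens and sample one.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With `do_sample=False` the pipeline would instead pick the single most probable token greedily.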

This repository contains tags specific to versions of `neuronx`. When using it with 🤗 optimum-neuron, pass the repository revision that matches your installed `neuronx` version to load the correct serialized checkpoints.

## Arguments passed during export

### input_shapes

```json
{
  "batch_size": 4,
  "sequence_length": 2048
}
```
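Neuron compilation fixes tensor shapes at export time, so the compiled model always processes batches of exactly 4 sequences of 2048 token positions; shorter inputs are padded up to those shapes. The pipeline and tokenizer handle this automatically, but the idea can be sketched with a hypothetical helper (`pad_batch` is illustrative, not part of optimum-neuron):

```python
def pad_batch(sequences, batch_size=4, sequence_length=2048, pad_id=0):
    """Pad token-id sequences out to the static shapes the compiled model expects.

    Hypothetical helper for illustration only; in practice optimum-neuron's
    tokenizer/pipeline performs this padding for you.
    """
    if len(sequences) > batch_size:
        raise ValueError(f"compiled model accepts at most {batch_size} sequences")
    padded = []
    for ids in sequences:
        if len(ids) > sequence_length:
            raise ValueError(f"sequence longer than {sequence_length} tokens")
        # Right-pad each sequence to the fixed sequence length.
        padded.append(ids + [pad_id] * (sequence_length - len(ids)))
    # Fill missing rows so the batch dimension is always exactly batch_size.
    while len(padded) < batch_size:
        padded.append([pad_id] * sequence_length)
    return padded
```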

### compiler_args

```json
{
  "auto_cast_type": "fp16",
  "num_cores": 2
}
```
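`"auto_cast_type": "fp16"` casts computation to IEEE-754 half precision during compilation, and `"num_cores": 2` spreads the model across two NeuronCores. fp16 has a 10-bit mantissa (roughly 3 decimal digits of precision) and a maximum finite value of 65504, so values round slightly on conversion. A quick stdlib-only illustration of that rounding, using `struct`'s half-precision `'e'` format:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float (fp64) through IEEE-754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 1.0 is exactly representable in fp16, but 0.1 is not:
# it rounds to the nearest half-precision value (~0.0999755859375).
```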