Meta-Llama-3.1-70B-Instruct-FP8-128K

Model Overview

  • Model Architecture: Meta-Llama-3.1
    • Input: Text
    • Output: Text
  • Model Optimizations (a rough FP8 sketch follows this overview):
    • Weight quantization: FP8
    • Activation quantization: FP8
    • KV cache quantization: FP8
  • Intended Use Cases: Intended for commercial and research use in multiple languages. Similarly to Meta-Llama-3.1-70B-Instruct, this model is intended for assistant-like chat.
  • Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
  • Release Date: 8/27/2024
  • Version: 1.0
  • License(s): llama3.1
  • Quantized version of Meta-Llama-3.1-70B-Instruct.
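
The FP8 scheme stores weights, activations, and the KV cache in 8-bit floating point (E4M3), roughly halving memory relative to BF16. The PyTorch sketch below illustrates per-tensor FP8 quantization in general terms only; it is not the exact recipe used to produce this checkpoint, the helper names are illustrative, and it requires a PyTorch build that provides torch.float8_e4m3fn (2.1 or newer).

import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable by torch.float8_e4m3fn

def quantize_fp8_per_tensor(w: torch.Tensor):
    # One scale for the whole tensor, chosen so the largest magnitude maps to the FP8 maximum.
    scale = w.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    w_fp8 = (w / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximation of the original tensor for reference computations.
    return w_fp8.to(torch.float32) * scale

# Toy usage: quantize a random weight matrix and report the mean reconstruction error.
w = torch.randn(4096, 4096)
w_fp8, scale = quantize_fp8_per_tensor(w)
print((dequantize_fp8(w_fp8, scale) - w).abs().mean())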

Serve with vLLM engine

python3 -m vllm.entrypoints.openai.api_server \
    --port <port> --model yejingfu/Meta-Llama-3.1-70B-Instruct-FP8-128K \
    --tensor-parallel-size 4 --swap-space 16 --gpu-memory-utilization 0.96 --dtype auto \
    --max-num-seqs 32 --max-model-len 131072 --kv-cache-dtype fp8 --enable-chunked-prefill
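
The server exposes an OpenAI-compatible API. A minimal Python client sketch follows; it assumes the openai package (1.x) is installed and that the server above was started with --port 8000 on localhost, so adjust base_url to match your deployment.

from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; no real API key is required by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="yejingfu/Meta-Llama-3.1-70B-Instruct-FP8-128K",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what FP8 quantization does in one sentence."},
    ],
    max_tokens=128,
    temperature=0.7,
)
print(response.choices[0].message.content)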

Safetensors

  • Model size: 70.6B params
  • Tensor types: BF16, F8_E4M3