---
datasets:
  - zed-industries/zeta
license: apache-2.0
---

# Edit Prediction: Fine-Tuned from Qwen2.5-Coder-7B

This repository contains a fine-tuned version of Qwen2.5-Coder-7B to support edit prediction in Zed.

## Training Details

The model has been fine-tuned using the zeta dataset. If you want to fine-tune the model yourself, you can refer to the following scripts:

## Dataset

The dataset used for training is available at [zed-industries/zeta](https://huggingface.co/datasets/zed-industries/zeta).
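
If you want to pull the dataset locally for inspection or your own fine-tuning run, here is a minimal sketch using the Hugging Face CLI. It assumes `huggingface-cli` is installed (for example via `pip install -U "huggingface_hub[cli]"`); the local directory name is just an example.

```sh
# Download a snapshot of the zeta dataset to a local directory.
# --repo-type dataset tells the CLI this is a dataset repo, not a model repo.
huggingface-cli download zed-industries/zeta --repo-type dataset --local-dir ./zeta-dataset
```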

## Running Zeta

### vLLM - Simple

```sh
vllm serve zed-industries/zeta --served-model-name zeta
```
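
Once the server is up, vLLM exposes an OpenAI-compatible API (on port 8000 by default). A minimal smoke test with `curl` is sketched below; the prompt is only a placeholder and does not reflect Zeta's actual edit-prediction prompt format.

```sh
# Query the OpenAI-compatible completions endpoint served by vLLM.
# "zeta" matches the --served-model-name passed to `vllm serve` above.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "zeta",
        "prompt": "def add(a, b):",
        "max_tokens": 64,
        "temperature": 0
      }'
```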

### vLLM - Advanced

- **Quantization**: vLLM supports FP8 (8-bit floating point) weight and activation quantization, with hardware acceleration on GPUs such as the Nvidia H100 and AMD MI300x.
- **N-gram Speculative Decoding**: configures vLLM to use speculative decoding, where proposals are generated by matching n-grams in the prompt. This is a great fit for edit prediction, since many of the output tokens are already present in the prompt and the model only needs to generate the changes to the code file.

```sh
vllm serve zed-industries/zeta \
  --served-model-name zeta \
  --enable-prefix-caching \
  --enable-chunked-prefill \
  --quantization="fp8" \
  --speculative-model [ngram] \
  --ngram-prompt-lookup-max 4 \
  --ngram-prompt-lookup-min 2 \
  --num-speculative-tokens 8
```

## Learn More

For more details on the model and its integration in Zed, check out the official blog post: Zed Blog - Edit Prediction