---
datasets:
- zed-industries/zeta
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/644a8bc1cb1654dcb6e762f9/6296GYaJsrUBSAeUwUHvm.png" width="100">
# Edit Prediction: Fine-Tuned from Qwen2.5-Coder-7B
This repository contains a version of **Qwen2.5-Coder-7B** fine-tuned to support [edit prediction](https://zed.dev/edit-prediction) in Zed.
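
For a quick local smoke test, the sketch below loads the model with `transformers`. It is a minimal sketch, not Zed's integration: the prompt is a placeholder, since the edit-prediction prompt format Zed uses is not documented in this README.

```python
# Minimal local-inference sketch (assumes a GPU with enough memory for a
# 7B model and the `accelerate` package for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zed-industries/zeta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Placeholder prompt: Zed's actual edit-prediction prompt format is not
# documented here.
prompt = "# placeholder prompt"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```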
## Training Details
The model has been fine-tuned using the [zeta dataset](https://huggingface.co/datasets/zed-industries/zeta). If you want to fine-tune the model yourself, you can refer to the following notebooks:
- **DPO Fine-Tuning**: [View Notebook](https://huggingface.co/datasets/zed-industries/zeta/blob/main/script/dpo.ipynb)
- **SFT Fine-Tuning**: [View Notebook](https://huggingface.co/datasets/zed-industries/zeta/blob/main/script/sft.ipynb)
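
The notebooks above are the authoritative recipe. For a rough sense of the moving parts, here is a schematic SFT sketch with `trl`; the `input`/`output` column names, the `train` split, and the naive concatenation are assumptions, not the actual zeta prompt template.

```python
# Schematic SFT sketch -- see the SFT notebook above for the real recipe.
# Column names ("input", "output"), the "train" split, and the simple
# concatenation below are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("zed-industries/zeta", split="train")

def to_text(example):
    # Placeholder formatting; use the prompt template from the notebook.
    return {"text": example["input"] + example["output"]}

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-7B",        # base model being fine-tuned
    train_dataset=dataset.map(to_text),
    args=SFTConfig(output_dir="zeta-sft"),
)
trainer.train()
```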
## Dataset
The dataset used for training is available at:
[zed-industries/zeta](https://huggingface.co/datasets/zed-industries/zeta)
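
A minimal sketch for inspecting the dataset with the `datasets` library; split and field names are printed rather than assumed:

```python
from datasets import load_dataset

ds = load_dataset("zed-industries/zeta")
print(ds)                # available splits and row counts
split = next(iter(ds))   # first split name
print(ds[split][0])      # one example, showing the field names
```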
## Running Zeta
### vLLM - Simple
`vllm serve zed-industries/zeta --served-model-name zeta`
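
The server exposes an OpenAI-compatible API (by default at `http://localhost:8000/v1`). A minimal client sketch, assuming the `openai` Python package; the prompt is a placeholder, since Zed's edit-prediction prompt format is not documented here:

```python
from openai import OpenAI

# Any string works as the API key unless the server was started with one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.completions.create(
    model="zeta",  # matches --served-model-name above
    prompt="# placeholder prompt",
    max_tokens=256,
    temperature=0.0,
)
print(response.choices[0].text)
```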
### vLLM - Advanced
- [Quantization](https://docs.vllm.ai/en/latest/features/quantization/fp8.html): vLLM supports FP8 (8-bit floating point) weight and activation quantization with hardware acceleration on GPUs such as NVIDIA H100 and AMD MI300x.
- [N-gram Speculative Decoding](https://docs.vllm.ai/en/latest/features/spec_decode.html#speculating-by-matching-n-grams-in-the-prompt): configures vLLM to run speculative decoding with proposals generated by matching n-grams in the prompt. This is a great fit for edit prediction, since many of the output tokens are already present in the prompt and the model only needs to generate the changes to the code file.
`vllm serve zed-industries/zeta --served-model-name zeta --enable-prefix-caching --enable-chunked-prefill --quantization="fp8" --speculative-model [ngram] --ngram-prompt-lookup-max 4 --ngram-prompt-lookup-min 2 --num-speculative-tokens 8`
## Learn More
For more insights about the model and its integration in Zed, check out the official blog post:
[Zed Blog - Edit Prediction](https://zed.dev/blog/edit-prediction)