---
license: apache-2.0
---
# DeciLM-7B-instruct GGUF checkpoints
This repository contains DeciLM-7B-instruct checkpoints in the GGUF format.
DeciLM-7B-instruct delivers strong performance on commodity CPUs when run with the llama.cpp codebase.
## 1. Clone and build llama.cpp (1 minute)
```sh
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
```
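A successful build (assuming the default Makefile targets) places the `main` binary in the repository root; as a quick optional sanity check you can print its help text:

```sh
# Confirm the build produced the `main` binary used in the steps below.
./main --help
```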
## 2. Download the GGUF checkpoint
- Navigate to the 'Files' section
- Click on 'decilm-7b-uniform-gqa-q8_0.gguf'
- Click on the 'Download' button (a command-line alternative is sketched below)
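Alternatively, the checkpoint can be fetched from the command line with `huggingface-cli`. This is a minimal sketch: the repository id `Deci/DeciLM-7B-instruct-GGUF` is an assumption and should be replaced with the id shown at the top of this model card if it differs.

```sh
pip install -U "huggingface_hub[cli]"

# Assumed repository id; adjust if this model card lives under a different name.
huggingface-cli download Deci/DeciLM-7B-instruct-GGUF \
  decilm-7b-uniform-gqa-q8_0.gguf \
  --local-dir ~/Downloads
```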
## 3. Generate outputs
Feed a prompt to DeciLM-7B-instruct using its chat template. The example below uses the INT8-quantized (q8_0) GGUF checkpoint.
```sh
./main -m ~/Downloads/decilm-7b-uniform-gqa-q8_0.gguf -p """
### System:
You are an AI assistant that follows instructions exceptionally well. Be as helpful as possible.

### User:
How do I make the most delicious pancakes the world has ever tasted?

### Assistant:
"""
```
Output:
```
### System:
You are an AI assistant that follows instructions exceptionally well. Be as helpful as possible.

### User:
How do I make the most delicious pancakes the world has ever tasted?

### Assistant:
To make the most delicious pancakes the world has ever tasted, follow these steps: 1. In a mixing bowl, combine 2 cups of all-purpose flour, 4 tablespoons of sugar, and 3 teaspoons of baking powder with 1/2 teaspoon salt; mix well. 2. Make a well in the center and pour in 4 eggs and 1 cup of milk. Whisk well until smooth. Add 3 tablespoons of oil and 1 tablespoon of melted butter. 3. Heat your frying pan with some butter or oil. Ladle the batter onto the pan, and spread it to a 1/2-inch thickness. Wait for tiny bubbles to form on the surface, then flip it over to brown the other side until golden. 4. Enjoy your delicious pancakes [end of text]

llama_print_timings:        load time =     343.16 ms
llama_print_timings:      sample time =      14.69 ms /   172 runs   (   0.09 ms per token, 11712.63 tokens per second)
llama_print_timings: prompt eval time =     239.48 ms /    52 tokens (   4.61 ms per token,   217.14 tokens per second)
llama_print_timings:        eval time =    7767.20 ms /   171 runs   (  45.42 ms per token,    22.02 tokens per second)
llama_print_timings:       total time =    8045.89 ms
ggml_metal_free: deallocating
Log end
```
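For longer or more controlled generations, `main` accepts the usual llama.cpp sampling and threading flags. This is an illustrative sketch; the flag values below are assumptions, not settings tuned for DeciLM-7B-instruct.

```sh
# Illustrative flags (not tuned): -t CPU threads, -c context size,
# -n max tokens to generate, --temp sampling temperature.
./main -m ~/Downloads/decilm-7b-uniform-gqa-q8_0.gguf \
  -t 8 -c 4096 -n 256 --temp 0.7 \
  -p "### System: You are a helpful AI assistant. ### User: Hello! ### Assistant:"
```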