---
license: other
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- ScalingIntelligence/KernelBench
---

# KernelLLM

![scatter performance comparison plot](llm_performance_comparison.png)

Caption: On KernelBench-Triton Level 1, our 8B parameter model matches GPT-4o in single-shot performance. With multiple inferences, KernelLLM's performance matches DeepSeek R1. This is all from a model with two orders of magnitude fewer parameters than its competitors.

## Making Kernel Development More Accessible with KernelLLM

We introduce KernelLLM, a large language model based on Llama 3.1, which has been trained specifically for the task of authoring GPU kernels using Triton. KernelLLM translates PyTorch modules into Triton kernels and was evaluated on KernelBench-Triton (see [here](https://github.com/ScalingIntelligence/KernelBench/pull/35)).

KernelLLM's vision is to meet the growing demand for high-performance GPU kernels by automating the generation of efficient Triton implementations. As workloads grow larger and more diverse accelerator architectures emerge, the need for tailored kernel solutions has increased significantly. Although a number of [works](https://metr.org/blog/2025-02-14-measuring-automated-kernel-engineering/) [exist](https://cognition.ai/blog/kevin-32b), most of them are limited to [test-time](https://sakana.ai/ai-cuda-engineer/) [optimization](https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/), while some tune on solution traces of KernelBench problems themselves. To the best of our knowledge, KernelLLM is the first LLM finetuned on external (torch, triton) pairs, and we hope that making our model available can accelerate progress.

KernelLLM aims to democratize GPU programming by making kernel development more accessible and efficient.

![KernelLLM workflow for Triton kernel generation](triton-kernel-workflow.png)

Caption: KernelLLM workflow for Triton kernel generation: our approach uses KernelLLM to translate PyTorch code (green) into Triton kernel candidates. Input and output components are marked in bold. The generations are validated against unit tests, which run the kernels with random inputs of known shapes. This workflow allows us to evaluate multiple generations (pass@k) by increasing the number of kernel candidate generations. The best kernel implementation is selected and returned (green output).
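
To make the validation and selection step concrete, the sketch below shows one way such a generate-and-validate loop could look. It is illustrative rather than the project's actual harness: `generate_triton` comes from the usage example later in this card, while the `ModelNew` entry-point name, the `exec`-based execution, and the tolerances are assumptions.

```python
import torch

def best_of_k(model, pytorch_code, ref_module, get_inputs, k=10):
    """Illustrative pass@k-style loop: sample k Triton candidates and return
    the first whose output matches the PyTorch reference on random inputs."""
    inputs = get_inputs()            # random inputs of known shapes
    expected = ref_module(*inputs)   # reference output from the original module
    for _ in range(k):
        candidate = model.generate_triton(pytorch_code, max_new_tokens=512)
        try:
            namespace = {}
            exec(candidate, namespace)                 # assumes the generation defines `ModelNew`
            actual = namespace["ModelNew"]()(*inputs)  # instantiate and run the candidate
        except Exception:
            continue                 # discard candidates that fail to compile or run
        if torch.allclose(actual, expected, rtol=1e-2, atol=1e-2):
            return candidate
    return None                      # no candidate passed the unit test
```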

The model was trained on approximately 25,000 paired examples of PyTorch modules and their equivalent Triton kernel implementations, plus additional synthetically generated samples. Our approach combines filtered code from TheStack [Kocetkov et al. 2022] with synthetic examples generated through `torch.compile()` and additional prompting techniques. The filtered and compiled dataset can be found [on Huggingface](https://huggingface.co/datasets/GPUMODE/Inductor_Created_Data_Permissive).
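
As an illustration of the general technique (not necessarily the exact pipeline used here), the Triton kernels that TorchInductor emits for a module can be captured by compiling it and enabling PyTorch's output-code logging:

```python
# Run as: TORCH_LOGS="output_code" python harvest_pair.py
# This prints the Triton code TorchInductor generates for the compiled graph,
# which can then be paired with the original PyTorch source as a training example.
import torch

def torch_module(x, y):
    # the "torch" half of a (torch, triton) pair
    return torch.relu(x @ y)

compiled = torch.compile(torch_module)
x = torch.randn(128, 128, device="cuda")
y = torch.randn(128, 128, device="cuda")
compiled(x, y)  # first call triggers compilation and emits the Triton kernels
```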

We finetuned Llama3.1-8B-Instruct on the created dataset using supervised instruction tuning and measured its ability to generate correct Triton kernels and calling code on KernelBench-Triton, our newly created variant of KernelBench [Ouyang et al. 2025] targeting Triton kernel generation. During both training and evaluation, the torch code was wrapped in a prompt template containing a format example as instruction. The model was trained for 10 epochs with a batch size of 32, using a standard SFT recipe with hyperparameters selected by perplexity on a held-out subset of the data. Training took roughly 12 hours of wall-clock time on 16 GPUs (192 GPU hours), and we report the best validation results obtained.
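
The exact template is not reproduced in this card (it ships with `kernelllm.py`), but per the description above it pairs an instruction and a format example with the torch source. The sketch below is a hypothetical reconstruction of that shape, not the actual template:

```python
# Hypothetical sketch of an instruction-plus-format-example prompt template.
PROMPT_TEMPLATE = """Rewrite the following PyTorch module as an optimized Triton
implementation named `ModelNew`, keeping the same inputs and outputs.

Example input:
{example_torch}

Example output:
{example_triton}

Input:
{torch_source}

Output:
"""

prompt = PROMPT_TEMPLATE.format(
    example_torch="<torch format example>",    # placeholder
    example_triton="<triton format example>",  # placeholder
    torch_source="<module to translate>",      # placeholder
)
```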

### Model Performance

KernelLLM significantly outperforms larger general-purpose models on specialized kernel generation tasks, demonstrating the value of domain-specific fine-tuning.

| Model | Parameters (B) | Score | Pass@k |
|-------|---------------|-------|--------|
| KernelLLM | 8 | 15.5 | 1 |
| KernelLLM | 8 | 34.7 | 10 |
| KernelLLM | 8 | 39.8 | 20 |
| DeepSeek V3 | 671 | 16 | 1 |
| GPT-4o | ~200 | 15 | 1 |
| Qwen2.5 | 32 | 15 | 1 |
| Llama 3.3 | 70 | 13 | 1 |
| Llama 3.1 | 8 | 6 | 1 |
| Llama 3.1 | 8 | 14 | 20 |
| Llama R1 Distill | 70 | 11 | reasoning |
| DeepSeek R1 | 671 | 30 | 1 |

Our 8B parameter model achieves competitive or superior performance compared to much larger models on kernel generation tasks, demonstrating the effectiveness of our specialized training approach.

The resulting model is competitive with state-of-the-art LLMs despite its small size. We evaluate our model on KernelBench, an open-source benchmark that measures the ability of LLMs to write efficient GPU kernels. It contains 250 selected PyTorch modules organized into difficulty levels, from single torch operators such as Conv2D or Swish (level 1) to full model architectures (level 3). The benchmark measures both correctness (by comparing against reference PyTorch outputs) and performance (by measuring speedup over baseline implementations). We implemented a new KernelBench-Triton variant that evaluates an LLM's ability to generate Triton kernels, making it an ideal benchmark for evaluating KernelLLM's capabilities. All our measurements were done on Nvidia H100 GPUs.
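
For reference, pass@k scores like those in the table above are conventionally computed with the unbiased estimator of Chen et al. (2021); the card does not spell out its evaluation script, so the snippet below shows the generic estimator rather than the project's code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k (Chen et al., 2021): the probability that at least one
    of k samples is correct, given c correct among n generations per task."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 100 generations for a task, 40 of them correct
print(pass_at_k(n=100, c=40, k=10))  # ≈ 0.996
```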

For more information, please see [Project Popcorn](https://gpu-mode.github.io/popcorn/).

## Installation

To use KernelLLM, install the required dependencies:

```bash
pip install transformers accelerate torch triton
```

## Usage

KernelLLM provides a simple interface for generating Triton kernels from PyTorch code. The included `kernelllm.py` script offers multiple methods for interacting with the model.

### Basic Usage

```python
from kernelllm import KernelLLM

# Initialize the model
model = KernelLLM()

# Define your PyTorch module
pytorch_code = '''
import torch
import torch.nn as nn

class Model(nn.Module):
    """
    A model that computes Hinge Loss for binary classification tasks.
    """
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, predictions, targets):
        return torch.mean(torch.clamp(1 - predictions * targets, min=0))

batch_size = 128
input_shape = (1,)

def get_inputs():
    return [torch.randn(batch_size, *input_shape), torch.randint(0, 2, (batch_size, 1)).float() * 2 - 1]

def get_init_inputs():
    return []
'''

# Generate optimized Triton code
optimized_code = model.generate_triton(pytorch_code, max_new_tokens=512)
print(optimized_code)
```

### Interactive REPL

You can also use the built-in REPL interface:

```bash
python kernelllm.py
```

This will start an interactive session where you can input your PyTorch code and receive Triton-optimized implementations.

### Advanced Options

KernelLLM provides several methods for customizing the generation process:

```python
from kernelllm import KernelLLM

model = KernelLLM()

# Stream output in real-time
model.stream_raw("Your prompt here", max_new_tokens=2048)

# Generate raw text without the Triton-specific prompt template
raw_output = model.generate_raw("Your prompt here", temperature=0.6, max_new_tokens=2048)
```

## Current Limitations and Future Work

Despite showing promising results, KernelLLM has several limitations:

- The model may still produce incorrect API references and syntax errors
- Generated code structurally resembles compiler-generated output
- Error analysis shows common issues related to tensor shapes, type handling, and numerical precision (see the sketch below)
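
Given these failure modes, it is prudent to exercise a generated kernel across several shapes and dtypes before relying on it. A minimal sketch, assuming a generated entry point `kernel_fn` with the same signature as the reference function:

```python
import torch

def smoke_test(kernel_fn, reference_fn, shapes, dtypes=(torch.float16, torch.float32)):
    """Compare a generated kernel against its PyTorch reference across several
    shapes and dtypes to surface shape, type, and precision issues."""
    for shape in shapes:
        for dtype in dtypes:
            x = torch.randn(*shape, device="cuda", dtype=dtype)
            y = torch.randn(*shape, device="cuda", dtype=dtype)
            torch.testing.assert_close(
                kernel_fn(x, y),
                reference_fn(x, y),
                rtol=1e-2, atol=1e-2,  # loose tolerances to accommodate fp16
            )

# Example: smoke_test(generated_add, torch.add, shapes=[(128,), (64, 64), (8, 128, 128)])
```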

## Model Details

**Model Developers:** Meta.

**Input:** Models input text only.

**Output:** Models generate text only.

**Model Architecture:** KernelLLM is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates:** KernelLLM was trained in March 2025.

**Status:** This is a static model trained on an offline dataset.

**License:** See LICENSE.pdf for details.

## Intended Use

**Intended Use Cases:** KernelLLM is intended for commercial and research use in English and relevant programming languages, in particular Python and Triton.

**Out-of-Scope Uses:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) and Licensing Agreement for KernelLLM and its variants.

## Hardware and Software

**Training Factors:** We used custom training libraries.

**Carbon Footprint:** In aggregate, training KernelLLM required 250 hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of the base model. 100% of the estimated tCO2eq emissions were offset by Meta's sustainability program.

## Ethical Considerations and Limitations

KernelLLM and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, KernelLLM's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of KernelLLM, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).