
Quantization made by Richard Erkhov.

Github · Discord · Request more models

CodeV-CL-7B - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| CodeV-CL-7B.Q2_K.gguf | Q2_K | 2.36GB |
| CodeV-CL-7B.IQ3_XS.gguf | IQ3_XS | 2.6GB |
| CodeV-CL-7B.IQ3_S.gguf | IQ3_S | 2.75GB |
| CodeV-CL-7B.Q3_K_S.gguf | Q3_K_S | 2.75GB |
| CodeV-CL-7B.IQ3_M.gguf | IQ3_M | 2.9GB |
| CodeV-CL-7B.Q3_K.gguf | Q3_K | 3.07GB |
| CodeV-CL-7B.Q3_K_M.gguf | Q3_K_M | 3.07GB |
| CodeV-CL-7B.Q3_K_L.gguf | Q3_K_L | 3.35GB |
| CodeV-CL-7B.IQ4_XS.gguf | IQ4_XS | 3.4GB |
| CodeV-CL-7B.Q4_0.gguf | Q4_0 | 3.56GB |
| CodeV-CL-7B.IQ4_NL.gguf | IQ4_NL | 3.58GB |
| CodeV-CL-7B.Q4_K_S.gguf | Q4_K_S | 3.59GB |
| CodeV-CL-7B.Q4_K.gguf | Q4_K | 3.8GB |
| CodeV-CL-7B.Q4_K_M.gguf | Q4_K_M | 3.8GB |
| CodeV-CL-7B.Q4_1.gguf | Q4_1 | 3.95GB |
| CodeV-CL-7B.Q5_0.gguf | Q5_0 | 4.33GB |
| CodeV-CL-7B.Q5_K_S.gguf | Q5_K_S | 4.33GB |
| CodeV-CL-7B.Q5_K.gguf | Q5_K | 4.45GB |
| CodeV-CL-7B.Q5_K_M.gguf | Q5_K_M | 4.45GB |
| CodeV-CL-7B.Q5_1.gguf | Q5_1 | 4.72GB |
| CodeV-CL-7B.Q6_K.gguf | Q6_K | 5.15GB |
| CodeV-CL-7B.Q8_0.gguf | Q8_0 | 6.67GB |
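As a rough guide to choosing among these files, a minimal sketch (not part of the original card) that picks the largest quant from the table above fitting a given RAM budget, assuming roughly 1 GB of headroom for the KV cache and runtime overhead (actual overhead varies with context length):

```python
# GGUF file sizes (GB) for the K-quant and Q8_0 variants listed in the table above.
QUANT_SIZES_GB = {
    "Q2_K": 2.36, "Q3_K_S": 2.75, "Q3_K_M": 3.07, "Q3_K_L": 3.35,
    "Q4_K_S": 3.59, "Q4_K_M": 3.8, "Q5_K_S": 4.33, "Q5_K_M": 4.45,
    "Q6_K": 5.15, "Q8_0": 6.67,
}

def pick_quant(ram_budget_gb: float, overhead_gb: float = 1.0):
    """Return the largest quant whose file plus overhead fits the budget, or None."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size + overhead_gb <= ram_budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(5.0))  # Q4_K_M
```

Larger quants generally give quality closer to the original fp16 weights; Q4_K_M is a common default trade-off.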

Original model description:

```yaml
license: llama2
library_name: transformers
pipeline_tag: text-generation
tags:
  - code
```

CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

CodeV is an innovative series of open-source, instruction-tuned Large Language Models (LLMs) specifically designed for the generation of high-quality Verilog code, addressing the challenges faced by existing models in this domain. (This repo is under development)

Models and Datasets

Test

If you want to test the generation capability of existing models on Verilog, you need to install the VerilogEval and RTLLM environments.

Quick Start

```python
from transformers import pipeline
import torch

prompt = "FILL IN THE QUESTION"

generator = pipeline(
    model="CODEV",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Greedy decoding: transformers rejects temperature=0.0, so disable sampling instead.
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```
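The card leaves the prompt as a placeholder. As a hypothetical illustration only (the card does not specify the instruction format CodeV expects), a Verilog-generation prompt might look like:

```python
# Hypothetical example prompt; the module name and wording are illustrative.
prompt = (
    "Write a Verilog module named `counter8` with inputs `clk` and `rst` "
    "(active-high synchronous reset) and an 8-bit output `count` that "
    "increments on every rising clock edge."
)
print(prompt)
```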

Paper

Arxiv: https://arxiv.org/abs/2407.10424

Please cite the paper if you use the models from CodeV.

@misc{yang-z,
      title={CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization}, 
      author={Yang Zhao and Di Huang and Chongxiao Li and Pengwei Jin and Ziyuan Nan and Tianyun Ma and Lei Qi and Yansong Pan and Zhenxing Zhang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu and Yunji Chen},
      year={2024},
      eprint={2407.10424},
      archivePrefix={arXiv},
      primaryClass={cs.PL},
      url={https://arxiv.org/abs/2407.10424}, 
}

Acknowledgements

Format: GGUF · Model size: 6.74B params · Architecture: llama
