Code-Jamba-v0.1

This model is trained on my datasets Code-290k-ShareGPT and Code-Feedback, and is fine-tuned from Jamba-v0.1. It is very good at code generation in a variety of languages such as Python, Java, JavaScript, Go, C++, Rust, Ruby, SQL, MySQL, R, Julia, Haskell, etc. The model also generates a detailed explanation of the logic behind each piece of code. It uses the ChatML prompt format.

Training

The entire dataset was trained on 2 x H100 94GB GPUs. Training for 3 epochs took 162 hours. Axolotl, along with the DeepSpeed codebase, was used for training. The base model is Jamba-v0.1 by AI21 Labs.

This is a QLoRA model. Links to quantized models will be added very soon.
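Until the quantized builds listed below are published, one option is to load the model in 4-bit on the fly with bitsandbytes. The sketch below is an illustrative assumption, not a setting recommended by the model author; it assumes a recent transformers release with Jamba support and bitsandbytes installed.

```python
# Minimal sketch: on-the-fly 4-bit loading via bitsandbytes (assumption,
# not an official quantized release). Requires transformers with Jamba
# support, plus the bitsandbytes and accelerate packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ajibawa-2023/Code-Jamba-v0.1"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # illustrative choice
    bnb_4bit_compute_dtype=torch.bfloat16,   # matches the model's BF16 weights
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```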

GPTQ, GGUF, AWQ & Exllama

GPTQ: TBA

GGUF: TBA

AWQ: TBA

Exllama v2: TBA

Example Prompt:

This model uses the ChatML prompt format:

```
<|im_start|>system
You are a Helpful Assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.
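As a minimal sketch of how you might run the model with this prompt format via Hugging Face transformers (the user message and generation settings here are illustrative assumptions, not tuned values):

```python
# Minimal inference sketch, assuming a transformers release with Jamba
# support. Generation settings are illustrative, not tuned.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Jamba-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the ChatML prompt shown above (example user request is hypothetical).
prompt = (
    "<|im_start|>system\n"
    "You are a Helpful Assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that reverses a linked list.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```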

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.

Thank you for your love & support.

Example Output

Coming soon!

Model size: 51.6B params (Safetensors) · Tensor type: BF16
