---
license: apache-2.0
language:
  - en
metrics:
  - bleu
  - rouge
tags:
  - causal-lm
  - code
  - cypher
  - graph
  - neo4j
inference: false
---

## Model Description

A fine-tune of [stabilityai/stable-code-instruct-3b](https://huggingface.co/stabilityai/stable-code-instruct-3b), trained on the [synthetic_opus_demodbs dataset](https://github.com/neo4j-labs/text2cypher/tree/main/datasets/synthetic_opus_demodbs) from neo4j-labs/text2cypher, to generate Cypher statements for graph databases such as Neo4j.

## Usage

### Safetensors (recommended)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lakkeo/stable-cypher-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lakkeo/stable-cypher-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)

messages = [
    {
        "role": "user",
        "content": "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."
    }
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

tokens = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (everything after the prompt)
outputs = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
print(outputs)
```

### GGUF
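
The example below expects a local `stable-cypher-instruct-3b.Q4_K_M.gguf` file. If the quantized file is hosted in this repository (check the Files tab; the repo id below is an assumption based on the safetensors example above), it can be fetched first with `huggingface_hub`:

```python
# Sketch: download the quantized GGUF file before loading it with llama-cpp-python.
# Assumes the GGUF is hosted in the same repo as the safetensors weights.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="lakkeo/stable-cypher-instruct-3b",
    filename="stable-cypher-instruct-3b.Q4_K_M.gguf",
)
print(model_path)  # pass this path to Llama(model_path=...)
```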

```python
from llama_cpp import Llama

# Load the GGUF model
print("Loading model...")
model = Llama(
    model_path="stable-cypher-instruct-3b.Q4_K_M.gguf",
    n_ctx=512,
    n_batch=512,
    n_gpu_layers=-1,  # offload all layers to the GPU
    verbose=False
)

# Define your question.
# NOTE: the instruction below is VERY IMPORTANT; the model was fine-tuned on this
# particular system prompt. Expect poor performance without it.
instruction = "Create a Cypher statement to answer the following question:"
question = "List the first 3 articles mentioning organizations with a revenue less than 5 million."

# Create the full prompt
full_prompt = f"{instruction}\n\nHuman: {question}\n\nAssistant:"

# Generate response (sampling parameters are passed at call time)
print("Generating response...")
response = model(
    full_prompt,
    max_tokens=128,
    temperature=0.2,
    top_p=0.9,
    stop=["Human:", "\n\n"],
    echo=False
)

# Extract and print the generated response
answer = response['choices'][0]['text'].strip()
print("\nQuestion:", question)
print("\nGenerated Cypher statement:")
print(answer)
```

## Performance

| Metric  | stable-code-instruct-3b | stable-cypher-instruct-3b |
|---------|-------------------------|---------------------------|
| BLEU-4  | 19.07                   | 88.63                     |
| ROUGE-1 | 39.49                   | 95.09                     |
| ROUGE-2 | 24.82                   | 90.71                     |
| ROUGE-L | 29.63                   | 91.51                     |
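
The exact evaluation script is not included here; the sketch below shows how BLEU-4 and ROUGE scores of this kind can be computed with the Hugging Face `evaluate` library. It is not necessarily the setup used for the table above, and the strings are placeholders.

```python
# Minimal sketch: scoring generated Cypher against reference statements with
# BLEU-4 and ROUGE via the `evaluate` library (placeholder data).
import evaluate

predictions = ["MATCH (p:Person {name: 'Alice'}) RETURN p"]   # model outputs
references = ["MATCH (p:Person {name: 'Alice'}) RETURN p"]    # gold Cypher

bleu = evaluate.load("bleu")    # reports 4-gram BLEU by default (max_order=4)
rouge = evaluate.load("rouge")  # reports rouge1, rouge2, rougeL

print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(rouge.compute(predictions=predictions, references=references))
```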

## Example

### Stable Cypher

*(screenshot: example output from stable-cypher-instruct-3b)*

### Stable Code

*(screenshot: example output from the base stable-code-instruct-3b)*

### Eval params

*(screenshot: evaluation parameters)*

## Reproducibility

This is the config file exported from LLaMA Factory:

```json
{
  "top.model_name": "Custom",
  "top.finetuning_type": "lora",
  "top.adapter_path": [],
  "top.quantization_bit": "none",
  "top.template": "default",
  "top.rope_scaling": "none",
  "top.booster": "none",
  "train.training_stage": "Supervised Fine-Tuning",
  "train.dataset_dir": "data",
  "train.dataset": [
    "cypher_opus"
  ],
  "train.learning_rate": "2e-4",
  "train.num_train_epochs": "5.0",
  "train.max_grad_norm": "1.0",
  "train.max_samples": "5000",
  "train.compute_type": "fp16",
  "train.cutoff_len": 256,
  "train.batch_size": 16,
  "train.gradient_accumulation_steps": 2,
  "train.val_size": 0.1,
  "train.lr_scheduler_type": "cosine",
  "train.logging_steps": 10,
  "train.save_steps": 100,
  "train.warmup_steps": 20,
  "train.neftune_alpha": 0,
  "train.optim": "adamw_torch",
  "train.resize_vocab": false,
  "train.packing": false,
  "train.upcast_layernorm": false,
  "train.use_llama_pro": false,
  "train.shift_attn": false,
  "train.report_to": false,
  "train.num_layer_trainable": 3,
  "train.name_module_trainable": "all",
  "train.lora_rank": 64,
  "train.lora_alpha": 64,
  "train.lora_dropout": 0.1,
  "train.loraplus_lr_ratio": 0,
  "train.create_new_adapter": false,
  "train.use_rslora": false,
  "train.use_dora": true,
  "train.lora_target": "",
  "train.additional_target": "",
  "train.dpo_beta": 0.1,
  "train.dpo_ftx": 0,
  "train.orpo_beta": 0.1,
  "train.reward_model": null,
  "train.use_galore": false,
  "train.galore_rank": 16,
  "train.galore_update_interval": 200,
  "train.galore_scale": 0.25,
  "train.galore_target": "all"
}
```
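
For anyone reproducing the adapter outside of LLaMA Factory, the LoRA-related settings above map roughly onto a PEFT config like the one below. This is a sketch, not the exact configuration LLaMA Factory builds internally; in particular, `target_modules="all-linear"` is an assumption, since `train.lora_target` is left empty above.

```python
# Sketch of a roughly equivalent PEFT LoRA/DoRA config (adjust target_modules as needed).
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                 # train.lora_rank
    lora_alpha=64,        # train.lora_alpha
    lora_dropout=0.1,     # train.lora_dropout
    use_dora=True,        # train.use_dora
    task_type="CAUSAL_LM",
    target_modules="all-linear",
)
```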

I used llama.cpp to merge the LoRA and generate the quants.
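
The merge can also be done in Python with PEFT before converting and quantizing with llama.cpp. The sketch below shows that alternative path (not the workflow used for this repo, which already ships merged weights); the adapter and output paths are placeholders.

```python
# Alternative merge path using PEFT instead of llama.cpp's LoRA export
# (placeholder paths; the base model is stable-code-instruct-3b).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-instruct-3b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("stable-cypher-instruct-3b-merged")

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-instruct-3b", trust_remote_code=True)
tokenizer.save_pretrained("stable-cypher-instruct-3b-merged")
```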

The progress over the base model is significant, but this is just a first draft. I ran a few batches of training, tinkering with some of the values, but was far from exhaustive and thorough. My main concern is the model's ability to generalize to unseen fields and syntax. I'm open to the idea of making a v2 that (should) be production-ready, along with a full tutorial, if there is enough interest in this project.