MongoDB Query Generator - Llama-3.2-1B (Fine-tuned)

🚀 Model Overview

This model is designed to generate MongoDB queries from natural language prompts. It supports:

  • Basic CRUD operations: find, insert, update, delete
  • Aggregation Pipelines: $group, $match, $lookup, $sort, etc.
  • Indexing & Performance Queries
  • Nested Queries & Joins ($lookup)

Fine-tuned with Unsloth for training efficiency and quantized to GGUF for fast inference.
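For context, the snippet below sketches (in Python, with pymongo) the kind of aggregation pipeline the model is meant to produce from a prompt such as "For each department, count employees older than 30". The database, collection, and field names (company, employees, departments, dept_id) are illustrative assumptions, not part of this model card.

# Illustrative only: the style of aggregation pipeline this model targets.
# Database, collection, and field names are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
db = client["company"]

pipeline = [
    {"$match": {"age": {"$gt": 30}}},                       # filter documents
    {"$lookup": {                                           # join with another collection
        "from": "departments",
        "localField": "dept_id",
        "foreignField": "_id",
        "as": "department",
    }},
    {"$group": {"_id": "$dept_id", "count": {"$sum": 1}}},  # aggregate per department
    {"$sort": {"count": -1}},                               # highest counts first
]

for doc in db["employees"].aggregate(pipeline):
    print(doc)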


📌 Example Usage (Transformers)

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "skshmjn/Llama-3.2-1B-Mongo-Instruct"

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Natural-language request to translate into a MongoDB query
prompt = "Find all employees older than 30 in the 'employees' collection."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate the query; max_new_tokens bounds the length of the generated output
output = model.generate(**inputs, max_new_tokens=100)
query = tokenizer.decode(output[0], skip_special_tokens=True)

print(query)
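The card does not state which prompt format was used during fine-tuning; if plain-text prompts give inconsistent results, applying the Llama-3 chat template is a reasonable variant to try. A minimal sketch, reusing the tokenizer and model loaded above:

messages = [
    {"role": "user",
     "content": "Find all employees older than 30 in the 'employees' collection."}
]

# Build input IDs with the chat template, then generate
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=100)

# Decode only the tokens produced after the prompt
query = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(query)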
📦 Model Details (GGUF)

  • Model size: 1.24B params
  • Architecture: llama
  • Format: GGUF
  • Available quantizations: 4-bit, 5-bit, 8-bit, 16-bit