TBH.AI Valhala

  • Developed by: TBH.AI
  • License: apache-2.0
  • Fine-tuned from: saishshinde15/TBH.AI_Base_Reasoning
  • Part of: Vortex Family (a collection of four fine-tuned SFT models)

Model Description

TBH.AI Valhala is a highly optimized reasoning model built upon saishshinde15/TBH.AI_Base_Reasoning, further refined with high-quality, curated datasets to enhance reasoning and structured response generation. This model belongs to the Vortex Family, a suite of four fine-tuned models tailored for advanced knowledge synthesis and decision-making.

Supervised Fine-Tuning (SFT) was chosen over reinforcement learning-based enhancements to ensure stability, reliability, and alignment with human-preferred responses, making Valhala well suited to analytical and structured tasks.

Why TBH.AI Valhala Stands Out

  • Superior Knowledge & Reasoning: Incorporates higher-quality training data to improve logical consistency and factual accuracy.
  • Enhanced Response Coherence: Designed to provide structured, well-reasoned, and contextually relevant answers across various domains.
  • Optimized for Complex Queries: Excels in multi-step logical deductions, research synthesis, and structured decision-making.
  • Robust Generalization: Performs exceptionally well in scientific, technical, and analytical reasoning tasks, ensuring versatility and reliability.

Why Supervised Fine-Tuning (SFT) Instead of RL?

  • Better Control Over Model Behavior: Directly fine-tuned with high-quality labeled data for consistent and reliable responses.
  • Avoids RLHF Pitfalls: Sidesteps the reward hacking, over-optimization, and bias amplification that RLHF can introduce, yielding more dependable output.
  • Logical Consistency & Stability: RL-based methods can cause inconsistent or unnatural responses, while SFT maintains logical coherence.
  • Computational Efficiency: SFT is more efficient and avoids the complexity of reward modeling and multi-stage training required in RL-based approaches.
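
At the loss level the distinction is simple: SFT optimizes plain next-token cross-entropy over curated (prompt, response) pairs, with no reward model or policy-gradient stage. The sketch below illustrates that objective only; the masking convention shown is an assumption for illustration, not the published training recipe.

def sft_loss(model, input_ids, labels):
    # Standard supervised fine-tuning objective: next-token cross-entropy.
    # Here `labels` is assumed to be a copy of `input_ids` with prompt tokens
    # set to -100 so only the assistant response contributes to the loss
    # (a common convention, not necessarily the exact recipe used here).
    outputs = model(input_ids=input_ids, labels=labels)
    return outputs.loss  # mean cross-entropy over the unmasked tokens

This is the same quantity that trainers such as TRL's SFTTrainer compute internally, which is why no separate reward modeling or multi-stage optimization is required.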

Intended Use Cases

  • Advanced Question-Answering: Ideal for technical, analytical, and logical Q&A, ensuring precise and structured responses.
  • Research & Knowledge Synthesis: Processes and summarizes large volumes of information with higher accuracy.
  • Problem-Solving & Deductive Reasoning: Handles multi-step logical deductions and complex problem-solving tasks.
  • Code & Algorithmic Logic: Useful for debugging, code explanation, and algorithmic structuring.

Usage

Call the model using Unsloth:

from unsloth import FastLanguageModel
import torch
max_seq_length = 2048 # Choose any! RoPE Scaling supported internally!
dtype = None # Auto detection (Float16 for T4/V100, Bfloat16 for Ampere+)
load_in_4bit = True # Use 4-bit quantization to optimize memory usage
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "saishshinde15/TBH.AI_Valhala",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit
)

FastLanguageModel.for_inference(model)
instruction = """You are an advanced AI assistant. Provide answers in a clear manner."""

messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": "Who created you?"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors='pt', padding=True, truncation=True).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=1500, num_return_sequences=1)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)

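# The decoded text contains the full chat template; keep only what follows the "assistant" marker.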
assistant_start = text.find("assistant")
response = text[assistant_start + len("assistant"):].strip() if assistant_start != -1 else text

print(response)
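
If you prefer tokens to print as they are generated rather than after the whole completion, the same generate call accepts a TextStreamer from transformers. This is an optional variation on the snippet above:

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=1500, streamer=streamer)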

Call the model using Transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "saishshinde15/TBH.AI_Valhala"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

instruction = """You are an advanced AI assistant. Provide answers in a clear manner."""

messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": "Who created you?"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=1500,
    temperature=0.8,
    top_p=0.95,
    do_sample=True,
)

response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
assistant_start = response.find("assistant")
response = response[assistant_start + len("assistant"):].strip() if assistant_start != -1 else response

print(response)
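
The Unsloth path above already loads the model in 4-bit; the plain Transformers path can do the same through bitsandbytes when GPU memory is tight. A minimal sketch, assuming bitsandbytes is installed, is below; only the loading call changes and the rest of the generation code stays the same:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4-bit at load time
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype for the matmuls
)

model_name = "saishshinde15/TBH.AI_Valhala"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",                     # place the quantized weights on the available GPU
)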