Usage

Run this GGUF model locally with the llama-cpp-python package:

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="krishkpatil/legal_llm",
    filename="unsloth.Q4_K_M.gguf",  # Replace with the actual GGUF filename if different
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Explain the concept of judicial review in India.",
        }
    ]
)

print(response['choices'][0]['message']['content'])
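llama-cpp-python returns chat completions in an OpenAI-compatible shape, which is why the reply text sits at `response['choices'][0]['message']['content']`. As a minimal sketch, a small helper can pull the assistant text out of such a response; the helper name and the sample dict below are illustrative, not part of this model card:

```python
def extract_reply(response: dict) -> str:
    """Return the assistant message text from an OpenAI-style
    chat completion response dict."""
    choices = response.get("choices", [])
    if not choices:
        raise ValueError("response contains no choices")
    return choices[0]["message"]["content"]


# Illustrative dict shaped like llama-cpp-python's chat completion output
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Judicial review lets courts..."}}
    ]
}
print(extract_reply(sample))
```

The same helper works on any response produced by `create_chat_completion` above, since the nesting is identical.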

Uploaded model

  • Developed by: krishkpatil
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model details

  • Model size: 8.03B params
  • Tensor type: BF16
  • Format: Safetensors
