# LLaMA-3-8B-RDF-Experiment

## Purpose

This is an experimental model that tests whether LLaMA-3-8B can be used to construct knowledge graph triples. It is a fine-tune of NousResearch/Hermes-2-Pro-Llama-3-8B. Fine-tuning was performed with Unsloth using QLoRA, and the adapter was then merged back into the base model at 16-bit precision.

## Prompt Template

It is recommended that you use the `apply_chat_template` feature. This is the recommended system prompt:

"""You are an expert knowledge graph annotator and you respond in JSON. Here's the json schema you must adhere to where each element is a new triple if needed:\n<schema>\n[{"subject": str, "predicate": str, "object": str},...{"subject": str, "predicate": str, "object": str}]\n</schema>"""
