DRAGON-LLAMA3.1

dragon-llama-3.1 (8b) is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Llama-3.1 8b base model.

DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.

Benchmark Tests

Benchmark tests were performed only on the 4_K_M quantized GGUF version of this model - dragon-llama3.1-gguf.

Evaluated against the benchmark test: RAG-Instruct-Benchmark-Tester
1 Test Run (temperature=0.0, sample=False), scored with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.
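
As a rough illustration of this rubric (the per-question grades below are hypothetical, not taken from the actual answer sheet), the aggregate score is a simple weighted sum:

# hypothetical illustration of the scoring rubric - not the actual benchmark data
POINTS = {"correct": 1.0, "partial_or_not_found": 0.5, "incorrect": 0.0, "hallucination": -1.0}

grades = ["correct", "correct", "partial_or_not_found", "incorrect", "hallucination"]
score = sum(POINTS[g] for g in grades)
print(f"{score} points across {len(grades)} questions")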

--Accuracy Score: 94.0 correct out of 100
--Not Found Classification: 70.0%
--Boolean: 90.0%
--Math/Logic: 72.5%
--Complex Questions (1-5): 4 (Above Average - table-reading, causal)
--Summarization Quality (1-5): 4 (Above Average)
--Hallucinations: No hallucinations but a few instances of drawing on 'background' knowledge.

For the test run results (and a good indicator of target use cases), please see the "core_rag_test" and "answer_sheet" files in this repo.

The inference accuracy tests were performed on the quantized GGUF 4_K_M version of this model, not the original PyTorch version. The original PyTorch model may well score higher, but we have chosen to benchmark the quantized version because it is most representative of how the model will likely be used for inference.
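
If you want to run the same quantized version that was benchmarked, one common option is llama-cpp-python; a minimal sketch, assuming the GGUF file has already been downloaded locally (the file name below is illustrative - check the dragon-llama3.1-gguf repo for the actual file name):

from llama_cpp import Llama

# load the locally downloaded GGUF file (file name is illustrative)
llm = Llama(model_path="dragon-llama3.1.Q4_K_M.gguf", n_ctx=4096)

# wrap an example passage and question in the <human>/<bot> format described later in this card
text_passage = "The lease term is 36 months, beginning on January 1, 2024."
question = "What is the length of the lease term?"
prompt = "<human>: " + text_passage + "\n" + question + "\n" + "<bot>:"

response = llm(prompt, max_tokens=100, temperature=0.0)
print(response["choices"][0]["text"])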

Please compare with dragon-llama2-7b or the most recent dragon-mistral-0.3.

Model Description

  • Developed by: llmware
  • Model type: Llama-3.1 Base (8b)
  • Language(s) (NLP): English
  • License: Llama 3.1 Community License Agreement
  • Finetuned from model: Llama-3.1-8b-Base

Direct Use

DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and legal and regulatory industries.

DRAGON models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response.
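
As a quick illustration (the passage and instructions below are made-up examples, using the <human>/<bot> prompt wrapper described in the getting-started section), the same context can drive each of the three core instruction types:

# illustrative prompts for the three core instruction types (example text only)
context = "The purchase price is $3,000,000, payable at closing on June 30, 2025."

qa_prompt = "<human>: " + context + "\n" + "What is the purchase price?" + "\n" + "<bot>:"
kv_prompt = "<human>: " + context + "\n" + "What is the closing date?" + "\n" + "<bot>:"
summary_prompt = "<human>: " + context + "\n" + "Summarize the payment terms." + "\n" + "<bot>:"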

Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

How to Get Started with the Model

The fastest way to get started with dRAGon is through direct import in transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/dragon-llama-3.1")
model = AutoModelForCausalLM.from_pretrained("llmware/dragon-llama-3.1")

Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model. The generation_test_llmware_script.py file includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so the test set can be swapped out for a RAG workflow over business documents.
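
As a simple illustration of the kind of fact-checking referred to above (this is not the llmware implementation, just a sketch of the idea), a post-generation check might verify that any numbers in the model's answer actually appear in the source passage:

import re

def numbers_in(text):
    # pull out numeric tokens (e.g. "36", "3,000,000") for a basic evidence check
    return {n.strip(".,").replace(",", "") for n in re.findall(r"\d[\d,\.]*", text)}

def simple_evidence_check(answer, context):
    # return any number in the answer that does not appear in the source passage
    return numbers_in(answer) - numbers_in(context)

# example: the answer cites "48", which is not supported by the passage
print(simple_evidence_check("The lease term is 48 months.", "The lease term is 36 months."))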

The dRAGon model was fine-tuned with a simple "<human>" and "<bot>" wrapper, so to get the best results, wrap inference entries as:

full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

The dRAGon model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:

  1. Text Passage Context, and
  2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

my_prompt = {{text_passage}} + "\n" + {{question/instruction}}

If you are using a HuggingFace generation script:

import torch

# example inputs - replace with your own text passage and question
entries = {"context": "Services will be performed at the Company's offices in New York.",
           "query": "Where will the services be performed?"}

# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

# run on GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

inputs = tokenizer(new_prompt, return_tensors="pt")

# index of the first generated token, used to slice off the prompt when decoding
start_of_output = len(inputs.input_ids[0])

#   temperature: set at 0.3 for consistency of output
#   max_new_tokens: set at 100 - may prematurely stop a few of the summaries

outputs = model.generate(
        inputs.input_ids.to(device),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100,
        )

# decode only the newly generated tokens, i.e. the model's answer
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)

Model Card Contact

Darren Oberst & llmware team
