Indigo-v0.1 / README.md
---
language:
  - en
license: mit
tags:
  - pretrained
  - security
  - redteam
  - blueteam
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# TylerG01/Indigo-v0.1

Refer to the original model card for more details on the model.

## Project Goals

This is the v0.1 (alpha) release of the Indigo LLM project, which used LoRA fine-tuning to train Mistral 7B on more than 400 books, pamphlets, training documents, code snippets, and other openly available cybersecurity works sourced from the surface web. This version used 16 LoRA layers and reached a validation loss of 1.601 after the fourth training epoch. My goal for the LoRA version of this model, however, is a validation loss below 1.51 after some modifications to the dataset and training approach.
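For readers curious what a setup like this looks like in code, below is a minimal sketch of a LoRA fine-tuning configuration using the `peft` library. The card does not specify the rank, alpha, dropout, or target modules, so those values are illustrative placeholders, not the project's actual hyperparameters.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model used by this project (per the card: Mistral 7B).
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Illustrative LoRA config: rank, alpha, dropout, and target modules
# are assumptions for the sketch, not Indigo's actual settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```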

For more information on this project, check out the blog post at https://t2-security.com/indigo-llm-503cd6e22fe4.
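To try the model, a standard `transformers` text-generation call along these lines should work. The temperature of 0.7 mirrors the inference parameters declared in the metadata above; the prompt and remaining generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TylerG01/Indigo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumes a GPU; use float32 on CPU
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain the difference between red-team and blue-team exercises."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # matches the inference parameters in the card metadata
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```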