---
license: apache-2.0
tags:
  - merge
  - mergekit
  - lazymergekit
  - finetuned
  - mistralai/Mistral-7B-Instruct-v0.2
  - janai-hq/trinity-v1
  - wenqiglantz/MistralTrinity-7b-slerp
---

# MistralTrinity-7B-slerp-finetuned-dolly-1k

MistralTrinity-7B-slerp-finetuned-dolly-1k is a fine-tuned version of wenqiglantz/MistralTrinity-7b-slerp, which was merged from the following two models using mergekit:

- mistralai/Mistral-7B-Instruct-v0.2
- janai-hq/trinity-v1
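A slerp merge with mergekit is driven by a YAML config. The exact config used for MistralTrinity-7b-slerp is not shown here; the sketch below is a plausible example for a slerp merge of these two 32-layer models, with assumed interpolation values:

```yaml
# Assumed example config, not the actual one used for this merge
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: janai-hq/trinity-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

Here `t` controls the interpolation weight per layer group; a value of 0 keeps the base model's weights and 1 keeps the other model's.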

## Dataset

The dataset used for fine-tuning is wenqiglantz/databricks-dolly-1k, a 1,000-sample subset of the databricks/databricks-dolly-15k dataset.
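Producing such a subset amounts to drawing a reproducible random sample of 1,000 records from the 15k-sample dataset (with the `datasets` library this would be a `shuffle`/`select`). A minimal stdlib sketch of the idea, using a toy stand-in for the Dolly data (all names here are illustrative, not the actual preparation script):

```python
import random

def take_subset(records, n=1000, seed=42):
    """Return a reproducible random subset of n records."""
    rng = random.Random(seed)
    return rng.sample(records, n)

# Toy stand-in for the 15k-sample Dolly dataset
dolly_15k = [{"instruction": f"q{i}", "response": f"a{i}"} for i in range(15000)]

dolly_1k = take_subset(dolly_15k)
print(len(dolly_1k))  # 1000
```

Fixing the seed makes the subset reproducible, so the same 1,000 samples can be regenerated for later fine-tuning runs.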