---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- finetuned
- mistralai/Mistral-7B-Instruct-v0.2
- janai-hq/trinity-v1
- wenqiglantz/MistralTrinity-7b-slerp
---

# MistralTrinity-7B-slerp-finetuned-dolly-1k

MistralTrinity-7B-slerp-finetuned-dolly-1k is a fine-tuned version of [MistralTrinity-7B-slerp](https://huggingface.co/wenqiglantz/MistralTrinity-7B-slerp), itself a merge of the following two models created with [mergekit](https://github.com/cg123/mergekit):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1)
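
A SLERP merge of two models like this is typically driven by a mergekit YAML config along these lines. This is only a sketch: the layer ranges and interpolation values below are illustrative defaults, not the exact parameters used for MistralTrinity-7B-slerp.

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: janai-hq/trinity-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    # interpolation factor per layer group; 0 = base model, 1 = other model
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5  # default for all remaining tensors
dtype: bfloat16
```

Saved as e.g. `config.yml`, such a config is run with `mergekit-yaml config.yml ./output-model`.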

## Dataset

The fine-tuning dataset is [wenqiglantz/databricks-dolly-1k](https://huggingface.co/datasets/wenqiglantz/databricks-dolly-1k), a 1,000-sample subset of the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
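
Since the base merge includes Mistral-7B-Instruct-v0.2, prompts should follow the Mistral instruction format (`[INST] ... [/INST]`). A minimal sketch of querying the model is below; the repo id is inferred from this card's title, and the generation step is gated behind a flag because loading a 7B checkpoint needs substantial memory.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Mistral-Instruct chat format."""
    return f"<s>[INST] {instruction.strip()} [/INST]"


# Set True on a machine with enough VRAM/RAM to load the 7B model.
RUN_GENERATION = False

if RUN_GENERATION:
    from transformers import pipeline

    # Assumed repo id, based on this model card's title.
    generator = pipeline(
        "text-generation",
        model="wenqiglantz/MistralTrinity-7B-slerp-finetuned-dolly-1k",
        torch_dtype="auto",
        device_map="auto",
    )
    prompt = build_prompt("Explain what model merging is in one paragraph.")
    print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
else:
    print(build_prompt("Explain what model merging is in one paragraph."))
```

The `build_prompt` helper is a hypothetical convenience for illustration; with `transformers`, `tokenizer.apply_chat_template` can produce the same format from a list of chat messages.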