Committed by InferenceIllusionist · Commit 69fb7e4 (verified) · Parent: 469d285

Update README.md

Files changed: README.md (+10 -5)
An experiment with the goal of reducing hallucinations in [VQA](https://huggingface.co/tasks/visual-question-answering).

First in a series of projects centering around fine-tuning for image captioning.
 
<h1>Release Notes</h1>

* v0.1 - Initial release
* v0.2 (Current) - Updated base model to the official Mistral-7b fp16 release; refinements to dataset and instruction formatting
 
<h2>Background & Methodology</h2>

The Mistral-7b-02 base model was fine-tuned using the [RealWorldQA dataset](https://huggingface.co/datasets/visheratin/realworldqa), originally provided by the X.Ai team here: https://x.ai/blog/grok-1.5v

<h1>Vision Results</h1>

<img src="https://i.imgur.com/E9mS4Xb.jpeg" width="400"/>

* The experiment yielded a model that provides shorter, less verbose output for questions about pictures
* The likelihood of hallucinations in output has decreased; however, the model can still be easily influenced by the user into giving inaccurate answers
* Best suited for captioning use cases that require concise descriptions and low token counts
* This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone
 
<b>Requires an additional mmproj file for vision functionality. You have two options (both available in this repo):</b>
1. [Quantized - Limited VRAM Option (197 MB)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
2. [Unquantized - Premium Option / Best Quality (596 MB)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mmproj-model-f16.gguf?download=true)
 
Select the GGUF file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose one of the mmproj files above in the LLaVA mmproj field of the model submenu:

<img src="https://i.imgur.com/x8vqH29.png" width="425"/>
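If you prefer launching Koboldcpp from the command line instead of the GUI, the same pairing can be done with flags. This is a sketch only: the main model filename below is a placeholder (substitute whichever GGUF quant you actually downloaded), and it assumes you run the `koboldcpp.py` script directly.

```shell
# Pair the main GGUF model with an mmproj file at launch via Koboldcpp's --mmproj flag.
# "Excalibur-7b-DPO-Q4_K_M.gguf" is a hypothetical filename; use your actual quant.
python koboldcpp.py \
  --model Excalibur-7b-DPO-Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf
```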
 
## Prompt Format
  Use Alpaca for best results.
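The standard Alpaca format wraps each request in a fixed instruction template. A minimal sketch of building such a prompt (the helper name and example instruction are illustrative, not from this repo):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt string in the standard Alpaca instruction format."""
    if input_text:
        # Variant with an ### Input: block for extra context.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Describe the image in one short sentence.")
```

The model's answer is generated after the trailing `### Response:` marker.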

## Other info
- **Developed by:** InferenceIllusionist
- **License:** apache-2.0
- **Finetuned from model:** mistral-community/Mistral-7B-v0.2