brooketh committed · verified · Commit 6df160a · 1 Parent(s): 288f269

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -23,7 +23,7 @@ quantized_by: brooketh
  - **Original:** [Smart Lemon Cookie 7B](https://huggingface.co/FallenMerick/Smart-Lemon-Cookie-7B)
  - **Date Created:** 2024-04-30
  - **Trained Context:** 32768 tokens
- - **Description:** Uncensored roleplay model from [FallenMerick](https://huggingface.co/FallenMerick/) with excellent reasoning and context-following abilities. It is based on [Multi-Verse-Model](https://huggingface.co/MTSAIR/multi_verse_model) and merges [Silicon Maid](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) and [Kunoichi](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) for strong roleplaying ability, and [LemonadeRP](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) for storywriting skill.
+ - **Description:** Uncensored roleplay model from [FallenMerick](https://huggingface.co/FallenMerick/) with excellent reasoning and context-following abilities. It is based on the [Multi-Verse-Model](https://huggingface.co/MTSAIR/multi_verse_model) and merges [Silicon Maid](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) and [Kunoichi](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) for strong roleplaying ability, and [LemonadeRP](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) for storywriting skill.
 
  ## What is a GGUF?
  GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Faraday.dev. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
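The CPU/GPU split the README describes could be sketched with llama-cpp-python, one of the llama.cpp bindings: the `n_gpu_layers` option controls how many transformer layers are offloaded to the GPU, with the remainder staying on the CPU. This is a minimal illustration, not the model card's own instructions; the `.gguf` filename and the layer count of 20 are assumptions you would adjust for your hardware.

```python
# Minimal sketch (not from the model card): how a GGUF's CPU/GPU split is
# configured with llama-cpp-python. The filename and n_gpu_layers value are
# illustrative assumptions.

def llama_load_kwargs(model_path, n_gpu_layers=20, n_ctx=32768):
    """Build constructor options for llama_cpp.Llama.

    n_gpu_layers layers are offloaded to the GPU; the remaining layers
    run on the CPU (0 = CPU only, -1 = offload everything).
    """
    return {
        "model_path": model_path,
        "n_gpu_layers": n_gpu_layers,  # layers offloaded to GPU
        "n_ctx": n_ctx,                # context size; 32768 matches the trained context above
    }

# With llama-cpp-python installed (pip install llama-cpp-python) and a local
# GGUF file, loading would look like:
#   from llama_cpp import Llama
#   llm = Llama(**llama_load_kwargs("Smart-Lemon-Cookie-7B.gguf"))
```

Applications like Faraday.dev handle this configuration automatically; the sketch only makes explicit what "split between CPU and GPU" means at the llama.cpp level.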