---
license: apache-2.0
language:
- en
tags:
- merge
---

![image/png](https://i.ibb.co/MRXkh6p/icon2.png)

Test merge. An attempt at a model that is good at RP, ERP, and general tasks, with 128k context.

Every model here has [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) in the merge instead of the regular Mistral YaRN 128k model. The reason is that I believe Epiculous merged it with Mistral Instruct v0.2 to make the first 32k of context as close to perfect as possible before YaRN takes over from 32k to 128k. If not, that's sad D:, or I got something wrong.

[Exl2, 4.0 bpw](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-exl2-bpw-4.0)

[GGUF](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-GGUF)

Here is the "family tree" of this model. I'm not writing the full model names because they're long af.

### NeuralKunoichi-EroSumika 4x7B 128k
```
* NeuralKunoichi-EroSumika 4x7B
    *(1) Kunocchini-7b-128k
    |
    *(2) Mistral-Instruct-v0.2-128k
        * Mistral-7B-Instruct-v0.2
        |
        * Fett-128k
    |
    *(3) Erosumika-128k
        * Erosumika 7B
        |
        * Fett-128k
    |
    *(4) Mistral-NeuralHuman-128k
        * Fett-128k
        |
        * Mistral-NeuralHuman
            * Mistral_MoreHuman
            |
            * Mistral-Neural-Story
```

## Models used
- [localfultonextractor/Erosumika-7B](https://huggingface.co/localfultonextractor/Erosumika-7B)
- [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
- [NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story)
- [valine/MoreHuman](https://huggingface.co/valine/MoreHuman)
- [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
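
## Usage

Below is a minimal loading sketch with 🤗 Transformers. It assumes the full-precision weights are published under `xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k` (repo id inferred from the exl2/GGUF links above) and that the merge loads as a standard Mixtral-style MoE causal LM; adjust the repo id and sampling settings to taste.

```python
# Minimal sketch, not an official example. The repo id below is an assumption
# inferred from the quant links above; change it if the weights live elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 to fit the 4x7B MoE in less VRAM
    device_map="auto",
)

# Mistral-Instruct-style chat formatting via the tokenizer's chat template
messages = [{"role": "user", "content": "Write a short scene set in a rainy city."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```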