Triangle104/Stellar-Odyssey-12b-v0.0-Q8_0-GGUF

This model was converted to GGUF format from ProdeusUnity/Stellar-Odyssey-12b-v0.0 using llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Model details:

Stellar Odyssey 12b v0.0

We will see... Come with me, take the journey~

Listen to the song on Youtube: https://www.youtube.com/watch?v=3FEFtFMBREA

Soo... after I failed the first time, I took a crack at merging again. This time, the following models were used:

- mistralai/Mistral-Nemo-Base-2407
- Sao10K/MN-12B-Lyra-v4
- nothingiisreal/MN-12B-Starcannon-v2
- Gryphe/Pantheon-RP-1.5-12b-Nemo

The license for this model is cc-by-nc-4.0.

TO CLEAR SOME CONFUSION: please use the ChatML prompt format. I hope this was worth the time I spent to create this merge, lol.
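For anyone unfamiliar with ChatML: it wraps each turn in `<|im_start|>` / `<|im_end|>` markers, with a trailing open assistant turn cueing the model to respond. A minimal illustrative sketch (the helper below is not part of this repo):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a minimal ChatML prompt for a single user turn.

    Illustrative helper only: <|im_start|>/<|im_end|> are the standard
    ChatML delimiters, and the final open assistant turn cues the model
    to generate its reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

Frontends such as SillyTavern or llama.cpp's chat mode apply this template for you; the sketch just shows what the model expects to see.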

Gated access for now; it will be disabled when testing is done. Thanks to everyone who has shown interest.

Thank you to AuriAetherwiing for helping me merge the models.

Details

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the della_linear merge method, with C:\Users\lg911\Downloads\Mergekit-Fixed\mergekit\mistralai_Mistral-Nemo-Base-2407 as the base.

Models Merged

The following models were included in the merge:

- C:\Users\lg911\Downloads\Mergekit-Fixed\mergekit\Sao10K_MN-12B-Lyra-v4
- C:\Users\lg911\Downloads\Mergekit-Fixed\mergekit\Gryphe_Pantheon-RP-1.5-12b-Nemo
- C:\Users\lg911\Downloads\Mergekit-Fixed\mergekit\nothingiisreal_MN-12B-Starcannon-v2

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: C:\Users\Downloads\Mergekit-Fixed\mergekit\Sao10K_MN-12B-Lyra-v4
    parameters:
      weight: 0.3
      density: 0.25
  - model: C:\Users\Downloads\Mergekit-Fixed\mergekit\nothingiisreal_MN-12B-Starcannon-v2
    parameters:
      weight: 0.1
      density: 0.4
  - model: C:\Users\Downloads\Mergekit-Fixed\mergekit\Gryphe_Pantheon-RP-1.5-12b-Nemo
    parameters:
      weight: 0.4
      density: 0.5
merge_method: della_linear
base_model: C:\Users\Downloads\Mergekit-Fixed\mergekit\mistralai_Mistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```
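For intuition only: della_linear drops most of each fine-tune's delta from the base (keeping roughly a `density` fraction of parameters, rescaled to compensate) and then linearly combines the surviving deltas by `weight`. A heavily simplified NumPy sketch of that idea, not mergekit's actual implementation (the real method samples drops probabilistically, with `epsilon` shaping the drop probabilities; this sketch is deterministic):

```python
import numpy as np

def della_linear_sketch(base, finetunes, weights, densities):
    """Simplified della_linear intuition: per fine-tune, keep the
    highest-magnitude fraction `density` of its delta from the base,
    rescale survivors by 1/density, then add weighted deltas onto base.
    """
    merged = base.astype(np.float64).copy()
    for ft, w, d in zip(finetunes, weights, densities):
        delta = ft - base
        k = max(1, int(round(d * delta.size)))       # parameters to keep
        thresh = np.sort(np.abs(delta).ravel())[-k]  # magnitude cutoff
        mask = np.abs(delta) >= thresh               # survivors
        merged += w * (delta * mask) / d             # rescale and accumulate
    return merged

base = np.zeros(8)
ft = base + np.array([0.0, 0.1, 0.0, 0.9, 0.0, 0.0, 0.8, 0.0])
print(della_linear_sketch(base, [ft], weights=[0.5], densities=[0.25]))
```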

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/Stellar-Odyssey-12b-v0.0-Q8_0-GGUF --hf-file stellar-odyssey-12b-v0.0-q8_0.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Triangle104/Stellar-Odyssey-12b-v0.0-Q8_0-GGUF --hf-file stellar-odyssey-12b-v0.0-q8_0.gguf -c 2048
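Once llama-server is up (it listens on port 8080 by default), it exposes an OpenAI-compatible chat endpoint and applies the model's chat template (ChatML here) server-side. A minimal client sketch, assuming llama.cpp's default port and endpoint path:

```python
import json
from urllib import request

def build_chat_request(user_message: str,
                       url: str = "http://localhost:8080/v1/chat/completions"):
    """Build an OpenAI-style chat completion request for llama-server.
    Only constructs the request; sending it requires a running server.
    """
    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 128,
    }
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Hello!")
print(req.full_url)

# To actually send (server must be running):
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```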

Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/Stellar-Odyssey-12b-v0.0-Q8_0-GGUF --hf-file stellar-odyssey-12b-v0.0-q8_0.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/Stellar-Odyssey-12b-v0.0-Q8_0-GGUF --hf-file stellar-odyssey-12b-v0.0-q8_0.gguf -c 2048
Model size: 12.2B params
Architecture: llama
Quantization: 8-bit (Q8_0)