---
base_model: remyxai/stablelm-zephyr-3B_localmentor
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- stablelm
- zephyr
- gguf
library_name: llama.cpp
model_creator: remyxai
model_name: stablelm-zephyr-3B_localmentor
model_type: stablelm
prompt_template: |
  <|system|>
  {system_prompt}</s>
  <|user|>
  {prompt}</s>
  <|assistant|>
quantized_by: mgonzs13
---

# stablelm-zephyr-3B-localmentor-GGUF

**Model creator:** [remyxai](https://huggingface.co/remyxai)<br>
**Original model:** [stablelm-zephyr-3B_localmentor](https://huggingface.co/remyxai/stablelm-zephyr-3B_localmentor)<br>
**GGUF quantization:** `llama.cpp` commit [fadde6713506d9e6c124f5680ab8c7abebe31837](https://github.com/ggerganov/llama.cpp/tree/fadde6713506d9e6c124f5680ab8c7abebe31837)<br>

## Description

Fine-tuned with low-rank adapters (LoRA) on 25K conversational turns about tech and startups, drawn from over 800 podcast episodes.

- **Developed by:** [Remyx.AI](https://huggingface.co/remyxai)
- **License:** apache-2.0
- **Finetuned from model:** [stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b)
- **Repository:** https://github.com/remyxai/LocalMentor
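
As a usage sketch (not part of the original card), a GGUF file from this repo can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quantization suffix in the filename below is an assumption and should match the file you actually download.

```python
# Minimal sketch: load a GGUF file from this repo with llama-cpp-python.
# The filename (quantization variant) is an assumption; use the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="stablelm-zephyr-3B_localmentor.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)
```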
## Prompt Template

As specified in the [tokenizer_config.json](https://huggingface.co/remyxai/stablelm-zephyr-3B_localmentor/blob/main/tokenizer_config.json), the prompt template follows the Zephyr format.

```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
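
A minimal, self-contained sketch of applying this template by hand with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename, system prompt, question, and sampling parameters are illustrative assumptions.

```python
# Sketch: format a Zephyr-style prompt by hand and generate with llama-cpp-python.
# Filename, prompts, and sampling parameters are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="stablelm-zephyr-3B_localmentor.Q4_K_M.gguf")  # assumed filename

system_prompt = "You are LocalMentor, a friendly advisor on tech and startups."
prompt = "How should I validate a SaaS idea before writing any code?"

formatted = (
    f"<|system|>\n{system_prompt}</s>\n"
    f"<|user|>\n{prompt}</s>\n"
    f"<|assistant|>\n"
)

output = llm(formatted, max_tokens=256, stop=["</s>"], temperature=0.7)
print(output["choices"][0]["text"])
```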