---
title: Prometh-MOEM-V.01 Model Showcase
emoji: π
colorFrom: red
colorTo: pink
sdk: gradio
pinned: false
license: apache-2.0
language:
- en
---
# Prometh-MOEM-V.01 Model Card
**Prometh-MOEM-V.01** is a Mixture of Experts (MoE) model that combines several foundational models into a single system. By drawing on the strengths of its components, it aims to deliver strong accuracy, speed, and versatility across a variety of tasks.
## Model Sources and Components
This MoE model combines the following specialized models (a minimal sketch of how expert routing works follows the list):
- [Wtzwho/Prometh-merge-test2](https://huggingface.co/Wtzwho/Prometh-merge-test2)
- [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [Wtzwho/Prometh-merge-test3](https://huggingface.co/Wtzwho/Prometh-merge-test3)
- [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
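The exact router and merge configuration behind Prometh-MOEM-V.01 is not published in this card, so the snippet below is only a generic, minimal sketch of the top-k routing an MoE layer performs: a small gating network scores the experts, and each token is processed by the few experts that score highest. All names here (`TopKMoE`, `num_experts`, `top_k`) are illustrative, not part of the released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k Mixture of Experts layer (illustrative sketch only)."""

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        # The gate (router) scores every expert for every token.
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        scores = self.gate(x)                                 # (num_tokens, num_experts)
        weights, indices = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                  # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                  # tokens whose slot-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Four experts mirror the four component models listed above.
layer = TopKMoE(dim=64, num_experts=4, top_k=2)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

In a merged MoE such as Prometh-MOEM-V.01, the experts come from the component models listed above rather than being trained from scratch, but the routing principle is the same.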
## Key Features
- **Enhanced Performance**: Tuned for accuracy and efficiency by drawing on multiple experts.
- **Versatility**: Adaptable across a wide range of NLP tasks.
- **State-of-the-Art Integration**: Built on recent research into combining models effectively.
## Application Areas
Prometh-MOEM-V.01 is well suited to tasks such as the following (illustrative prompts appear after the list):
- Text generation
- Sentiment analysis
- Language translation
- Question answering
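The evaluation setup for these tasks is not documented here, so the prompts below are purely illustrative: each one frames one of the application areas above as a chat request that can be passed to the pipeline shown in the Usage Instructions section.

```python
# Illustrative prompts only; pass any of these as the user message in the Usage example below.
example_prompts = {
    "text generation": "Write a short product description for a solar-powered desk lamp.",
    "sentiment analysis": "Classify the sentiment of this review as positive, negative, or neutral: 'The battery died after two days.'",
    "language translation": "Translate into French: 'The meeting starts at nine tomorrow.'",
    "question answering": "Context: The Eiffel Tower was completed in 1889. Question: When was the Eiffel Tower completed?",
}

for task, prompt in example_prompts.items():
    print(f"[{task}] {prompt}")
```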
## Usage Instructions
To use Prometh-MOEM-V.01 in your projects, install the dependencies and load the model through a `transformers` pipeline:
```python
# Install dependencies first:
#   pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer, pipeline
import torch

model_id = "Wtzwho/Prometh-MOEM-V.01"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Set up a text-generation pipeline, loading the weights in 4-bit precision
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    device_map="auto",
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Example query
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
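For finer control over quantization, the model can also be loaded directly with an explicit `BitsAndBytesConfig` instead of going through `pipeline`. This is only a suggested alternative, a minimal sketch; the 4-bit settings shown (`nf4`, float16 compute) are common defaults, not values published for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Wtzwho/Prometh-MOEM-V.01"

# Common 4-bit quantization settings (illustrative defaults, not official values for this model)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Build a chat prompt and generate a reply
messages = [{"role": "user", "content": "Summarize the benefits of Mixture of Experts models in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same `messages` format used above works here, so any of the prompts from the Application Areas section can be dropped in.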