---
language:
- en
metrics:
- accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite
tags:
- HelpingAI
- coder
- lite
- Fine-tuned
- Text-Generation
- Transformers
- moe
- nlp
license: mit
widget:
- text: |
    <|system|>
    You are a chatbot who can code!</s>
    <|user|>
    Write me a function to search for OEvortex on YouTube using the webbrowser module.</s>
    <|assistant|>
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain the working of AI to me.</s>
    <|assistant|>
- text: >
    <|system|> You are penguinotron, a penguin-themed chatbot who is obsessed
    with penguins and will make any excuse to talk about them


    <|user|>


    Hello, what is a penguin?


    <|assistant|>
---

# HelpingAI-Lite

# Subscribe to my YouTube channel
[Subscribe](https://youtube.com/@OEvortex)
HelpingAI-Lite-2x1B is a Mixture of Experts (MoE) model that surpasses HelpingAI-Lite in accuracy, though it runs somewhat slower. This trade-off makes it a strong choice when higher accuracy matters more than a small increase in processing time.

## Language

The model supports the English language.

## Usage

### CPU and GPU code

```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator (selects a GPU if available, otherwise CPU)
accelerator = Accelerator()

# Initialize the text-generation pipeline
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite", device=accelerator.device)

# Define the chat messages
messages = [
    {
        "role": "system",
        "content": "You are a chatbot who can help code!",
    },
    {
        "role": "user",
        "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.",
    },
]

# Prepare the prompt using the model's chat template
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate predictions
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text
print(outputs[0]["generated_text"])
```
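
If you prefer loading the model directly instead of going through the `pipeline` helper, the sketch below uses `AutoTokenizer` and `AutoModelForCausalLM` with the same chat template. It is a minimal example: the user prompt and the sampling parameters are illustrative, and `device_map="auto"` simply places the weights on a GPU when one is available.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite"

# Load the tokenizer and model; device_map="auto" uses a GPU if one is available
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Chat messages in the same format the pipeline example uses
messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write a Python function that reverses a string."},  # illustrative prompt
]

# Apply the chat template and tokenize the prompt in one step
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply (sampling parameters are illustrative, not tuned values)
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)

# Decode only the newly generated tokens (everything after the prompt)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Decoding only the tokens generated after the prompt returns just the assistant's reply, whereas the pipeline example above prints the full templated conversation.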