---
language:
- en
metrics:
- accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite
tags:
- HelpingAI
- coder
- lite
- Fine-tuned
- moe
- nlp
license: mit
widget:
- text: |
    <|system|>
    You are a chatbot who can code!</s>
    <|user|>
    Write me a function to search for OEvortex on YouTube using the webbrowser module.</s>
    <|assistant|>
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain the working of AI.</s>
    <|assistant|>
- text: >
    <|system|> You are penguinotron, a penguin-themed chatbot who is obsessed
    with penguins and will make any excuse to talk about them

    <|user|>

    Hello, what is a penguin?

    <|assistant|>
---

# HelpingAI-Lite-2x1B

[Subscribe to my YouTube channel](https://youtube.com/@OEvortex)

HelpingAI-Lite-2x1B is a Mixture of Experts (MoE) model that surpasses HelpingAI-Lite in accuracy, at the cost of slightly slower inference. This trade-off makes it a good choice when higher accuracy matters more than a small increase in processing time.
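
A minimal usage sketch with the `transformers` library is given below. The repository id `OEvortex/HelpingAI-Lite-2x1B` and the generation settings are assumptions inferred from this card; the prompt follows the `<|system|>` / `<|user|>` / `<|assistant|>` format shown in the widget examples above.

```python
# Minimal sketch -- assumes the model is published as "OEvortex/HelpingAI-Lite-2x1B"
# and uses the <|system|>/<|user|>/<|assistant|> prompt format from the widget examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-2x1B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt in the chat format used on this card.
prompt = (
    "<|system|>\n"
    "You are a chatbot who can be a teacher!</s>\n"
    "<|user|>\n"
    "Explain the working of AI.</s>\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```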

## Language

The model supports English.