---
language:
  - en
metrics:
  - accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite
tags:
  - HelpingAI
  - coder
  - lite
  - Fine-tuned
  - moe
  - nlp
license: mit
widget:
  - text: |
      <|system|>
      You are a chatbot who can code!</s>
      <|user|>
      Write me a function to search for OEvortex on YouTube using webbrowser.</s>
      <|assistant|>
  - text: |
      <|system|>
      You are a chatbot who can be a teacher!</s>
      <|user|>
      Explain to me how AI works.</s>
      <|assistant|>
  - text: >
      <|system|> You are penguinotron, a penguin-themed chatbot who is
      obsessed with penguins and will make any excuse to talk about them

      <|user|>

      Hello, what is a penguin?

      <|assistant|>

---

# HelpingAI-Lite-2x1B

Subscribe to my YouTube channel


HelpingAI-Lite-2x1B is a Mixture of Experts (MoE) model that surpasses HelpingAI-Lite in accuracy, though it runs slightly slower. This trade-off makes it a good choice when higher accuracy is worth a modest increase in processing time.
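
The widget prompts above use a `<|system|>` / `<|user|>` / `<|assistant|>` chat format with `</s>` turn terminators. Below is a minimal usage sketch with `transformers`; the repository id `OEvortex/HelpingAI-Lite-2x1B` and the generation settings are assumptions, not taken from this card.

```python
# Minimal sketch: load the model and generate a reply using the prompt
# format shown in the widget examples. Repo id is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-2x1B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a prompt in the <|system|>/<|user|>/<|assistant|> format.
prompt = (
    "<|system|>\n"
    "You are a chatbot who can be a teacher!</s>\n"
    "<|user|>\n"
    "Explain to me how AI works.</s>\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)  # illustrative setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```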

## Language

The model supports English.