---
language:
- en
tags:
- llama
- llama2
---
|
# MorningStar
|
|
|
![Morningstar](Morningstar.jpg)
|
|
|
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|----------------------------|---------|-------|-----------|-------|------------|
| NewstaR/Morningstar-13b-hf | 59.93 | 59.04 | 81.93 | 54.63 | 44.12 |
|
|
|
|
|
## Model Details |
|
- Model name: MorningStar

- Model type: Llama 2 (13 billion parameters)
|
|
|
## Intended Use |
|
- Text generation

- Content creation

- Conversational agent
|
|
|
## Capabilities |
|
MorningStar is optimized for natural language processing tasks such as text generation and dialogue, and can produce fluent, coherent text across a variety of topics.
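The card does not specify an inference recipe. A minimal sketch using the Hugging Face `transformers` causal-LM API might look like the following; the repo id is taken from the leaderboard table above, and the generation settings are illustrative defaults rather than tuned recommendations.

```python
# Minimal inference sketch (assumption: the model loads via the standard
# `transformers` AutoModelForCausalLM / AutoTokenizer interface).
MODEL_ID = "NewstaR/Morningstar-13b-hf"  # repo id from the leaderboard table

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Complete `prompt` with the model (downloads ~26 GB of fp16 weights)."""
    # Heavy dependencies are imported lazily so the module itself loads cheaply.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example call (requires the weights to be downloaded):
# print(generate("The morning star is another name for"))
```

As a base (non-chat) checkpoint, the model is best prompted with plain text completions rather than a chat template.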
|
|
|
## Limitations |
|
- May generate incorrect or nonsensical text

- Lacks true language understanding

- Potential for generating biased or unsafe content
|
|
|
## Training Data |
|
Details on MorningStar's training data are unavailable. It was likely trained on a large corpus of text data scraped from the internet.
|
|
|
## Ethical Considerations |
|
- Large language models like MorningStar carry risks around bias, toxicity, and misinformation.

- Model outputs should be monitored and filtered before use in real applications.

- Avoid harmful or unethical prompts.
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NewstaR__Morningstar-13b-hf).
|
|
|
| Metric | Value |
|----------------------|-------|
| Avg. | 50.48 |
| ARC (25-shot) | 59.04 |
| HellaSwag (10-shot) | 81.93 |
| MMLU (5-shot) | 54.63 |
| TruthfulQA (0-shot) | 44.12 |
| Winogrande (5-shot) | 74.51 |
| GSM8K (5-shot) | 15.24 |
| DROP (3-shot) | 23.87 |
|