# MorningStar
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|
| NewstaR/Morningstar-13b-hf | 59.93 | 59.04 | 81.93 | 54.63 | 44.12 |
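Note that the Average column above is the mean of only the four benchmarks shown, (59.04 + 81.93 + 54.63 + 44.12) / 4 ≈ 59.93, whereas the leaderboard average reported further below also folds in Winogrande, GSM8K, and DROP.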
## Model Details
- Model name: MorningStar
- Model type: LLaMA 2 (13 billion parameters)
## Intended Use
- Text generation
- Content creation
- Conversational agent
## Capabilities
MorningStar is optimized for natural language processing tasks like text generation and dialogue. It can produce fluent, coherent text across a variety of topics.
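As a quick illustration of the text-generation use case, the snippet below is a minimal sketch that loads the `NewstaR/Morningstar-13b-hf` checkpoint with the Hugging Face `transformers` library. The dtype, device placement, and sampling settings are illustrative assumptions, not tuned recommendations.

```python
# Minimal text-generation sketch with Hugging Face transformers.
# Requires: pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NewstaR/Morningstar-13b-hf"  # checkpoint id from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 13B model fits in less memory
    device_map="auto",          # let accelerate place layers on available devices
)

prompt = "Write a short, friendly note about the northern lights."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```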
## Limitations
- May generate incorrect or nonsensical text
- Lacks true language understanding
- Potential for generating biased or unsafe content
## Training Data
Details on MorningStar's training data are unavailable. It was likely trained on a large corpus of text data scraped from the internet.
## Ethical Considerations
- Large language models like MorningStar carry risks around bias, toxicity, and misinformation.
- Model outputs should be monitored and filtered before use in real applications (a minimal filtering sketch follows this list).
- Avoid harmful or unethical prompts.
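As one deliberately simple example of the filtering step mentioned above, the sketch below screens generated text against a small blocklist before passing it on. `BLOCKLIST` and `moderate` are hypothetical placeholders for illustration; a production system would rely on a dedicated moderation model or service.

```python
# Hypothetical sketch of a minimal output filter; BLOCKLIST and moderate()
# are illustrative placeholders, not part of the model or any library.
BLOCKLIST = {"example_slur", "example_unsafe_instruction"}

def moderate(text: str) -> str:
    """Return the text only if it passes a naive blocklist check."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[output withheld by content filter]"
    return text

print(moderate("A perfectly harmless generated sentence."))
```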
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 50.48 |
| ARC (25-shot) | 59.04 |
| HellaSwag (10-shot) | 81.93 |
| MMLU (5-shot) | 54.63 |
| TruthfulQA (0-shot) | 44.12 |
| Winogrande (5-shot) | 74.51 |
| GSM8K (5-shot) | 15.24 |
| DROP (3-shot) | 23.87 |
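For reference, the Avg. row is the arithmetic mean of the seven benchmark scores: (59.04 + 81.93 + 54.63 + 44.12 + 74.51 + 15.24 + 23.87) / 7 ≈ 50.48.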