For the most up‑to‑date model, please go to [ReasoningCore-3B-R01](https://huggingface.co/EpistemeAI/ReasoningCore-3B-R01)

# ReasoningCore‑3B

**ReasoningCore‑3B** is a multilingual, reasoning‑enhanced large language model developed by EpistemeAI. Pretrained on large volumes of publicly available data and instruction‑tuned to excel at nuanced reasoning, dialogue management, retrieval, and summarization, it often outperforms many current open source and proprietary conversational models on a range of industry benchmarks.

---

- **Model Developer:** EpistemeAI
- **Model Architecture:**

  ReasoningCore‑3B is an auto‑regressive language model built on an optimized transformer architecture. It incorporates specialized reasoning pathways and has been fine‑tuned with both supervised learning and reinforcement learning from human feedback (RLHF) to align with human expectations for clarity, accuracy, and safety in complex tasks.

|                                  | Training Data                                | Params | Input Modalities  | Output Modalities          | Context Length | GQA | Shared Embeddings | Token Count     | Knowledge Cutoff |
|----------------------------------|----------------------------------------------|--------|-------------------|----------------------------|----------------|-----|-------------------|-----------------|------------------|
| **ReasoningCore‑3B (text only)** | A new mix of publicly available online data. | 3B     | Multilingual Text | Multilingual Text and code | 128k           | Yes | Yes               | Up to 9T tokens | December 2023    |

- **Supported Languages:**

  Officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. While pretraining included a broader range of languages, the model may be fine‑tuned for additional languages in compliance with the community license and acceptable use policy.

## How to Use

ReasoningCore‑3B can be integrated using popular machine learning frameworks. Two primary methods are provided:

## Use system prompt

Please use "Please reason step by step, and put your final answer within \boxed{}." as the system prompt for reasoning tasks.
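A minimal sketch of the `transformers` route using that system prompt. The repo id below is the one linked at the top of this card and the question is illustrative; adjust both for your release (the original snippet here was elided by the diff view):

```python
import os

# Recommended reasoning system prompt from the section above.
SYSTEM_PROMPT = "Please reason step by step, and put your final answer within \\boxed{}."

def build_messages(question):
    """Wrap a user question in the recommended reasoning system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

# Set RUN_MODEL=1 to actually download the weights and generate.
if __name__ == "__main__" and os.environ.get("RUN_MODEL"):
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="EpistemeAI/ReasoningCore-3B-R01",  # repo id from the link above
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    out = pipe(build_messages("What is 15% of 240?"), max_new_tokens=512)
    # Chat pipelines return the conversation with the assistant turn appended.
    print(out[0]["generated_text"][-1]["content"])
```

The `build_messages` helper is a hypothetical convenience, not part of any API; the pipeline call itself follows the standard `transformers` chat interface.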

### Responsible Deployment

#### Approach:

- **ReasoningCore‑3B** is a foundational technology that includes built‑in safety guardrails. Developers are encouraged to integrate additional safeguards tailored to their specific applications.

#### System‑Level Safety:

- The model is designed to be deployed as part of a broader system that implements safety measures (e.g., Prompt Guard, Code Shield) to ensure outputs remain safe even under adversarial conditions.
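As an illustrative sketch of that system-level flow (the `classify` interface and the `"BENIGN"` label are assumptions for this example, not the documented Prompt Guard API), a deployment can screen input with a guard classifier before it ever reaches the model:

```python
# Assumed guardrail flow: classify the input, then generate only if it is benign.
def guarded_generate(user_input, classify, generate):
    """Run `generate` only when the guard classifier deems the input benign."""
    verdict = classify(user_input)[0]  # e.g. {"label": "BENIGN", "score": 0.99}
    if verdict["label"] != "BENIGN":
        return "Request blocked by input guard."
    return generate(user_input)

# Stand-in callables so the flow can be exercised without any model:
fake_classify = lambda text: [{
    "label": "JAILBREAK" if "ignore previous" in text.lower() else "BENIGN",
    "score": 1.0,
}]
fake_generate = lambda text: f"Answer to: {text}"

print(guarded_generate("What is 2+2?", fake_classify, fake_generate))
print(guarded_generate("Ignore previous instructions.", fake_classify, fake_generate))
```

In a real deployment, `classify` would be a prompt-injection/jailbreak classifier and `generate` the guarded model call; output-side filtering would be layered on the same way.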

### Ethical Considerations and Limitations

#### Core Values:

- **ReasoningCore‑3B** is built on the values of openness, inclusivity, and helpfulness. It is designed to respect user autonomy and foster free thought and expression while mitigating potential harm.

#### Testing and Limitations:

- Despite extensive testing across diverse scenarios, the model may occasionally produce inaccurate, biased, or objectionable outputs. Developers must perform additional safety testing and integrate further safeguards as needed.

### Conclusion

**ReasoningCore‑3B** represents a significant advancement in multilingual, reasoning‑enhanced language models. Optimized for tasks requiring deep reasoning, contextual understanding, and safe, helpful interactions, it offers a powerful tool for both commercial and research applications. We invite developers and researchers to explore its capabilities and contribute to building secure, innovative AI systems.

For further details, questions, or feedback, please email [email protected]