I do not endorse any particular perspectives presented in the training data.

---
## Centaurus Series

This series aims to develop highly uncensored Large Language Models (LLMs) with a focus on the following areas:

- Science, Technology, Engineering, and Mathematics (STEM)
- Computer Science (including programming)
- Social Sciences

It also targets several key cognitive skills, including but not limited to:

- Reasoning and logical deduction
- Critical thinking
- Analysis

While maintaining strong overall knowledge and expertise, the models are refined through:

- Fine-tuning
- Model merging techniques, including Mixture of Experts (MoE); see the sketch below
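
As a rough, hypothetical illustration of the simplest form of weight-space merging (linear interpolation of two fine-tuned checkpoints; the MoE merges mentioned above use a different, routing-based composition), the sketch below averages the parameters of two checkpoints. The repository IDs and the interpolation weight are placeholders, not actual Centaurus artifacts.

```python
# Minimal sketch: linear weight-space merge of two fine-tuned checkpoints.
# The repo IDs below are placeholders, not actual Centaurus checkpoints.
# Assumes both checkpoints share the same architecture.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("org/fine-tune-a", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("org/fine-tune-b", torch_dtype=torch.bfloat16)

alpha = 0.5  # interpolation weight between checkpoint A and checkpoint B
state_b = model_b.state_dict()
merged = {
    name: alpha * param + (1.0 - alpha) * state_b[name]
    for name, param in model_a.state_dict().items()
}

model_a.load_state_dict(merged)         # reuse model A's architecture for the merged weights
model_a.save_pretrained("merged-model")
```

In practice, merges like this (and especially MoE composition) are usually carried out with dedicated merging tooling rather than a hand-rolled loop.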
Please note that these models are experimental and may vary in effectiveness. Feedback, critique, and questions are very welcome and help guide improvements.

## Base

This model and its related LoRA were fine-tuned from [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3).
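
A minimal usage sketch, assuming the transformers, peft, and accelerate libraries are installed. The adapter repository ID ("your-namespace/centaurus-lora") is a placeholder; this card does not state the exact name of the LoRA repository.

```python
# Minimal sketch: load the abliterated base model and attach the LoRA adapter.
# "your-namespace/centaurus-lora" is a placeholder, not the actual adapter repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, "your-namespace/centaurus-lora")

messages = [{"role": "user", "content": "Briefly explain logical deduction."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```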