Update README.md
README.md
@@ -36,9 +36,9 @@ GPT-NeoX-20B, a sibling model to StellarX, is a 20 billion parameter autoregress
## Training and Evaluation
-StellarX's training dataset comprises a comprehensive collection of English-language texts, covering various domains, thanks to the efforts of "redpajama"
+StellarX's training dataset comprises a comprehensive collection of English-language texts, covering various domains, thanks to the "RedPajama" dataset by the "togethercomputer" group.

-Evaluation of
+Evaluation of GPT-NeoX-20B has demonstrated its competence across a range of natural language tasks. Since this description provides only a brief summary, we refer readers to the GPT-NeoX paper (https://arxiv.org/abs/2204.06745), which compares GPT-NeoX-20B to other models on tasks such as LAMBADA, SciQ, PIQA, TriviaQA, and the ARC Challenge.

## Limitations and Considerations
@@ -72,4 +72,3 @@ Thank you for your time.
---ChatGPT
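For context, the benchmark comparison referenced in the added paragraph can be reproduced with EleutherAI's lm-evaluation-harness, the toolkit used for the GPT-NeoX paper's evaluations. Below is a minimal sketch; the v0.4-style `simple_evaluate` API, the harness task names, and the `EleutherAI/gpt-neox-20b` checkpoint (standing in for a StellarX checkpoint, whose model id is not given here) are assumptions for illustration, not part of this commit.

```python
# Sketch: score a GPT-NeoX-20B-class checkpoint on the tasks named above,
# using lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness).
# "lambada_openai" is the harness's detokenized LAMBADA variant.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/gpt-neox-20b,dtype=float16",
    tasks=["lambada_openai", "sciq", "piqa", "triviaqa", "arc_challenge"],
    batch_size=8,
)

# Print one metrics dict per task (accuracy, perplexity, etc.).
for task, metrics in results["results"].items():
    print(task, metrics)
```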