prince-canuma committed on
Commit
a139750
1 Parent(s): 7a19e2b

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -116,8 +116,10 @@ In the course of this study, the [SlimOrca](https://huggingface.co/datasets/Open
 
 
 Subsequently, two distinct subsets were crafted, comprising 102,000 and 1,000 samples, denoted as:
+
 - [prince-canuma/SmallOrca](https://huggingface.co/datasets/prince-canuma/SmallOrca)
 - [prince-canuma/TinyOrca](https://huggingface.co/datasets/prince-canuma/TinyOrca)
+
 Although experimentation was conducted with both datasets, optimal results were achieved through fine-tuning on a modest set of 200 samples.
 Notably, the investigation revealed that augmenting the training data beyond this threshold predominantly enhanced the model's proficiency in generating Chain-of-Thought responses.
 However, it is imperative to note that the preference for Chain-of-Thought responses may not be universally applicable. Particularly in scenarios like the RAG setup,