Update README.md
README.md
@@ -51,7 +51,7 @@ The corpus was created by downloading and combining 14 novels of the famous auth
 
 The corpus consists of 14 novels written by H G Wells downloaded from Project Gutenberg. The text added by Project Gutenberg at the beginning and end of each novel was removed. Then the entire text in each novel
 was converted into one line. Then the single line was broken into 20 parts. In this way 20 lines were generated for each novel. The lines from each novel were then combined and
-stored in a single text file. This text file was then used to finetune the model.
+stored in a single text file. The text was tokenized using the GPT2Tokenizer class from the Hugging Face Transformers library. This text file was then used to finetune the model.
 
 The values of the parameters used during finetuning are:
 
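A minimal sketch of the corpus preparation described in the updated paragraph, assuming the raw novels are plain-text files in a `novels/` directory, the combined file is named `corpus.txt`, and the Project Gutenberg header/footer follow the usual `*** START OF` / `*** END OF` markers (directory name, file name, and marker strings are assumptions, not taken from the repository):

```python
from pathlib import Path
from transformers import GPT2Tokenizer

PARTS_PER_NOVEL = 20  # each novel is broken into 20 lines, as described above


def strip_gutenberg_boilerplate(text: str) -> str:
    """Drop the text Project Gutenberg adds before and after each novel.

    The marker strings are an assumption; adjust them to the actual files.
    """
    start = text.find("*** START OF")
    if start != -1:
        text = text[text.index("\n", start) + 1 :]  # keep everything after the start-marker line
    end = text.find("*** END OF")
    if end != -1:
        text = text[:end]  # keep everything before the end marker
    return text


lines = []
for novel_path in sorted(Path("novels").glob("*.txt")):  # the 14 H G Wells novels (path assumed)
    text = strip_gutenberg_boilerplate(novel_path.read_text(encoding="utf-8"))
    words = " ".join(text.split()).split()  # collapse the whole novel into one line of words
    # Break that single line into 20 roughly equal parts.
    n = len(words)
    bounds = [round(i * n / PARTS_PER_NOVEL) for i in range(PARTS_PER_NOVEL + 1)]
    lines += [" ".join(words[bounds[i] : bounds[i + 1]]) for i in range(PARTS_PER_NOVEL)]

# Combine the lines from all novels into a single text file used for finetuning.
Path("corpus.txt").write_text("\n".join(lines), encoding="utf-8")

# Tokenize the combined corpus with GPT-2's tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
token_ids = tokenizer(Path("corpus.txt").read_text(encoding="utf-8"))["input_ids"]
```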