---
language:
- en
pipeline_tag: text-generation
tags:
- science fiction
- text generation
---
This model has been fine-tuned on novels written by H. G. Wells, the famous author best known for his science fiction and often called the father of science fiction.
The model can be used to generate text in the style of H. G. Wells. Because it was trained mostly on his science fiction novels, the text it produces tends to belong to that genre.
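As a rough sketch, the model could be loaded and used for generation with the Hugging Face transformers pipeline as shown below; the repository ID used here is a placeholder, since the exact model ID is not stated in this card.

```python
# Minimal generation sketch. The model ID below is a placeholder, not the
# actual repository name of this model.
from transformers import pipeline

generator = pipeline("text-generation", model="MinzaKhan/HGWells-gpt2")

prompt = "The machine shuddered, and the laboratory dissolved into grey mist"
result = generator(prompt, max_length=100, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```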
A limitation of this model is that it can only generate text in the style of H. G. Wells, not in the style of any other author. It may also reproduce stereotypes that were common at the time the novels were written. An ethical consideration to keep in mind is that the generated text may carry the gender biases present when H. G. Wells wrote these novels.
I created my own dataset to train this model, consisting of 14 novels by H. G. Wells, most of them science fiction. The dataset contains just over 1 million tokens. The novels in the corpus, with their token counts, are:

| Novel | Tokens |
|---|---|
| The Time Machine | 37,677 |
| In the Days of the Comet | 95,299 |
| The Food of the Gods | 90,723 |
| Tales of Space and Time | 85,850 |
| The World Set Free | 74,971 |
| The War of the Worlds | 69,530 |
| The First Men in the Moon | 81,517 |
| The Invisible Man | 60,581 |
| The Island of Doctor Moreau | 52,073 |
| The Sleeper Awakes | 91,274 |
| The War in the Air | 115,573 |
| The Research Magnificent | 131,866 |
| The Undying Fire | 52,036 |
| The Red Room | 4,618 |

The total number of tokens in the corpus is 1,043,588.
The corpus was created by downloading and combining 14 novels by H. G. Wells from Project Gutenberg, most of them science fiction, so the model has been trained to produce science-fiction text in his style.
The text that Project Gutenberg adds at the beginning and end of each novel was removed. The full text of each novel was then collapsed into a single line, and that line was split into 20 parts, giving 20 lines per novel. The lines from all novels were combined into a single text file, which was tokenized with the GPT2Tokenizer class from the Hugging Face transformers library and used to fine-tune the model. A sketch of these preprocessing steps is shown below.
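The exact preprocessing script is not included in this card; the following is a minimal sketch of the steps described above, in which the file names and the Project Gutenberg marker strings are assumptions.

```python
# Sketch of the preprocessing described above. File names and the Gutenberg
# start/end markers are assumptions; the markers vary slightly between books.
from transformers import GPT2Tokenizer

def strip_gutenberg_boilerplate(text):
    # Drop the licence text Project Gutenberg adds before and after each novel.
    start = text.find("*** START OF")
    end = text.find("*** END OF")
    if start != -1 and end != -1:
        start = text.index("\n", start) + 1  # skip the "*** START OF ..." line itself
        return text[start:end]
    return text

def novel_to_lines(path, n_parts=20):
    with open(path, encoding="utf-8") as f:
        text = strip_gutenberg_boilerplate(f.read())
    # Collapse the whole novel into one line, then split that line into n_parts chunks.
    single_line = " ".join(text.split())
    chunk = len(single_line) // n_parts + 1
    return [single_line[i:i + chunk] for i in range(0, len(single_line), chunk)]

# Combine the lines from every novel into a single training file.
novel_files = ["the_time_machine.txt", "the_war_of_the_worlds.txt"]  # ...all 14 novels
with open("hgwells_corpus.txt", "w", encoding="utf-8") as out:
    for path in novel_files:
        out.write("\n".join(novel_to_lines(path)) + "\n")

# The GPT-2 tokenizer can then be used to check the total token count.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
with open("hgwells_corpus.txt", encoding="utf-8") as f:
    total = sum(len(tokenizer.encode(line)) for line in f)
print(total)
```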
The values of the parameters used during fine-tuning are listed below; a fine-tuning sketch using them follows the list.

- batch_size = 2
- max_length = 1024
- epochs = 10
- learning_rate = 5e-4
- warmup_steps = 1e2
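The training script itself is not included in this card. The sketch below shows one way GPT-2 might be fine-tuned with these parameters using the Hugging Face Trainer; the base checkpoint ("gpt2") and the corpus file name are assumptions.

```python
# Fine-tuning sketch using the parameters listed above. The base checkpoint
# ("gpt2") and the corpus file name are assumptions.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2Tokenizer, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The single text file produced during preprocessing.
dataset = load_dataset("text", data_files={"train": "hgwells_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)  # max_length = 1024

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2-hgwells",
    per_device_train_batch_size=2,  # batch_size = 2
    num_train_epochs=10,            # epochs = 10
    learning_rate=5e-4,             # learning_rate = 5e-4
    warmup_steps=100,               # warmup_steps = 1e2
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```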
The corpus has been uploaded to the Hugging Face Hub and can be accessed at the following link: https://huggingface.co/datasets/MinzaKhan/HGWells
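If the dataset repository stores the corpus as plain text that the datasets library can auto-detect (an assumption about its layout), it could be loaded directly from the Hub, for example:

```python
# Sketch of loading the corpus from the Hub; assumes the repository contains
# plain-text data that the datasets library can auto-detect.
from datasets import load_dataset

corpus = load_dataset("MinzaKhan/HGWells")
print(corpus)
```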