---
language: en
thumbnail: >-
  https://i.ibb.co/HBqvBFY/mountain-xianxia-chinese-scenic-landscape-craggy-mist-action-scene-pagoda-s-2336925014-1.png
tags:
- text generation
- pytorch
license: mit
---
# Qilin-lit-6b Description

The most recent version is V1.1.0, which is fine-tuned on 550 MB of webnovels found on the NovelUpdates website (https://www.novelupdates.com/).

The style is SFW and whimsical; the model excels at telling fantasy stories, especially in the webnovel idiom.
## Downstream Uses

This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Usage with Kobold AI Colab (Easiest)

- GPU: https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb
- TPU: https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb

Replace the drop-down value with "rexwang8/qilin-lit-6b" and select that model.
## Usage with Kobold AI Local

Load the model via AI > Load a model from its directory. The model name is "rexwang8/qilin-lit-6b". If you get a "config.json not found" error, reload the program and give it some time to find your GPUs.
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and its matching tokenizer (both from rexwang8/qilin-lit-6b)
model = AutoModelForCausalLM.from_pretrained('rexwang8/qilin-lit-6b')
tokenizer = AutoTokenizer.from_pretrained('rexwang8/qilin-lit-6b')

prompt = '''I had eyes but couldn't see Mount Tai!'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')

# Sample up to 100 new tokens beyond the prompt
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
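
In full precision, a 6-billion-parameter model needs roughly 24 GB of memory to load. If you have a CUDA GPU with enough memory (roughly 16 GB), a half-precision variant is usually preferable. A minimal sketch, assuming `torch` is installed and a CUDA device is available:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Half precision roughly halves memory use (~12 GB of weights) versus float32
model = AutoModelForCausalLM.from_pretrained('rexwang8/qilin-lit-6b', torch_dtype=torch.float16).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('rexwang8/qilin-lit-6b')

input_ids = tokenizer.encode("I had eyes but couldn't see Mount Tai!", return_tensors='pt').to('cuda')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```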
## Qilin-lit-6b (V1.1.0)

Fine-tuned version of EleutherAI/gpt-j-6B (https://huggingface.co/EleutherAI/gpt-j-6B), trained on Coreweave's infrastructure (https://www.coreweave.com/) using a single A40 over roughly 80 hours.

Trained for 3150 steps (1 epoch) on 550 MB of primarily Xianxia-genre webnovels, translated into English.
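
For readers who want to reproduce a similar fine-tune, the sketch below shows a standard causal-language-modeling setup with the `transformers` Trainer. This is not the actual training script used for Qilin; the dataset file (`webnovels.txt`) and all hyperparameters other than the epoch count are illustrative placeholders.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder corpus file; the actual webnovel dataset is not distributed with this model
dataset = load_dataset('text', data_files={'train': 'webnovels.txt'})['train']

tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B')
tokenizer.pad_token = tokenizer.eos_token  # GPT-J's tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-j-6B')

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])

# mlm=False gives the standard causal-LM objective (labels = inputs shifted by one)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(output_dir='qilin-lit-6b', num_train_epochs=1,
                         per_device_train_batch_size=1, gradient_accumulation_steps=16,
                         learning_rate=5e-6, fp16=True, save_strategy='epoch')

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```

Note that a full fine-tune of a 6B-parameter model also needs optimizer-state offloading or sharding (e.g. DeepSpeed) beyond what this sketch shows to fit on a single GPU.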
## Team Members and Acknowledgements

- Rex Wang - Author
- Coreweave - Compute resources

With help from: Wes Brown and Anthony Mercurio
## Version History

- 1.1.0 - 550 MB dataset (34 books), 3150 steps (no reordering, no sampling)
- 1.0.0 - 100 MB dataset (3 books), 300 steps (no reordering, no sampling)