---
language: en
thumbnail: "https://i.ibb.co/HBqvBFY/mountain-xianxia-chinese-scenic-landscape-craggy-mist-action-scene-pagoda-s-2336925014-1.png"
tags:
- text generation
- pytorch
license: mit
---
# Qilin-lit-6b Description
The latest version is V1.1.0, fine-tuned on 550 MB of webnovels found on the NovelUpdates website (https://www.novelupdates.com/).
The output is SFW and whimsical; the model excels at telling fantasy stories, especially in the webnovel style.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Usage with Kobold AI Colab (Easiest)
GPU -> https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb
TPU -> https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb
Set the model drop-down to "rexwang8/qilin-lit-6b" and load that model.
## Usage with Kobold AI Local
In the menu, choose AI -> Load a model from its directory. The model name is "rexwang8/qilin-lit-6b". If you get a config.json-not-found error, restart the program and give it some time to detect your GPUs.
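If you would rather download the model files yourself and then point Kobold AI at the resulting directory, a minimal sketch using `huggingface_hub` (assuming it is installed; the target directory name here is arbitrary):

```python
from huggingface_hub import snapshot_download

# Download every file in the model repo into a local directory
# that Kobold AI (or transformers) can load from.
snapshot_download(repo_id='rexwang8/qilin-lit-6b', local_dir='./qilin-lit-6b')
```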
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and the matching tokenizer
model = AutoModelForCausalLM.from_pretrained('rexwang8/qilin-lit-6b')
tokenizer = AutoTokenizer.from_pretrained('rexwang8/qilin-lit-6b')

prompt = '''I had eyes but couldn't see Mount Tai!'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')

# Sample up to 100 new tokens beyond the prompt
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9,
                        repetition_penalty=1.2, max_length=len(input_ids[0]) + 100,
                        pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
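In full fp32 precision the 6B parameters alone take roughly 24 GB of memory, so on a single GPU you will usually want to load the weights in half precision. A minimal sketch, assuming a CUDA-capable GPU with ~16 GB or more of VRAM and a recent `torch`/`transformers` install:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the weights in fp16 (roughly half the memory, ~12 GB) and move them to the GPU
model = AutoModelForCausalLM.from_pretrained('rexwang8/qilin-lit-6b',
                                             torch_dtype=torch.float16).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('rexwang8/qilin-lit-6b')

# Inputs must live on the same device as the model
input_ids = tokenizer.encode("I had eyes but couldn't see Mount Tai!",
                             return_tensors='pt').to('cuda')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9,
                        repetition_penalty=1.2, max_length=len(input_ids[0]) + 100,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```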
---
## Qilin-lit-6b (V1.1.0)
Fine-tuned version of EleutherAI/gpt-j-6B (https://huggingface.co/EleutherAI/gpt-j-6B), trained on Coreweave's infrastructure (https://www.coreweave.com/) using a single A40 GPU over ~80 hours.
3150 steps, 1 epoch, trained on 550 MB of primarily Xianxia-genre webnovels (translated to English).
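For readers who want to attempt a similar fine-tune, the sketch below uses the Hugging Face `Trainer`. This is an illustrative assumption, not the author's actual training script; the corpus path and hyperparameters are hypothetical. Note that a full fine-tune of a 6B-parameter model will not fit on most single GPUs without DeepSpeed/ZeRO offloading or similar memory-saving techniques.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Hypothetical reproduction sketch -- not the author's actual training script.
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B')
tokenizer.pad_token = tokenizer.eos_token  # GPT-J's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-j-6B')

# 'webnovels.txt' is a placeholder for your own plain-text corpus
dataset = load_dataset('text', data_files={'train': 'webnovels.txt'})

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=2048)

train_set = dataset['train'].map(tokenize, batched=True, remove_columns=['text'])

args = TrainingArguments(output_dir='qilin-lit-6b', num_train_epochs=1,
                         per_device_train_batch_size=1,
                         gradient_accumulation_steps=16, fp16=True)
trainer = Trainer(model=model, args=args, train_dataset=train_set,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```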
---
## Team members and Acknowledgements
Rex Wang - Author
Coreweave - Compute infrastructure
With help from:
Wes Brown, Anthony Mercurio
---
## Version History
1.1.0 - 550 MB dataset (34 books), 3150 steps (no reordering, no sampling)
1.0.0 - 100 MB dataset (3 books), 300 steps (no reordering, no sampling)