Julius ter Pelkwijk committed
Commit c6db29f · Parent(s): 43e59fa
Initial commit
Browse files
- README.md +26 -0
- config.json +31 -0
- merges.txt +0 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
README.md
CHANGED
@@ -1,3 +1,29 @@
 ---
+language: en
 license: mit
 ---
+# Fairseq-dense 13B - Shinen
+## Model Description
+Fairseq-dense 13B-Shinen is a finetune created using Fairseq's MoE dense model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
+**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
+## Training data
+The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
+```
+[Theme: <theme1>, <theme2>, <theme3>]
+<Story goes here>
+```
+### How to use
+You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
+```py
+>>> from transformers import pipeline
+>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Shinen')
+>>> generator("She was staring at me", do_sample=True, min_length=50)
+[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
+```
+### Limitations and Biases
+Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
+
+### BibTeX entry and citation info
+```
+Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
+```
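The `[Theme: ...]` tagging described in the README can also be used at inference time to steer generation. A minimal sketch of building such a prompt — the `build_prompt` helper and the example themes are illustrative, not part of the model card:

```python
def build_prompt(themes, opening):
    """Format a prompt in the [Theme: ...] style used by the training data."""
    tag = "[Theme: " + ", ".join(themes) + "]"
    return tag + "\n" + opening

# Hypothetical themes followed by the README's example story opening.
prompt = build_prompt(["romance", "drama"], "She was staring at me")
print(prompt)
```

Passing a prompt shaped this way to the `text-generation` pipeline shown above should bias the continuation toward the listed themes, mirroring how the training stories were tagged.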
config.json
ADDED
@@ -0,0 +1,31 @@
+{
+  "_name_or_path": "KoboldAI/fairseq-dense-13B",
+  "activation_dropout": 0.0,
+  "activation_function": "gelu",
+  "architectures": [
+    "XGLMForCausalLM"
+  ],
+  "attention_dropout": 0.1,
+  "attention_heads": 40,
+  "bos_token_id": 50257,
+  "d_model": 5120,
+  "decoder_start_token_id": 2,
+  "dropout": 0.1,
+  "eos_token_id": 50259,
+  "ffn_dim": 20480,
+  "init_std": 0.02,
+  "layerdrop": 0.0,
+  "max_position_embeddings": 2048,
+  "model_type": "xglm",
+  "newlinemode": "s",
+  "num_layers": 40,
+  "pad_token_id": 1,
+  "scale_embedding": true,
+  "tokenizer_class": "GPT2Tokenizer",
+  "torch_dtype": "float16",
+  "transformers_version": "4.17.0",
+  "use_cache": false,
+  "vocab_size": 50261,
+  "welcome": "## Warning: This model has a very heavy NSFW bias and is not suitable for use by minors!\n\nYou are currently running story-writing model `Shinen, version 4.`\n\n This model is made by [Mr. Seeker](https://www.patreon.com/mrseeker)\n\n### How to use this model\n\nShinen is designed to generate short stories and novels. Use the author's note to give it a certain genre to follow, use memory to give an overview of the story and use World Information to give it specific details about the characters. To start off, give the AI an idea what you are writing about by setting the scene. Give the AI around 10 sentences that make your story really interesting to read. Introduce your character, describe the world, blow something up, or let the AI use its creative mind.",
+  "antemplate": "[Theme: <|>]"
+}
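The dimensions in this config are consistent with the "13B" in the model name. A rough back-of-the-envelope estimate from the values above — this formula counts only the attention/FFN projection matrices and the token embedding, ignoring biases and layer norms:

```python
# Values taken from the config.json added in this commit.
d_model, ffn_dim, num_layers, vocab = 5120, 20480, 40, 50261

attn = 4 * d_model * d_model   # q, k, v, and output projections per layer
ffn = 2 * d_model * ffn_dim    # up and down projections per layer
embed = vocab * d_model        # token embedding matrix

total = num_layers * (attn + ffn) + embed
print(f"~{total / 1e9:.1f}B parameters")  # ~12.8B, i.e. "13B"
```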
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82863847ce06b577a5dae0c7a7638e9b5c56d6472bca370e3bec210be9d0fc58
+size 25707041085
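The three lines above are a git-lfs pointer, not the weights themselves; the real file is fetched on checkout (e.g. via `git lfs pull`). A sketch of parsing the pointer format — the `pointer` string simply reproduces the diff above:

```python
# A git-lfs pointer is key/value pairs, one per line, separated by a space.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:82863847ce06b577a5dae0c7a7638e9b5c56d6472bca370e3bec210be9d0fc58
size 25707041085"""

fields = dict(line.split(" ", 1) for line in pointer.splitlines())
size_bytes = int(fields["size"])
print(f"{size_bytes / 1e9:.1f} GB")  # 25.7 GB
```

The size is consistent with the config's `"torch_dtype": "float16"`: roughly 2 bytes per parameter for a ~13B-parameter model.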
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "<|endoftext|>", "pad_token": "<pad>"}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+{"errors": "replace", "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "special_tokens_map_file": "/root/.cache/huggingface/transformers/d62d75cc3a7250ada25f0a99e2741555d3712693661d5eef48b3fcbdd151d255.f4b0476f9d35aab16d5dd877dd9e5d547702eff96a3d808497c0d3fc36a32c99", "name_or_path": "KoboldAI/fairseq-dense-13B", "tokenizer_class": "GPT2Tokenizer"}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff