cactusfriend committed 2d66101 (parent: 6bb033b): added example code

README.md (CHANGED)
A model based upon the prompts of all the images in my InvokeAI output directory. Mostly only positive prompts, though you may catch some words in [] brackets.

Note: the prompts are very chaotic; a good way to stress-test a model, perhaps?

To use this model, you can import it as a pipeline like so:

```py
from transformers import pipeline

generator = pipeline(model="cactusfriend/nightmare-invokeai-prompts",
                     tokenizer="cactusfriend/nightmare-invokeai-prompts",
                     task="text-generation")
```
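
To sanity-check that the pipeline loaded correctly, you can make a single call directly (a minimal sketch; the prompt and sampling settings here are only illustrative, not a tuned recommendation):

```py
# One-off generation to confirm the pipeline works.
# Illustrative settings only.
result = generator("a photograph of", max_new_tokens=50,
                   do_sample=True, temperature=1.8)
print(result[0]["generated_text"])
```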

Here's an example function that will, by default, generate 20 prompts at a temperature of 1.8, which seems good for this model.

```py
def makePrompts(prompt: str, *, p: float = 0.9,
                k: int = 40, num: int = 20,
                temp: float = 1.8, mnt: int = 150):
    outputs = generator(prompt, max_new_tokens=mnt,
                        temperature=temp, do_sample=True,
                        top_p=p, top_k=k, num_return_sequences=num)
    # Deduplicate identical generations before printing.
    items = set(i["generated_text"] for i in outputs)
    print("-" * 60)
    print("\n".join(items))
    print("-" * 60)
```

Then, you can call it like so:

```py
makePrompts("a photograph of")
# or, to change some defaults:
makePrompts("spaghetti all over", temp=1.4, p=0.92, k=45)
```
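
If you'd rather collect the generated prompts than print them, a small variant of the same call can return them as a list (a hypothetical helper, not part of the original README; `getPrompts` is an assumed name):

```py
def getPrompts(prompt: str, *, p: float = 0.9, k: int = 40,
               num: int = 20, temp: float = 1.8, mnt: int = 150):
    # Same sampling settings as makePrompts, but return the
    # unique generations instead of printing them.
    outputs = generator(prompt, max_new_tokens=mnt,
                        temperature=temp, do_sample=True,
                        top_p=p, top_k=k, num_return_sequences=num)
    return list({i["generated_text"] for i in outputs})
```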