Upload README.md with huggingface_hub
README.md CHANGED
@@ -1,94 +1,18 @@
-## (clipmodel,cliptextmodel)-calculate-distances.py
-
-Loads the generated embeddings, reads in a word, calculates the "distance" to every
-embedding, and then shows the closest "neighbours".
-
-Running this requires the files "embeddings.safetensors" and "dictionary",
-in matching format.
-
-You will need to rename or copy the appropriate files for this, as mentioned below.
-
-Note that SD models use CLIPTextModel, NOT CLIPModel.
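For orientation, a minimal sketch of the kind of lookup this script performs. The tensor key "embeddings", the euclidean metric, and the one-word-per-line dictionary layout are assumptions, not the script's confirmed format:

```python
# Sketch: nearest-neighbour lookup over precomputed word embeddings.
# Assumes "embeddings.safetensors" holds one [num_words, 768] tensor under
# the key "embeddings" (a guess), and "dictionary" is one word per line.
import torch
from safetensors.torch import load_file

embeddings = load_file("embeddings.safetensors")["embeddings"]
words = open("dictionary").read().splitlines()

target = input("word: ").strip()
ref = embeddings[words.index(target)]

distances = torch.norm(embeddings - ref, dim=1)  # distance to every word
for idx in distances.argsort()[:10]:             # ten closest neighbours
    print(words[int(idx)], float(distances[idx]))
```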
-## graph-textmodels.py
-
-Shows the difference between the same word embedded by CLIPTextModel
-vs. CLIPModel.
-## graph-embeddings.py
-
-Run the script. It will ask you for two text strings.
-Once you enter both, it will plot the graph and display it for you.
-
-Note that this tool does not require any of the other files; you just need
-the requisite Python modules installed (pip install -r requirements.txt).
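A rough sketch of what such a two-string comparison can look like; the ViT-L/14 checkpoint name and the simple line plot are assumptions:

```python
# Sketch: embed two text strings with CLIP and plot their feature vectors.
import matplotlib.pyplot as plt
import torch
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-large-patch14"   # assumed checkpoint
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

text1 = input("First string: ")
text2 = input("Second string: ")

with torch.no_grad():
    inputs = processor(text=[text1, text2], return_tensors="pt", padding=True)
    features = model.get_text_features(**inputs)   # shape [2, 768]

plt.plot(features[0].numpy(), label=text1)
plt.plot(features[1].numpy(), label=text2)
plt.legend()
plt.show()
```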
-### embeddings.safetensors
-
-You can either copy one of the provided files, or generate your own.
-See generate-embeddings.py for that.
-
-Note that you must always use the "dictionary" file that matches your embeddings file.
-### embeddings.allids.safetensors
-
-DO NOT USE THIS ONE for programs that expect a matching dictionary.
-This one is purely numeric, indexed by token id.
-It is intended more for research datamining, but it does have a matching
-graph front end, graph-byid.py.
-### dictionary
-
-Make sure to always use the dictionary file that matches your embeddings file.
-
-The "dictionary.fullword" file is pulled from fullword.json, which is distilled from the "full words"
-present in the ViT-L/14 CLIP model's provided token dictionary, "vocab.json".
-Thus there are only around 30,000 words in it.
-
-If you want to use the provided "embeddings.safetensors.huge" file, you will want the matching
-"dictionary.huge" file, which has over 300,000 words.
-
-This huge file comes from the Linux "wamerican-huge" package, which delivers it under
-/usr/share/dict/american-english-huge
-
-There is also a "wamerican-insane" package.
-## generate-embeddings.py
-
-Generates the "embeddings.safetensors" file, based on the "dictionary" file present.
-Takes a few minutes to run, depending on the size of the dictionary.
-
-The shape of the embeddings tensor is
-[number-of-words][768]
-
-Note that, yes, it is possible to pull a tensor directly from the CLIP model,
-using the key name text_model.embeddings.token_embedding.weight
-
-This will NOT GIVE YOU THE RIGHT DISTANCES!
-Hence we calculate, and then store, the embedding weights actually
-generated by the CLIP process.
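A sketch of that process: each word is run through CLIPTextModel rather than read out of the raw token_embedding weight matrix. The output key "embeddings" and the use of the pooled output are assumptions; the real script may reduce the token sequence differently:

```python
# Sketch: build a [num_words, 768] tensor of processed embeddings and save it.
import torch
from safetensors.torch import save_file
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "openai/clip-vit-large-patch14"   # assumed ViT-L/14 checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id)
model = CLIPTextModel.from_pretrained(model_id)

words = open("dictionary").read().splitlines()
rows = []
with torch.no_grad():
    for word in words:
        out = model(**tokenizer(word, return_tensors="pt"))
        rows.append(out.pooler_output[0])    # one 768-dim vector per word

save_file({"embeddings": torch.stack(rows)}, "embeddings.safetensors")
```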
-## fullword.json
-
-This file contains a collection of "one word, one CLIP token id" pairings.
-The file was taken from vocab.json, which is part of multiple SD models on huggingface.co
-
-The file was optimized for what people are actually going to type as words.
-First, all the non-"</w>" entries were stripped out.
-Then, all the garbage punctuation and foreign characters were stripped out.
-Finally, the actual "</w>" was stripped out, for ease of use.
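That distillation is straightforward to sketch. The isascii/isalpha checks below are assumptions standing in for "garbage punctuation and foreign characters":

```python
# Sketch of the vocab.json -> fullword.json distillation described above.
import json

with open("vocab.json") as f:
    vocab = json.load(f)

fullword = {}
for token, token_id in vocab.items():
    if not token.endswith("</w>"):           # keep only full-word entries
        continue
    word = token[: -len("</w>")]             # strip the "</w>" marker
    if word.isascii() and word.isalpha():    # assumed cleanliness filter
        fullword[word] = token_id

with open("fullword.json", "w") as f:
    json.dump(fullword, f)
```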
+This directory specializes in the Google T5 xxl LLM.
+
+Specifically, it focuses on
+https://huggingface.co/mcmonkey/google_t5-v1_1-xxl_encoderonly/
+because that is the one most used by AI generative models at the moment.
+
+* dictionary.T5.fullword
+  This file is filtered from the full list of tokens in this model
+  (generated by dumptokens.py; both T5 helpers are sketched after this list),
+  and has only the tokens that are full standalone words
+  (and are also plain English ASCII. Sorry, fancy languages)
+
+* dictionary.both
+  Words that are common across CLIP-L and T5. Generated by
+  sort dictionary.T5.fullword ../dictionary.fullword | uniq -c | awk '$1 == "2" {print $2}' > dict.both
+  (a word present in both sorted lists gets a count of 2 from uniq -c; the awk filter keeps only those)
+
+* showtokens.py
+  Given a word, or string of words, shows how T5 tokenizes it
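For orientation, hypothetical sketches of the two T5 helper scripts named in this list, using the encoder-only checkpoint linked above; the exact output format and dump filename are assumptions:

```python
# Sketches in the spirit of showtokens.py and dumptokens.py.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "mcmonkey/google_t5-v1_1-xxl_encoderonly"
)

def show_tokens(text: str) -> None:
    """showtokens.py-style: print how T5 tokenizes a word or phrase."""
    ids = tokenizer.encode(text)
    print(ids)                                    # numeric token ids
    print(tokenizer.convert_ids_to_tokens(ids))   # SentencePiece pieces, e.g. '▁word'

def dump_tokens(path: str = "tokens.txt") -> None:
    """dumptokens.py-style: write every vocab token, one per line."""
    with open(path, "w") as f:
        for i in range(tokenizer.vocab_size):
            f.write(tokenizer.convert_ids_to_tokens(i) + "\n")

show_tokens(input("word(s): "))
```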