hpprc committed
Commit 97ed133
1 Parent(s): 4021682

Update README.md

Files changed (1):
  1. README.md +66 -88

README.md CHANGED
@@ -6,42 +6,16 @@ tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
- base_model: tohoku-nlp/bert-large-japanese-v2
+ base_model: tohoku-nlp/bert-base-japanese-v3
  widget: []
  pipeline_tag: sentence-similarity
  license: apache-2.0
+ datasets:
+ - cl-nagoya/ruri-dataset-ft
  ---
 
- # SentenceTransformer based on tohoku-nlp/bert-large-japanese-v2
+ # Ruri: Japanese General Text Embeddings
 
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [tohoku-nlp/bert-large-japanese-v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [tohoku-nlp/bert-large-japanese-v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2) <!-- at revision 75b828083735e953e3ed13e2ad6ea945c1fdb390 -->
- - **Maximum Sequence Length:** 512 tokens
- - **Output Dimensionality:** 1024 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
- ```
- MySentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```
 
  ## Usage
 
@@ -55,64 +29,82 @@ pip install -U sentence-transformers
 
  Then you can load this model and run inference.
  ```python
+ import torch.nn.functional as F
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
+ model = SentenceTransformer("cl-nagoya/ruri-pt-base")
+
+ # Don't forget to add the prefix "クエリ: " for query-side or "文章: " for passage-side texts.
  sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
+     "クエリ: 瑠璃色はどんな色?",
+     "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。名は、半貴石の瑠璃(ラピスラズリ、英: lapis lazuli)による。JIS慣用色名では「こい紫みの青」(略号 dp-pB)と定義している[1][2]。",
+     "クエリ: ワシやタカのように、鋭いくちばしと爪を持った大型の鳥類を総称して「何類」というでしょう?",
+     "文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。これらの猛禽類はリンネ前後の時代(17~18世紀)には鷲類・鷹類・隼類及び梟類に分類された。ちなみにリンネは狩りをする鳥を単一の目(もく)にまとめ、vultur(コンドル、ハゲワシ)、falco(ワシ、タカ、ハヤブサなど)、strix(フクロウ)、lanius(モズ)の4属を含めている。",
  ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 1024]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
+
+ embeddings = model.encode(sentences, convert_to_tensor=True)
+ print(embeddings.size())
+ # [4, 1024]
+
+ similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
+ print(similarities)
+ ```
 
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
+ ## Benchmarks
+
+ ### JMTEB
+ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
+
+ |Model|#Param.|Avg.|Retrieval|STS|Classification|Reranking|Clustering|PairClassification|
+ |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+ |[cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base)|111M|68.56|49.64|82.05|73.47|91.83|51.79|62.57|
+ |[cl-nagoya/sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large)|337M|66.51|37.62|83.18|73.73|91.48|50.56|62.51|
+ |[cl-nagoya/unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base)|111M|65.07|40.23|78.72|73.07|91.16|44.77|62.44|
+ |[cl-nagoya/unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large)|337M|66.27|40.53|80.56|74.66|90.95|48.41|62.49|
+ |[pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja)|133M|70.44|59.02|78.71|76.82|91.90|49.78|66.39|
+ ||||||||||
+ |[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)|472M|64.70|40.12|76.56|72.66|91.63|44.88|62.33|
+ |[intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)|118M|69.52|67.27|80.07|67.62|93.03|46.91|62.19|
+ |[intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)|278M|70.12|68.21|79.84|69.30|92.85|48.26|62.26|
+ |[intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)|560M|71.65|70.98|79.70|72.89|92.96|51.24|62.15|
+ ||||||||||
+ |OpenAI/text-embedding-ada-002|-|69.48|64.38|79.02|69.75|93.04|48.30|62.40|
+ |OpenAI/text-embedding-3-small|-|70.86|66.39|79.46|73.06|92.92|51.06|62.27|
+ |OpenAI/text-embedding-3-large|-|73.97|74.48|82.52|77.58|93.58|53.32|62.35|
+ ||||||||||
+ |[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)|68M|71.53|69.41|82.79|76.22|93.00|51.19|62.11|
+ |[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|71.91|69.82|82.87|75.58|92.91|54.16|62.38|
+ |[Ruri-Large](https://huggingface.co/cl-nagoya/ruri-large)|337M|73.31|73.02|83.13|77.43|92.99|51.82|62.29|
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [tohoku-nlp/bert-large-japanese-v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024
+ - **Similarity Function:** Cosine Similarity
+ - **Language:** Japanese
+ - **License:** Apache 2.0
+ - **Paper:** https://arxiv.org/abs/2409.07737
+ <!-- - **Training Dataset:** Unknown -->
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
 
  ## Training Details
 
+
  ### Framework Versions
  - Python: 3.10.13
  - Sentence Transformers: 3.0.0
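
The updated snippet encodes queries and passages with asymmetric prefixes ("クエリ: " for queries, "文章: " for passages). Below is a minimal retrieval-style sketch under that convention, reusing the model id from the snippet above and shortened versions of its example texts (illustrative only):

```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# Model id taken from the usage snippet above.
model = SentenceTransformer("cl-nagoya/ruri-pt-base")

# Asymmetric prefixes: "クエリ: " for the query, "文章: " for each passage.
query = model.encode("クエリ: 瑠璃色はどんな色?", convert_to_tensor=True)
passages = model.encode(
    [
        "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。",
        "文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。",
    ],
    convert_to_tensor=True,
)

# Cosine similarity of the query against each passage; higher = more relevant.
scores = F.cosine_similarity(query.unsqueeze(0), passages, dim=1)
print(scores)                  # tensor of shape [2]
print(scores.argmax().item())  # index of the best-matching passage
```

The `unsqueeze(0)`/`unsqueeze(1)` pattern in the card's snippet is the batched form of the same computation: broadcasting expands the `[4, 1024]` embeddings into a full `[4, 4]` cosine-similarity matrix.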
@@ -122,24 +114,10 @@ You can finetune this model on your own dataset.
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1
 
- ## Citation
+ <!-- ## Citation
 
  ### BibTeX
+ -->
 
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
+ ## License
+ This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
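
The Pooling module in the Full Model Architecture section above is configured with `pooling_mode_mean_tokens: True`, i.e. a sentence embedding is the attention-mask-aware mean of the final-layer token embeddings. A minimal sketch of the equivalent computation in plain `transformers`, assuming the checkpoint loads via `AutoModel` and the Japanese tokenizer dependencies (`fugashi`, `unidic-lite`) are installed:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name taken from the usage snippet above (assumption).
tokenizer = AutoTokenizer.from_pretrained("cl-nagoya/ruri-pt-base")
model = AutoModel.from_pretrained("cl-nagoya/ruri-pt-base")
model.eval()

# The prefix convention still applies when bypassing sentence-transformers.
batch = tokenizer(
    ["クエリ: 瑠璃色はどんな色?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # [batch, seq_len, hidden]

# Mean over real tokens only: zero out padding, then divide by token counts.
mask = batch["attention_mask"].unsqueeze(-1).float()     # [batch, seq_len, 1]
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # [batch, hidden]
```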
 