thxCode committed on
Commit 8bc29db
0 Parent(s):

feat: first commit

Signed-off-by: thxCode <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
assets/rag_eval_multiple_domains_summary.jpg filter=lfs diff=lfs merge=lfs -text
*.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,637 @@
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- transformers
- sentence-transformers
language:
- en
- zh
- ja
- ko
---

# bce-reranker-base_v1-GGUF

**Model creator**: [Netease YouDao](https://github.com/netease-youdao/BCEmbedding)<br/>
**Original model**: [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)<br/>
**GGUF quantization**: based on llama.cpp release [cc298](https://github.com/ggerganov/llama.cpp/commit/cc2983d3753c94a630ca7257723914d4c4f6122b)

<!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-10 00:17:02
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>

<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
<img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
</a>
<a href="https://twitter.com/YDopensource">
<img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}">
</a>
</p>

最新、最详细bce-reranker-base_v1相关信息,请移步(The latest updates can be found at):

<p align="left">
<a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a>
</p>

## 主要特点(Key Features):
- 中英日韩四个语种,以及中英日韩四个语种的跨语种能力(Multilingual and crosslingual capability in English, Chinese, Japanese and Korean);
- RAG优化,适配更多真实业务场景(RAG adaptation for more domains, including Education, Law, Finance, Medical, Literature, FAQ, Textbook, Wikipedia, etc.);
- <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>适配长文本做rerank(Handles reranking of long passages beyond the 512-token limit in <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>);
- RerankerModel可以提供 **“平滑”的“绝对”相关性分数**,**“平滑”对排序友好**,**“绝对”分数用于过滤低质量passage**,低质量passage过滤阈值推荐0.35或0.4。(RerankerModel provides a **"smooth" (for reranking) and "meaningful" (for filtering bad passages with a threshold of 0.35 or 0.4) similarity score**, which helps you figure out how relevant the query and passages are!)
- **最佳实践(Best practice)**:embedding召回top50-100片段,reranker对这50-100片段精排,最后取top5-10片段。(1. Get the top 50-100 passages with [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) for "`recall`"; 2. Rerank those passages with [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) and keep the top 5-10 for "`precision`".)

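The two-stage best practice above can be sketched end-to-end. This is a minimal, dependency-free illustration only: `overlap_score` is a toy word-overlap stand-in for the real `EmbeddingModel` similarities and `RerankerModel` scores, and `recall_top_k` / `rerank_and_filter` are hypothetical helper names, not part of `BCEmbedding`.

```python
def overlap_score(query: str, passage: str) -> float:
    # toy relevance score (Jaccard word overlap); a real pipeline would use
    # EmbeddingModel cosine similarity here and RerankerModel scores below
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q | p), 1)

def recall_top_k(query, passages, k=100):
    # stage 1 ("recall"): cheap pass over the whole corpus, keep the top 50-100
    return sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)[:k]

def rerank_and_filter(query, candidates, k=10, threshold=0.35):
    # stage 2 ("precision"): rerank the recalled candidates, keep the top 5-10,
    # and drop low-quality passages below the recommended 0.35 threshold
    scored = sorted(((overlap_score(query, p), p) for p in candidates), reverse=True)
    return [(score, p) for score, p in scored[:k] if score >= threshold]

passages = ['I like apples', 'I like oranges', 'Apples and oranges are fruits']
top = rerank_and_filter('i like apples', recall_top_k('i like apples', passages))
# 'Apples and oranges are fruits' scores ~0.14 and is filtered out
```

Only the filtering threshold and the top-50-100 / top-5-10 counts come from the recommendation above; swap the toy scorer for the real models in practice.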
## News:
- `BCEmbedding`技术博客( **Technical Blog** ): [为RAG而生-BCEmbedding技术报告](https://zhuanlan.zhihu.com/p/681370855)
- Related link for **EmbeddingModel**: [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)

## Third-party Examples:
- RAG applications: [QAnything](https://github.com/netease-youdao/qanything), [HuixiangDou](https://github.com/InternLM/HuixiangDou), [ChatPDF](https://github.com/shibing624/ChatPDF).
- Efficient inference frameworks: [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), [Xinference](https://github.com/xorbitsai/inference), [mindnlp (Huawei GPU, 华为GPU)](https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/bce).

![image/jpeg](assets/rag_eval_multiple_domains_summary.jpg)

![image/jpeg](assets/Wechat.jpg)

-----------------------------------------
<details open="open">
<summary>Click to Open Contents</summary>

- <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a>
- <a href="#-key-features" target="_Self">💡 Key Features</a>
- <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a>
- <a href="#-model-list" target="_Self">🍎 Model List</a>
- <a href="#-manual" target="_Self">📖 Manual</a>
  - <a href="#installation" target="_Self">Installation</a>
  - <a href="#quick-start" target="_Self">Quick Start (`transformers`, `sentence-transformers`)</a>
  - <a href="#integrations-for-rag-frameworks" target="_Self">Integrations for RAG Frameworks (`langchain`, `llama_index`)</a>
- <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a>
  - <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a>
  - <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a>
- <a href="#-leaderboard" target="_Self">📈 Leaderboard</a>
  - <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a>
  - <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a>
- <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a>
- <a href="#-wechat-group" target="_Self">🧲 WeChat Group</a>
- <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a>
- <a href="#-license" target="_Self">🔐 License</a>
- <a href="#-related-links" target="_Self">🔗 Related Links</a>

</details>
<br>

**B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question answering, while the `RerankerModel` excels at refining search results and ranking tasks.

`BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implementation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source implementation widely integrated in various Youdao products such as [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation).

Distinguished for its bilingual and crosslingual proficiency, `BCEmbedding` excels in bridging Chinese and English linguistic gaps, achieving
- **High performance on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**;
- **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**.

`BCEmbedding`是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含`EmbeddingModel`和`RerankerModel`两类基础模型。`EmbeddingModel`专门用于生成语义向量,在语义搜索和问答中起着关键作用,而`RerankerModel`擅长优化语义搜索结果和语义相关顺序精排。

`BCEmbedding`作为有道的检索增强生成式应用(RAG)的基石,特别是在[QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如[有道速读](https://read.youdao.com/#/home)和[有道翻译](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation)。

`BCEmbedding`以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现:
- **强大的双语和跨语种语义表征能力【<a href="#semantic-representation-evaluations-in-mteb">基于MTEB的语义表征评测指标</a>】。**
- **基于LlamaIndex的RAG评测,表现SOTA【<a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>】。**

## 🌐 Bilingual and Crosslingual Superiority

Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings.

`EmbeddingModel` supports ***Chinese (ch) and English (en)*** (support for more languages will come soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***.

现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。`BCEmbedding`充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。

`EmbeddingModel`支持***中文和英文***(之后会支持更多语种);`RerankerModel`支持***中文,英文,日文和韩文***。

## 💡 Key Features

- **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval tasks, with upcoming support for additional languages.

- **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. See <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>.

- **Efficient and Precise Retrieval**: `EmbeddingModel` uses a dual encoder for efficient first-stage retrieval, while `RerankerModel` uses a cross-encoder for enhanced precision and deeper semantic analysis in the second stage.

- **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields.

- **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying a query instruction for each task.

- **Meaningful Reranking Scores**: `RerankerModel` provides relevance scores to improve result quality and optimize large language model performance.

- **Proven in Production**: Successfully implemented and validated in Youdao's products.

- **双语和跨语种能力**:基于有道翻译引擎的强大能力,我们的`BCEmbedding`具备强大的中英双语和跨语种语义表征能力。

- **RAG适配**:面向RAG做了针对性优化,可以适配大多数相关任务,比如**翻译,摘要,问答**等。此外,针对**问题理解**(query understanding)也做了针对优化,详见 <a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>。

- **高效且精确的语义检索**:`EmbeddingModel`采用双编码器,可以在第一阶段实现高效的语义检索。`RerankerModel`采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。

- **更好的领域泛化性**:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。

- **用户友好**:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。

- **有意义的重排序分数**:`RerankerModel`可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。

- **产品化检验**:`BCEmbedding`已经被有道众多真实产品检验。

## 🚀 Latest Updates

- ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
- ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performance of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performance of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).

- ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布。
- ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
- ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布。

## 🍎 Model List

| Model Name | Model Type | Languages | Parameters | Weights |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) |
| bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) |

## 📖 Manual

### Installation

First, create a conda environment and activate it.

```bash
conda create --name bce python=3.10 -y
conda activate bce
```

Then install `BCEmbedding` (minimal installation):

```bash
pip install BCEmbedding==0.1.1
```

Or install from source:

```bash
git clone [email protected]:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e .
```

### Quick Start

#### 1. Based on `BCEmbedding`

Use `EmbeddingModel`; the `cls` [pooler](./BCEmbedding/models/embedding.py#L24) is the default.

```python
from BCEmbedding import EmbeddingModel

# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]

# init embedding model
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")

# extract embeddings
embeddings = model.encode(sentences)
```

Use `RerankerModel` to calculate relevance scores and rerank:

```python
from BCEmbedding import RerankerModel

# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]

# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]

# init reranker model
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")

# method 0: calculate scores of sentence pairs
scores = model.compute_score(sentence_pairs)

# method 1: rerank passages
rerank_results = model.rerank(query, passages)
```

NOTE:

- The [`RerankerModel.rerank`](./BCEmbedding/models/reranker.py#L137) method provides the advanced preprocessing that we use in production to construct `sentence_pairs` when the passages are very long.

#### 2. Based on `transformers`

For `EmbeddingModel`:

```python
from transformers import AutoModel, AutoTokenizer

# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]

# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')

device = 'cuda'  # if no GPU, set "cpu"
model.to(device)

# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}

# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0]  # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True)  # normalize
```

For `RerankerModel`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]

# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]

# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1')

device = 'cuda'  # if no GPU, set "cpu"
model.to(device)

# get inputs
inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}

# calculate scores
scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
```

#### 3. Based on `sentence_transformers`

For `EmbeddingModel`:

```python
from sentence_transformers import SentenceTransformer

# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]

# init embedding model
# Note: sentence-transformers has been updated, so first clean up your
# "`SENTENCE_TRANSFORMERS_HOME`/maidalun1020_bce-embedding-base_v1" or
# "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1"
# to download the new version.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")

# extract embeddings
embeddings = model.encode(sentences, normalize_embeddings=True)
```

For `RerankerModel`:

```python
from sentence_transformers import CrossEncoder

# construct sentence pairs from your query and passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
sentence_pairs = [[query, passage] for passage in passages]

# init reranker model
model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512)

# calculate scores of sentence pairs
scores = model.predict(sentence_pairs)
```

### Integrations for RAG Frameworks

#### 1. Used in `langchain`

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy

query = 'apples'
passages = [
    'I like apples',
    'I like oranges',
    'Apples and oranges are fruits'
]

# init embedding model
model_name = 'maidalun1020/bce-embedding-base_v1'
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'batch_size': 64, 'normalize_embeddings': True, 'show_progress_bar': False}

embed_model = HuggingFaceEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs
)

# example #1. extract embeddings
query_embedding = embed_model.embed_query(query)
passages_embeddings = embed_model.embed_documents(passages)

# example #2. langchain retriever example
faiss_vectorstore = FAISS.from_texts(passages, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)

retriever = faiss_vectorstore.as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.5, "k": 3})

related_passages = retriever.get_relevant_documents(query)
```

#### 2. Used in `llama_index`

```python
import os

from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI

query = 'apples'
passages = [
    'I like apples',
    'I like oranges',
    'Apples and oranges are fruits'
]

# init embedding model
model_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 64, 'device': 'cuda'}
embed_model = HuggingFaceEmbedding(**model_args)

# example #1. extract embeddings
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages)

# example #2. rag example
llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is llama?")
```

## ⚙️ Evaluation

### Evaluate Semantic Representation by MTEB

We provide evaluation tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB).

我们基于[MTEB](https://github.com/embeddings-benchmark/mteb)和[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB),提供`embedding`和`reranker`模型的语义表征评测工具。

#### 1. Embedding Models

Just run the following command to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`):

运行下面命令评测`your_embedding_model`(比如,`maidalun1020/bce-embedding-base_v1`)。评测任务将会在**双语和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:

```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls
```

The evaluation covers ***114 datasets*** across the **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** task categories.

评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的 ***114个数据集***。

***NOTE:***
- **All models are evaluated with their recommended pooling method (`pooler`)**.
  - `mean` pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large".
  - `cls` pooler: all other models.
- The "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`.

```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean

python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
```

***注意:***
- 所有模型的评测采用各自推荐的`pooler`。"jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large"和"gte-large"的`pooler`采用`mean`,其他模型的`pooler`采用`cls`。
- "jina-embeddings-v2-base-en"模型在载入时需要`trust_remote_code`。

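The difference between the two poolers can be illustrated without loading any model. This sketch uses plain lists as fake token hidden states: `cls` pooling takes the first ([CLS]) token's vector, while `mean` pooling averages all tokens. The real implementations operate on `last_hidden_state` tensors and mask out padding tokens, which is omitted here for simplicity.

```python
# fake last_hidden_state for one sequence: 4 tokens, hidden size 3
token_states = [
    [1.0, 0.0, 2.0],  # [CLS] token
    [0.0, 2.0, 0.0],
    [2.0, 2.0, 2.0],
    [1.0, 0.0, 0.0],
]

def cls_pool(states):
    # `cls` pooler: the hidden state of the first token only
    return states[0]

def mean_pool(states):
    # `mean` pooler: per-dimension average over all tokens
    n = len(states)
    return [sum(dim) / n for dim in zip(*states)]
```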
#### 2. Reranker Models

Run the following command to evaluate `your_reranker_model` (e.g. "maidalun1020/bce-reranker-base_v1") in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`):

运行下面命令评测`your_reranker_model`(比如,`maidalun1020/bce-reranker-base_v1`)。评测任务将会在 **双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:

```bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1
```

The evaluation covers ***12 datasets*** for the **"Reranking"** task.

评测包含 **"Reranking"** 任务的 ***12个数据集***。

#### 3. Metrics Visualization Tool

We provide a one-click script that summarizes the evaluation results of `embedding` and `reranker` models, as in [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).

我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。

```bash
python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
```

### Evaluate RAG by LlamaIndex

[LlamaIndex](https://github.com/run-llama/llama_index) is a well-known data framework for LLM-based applications, particularly in RAG. Recently, the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) evaluated popular embedding and reranker models in a RAG pipeline and attracted great attention. We follow its pipeline to evaluate our `BCEmbedding`.

[LlamaIndex](https://github.com/run-llama/llama_index)是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证`BCEmbedding`在RAG中的效果。

First, install LlamaIndex:
```bash
pip install llama-index==0.9.22
```

#### 1. Metrics Definition

- Hit Rate:

  Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. ***The larger, the better.***

- Mean Reciprocal Rank (MRR):

  For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. ***The larger, the better.***

- 命中率(Hit Rate)

  命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。***该指标越大越好。***

- 平均倒数排名(Mean Reciprocal Rank,MRR)

  对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。***该指标越大越好。***

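Both metrics are straightforward to compute once you know, for each query, the rank of the first relevant document in the retrieved list. A minimal sketch (the ranks below are made up purely for illustration; `None` marks a query whose relevant document was not retrieved at all):

```python
def hit_rate(ranks, k):
    # fraction of queries whose relevant document appears in the top-k
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

def mrr(ranks):
    # average reciprocal rank over all queries; a miss contributes 0
    return sum(1.0 / r if r is not None else 0.0 for r in ranks) / len(ranks)

# 1-based ranks of the first relevant document for 4 hypothetical queries
ranks = [1, 2, None, 4]
```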
#### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)

In order to compare our `BCEmbedding` with other embedding and reranker models fairly, we provide a one-click script to reproduce the results of the LlamaIndex Blog, including our `BCEmbedding`:

为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将`BCEmbedding`与其他embedding和reranker模型进行对比分析:

```bash
# There should be at least two GPUs available.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```

Then, summarize the evaluation results by:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
```

The results reproduced from the LlamaIndex Blog can be checked in the ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***:
- In the `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With the embedding model fixed, our `bce-reranker-base_v1` achieves the best performance.
- ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***

输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
- 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***

504
+ #### 3. Broad Domain Adaptability
505
+
506
+ The evaluation of [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) is **monolingual, small amount of data, and specific domain** (just including "llama2" paper). In order to evaluate the **broad domain adaptability, bilingual and crosslingual capability**, we follow the blog to build a multiple domains evaluation dataset (includding "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **by OpenAI `gpt-4-1106-preview` for high quality**.
507
+
508
+ 在上述的[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)的评测数据只用了“llama2”这一篇文章,该评测是 **单语种,小数据量,特定领域** 的。为了兼容更真实更广的用户使用场景,评测算法模型的 **领域泛化性,双语和跨语种能力**,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)。**为了保证构建数据的高质量,我们采用OpenAI的`gpt-4-1106-preview`。**
509
+
510
+ First, run following cmd to evaluate the most popular and powerful embedding and reranker models:
511
+
512
+ ```bash
513
+ # There should be two GPUs available at least.
514
+ CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
515
+ ```
516
+
517
+ Then, run the following script to summarize the evaluation results:
518
+ ```bash
519
+ python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results
520
+ ```
521
+
522
+ The summary of the multi-domain evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>.
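The summarize step aggregates per-query retrieval quality into metrics such as hit rate and MRR (mean reciprocal rank). As a rough illustration of what such a summary computes, here is a minimal pure-Python sketch with hypothetical data (not the actual `summarize_eval_results.py`):

```python
# Sketch of hit-rate / MRR aggregation over retrieval results.
# Each entry is the 1-based rank of the gold document for one query,
# or None if it was not retrieved at all. Data here is hypothetical.

def summarize(ranks, k=10):
    """Return (hit_rate, mrr) over a list of gold-document ranks."""
    n = len(ranks)
    hits = sum(1 for r in ranks if r is not None and r <= k)
    rr = sum(1.0 / r for r in ranks if r is not None and r <= k)
    return hits / n, rr / n

ranks = [1, 3, None, 2, 1]  # gold-doc ranks for 5 toy queries
hit_rate, mrr = summarize(ranks)
print(f"hit_rate={hit_rate:.2f} mrr={mrr:.2f}")
```

Higher hit rate means the gold passage was retrieved more often; higher MRR means it was ranked closer to the top.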
523
+
524
+ ## 📈 Leaderboard
525
+
526
+ ### Semantic Representation Evaluations in MTEB
527
+
528
+ #### 1. Embedding Models
529
+
530
+ | Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | ***AVG*** (119) |
531
+ |:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
532
+ | bge-base-en-v1.5 | 768 | `cls` | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
533
+ | bge-base-zh-v1.5 | 768 | `cls` | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
534
+ | bge-large-en-v1.5 | 1024 | `cls` | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
535
+ | bge-large-zh-v1.5 | 1024 | `cls` | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
536
+ | e5-large-v2 | 1024 | `mean` | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
537
+ | gte-large | 1024 | `mean` | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
538
+ | gte-large-zh | 1024 | `cls` | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
539
+ | jina-embeddings-v2-base-en | 768 | `mean` | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
540
+ | m3e-base | 768 | `mean` | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
541
+ | m3e-large | 1024 | `mean` | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
542
+ | multilingual-e5-base | 768 | `mean` | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
543
+ | multilingual-e5-large | 1024 | `mean` | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
544
+ | ***bce-embedding-base_v1*** | 768 | `cls` | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
545
+
546
+ ***NOTE:***
547
+ - Our ***bce-embedding-base_v1*** outperforms other open-source embedding models of comparable model size.
548
+ - ***114 datasets*** covering **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
549
+ - The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to the `Retrieval` task.
550
+ - For more evaluation details, please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
551
+
552
+ ***Key points:***
553
+ - Compared with other open-source embedding models of the same size, ***bce-embedding-base_v1*** performs the best, only slightly below the best large models.
554
+ - The evaluation covers ***114 datasets*** across the six task categories **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"**.
555
+ - The [crosslingual semantic representation evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to the `Retrieval` task.
556
+ - For more detailed results, see the [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
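As the table shows, these embedding models map text to fixed-dimensional vectors (768 dimensions for `bce-embedding-base_v1`), and retrieval then typically ranks passages by cosine similarity between the query and passage embeddings. A minimal sketch of that scoring step with toy vectors (stand-ins for real model outputs, not an actual model call):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 4-dim stand-ins for 768-dim embeddings.
query = [0.1, 0.9, 0.2, 0.0]
passages = {
    "p1": [0.1, 0.8, 0.3, 0.1],  # close in direction to the query
    "p2": [0.9, 0.1, 0.0, 0.2],  # mostly orthogonal to the query
}
ranked = sorted(passages, key=lambda p: cosine(query, passages[p]), reverse=True)
print(ranked)  # "p1" ranks first
```

With L2-normalized embeddings, the cosine reduces to a plain dot product, which is why many retrieval stacks normalize vectors once at indexing time.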
557
+
558
+ #### 2. Reranker Models
559
+
560
+ | Model | Reranking (12) | ***AVG*** (12) |
561
+ | :--------------------------------- | :-------------: | :--------------------: |
562
+ | bge-reranker-base | 59.04 | 59.04 |
563
+ | bge-reranker-large | 60.86 | 60.86 |
564
+ | ***bce-reranker-base_v1*** | **61.29** | ***61.29*** |
565
+
566
+ ***NOTE:***
567
+ - Our ***bce-reranker-base_v1*** outperforms other open-source reranker models.
568
+ - ***12 datasets*** of **"Reranking"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
569
+ - For more evaluation details, please check [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
570
+
571
+ ***Key points:***
572
+ - ***bce-reranker-base_v1*** outperforms other open-source reranker models.
573
+ - The evaluation covers ***12 datasets*** of the **"Reranking"** task.
574
+ - For more detailed results, see the [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
575
+
576
+ ### RAG Evaluations in LlamaIndex
577
+
578
+ #### 1. Multiple Domains Scenarios
579
+
580
+ ![image/jpeg](assets/rag_eval_multiple_domains_summary.jpg)
581
+
582
+ ***NOTE:***
583
+ - Evaluated in the **["en", "zh", "en-zh", "zh-en"] setting**.
584
+ - In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
585
+ - With the embedding model fixed, our `bce-reranker-base_v1` achieves the best performance.
586
+ - **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.
587
+
588
+ ***Key points:***
589
+ - The evaluation is conducted in the `["en", "zh", "en-zh", "zh-en"]` setting.
590
+ - In the `WithoutReranker` setting (**vertical comparison**), `bce-embedding-base_v1` outperforms the other embedding models, both open-source and proprietary.
591
+ - With the embedding model fixed, comparing different rerankers (**horizontal comparison**), `bce-reranker-base_v1` outperforms the other reranker models, both open-source and proprietary.
592
+ - ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
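The two-stage setup compared above (embedding-based retrieval over the whole corpus, optionally followed by a reranker over the candidates) can be sketched as below. The scoring functions are trivial word-overlap stand-ins, not the actual `bce-*` models; the point is the pipeline shape, where the cheap retriever prunes the corpus and the expensive reranker only scores the survivors:

```python
def embed_score(query, doc):
    # Stand-in for bi-encoder similarity: word-overlap ratio.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank_score(query, doc):
    # Stand-in for cross-encoder relevance: overlap penalized by doc length.
    return embed_score(query, doc) / (1 + 0.01 * len(doc.split()))

def retrieve_then_rerank(query, corpus, top_k=3):
    # Stage 1: cheap embedding-style retrieval over the whole corpus.
    candidates = sorted(corpus, key=lambda d: embed_score(query, d), reverse=True)[:top_k]
    # Stage 2: expensive reranking over the small candidate set only.
    return sorted(candidates, key=lambda d: rerank_score(query, d), reverse=True)

corpus = [
    "llama2 is a large language model",
    "physics of quantum systems",
    "finetuning a large language model",
    "economics of supply and demand",
]
print(retrieve_then_rerank("large language model finetuning", corpus, top_k=2))
```

The `WithoutReranker` rows in the figure correspond to stopping after stage 1; the other rows swap different rerankers into stage 2.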
593
+
594
+ ## 🛠 Youdao's BCEmbedding API
595
+
596
+ For users who prefer a hassle-free experience without downloading and configuring the model themselves, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate `BCEmbedding` into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html).
597
+
598
+ For users who prefer to call an API directly, Youdao provides a convenient `BCEmbedding` API. It is a streamlined and efficient way to integrate `BCEmbedding` into your projects, avoiding the complexity of manual setup and maintenance. For detailed API documentation, see [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html).
599
+
600
+ ## 🧲 WeChat Group
601
+
602
+ You are welcome to scan the QR code below and join the WeChat group.
603
+
604
+ Everyone is welcome to scan the code and join the official WeChat discussion group.
605
+
606
+ ![image/jpeg](assets/Wechat.jpg)
607
+
608
+ ## ✏️ Citation
609
+
610
+ If you use `BCEmbedding` in your research or project, please feel free to cite and star it:
611
+
612
+ If you use this work in your research or any project, please cite it as below and give it a little star~
613
+
614
+ ```
615
+ @misc{youdao_bcembedding_2023,
616
+ title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
617
+ author={NetEase Youdao, Inc.},
618
+ year={2023},
619
+ howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
620
+ }
621
+ ```
622
+
623
+ ## 🔐 License
624
+
625
+ `BCEmbedding` is licensed under the [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE).
626
+
627
+ ## 🔗 Related Links
628
+
629
+ [Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)
630
+
631
+ [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
632
+
633
+ [MTEB](https://github.com/embeddings-benchmark/mteb)
634
+
635
+ [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
636
+
637
+ [LlamaIndex](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
bce-reranker-base_v1-FP16.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac8e417a3dac9de92c1a391a529a10f9988c77e98f3af2ed192baa2a5d24c3c5
3
+ size 563951392
bce-reranker-base_v1-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b85825871bde2c0fe09b93136b7db2490db1960a380c147d5705029fad5bee14
3
+ size 198776448
bce-reranker-base_v1-Q3_K.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec26000941f191e671c9c599147e01b34ca5c26f71982ba42283d153d1ded2e2
3
+ size 208937088
bce-reranker-base_v1-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e28b52982d20bf52b423407ebc08c251fdcb54fa02f8c4a3c224b14e54cbc011
3
+ size 214508160
bce-reranker-base_v1-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68ef43861459e55bd84c7dc85a2f08cab63246fb97800727d0b1756d5946638f
3
+ size 219070080
bce-reranker-base_v1-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6310c00b00dc1add1a97c7484479495680b996cdc9240d9efeaff3783e07f13c
3
+ size 225198720
bce-reranker-base_v1-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2dc461f13523ea0bb167d17745b7747f8eb85dd3d61a0f31a8d6531d4194b298
3
+ size 227548800
bce-reranker-base_v1-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:377075a39d7cff2b672b826b782c09ea5fef5658bdc0e232f1a1da38a3de35f1
3
+ size 236557440
bce-reranker-base_v1-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff5848573cb9708d8d58a6214f942498d53e2757e00a28c1c2767df8f67bfa8a
3
+ size 303770752