hooman650 committed on
Commit
d8e90a9
1 Parent(s): 2fb50ad

Delete .ipynb_checkpoints
.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,57 +0,0 @@
- ---
- license: mit
- pipeline_tag: feature-extraction
- ---
-
- # bge-m3-onnx-o4
-
- These are the `bge-m3-onnx-o4` weights of the original [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3). Why is this model cool?
-
- - [x] Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- - [x] Multi-Linguality: It supports more than **100** working languages.
- - [x] Multi-Granularity: It can process inputs of different granularities, from short sentences to long documents of up to **8192** tokens.
-
- ## Usage
-
- ### Dense Retrieval
-
- ```
- # for CUDA
- pip install --upgrade-strategy eager optimum[onnxruntime-gpu]
- ```
-
- ```python
- from optimum.onnxruntime import ORTModelForFeatureExtraction
- from transformers import AutoTokenizer
- import torch
-
- model = ORTModelForFeatureExtraction.from_pretrained("hooman650/bge-m3-onnx-o4", provider="CUDAExecutionProvider")
- tokenizer = AutoTokenizer.from_pretrained("hooman650/bge-m3-onnx-o4")
-
- sentences = [
-     "English: The quick brown fox jumps over the lazy dog.",
-     "Spanish: El rápido zorro marrón salta sobre el perro perezoso.",
-     "French: Le renard brun rapide saute par-dessus le chien paresseux.",
-     "German: Der schnelle braune Fuchs springt über den faulen Hund.",
-     "Italian: La volpe marrone veloce salta sopra il cane pigro.",
-     "Japanese: 速い茶色の狐が怠惰な犬を飛び越える。",
-     "Chinese (Simplified): 快速的棕色狐狸跳过懒狗。",
-     "Russian: Быстрая коричневая лиса прыгает через ленивую собаку.",
-     "Arabic: الثعلب البني السريع يقفز فوق الكلب الكسول.",
-     "Hindi: तेज़ भूरी लोमड़ी आलसी कुत्ते के ऊपर कूद जाती है।"
- ]
-
- encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt").to("cuda")
-
- # get the token embeddings from the ONNX model
- out = model(**encoded_input, return_dict=True).last_hidden_state
-
- # the dense embedding is the normalized [CLS] token vector
- dense_vecs = torch.nn.functional.normalize(out[:, 0], dim=-1)
- ```
-
- ### Multi-Vector (ColBERT)
-
- `coming soon...`
-
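Once `dense_vecs` is computed as in the deleted README above, ranking sentences against each other is a matrix product of the normalized vectors (cosine similarity). A minimal sketch, using random vectors as a stand-in for model output so no GPU or model download is required:

```python
import torch

# stand-in for normalized model output: 4 embeddings of dimension 8
torch.manual_seed(0)
dense_vecs = torch.nn.functional.normalize(torch.randn(4, 8), dim=-1)

# cosine similarity between every pair of embeddings;
# rows are queries, columns are candidates
scores = dense_vecs @ dense_vecs.T

# every vector is maximally similar to itself, so the diagonal is 1.0
print(scores.diagonal())
```

The same `scores = q_vecs @ p_vecs.T` pattern works for query-vs-passage retrieval when the two sides are encoded separately.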