---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: name
    dtype: string
  - name: embedding
    sequence: float32
  splits:
  - name: train
    num_bytes: 717888
    num_examples: 202
  download_size: 1005715
  dataset_size: 717888
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- sentence-similarity
language:
- es
---
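# Dataset
Each example has three fields: `context` (the text), `name`, and a precomputed `embedding` (a sequence of float32 values). A minimal sketch for loading the dataset and inspecting one row, assuming only the schema declared in the header above:
```python
from datasets import load_dataset

# Single "train" split with 202 examples
raw_data = load_dataset('Manyah/incrustaciones')

# Each row has a text ("context"), a "name", and a precomputed embedding
example = raw_data['train'][0]
print(example['name'])
print(example['context'][:200])    # preview of the text
print(len(example['embedding']))   # dimensionality of the stored vector
```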
# Model
- "Alibaba-NLP/gte-multilingual-base"

You can find all the information about the model <a href="https://huggingface.co/Alibaba-NLP/gte-multilingual-base">here</a>.
# Search
The snippet below embeds a query with the same model and prints the `context` whose stored embedding is closest to it by cosine similarity.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
from datasets import load_dataset
import numpy as np

# Load the same model that was used for the stored embeddings
model_name = "Alibaba-NLP/gte-multilingual-base"
model = SentenceTransformer(model_name, trust_remote_code=True)

# Dataset with a precomputed embedding for each context
raw_data = load_dataset('Manyah/incrustaciones')

# Encode the query (fill in your question here)
question = ""
question_embedding = model.encode(question)

# Cosine similarity between the query and every stored embedding
embeddings = np.array(raw_data['train']['embedding'], dtype=np.float32)
sim = cos_sim(embeddings, question_embedding).numpy().flatten()

# Print the context of the most similar example
index = int(np.argmax(sim))
print(raw_data['train'][index]['context'])
```