albert-small-kor-sbert-v1.1
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It was built by converting the albert-small-kor-v1 model into a SentenceBERT model.
Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
pip install -U sentence-transformers
Then you can use the model like this:
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bongsoo/albert-small-kor-sbert-v1.1')
embeddings = model.encode(sentences)
print(embeddings)
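Since the model targets tasks like semantic search, here is a minimal semantic-search sketch using the sentence-transformers util helpers. The corpus and query strings are illustrative only, not from the original card:
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bongsoo/albert-small-kor-sbert-v1.1')

# Illustrative corpus and query (not from the original card)
corpus = ["The weather is nice today", "Stocks rallied this morning", "It is raining outside"]
query = "How is the weather?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit['corpus_id']], round(hit['score'], 4))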
Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    # Use the embedding of the first ([CLS]) token as the sentence embedding
    return model_output[0][:, 0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/albert-small-kor-sbert-v1.1')
model = AutoModel.from_pretrained('bongsoo/albert-small-kor-sbert-v1.1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
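The CLS-pooled embeddings are not normalized; for similarity tasks they are typically compared with cosine similarity. Continuing the snippet above:
import torch.nn.functional as F

# Cosine similarity between the two sentence embeddings computed above
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.4f}")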
Evaluation Results
- Performance was measured on the following Korean (kor) and English (en) evaluation corpora:
Korean: korsts (1,379 sentence pairs) and klue-sts (519 sentence pairs)
English: stsb_multi_mt (1,376 sentence pairs) and glue:stsb (1,500 sentence pairs) - the performance metric is cosine-similarity Spearman correlation (see the sketch after the table below)
- See here for the evaluation code
| Model | korsts | klue-sts | glue(stsb) | stsb_multi_mt(en) |
|---|---|---|---|---|
| distiluse-base-multilingual-cased-v2 | 0.7475 | 0.7855 | 0.8193 | 0.8075 |
| paraphrase-multilingual-mpnet-base-v2 | 0.8201 | 0.7993 | 0.8907 | 0.8682 |
| bongsoo/albert-small-kor-sbert-v1 | 0.8305 | 0.8588 | 0.8419 | 0.7965 |
| bongsoo/klue-sbert-v1.0 | 0.8529 | 0.8952 | 0.8813 | 0.8469 |
| bongsoo/kpf-sbert-v1.1 | 0.8750 | 0.8900 | 0.8863 | 0.8554 |
| bongsoo/albert-small-kor-sbert-v1.1 | 0.8526 | 0.8833 | 0.8484 | 0.8286 |
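The cosine-Spearman metric above is the Spearman rank correlation between the model's cosine similarities and the gold scores. A minimal sketch of the computation; the sentence pairs and gold scores here are made up for illustration, and the linked evaluation code is authoritative:
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bongsoo/albert-small-kor-sbert-v1.1')

# Hypothetical STS pairs with gold similarity scores on a 0-5 scale
sent1 = ["A man is playing a guitar.", "A woman is cooking.", "The dog runs in the park."]
sent2 = ["Someone plays guitar.", "A man is driving a car.", "A dog is running outside."]
gold = [4.2, 0.5, 3.8]

emb1 = model.encode(sent1, convert_to_tensor=True)
emb2 = model.encode(sent2, convert_to_tensor=True)

# One cosine score per pair, then Spearman rank correlation against gold
cos_scores = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()
print(spearmanr(cos_scores, gold).correlation)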
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
Training
- The albert-small-kor-v1 model was trained with STS (10 epochs) followed by distillation (10 epochs) only (additional NLI-STS training degraded the scores).
- kpf-sbert-v1.1 was used as the teacher model.
The model was trained with the parameters:
Common
- do_lower_case=1, correct_bios=0, pooling_mode=cls
1. STS
- Corpus: korsts (5,749) + kluestsV1.1 (11,668) + stsb_multi_mt (5,749) + mteb/sickr-sts (9,927) + glue stsb (5,749) (total: 38,842 pairs)
- Param: lr: 1e-4, eps: 1e-6, warm_step=10%, epochs: 10, train_batch: 32, eval_batch: 64, max_token_len: 72
- See here for the training code (a sketch follows below)
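The linked training code is authoritative; what follows is a hypothetical reconstruction of the STS stage with sentence-transformers. CosineSimilarityLoss and the example pairs are assumptions, not confirmed by the card:
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# Build the student from the base checkpoint with CLS pooling (per the common settings above)
word_embedding = models.Transformer('bongsoo/albert-small-kor-v1', max_seq_length=72)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode='cls')
model = SentenceTransformer(modules=[word_embedding, pooling])

# Hypothetical pairs; training actually used korsts, kluestsV1.1, stsb_multi_mt,
# mteb/sickr-sts and glue stsb, with gold scores normalized to [0, 1]
train_examples = [
    InputExample(texts=["A man is playing a guitar.", "Someone plays guitar."], label=0.9),
    InputExample(texts=["A woman is cooking.", "A man is driving a car."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,  # the card specifies 10% of total steps
    optimizer_params={'lr': 1e-4, 'eps': 1e-6},
)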
2. Distillation
- Teacher model: kpf-sbert-v1.1 (max_token_len: 128)
- Corpus: news_talk_ko_en_train.tsv (Korean-English dialogue/news parallel corpus: 1.38M pairs)
- Param: lr: 5e-5, epochs: 10, train_batch: 32, eval/test_batch: 64, max_token_len: 128 (matched to the teacher model's 128)
- See here for the training code (a sketch follows below)
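Again, the linked training code is authoritative; this is a hypothetical reconstruction of the distillation stage using the standard parallel-corpus distillation recipe from sentence-transformers:
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

teacher = SentenceTransformer('bongsoo/kpf-sbert-v1.1')
# The student is the STS-tuned model from step 1 (the released checkpoint is
# loaded here purely for illustration)
student = SentenceTransformer('bongsoo/albert-small-kor-sbert-v1.1')
student.max_seq_length = 128  # match the teacher's max_token_len

# The ko-en parallel corpus named above; the student learns to reproduce the
# teacher's embeddings for both sides of each sentence pair
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data('news_talk_ko_en_train.tsv')
train_dataloader = DataLoader(train_data, shuffle=True, batch_size=32)
train_loss = losses.MSELoss(model=student)

student.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    optimizer_params={'lr': 5e-5},
)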
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': True}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
Citing & Authors
bongsoo