# klue-roberta-base-kornli

This model was trained on a Korean NLI dataset.
Provide a premise sentence and a hypothesis sentence as input.
English input also works, but accuracy is not guaranteed.
If the input is longer than 1,200 characters, it may be truncated in the middle and the result may be unreliable.

KLUE-RoBERTa-base-KorNLI DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)

KLUE-RoBERTa-base-KorNLI API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/klue-roberta-base_kornli)

## Overview

Language model: klue/roberta-base

Language: Korean

Training data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)

Eval data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)

Code: See [Ainize Workspace](https://a966119d3186.ngrok.io/notebooks/DJ/KLUE-NLI/klue-roberta-base-kornli.ipynb)

## Usage

### In Transformers

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli")

classifier = pipeline(
    "text-classification",
    model="ehdwns1516/klue-roberta-base-kornli",
    return_all_scores=True,
)

premise = "your premise"
hypothesis = "your hypothesis"

# Join the premise and hypothesis with the separator token and
# get the score for every NLI label.
result = classifier(premise + tokenizer.sep_token + hypothesis)[0]
```
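
Below is a minimal end-to-end sketch of the same pipeline with a concrete sentence pair and a simple way to read off the most probable label. The Korean sentences are only illustrative, and the exact label names printed depend on the model's configuration.

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli")

classifier = pipeline(
    "text-classification",
    model="ehdwns1516/klue-roberta-base-kornli",
    return_all_scores=True,
)

# Illustrative sentence pair (any Korean premise/hypothesis pair works the same way).
premise = "나는 오늘 아침에 커피를 마셨다."        # "I drank coffee this morning."
hypothesis = "나는 오늘 아무것도 마시지 않았다."   # "I did not drink anything today."

# The premise and hypothesis are joined with the tokenizer's separator token;
# the pipeline returns a list of {label, score} dicts for the pair.
scores = classifier(premise + tokenizer.sep_token + hypothesis)[0]

# Pick the label with the highest score.
best = max(scores, key=lambda s: s["score"])
print(best["label"], round(best["score"], 4))
```

Keep in mind the note above: inputs longer than about 1,200 characters may be truncated, so very long premises should be shortened or split before classification.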