Update README.md

The model is fine-tuned for sequence classification tasks and provides a straightforward interface to make predictions.

# Fine-Tuning Information

This model is fine-tuned from the mDeBERTa-v3-base-mnli-xnli model, a multilingual version of DeBERTa (Decoding-enhanced BERT with disentangled attention). The fine-tuning data is primarily Traditional Chinese, which makes the model well suited to processing text in that language. However, it has also been tested with English inputs and performs well on them.

Base Model: [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli)

Fine-Tuning Data: Traditional Chinese text data

# Quick Start

To use the Vision_or_not model, you will need to install the following Python libraries:

```
pip install transformers torch
```

To use the model for making predictions, load the model and tokenizer, then pass your text to the prediction function. Below is example code for usage:

```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
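
# NOTE: the following section is a hedged reconstruction sketch, not the
# author's confirmed listing: the model id "Vision_or_not", the predict()
# helper, and the wording of the class-1 label are assumptions for
# illustration.

# Load the fine-tuned model and its tokenizer.
model_name = "Vision_or_not"  # hypothetical repo id; replace with the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def predict(text):
    # Tokenize the input and run a forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Map the predicted class id to a label: class 0 means the sentence does
    # not require visual processing; the class-1 wording is assumed.
    label_id = torch.argmax(logits, dim=-1).item()
    return "No need for visual processing" if label_id == 0 else "Need for visual processing"

if __name__ == "__main__":
    # Sample inputs; the original list of texts is assumed here.
    texts = ["Hello, how are you?"]
    for text in texts:
        prediction = predict(text)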
        print(f"Text: {text}")
        print(f"Prediction: {prediction}\n")
```

# Example Output

For the input text "Hello, how are you?", the model might output:

```
Text: Hello, how are you?
Prediction: No need for visual processing
```
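
Since the fine-tuning data is primarily Traditional Chinese, Traditional Chinese inputs work the same way. A hypothetical call using the predict() helper sketched above (its output depends on the actual model):

```
# Hypothetical usage with a Traditional Chinese input; predict() is the helper
# sketched in the Quick Start example above.
print(predict("請看這張圖片"))  # input meaning: "Please look at this picture"
```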
|