Update README.md
base_model:
- sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: transformers
---
# Model Description

This model is a fine-tuned version of sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 for sentence similarity tasks. It was trained on the mteb/stsbenchmark-sts dataset to evaluate the similarity between sentence pairs.

Model Type: Sequence Classification (Regression)
Pre-trained Model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
Weight Decay: 0.01

# Evaluation

The model was evaluated using Pearson correlation on the validation set of the mteb/stsbenchmark-sts dataset. Results indicate how well the model predicts similarity scores between sentence pairs.
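As a concrete illustration of the metric (a sketch, not the evaluation script used for this model), Pearson correlation between gold and predicted similarity scores can be computed directly; the score values below are made up for illustration:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative gold vs. predicted similarity scores (not actual results)
gold = [0.0, 1.2, 2.5, 3.8, 5.0]
predicted = [0.3, 1.0, 2.7, 3.5, 4.8]
print(round(pearson(gold, predicted), 4))
```

A value close to 1.0 indicates the model's predicted scores track the human-annotated similarity scores well.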
# Usage

To use this model for sentence similarity, follow these steps:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned model
```
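The snippet above is truncated in this view; a fuller sketch, assuming a placeholder model id and a single-logit regression head (both assumptions, not confirmed by this card), might look like:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "your-username/your-model" is a placeholder for the actual model id
model_name = "your-username/your-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Encode a sentence pair; a regression head returns a single similarity logit
inputs = tokenizer(
    "A man is playing guitar.",
    "A person plays an instrument.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```

Passing both sentences to the tokenizer in one call produces the paired input (with the appropriate separator tokens) that a cross-encoder-style regression model expects.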
If using the model to generate sentence embeddings, you can use the following:
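The embedding snippet itself is elided from this view; a common sketch (an assumption, not this card's exact code) runs the base encoder and mean-pools token embeddings:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Base encoder id shown for illustration; substitute the fine-tuned model id
model_name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = ["This is an example sentence.", "Each sentence is converted."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state

# Mean pooling over tokens, weighted by the attention mask
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # one embedding vector per input sentence
```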
Domain Specificity: The model is fine-tuned on the mteb/stsbenchmark-sts dataset and may perform differently on other types of text or datasets.
Biases: As with any model trained on human language data, it may inherit and reflect biases present in the training data.
# Future Work

Potential improvements include fine-tuning on additional datasets, experimenting with different architectures or hyperparameters, and incorporating additional training techniques to improve performance and robustness.

# Citation
If you use this model in your research, please cite it as follows:

}
# License

This model is licensed under the MIT License. See the LICENSE file for more information.