---
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
language:
  - es
datasets:
  - hackathon-pln-es/ESnli
widget:
  - text: >-
      A ver si nos tenemos que poner todos en huelga hasta cobrar lo que
      queramos.
  - text: >-
      La huelga es el método de lucha más eficaz para conseguir mejoras en el
      salario.
  - text: Tendremos que optar por hacer una huelga para cobrar lo que queremos.
  - text: Queda descartada la huelga aunque no cobremos lo que queramos.
---

# bertin-roberta-base-finetuning-esnli

This is a sentence-transformers model trained on a collection of NLI datasets for Spanish. It maps sentences and paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search.

It is based on the siamese-network approach described in this paper.

## Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Este es un ejemplo", "Cada oración es transformada"]

# Load the model and encode the sentences into 512-dimensional embeddings
model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
embeddings = model.encode(sentences)
print(embeddings)
```
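For semantic search or pair scoring, the embeddings can be compared with cosine similarity. A minimal sketch using `sentence_transformers.util.cos_sim`; the sentences are the widget examples from the metadata above:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')

# Query and candidate sentences taken from the widget examples above
query = "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
candidates = [
    "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario.",
    "Queda descartada la huelga aunque no cobremos lo que queramos.",
]

# Encode and score; a higher cosine similarity means closer meaning
query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)
print(util.cos_sim(query_emb, cand_embs))
```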

## Evaluation Results

Our model was evaluated on the task of Semantic Textual Similarity using the SemEval-2015 Task for Spanish. We measure the Pearson and Spearman correlations of cosine, Euclidean, Manhattan, and dot-product similarities against the gold scores, comparing against a BETO baseline:

| Metric             | BETO STS | BERTIN STS (this model) | Relative improvement |
|--------------------|----------|-------------------------|----------------------|
| cosine_pearson     | 0.609803 | 0.670862                | +10.01%              |
| cosine_spearman    | 0.528776 | 0.598593                | +13.20%              |
| euclidean_pearson  | 0.590613 | 0.675257                | +14.33%              |
| euclidean_spearman | 0.526529 | 0.604656                | +14.84%              |
| manhattan_pearson  | 0.589108 | 0.676706                | +14.87%              |
| manhattan_spearman | 0.525910 | 0.606461                | +15.32%              |
| dot_pearson        | 0.544078 | 0.586429                | +7.78%               |
| dot_spearman       | 0.460427 | 0.495614                | +7.64%               |
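These metrics are the ones reported by sentence-transformers' `EmbeddingSimilarityEvaluator`, which computes Pearson and Spearman correlations for cosine, Euclidean, Manhattan, and dot-product similarities. A sketch of how such an evaluation can be run; the sentence pairs and gold scores below are placeholders, not the actual SemEval-2015 data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')

# Placeholder STS-style data: sentence pairs with gold similarity in [0, 1]
sentences1 = ["El gato duerme en el sofá.", "Los precios subieron este año."]
sentences2 = ["Un gato está durmiendo en el sillón.", "Hubo una tormenta ayer."]
gold_scores = [0.9, 0.1]

# Running the evaluator yields the correlation metrics shown in the table above
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores)
print(evaluator(model))
```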

## Training

The model was trained with the following parameters:

### Dataset

We used a collection of Natural Language Inference datasets as training data:

- ESXNLI, only the Spanish portion
- SNLI, automatically translated
- MultiNLI, automatically translated

The whole dataset used is available here.
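Assuming the `hackathon-pln-es/ESnli` ID from the metadata above refers to this combined collection, it can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Hub ID taken from the model-card metadata; assumed to be the combined NLI collection
nli = load_dataset("hackathon-pln-es/ESnli")
print(nli)
```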

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 1127 with parameters:

```
{'batch_size': 64}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the `fit()` method:

```json
{
    "epochs": 20,
    "evaluation_steps": 0,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1127,
    "weight_decay": 0.01
}
```
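Putting the pieces above together, a hedged sketch of this training setup as sentence-transformers expresses it; the `InputExample` pairs are illustrative, and the published model ID is reloaded only to keep the sketch self-contained (training actually started from a base BERTIN RoBERTa checkpoint):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')

# Illustrative (premise, entailed hypothesis) pairs; with
# MultipleNegativesRankingLoss the other pairs in a batch act as negatives.
train_examples = [
    InputExample(texts=["Un hombre toca la guitarra.", "Un hombre hace música."]),
    InputExample(texts=["Dos niños juegan en el parque.", "Hay niños al aire libre."]),
]

# NoDuplicatesDataLoader keeps duplicate sentences out of a batch, which would
# otherwise create false negatives for this loss. Training used batch_size=64;
# it is reduced here only so the two toy examples can form a batch.
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# fit() arguments mirror the parameters listed above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    scheduler='WarmupLinear',
    warmup_steps=1127,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```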

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
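For reference, a sketch of how this three-module stack can be assembled with sentence-transformers' `models` API. The `bertin-project/bertin-roberta-base-spanish` checkpoint is an assumption; the card itself only names `RobertaModel`:

```python
import torch
from sentence_transformers import SentenceTransformer, models

# Assumed underlying checkpoint (the card only says "RobertaModel")
word_embedding = models.Transformer('bertin-project/bertin-roberta-base-spanish',
                                    max_seq_length=514)

# Mean pooling over token embeddings, matching the Pooling config above
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(),
                         pooling_mode_mean_tokens=True)

# Project the 768-dim pooled vector down to the 512-dim output space
dense = models.Dense(in_features=768, out_features=512, bias=True,
                     activation_function=torch.nn.Tanh())

model = SentenceTransformer(modules=[word_embedding, pooling, dense])
```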

## Authors

Anibal Pérez, Emilio Tomás Ariza, Lautaro Gesuelli, and Mauricio Mazuecos.