all-MiniLM-L6-v2 trained on MEDI-MTEB triplets

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2 on triplets drawn from the MEDI-MTEB collection: a mix of retrieval, QA, NLI, summarization, and Natural Instructions task datasets, plus MTEB classification triplets (the complete list of training datasets is given under Model Details below). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Datasets:
    • NQ
    • pubmed
    • specter_train_triples
    • S2ORC_citations_abstracts
    • fever
    • gooaq_pairs
    • codesearchnet
    • wikihow
    • WikiAnswers
    • eli5_question_answer
    • amazon-qa
    • medmcqa
    • zeroshot
    • TriviaQA_pairs
    • PAQ_pairs
    • stackexchange_duplicate_questions_title-body_title-body
    • trex
    • flickr30k_captions
    • hotpotqa
    • task671_ambigqa_text_generation
    • task061_ropes_answer_generation
    • task285_imdb_answer_generation
    • task905_hate_speech_offensive_classification
    • task566_circa_classification
    • task184_snli_entailment_to_neutral_text_modification
    • task280_stereoset_classification_stereotype_type
    • task1599_smcalflow_classification
    • task1384_deal_or_no_dialog_classification
    • task591_sciq_answer_generation
    • task823_peixian-rtgender_sentiment_analysis
    • task023_cosmosqa_question_generation
    • task900_freebase_qa_category_classification
    • task924_event2mind_word_generation
    • task152_tomqa_find_location_easy_noise
    • task1368_healthfact_sentence_generation
    • task1661_super_glue_classification
    • task1187_politifact_classification
    • task1728_web_nlg_data_to_text
    • task112_asset_simple_sentence_identification
    • task1340_msr_text_compression_compression
    • task072_abductivenli_answer_generation
    • task1504_hatexplain_answer_generation
    • task684_online_privacy_policy_text_information_type_generation
    • task1290_xsum_summarization
    • task075_squad1.1_answer_generation
    • task1587_scifact_classification
    • task384_socialiqa_question_classification
    • task1555_scitail_answer_generation
    • task1532_daily_dialog_emotion_classification
    • task239_tweetqa_answer_generation
    • task596_mocha_question_generation
    • task1411_dart_subject_identification
    • task1359_numer_sense_answer_generation
    • task329_gap_classification
    • task220_rocstories_title_classification
    • task316_crows-pairs_classification_stereotype
    • task495_semeval_headline_classification
    • task1168_brown_coarse_pos_tagging
    • task348_squad2.0_unanswerable_question_generation
    • task049_multirc_questions_needed_to_answer
    • task1534_daily_dialog_question_classification
    • task322_jigsaw_classification_threat
    • task295_semeval_2020_task4_commonsense_reasoning
    • task186_snli_contradiction_to_entailment_text_modification
    • task034_winogrande_question_modification_object
    • task160_replace_letter_in_a_sentence
    • task469_mrqa_answer_generation
    • task105_story_cloze-rocstories_sentence_generation
    • task649_race_blank_question_generation
    • task1536_daily_dialog_happiness_classification
    • task683_online_privacy_policy_text_purpose_answer_generation
    • task024_cosmosqa_answer_generation
    • task584_udeps_eng_fine_pos_tagging
    • task066_timetravel_binary_consistency_classification
    • task413_mickey_en_sentence_perturbation_generation
    • task182_duorc_question_generation
    • task028_drop_answer_generation
    • task1601_webquestions_answer_generation
    • task1295_adversarial_qa_question_answering
    • task201_mnli_neutral_classification
    • task038_qasc_combined_fact
    • task293_storycommonsense_emotion_text_generation
    • task572_recipe_nlg_text_generation
    • task517_emo_classify_emotion_of_dialogue
    • task382_hybridqa_answer_generation
    • task176_break_decompose_questions
    • task1291_multi_news_summarization
    • task155_count_nouns_verbs
    • task031_winogrande_question_generation_object
    • task279_stereoset_classification_stereotype
    • task1336_peixian_equity_evaluation_corpus_gender_classifier
    • task508_scruples_dilemmas_more_ethical_isidentifiable
    • task518_emo_different_dialogue_emotions
    • task077_splash_explanation_to_sql
    • task923_event2mind_classifier
    • task470_mrqa_question_generation
    • task638_multi_woz_classification
    • task1412_web_questions_question_answering
    • task847_pubmedqa_question_generation
    • task678_ollie_actual_relationship_answer_generation
    • task290_tellmewhy_question_answerability
    • task575_air_dialogue_classification
    • task189_snli_neutral_to_contradiction_text_modification
    • task026_drop_question_generation
    • task162_count_words_starting_with_letter
    • task079_conala_concat_strings
    • task610_conllpp_ner
    • task046_miscellaneous_question_typing
    • task197_mnli_domain_answer_generation
    • task1325_qa_zre_question_generation_on_subject_relation
    • task430_senteval_subject_count
    • task672_nummersense
    • task402_grailqa_paraphrase_generation
    • task904_hate_speech_offensive_classification
    • task192_hotpotqa_sentence_generation
    • task069_abductivenli_classification
    • task574_air_dialogue_sentence_generation
    • task187_snli_entailment_to_contradiction_text_modification
    • task749_glucose_reverse_cause_emotion_detection
    • task1552_scitail_question_generation
    • task750_aqua_multiple_choice_answering
    • task327_jigsaw_classification_toxic
    • task1502_hatexplain_classification
    • task328_jigsaw_classification_insult
    • task304_numeric_fused_head_resolution
    • task1293_kilt_tasks_hotpotqa_question_answering
    • task216_rocstories_correct_answer_generation
    • task1326_qa_zre_question_generation_from_answer
    • task1338_peixian_equity_evaluation_corpus_sentiment_classifier
    • task1729_personachat_generate_next
    • task1202_atomic_classification_xneed
    • task400_paws_paraphrase_classification
    • task502_scruples_anecdotes_whoiswrong_verification
    • task088_identify_typo_verification
    • task221_rocstories_two_choice_classification
    • task200_mnli_entailment_classification
    • task074_squad1.1_question_generation
    • task581_socialiqa_question_generation
    • task1186_nne_hrngo_classification
    • task898_freebase_qa_answer_generation
    • task1408_dart_similarity_classification
    • task168_strategyqa_question_decomposition
    • task1357_xlsum_summary_generation
    • task390_torque_text_span_selection
    • task165_mcscript_question_answering_commonsense
    • task1533_daily_dialog_formal_classification
    • task002_quoref_answer_generation
    • task1297_qasc_question_answering
    • task305_jeopardy_answer_generation_normal
    • task029_winogrande_full_object
    • task1327_qa_zre_answer_generation_from_question
    • task326_jigsaw_classification_obscene
    • task1542_every_ith_element_from_starting
    • task570_recipe_nlg_ner_generation
    • task1409_dart_text_generation
    • task401_numeric_fused_head_reference
    • task846_pubmedqa_classification
    • task1712_poki_classification
    • task344_hybridqa_answer_generation
    • task875_emotion_classification
    • task1214_atomic_classification_xwant
    • task106_scruples_ethical_judgment
    • task238_iirc_answer_from_passage_answer_generation
    • task1391_winogrande_easy_answer_generation
    • task195_sentiment140_classification
    • task163_count_words_ending_with_letter
    • task579_socialiqa_classification
    • task569_recipe_nlg_text_generation
    • task1602_webquestion_question_genreation
    • task747_glucose_cause_emotion_detection
    • task219_rocstories_title_answer_generation
    • task178_quartz_question_answering
    • task103_facts2story_long_text_generation
    • task301_record_question_generation
    • task1369_healthfact_sentence_generation
    • task515_senteval_odd_word_out
    • task496_semeval_answer_generation
    • task1658_billsum_summarization
    • task1204_atomic_classification_hinderedby
    • task1392_superglue_multirc_answer_verification
    • task306_jeopardy_answer_generation_double
    • task1286_openbookqa_question_answering
    • task159_check_frequency_of_words_in_sentence_pair
    • task151_tomqa_find_location_easy_clean
    • task323_jigsaw_classification_sexually_explicit
    • task037_qasc_generate_related_fact
    • task027_drop_answer_type_generation
    • task1596_event2mind_text_generation_2
    • task141_odd-man-out_classification_category
    • task194_duorc_answer_generation
    • task679_hope_edi_english_text_classification
    • task246_dream_question_generation
    • task1195_disflqa_disfluent_to_fluent_conversion
    • task065_timetravel_consistent_sentence_classification
    • task351_winomt_classification_gender_identifiability_anti
    • task580_socialiqa_answer_generation
    • task583_udeps_eng_coarse_pos_tagging
    • task202_mnli_contradiction_classification
    • task222_rocstories_two_chioce_slotting_classification
    • task498_scruples_anecdotes_whoiswrong_classification
    • task067_abductivenli_answer_generation
    • task616_cola_classification
    • task286_olid_offense_judgment
    • task188_snli_neutral_to_entailment_text_modification
    • task223_quartz_explanation_generation
    • task820_protoqa_answer_generation
    • task196_sentiment140_answer_generation
    • task1678_mathqa_answer_selection
    • task349_squad2.0_answerable_unanswerable_question_classification
    • task154_tomqa_find_location_hard_noise
    • task333_hateeval_classification_hate_en
    • task235_iirc_question_from_subtext_answer_generation
    • task1554_scitail_classification
    • task210_logic2text_structured_text_generation
    • task035_winogrande_question_modification_person
    • task230_iirc_passage_classification
    • task1356_xlsum_title_generation
    • task1726_mathqa_correct_answer_generation
    • task302_record_classification
    • task380_boolq_yes_no_question
    • task212_logic2text_classification
    • task748_glucose_reverse_cause_event_detection
    • task834_mathdataset_classification
    • task350_winomt_classification_gender_identifiability_pro
    • task191_hotpotqa_question_generation
    • task236_iirc_question_from_passage_answer_generation
    • task217_rocstories_ordering_answer_generation
    • task568_circa_question_generation
    • task614_glucose_cause_event_detection
    • task361_spolin_yesand_prompt_response_classification
    • task421_persent_sentence_sentiment_classification
    • task203_mnli_sentence_generation
    • task420_persent_document_sentiment_classification
    • task153_tomqa_find_location_hard_clean
    • task346_hybridqa_classification
    • task1211_atomic_classification_hassubevent
    • task360_spolin_yesand_response_generation
    • task510_reddit_tifu_title_summarization
    • task511_reddit_tifu_long_text_summarization
    • task345_hybridqa_answer_generation
    • task270_csrg_counterfactual_context_generation
    • task307_jeopardy_answer_generation_final
    • task001_quoref_question_generation
    • task089_swap_words_verification
    • task1196_atomic_classification_oeffect
    • task080_piqa_answer_generation
    • task1598_nyc_long_text_generation
    • task240_tweetqa_question_generation
    • task615_moviesqa_answer_generation
    • task1347_glue_sts-b_similarity_classification
    • task114_is_the_given_word_longest
    • task292_storycommonsense_character_text_generation
    • task115_help_advice_classification
    • task431_senteval_object_count
    • task1360_numer_sense_multiple_choice_qa_generation
    • task177_para-nmt_paraphrasing
    • task132_dais_text_modification
    • task269_csrg_counterfactual_story_generation
    • task233_iirc_link_exists_classification
    • task161_count_words_containing_letter
    • task1205_atomic_classification_isafter
    • task571_recipe_nlg_ner_generation
    • task1292_yelp_review_full_text_categorization
    • task428_senteval_inversion
    • task311_race_question_generation
    • task429_senteval_tense
    • task403_creak_commonsense_inference
    • task929_products_reviews_classification
    • task582_naturalquestion_answer_generation
    • task237_iirc_answer_from_subtext_answer_generation
    • task050_multirc_answerability
    • task184_break_generate_question
    • task669_ambigqa_answer_generation
    • task169_strategyqa_sentence_generation
    • task500_scruples_anecdotes_title_generation
    • task241_tweetqa_classification
    • task1345_glue_qqp_question_paraprashing
    • task218_rocstories_swap_order_answer_generation
    • task613_politifact_text_generation
    • task1167_penn_treebank_coarse_pos_tagging
    • task1422_mathqa_physics
    • task247_dream_answer_generation
    • task199_mnli_classification
    • task164_mcscript_question_answering_text
    • task1541_agnews_classification
    • task516_senteval_conjoints_inversion
    • task294_storycommonsense_motiv_text_generation
    • task501_scruples_anecdotes_post_type_verification
    • task213_rocstories_correct_ending_classification
    • task821_protoqa_question_generation
    • task493_review_polarity_classification
    • task308_jeopardy_answer_generation_all
    • task1595_event2mind_text_generation_1
    • task040_qasc_question_generation
    • task231_iirc_link_classification
    • task1727_wiqa_what_is_the_effect
    • task578_curiosity_dialogs_answer_generation
    • task310_race_classification
    • task309_race_answer_generation
    • task379_agnews_topic_classification
    • task030_winogrande_full_person
    • task1540_parsed_pdfs_summarization
    • task039_qasc_find_overlapping_words
    • task1206_atomic_classification_isbefore
    • task157_count_vowels_and_consonants
    • task339_record_answer_generation
    • task453_swag_answer_generation
    • task848_pubmedqa_classification
    • task673_google_wellformed_query_classification
    • task676_ollie_relationship_answer_generation
    • task268_casehold_legal_answer_generation
    • task844_financial_phrasebank_classification
    • task330_gap_answer_generation
    • task595_mocha_answer_generation
    • task1285_kpa_keypoint_matching
    • task234_iirc_passage_line_answer_generation
    • task494_review_polarity_answer_generation
    • task670_ambigqa_question_generation
    • task289_gigaword_summarization
    • npr
    • nli
    • SimpleWiki
    • amazon_review_2018
    • ccnews_title_text
    • agnews
    • xsum
    • msmarco
    • yahoo_answers_title_answer
    • squad_pairs
    • wow
    • mteb-amazon_counterfactual-avs_triplets
    • mteb-amazon_massive_intent-avs_triplets
    • mteb-amazon_massive_scenario-avs_triplets
    • mteb-amazon_reviews_multi-avs_triplets
    • mteb-banking77-avs_triplets
    • mteb-emotion-avs_triplets
    • mteb-imdb-avs_triplets
    • mteb-mtop_domain-avs_triplets
    • mteb-mtop_intent-avs_triplets
    • mteb-toxic_conversations_50k-avs_triplets
    • mteb-tweet_sentiment_extraction-avs_triplets
    • covid-bing-query-gpt4-avs_triplets
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42, 'requires_grad': True})
)
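
The RandomProjection head is not a stock sentence-transformers module, so the printed config is the only specification available here. The sketch below is a hypothetical reimplementation assuming it wraps a seeded linear projection from the 384-dimensional MiniLM output to 768 dimensions; the dict-based forward contract mirrors how sentence-transformers modules pass features between pipeline stages, but the exact internals (initialization scheme, absence of a bias) are assumptions.

```python
import torch
from torch import nn


class RandomProjection(nn.Module):
    """Hypothetical sketch: project 384-d MiniLM embeddings to 768-d.

    The weight is drawn from a fixed seed for reproducibility and, with
    requires_grad=True, is updated during fine-tuning.
    """

    def __init__(self, in_features: int = 384, out_features: int = 768,
                 seed: int = 42, requires_grad: bool = True):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)
        # Scaled Gaussian init (assumption); roughly preserves vector norms.
        weight = torch.randn(out_features, in_features, generator=gen) / in_features ** 0.5
        self.weight = nn.Parameter(weight, requires_grad=requires_grad)

    def forward(self, features: dict) -> dict:
        # sentence-transformers modules pass a feature dict between stages.
        features["sentence_embedding"] = features["sentence_embedding"] @ self.weight.T
        return features


proj = RandomProjection()
out = proj({"sentence_embedding": torch.randn(3, 384)})
print(out["sentence_embedding"].shape)  # torch.Size([3, 768])
```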

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-trainable-512-final")
# Run inference
sentences = [
    'Should I stay or should I go?',
    'The aim of this study was to examine the experiences of parents encountering the critical deterioration and resuscitative care of other children in the pediatric intensive care unit where their own child was admitted.Grounded theory qualitative methodology.Pediatric intensive care unit of a pediatric tertiary care center in Montreal, Canada.Ten parents of critically ill children who witnessed resuscitative measures on another child.None.Semistructured interviews were conducted. While witnessing resuscitation, parents struggled with "Should I stay or should I go?" Their decision depended on specific contributing factors that were intrinsic to parents (curiosity or apprehension, the child\'s sake, trust or distrust) or extrinsic (limited space). These parents were not "spectators." Despite using coping strategies, the experiences were distressing in the majority of cases, although sometimes comforting. The impact on witnessing critical events had divergent effects on parental trust with healthcare professionals.',
    'Several recent studies suggest that acceleration of the head at impact during sporting activities may have a detrimental effect on cognitive function. Reducing acceleration of impact in these sports could reduce neurologic sequelae.To measure the effectiveness of a regulation football helmet to reduce acceleration of impact for both low- and moderate-force impacts.An experimental paired study design was used. Male volunteers between 16 and 30 years of age headed soccer balls traveling approximately 35 miles per hour bareheaded and with a helmet. An intraoral accelerometer worn inside a plastic mouthpiece measured acceleration of the head. The helmet also had an accelerometer placed inside the padding. For more forceful impacts, cadaver heads, both with and without helmets, were instrumented with intraoral (IO) and intracranial (IC) accelerometers and struck with a pendulum device. Simultaneous IO and IC accelerations were measured and compared between helmeted and unhelmeted cadaver heads. The main outcome was mean peak acceleration of the head and/or brain associated with low- and moderate-force impacts with and without protective headgear.Mean peak Gs, measured by the mouthpiece accelerometer, were significantly reduced when the participants heading soccer balls were wearing a helmet (7.7 Gs with vs 19.2 Gs without, p = 0.01). Wearing a helmet also significantly lowered the peak Gs measured intraorally and intracranially in cadavers subjected to moderate-force pendulum impacts: 28.7 Gs with vs 62.6 Gs without, p<0.001; and 56.4 Gs with vs 81.6 Gs without, p<0.001, respectively.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Triplet

Metric Value
cosine_accuracy 0.9145
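
Here cosine_accuracy is the fraction of held-out triplets for which the anchor's cosine similarity to its positive exceeds its similarity to its negative. A minimal sketch of that computation on toy data (not the actual dev set):

```python
import numpy as np


def triplet_cosine_accuracy(anchors, positives, negatives):
    """Fraction of triplets with cos(anchor, positive) > cos(anchor, negative)."""
    def cos(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return (a * b).sum(axis=1)  # row-wise cosine similarity
    return float(np.mean(cos(anchors, positives) > cos(anchors, negatives)))


# Toy example: positives are noisy copies of the anchors,
# negatives are orthogonal unit vectors.
a = np.eye(4)
p = a + 0.1 * np.random.default_rng(0).normal(size=(4, 4))
n = np.roll(a, 1, axis=0)
print(triplet_cosine_accuracy(a, p, n))  # 1.0
```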

Training Details

Training Datasets

NQ

  • Dataset: NQ
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 10 tokens | mean: 11.84 tokens | max: 25 tokens
    positive: string | min: 110 tokens | mean: 138.08 tokens | max: 211 tokens
    negative: string | min: 111 tokens | mean: 138.38 tokens | max: 252 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

pubmed

  • Dataset: pubmed
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 5 tokens | mean: 22.86 tokens | max: 53 tokens
    positive: string | min: 74 tokens | mean: 239.63 tokens | max: 256 tokens
    negative: string | min: 64 tokens | mean: 240.96 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

specter_train_triples

  • Dataset: specter_train_triples
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 4 tokens | mean: 15.31 tokens | max: 48 tokens
    positive: string | min: 4 tokens | mean: 13.8 tokens | max: 38 tokens
    negative: string | min: 4 tokens | mean: 15.7 tokens | max: 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

S2ORC_citations_abstracts

  • Dataset: S2ORC_citations_abstracts
  • Size: 99,352 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 27 tokens | mean: 202.95 tokens | max: 256 tokens
    positive: string | min: 23 tokens | mean: 205.71 tokens | max: 256 tokens
    negative: string | min: 22 tokens | mean: 204.68 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

fever

  • Dataset: fever
  • Size: 74,514 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 6 tokens | mean: 12.36 tokens | max: 51 tokens
    positive: string | min: 48 tokens | mean: 112.22 tokens | max: 148 tokens
    negative: string | min: 35 tokens | mean: 113.94 tokens | max: 158 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

gooaq_pairs

  • Dataset: gooaq_pairs
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 8 tokens | mean: 11.84 tokens | max: 24 tokens
    positive: string | min: 14 tokens | mean: 60.08 tokens | max: 144 tokens
    negative: string | min: 15 tokens | mean: 63.21 tokens | max: 165 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

codesearchnet

  • Dataset: codesearchnet
  • Size: 15,210 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 5 tokens | mean: 29.23 tokens | max: 163 tokens
    positive: string | min: 28 tokens | mean: 133.98 tokens | max: 256 tokens
    negative: string | min: 31 tokens | mean: 161.35 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

wikihow

  • Dataset: wikihow
  • Size: 5,070 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 4 tokens | mean: 8.03 tokens | max: 21 tokens
    positive: string | min: 12 tokens | mean: 44.27 tokens | max: 89 tokens
    negative: string | min: 10 tokens | mean: 36.71 tokens | max: 100 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

WikiAnswers

  • Dataset: WikiAnswers
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 6 tokens | mean: 12.85 tokens | max: 39 tokens
    positive: string | min: 6 tokens | mean: 12.95 tokens | max: 43 tokens
    negative: string | min: 6 tokens | mean: 13.06 tokens | max: 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

eli5_question_answer

  • Dataset: eli5_question_answer
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 5 tokens | mean: 20.78 tokens | max: 63 tokens
    positive: string | min: 11 tokens | mean: 98.02 tokens | max: 256 tokens
    negative: string | min: 14 tokens | mean: 109.02 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

amazon-qa

  • Dataset: amazon-qa
  • Size: 99,352 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 6 tokens | mean: 22.78 tokens | max: 256 tokens
    positive: string | min: 15 tokens | mean: 53.82 tokens | max: 256 tokens
    negative: string | min: 17 tokens | mean: 62.55 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

medmcqa

  • Dataset: medmcqa
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 6 tokens | mean: 20.65 tokens | max: 180 tokens
    positive: string | min: 3 tokens | mean: 113.32 tokens | max: 256 tokens
    negative: string | min: 3 tokens | mean: 117.89 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

zeroshot

  • Dataset: zeroshot
  • Size: 15,210 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 5 tokens | mean: 8.57 tokens | max: 20 tokens
    positive: string | min: 23 tokens | mean: 110.88 tokens | max: 174 tokens
    negative: string | min: 14 tokens | mean: 116.58 tokens | max: 192 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

TriviaQA_pairs

  • Dataset: TriviaQA_pairs
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 8 tokens | mean: 19.27 tokens | max: 64 tokens
    positive: string | min: 18 tokens | mean: 244.2 tokens | max: 256 tokens
    negative: string | min: 56 tokens | mean: 235.19 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

PAQ_pairs

  • Dataset: PAQ_pairs
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 7 tokens | mean: 12.61 tokens | max: 21 tokens
    positive: string | min: 112 tokens | mean: 136.23 tokens | max: 225 tokens
    negative: string | min: 108 tokens | mean: 135.98 tokens | max: 254 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

stackexchange_duplicate_questions_title-body_title-body

  • Dataset: stackexchange_duplicate_questions_title-body_title-body
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 16 tokens | mean: 150.03 tokens | max: 256 tokens
    positive: string | min: 20 tokens | mean: 138.73 tokens | max: 256 tokens
    negative: string | min: 22 tokens | mean: 199.2 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

trex

  • Dataset: trex
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 5 tokens | mean: 9.57 tokens | max: 21 tokens
    positive: string | min: 21 tokens | mean: 103.3 tokens | max: 193 tokens
    negative: string | min: 10 tokens | mean: 117.84 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

flickr30k_captions

  • Dataset: flickr30k_captions
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 7 tokens | mean: 15.71 tokens | max: 56 tokens
    positive: string | min: 7 tokens | mean: 16.19 tokens | max: 64 tokens
    negative: string | min: 7 tokens | mean: 16.43 tokens | max: 52 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

hotpotqa

  • Dataset: hotpotqa
  • Size: 40,048 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 9 tokens | mean: 24.82 tokens | max: 110 tokens
    positive: string | min: 35 tokens | mean: 112.64 tokens | max: 146 tokens
    negative: string | min: 31 tokens | mean: 114.86 tokens | max: 178 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task671_ambigqa_text_generation

  • Dataset: task671_ambigqa_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 11 tokens | mean: 12.7 tokens | max: 26 tokens
    positive: string | min: 11 tokens | mean: 12.52 tokens | max: 23 tokens
    negative: string | min: 11 tokens | mean: 12.23 tokens | max: 19 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task061_ropes_answer_generation

  • Dataset: task061_ropes_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 117 tokens | mean: 210.02 tokens | max: 256 tokens
    positive: string | min: 117 tokens | mean: 209.38 tokens | max: 256 tokens
    negative: string | min: 119 tokens | mean: 211.18 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task285_imdb_answer_generation

  • Dataset: task285_imdb_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 46 tokens | mean: 209.41 tokens | max: 256 tokens
    positive: string | min: 49 tokens | mean: 203.91 tokens | max: 256 tokens
    negative: string | min: 46 tokens | mean: 209.41 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task905_hate_speech_offensive_classification

  • Dataset: task905_hate_speech_offensive_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 15 tokens | mean: 41.18 tokens | max: 164 tokens
    positive: string | min: 13 tokens | mean: 40.19 tokens | max: 198 tokens
    negative: string | min: 13 tokens | mean: 31.8 tokens | max: 135 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task566_circa_classification

  • Dataset: task566_circa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 20 tokens | mean: 27.83 tokens | max: 48 tokens
    positive: string | min: 19 tokens | mean: 27.23 tokens | max: 44 tokens
    negative: string | min: 20 tokens | mean: 27.51 tokens | max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task184_snli_entailment_to_neutral_text_modification

  • Dataset: task184_snli_entailment_to_neutral_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 17 tokens | mean: 29.81 tokens | max: 72 tokens
    positive: string | min: 16 tokens | mean: 28.92 tokens | max: 60 tokens
    negative: string | min: 17 tokens | mean: 30.3 tokens | max: 100 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task280_stereoset_classification_stereotype_type

  • Dataset: task280_stereoset_classification_stereotype_type
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 8 tokens | mean: 18.5 tokens | max: 53 tokens
    positive: string | min: 8 tokens | mean: 16.84 tokens | max: 53 tokens
    negative: string | min: 8 tokens | mean: 16.87 tokens | max: 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1599_smcalflow_classification

  • Dataset: task1599_smcalflow_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 3 tokens | mean: 11.34 tokens | max: 37 tokens
    positive: string | min: 3 tokens | mean: 10.56 tokens | max: 38 tokens
    negative: string | min: 5 tokens | mean: 16.3 tokens | max: 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1384_deal_or_no_dialog_classification

  • Dataset: task1384_deal_or_no_dialog_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 14 tokens | mean: 59.36 tokens | max: 256 tokens
    positive: string | min: 12 tokens | mean: 59.64 tokens | max: 256 tokens
    negative: string | min: 15 tokens | mean: 58.78 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task591_sciq_answer_generation

  • Dataset: task591_sciq_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 8 tokens | mean: 17.57 tokens | max: 70 tokens
    positive: string | min: 7 tokens | mean: 17.14 tokens | max: 43 tokens
    negative: string | min: 6 tokens | mean: 16.71 tokens | max: 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task823_peixian-rtgender_sentiment_analysis

  • Dataset: task823_peixian-rtgender_sentiment_analysis
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 16 tokens | mean: 57.13 tokens | max: 179 tokens
    positive: string | min: 16 tokens | mean: 59.67 tokens | max: 153 tokens
    negative: string | min: 14 tokens | mean: 60.2 tokens | max: 169 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task023_cosmosqa_question_generation

  • Dataset: task023_cosmosqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 35 tokens | mean: 78.87 tokens | max: 159 tokens
    positive: string | min: 34 tokens | mean: 80.11 tokens | max: 165 tokens
    negative: string | min: 35 tokens | mean: 79.1 tokens | max: 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task900_freebase_qa_category_classification

  • Dataset: task900_freebase_qa_category_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 8 tokens | mean: 20.4 tokens | max: 88 tokens
    positive: string | min: 8 tokens | mean: 18.23 tokens | max: 62 tokens
    negative: string | min: 8 tokens | mean: 19.02 tokens | max: 69 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task924_event2mind_word_generation

  • Dataset: task924_event2mind_word_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 17 tokens | mean: 32.06 tokens | max: 64 tokens
    positive: string | min: 17 tokens | mean: 32.09 tokens | max: 70 tokens
    negative: string | min: 17 tokens | mean: 31.44 tokens | max: 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task152_tomqa_find_location_easy_noise

  • Dataset: task152_tomqa_find_location_easy_noise
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 37 tokens | mean: 53.08 tokens | max: 79 tokens
    positive: string | min: 37 tokens | mean: 52.45 tokens | max: 78 tokens
    negative: string | min: 37 tokens | mean: 52.77 tokens | max: 82 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1368_healthfact_sentence_generation

  • Dataset: task1368_healthfact_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 91 tokens | mean: 240.4 tokens | max: 256 tokens
    positive: string | min: 84 tokens | mean: 239.58 tokens | max: 256 tokens
    negative: string | min: 97 tokens | mean: 245.19 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1661_super_glue_classification

  • Dataset: task1661_super_glue_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 35 tokens | mean: 141.18 tokens | max: 256 tokens
    positive: string | min: 31 tokens | mean: 143.01 tokens | max: 256 tokens
    negative: string | min: 31 tokens | mean: 143.2 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1187_politifact_classification

  • Dataset: task1187_politifact_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 14 tokens | mean: 33.28 tokens | max: 79 tokens
    positive: string | min: 10 tokens | mean: 31.53 tokens | max: 75 tokens
    negative: string | min: 13 tokens | mean: 31.93 tokens | max: 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1728_web_nlg_data_to_text

  • Dataset: task1728_web_nlg_data_to_text
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 7 tokens | mean: 42.96 tokens | max: 152 tokens
    positive: string | min: 7 tokens | mean: 46.5 tokens | max: 152 tokens
    negative: string | min: 8 tokens | mean: 42.77 tokens | max: 152 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task112_asset_simple_sentence_identification

  • Dataset: task112_asset_simple_sentence_identification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 18 tokens | mean: 51.97 tokens | max: 136 tokens
    positive: string | min: 18 tokens | mean: 51.73 tokens | max: 144 tokens
    negative: string | min: 22 tokens | mean: 51.88 tokens | max: 114 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1340_msr_text_compression_compression

  • Dataset: task1340_msr_text_compression_compression
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 14 tokens | mean: 41.88 tokens | max: 116 tokens
    positive: string | min: 14 tokens | mean: 44.35 tokens | max: 133 tokens
    negative: string | min: 12 tokens | mean: 40.06 tokens | max: 141 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task072_abductivenli_answer_generation

  • Dataset: task072_abductivenli_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 17 tokens | mean: 26.88 tokens | max: 56 tokens
    positive: string | min: 16 tokens | mean: 26.17 tokens | max: 47 tokens
    negative: string | min: 16 tokens | mean: 26.41 tokens | max: 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1504_hatexplain_answer_generation

  • Dataset: task1504_hatexplain_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 7 tokens | mean: 28.86 tokens | max: 72 tokens
    positive: string | min: 5 tokens | mean: 24.52 tokens | max: 86 tokens
    negative: string | min: 5 tokens | mean: 28.07 tokens | max: 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task684_online_privacy_policy_text_information_type_generation

  • Dataset: task684_online_privacy_policy_text_information_type_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 10 tokens | mean: 29.9 tokens | max: 68 tokens
    positive: string | min: 10 tokens | mean: 30.16 tokens | max: 61 tokens
    negative: string | min: 14 tokens | mean: 30.06 tokens | max: 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1290_xsum_summarization

  • Dataset: task1290_xsum_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 39 tokens | mean: 226.47 tokens | max: 256 tokens
    positive: string | min: 50 tokens | mean: 229.89 tokens | max: 256 tokens
    negative: string | min: 34 tokens | mean: 229.29 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task075_squad1.1_answer_generation

  • Dataset: task075_squad1.1_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 48 tokens | mean: 168.28 tokens | max: 256 tokens
    positive: string | min: 45 tokens | mean: 172.9 tokens | max: 256 tokens
    negative: string | min: 46 tokens | mean: 179.79 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1587_scifact_classification

  • Dataset: task1587_scifact_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 88 tokens | mean: 242.74 tokens | max: 256 tokens
    positive: string | min: 90 tokens | mean: 246.86 tokens | max: 256 tokens
    negative: string | min: 86 tokens | mean: 244.66 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task384_socialiqa_question_classification

  • Dataset: task384_socialiqa_question_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 24 tokens | mean: 35.43 tokens | max: 78 tokens
    positive: string | min: 22 tokens | mean: 34.4 tokens | max: 59 tokens
    negative: string | min: 22 tokens | mean: 34.6 tokens | max: 57 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1555_scitail_answer_generation

  • Dataset: task1555_scitail_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 18 tokens | mean: 36.84 tokens | max: 90 tokens
    positive: string | min: 18 tokens | mean: 36.35 tokens | max: 80 tokens
    negative: string | min: 18 tokens | mean: 36.61 tokens | max: 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1532_daily_dialog_emotion_classification

  • Dataset: task1532_daily_dialog_emotion_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 16 tokens | mean: 136.34 tokens | max: 256 tokens
    positive: string | min: 15 tokens | mean: 141.12 tokens | max: 256 tokens
    negative: string | min: 17 tokens | mean: 135.54 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task239_tweetqa_answer_generation

  • Dataset: task239_tweetqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 28 tokens | mean: 56.04 tokens | max: 91 tokens
    positive: string | min: 29 tokens | mean: 56.6 tokens | max: 92 tokens
    negative: string | min: 25 tokens | mean: 56.03 tokens | max: 81 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task596_mocha_question_generation

  • Dataset: task596_mocha_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 34 tokens | mean: 80.75 tokens | max: 163 tokens
    positive: string | min: 12 tokens | mean: 96.28 tokens | max: 256 tokens
    negative: string | min: 10 tokens | mean: 44.76 tokens | max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1411_dart_subject_identification

  • Dataset: task1411_dart_subject_identification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string | min: 7 tokens | mean: 14.91 tokens | max: 74 tokens
    positive: string | min: 6 tokens | mean: 14.08 tokens | max: 37 tokens
    negative: string | min: 6 tokens | mean: 14.34 tokens | max: 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1359_numer_sense_answer_generation

  • Dataset: task1359_numer_sense_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 10, mean 18.77, max 30 tokens
    positive (string): min 10, mean 18.41, max 33 tokens
    negative (string): min 10, mean 18.35, max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task329_gap_classification

  • Dataset: task329_gap_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 40, mean 123.98, max 256 tokens
    positive (string): min 62, mean 127.13, max 256 tokens
    negative (string): min 58, mean 128.92, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task220_rocstories_title_classification

  • Dataset: task220_rocstories_title_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 53, mean 80.86, max 116 tokens
    positive (string): min 51, mean 81.24, max 108 tokens
    negative (string): min 55, mean 79.88, max 115 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task316_crows-pairs_classification_stereotype

  • Dataset: task316_crows-pairs_classification_stereotype
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 8, mean 19.8, max 51 tokens
    positive (string): min 7, mean 18.23, max 41 tokens
    negative (string): min 7, mean 19.89, max 52 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task495_semeval_headline_classification

  • Dataset: task495_semeval_headline_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 17, mean 24.5, max 42 tokens
    positive (string): min 15, mean 24.16, max 41 tokens
    negative (string): min 15, mean 24.26, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1168_brown_coarse_pos_tagging

  • Dataset: task1168_brown_coarse_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 13, mean 43.79, max 142 tokens
    positive (string): min 12, mean 43.05, max 197 tokens
    negative (string): min 12, mean 44.64, max 197 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task348_squad2.0_unanswerable_question_generation

  • Dataset: task348_squad2.0_unanswerable_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 30, mean 153.11, max 256 tokens
    positive (string): min 38, mean 161.73, max 256 tokens
    negative (string): min 33, mean 167.0, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task049_multirc_questions_needed_to_answer

  • Dataset: task049_multirc_questions_needed_to_answer
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 174, mean 252.74, max 256 tokens
    positive (string): min 169, mean 252.74, max 256 tokens
    negative (string): min 178, mean 252.9, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1534_daily_dialog_question_classification

  • Dataset: task1534_daily_dialog_question_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 17, mean 126.44, max 256 tokens
    positive (string): min 15, mean 131.14, max 256 tokens
    negative (string): min 16, mean 136.07, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task322_jigsaw_classification_threat

  • Dataset: task322_jigsaw_classification_threat
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 55.03, max 256 tokens
    positive (string): min 6, mean 61.38, max 249 tokens
    negative (string): min 6, mean 62.47, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task295_semeval_2020_task4_commonsense_reasoning

  • Dataset: task295_semeval_2020_task4_commonsense_reasoning
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 25, mean 44.9, max 92 tokens
    positive (string): min 25, mean 45.21, max 95 tokens
    negative (string): min 25, mean 44.7, max 88 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task186_snli_contradiction_to_entailment_text_modification

  • Dataset: task186_snli_contradiction_to_entailment_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 18, mean 31.18, max 102 tokens
    positive (string): min 18, mean 30.14, max 65 tokens
    negative (string): min 18, mean 32.19, max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task034_winogrande_question_modification_object

  • Dataset: task034_winogrande_question_modification_object
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 29, mean 36.37, max 53 tokens
    positive (string): min 29, mean 35.59, max 54 tokens
    negative (string): min 29, mean 34.87, max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task160_replace_letter_in_a_sentence

  • Dataset: task160_replace_letter_in_a_sentence
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 29, mean 31.98, max 49 tokens
    positive (string): min 28, mean 31.75, max 41 tokens
    negative (string): min 29, mean 31.75, max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task469_mrqa_answer_generation

  • Dataset: task469_mrqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 27, mean 182.31, max 256 tokens
    positive (string): min 25, mean 181.14, max 256 tokens
    negative (string): min 27, mean 184.24, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task105_story_cloze-rocstories_sentence_generation

  • Dataset: task105_story_cloze-rocstories_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 36, mean 55.56, max 75 tokens
    positive (string): min 35, mean 54.95, max 76 tokens
    negative (string): min 36, mean 55.97, max 76 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task649_race_blank_question_generation

  • Dataset: task649_race_blank_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 36, mean 252.97, max 256 tokens
    positive (string): min 36, mean 252.53, max 256 tokens
    negative (string): min 157, mean 254.0, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1536_daily_dialog_happiness_classification

  • Dataset: task1536_daily_dialog_happiness_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 13, mean 127.43, max 256 tokens
    positive (string): min 13, mean 133.98, max 256 tokens
    negative (string): min 16, mean 142.13, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task683_online_privacy_policy_text_purpose_answer_generation

  • Dataset: task683_online_privacy_policy_text_purpose_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 10, mean 29.92, max 68 tokens
    positive (string): min 10, mean 30.2, max 64 tokens
    negative (string): min 14, mean 29.84, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task024_cosmosqa_answer_generation

  • Dataset: task024_cosmosqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 45, mean 92.82, max 176 tokens
    positive (string): min 47, mean 93.13, max 174 tokens
    negative (string): min 42, mean 94.96, max 183 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task584_udeps_eng_fine_pos_tagging

  • Dataset: task584_udeps_eng_fine_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 12, mean 40.09, max 120 tokens
    positive (string): min 12, mean 39.33, max 186 tokens
    negative (string): min 12, mean 40.42, max 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task066_timetravel_binary_consistency_classification

  • Dataset: task066_timetravel_binary_consistency_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 42, mean 66.75, max 93 tokens
    positive (string): min 43, mean 67.37, max 94 tokens
    negative (string): min 45, mean 67.13, max 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task413_mickey_en_sentence_perturbation_generation

  • Dataset: task413_mickey_en_sentence_perturbation_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 13.72, max 21 tokens
    positive (string): min 7, mean 13.8, max 21 tokens
    negative (string): min 7, mean 13.28, max 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task182_duorc_question_generation

  • Dataset: task182_duorc_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 99, mean 242.87, max 256 tokens
    positive (string): min 120, mean 246.6, max 256 tokens
    negative (string): min 99, mean 246.31, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task028_drop_answer_generation

  • Dataset: task028_drop_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 76, mean 230.68, max 256 tokens
    positive (string): min 86, mean 234.65, max 256 tokens
    negative (string): min 81, mean 236.16, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1601_webquestions_answer_generation

  • Dataset: task1601_webquestions_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 9, mean 16.54, max 28 tokens
    positive (string): min 11, mean 16.68, max 28 tokens
    negative (string): min 9, mean 16.75, max 27 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1295_adversarial_qa_question_answering

  • Dataset: task1295_adversarial_qa_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 45, mean 165.69, max 256 tokens
    positive (string): min 54, mean 166.56, max 256 tokens
    negative (string): min 48, mean 167.89, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task201_mnli_neutral_classification

  • Dataset: task201_mnli_neutral_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 24, mean 72.88, max 218 tokens
    positive (string): min 25, mean 73.52, max 170 tokens
    negative (string): min 27, mean 72.82, max 205 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task038_qasc_combined_fact

  • Dataset: task038_qasc_combined_fact
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 18, mean 31.28, max 57 tokens
    positive (string): min 19, mean 30.54, max 53 tokens
    negative (string): min 18, mean 30.82, max 53 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task293_storycommonsense_emotion_text_generation

  • Dataset: task293_storycommonsense_emotion_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 14, mean 40.64, max 86 tokens
    positive (string): min 15, mean 40.58, max 86 tokens
    negative (string): min 14, mean 38.51, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task572_recipe_nlg_text_generation

  • Dataset: task572_recipe_nlg_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 24, mean 114.35, max 256 tokens
    positive (string): min 24, mean 119.45, max 256 tokens
    negative (string): min 24, mean 123.81, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task517_emo_classify_emotion_of_dialogue

  • Dataset: task517_emo_classify_emotion_of_dialogue
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 18.16, max 78 tokens
    positive (string): min 7, mean 16.94, max 59 tokens
    negative (string): min 7, mean 18.35, max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task382_hybridqa_answer_generation

  • Dataset: task382_hybridqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 29, mean 42.25, max 70 tokens
    positive (string): min 29, mean 41.63, max 74 tokens
    negative (string): min 28, mean 41.83, max 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task176_break_decompose_questions

  • Dataset: task176_break_decompose_questions
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 9, mean 17.41, max 41 tokens
    positive (string): min 8, mean 17.22, max 39 tokens
    negative (string): min 8, mean 15.72, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1291_multi_news_summarization

  • Dataset: task1291_multi_news_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 116, mean 255.36, max 256 tokens
    positive (string): min 146, mean 255.71, max 256 tokens
    negative (string): min 68, mean 252.09, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task155_count_nouns_verbs

  • Dataset: task155_count_nouns_verbs
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 23, mean 27.01, max 56 tokens
    positive (string): min 23, mean 26.79, max 43 tokens
    negative (string): min 23, mean 26.96, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task031_winogrande_question_generation_object

  • Dataset: task031_winogrande_question_generation_object
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 7.41, max 11 tokens
    positive (string): min 7, mean 7.3, max 11 tokens
    negative (string): min 7, mean 7.27, max 11 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task279_stereoset_classification_stereotype

  • Dataset: task279_stereoset_classification_stereotype
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 8, mean 17.86, max 41 tokens
    positive (string): min 8, mean 15.56, max 43 tokens
    negative (string): min 8, mean 17.31, max 50 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1336_peixian_equity_evaluation_corpus_gender_classifier

  • Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 6, mean 9.61, max 17 tokens
    positive (string): min 6, mean 9.59, max 16 tokens
    negative (string): min 6, mean 9.66, max 16 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task508_scruples_dilemmas_more_ethical_isidentifiable

  • Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 12, mean 29.76, max 94 tokens
    positive (string): min 12, mean 28.61, max 94 tokens
    negative (string): min 12, mean 28.63, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task518_emo_different_dialogue_emotions

  • Dataset: task518_emo_different_dialogue_emotions
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 28, mean 47.87, max 106 tokens
    positive (string): min 28, mean 45.55, max 116 tokens
    negative (string): min 26, mean 45.87, max 123 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task077_splash_explanation_to_sql

  • Dataset: task077_splash_explanation_to_sql
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 8, mean 39.82, max 126 tokens
    positive (string): min 8, mean 40.04, max 126 tokens
    negative (string): min 8, mean 35.69, max 111 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task923_event2mind_classifier

  • Dataset: task923_event2mind_classifier
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 10, mean 20.66, max 46 tokens
    positive (string): min 11, mean 18.63, max 41 tokens
    negative (string): min 11, mean 19.53, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task470_mrqa_question_generation

  • Dataset: task470_mrqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 13, mean 170.62, max 256 tokens
    positive (string): min 11, mean 173.14, max 256 tokens
    negative (string): min 14, mean 178.86, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task638_multi_woz_classification

  • Dataset: task638_multi_woz_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 78, mean 223.5, max 256 tokens
    positive (string): min 76, mean 220.19, max 256 tokens
    negative (string): min 64, mean 220.04, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1412_web_questions_question_answering

  • Dataset: task1412_web_questions_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 6, mean 10.32, max 17 tokens
    positive (string): min 6, mean 10.18, max 17 tokens
    negative (string): min 6, mean 10.06, max 16 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task847_pubmedqa_question_generation

  • Dataset: task847_pubmedqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 21, mean 249.63, max 256 tokens
    positive (string): min 21, mean 249.29, max 256 tokens
    negative (string): min 43, mean 249.18, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task678_ollie_actual_relationship_answer_generation

  • Dataset: task678_ollie_actual_relationship_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 20, mean 41.02, max 95 tokens
    positive (string): min 19, mean 38.03, max 102 tokens
    negative (string): min 18, mean 41.19, max 104 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task290_tellmewhy_question_answerability

  • Dataset: task290_tellmewhy_question_answerability
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 37, mean 62.8, max 95 tokens
    positive (string): min 36, mean 62.28, max 94 tokens
    negative (string): min 37, mean 62.92, max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task575_air_dialogue_classification

  • Dataset: task575_air_dialogue_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 4, mean 14.16, max 45 tokens
    • positive: min 4, mean 13.56, max 43 tokens
    • negative: min 4, mean 12.33, max 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task189_snli_neutral_to_contradiction_text_modification

  • Dataset: task189_snli_neutral_to_contradiction_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 18, mean 31.85, max 60 tokens
    • positive: min 18, mean 30.74, max 57 tokens
    • negative: min 18, mean 33.3, max 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task026_drop_question_generation

  • Dataset: task026_drop_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 82, mean 219.29, max 256 tokens
    • positive: min 57, mean 222.74, max 256 tokens
    • negative: min 96, mean 231.78, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task162_count_words_starting_with_letter

  • Dataset: task162_count_words_starting_with_letter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 28, mean 32.21, max 56 tokens
    • positive: min 28, mean 31.76, max 45 tokens
    • negative: min 28, mean 31.63, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task079_conala_concat_strings

  • Dataset: task079_conala_concat_strings
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 11, mean 39.59, max 76 tokens
    • positive: min 11, mean 34.2, max 80 tokens
    • negative: min 11, mean 33.73, max 76 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task610_conllpp_ner

  • Dataset: task610_conllpp_ner
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 4, mean 19.53, max 62 tokens
    • positive: min 4, mean 20.2, max 62 tokens
    • negative: min 4, mean 14.13, max 54 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task046_miscellaneous_question_typing

  • Dataset: task046_miscellaneous_question_typing
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 16, mean 25.34, max 70 tokens
    • positive: min 16, mean 24.82, max 70 tokens
    • negative: min 16, mean 25.11, max 57 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task197_mnli_domain_answer_generation

  • Dataset: task197_mnli_domain_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 15, mean 43.82, max 197 tokens
    • positive: min 12, mean 44.72, max 211 tokens
    • negative: min 11, mean 39.27, max 115 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1325_qa_zre_question_generation_on_subject_relation

  • Dataset: task1325_qa_zre_question_generation_on_subject_relation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 18, mean 50.73, max 256 tokens
    • positive: min 20, mean 49.55, max 180 tokens
    • negative: min 22, mean 54.03, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task430_senteval_subject_count

  • Dataset: task430_senteval_subject_count
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 7, mean 17.3, max 35 tokens
    • positive: min 7, mean 15.39, max 34 tokens
    • negative: min 7, mean 16.22, max 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task672_nummersense

  • Dataset: task672_nummersense
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 7, mean 15.72, max 30 tokens
    • positive: min 7, mean 15.3, max 27 tokens
    • negative: min 7, mean 15.26, max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task402_grailqa_paraphrase_generation

  • Dataset: task402_grailqa_paraphrase_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 23, mean 130.07, max 256 tokens
    • positive: min 24, mean 139.63, max 256 tokens
    • negative: min 22, mean 136.8, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task904_hate_speech_offensive_classification

  • Dataset: task904_hate_speech_offensive_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 8, mean 34.21, max 157 tokens
    • positive: min 8, mean 33.94, max 256 tokens
    • negative: min 5, mean 27.51, max 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task192_hotpotqa_sentence_generation

  • Dataset: task192_hotpotqa_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 37, mean 125.68, max 256 tokens
    • positive: min 35, mean 124.36, max 256 tokens
    • negative: min 33, mean 133.49, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task069_abductivenli_classification

  • Dataset: task069_abductivenli_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 33, mean 52.06, max 86 tokens
    • positive: min 33, mean 52.09, max 95 tokens
    • negative: min 33, mean 51.87, max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task574_air_dialogue_sentence_generation

  • Dataset: task574_air_dialogue_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 54, mean 144.3, max 256 tokens
    • positive: min 57, mean 143.64, max 256 tokens
    • negative: min 66, mean 147.62, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task187_snli_entailment_to_contradiction_text_modification

  • Dataset: task187_snli_entailment_to_contradiction_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 16, mean 30.16, max 69 tokens
    • positive: min 16, mean 30.0, max 104 tokens
    • negative: min 17, mean 29.36, max 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task749_glucose_reverse_cause_emotion_detection

  • Dataset: task749_glucose_reverse_cause_emotion_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 38, mean 67.56, max 106 tokens
    • positive: min 37, mean 67.11, max 104 tokens
    • negative: min 39, mean 68.44, max 107 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1552_scitail_question_generation

  • Dataset: task1552_scitail_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 7, mean 18.34, max 53 tokens
    • positive: min 7, mean 17.55, max 46 tokens
    • negative: min 7, mean 15.92, max 54 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task750_aqua_multiple_choice_answering

  • Dataset: task750_aqua_multiple_choice_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 33, mean 70.42, max 194 tokens
    • positive: min 32, mean 68.55, max 194 tokens
    • negative: min 28, mean 68.5, max 165 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task327_jigsaw_classification_toxic

  • Dataset: task327_jigsaw_classification_toxic
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 5, mean 37.15, max 234 tokens
    • positive: min 5, mean 41.69, max 256 tokens
    • negative: min 5, mean 46.13, max 244 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1502_hatexplain_classification

  • Dataset: task1502_hatexplain_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 5, mean 28.69, max 73 tokens
    • positive: min 5, mean 26.72, max 110 tokens
    • negative: min 5, mean 26.94, max 90 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task328_jigsaw_classification_insult

  • Dataset: task328_jigsaw_classification_insult
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 5, mean 51.2, max 247 tokens
    • positive: min 5, mean 60.74, max 256 tokens
    • negative: min 5, mean 64.45, max 249 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task304_numeric_fused_head_resolution

  • Dataset: task304_numeric_fused_head_resolution
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 15, mean 120.8, max 256 tokens
    • positive: min 12, mean 121.88, max 256 tokens
    • negative: min 11, mean 134.4, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1293_kilt_tasks_hotpotqa_question_answering

  • Dataset: task1293_kilt_tasks_hotpotqa_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 10, mean 24.64, max 114 tokens
    • positive: min 9, mean 24.23, max 114 tokens
    • negative: min 8, mean 23.72, max 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task216_rocstories_correct_answer_generation

  • Dataset: task216_rocstories_correct_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 39, mean 59.39, max 83 tokens
    • positive: min 36, mean 58.25, max 92 tokens
    • negative: min 39, mean 58.03, max 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1326_qa_zre_question_generation_from_answer

  • Dataset: task1326_qa_zre_question_generation_from_answer
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 17, mean 46.52, max 256 tokens
    • positive: min 14, mean 45.41, max 256 tokens
    • negative: min 18, mean 49.19, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1338_peixian_equity_evaluation_corpus_sentiment_classifier

  • Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 6, mean 9.71, max 16 tokens
    • positive: min 6, mean 9.72, max 16 tokens
    • negative: min 6, mean 9.6, max 17 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1729_personachat_generate_next

  • Dataset: task1729_personachat_generate_next
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 44, mean 146.41, max 256 tokens
    • positive: min 43, mean 142.25, max 256 tokens
    • negative: min 50, mean 144.54, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1202_atomic_classification_xneed

  • Dataset: task1202_atomic_classification_xneed
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 14, mean 19.55, max 32 tokens
    • positive: min 14, mean 19.38, max 31 tokens
    • negative: min 14, mean 19.24, max 28 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task400_paws_paraphrase_classification

  • Dataset: task400_paws_paraphrase_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 19, mean 52.27, max 97 tokens
    • positive: min 18, mean 51.78, max 98 tokens
    • negative: min 19, mean 53.05, max 97 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task502_scruples_anecdotes_whoiswrong_verification

  • Dataset: task502_scruples_anecdotes_whoiswrong_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 12, mean 230.21, max 256 tokens
    • positive: min 12, mean 236.63, max 256 tokens
    • negative: min 23, mean 235.13, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task088_identify_typo_verification

  • Dataset: task088_identify_typo_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 11, mean 15.1, max 48 tokens
    • positive: min 10, mean 15.07, max 47 tokens
    • negative: min 10, mean 15.41, max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task221_rocstories_two_choice_classification

  • Dataset: task221_rocstories_two_choice_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 47, mean 72.6, max 108 tokens
    • positive: min 48, mean 72.61, max 109 tokens
    • negative: min 46, mean 73.22, max 108 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task200_mnli_entailment_classification

  • Dataset: task200_mnli_entailment_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 24, mean 72.98, max 198 tokens
    • positive: min 23, mean 72.91, max 224 tokens
    • negative: min 23, mean 74.11, max 226 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task074_squad1.1_question_generation

  • Dataset: task074_squad1.1_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 30, mean 149.8, max 256 tokens
    • positive: min 33, mean 160.42, max 256 tokens
    • negative: min 38, mean 164.58, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task581_socialiqa_question_generation

  • Dataset: task581_socialiqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 12, mean 26.36, max 69 tokens
    • positive: min 14, mean 25.61, max 48 tokens
    • negative: min 15, mean 25.76, max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1186_nne_hrngo_classification

  • Dataset: task1186_nne_hrngo_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 19, mean 33.81, max 79 tokens
    • positive: min 19, mean 33.53, max 74 tokens
    • negative: min 20, mean 33.4, max 77 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task898_freebase_qa_answer_generation

  • Dataset: task898_freebase_qa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 8, mean 19.14, max 125 tokens
    • positive: min 8, mean 17.5, max 49 tokens
    • negative: min 8, mean 17.33, max 79 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1408_dart_similarity_classification

  • Dataset: task1408_dart_similarity_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 22, mean 59.5, max 147 tokens
    • positive: min 22, mean 61.98, max 154 tokens
    • negative: min 20, mean 48.3, max 124 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task168_strategyqa_question_decomposition

  • Dataset: task168_strategyqa_question_decomposition
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 42, mean 80.45, max 181 tokens
    • positive: min 42, mean 78.96, max 179 tokens
    • negative: min 42, mean 77.07, max 166 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1357_xlsum_summary_generation

  • Dataset: task1357_xlsum_summary_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 67, mean 241.84, max 256 tokens
    • positive: min 69, mean 243.75, max 256 tokens
    • negative: min 67, mean 246.71, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task390_torque_text_span_selection

  • Dataset: task390_torque_text_span_selection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 47, mean 110.13, max 196 tokens
    • positive: min 42, mean 110.78, max 195 tokens
    • negative: min 48, mean 110.6, max 196 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task165_mcscript_question_answering_commonsense

  • Dataset: task165_mcscript_question_answering_commonsense
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 147, mean 198.18, max 256 tokens
    • positive: min 145, mean 196.56, max 256 tokens
    • negative: min 147, mean 198.4, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1533_daily_dialog_formal_classification

  • Dataset: task1533_daily_dialog_formal_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 13, mean 130.59, max 256 tokens
    • positive: min 15, mean 137.09, max 256 tokens
    • negative: min 17, mean 137.38, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task002_quoref_answer_generation

  • Dataset: task002_quoref_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 214, mean 255.57, max 256 tokens
    • positive: min 214, mean 255.53, max 256 tokens
    • negative: min 224, mean 255.61, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1297_qasc_question_answering

  • Dataset: task1297_qasc_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 61, mean 84.55, max 134 tokens
    • positive: min 59, mean 85.56, max 130 tokens
    • negative: min 58, mean 84.73, max 125 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task305_jeopardy_answer_generation_normal

  • Dataset: task305_jeopardy_answer_generation_normal
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 9, mean 27.6, max 59 tokens
    • positive: min 9, mean 27.42, max 45 tokens
    • negative: min 11, mean 27.39, max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task029_winogrande_full_object

  • Dataset: task029_winogrande_full_object
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 7, mean 7.37, max 12 tokens
    • positive: min 7, mean 7.33, max 11 tokens
    • negative: min 7, mean 7.24, max 10 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1327_qa_zre_answer_generation_from_question

  • Dataset: task1327_qa_zre_answer_generation_from_question
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • type: string (all three columns)
    • anchor: min 24, mean 54.55, max 256 tokens
    • positive: min 23, mean 51.77, max 256 tokens
    • negative: min 27, mean 55.1, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task326_jigsaw_classification_obscene

  • Dataset: task326_jigsaw_classification_obscene
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 65.28, max 256 tokens
    • positive (string): min 5, mean 77.4, max 256 tokens
    • negative (string): min 5, mean 73.69, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1542_every_ith_element_from_starting

  • Dataset: task1542_every_ith_element_from_starting
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 13, mean 124.81, max 245 tokens
    • positive (string): min 13, mean 123.13, max 244 tokens
    • negative (string): min 13, mean 120.4, max 238 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task570_recipe_nlg_ner_generation

  • Dataset: task570_recipe_nlg_ner_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9, mean 73.85, max 250 tokens
    • positive (string): min 5, mean 73.41, max 256 tokens
    • negative (string): min 8, mean 75.34, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1409_dart_text_generation

  • Dataset: task1409_dart_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 18, mean 67.42, max 174 tokens
    • positive (string): min 18, mean 72.58, max 170 tokens
    • negative (string): min 17, mean 67.42, max 164 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task401_numeric_fused_head_reference

  • Dataset: task401_numeric_fused_head_reference
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16, mean 110.13, max 256 tokens
    • positive (string): min 16, mean 116.49, max 256 tokens
    • negative (string): min 18, mean 120.79, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task846_pubmedqa_classification

  • Dataset: task846_pubmedqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 32, mean 85.71, max 246 tokens
    • positive (string): min 33, mean 85.04, max 225 tokens
    • negative (string): min 28, mean 93.66, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1712_poki_classification

  • Dataset: task1712_poki_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 52.65, max 256 tokens
    • positive (string): min 7, mean 55.69, max 256 tokens
    • negative (string): min 7, mean 63.02, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task344_hybridqa_answer_generation

  • Dataset: task344_hybridqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9, mean 22.2, max 50 tokens
    • positive (string): min 8, mean 22.05, max 58 tokens
    • negative (string): min 7, mean 22.14, max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task875_emotion_classification

  • Dataset: task875_emotion_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 23.06, max 75 tokens
    • positive (string): min 4, mean 18.42, max 63 tokens
    • negative (string): min 5, mean 20.46, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1214_atomic_classification_xwant

  • Dataset: task1214_atomic_classification_xwant
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 19.67, max 32 tokens
    • positive (string): min 14, mean 19.41, max 29 tokens
    • negative (string): min 14, mean 19.54, max 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task106_scruples_ethical_judgment

  • Dataset: task106_scruples_ethical_judgment
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12, mean 29.99, max 70 tokens
    • positive (string): min 14, mean 28.92, max 86 tokens
    • negative (string): min 14, mean 28.76, max 58 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task238_iirc_answer_from_passage_answer_generation

  • Dataset: task238_iirc_answer_from_passage_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 138, mean 242.65, max 256 tokens
    • positive (string): min 165, mean 243.0, max 256 tokens
    • negative (string): min 173, mean 243.1, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1391_winogrande_easy_answer_generation

  • Dataset: task1391_winogrande_easy_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 26, mean 31.66, max 54 tokens
    • positive (string): min 26, mean 31.34, max 48 tokens
    • negative (string): min 25, mean 31.16, max 49 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task195_sentiment140_classification

  • Dataset: task195_sentiment140_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 22.7, max 118 tokens
    • positive (string): min 4, mean 18.87, max 79 tokens
    • negative (string): min 5, mean 21.32, max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task163_count_words_ending_with_letter

  • Dataset: task163_count_words_ending_with_letter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 28, mean 32.05, max 54 tokens
    • positive (string): min 28, mean 31.69, max 57 tokens
    • negative (string): min 28, mean 31.59, max 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task579_socialiqa_classification

  • Dataset: task579_socialiqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 39, mean 54.07, max 132 tokens
    • positive (string): min 36, mean 53.57, max 103 tokens
    • negative (string): min 40, mean 54.15, max 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task569_recipe_nlg_text_generation

  • Dataset: task569_recipe_nlg_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 25, mean 193.55, max 256 tokens
    • positive (string): min 55, mean 193.45, max 256 tokens
    • negative (string): min 37, mean 197.57, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1602_webquestion_question_genreation

  • Dataset: task1602_webquestion_question_genreation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12, mean 23.56, max 112 tokens
    • positive (string): min 12, mean 24.19, max 112 tokens
    • negative (string): min 12, mean 22.42, max 120 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task747_glucose_cause_emotion_detection

  • Dataset: task747_glucose_cause_emotion_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 35, mean 67.92, max 112 tokens
    • positive (string): min 36, mean 68.12, max 108 tokens
    • negative (string): min 36, mean 68.76, max 99 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task219_rocstories_title_answer_generation

  • Dataset: task219_rocstories_title_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 42, mean 67.68, max 97 tokens
    • positive (string): min 45, mean 66.65, max 97 tokens
    • negative (string): min 41, mean 66.86, max 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task178_quartz_question_answering

  • Dataset: task178_quartz_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 28, mean 57.92, max 110 tokens
    • positive (string): min 28, mean 57.34, max 111 tokens
    • negative (string): min 28, mean 56.88, max 102 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task103_facts2story_long_text_generation

  • Dataset: task103_facts2story_long_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 52, mean 80.36, max 143 tokens
    • positive (string): min 51, mean 82.32, max 157 tokens
    • negative (string): min 49, mean 79.01, max 145 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task301_record_question_generation

  • Dataset: task301_record_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 140, mean 210.77, max 256 tokens
    • positive (string): min 139, mean 209.96, max 256 tokens
    • negative (string): min 143, mean 208.7, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1369_healthfact_sentence_generation

  • Dataset: task1369_healthfact_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 110, mean 242.84, max 256 tokens
    • positive (string): min 101, mean 242.29, max 256 tokens
    • negative (string): min 113, mean 251.55, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task515_senteval_odd_word_out

  • Dataset: task515_senteval_odd_word_out
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 19.77, max 36 tokens
    • positive (string): min 7, mean 19.13, max 38 tokens
    • negative (string): min 7, mean 19.04, max 35 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task496_semeval_answer_generation

  • Dataset: task496_semeval_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 28.15, max 46 tokens
    • positive (string): min 18, mean 27.81, max 45 tokens
    • negative (string): min 19, mean 27.71, max 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1658_billsum_summarization

  • Dataset: task1658_billsum_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 256, mean 256.0, max 256 tokens
    • positive (string): min 256, mean 256.0, max 256 tokens
    • negative (string): min 256, mean 256.0, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
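The token statistics in these tables (including the across-the-board 256 values for long-document tasks such as task1658_billsum_summarization) reflect truncation at the model's 256-token sequence limit. A small sketch of how such min/mean/max figures could be computed, assuming a hypothetical `tokenize` callable standing in for the model tokenizer:

```python
def token_stats(texts, tokenize, max_seq_length=256):
    """Min/mean/max token counts over a sample, mirroring the tables above.

    `tokenize` is a stand-in for the real tokenizer; lengths are capped at
    max_seq_length, which is why long-document datasets report 256 for
    min, mean, and max alike.
    """
    lengths = [min(len(tokenize(t)), max_seq_length) for t in texts]
    return {
        "min": min(lengths),
        "mean": round(sum(lengths) / len(lengths), 2),
        "max": max(lengths),
    }
```

With whitespace splitting as a toy tokenizer, `token_stats(["a b", "a b c d"], str.split)` yields min 2, mean 3.0, max 4, and any input longer than the cap saturates at 256.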
    

task1204_atomic_classification_hinderedby

  • Dataset: task1204_atomic_classification_hinderedby
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 21.99, max 35 tokens
    • positive (string): min 14, mean 21.95, max 34 tokens
    • negative (string): min 14, mean 21.53, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1392_superglue_multirc_answer_verification

  • Dataset: task1392_superglue_multirc_answer_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 128, mean 242.19, max 256 tokens
    • positive (string): min 127, mean 242.46, max 256 tokens
    • negative (string): min 136, mean 242.46, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task306_jeopardy_answer_generation_double

  • Dataset: task306_jeopardy_answer_generation_double
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 10, mean 27.76, max 47 tokens
    • positive (string): min 10, mean 27.16, max 46 tokens
    • negative (string): min 11, mean 27.69, max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1286_openbookqa_question_answering

  • Dataset: task1286_openbookqa_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 22, mean 39.48, max 85 tokens
    • positive (string): min 23, mean 38.88, max 96 tokens
    • negative (string): min 22, mean 38.37, max 89 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task159_check_frequency_of_words_in_sentence_pair

  • Dataset: task159_check_frequency_of_words_in_sentence_pair
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 44, mean 50.33, max 67 tokens
    • positive (string): min 44, mean 50.32, max 67 tokens
    • negative (string): min 44, mean 50.55, max 66 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task151_tomqa_find_location_easy_clean

  • Dataset: task151_tomqa_find_location_easy_clean
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 37, mean 50.73, max 79 tokens
    • positive (string): min 37, mean 50.35, max 74 tokens
    • negative (string): min 37, mean 50.49, max 74 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task323_jigsaw_classification_sexually_explicit

  • Dataset: task323_jigsaw_classification_sexually_explicit
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 66.13, max 248 tokens
    • positive (string): min 5, mean 76.82, max 248 tokens
    • negative (string): min 6, mean 75.58, max 251 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task037_qasc_generate_related_fact

  • Dataset: task037_qasc_generate_related_fact
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 13, mean 22.04, max 50 tokens
    • positive (string): min 13, mean 22.02, max 42 tokens
    • negative (string): min 13, mean 21.89, max 40 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task027_drop_answer_type_generation

  • Dataset: task027_drop_answer_type_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 87, mean 228.99, max 256 tokens
    • positive (string): min 74, mean 230.62, max 256 tokens
    • negative (string): min 71, mean 232.24, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1596_event2mind_text_generation_2

  • Dataset: task1596_event2mind_text_generation_2
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 9.94, max 18 tokens
    • positive (string): min 6, mean 10.01, max 19 tokens
    • negative (string): min 6, mean 10.02, max 18 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task141_odd-man-out_classification_category

  • Dataset: task141_odd-man-out_classification_category
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16, mean 18.43, max 28 tokens
    • positive (string): min 16, mean 18.37, max 26 tokens
    • negative (string): min 16, mean 18.47, max 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task194_duorc_answer_generation

  • Dataset: task194_duorc_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 149, mean 251.82, max 256 tokens
    • positive (string): min 147, mean 252.12, max 256 tokens
    • negative (string): min 148, mean 251.83, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task679_hope_edi_english_text_classification

  • Dataset: task679_hope_edi_english_text_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 27.83, max 199 tokens
    • positive (string): min 4, mean 27.29, max 205 tokens
    • negative (string): min 5, mean 29.94, max 194 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task246_dream_question_generation

  • Dataset: task246_dream_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 17, mean 80.17, max 256 tokens
    • positive (string): min 14, mean 81.39, max 256 tokens
    • negative (string): min 15, mean 87.22, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1195_disflqa_disfluent_to_fluent_conversion

  • Dataset: task1195_disflqa_disfluent_to_fluent_conversion
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9, mean 19.77, max 41 tokens
    • positive (string): min 9, mean 19.87, max 40 tokens
    • negative (string): min 2, mean 20.23, max 44 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task065_timetravel_consistent_sentence_classification

  • Dataset: task065_timetravel_consistent_sentence_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 55, mean 79.36, max 117 tokens
    • positive (string): min 51, mean 79.2, max 110 tokens
    • negative (string): min 53, mean 79.93, max 110 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task351_winomt_classification_gender_identifiability_anti

  • Dataset: task351_winomt_classification_gender_identifiability_anti
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16, mean 21.77, max 30 tokens
    • positive (string): min 16, mean 21.66, max 31 tokens
    • negative (string): min 16, mean 21.79, max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task580_socialiqa_answer_generation

  • Dataset: task580_socialiqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 35, mean 52.3, max 107 tokens
    • positive (string): min 35, mean 51.1, max 86 tokens
    • negative (string): min 35, mean 50.9, max 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task583_udeps_eng_coarse_pos_tagging

  • Dataset: task583_udeps_eng_coarse_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12, mean 41.2, max 185 tokens
    • positive (string): min 12, mean 40.1, max 185 tokens
    • negative (string): min 12, mean 40.88, max 185 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task202_mnli_contradiction_classification

  • Dataset: task202_mnli_contradiction_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24, mean 73.76, max 190 tokens
    • positive (string): min 28, mean 76.08, max 256 tokens
    • negative (string): min 23, mean 74.63, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task222_rocstories_two_chioce_slotting_classification

  • Dataset: task222_rocstories_two_chioce_slotting_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 48, mean 73.1, max 105 tokens
    • positive (string): min 48, mean 73.25, max 100 tokens
    • negative (string): min 49, mean 71.83, max 102 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task498_scruples_anecdotes_whoiswrong_classification

  • Dataset: task498_scruples_anecdotes_whoiswrong_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24, mean 226.49, max 256 tokens
    • positive (string): min 47, mean 232.9, max 256 tokens
    • negative (string): min 47, mean 231.94, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task067_abductivenli_answer_generation

  • Dataset: task067_abductivenli_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 26.68, max 40 tokens
    • positive (string): min 14, mean 26.09, max 42 tokens
    • negative (string): min 15, mean 26.32, max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task616_cola_classification

  • Dataset: task616_cola_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 5, mean: 12.68, max: 33 tokens
    • positive: string; min: 5, mean: 12.51, max: 33 tokens
    • negative: string; min: 6, mean: 12.4, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task286_olid_offense_judgment

  • Dataset: task286_olid_offense_judgment
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 5, mean: 32.65, max: 145 tokens
    • positive: string; min: 5, mean: 30.75, max: 171 tokens
    • negative: string; min: 5, mean: 30.25, max: 169 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task188_snli_neutral_to_entailment_text_modification

  • Dataset: task188_snli_neutral_to_entailment_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 31.75, max: 79 tokens
    • positive: string; min: 18, mean: 31.28, max: 84 tokens
    • negative: string; min: 18, mean: 32.97, max: 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task223_quartz_explanation_generation

  • Dataset: task223_quartz_explanation_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 12, mean: 31.53, max: 68 tokens
    • positive: string; min: 13, mean: 31.86, max: 68 tokens
    • negative: string; min: 13, mean: 28.89, max: 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task820_protoqa_answer_generation

  • Dataset: task820_protoqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 14.54, max: 29 tokens
    • positive: string; min: 7, mean: 14.46, max: 27 tokens
    • negative: string; min: 6, mean: 14.07, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task196_sentiment140_answer_generation

  • Dataset: task196_sentiment140_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 17, mean: 36.07, max: 72 tokens
    • positive: string; min: 17, mean: 32.8, max: 61 tokens
    • negative: string; min: 17, mean: 36.08, max: 72 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1678_mathqa_answer_selection

  • Dataset: task1678_mathqa_answer_selection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 33, mean: 70.39, max: 177 tokens
    • positive: string; min: 30, mean: 69.02, max: 146 tokens
    • negative: string; min: 33, mean: 69.69, max: 160 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task349_squad2.0_answerable_unanswerable_question_classification

  • Dataset: task349_squad2.0_answerable_unanswerable_question_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 53, mean: 175.27, max: 256 tokens
    • positive: string; min: 57, mean: 175.46, max: 256 tokens
    • negative: string; min: 53, mean: 175.28, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task154_tomqa_find_location_hard_noise

  • Dataset: task154_tomqa_find_location_hard_noise
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 129, mean: 176.58, max: 253 tokens
    • positive: string; min: 126, mean: 176.32, max: 249 tokens
    • negative: string; min: 128, mean: 178.35, max: 254 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task333_hateeval_classification_hate_en

  • Dataset: task333_hateeval_classification_hate_en
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 8, mean: 38.52, max: 117 tokens
    • positive: string; min: 7, mean: 37.28, max: 109 tokens
    • negative: string; min: 7, mean: 36.88, max: 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task235_iirc_question_from_subtext_answer_generation

  • Dataset: task235_iirc_question_from_subtext_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 52.76, max: 256 tokens
    • positive: string; min: 12, mean: 50.84, max: 256 tokens
    • negative: string; min: 12, mean: 55.7, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1554_scitail_classification

  • Dataset: task1554_scitail_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 16.81, max: 38 tokens
    • positive: string; min: 7, mean: 25.67, max: 68 tokens
    • negative: string; min: 7, mean: 24.35, max: 59 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task210_logic2text_structured_text_generation

  • Dataset: task210_logic2text_structured_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 13, mean: 31.73, max: 101 tokens
    • positive: string; min: 13, mean: 30.82, max: 94 tokens
    • negative: string; min: 12, mean: 32.82, max: 89 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task035_winogrande_question_modification_person

  • Dataset: task035_winogrande_question_modification_person
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 31, mean: 36.18, max: 50 tokens
    • positive: string; min: 31, mean: 35.78, max: 55 tokens
    • negative: string; min: 31, mean: 35.43, max: 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task230_iirc_passage_classification

  • Dataset: task230_iirc_passage_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 256, mean: 256.0, max: 256 tokens
    • positive: string; min: 256, mean: 256.0, max: 256 tokens
    • negative: string; min: 256, mean: 256.0, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1356_xlsum_title_generation

  • Dataset: task1356_xlsum_title_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 59, mean: 239.78, max: 256 tokens
    • positive: string; min: 58, mean: 241.1, max: 256 tokens
    • negative: string; min: 64, mean: 248.41, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1726_mathqa_correct_answer_generation

  • Dataset: task1726_mathqa_correct_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 10, mean: 43.66, max: 156 tokens
    • positive: string; min: 12, mean: 42.54, max: 129 tokens
    • negative: string; min: 11, mean: 42.63, max: 133 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task302_record_classification

  • Dataset: task302_record_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 194, mean: 253.52, max: 256 tokens
    • positive: string; min: 198, mean: 253.15, max: 256 tokens
    • negative: string; min: 195, mean: 252.97, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task380_boolq_yes_no_question

  • Dataset: task380_boolq_yes_no_question
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 26, mean: 133.78, max: 256 tokens
    • positive: string; min: 30, mean: 139.01, max: 256 tokens
    • negative: string; min: 27, mean: 137.64, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task212_logic2text_classification

  • Dataset: task212_logic2text_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 32.95, max: 146 tokens
    • positive: string; min: 14, mean: 31.8, max: 146 tokens
    • negative: string; min: 14, mean: 32.68, max: 127 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task748_glucose_reverse_cause_event_detection

  • Dataset: task748_glucose_reverse_cause_event_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 35, mean: 67.7, max: 105 tokens
    • positive: string; min: 38, mean: 66.98, max: 106 tokens
    • negative: string; min: 39, mean: 68.95, max: 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task834_mathdataset_classification

  • Dataset: task834_mathdataset_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 27.68, max: 83 tokens
    • positive: string; min: 6, mean: 27.92, max: 83 tokens
    • negative: string; min: 5, mean: 27.06, max: 93 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task350_winomt_classification_gender_identifiability_pro

  • Dataset: task350_winomt_classification_gender_identifiability_pro
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 16, mean: 21.83, max: 30 tokens
    • positive: string; min: 16, mean: 21.64, max: 30 tokens
    • negative: string; min: 16, mean: 21.83, max: 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task191_hotpotqa_question_generation

  • Dataset: task191_hotpotqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 198, mean: 255.88, max: 256 tokens
    • positive: string; min: 238, mean: 255.93, max: 256 tokens
    • negative: string; min: 255, mean: 256.0, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task236_iirc_question_from_passage_answer_generation

  • Dataset: task236_iirc_question_from_passage_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 135, mean: 238.65, max: 256 tokens
    • positive: string; min: 155, mean: 237.61, max: 256 tokens
    • negative: string; min: 154, mean: 239.64, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task217_rocstories_ordering_answer_generation

  • Dataset: task217_rocstories_ordering_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 45, mean: 72.44, max: 107 tokens
    • positive: string; min: 48, mean: 72.32, max: 107 tokens
    • negative: string; min: 48, mean: 71.0, max: 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task568_circa_question_generation

  • Dataset: task568_circa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 4, mean: 9.6, max: 25 tokens
    • positive: string; min: 4, mean: 9.46, max: 20 tokens
    • negative: string; min: 4, mean: 8.95, max: 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task614_glucose_cause_event_detection

  • Dataset: task614_glucose_cause_event_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 39, mean: 67.59, max: 102 tokens
    • positive: string; min: 39, mean: 67.04, max: 106 tokens
    • negative: string; min: 38, mean: 68.18, max: 103 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task361_spolin_yesand_prompt_response_classification

  • Dataset: task361_spolin_yesand_prompt_response_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 47.0, max: 137 tokens
    • positive: string; min: 17, mean: 46.18, max: 119 tokens
    • negative: string; min: 17, mean: 47.22, max: 128 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task421_persent_sentence_sentiment_classification

  • Dataset: task421_persent_sentence_sentiment_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 22, mean: 67.75, max: 256 tokens
    • positive: string; min: 22, mean: 70.9, max: 256 tokens
    • negative: string; min: 19, mean: 72.19, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task203_mnli_sentence_generation

  • Dataset: task203_mnli_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 38.63, max: 175 tokens
    • positive: string; min: 14, mean: 35.26, max: 175 tokens
    • negative: string; min: 13, mean: 34.05, max: 170 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task420_persent_document_sentiment_classification

  • Dataset: task420_persent_document_sentiment_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 22, mean: 219.85, max: 256 tokens
    • positive: string; min: 22, mean: 233.01, max: 256 tokens
    • negative: string; min: 22, mean: 227.46, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task153_tomqa_find_location_hard_clean

  • Dataset: task153_tomqa_find_location_hard_clean
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 39, mean: 161.05, max: 256 tokens
    • positive: string; min: 39, mean: 160.33, max: 256 tokens
    • negative: string; min: 39, mean: 164.23, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task346_hybridqa_classification

  • Dataset: task346_hybridqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 32.81, max: 68 tokens
    • positive: string; min: 18, mean: 31.89, max: 63 tokens
    • negative: string; min: 19, mean: 31.88, max: 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1211_atomic_classification_hassubevent

  • Dataset: task1211_atomic_classification_hassubevent
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 11, mean: 16.27, max: 31 tokens
    • positive: string; min: 11, mean: 16.06, max: 29 tokens
    • negative: string; min: 11, mean: 16.81, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task360_spolin_yesand_response_generation

  • Dataset: task360_spolin_yesand_response_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 22.6, max: 89 tokens
    • positive: string; min: 6, mean: 21.15, max: 92 tokens
    • negative: string; min: 7, mean: 20.62, max: 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task510_reddit_tifu_title_summarization

  • Dataset: task510_reddit_tifu_title_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 216.87, max: 256 tokens
    • positive: string; min: 20, mean: 218.0, max: 256 tokens
    • negative: string; min: 10, mean: 222.17, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task511_reddit_tifu_long_text_summarization

  • Dataset: task511_reddit_tifu_long_text_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 29, mean: 239.48, max: 256 tokens
    • positive: string; min: 76, mean: 239.53, max: 256 tokens
    • negative: string; min: 43, mean: 244.89, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task345_hybridqa_answer_generation

  • Dataset: task345_hybridqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 22.23, max: 50 tokens
    • positive: string; min: 10, mean: 21.72, max: 70 tokens
    • negative: string; min: 8, mean: 20.75, max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task270_csrg_counterfactual_context_generation

  • Dataset: task270_csrg_counterfactual_context_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 63, mean: 100.0, max: 158 tokens
    • positive: string; min: 63, mean: 98.69, max: 142 tokens
    • negative: string; min: 62, mean: 100.39, max: 141 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task307_jeopardy_answer_generation_final

  • Dataset: task307_jeopardy_answer_generation_final
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 15, mean: 29.58, max: 46 tokens
    • positive: string; min: 15, mean: 29.27, max: 53 tokens
    • negative: string; min: 15, mean: 29.1, max: 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task001_quoref_question_generation

  • Dataset: task001_quoref_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 201, mean: 255.03, max: 256 tokens
    • positive: string; min: 99, mean: 254.28, max: 256 tokens
    • negative: string; min: 173, mean: 255.09, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task089_swap_words_verification

  • Dataset: task089_swap_words_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 12.89, max: 28 tokens
    • positive: string; min: 9, mean: 12.66, max: 24 tokens
    • negative: string; min: 9, mean: 12.25, max: 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1196_atomic_classification_oeffect

  • Dataset: task1196_atomic_classification_oeffect
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 18.8, max: 41 tokens
    • positive: string; min: 14, mean: 18.59, max: 30 tokens
    • negative: string; min: 14, mean: 18.5, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task080_piqa_answer_generation

  • Dataset: task080_piqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 3, mean: 10.86, max: 33 tokens
    • positive: string; min: 3, mean: 10.76, max: 24 tokens
    • negative: string; min: 3, mean: 10.15, max: 26 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1598_nyc_long_text_generation

  • Dataset: task1598_nyc_long_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 17, mean: 35.54, max: 56 tokens
    • positive: string; min: 17, mean: 35.73, max: 56 tokens
    • negative: string; min: 20, mean: 36.68, max: 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task240_tweetqa_question_generation

  • Dataset: task240_tweetqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 27, mean: 51.1, max: 94 tokens
    • positive: string; min: 25, mean: 50.65, max: 92 tokens
    • negative: string; min: 20, mean: 51.51, max: 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task615_moviesqa_answer_generation

  • Dataset: task615_moviesqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 11.47, max: 23 tokens
    • positive: string; min: 7, mean: 11.46, max: 19 tokens
    • negative: string; min: 5, mean: 11.41, max: 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1347_glue_sts-b_similarity_classification

  • Dataset: task1347_glue_sts-b_similarity_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 17, mean: 31.07, max: 88 tokens
    • positive: string; min: 16, mean: 31.04, max: 92 tokens
    • negative: string; min: 16, mean: 30.88, max: 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task114_is_the_given_word_longest

  • Dataset: task114_is_the_given_word_longest
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 25, mean: 28.89, max: 68 tokens
    • positive: string; min: 25, mean: 28.46, max: 48 tokens
    • negative: string; min: 25, mean: 28.7, max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task292_storycommonsense_character_text_generation

  • Dataset: task292_storycommonsense_character_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 43, mean 67.89, max 98 tokens
    positive (string): min 46, mean 67.1, max 104 tokens
    negative (string): min 43, mean 69.01, max 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task115_help_advice_classification

  • Dataset: task115_help_advice_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 2, mean 19.96, max 91 tokens
    positive (string): min 3, mean 18.23, max 92 tokens
    negative (string): min 4, mean 19.29, max 137 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task431_senteval_object_count

  • Dataset: task431_senteval_object_count
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 16.73, max 37 tokens
    positive (string): min 7, mean 15.13, max 36 tokens
    negative (string): min 7, mean 15.8, max 35 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1360_numer_sense_multiple_choice_qa_generation

  • Dataset: task1360_numer_sense_multiple_choice_qa_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 32, mean 40.53, max 54 tokens
    positive (string): min 32, mean 40.27, max 53 tokens
    negative (string): min 32, mean 40.17, max 60 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task177_para-nmt_paraphrasing

  • Dataset: task177_para-nmt_paraphrasing
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 8, mean 19.93, max 82 tokens
    positive (string): min 9, mean 18.96, max 58 tokens
    negative (string): min 9, mean 18.21, max 36 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task132_dais_text_modification

  • Dataset: task132_dais_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 6, mean 9.31, max 15 tokens
    positive (string): min 6, mean 9.1, max 15 tokens
    negative (string): min 6, mean 10.13, max 15 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task269_csrg_counterfactual_story_generation

  • Dataset: task269_csrg_counterfactual_story_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 49, mean 79.94, max 111 tokens
    positive (string): min 53, mean 79.58, max 116 tokens
    negative (string): min 48, mean 79.43, max 114 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task233_iirc_link_exists_classification

  • Dataset: task233_iirc_link_exists_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 145, mean 236.06, max 256 tokens
    positive (string): min 142, mean 234.18, max 256 tokens
    negative (string): min 151, mean 235.38, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task161_count_words_containing_letter

  • Dataset: task161_count_words_containing_letter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 27, mean 31.0, max 53 tokens
    positive (string): min 27, mean 30.81, max 61 tokens
    negative (string): min 27, mean 30.5, max 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1205_atomic_classification_isafter

  • Dataset: task1205_atomic_classification_isafter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 14, mean 20.87, max 37 tokens
    positive (string): min 14, mean 20.68, max 35 tokens
    negative (string): min 14, mean 21.48, max 37 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task571_recipe_nlg_ner_generation

  • Dataset: task571_recipe_nlg_ner_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 5, mean 117.95, max 256 tokens
    positive (string): min 7, mean 118.72, max 256 tokens
    negative (string): min 6, mean 110.85, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1292_yelp_review_full_text_categorization

  • Dataset: task1292_yelp_review_full_text_categorization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 4, mean 137.06, max 256 tokens
    positive (string): min 7, mean 146.76, max 256 tokens
    negative (string): min 3, mean 145.5, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task428_senteval_inversion

  • Dataset: task428_senteval_inversion
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 16.59, max 32 tokens
    positive (string): min 7, mean 14.56, max 31 tokens
    negative (string): min 7, mean 15.26, max 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task311_race_question_generation

  • Dataset: task311_race_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 115, mean 254.69, max 256 tokens
    positive (string): min 137, mean 254.55, max 256 tokens
    negative (string): min 171, mean 255.44, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task429_senteval_tense

  • Dataset: task429_senteval_tense
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 15.91, max 37 tokens
    positive (string): min 6, mean 14.14, max 33 tokens
    negative (string): min 7, mean 15.3, max 36 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task403_creak_commonsense_inference

  • Dataset: task403_creak_commonsense_inference
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 13, mean 30.21, max 104 tokens
    positive (string): min 13, mean 29.49, max 108 tokens
    negative (string): min 13, mean 29.38, max 122 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task929_products_reviews_classification

  • Dataset: task929_products_reviews_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 5, mean 69.46, max 126 tokens
    positive (string): min 6, mean 70.47, max 123 tokens
    negative (string): min 6, mean 70.61, max 123 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task582_naturalquestion_answer_generation

  • Dataset: task582_naturalquestion_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 10, mean 11.69, max 25 tokens
    positive (string): min 10, mean 11.64, max 24 tokens
    negative (string): min 10, mean 11.71, max 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task237_iirc_answer_from_subtext_answer_generation

  • Dataset: task237_iirc_answer_from_subtext_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 22, mean 66.35, max 256 tokens
    positive (string): min 25, mean 65.17, max 256 tokens
    negative (string): min 23, mean 61.37, max 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task050_multirc_answerability

  • Dataset: task050_multirc_answerability
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 15, mean 32.54, max 112 tokens
    positive (string): min 14, mean 31.68, max 93 tokens
    negative (string): min 15, mean 32.17, max 159 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task184_break_generate_question

  • Dataset: task184_break_generate_question
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 13, mean 39.69, max 147 tokens
    positive (string): min 13, mean 39.22, max 149 tokens
    negative (string): min 13, mean 39.64, max 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task669_ambigqa_answer_generation

  • Dataset: task669_ambigqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 10, mean 12.93, max 23 tokens
    positive (string): min 10, mean 12.86, max 27 tokens
    negative (string): min 11, mean 12.76, max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task169_strategyqa_sentence_generation

  • Dataset: task169_strategyqa_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 19, mean 35.03, max 63 tokens
    positive (string): min 22, mean 34.25, max 60 tokens
    negative (string): min 19, mean 33.41, max 65 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task500_scruples_anecdotes_title_generation

  • Dataset: task500_scruples_anecdotes_title_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 14, mean 225.52, max 256 tokens
    positive (string): min 31, mean 233.24, max 256 tokens
    negative (string): min 27, mean 234.88, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task241_tweetqa_classification

  • Dataset: task241_tweetqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 31, mean 61.68, max 92 tokens
    positive (string): min 36, mean 62.12, max 106 tokens
    negative (string): min 31, mean 61.63, max 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1345_glue_qqp_question_paraprashing

  • Dataset: task1345_glue_qqp_question_paraprashing
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 6, mean 16.82, max 60 tokens
    positive (string): min 6, mean 15.82, max 69 tokens
    negative (string): min 6, mean 16.68, max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task218_rocstories_swap_order_answer_generation

  • Dataset: task218_rocstories_swap_order_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 48, mean 72.68, max 118 tokens
    positive (string): min 48, mean 72.45, max 102 tokens
    negative (string): min 47, mean 72.18, max 106 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task613_politifact_text_generation

  • Dataset: task613_politifact_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 4, mean 24.59, max 75 tokens
    positive (string): min 7, mean 23.45, max 56 tokens
    negative (string): min 5, mean 22.74, max 61 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1167_penn_treebank_coarse_pos_tagging

  • Dataset: task1167_penn_treebank_coarse_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 16, mean 53.7, max 200 tokens
    positive (string): min 16, mean 53.4, max 220 tokens
    negative (string): min 16, mean 54.98, max 202 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1422_mathqa_physics

  • Dataset: task1422_mathqa_physics
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 34, mean 72.49, max 164 tokens
    positive (string): min 38, mean 71.74, max 157 tokens
    negative (string): min 39, mean 72.41, max 155 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task247_dream_answer_generation

  • Dataset: task247_dream_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 38, mean 159.27, max 256 tokens
    positive (string): min 39, mean 157.53, max 256 tokens
    negative (string): min 41, mean 166.97, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task199_mnli_classification

  • Dataset: task199_mnli_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 13, mean 43.16, max 127 tokens
    positive (string): min 11, mean 44.73, max 149 tokens
    negative (string): min 11, mean 44.0, max 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task164_mcscript_question_answering_text

  • Dataset: task164_mcscript_question_answering_text
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 150, mean 199.47, max 256 tokens
    positive (string): min 150, mean 199.59, max 256 tokens
    negative (string): min 142, mean 199.69, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1541_agnews_classification

  • Dataset: task1541_agnews_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 21, mean 53.71, max 256 tokens
    positive (string): min 18, mean 53.0, max 256 tokens
    negative (string): min 18, mean 53.65, max 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task516_senteval_conjoints_inversion

  • Dataset: task516_senteval_conjoints_inversion
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 8, mean 20.14, max 34 tokens
    positive (string): min 8, mean 18.98, max 34 tokens
    negative (string): min 8, mean 18.93, max 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task294_storycommonsense_motiv_text_generation

  • Dataset: task294_storycommonsense_motiv_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 14, mean 40.52, max 86 tokens
    positive (string): min 14, mean 41.02, max 86 tokens
    negative (string): min 14, mean 40.19, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task501_scruples_anecdotes_post_type_verification

  • Dataset: task501_scruples_anecdotes_post_type_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 18, mean 231.62, max 256 tokens
    positive (string): min 12, mean 235.28, max 256 tokens
    negative (string): min 18, mean 234.52, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task213_rocstories_correct_ending_classification

  • Dataset: task213_rocstories_correct_ending_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 62, mean 86.19, max 125 tokens
    positive (string): min 60, mean 85.48, max 131 tokens
    negative (string): min 59, mean 85.92, max 131 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task821_protoqa_question_generation

  • Dataset: task821_protoqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 5, mean 14.95, max 61 tokens
    positive (string): min 5, mean 15.01, max 35 tokens
    negative (string): min 5, mean 13.96, max 93 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task493_review_polarity_classification

  • Dataset: task493_review_polarity_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 18, mean 100.45, max 256 tokens
    positive (string): min 19, mean 105.43, max 256 tokens
    negative (string): min 14, mean 111.94, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task308_jeopardy_answer_generation_all

  • Dataset: task308_jeopardy_answer_generation_all
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 12, mean 27.73, max 50 tokens
    positive (string): min 10, mean 27.01, max 44 tokens
    negative (string): min 9, mean 27.41, max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1595_event2mind_text_generation_1

  • Dataset: task1595_event2mind_text_generation_1
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 6, mean 9.87, max 18 tokens
    positive (string): min 6, mean 9.98, max 20 tokens
    negative (string): min 6, mean 10.03, max 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task040_qasc_question_generation

  • Dataset: task040_qasc_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 8, mean 15.07, max 29 tokens
    positive (string): min 7, mean 15.06, max 30 tokens
    negative (string): min 8, mean 13.89, max 32 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task231_iirc_link_classification

  • Dataset: task231_iirc_link_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 179, mean 246.32, max 256 tokens
    positive (string): min 170, mean 246.01, max 256 tokens
    negative (string): min 161, mean 246.97, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1727_wiqa_what_is_the_effect

  • Dataset: task1727_wiqa_what_is_the_effect
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 44, mean 95.72, max 183 tokens
    positive (string): min 44, mean 95.92, max 185 tokens
    negative (string): min 43, mean 96.1, max 183 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task578_curiosity_dialogs_answer_generation

  • Dataset: task578_curiosity_dialogs_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 10, mean 229.92, max 256 tokens
    positive (string): min 118, mean 235.91, max 256 tokens
    negative (string): min 12, mean 229.32, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task310_race_classification

  • Dataset: task310_race_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 101, mean 254.9, max 256 tokens
    positive (string): min 218, mean 255.78, max 256 tokens
    negative (string): min 101, mean 254.9, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task309_race_answer_generation

  • Dataset: task309_race_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 75, mean 254.96, max 256 tokens
    positive (string): min 204, mean 255.55, max 256 tokens
    negative (string): min 75, mean 255.19, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task379_agnews_topic_classification

  • Dataset: task379_agnews_topic_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 20, mean 54.92, max 193 tokens
    positive (string): min 20, mean 54.52, max 175 tokens
    negative (string): min 21, mean 54.79, max 187 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task030_winogrande_full_person

  • Dataset: task030_winogrande_full_person
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 7, mean 7.6, max 12 tokens
    positive (string): min 7, mean 7.48, max 12 tokens
    negative (string): min 7, mean 7.37, max 11 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1540_parsed_pdfs_summarization

  • Dataset: task1540_parsed_pdfs_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3 / mean 186.35 / max 256 tokens
    • positive (string): min 46 / mean 190.31 / max 256 tokens
    • negative (string): min 3 / mean 191.57 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task039_qasc_find_overlapping_words

  • Dataset: task039_qasc_find_overlapping_words
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16 / mean 30.55 / max 55 tokens
    • positive (string): min 16 / mean 30.12 / max 57 tokens
    • negative (string): min 16 / mean 30.7 / max 60 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1206_atomic_classification_isbefore

  • Dataset: task1206_atomic_classification_isbefore
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14 / mean 21.25 / max 40 tokens
    • positive (string): min 14 / mean 20.82 / max 31 tokens
    • negative (string): min 14 / mean 21.36 / max 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task157_count_vowels_and_consonants

  • Dataset: task157_count_vowels_and_consonants
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24 / mean 28.01 / max 41 tokens
    • positive (string): min 24 / mean 27.93 / max 41 tokens
    • negative (string): min 24 / mean 28.34 / max 39 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task339_record_answer_generation

  • Dataset: task339_record_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 171 / mean 234.34 / max 256 tokens
    • positive (string): min 171 / mean 233.74 / max 256 tokens
    • negative (string): min 171 / mean 231.83 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task453_swag_answer_generation

  • Dataset: task453_swag_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9 / mean 18.35 / max 60 tokens
    • positive (string): min 9 / mean 18.21 / max 63 tokens
    • negative (string): min 9 / mean 17.37 / max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task848_pubmedqa_classification

  • Dataset: task848_pubmedqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 21 / mean 248.87 / max 256 tokens
    • positive (string): min 21 / mean 249.9 / max 256 tokens
    • negative (string): min 84 / mean 251.62 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task673_google_wellformed_query_classification

  • Dataset: task673_google_wellformed_query_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 11.6 / max 27 tokens
    • positive (string): min 6 / mean 11.2 / max 24 tokens
    • negative (string): min 6 / mean 11.34 / max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task676_ollie_relationship_answer_generation

  • Dataset: task676_ollie_relationship_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 29 / mean 51.48 / max 113 tokens
    • positive (string): min 29 / mean 49.36 / max 134 tokens
    • negative (string): min 30 / mean 51.78 / max 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task268_casehold_legal_answer_generation

  • Dataset: task268_casehold_legal_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 235 / mean 255.94 / max 256 tokens
    • positive (string): min 156 / mean 255.47 / max 256 tokens
    • negative (string): min 226 / mean 255.94 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task844_financial_phrasebank_classification

  • Dataset: task844_financial_phrasebank_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14 / mean 39.79 / max 86 tokens
    • positive (string): min 13 / mean 38.32 / max 78 tokens
    • negative (string): min 15 / mean 38.9 / max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task330_gap_answer_generation

  • Dataset: task330_gap_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 26 / mean 106.94 / max 256 tokens
    • positive (string): min 44 / mean 107.99 / max 256 tokens
    • negative (string): min 45 / mean 110.9 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task595_mocha_answer_generation

  • Dataset: task595_mocha_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 44 / mean 94.34 / max 178 tokens
    • positive (string): min 21 / mean 96.76 / max 256 tokens
    • negative (string): min 19 / mean 118.67 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1285_kpa_keypoint_matching

  • Dataset: task1285_kpa_keypoint_matching
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 30 / mean 52.28 / max 92 tokens
    • positive (string): min 29 / mean 50.11 / max 84 tokens
    • negative (string): min 31 / mean 53.14 / max 88 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task234_iirc_passage_line_answer_generation

  • Dataset: task234_iirc_passage_line_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 143 / mean 235.5 / max 256 tokens
    • positive (string): min 155 / mean 235.46 / max 256 tokens
    • negative (string): min 146 / mean 236.58 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task494_review_polarity_answer_generation

  • Dataset: task494_review_polarity_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3 / mean 106.22 / max 256 tokens
    • positive (string): min 23 / mean 112.48 / max 256 tokens
    • negative (string): min 20 / mean 112.83 / max 249 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task670_ambigqa_question_generation

  • Dataset: task670_ambigqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 11 / mean 12.71 / max 26 tokens
    • positive (string): min 11 / mean 12.5 / max 23 tokens
    • negative (string): min 11 / mean 12.26 / max 18 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task289_gigaword_summarization

  • Dataset: task289_gigaword_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 25 / mean 51.49 / max 87 tokens
    • positive (string): min 27 / mean 51.94 / max 87 tokens
    • negative (string): min 25 / mean 51.41 / max 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

npr

  • Dataset: npr
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 12.44 / max 28 tokens
    • positive (string): min 16 / mean 149.63 / max 256 tokens
    • negative (string): min 11 / mean 112.81 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

nli

  • Dataset: nli
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 21.04 / max 120 tokens
    • positive (string): min 4 / mean 11.95 / max 45 tokens
    • negative (string): min 4 / mean 12.04 / max 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

SimpleWiki

  • Dataset: SimpleWiki
  • Size: 5,070 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8 / mean 28.83 / max 115 tokens
    • positive (string): min 8 / mean 33.2 / max 158 tokens
    • negative (string): min 9 / mean 55.53 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

amazon_review_2018

  • Dataset: amazon_review_2018
  • Size: 99,352 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 11.48 / max 33 tokens
    • positive (string): min 12 / mean 89.18 / max 256 tokens
    • negative (string): min 12 / mean 73.14 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

ccnews_title_text

  • Dataset: ccnews_title_text
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 15.27 / max 53 tokens
    • positive (string): min 21 / mean 212.75 / max 256 tokens
    • negative (string): min 21 / mean 194.35 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

agnews

  • Dataset: agnews
  • Size: 44,606 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 11.72 / max 33 tokens
    • positive (string): min 10 / mean 39.99 / max 256 tokens
    • negative (string): min 10 / mean 46.2 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

xsum

  • Dataset: xsum
  • Size: 10,140 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 / mean 28.21 / max 86 tokens
    • positive (string): min 15 / mean 225.81 / max 256 tokens
    • negative (string): min 2 / mean 231.78 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

msmarco

  • Dataset: msmarco
  • Size: 173,354 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 9.11 / max 42 tokens
    • positive (string): min 15 / mean 81.24 / max 197 tokens
    • negative (string): min 19 / mean 79.32 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

yahoo_answers_title_answer

  • Dataset: yahoo_answers_title_answer
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 17.38 / max 109 tokens
    • positive (string): min 5 / mean 81.62 / max 256 tokens
    • negative (string): min 10 / mean 86.67 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

squad_pairs

  • Dataset: squad_pairs
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 14.62 / max 43 tokens
    • positive (string): min 31 / mean 153.71 / max 256 tokens
    • negative (string): min 27 / mean 162.39 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

wow

  • Dataset: wow
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 88.78 / max 256 tokens
    • positive (string): min 100 / mean 111.42 / max 149 tokens
    • negative (string): min 70 / mean 113.19 / max 159 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_counterfactual-avs_triplets

  • Dataset: mteb-amazon_counterfactual-avs_triplets
  • Size: 4,055 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 27.46 / max 108 tokens
    • positive (string): min 12 / mean 27.21 / max 137 tokens
    • negative (string): min 12 / mean 27.22 / max 137 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_massive_intent-avs_triplets

  • Dataset: mteb-amazon_massive_intent-avs_triplets
  • Size: 11,661 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3 / mean 9.54 / max 27 tokens
    • positive (string): min 3 / mean 9.15 / max 24 tokens
    • negative (string): min 3 / mean 9.35 / max 26 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_massive_scenario-avs_triplets

  • Dataset: mteb-amazon_massive_scenario-avs_triplets
  • Size: 11,661 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3 / mean 9.53 / max 32 tokens
    • positive (string): min 3 / mean 9.12 / max 30 tokens
    • negative (string): min 3 / mean 9.44 / max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_reviews_multi-avs_triplets

  • Dataset: mteb-amazon_reviews_multi-avs_triplets
  • Size: 198,192 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 / mean 50.41 / max 256 tokens
    • positive (string): min 6 / mean 49.84 / max 256 tokens
    • negative (string): min 8 / mean 48.57 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-banking77-avs_triplets

  • Dataset: mteb-banking77-avs_triplets
  • Size: 10,139 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 15.81 / max 66 tokens
    • positive (string): min 5 / mean 15.35 / max 65 tokens
    • negative (string): min 4 / mean 16.29 / max 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-emotion-avs_triplets

  • Dataset: mteb-emotion-avs_triplets
  • Size: 16,224 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 22.17 / max 64 tokens
    • positive (string): min 5 / mean 17.85 / max 65 tokens
    • negative (string): min 5 / mean 22.06 / max 72 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-imdb-avs_triplets

  • Dataset: mteb-imdb-avs_triplets
  • Size: 24,839 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24 / mean 208.39 / max 256 tokens
    • positive (string): min 48 / mean 222.82 / max 256 tokens
    • negative (string): min 24 / mean 208.71 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-mtop_domain-avs_triplets

  • Dataset: mteb-mtop_domain-avs_triplets
  • Size: 15,715 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 10.23 / max 25 tokens
    • positive (string): min 4 / mean 9.62 / max 26 tokens
    • negative (string): min 3 / mean 10.35 / max 26 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-mtop_intent-avs_triplets

  • Dataset: mteb-mtop_intent-avs_triplets
  • Size: 15,715 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 10.13 / max 27 tokens
    • positive (string): min 3 / mean 9.45 / max 35 tokens
    • negative (string): min 3 / mean 10.03 / max 26 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-toxic_conversations_50k-avs_triplets

  • Dataset: mteb-toxic_conversations_50k-avs_triplets
  • Size: 49,677 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 69.68 / max 256 tokens
    • positive (string): min 3 / mean 92.84 / max 245 tokens
    • negative (string): min 3 / mean 66.07 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-tweet_sentiment_extraction-avs_triplets

  • Dataset: mteb-tweet_sentiment_extraction-avs_triplets
  • Size: 27,373 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3 / mean 20.99 / max 95 tokens
    • positive (string): min 2 / mean 20.31 / max 67 tokens
    • negative (string): min 3 / mean 20.83 / max 64 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

covid-bing-query-gpt4-avs_triplets

  • Dataset: covid-bing-query-gpt4-avs_triplets
  • Size: 5,070 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 / mean 15.3 / max 38 tokens
    • positive (string): min 16 / mean 37.38 / max 239 tokens
    • negative (string): min 16 / mean 38.55 / max 167 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Evaluation Dataset

Unnamed Dataset

  • Size: 18,269 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 15.71 / max 61 tokens
    • positive (string): min 5 / mean 142.42 / max 256 tokens
    • negative (string): min 6 / mean 144.64 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
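
Every dataset above is trained with MultipleNegativesRankingLoss at scale 20.0 over cosine similarity. As an illustration of what that loss computes (a minimal NumPy sketch, not the library's implementation; `mnr_loss` is a hypothetical helper name):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives ranking loss: for each anchor, its own positive
    is the correct class and every other positive in the batch acts as a
    negative; cross-entropy is taken over scaled cosine similarities."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sim = scale * (a @ p.T)                      # (batch, batch) scaled cosine sims
    sim = sim - sim.max(axis=1, keepdims=True)   # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))    # matching pair sits on the diagonal
```

With the (anchor, positive, negative) triplets used here, the explicit negative column is additionally scored as a hard negative, so each anchor is ranked against roughly twice as many candidates.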
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • learning_rate: 5.656854249492381e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates
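
The non-default values above map onto the sentence-transformers v3 training arguments roughly as follows (a sketch, assuming the v3 trainer API; `output_dir` is a placeholder):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

# Values mirror the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=5.656854249492381e-05,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```

The no-duplicates batch sampler matters for this loss: duplicate anchors or positives in a batch would make in-batch negatives false negatives.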

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5.656854249492381e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | Training Loss | Validation Loss | medi-mteb-dev_cosine_accuracy |
|:------:|:-----:|:-------------:|:---------------:|:-----------------------------:|
| 0 | 0 | - | - | 0.8394 |
| 0.1308 | 500 | 2.671 | 1.1796 | 0.8794 |
| 0.2616 | 1000 | 1.9941 | 1.1051 | 0.8880 |
| 0.3925 | 1500 | 2.0147 | 1.0550 | 0.8926 |
| 0.5233 | 2000 | 1.7696 | 1.0167 | 0.8948 |
| 0.6541 | 2500 | 1.892 | 0.9942 | 0.8973 |
| 0.7849 | 3000 | 1.7924 | 1.0039 | 0.9000 |
| 0.9158 | 3500 | 1.8434 | 1.0105 | 0.8958 |
| 1.0466 | 4000 | 1.7597 | 0.9599 | 0.9011 |
| 1.1774 | 4500 | 1.8684 | 1.0748 | 0.9027 |
| 1.3082 | 5000 | 1.692 | 0.9666 | 0.9032 |
| 1.4390 | 5500 | 1.7115 | 1.0497 | 0.9031 |
| 1.5699 | 6000 | 1.6607 | 1.0262 | 0.9040 |
| 1.7007 | 6500 | 1.6804 | 0.9984 | 0.9052 |
| 1.8315 | 7000 | 1.6108 | 0.9315 | 0.9048 |
| 1.9623 | 7500 | 1.5806 | 1.0537 | 0.9062 |
| 2.0931 | 8000 | 1.6489 | 1.0271 | 0.9075 |
| 2.2240 | 8500 | 1.5841 | 1.1238 | 0.9078 |
| 2.3548 | 9000 | 1.6315 | 1.0886 | 0.9069 |
| 2.4856 | 9500 | 1.4484 | 1.0287 | 0.9079 |
| 2.6164 | 10000 | 1.5661 | 1.1722 | 0.9095 |
| 2.7473 | 10500 | 1.4791 | 1.0988 | 0.9090 |
| 2.8781 | 11000 | 1.5247 | 1.0828 | 0.9100 |
| 3.0089 | 11500 | 1.4124 | 1.0981 | 0.9096 |
| 3.1397 | 12000 | 1.569 | 1.0372 | 0.9111 |
| 3.2705 | 12500 | 1.4468 | 0.9301 | 0.9106 |
| 3.4014 | 13000 | 1.5556 | 1.0313 | 0.9118 |
| 3.5322 | 13500 | 1.346 | 1.0433 | 0.9078 |
| 3.6630 | 14000 | 1.4514 | 0.9846 | 0.9101 |
| 3.7938 | 14500 | 1.3815 | 1.1034 | 0.9131 |
| 3.9246 | 15000 | 1.4323 | 1.0120 | 0.9103 |
| 4.0555 | 15500 | 1.3485 | 0.9873 | 0.9117 |
| 4.1863 | 16000 | 1.4595 | 1.0307 | 0.9103 |
| 4.3171 | 16500 | 1.3718 | 1.1036 | 0.9134 |
| 4.4479 | 17000 | 1.3685 | 1.0405 | 0.9102 |
| 4.5788 | 17500 | 1.3662 | 1.0109 | 0.9112 |
| 4.7096 | 18000 | 1.3363 | 1.0407 | 0.9130 |
| 4.8404 | 18500 | 1.3321 | 1.0848 | 0.9123 |
| 4.9712 | 19000 | 1.3313 | 1.0468 | 0.9130 |
| 5.1020 | 19500 | 1.3656 | 0.9708 | 0.9121 |
| 5.2329 | 20000 | 1.3311 | 1.0208 | 0.9148 |
| 5.3637 | 20500 | 1.403 | 1.0025 | 0.9115 |
| 5.4945 | 21000 | 1.2109 | 1.0739 | 0.9131 |
| 5.6253 | 21500 | 1.3038 | 1.1280 | 0.9120 |
| 5.7561 | 22000 | 1.2577 | 1.0245 | 0.9131 |
| 5.8870 | 22500 | 1.3112 | 0.9378 | 0.9149 |
| 6.0178 | 23000 | 1.2141 | 1.0292 | 0.9126 |
| 6.1486 | 23500 | 1.3696 | 1.1213 | 0.9141 |
| 6.2794 | 24000 | 1.2436 | 0.9875 | 0.9141 |
| 6.4103 | 24500 | 1.3514 | 1.0064 | 0.9146 |
| 6.5411 | 25000 | 1.1827 | 1.0174 | 0.9117 |
| 6.6719 | 25500 | 1.2619 | 1.0304 | 0.9120 |
| 6.8027 | 26000 | 1.1997 | 1.0499 | 0.9149 |
| 6.9335 | 26500 | 1.2609 | 1.0160 | 0.9141 |
| 7.0644 | 27000 | 1.2065 | 1.0216 | 0.9140 |
| 7.1952 | 27500 | 1.2802 | 1.0620 | 0.9135 |
| 7.3260 | 28000 | 1.2501 | 1.0798 | 0.9155 |
| 7.4568 | 28500 | 1.201 | 1.0196 | 0.9142 |
| 7.5877 | 29000 | 1.2249 | 1.0325 | 0.9143 |
| 7.7185 | 29500 | 1.1867 | 1.0195 | 0.9130 |
| 7.8493 | 30000 | 1.1917 | 1.0016 | 0.9137 |
| 7.9801 | 30500 | 1.194 | 1.0858 | 0.9156 |
| 8.1109 | 31000 | 1.2351 | 0.9960 | 0.9144 |
| 8.2418 | 31500 | 1.1834 | 1.0464 | 0.9161 |
| 8.3726 | 32000 | 1.3046 | 1.0395 | 0.9145 |
| 8.5034 | 32500 | 1.106 | 1.0235 | 0.9140 |
| 8.6342 | 33000 | 1.1845 | 1.0615 | 0.9134 |
| 8.7650 | 33500 | 1.1372 | 1.0205 | 0.9146 |
| 8.8959 | 34000 | 1.2218 | 0.9796 | 0.9148 |
| 9.0267 | 34500 | 1.0983 | 1.0065 | 0.9147 |
| 9.1575 | 35000 | 1.2656 | 1.0339 | 0.9154 |
| 9.2883 | 35500 | 1.1522 | 1.0168 | 0.9154 |
| 9.4192 | 36000 | 1.2407 | 1.0145 | 0.9150 |
| 9.5500 | 36500 | 1.1091 | 1.0321 | 0.9150 |
| 9.6808 | 37000 | 1.1689 | 1.0270 | 0.9145 |
| 9.8116 | 37500 | 1.1116 | 1.0237 | 0.9148 |
| 9.9424 | 38000 | 1.1824 | 1.0135 | 0.9145 |
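The `medi-mteb-dev_cosine_accuracy` column is a triplet metric: the fraction of (anchor, positive, negative) triplets in which the anchor embedding is closer, by cosine similarity, to the positive than to the negative. A minimal NumPy sketch of the idea, using toy 3-dimensional vectors (illustrative only, not the library's evaluator):

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between corresponding rows of a and b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

def triplet_cosine_accuracy(anchors, positives, negatives):
    # A triplet counts as correct when the anchor is more similar
    # (by cosine) to its positive than to its negative.
    pos = cosine_sim(anchors, positives)
    neg = cosine_sim(anchors, negatives)
    return float((pos > neg).mean())

# Toy "embeddings": each anchor points roughly at its positive.
anchors   = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
positives = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]])
negatives = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(triplet_cosine_accuracy(anchors, positives, negatives))  # → 1.0
```

In practice the three inputs would be the model's encodings of the dev triplets; a value of 0.9145 means roughly 91% of dev triplets rank the positive above the negative.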

Framework Versions

  • Python: 3.10.10
  • Sentence Transformers: 3.4.0.dev0
  • Transformers: 4.46.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 0.34.2
  • Datasets: 2.21.0
  • Tokenizers: 0.20.3

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
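MultipleNegativesRankingLoss treats, for each anchor in a batch, every other example's positive as an in-batch negative, and applies cross-entropy over scaled cosine similarities. A minimal NumPy sketch of that computation (illustrative; `scale=20.0` mirrors the sentence-transformers default, and the real implementation runs on torch tensors with gradients):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    # Normalize rows so the dot product below is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # scores[i, j] = scaled similarity of anchor i to positive j;
    # every j != i acts as an in-batch negative for anchor i.
    scores = scale * (a @ p.T)
    # Cross-entropy with the matching positive (the diagonal) as the label,
    # computed via a numerically stable log-softmax.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())

# When each anchor matches its own positive exactly, the loss is near zero;
# mismatched pairs drive it up.
print(mnr_loss(np.eye(2), np.eye(2)))        # near 0
print(mnr_loss(np.eye(2), np.eye(2)[::-1]))  # large
```

Larger batches make the objective harder (more in-batch negatives per anchor), which is why this loss is typically trained with sizable batch sizes.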
Model size: 22.7M parameters (Safetensors, F32)

Model tree for avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-trainable-512-final


Evaluation results