---
title: RougeRaw
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
  - evaluate
  - metric
description: >-
  ROUGE RAW is a language-agnostic variant of ROUGE without a stemmer, stop
  words, or synonyms. This is a wrapper around the original
  http://hdl.handle.net/11234/1-2615 script.
---

# Metric Card for RougeRaw

## Metric Description

ROUGE RAW is a language-agnostic variant of ROUGE without a stemmer, stop words, or synonyms. This is a wrapper around the original http://hdl.handle.net/11234/1-2615 script.

## How to Use

```python
import evaluate

rougeraw = evaluate.load('CZLC/rouge_raw')
predictions = ["the cat is on the mat", "hello there"]
references = ["the cat is on the mat", "hello there"]
results = rougeraw.compute(predictions=predictions, references=references)
print(results)
# {'rougeraw1_precision': 1.0, 'rougeraw1_recall': 1.0, 'rougeraw1_fmeasure': 1.0, 'rougeraw2_precision': 1.0, 'rougeraw2_recall': 1.0, 'rougeraw2_fmeasure': 1.0, 'rougerawl_precision': 1.0, 'rougerawl_recall': 1.0, 'rougerawl_fmeasure': 1.0}
```

## Inputs

- `predictions`: a list of predictions to evaluate. Each prediction should be a string with tokens separated by spaces.
- `references`: a list of references, one per prediction. Each reference should be a string with tokens separated by spaces.
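
For example, a minimal sketch of preparing pre-tokenized, space-separated inputs (the whitespace-based token lists below are only an illustration; use whatever tokenizer fits your data before joining tokens with single spaces):

```python
import evaluate

rougeraw = evaluate.load('CZLC/rouge_raw')

# Tokens must be joined by single spaces; the token lists here are just
# an example, any tokenizer can be used before re-joining with " ".
pred_tokens = ["the", "cat", "sat", "on", "the", "mat"]
ref_tokens = ["a", "cat", "was", "sitting", "on", "the", "mat"]

predictions = [" ".join(pred_tokens)]
references = [" ".join(ref_tokens)]

results = rougeraw.compute(predictions=predictions, references=references)
print(results["rougeraw1_fmeasure"])
```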

## Output Values

- `rougeraw1_precision`: unigram precision
- `rougeraw1_recall`: unigram recall
- `rougeraw1_fmeasure`: unigram F-measure
- `rougeraw2_precision`: bigram precision
- `rougeraw2_recall`: bigram recall
- `rougeraw2_fmeasure`: bigram F-measure
- `rougerawl_precision`: longest-common-subsequence precision
- `rougerawl_recall`: longest-common-subsequence recall
- `rougerawl_fmeasure`: longest-common-subsequence F-measure

Output Example(s):

```python
{'rougeraw1_precision': 1.0, 'rougeraw1_recall': 1.0, 'rougeraw1_fmeasure': 1.0, 'rougeraw2_precision': 1.0, 'rougeraw2_recall': 1.0, 'rougeraw2_fmeasure': 1.0, 'rougerawl_precision': 1.0, 'rougerawl_recall': 1.0, 'rougerawl_fmeasure': 1.0}
```

This metric outputs a dictionary containing the scores.
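
A minimal sketch of reading individual scores from the returned dictionary (the key names are those listed under Output Values above):

```python
import evaluate

rougeraw = evaluate.load('CZLC/rouge_raw')
results = rougeraw.compute(
    predictions=["the cat is on the mat", "hello there"],
    references=["the cat is on the mat", "hello there"],
)

# Each value is a plain float; keys follow the "<variant>_<measure>" pattern.
for key in ("rougeraw1_fmeasure", "rougeraw2_fmeasure", "rougerawl_fmeasure"):
    print(f"{key}: {results[key]:.4f}")
```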

## Citation(s)

```bibtex
@inproceedings{straka-etal-2018-sumeczech,
    title = "{S}ume{C}zech: Large {C}zech News-Based Summarization Dataset",
    author = "Straka, Milan  and
      Mediankin, Nikita  and
      Kocmi, Tom  and
      {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k  and
      Hude{\v{c}}ek, Vojt{\v{e}}ch  and
      Haji{\v{c}}, Jan",
    editor = "Calzolari, Nicoletta  and
      Choukri, Khalid  and
      Cieri, Christopher  and
      Declerck, Thierry  and
      Goggi, Sara  and
      Hasida, Koiti  and
      Isahara, Hitoshi  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Mazo, H{\'e}l{\`e}ne  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios  and
      Tokunaga, Takenobu",
    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
    month = may,
    year = "2018",
    address = "Miyazaki, Japan",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L18-1551",
}
```