---
title: Table Markdown Metrics
emoji: π
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.29.0
app_file: app.py
pinned: false
tags:
- evaluate
- metric
- table
- markdown
description: >-
Table evaluation metrics for assessing the matching degree between predicted
and reference tables. It calculates precision, recall, and F1 score for table
data extraction or generation tasks.
---
# Metric Card for Table Markdown Metrics
## Metric Description
This metric evaluates the accuracy of table data extraction or generation by comparing predicted tables with reference tables. It calculates:
1. Precision: The ratio of correctly predicted cells to the total number of cells in the predicted table
2. Recall: The ratio of correctly predicted cells to the total number of cells in the reference table
3. F1 Score: The harmonic mean of precision and recall (see the sketch below)
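For reference, a minimal sketch of how the three scores follow from the cell-level counts; the `scores` helper below is illustrative, not part of the metric's API:
```python
# Illustrative only: how precision, recall, and F1 relate to the
# cell-level counts reported by the metric. Assumes tp = matching cells,
# fp = cells only in the prediction, fn = cells only in the reference.
def scores(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(scores(tp=7, fp=3, fn=3))
# all three scores come out to 0.7, matching the first example below
```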
## How to Use
This metric takes two inputs, `predictions` and `references`, each a string containing a table in Markdown format.
```python
>>> import evaluate
>>> table_metric = evaluate.load("table_markdown")
>>> results = table_metric.compute(
... predictions="| | lobby | search | band | charge | chain |\n|--|--|--|--|--|--|\n| desire | 5 | 8 | 7 | 5 | 9 |\n| wage | 1 | 5 | 3 | 8 | 5 |",
... references="| | lobby | search | band | charge | chain |\n|--|--|--|--|--|--|\n| desire | 1 | 6 | 7 | 5 | 9 |\n| wage | 1 | 5 | 2 | 8 | 5 |"
... )
>>> print(results)
{'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'true_positives': 7, 'false_positives': 3, 'false_negatives': 3}
```
### Inputs
- **predictions** (`str`): Predicted table in Markdown format.
- **references** (`str`): Reference table in Markdown format. Both strings can be built with a helper like the sketch below.
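Since both inputs are plain strings, a small builder can help keep them well-formed; `to_markdown` below is a hypothetical convenience, not part of the metric:
```python
# Hypothetical helper: serialize a header and rows into the Markdown
# table string that the metric expects. Not part of the metric's API.
def to_markdown(header, rows):
    lines = ["| " + " | ".join(header) + " |",
             "|" + "--|" * len(header)]
    lines += ["| " + " | ".join(str(cell) for cell in row) + " |" for row in rows]
    return "\n".join(lines)

prediction = to_markdown(
    ["", "lobby", "search", "band"],
    [["desire", 5, 8, 7], ["wage", 1, 5, 3]],
)
```
The resulting string can be passed directly as `predictions` or `references`.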
### Output Values
- **precision** (`float`): Precision score. Range: [0,1]
- **recall** (`float`): Recall score. Range: [0,1]
- **f1** (`float`): F1 score. Range: [0,1]
- **true_positives** (`int`): Number of correctly predicted cells
- **false_positives** (`int`): Number of incorrectly predicted cells
- **false_negatives** (`int`): Number of cells that were not predicted
### Examples
Example - Multi-line tables with a perfect match:
```python
>>> import evaluate
>>> table_metric = evaluate.load("table_markdown")
>>> results = table_metric.compute(
... predictions="""
... | | lobby | search | band |
... |--|-------|--------|------|
... | desire | 5 | 8 | 7 |
... | wage | 1 | 5 | 3 |
... """,
... references="""
... | | lobby | search | band |
... |--|-------|--------|------|
... | desire | 5 | 8 | 7 |
... | wage | 1 | 5 | 3 |
... """
... )
>>> print(results)
{'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'true_positives': 6, 'false_positives': 0, 'false_negatives': 0}
```
## Limitations and Bias
1. The metric assumes that input tables are well-formed Markdown
2. The comparison is case-sensitive (a normalization workaround is sketched after this list)
3. The metric does not handle merged cells or complex table structures
4. The metric treats each cell as a separate unit and does not consider the semantic meaning of the content
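If case differences should not count as errors, the inputs can be normalized before scoring; the `normalize` pre-processing below is a sketch of such a workaround, not a built-in feature of the metric:
```python
import evaluate

def normalize(table_md: str) -> str:
    # Lowercase and trim each line so purely cosmetic differences
    # (case, stray surrounding whitespace) do not count as cell errors.
    return "\n".join(line.strip().lower() for line in table_md.splitlines())

table_metric = evaluate.load("table_markdown")
results = table_metric.compute(
    predictions=normalize("| A | B |\n|--|--|\n| Foo | 1 |"),
    references=normalize("| a | b |\n|--|--|\n| foo | 1 |"),
)
```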
## Citation(s)
```bibtex
@article{ChineseChartExtractor,
  title={Research on Chinese Chart Data Extraction Methods},
  author={Qiuping Ma and Hangshuo Bi and Qi Zhang and Xiaofan Zhao},
  year={2025}
}
```
## Further References
- [Markdown Tables](https://www.markdownguide.org/extended-syntax/#tables)
- [Table Structure Recognition](https://paperswithcode.com/task/table-structure-recognition)
- [Table Extraction](https://paperswithcode.com/task/table-extraction) |