Delete README.md

README.md (deleted)
---
title: Table Markdown Metrics
emoji: 📊
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.29.0
app_file: app.py
pinned: false
tags:
- evaluate
- metric
- table
- markdown
description: >-
  Table evaluation metrics for assessing how closely predicted tables match
  reference tables. It calculates precision, recall, and F1 score for table
  data extraction or generation tasks.
---

# Metric Card for Table Markdown Metrics

## Metric Description

This metric evaluates the accuracy of table data extraction or generation by comparing predicted tables with reference tables. It calculates:

1. Precision: the ratio of correctly predicted cells to the total number of cells in the predicted table
2. Recall: the ratio of correctly predicted cells to the total number of cells in the reference table
3. F1 score: the harmonic mean of precision and recall
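
The Space's implementation is not shown in this card, but the way the three scores follow from the cell counts is standard. The sketch below uses a hypothetical `cell_scores` helper that compares cells positionally; it reproduces the arithmetic, not necessarily the app's exact matching logic.

```python
def cell_scores(pred_cells, ref_cells):
    """Score two lists of table cells by positional, exact-match comparison."""
    # A predicted cell is a true positive when it equals the reference cell
    # at the same position.
    tp = sum(p == r for p, r in zip(pred_cells, ref_cells))
    fp = len(pred_cells) - tp  # predicted cells with no correct match
    fn = len(ref_cells) - tp   # reference cells that were missed
    precision = tp / len(pred_cells) if pred_cells else 0.0
    recall = tp / len(ref_cells) if ref_cells else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1,
            "true_positives": tp, "false_positives": fp, "false_negatives": fn}

# Reproduces the quick-start example below: data cells "1","2" vs "1","3".
print(cell_scores(["1", "2"], ["1", "3"]))
# {'precision': 0.5, 'recall': 0.5, 'f1': 0.5,
#  'true_positives': 1, 'false_positives': 1, 'false_negatives': 1}
```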

## How to Use

This metric requires predictions and references as inputs in Markdown table format.

```python
>>> import evaluate
>>> table_metric = evaluate.load("table_markdown")
>>> results = table_metric.compute(
...     predictions="|A|B|\n|1|2|",
...     references="|A|B|\n|1|3|"
... )
>>> print(results)
{'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'true_positives': 1, 'false_positives': 1, 'false_negatives': 1}
```

### Inputs

- **predictions** (`str`): Predicted table in Markdown format.
- **references** (`str`): Reference table in Markdown format.
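
Both inputs are plain Markdown strings, so the metric must split them into rows and cells before scoring. The sketch below shows one plausible way to do that; `parse_markdown_table` is a hypothetical helper, and the Space's actual parser may differ (for example, in how it treats header rows or row labels).

```python
import re

def parse_markdown_table(table: str) -> list[list[str]]:
    """Split a Markdown table string into a list of rows of cell strings."""
    rows = []
    for line in table.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # ignore non-table lines
        if re.fullmatch(r"\|[\s:|-]+\|", line):
            continue  # skip the header/body separator row, e.g. |--|--|
        # Drop the empty ends produced by the outer pipes, then strip cells.
        rows.append([cell.strip() for cell in line.split("|")[1:-1]])
    return rows

print(parse_markdown_table("|A|B|\n|1|2|"))  # [['A', 'B'], ['1', '2']]
```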

### Output Values

- **precision** (`float`): Precision score. Range: [0, 1].
- **recall** (`float`): Recall score. Range: [0, 1].
- **f1** (`float`): F1 score. Range: [0, 1].
- **true_positives** (`int`): Number of correctly predicted cells.
- **false_positives** (`int`): Number of incorrectly predicted cells.
- **false_negatives** (`int`): Number of reference cells that were not predicted.

### Examples

Example 1 - Simple table comparison:
```python
>>> import evaluate
>>> table_metric = evaluate.load("table_markdown")
>>> results = table_metric.compute(
...     predictions="| | lobby | search | band | charge | chain |\n|--|--|--|--|--|--|\n| desire | 5 | 8 | 7 | 5 | 9 |\n| wage | 1 | 5 | 3 | 8 | 5 |",
...     references="| | lobby | search | band | charge | chain |\n|--|--|--|--|--|--|\n| desire | 1 | 6 | 7 | 5 | 9 |\n| wage | 1 | 5 | 2 | 8 | 5 |"
... )
>>> print(results)
{'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'true_positives': 7, 'false_positives': 3, 'false_negatives': 3}
```

Of the ten value cells here, seven match; the reported counts suggest that the header row and the row-label column are not scored.

Example 2 - Complex table comparison:
```python
>>> import evaluate
>>> table_metric = evaluate.load("table_markdown")
>>> results = table_metric.compute(
...     predictions="""
... | | lobby | search | band |
... |--|-------|--------|------|
... | desire | 5 | 8 | 7 |
... | wage | 1 | 5 | 3 |
... """,
...     references="""
... | | lobby | search | band |
... |--|-------|--------|------|
... | desire | 5 | 8 | 7 |
... | wage | 1 | 5 | 3 |
... """
... )
>>> print(results)
{'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'true_positives': 6, 'false_positives': 0, 'false_negatives': 0}
```

## Limitations and Bias

1. The metric assumes that tables are well-formed Markdown.
2. The comparison is case-sensitive (see the sketch below).
3. The metric does not handle merged cells or complex table structures.
4. The metric treats each cell as a separate unit and does not consider the semantic meaning of its content.
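
A hypothetical illustration of limitations 2 and 4: since cells are compared by exact string equality, case or formatting differences count as mismatches even when the content is semantically the same.

```python
# Exact string equality, as the cell comparison is assumed to use:
pairs = [("Apple", "apple"),  # case difference -> no match
         ("1,000", "1000"),   # same value, different formatting -> no match
         ("7", "7")]          # exact match
print([pred == ref for pred, ref in pairs])  # [False, False, True]
```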

## Citation(s)

```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```

## Further References

- [Markdown Tables](https://www.markdownguide.org/extended-syntax/#tables)
- [Table Structure Recognition](https://paperswithcode.com/task/table-structure-recognition)
- [Table Extraction](https://paperswithcode.com/task/table-extraction)