sasha (HF Staff) committed · Commit 953be11 · 1 Parent(s): 047b7a9

Update README.md

Files changed (1): README.md (+14 -3)
@@ -1,10 +1,21 @@
  ---
  title: README
- emoji: 🐨
  colorFrom: green
- colorTo: indigo
  sdk: static
  pinned: false
  ---

- Edit this `README.md` markdown file to author your organization card 🔥
 
  ---
  title: README
+ emoji: 🤗
  colorFrom: green
+ colorTo: purple
  sdk: static
  pinned: false
+ tags:
+ - evaluate
+ - measurement
  ---

+ 🤗 Evaluate provides access to a wide range of evaluation tools. It covers a range of modalities, such as text, computer vision, and audio, as well as tools to evaluate models or datasets.
+
+ It supports three types of evaluations:
+ - **Measurement**: for gaining more insights on datasets and model predictions based on their properties and characteristics -- these are covered in this space.
+ - **Metric**: measures the performance of a model on a given dataset, usually by comparing the model's predictions to some ground truth labels -- covered in the [Evaluate Metric](https://huggingface.co/evaluate-metric) Spaces.
+ - **Comparison**: useful for comparing the performance of two or more models on a single test dataset, e.g. by comparing their predictions to ground truth labels and computing their agreement -- covered in the [Evaluate Comparison](https://huggingface.co/spaces/evaluate-comparison) Spaces.
+
+ All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary and to help our community carry out more mindful and responsible evaluation!