---
title: MCC
datasets:
- dataset
tags:
- evaluate
- metric
description: >-
  Matthews correlation coefficient (MCC) is a correlation coefficient used in
  machine learning as a measure of the quality of binary and multiclass
  classifications.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---
# Metric Card for MCC
## Metric Description
Matthews correlation coefficient (MCC) is a correlation coefficient used in machine learning as a measure of the quality of binary and multiclass classifications. MCC takes into account true and false positives and negatives, and is generally regarded as a balanced metric that can be used even if the classes are of very different sizes. It can be computed with the equation:
```
MCC = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
```
where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives.
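As a sanity check on the formula, the coefficient can be computed directly from the four counts and compared against the scikit-learn implementation cited at the end of this card. This is an illustrative sketch; `mcc_from_counts` is a hypothetical helper, not part of this metric's API:

```python
import math

from sklearn.metrics import matthews_corrcoef

def mcc_from_counts(tp, tn, fp, fn):
    """Compute MCC directly from the four confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, return 0.0 when the denominator is zero
    # (i.e., when a whole row or column of the confusion matrix is empty).
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

references = [1, 0, 1, 1, 0]
predictions = [1, 0, 0, 1, 0]

# Tally the counts for the positive class (label 1).
tp = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 1)
tn = sum(1 for r, p in zip(references, predictions) if r == 0 and p == 0)
fp = sum(1 for r, p in zip(references, predictions) if r == 0 and p == 1)
fn = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 0)

print(mcc_from_counts(tp, tn, fp, fn))             # 0.6666666666666666
print(matthews_corrcoef(references, predictions))  # same value
```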
## How to Use
At minimum, this metric takes as input two lists, each containing ints: `predictions` and `references`.
```python
import evaluate

mcc_metric = evaluate.load('mcc')
results = mcc_metric.compute(references=[0, 1], predictions=[0, 1])
print(results)
# {'mcc': 1.0}
```
## Inputs
- **predictions** (`list` of `int`): The predicted labels.
- **references** (`list` of `int`): The ground truth labels.
## Output Values
- **mcc** (`float`): The Matthews correlation coefficient. The minimum possible value is -1 and the maximum possible value is 1. A higher MCC indicates a better classification: 1 is a perfect prediction, 0 is no better than a random prediction, and -1 is a completely inverted prediction.

Output Example(s): `{'mcc': 1.0}`
## Examples
Example 1 - A simple example with all correct predictions
```python
mcc_metric = evaluate.load('mcc')
results = mcc_metric.compute(references=[1, 0, 1], predictions=[1, 0, 1])
print(results)
# {'mcc': 1.0}
```
Example 2 - A simple example with all incorrect predictions
```python
mcc_metric = evaluate.load('mcc')
results = mcc_metric.compute(references=[1, 0, 1], predictions=[0, 1, 0])
print(results)
# {'mcc': -1.0}
```
Example 3 - A simple example with a prediction no better than random (note: the inputs below are chosen so that TP = TN = FP = FN, which makes the numerator of the formula zero)

```python
mcc_metric = evaluate.load('mcc')
results = mcc_metric.compute(references=[1, 0, 1, 0], predictions=[1, 1, 0, 0])
print(results)
# {'mcc': 0.0}
```
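The examples above are all binary. Since the description also covers multiclass classification, here is a minimal multiclass sketch using the scikit-learn function cited below; this metric's multiclass output is assumed to match it:

```python
from sklearn.metrics import matthews_corrcoef

# Three-class labels; MCC generalizes to multiclass via the full
# confusion matrix (the R_K statistic implemented by scikit-learn).
references = [0, 1, 2, 2, 1, 0]
predictions = [0, 2, 2, 2, 1, 0]
print(matthews_corrcoef(references, predictions))  # ~0.78
```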
## Limitations and Bias
MCC is undefined when any row or column sum of the confusion matrix is zero, e.g. when the references or the predictions contain only a single class, since the denominator of the formula above becomes zero. The scikit-learn implementation cited below returns 0.0 in that case, as shown in the sketch that follows. Like any single-number summary, MCC can also mask which class is being misclassified, so inspecting the full confusion matrix is still advisable for imbalanced datasets.
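A sketch of the single-class degenerate case, using the scikit-learn implementation cited below (this metric is assumed to behave the same way):

```python
from sklearn.metrics import matthews_corrcoef

# All references belong to one class, so a whole row of the confusion
# matrix is empty and the MCC formula divides by zero; scikit-learn
# returns 0.0 rather than raising an error.
print(matthews_corrcoef([1, 1, 1], [1, 0, 1]))  # 0.0
```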
## Citation

- scikit-learn: [`sklearn.metrics.matthews_corrcoef`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)