# Bias Monitoring

There are two components to Arthur's approach to bias: monitoring and mitigation. Monitoring for bias is the process of measuring the disparity in a model's outcomes across different subgroups of a population.

**_This guide is a walkthrough of bias monitoring._** To learn more about Arthur's bias mitigation process, see the explanation of our current mitigation methods from an algorithms perspective {ref}`here `.

## Monitoring Attributes for Bias

For some of the attributes in your model, you may want to pay particular attention to how your model's outputs differ across the subpopulations of that attribute. We refer to this as _monitoring an attribute for bias_. A common example is so-called _protected basis variables_ such as race, age, or gender, but you can select any attributes that are important to your business case for bias monitoring.

### Model Input Attributes

If you have already registered the input attributes of your model, you can set any subset of them to be monitored for bias.

For a categorical attribute, each possible value is treated as a distinct subpopulation for analysis. For example, if you have a gender attribute `SEX` with the three possible values `Male`, `Female`, and `Non-Binary`, you would simply add the following to your model onboarding:

```python
from arthurai.common.constants import Stage

arthur_model.get_attribute("SEX", stage=Stage.ModelPipelineInput).monitor_for_bias = True
```

For a continuous attribute, you need to break the continuous range into a fixed number of groupings so that subpopulations can be created. You can do this by providing cutoff thresholds for each grouping. For example, if we have a continuous attribute called `AGE`, we can create three age brackets such as `< 35`, `35 - 55`, and `> 55` by providing the upper-cutoff value for each group:

```python
arthur_model.get_attribute("AGE", stage=Stage.ModelPipelineInput).monitor_for_bias = True
arthur_model.get_attribute("AGE", stage=Stage.ModelPipelineInput).set(bins=[None, 35, 55, None])
```

### Non-Input Attributes

You can monitor for bias even if the sensitive attributes are not direct inputs to your model:

```python
arthur_model.get_attribute("SEX", stage=Stage.NonInputData).monitor_for_bias = True
```

Then, when you send inferences, you can include this side information in the `NonInputData` fields. This allows you to determine whether your model has a disparate impact on certain subpopulations.

### Viewing Fairness Metrics

On the Arthur dashboard, you can view your model's outcomes with respect to the sensitive attributes and subpopulations you defined. We have also defined a few convenience functions on the `ArthurModel` object through the [`bias.metrics` submodule](https://docs.arthur.ai/sdk/sdk_v3/apiref/arthurai.core.bias.bias_metrics.BiasMetrics.html#arthurai.core.bias.bias_metrics.BiasMetrics):

```python
arthur_model.bias.metrics.demographic_parity('<attribute_name>')
arthur_model.bias.metrics.group_confusion_matrices('<attribute_name>')
```

Finer-grained metrics can also be retrieved through our {doc}`query endpoint `.
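
As a rough sketch of what a finer-grained query could look like, the snippet below groups an inference count by the bias-monitored `SEX` attribute from the examples above using the SDK's `query` method. The request fields shown (`select`, `group_by`, `function`, `alias`) follow the query API's general pattern, but treat them as illustrative assumptions and consult the query endpoint documentation for the exact schema.

```python
# Illustrative sketch: count inferences per subpopulation of the bias-monitored
# "SEX" attribute via the query endpoint. The attribute name and the exact
# request schema are assumptions -- see the query endpoint docs for details.
query_body = {
    "select": [
        {"property": "SEX"},
        {"function": "count", "alias": "inference_count"},
    ],
    "group_by": [
        {"property": "SEX"},
    ],
}

results = arthur_model.query(query_body)
print(results)
```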