# CV Onboarding

This page covers the basics of setting up computer vision (CV) models and onboarding them to the Arthur platform to monitor
vision-specific performance.

## Getting Started

The first step is to import the necessary classes and constants from the `arthurai` package and establish a connection with Arthur.

```python
# Arthur imports
from arthurai import ArthurAI
from arthurai.common.constants import InputType, OutputType, Stage

# Connect to your Arthur instance
arthur = ArthurAI(url="https://app.arthur.ai",
                  login="<YOUR_USERNAME_OR_EMAIL>")
```
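
The client can typically also be constructed with an API access key instead of a username. This is a hedged sketch: the `access_key` argument and the `ARTHUR_API_KEY` environment variable name are assumptions about your SDK version and setup, so check the `arthurai` client reference if the call is not accepted.

```python
import os

# Hedged sketch: authenticate with an API key stored in an environment variable
# (the access_key argument is an assumption; the variable name is arbitrary)
arthur = ArthurAI(url="https://app.arthur.ai",
                  access_key=os.environ["ARTHUR_API_KEY"])
```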

## Registering a CV Model

Each computer vision model is created with `input_type = InputType.Image` and with specified width and height 
dimensions for the processed images. Here, we register a classification model on 1024x1024 images:

```python
arthur_cv_model = arthur.model(name="ImageQuickstart",
                               input_type=InputType.Image,
                               model_type=OutputType.Multiclass,
                               pixel_height=1024,
                               pixel_width=1024)
```

```{note} You can send images to the Arthur platform with any dimensions, and we'll keep the original you send as well
as a copy resized to the model's registered dimensions. If you enable explainability for your model, the resized copies
are the versions used to generate explanations.
```

The different `OutputType` values currently supported for computer vision models are classification, regression, and 
object detection.
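
For example, an object detection model can be registered with the same call by swapping the output type. This is a sketch: the `OutputType.ObjectDetection` constant, the model name, and the 640x640 dimensions below are illustrative assumptions, so check the constants available in your version of the SDK.

```python
# Sketch: register an object detection model on 640x640 images
# (OutputType.ObjectDetection is assumed to exist in your SDK version)
arthur_od_model = arthur.model(name="ObjectDetectionQuickstart",
                               input_type=InputType.Image,
                               model_type=OutputType.ObjectDetection,
                               pixel_height=640,
                               pixel_width=640)
```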

## Formatting Data

Computer vision models require the same reference and inference data structure as Tabular and NLP models. The only
difference is that the value of an `Image` attribute should be a valid path to the image file for that inference.

Here is an example of a valid `reference_data` dataframe to build an ArthurModel with:

```python
    image_attr             pred_value   ground_truth    non_input_1   
0   'img_path/img_0.png'   0.1          0               0.2  
1   'img_path/img_1.png'   0.05         0              -0.3 
2   'img_path/img_2.png'   0.02         1               0.7     ...
3   'img_path/img_3.png'   0.8          1               1.2 
4   'img_path/img_4.png'   0.4          0              -0.5  
                                ...
```
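
As a sketch, such a dataframe can be assembled with pandas and used to infer the model schema. The `build()` call below is hedged: the `ground_truth_column` argument name is an assumption and may differ between SDK versions, so consult the `arthurai` reference for your installed version.

```python
import pandas as pd

# Reference data mirroring the table above
reference_data = pd.DataFrame({
    "image_attr": [f"img_path/img_{i}.png" for i in range(5)],
    "pred_value": [0.1, 0.05, 0.02, 0.8, 0.4],
    "ground_truth": [0, 0, 1, 1, 0],
    "non_input_1": [0.2, -0.3, 0.7, 1.2, -0.5],
})

# Infer the model schema from the reference data
# (argument names are an assumption; consult the arthurai SDK reference)
arthur_cv_model.build(reference_data, ground_truth_column="ground_truth")
```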

### Non-Input Attributes

Any non-pixel features you want to track for performance comparison or bias detection should be added as
non-input attributes. For example, any metadata about the identities of the people captured in a CV model's images should
be included as non-input attributes.
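
As a small illustration (the `camera_id` column and its values are hypothetical), such metadata is simply another column alongside the image paths in your reference data, so that it can be registered under the `NON_INPUT_DATA` stage rather than as a model input; verify the assigned stage with `review()`.

```python
# Hypothetical metadata column added to the reference data above;
# it describes each inference but is not an input to the model itself
reference_data["camera_id"] = ["cam_1", "cam_2", "cam_1", "cam_3", "cam_2"]
```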

### Reviewing the Model Schema

Before you call `arthur_cv_model.save()`, you can call `arthur_cv_model.review()` to check that your data was parsed
into the model schema correctly.
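
For example:

```python
# Inspect the inferred model schema before saving the model
arthur_cv_model.review()
```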

For an image model, the model schema should look like this:

```python
     name           stage                 value_type    categorical   is_unique  
0    image_attr     PIPELINE_INPUT        IMAGE         False         True   
1    pred_value     PREDICTED_VALUE       FLOAT         False         False      ...
2    ground_truth   GROUND_TRUTH          INTEGER       True          False   
3    non_input_1    NON_INPUT_DATA        FLOAT         False         False   
                                        ...
```

## Object Detection

### Formatting Bounding Boxes

If you are using an object detection model, each bounding box should be formatted as a list of the form:

`[class_id, confidence, top_left_x, top_left_y, width, height]`

The first two components of the bounding box list describe the classification made within the bounding box: the
`class_id` is the ID of the class detected within the box, and the `confidence` is the model's confidence in that
prediction, ranging from `0.0` (completely unconfident) to `1.0` (completely confident).

The next four components give the location of the bounding box within the image. The `top_left_x` and `top_left_y`
values are the X and Y pixel coordinates of the box's top-left corner, measured from the image origin in the top-left
corner of the image (that is, counting pixels from the left edge and from the top edge, respectively). The `width` is the
number of pixels the box covers from left to right, and the `height` is the number of pixels it covers from top to bottom.
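
As a concrete example using values from the dataset below, a single predicted box and a full prediction (a list of such boxes) look like this:

```python
# One predicted bounding box: class 1 detected with 47% confidence,
# top-left corner at pixel (92, 140), 80 pixels wide and 36 pixels tall
box = [1, 0.47, 92, 140, 80, 36]

# A model's prediction for one image is a list of such boxes (possibly empty)
objects_detected = [
    [0, 0.98, 12, 20, 50, 25],
    [1, 0.47, 92, 140, 80, 36],
]
```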

So using the following model schema as an example:

```python
     name                stage              value_type
0    image_attr          PIPELINE_INPUT     IMAGE
1    label               GROUND_TRUTH       BOUNDING_BOX
2    objects_detected    PREDICTED_VALUE    BOUNDING_BOX
```

a valid dataset would look like this:

```python
#    image_attr              objects_detected               label                     non_input_1
0    'img_path/img_0.png'    [[0, 0.98, 12, 20, 50, 25],    [0, 1, 14, 22, 48, 29]     0.2
                              [1, 0.47, 92, 140, 80, 36]]
1    'img_path/img_1.png'    [[1, 0.22, 4, 5, 14, 32]]      [1, 1, 25, 43, 49, 25]    -0.3
                                       ...
```
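
Once the object detection model has been onboarded, inferences with this structure can be sent for monitoring. The sketch below assumes the object detection model registered earlier and that your SDK version accepts `send_inferences()` with a list of dictionaries; inference timestamps and partner inference IDs are omitted for brevity.

```python
# Hedged sketch: send one object detection inference
# (attribute names match the example dataset above; the accepted format may
#  vary between SDK versions)
arthur_od_model.send_inferences([{
    "image_attr": "img_path/img_0.png",
    "objects_detected": [[0, 0.98, 12, 20, 50, 25],
                         [1, 0.47, 92, 140, 80, 36]],
    "non_input_1": 0.2,
}])
```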

## Finishing Onboarding

Once you have finished formatting your reference data and the model schema from `arthur_cv_model.review()` looks correct,
your model and its attributes are fully configured locally, and you are ready to complete onboarding.

The remaining steps are the same for CV models as for models of any other `InputType` and `OutputType`:

```{include} finishing_onboarding.md
```

## Enrichments

For an overview of configuring enrichments for image models, see the {doc}`enrichments guide </user-guide/walkthroughs/enrichments>`.

For a step-by-step walkthrough of setting up the explainability Enrichment for image models, see
{ref}`cv_explainability`.