Update W&B README (#5006)
utils/loggers/wandb/README.md (+45 -38)
📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.

* [About Weights & Biases](#about-weights--biases)
* [First-Time Setup](#first-time-setup)
* [Viewing Runs](#viewing-runs)
* [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage)
* [Reports: Share your work with the world!](#reports)

## About Weights & Biases

Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models: architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.

Used by top researchers including teams at OpenAI, Lyft, GitHub, and MILA, W&B is part of the new standard of best practices for machine learning. Here is how W&B can help you optimize your machine learning workflows:

* [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time
* [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically
* [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization
* [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators
* [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently
* [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models

## First-Time Setup

<details open>
<summary> Toggle Details </summary>

When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key tells W&B where to log your data; you only need to supply it once, and it is then remembered on the same device.

W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run will be given a unique run **name** within that project, as project/name. You can also set your project and run name manually:

```shell
$ python train.py --project ... --name ...
```
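For illustration, a complete first run might look like the sketch below. The project and run names are hypothetical, and `wandb login` (which prompts for the key described above) is only needed once per machine:

```shell
# One-time setup: install the W&B client, then paste your API key
# from https://wandb.ai/authorize when prompted.
pip install wandb
wandb login

# Train under a hypothetical project/run name; the run then appears
# in the W&B UI as YOLOv5-demo/baseline-640.
python train.py --project YOLOv5-demo --name baseline-640
```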
YOLOv5 notebook example: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>

<img width="960" alt="Screen Shot 2021-09-29 at 10 23 13 PM" src="https://user-images.githubusercontent.com/26833433/135392431-1ab7920a-c49d-450a-b0b0-0c86ec86100e.png">

</details>

## Viewing Runs

<details open>
<summary> Toggle Details </summary>

Run information streams from your environment to the W&B cloud console as you train, so you can monitor and even cancel runs in <b>real time</b>. All important information is logged:

* Training & Validation losses
* Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95
* Learning Rate over time
* System: Disk I/O, CPU utilization, RAM memory usage
* Your trained model as a W&B Artifact
* Environment: OS and Python types, Git repository and state, **training command**

<p align="center"><img width="900" alt="Weights & Biases dashboard" src="https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png"></p>

</details>

## Advanced Usage

</details>

<h3> Reports </h3>

W&B Reports can be created from your saved runs for sharing online. Once a report is created you will receive a link you can use to publicly share your results. Here is an example report created from the COCO128 tutorial trainings of all four YOLOv5 models ([link](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY)).

<img width="900" alt="Weights & Biases Reports" src="https://user-images.githubusercontent.com/26833433/135394029-a17eaf86-c6c1-4b1d-bb80-b90e83aaffa7.png">

## Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

- **Google Colab and Kaggle** notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
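For the Docker route, a minimal sketch might look like the following; the image tag and flags are assumptions here, and the Docker Quickstart Guide linked above is authoritative:

```shell
# Pull the YOLOv5 image and start an interactive container.
# --gpus all requires the NVIDIA Container Toolkit; omit it for CPU-only use.
docker pull ultralytics/yolov5:latest
docker run --ipc=host --gpus all -it ultralytics/yolov5:latest
```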
## Status

![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)

If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), validation ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on macOS, Windows, and Ubuntu every 24 hours and on every commit.