Commit ab8809b
Parent(s): 462e84f

chore(tools): fix potential bug and update docs (#89)

Files changed:
- .gitignore +4 -0
- docs/train_custom_data.md +21 -19
- requirements.txt +6 -1
- tools/export_onnx.py +29 -28
.gitignore CHANGED

@@ -1,6 +1,10 @@
 ### Linux ###
 *~
 
+# user experiments directory
+YOLOX_outputs/
+datasets/
+
 # temporary files which can be created if a process still has a handle open of a deleted file
 .fuse_hidden*
 
docs/train_custom_data.md CHANGED

@@ -7,11 +7,11 @@ We take an example of fine-tuning YOLOX-S model on VOC dataset to give a more cl
 Clone this repo and follow the [README](../README.md) to install YOLOX.
 
 ## 1. Create your own dataset
-**Step 1** Prepare your own dataset with images and labels first. For labeling images, you
+**Step 1** Prepare your own dataset with images and labels first. For labeling images, you can use tools like [Labelme](https://github.com/wkentaro/labelme) or [CVAT](https://github.com/openvinotoolkit/cvat).
 
-**Step 2** Then, you should write the corresponding Dataset Class which can load images and labels through
+**Step 2** Then, you should write the corresponding Dataset Class which can load images and labels through `__getitem__` method. We currently support COCO format and VOC format.
 
-You can also write the Dataset by you own. Let's take the [VOC](../yolox/data/datasets/voc.py#L151) Dataset file for example:
+You can also write the Dataset by your own. Let's take the [VOC](../yolox/data/datasets/voc.py#L151) Dataset file for example:
 ```python
 @Dataset.resize_getitem
 def __getitem__(self, index):
@@ -23,27 +23,28 @@ You can also write the Dataset by you own. Let's take the [VOC](../yolox/data/da
     return img, target, img_info, img_id
 ```
 
-One more thing worth noting is that you should also implement
+One more thing worth noting is that you should also implement [pull_item](../yolox/data/datasets/voc.py#L129) and [load_anno](../yolox/data/datasets/voc.py#L121) method for the `Mosiac` and `MixUp` augmentations.
 
 **Step 3** Prepare the evaluator. We currently have [COCO evaluator](../yolox/evaluators/coco_evaluator.py) and [VOC evaluator](../yolox/evaluators/voc_evaluator.py).
-If you have your own format data or evaluation metric, you
-
-**Step 4** Put your dataset under $YOLOX_DIR/datasets$, for VOC:
+If you have your own format data or evaluation metric, you can write your own evaluator.
+
+**Step 4** Put your dataset under `$YOLOX_DIR/datasets`, for VOC:
+
 ```shell
 ln -s /path/to/your/VOCdevkit ./datasets/VOCdevkit
 ```
-* The path "VOCdevkit" will be used in your exp file described in next section.Specifically, in
+* The path "VOCdevkit" will be used in your exp file described in next section. Specifically, in `get_data_loader` and `get_eval_loader` function.
 
 ## 2. Create your Exp file to control everything
 We put everything involved in a model to one single Exp file, including model setting, training setting, and testing setting.
 
 **A complete Exp file is at [yolox_base.py](../yolox/exp/yolox_base.py).** It may be too long to write for every exp, but you can inherit the base Exp file and only overwrite the changed part.
 
-Let's
+Let's take the [VOC Exp file](../exps/example/yolox_voc/yolox_voc_s.py) as an example.
 
-We select YOLOX-S model here, so we should change the network depth and width. VOC has only 20 classes, so we should also change the num_classes
+We select `YOLOX-S` model here, so we should change the network depth and width. VOC has only 20 classes, so we should also change the `num_classes`.
 
-These configs are changed in the init()
+These configs are changed in the `init()` method:
 ```python
 class Exp(MyExp):
     def __init__(self):
@@ -54,19 +55,19 @@ class Exp(MyExp):
         self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
 ```
 
-Besides, you should also overwrite the dataset and evaluator
+Besides, you should also overwrite the `dataset` and `evaluator`, prepared before training the model on your own data.
 
-Please see
+Please see [get_data_loader](../exps/example/yolox_voc/yolox_voc_s.py#L20), [get_eval_loader](../exps/example/yolox_voc/yolox_voc_s.py#L82), and [get_evaluator](../exps/example/yolox_voc/yolox_voc_s.py#L113) for more details.
 
 ## 3. Train
-Except special cases, we always recommend to use our [COCO pretrained weights](../README.md) for initializing.
+Except special cases, we always recommend to use our [COCO pretrained weights](../README.md) for initializing the model.
 
-Once you get the Exp file and the COCO pretrained weights we provided, you can train your own model by the following command:
+Once you get the Exp file and the COCO pretrained weights we provided, you can train your own model by the following below command:
 ```bash
 python tools/train.py -f /path/to/your/Exp/file -d 8 -b 64 --fp16 -o -c /path/to/the/pretrained/weights
 ```
 
-or take the YOLOX-S VOC training for example:
+or take the `YOLOX-S` VOC training for example:
 ```bash
 python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 8 -b 64 --fp16 -o -c /path/to/yolox_s.pth.tar
 ```
@@ -75,16 +76,17 @@ python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 8 -b 64 --fp16
 
 ## 4. Tips for Best Training Results
 
-As YOLOX is an anchor-free detector with only several hyper-parameters, most of the time good results can be obtained with no changes to the models or training settings.
+As **YOLOX** is an anchor-free detector with only several hyper-parameters, most of the time good results can be obtained with no changes to the models or training settings.
 We thus always recommend you first train with all default training settings.
 
-If at first you don't get good results, there are steps you could consider to
+If at first you don't get good results, there are steps you could consider to improve the model.
 
-**Model Selection** We provide YOLOX-Nano
+**Model Selection** We provide `YOLOX-Nano`, `YOLOX-Tiny`, and `YOLOX-S` for mobile deployments, while `YOLOX-M`/`L`/`X` for cloud or high performance GPU deployments.
 
-If your deployment meets
+If your deployment meets any compatibility issues. we recommend `YOLOX-DarkNet53`.
 
 **Training Configs** If your training overfits early, then you can reduce max\_epochs or decrease the base\_lr and min\_lr\_ratio in your Exp file:
+
 ```python
 # -------------- training config --------------------- #
 self.warmup_epochs = 5
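For readers following the updated doc, here is a minimal sketch of the full Exp pattern that the `init()` snippet above abbreviates. The depth and width values are the standard YOLOX-S multipliers and are assumptions here; the authoritative version is the linked exps/example/yolox_voc/yolox_voc_s.py.

```python
# Sketch of the VOC Exp pattern described in the updated doc; the depth/width
# values are the usual YOLOX-S multipliers, and num_classes=20 matches VOC.
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33      # network depth multiplier for YOLOX-S
        self.width = 0.50      # network width multiplier for YOLOX-S
        self.num_classes = 20  # VOC has 20 classes
        # name the experiment after this file, as in the doc snippet
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```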
requirements.txt CHANGED

@@ -1,3 +1,4 @@
+# TODO: Update with exact module version
 numpy
 torch>=1.7
 opencv_python
@@ -10,4 +11,8 @@ thop
 ninja
 tabulate
 tensorboard
-
+
+# verified versions
+onnx==1.8.1
+onnxruntime==1.8.0
+onnx-simplifier==0.3.5
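These three pins back the ONNX export path changed below. As a sketch, the installed versions can be checked against the pins via the packages' standard `__version__` attributes; nothing YOLOX-specific is assumed here.

```python
# Sketch: confirm the environment matches the versions pinned above.
import onnx
import onnxruntime

assert onnx.__version__ == "1.8.1"
assert onnxruntime.__version__ == "1.8.0"
```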
tools/export_onnx.py CHANGED

@@ -16,30 +16,25 @@ from yolox.utils import replace_module
 
 def make_parser():
     parser = argparse.ArgumentParser("YOLOX onnx deploy")
-    parser.add_argument(
-    parser.add_argument("--output", default="output", type=str,
-    parser.add_argument("
-    parser.add_argument(
-        type=str,
-        help="expriment description file",
-    )
+    parser.add_argument("--output-name", type=str, default="yolox.onnx",
+                        help="output name of models")
+    parser.add_argument("--input", default="images", type=str,
+                        help="input node name of onnx model")
+    parser.add_argument("--output", default="output", type=str,
+                        help="output node name of onnx model")
+    parser.add_argument("-o", "--opset", default=11, type=int,
+                        help="onnx opset version")
+    parser.add_argument("--no-onnxsim", action="store_true",
+                        help="use onnxsim or not")
+    parser.add_argument("-f", "--exp_file", default=None, type=str,
+                        help="expriment description file",)
     parser.add_argument("-expn", "--experiment-name", type=str, default=None)
-    parser.add_argument("-n", "--name", type=str, default=None,
-    parser.add_argument(
-        nargs=argparse.REMAINDER,
-    )
+    parser.add_argument("-n", "--name", type=str, default=None,
+                        help="model name")
+    parser.add_argument("-c", "--ckpt", default=None, type=str,
+                        help="ckpt path")
+    parser.add_argument("opts", help="Modify config options using the command-line",
+                        default=None, nargs=argparse.REMAINDER,)
 
     return parser
 
@@ -61,8 +56,8 @@ def main():
     else:
         ckpt_file = args.ckpt
 
-    ckpt = torch.load(ckpt_file, map_location="cpu")
     # load the model state dict
+    ckpt = torch.load(ckpt_file, map_location="cpu")
 
     model.eval()
     if "model" in ckpt:
@@ -71,7 +66,7 @@
     model = replace_module(model, nn.SiLU, SiLU)
     model.head.decode_in_inference = False
 
-    logger.info("
+    logger.info("loading checkpoint done.")
     dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1])
     torch.onnx._export(
         model,
@@ -81,12 +76,18 @@
         output_names=[args.output],
         opset_version=args.opset,
     )
-    logger.info("
+    logger.info("generated onnx model named {}".format(args.output_name))
 
     if not args.no_onnxsim:
+        import onnx
+        from onnxsim import simplify
+
         # use onnxsimplify to reduce reduent model.
+        onnx_model = onnx.load(args.output_name)
+        model_simp, check = simplify(onnx_model)
+        assert check, "Simplified ONNX model could not be validated"
+        onnx.save(model_simp, args.output_name)
+        logger.info("generated simplified onnx model named {}".format(args.output_name))
 
 
 if __name__ == "__main__":
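To sanity-check the exported (and simplified) model, here is a sketch using onnxruntime==1.8.0 as pinned in requirements.txt. The file name and node names follow the parser defaults above (`--output-name yolox.onnx`, `--input images`, `--output output`); the 640x640 input shape is an assumption and must match the `exp.test_size` used at export time.

```python
# Sketch: run the exported model once with a dummy input. File and node names
# follow the parser defaults above; the 640x640 shape is an assumption.
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("yolox.onnx")
dummy = np.random.randn(1, 3, 640, 640).astype(np.float32)
(outputs,) = session.run(["output"], {"images": dummy})
# decode_in_inference was disabled before export, so this is the raw head output
print(outputs.shape)
```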