Commit a07203e (verified) by drkareemkamal · Parent: 16cc839

Delete README.md

Files changed (1): README.md (+0 −65)
README.md DELETED
@@ -1,65 +0,0 @@
# Overview
This repository provides an ensemble model that combines a YOLOv8 model exported from the [Ultralytics](https://github.com/ultralytics/ultralytics) repository with NMS post-processing. The NMS post-processing code in [models/postprocess/1/model.py](models/postprocess/1/model.py) is adapted from the [Ultralytics ONNX Example](https://github.com/ultralytics/ultralytics/blob/4b866c97180842b546fe117610869d3c8d69d8ae/examples/YOLOv8-OpenCV-ONNX-Python/main.py).
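The core of that post-processing step is class-score filtering followed by greedy IoU-based non-maximum suppression. A minimal NumPy sketch of the NMS part (the threshold here is illustrative, not the repository's value):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.45):
    """Return indices of kept boxes. Boxes are [x1, y1, x2, y2]."""
    boxes = np.asarray(boxes, dtype=np.float32)
    scores = np.asarray(scores, dtype=np.float32)
    order = scores.argsort()[::-1]            # highest score first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_threshold]    # drop heavily overlapping boxes
    return keep
```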

For more information about Triton's ensemble models, see the documentation in [Architecture.md](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md) and some of the [preprocessing examples](https://github.com/triton-inference-server/python_backend/tree/main/examples/preprocessing).
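An ensemble is declared in a `config.pbtxt` that wires the member models together with `ensemble_scheduling`. The fragment below is a hedged sketch of that wiring; the tensor names, shapes, and the intermediate `raw_output` name are illustrative, not copied from this repository's file:

```
name: "yolov8_ensemble"
platform: "ensemble"
input [ { name: "images" data_type: TYPE_FP32 dims: [ 1, 3, 640, 640 ] } ]
output [ { name: "detections" data_type: TYPE_FP32 dims: [ -1, 6 ] } ]
ensemble_scheduling {
  step [
    {
      model_name: "yolov8_onnx"
      model_version: -1
      input_map { key: "images" value: "images" }
      output_map { key: "output0" value: "raw_output" }
    },
    {
      model_name: "postprocess"
      model_version: -1
      input_map { key: "raw_output" value: "raw_output" }
      output_map { key: "detections" value: "detections" }
    }
  ]
}
```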

# Directory Structure
```
models/
    yolov8_onnx/
        1/
            model.onnx
        config.pbtxt

    postprocess/
        1/
            model.py
        config.pbtxt

    yolov8_ensemble/
        1/
            <Empty Directory>
        config.pbtxt
README.md
main.py
```
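For orientation, the `config.pbtxt` next to the ONNX model might look like the following sketch. The shapes assume a fixed 640×640 input and the standard 80-class YOLOv8 head (output `output0` of shape `[1, 84, 8400]`); the repository's actual file may differ, for example with dynamic dimensions from the `dynamic=True` export:

```
name: "yolov8_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 0
input [ { name: "images" data_type: TYPE_FP32 dims: [ 1, 3, 640, 640 ] } ]
output [ { name: "output0" data_type: TYPE_FP32 dims: [ 1, 84, 8400 ] } ]
```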

# Quick Start
1. Install [Ultralytics](https://github.com/ultralytics/ultralytics) and TritonClient:
```
pip install ultralytics==8.0.51 tritonclient[all]==2.31.0
```

2. Export a model to ONNX format:
```
yolo export model=yolov8n.pt format=onnx dynamic=True opset=16
```

3. Rename the exported file to `model.onnx` and place it under the `models/yolov8_onnx/1` directory (see the directory structure above).

4. (Optional) Update the score and NMS thresholds in [models/postprocess/1/model.py](models/postprocess/1/model.py#L59).
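The thresholds act on the raw YOLOv8 head output before NMS. As a sketch of what that decoding involves, assuming the standard 80-class COCO export where the `(84, N)` output stacks `cx, cy, w, h` on top of 80 per-class scores (an illustrative sketch, not the repository's `model.py`):

```python
import numpy as np

def decode_yolov8(output, score_threshold=0.25):
    """Decode a raw YOLOv8 head output of shape (84, N):
    rows 0-3 are cx, cy, w, h; rows 4-83 are per-class scores.
    Returns (boxes_xyxy, class_ids, scores) above the threshold."""
    boxes_xywh = output[:4].T                  # (N, 4)
    class_scores = output[4:].T                # (N, 80)
    class_ids = class_scores.argmax(axis=1)    # best class per candidate
    scores = class_scores.max(axis=1)
    keep = scores > score_threshold            # score filtering before NMS
    cx, cy, w, h = boxes_xywh[keep].T
    boxes_xyxy = np.stack([cx - w / 2, cy - h / 2,
                           cx + w / 2, cy + h / 2], axis=1)
    return boxes_xyxy, class_ids[keep], scores[keep]
```

The surviving boxes would then go through NMS to remove duplicates.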

5. (Optional) Update the [models/yolov8_ensemble/config.pbtxt](models/yolov8_ensemble/config.pbtxt) file if your input resolution has changed.

6. Build the Docker container for the Triton Inference Server:
```
DOCKER_NAME="yolov8-triton"
docker build -t $DOCKER_NAME .
```

7. Run the Triton Inference Server:
```
DOCKER_NAME="yolov8-triton"
docker run --gpus all \
    -it --rm \
    --net=host \
    -v $(pwd)/models:/models \
    $DOCKER_NAME
```

8. Run the client script with `python main.py`. The overlay image will be written to `output.jpg`.
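Before sending an image, a client such as `main.py` must letterbox it to the network input size and convert it to an NCHW float tensor. A pure-NumPy sketch of that preprocessing, following Ultralytics conventions (grey padding value 114, values scaled to [0, 1]); the function name and details are illustrative, not the repository's actual code:

```python
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Resize an HxWx3 BGR uint8 image to a (1, 3, size, size) float32 blob
    in [0, 1], preserving aspect ratio with grey padding."""
    h, w = img.shape[:2]
    r = min(size / h, size / w)                   # scale factor
    nh, nw = int(round(h * r)), int(round(w * r))
    # Nearest-neighbour resize via index lookup
    ys = np.clip((np.arange(nh) / r).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / r).astype(int), 0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.full((size, size, 3), pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    # BGR -> RGB, HWC -> CHW, add batch dim, scale to [0, 1]
    return canvas[:, :, ::-1].transpose(2, 0, 1)[None].astype(np.float32) / 255.0
```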