---
comments: true
description: Learn to integrate YOLOv8 in Python for object detection, segmentation, and classification. Load, train models, and make predictions easily with our comprehensive guide.
keywords: YOLOv8, Python, object detection, segmentation, classification, machine learning, AI, pretrained models, train models, make predictions
---
# Python Usage
Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help you seamlessly integrate YOLOv8 into your Python projects for object detection, segmentation, and classification. Here, you'll learn how to load and use pretrained models, train new models, and perform predictions on images. The easy-to-use Python interface lets you quickly bring advanced object detection capabilities into your own applications. Let's get started!
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/GsXGnb-A4Kc?start=58"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Mastering Ultralytics YOLOv8: Python
</p>
For example, users can load a model, train it, evaluate its performance on a validation set, and even export it to ONNX format with just a few lines of code.
!!! Example "Python"
```python
from ultralytics import YOLO
# Create a new YOLO model from scratch
model = YOLO("yolov8n.yaml")
# Load a pretrained YOLO model (recommended for training)
model = YOLO("yolov8n.pt")
# Train the model using the 'coco8.yaml' dataset for 3 epochs
results = model.train(data="coco8.yaml", epochs=3)
# Evaluate the model's performance on the validation set
results = model.val()
# Perform object detection on an image using the model
results = model("https://ultralytics.com/images/bus.jpg")
# Export the model to ONNX format
success = model.export(format="onnx")
```
## [Train](../modes/train.md)
Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can accurately predict the classes and locations of objects in an image.
!!! Example "Train"
=== "From pretrained(recommended)"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # pass any model type
results = model.train(epochs=5)
```
=== "From scratch"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
results = model.train(data="coco8.yaml", epochs=5)
```
=== "Resume"
```python
model = YOLO("last.pt")
results = model.train(resume=True)
```
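The training call also accepts common hyperparameters such as image size, batch size, learning rate and device; the values below are only illustrative, not tuned recommendations:
```python
from ultralytics import YOLO

# Train a pretrained model with explicit hyperparameters
model = YOLO("yolov8n.pt")
results = model.train(
    data="coco8.yaml",  # dataset config
    epochs=5,  # number of training epochs
    imgsz=640,  # training image size
    batch=16,  # batch size
    lr0=0.01,  # initial learning rate
    device=0,  # CUDA device index, or "cpu"
)
```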
[Train Examples](../modes/train.md){ .md-button }
## [Val](../modes/val.md)
Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters of the model to improve its performance.
!!! Example "Val"
=== "Val after training"
```python
from ultralytics import YOLO
# Load a YOLOv8 model
model = YOLO("yolov8n.yaml")
# Train the model
model.train(data="coco8.yaml", epochs=5)
# Validate on the val split of the dataset used for training
model.val()
```
=== "Val on another dataset"
```python
from ultralytics import YOLO
# Load a YOLOv8 model
model = YOLO("yolov8n.yaml")
# Train the model
model.train(data="coco8.yaml", epochs=5)
# Validate on separate data
model.val(data="path/to/separate/data.yaml")
```
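`model.val()` returns a metrics object, so the scores can be read programmatically; a minimal sketch for a detection model:
```python
from ultralytics import YOLO

# Validate a pretrained detection model and inspect its metrics
model = YOLO("yolov8n.pt")
metrics = model.val(data="coco8.yaml")
print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
```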
[Val Examples](../modes/val.md){ .md-button }
## [Predict](../modes/predict.md)
Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model predicts the classes and locations of objects in the input images or videos.
!!! Example "Predict"
=== "From source"
```python
import cv2
from PIL import Image
from ultralytics import YOLO
model = YOLO("model.pt")
# accepts all formats - image/dir/Path/URL/video/PIL/ndarray. 0 for webcam
results = model.predict(source="0")
results = model.predict(source="folder", show=True) # Display preds. Accepts all YOLO predict arguments
# from PIL
im1 = Image.open("bus.jpg")
results = model.predict(source=im1, save=True) # save plotted images
# from ndarray
im2 = cv2.imread("bus.jpg")
results = model.predict(source=im2, save=True, save_txt=True) # save predictions as labels
# from list of PIL/ndarray
results = model.predict(source=[im1, im2])
```
=== "Results usage"
```python
# By default, results is a list of Results objects holding all predictions.
# Be careful: this can occupy a lot of memory when there are many images,
# especially for segmentation tasks.
# 1. return as a list
results = model.predict(source="folder")
# With stream=True, results is a generator, which is more memory-friendly
# 2. return as a generator
results = model.predict(source=0, stream=True)
for result in results:
# Detection
result.boxes.xyxy # box with xyxy format, (N, 4)
result.boxes.xywh # box with xywh format, (N, 4)
result.boxes.xyxyn # box with xyxy format but normalized, (N, 4)
result.boxes.xywhn # box with xywh format but normalized, (N, 4)
result.boxes.conf # confidence score, (N, 1)
result.boxes.cls # cls, (N, 1)
# Segmentation
result.masks.data # masks, (N, H, W)
result.masks.xy # x,y segments (pixels), List[segment] * N
result.masks.xyn # x,y segments (normalized), List[segment] * N
# Classification
result.probs # cls prob, (num_class, )
# Each result is composed of torch.Tensor by default,
# in which you can easily use following functionality:
result = result.cuda()
result = result.cpu()
result = result.to("cpu")
result = result.numpy()
```
[Predict Examples](../modes/predict.md){ .md-button }
## [Export](../modes/export.md)
Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is converted to a format that can be used by other software applications or hardware devices. This mode is useful when deploying the model to production environments.
!!! Example "Export"
=== "Export to ONNX"
Export an official YOLOv8n model to ONNX with dynamic batch size and image size.
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.export(format="onnx", dynamic=True)
```
=== "Export to TensorRT"
Export an official YOLOv8n model to TensorRT on `device=0` for acceleration on CUDA devices.
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.export(format="onnx", device=0)
```
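Exported models can be loaded straight back into the `YOLO` class for inference, which is a quick way to sanity-check an export; for example with the ONNX file produced above:
```python
from ultralytics import YOLO

# Load the exported ONNX model and run inference with it
onnx_model = YOLO("yolov8n.onnx")
results = onnx_model("https://ultralytics.com/images/bus.jpg")
```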
[Export Examples](../modes/export.md){ .md-button }
## [Track](../modes/track.md)
Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful for applications such as surveillance systems or self-driving cars.
!!! Example "Track"
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official detection model
model = YOLO("yolov8n-seg.pt") # load an official segmentation model
model = YOLO("path/to/best.pt") # load a custom model
# Track with the model
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True)
results = model.track(source="https://youtu.be/LNwODJXcvt4", show=True, tracker="bytetrack.yaml")
```
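For live video you can also run tracking frame by frame with OpenCV; a minimal sketch, assuming a webcam at index 0 (`persist=True` keeps track IDs consistent across frames):
```python
import cv2

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)  # open the default webcam

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # Track objects in the current frame, keeping IDs across frames
    results = model.track(frame, persist=True)
    # Draw the tracked boxes and IDs onto the frame
    annotated_frame = results[0].plot()
    cv2.imshow("YOLOv8 Tracking", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```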
[Track Examples](../modes/track.md){ .md-button }
## [Benchmark](../modes/benchmark.md)
Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation) or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy.
!!! Example "Benchmark"
=== "Python"
Benchmark an official YOLOv8n model across all export formats.
```python
from ultralytics.utils.benchmarks import benchmark
# Benchmark
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
[Benchmark Examples](../modes/benchmark.md){ .md-button }
## Explorer
The Explorer API can be used to explore datasets with advanced semantic, vector-similarity and SQL search among other features. It also enables searching for images based on their content using natural language, utilizing the power of LLMs. The Explorer API allows you to write your own dataset exploration notebooks or scripts to get insights into your datasets.
!!! Example "Semantic Search Using Explorer"
=== "Using Images"
```python
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(img="https://ultralytics.com/images/bus.jpg", limit=10)
print(similar.head())
# Search using multiple images
similar = exp.get_similar(
img=["https://ultralytics.com/images/bus.jpg", "https://ultralytics.com/images/bus.jpg"], limit=10
)
print(similar.head())
```
=== "Using Dataset Indices"
```python
from ultralytics import Explorer
# create an Explorer object
exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
similar = exp.get_similar(idx=1, limit=10)
print(similar.head())
# Search using multiple indices
similar = exp.get_similar(idx=[1, 10], limit=10)
print(similar.head())
```
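The SQL and natural-language search mentioned above are exposed through separate Explorer helpers; a minimal sketch, assuming the `sql_query` and `ask_ai` methods available in recent `ultralytics` releases (natural-language search additionally requires an LLM API key to be configured):
```python
from ultralytics import Explorer

# create an Explorer object
exp = Explorer(data="coco8.yaml", model="yolov8n.pt")
exp.create_embeddings_table()
# SQL-style query over the dataset table
table = exp.sql_query("WHERE labels LIKE '%person%' LIMIT 10")
print(table)
# Natural-language query (assumes an LLM provider is configured)
df = exp.ask_ai("show me images with more than 2 people")
print(df.head())
```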
[Explorer](../datasets/explorer/index.md){ .md-button }
## Using Trainers
The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
!!! Tip "Detection Trainer Example"
```python
from ultralytics.models.yolo.detect import DetectionPredictor, DetectionTrainer, DetectionValidator
# Trainer
overrides = {"data": "coco8.yaml", "epochs": 3}
trainer = DetectionTrainer(overrides=overrides)
trainer.train()
trained_model = trainer.best  # path to the best checkpoint
# Validator
val = DetectionValidator(args=...)
val(model=trained_model)
# Predictor
pred = DetectionPredictor(overrides={})
pred(source=SOURCE, model=trained_model)  # SOURCE is a placeholder for your images/video
# Resume from the last checkpoint
overrides["resume"] = trainer.last
trainer = DetectionTrainer(overrides=overrides)
```
You can easily customize Trainers to support custom tasks or explore R&D ideas. Learn more about Customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section.
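A common pattern is to subclass the task trainer and override only the pieces you need; a minimal sketch, assuming you just want to hook into model construction:
```python
from ultralytics.models.yolo.detect import DetectionTrainer


class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg=None, weights=None, verbose=True):
        """Build and return the model; customize this to plug in your own architecture."""
        return super().get_model(cfg=cfg, weights=weights, verbose=verbose)


trainer = CustomTrainer(overrides={"data": "coco8.yaml", "epochs": 1})
trainer.train()
```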
[Customization tutorials](engine.md){ .md-button }
## FAQ
### How can I integrate YOLOv8 into my Python project for object detection?
Integrating Ultralytics YOLOv8 into your Python projects is simple. You can load a pretrained model or train a new model from scratch. Here's how to get started:
```python
from ultralytics import YOLO
# Load a pretrained YOLO model
model = YOLO("yolov8n.pt")
# Perform object detection on an image
results = model("https://ultralytics.com/images/bus.jpg")
# Visualize the results
for result in results:
result.show()
```
See more detailed examples in our [Predict Mode](../modes/predict.md) section.
### What are the different modes available in YOLOv8?
Ultralytics YOLOv8 provides various modes to cater to different machine learning workflows. These include:
- **[Train](../modes/train.md)**: Train a model using custom datasets.
- **[Val](../modes/val.md)**: Validate model performance on a validation set.
- **[Predict](../modes/predict.md)**: Make predictions on new images or video streams.
- **[Export](../modes/export.md)**: Export models to various formats like ONNX, TensorRT.
- **[Track](../modes/track.md)**: Real-time object tracking in video streams.
- **[Benchmark](../modes/benchmark.md)**: Benchmark model performance across different configurations.
Each mode is designed to provide comprehensive functionalities for different stages of model development and deployment.
### How do I train a custom YOLOv8 model using my dataset?
To train a custom YOLOv8 model, you need to specify your dataset and other hyperparameters. Here's a quick example:
```python
from ultralytics import YOLO
# Load the YOLO model
model = YOLO("yolov8n.yaml")
# Train the model with custom dataset
model.train(data="path/to/your/dataset.yaml", epochs=10)
```
For more details on training options and example usage, visit our [Train Mode](../modes/train.md) page.
### How do I export YOLOv8 models for deployment?
Exporting YOLOv8 models to a format suitable for deployment is straightforward with the `export` function. For example, you can export a model to ONNX format:
```python
from ultralytics import YOLO
# Load the YOLO model
model = YOLO("yolov8n.pt")
# Export the model to ONNX format
model.export(format="onnx")
```
For various export options, refer to the [Export Mode](../modes/export.md) documentation.
### Can I validate my YOLOv8 model on different datasets?
Yes, validating YOLOv8 models on different datasets is possible. After training, you can use the validation mode to evaluate the performance:
```python
from ultralytics import YOLO
# Load a YOLOv8 model
model = YOLO("yolov8n.yaml")
# Train the model
model.train(data="coco8.yaml", epochs=5)
# Validate the model on a different dataset
model.val(data="path/to/separate/data.yaml")
```
Check the [Val Mode](../modes/val.md) page for detailed examples and usage.