import gradio as gr
import torch
from ultralyticsplus import YOLO, render_result
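
# Load the fine-tuned YOLOv8 hand seal detector from the Hugging Face Hub and cap detections per image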
yolo_model = YOLO('eeshawn11/naruto_hand_seal_detection')
yolo_model.overrides['max_det'] = 10
device = 'cuda' if torch.cuda.is_available() else 'cpu'
yolo_model.to(device)

def seal_detection(
    image,
    conf_threshold,
):
    """
    Detect basic Naruto hand seals in an image with the YOLOv8 model.

    Args:
        image: Input PIL image
        conf_threshold: Minimum confidence for a detection to be kept

    Returns:
        Image with the detected seals rendered
    """
    results = yolo_model.predict(image, conf=conf_threshold)
    render = render_result(model=yolo_model, image=image, result=results[0])
    return render
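
# Reset the image upload and confidence slider to their default values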
def clear():
    image_upload = gr.update(value=None)
    conf_slider = gr.update(value=0.5)
    return image_upload, conf_slider

with gr.Blocks() as demo:
gr.Markdown("# Naruto Hand Seal Detection with YOLOv8")
with gr.Accordion("README", open=False):
        gr.Markdown(
            """
            ### Introduction

            As a data science practitioner and a fan of Japanese manga, I was eager to apply my skills to a project that combined these interests. Drawing on one of my favourite anime from my childhood, I decided to develop a computer vision model that could detect hand seals from the **Naruto** anime.

            Hand seals are an integral part of the Naruto universe, used by characters to activate powerful techniques. There are twelve basic seals, each named after an animal in the Chinese Zodiac, and different techniques require different sequences of seals.

            As a fan of the series, I knew that accurately detecting and classifying hand seals would be a difficult but rewarding challenge, and I was excited to tackle it using my expertise in machine learning and computer vision. One key challenge was the lack of a good dataset of labelled images for training, so I had to build my own. Besides capturing images of myself performing the seals, I augmented the dataset with YouTube screenshots of both real people and anime characters performing the seals.
            ### Problem Statement

            The challenge was to develop a model that could accurately identify the hand seal being performed in a given image.

            In this project, I leveraged transfer learning from the <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a> model to customize an object detection model specifically for the hand seals.
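
            For illustration, here is a minimal sketch of how such a fine-tuning run might look with the `ultralytics` API; the dataset YAML name and hyperparameters below are placeholders rather than the exact configuration used.

            ```python
            from ultralytics import YOLO

            # Start from a pretrained YOLOv8 checkpoint and fine-tune on the labelled hand seal dataset
            model = YOLO("yolov8s.pt")
            model.train(data="naruto_seals.yaml", epochs=100, imgsz=640)
            ```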
"""
)
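
    # Layout: image upload and confidence slider on the left, rendered detections on the right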
    with gr.Row():
        with gr.Column():
            inputs = [
                gr.Image(source="upload", type="pil", label="Image Upload", interactive=True),
                gr.Slider(minimum=0.05, maximum=1.0, value=0.5, step=0.05, label="Confidence Threshold"),
            ]
            with gr.Row():
                clear_form = gr.Button("Reset")
                submit = gr.Button("Predict")
        outputs = gr.Image(type="filepath", label="Output Image", interactive=False)
    gr.Markdown(
        """
        <p style="text-align:center">
        Happy to connect on <a href="https://www.linkedin.com/in/shawn-sing/" target="_blank">LinkedIn</a> or visit my <a href="https://github.com/eeshawn11/" target="_blank">GitHub</a> to check out my other projects.
        </p>
        """
    )
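
    # Wire up the buttons: Reset restores the input defaults, Predict runs the detector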
    clear_form.click(fn=clear, inputs=None, outputs=inputs)
    submit.click(fn=seal_detection, inputs=inputs, outputs=outputs)
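
# Queue incoming requests (up to 10 waiting) with the queue's API endpoints kept closed, then launch the app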
demo.queue(api_open=False, max_size=10)
demo.launch()