eeshawn committed
Commit: 626470d
Parent(s): 6bbb0ff

update app.py

Files changed (1)
  1. app.py +21 -14
app.py CHANGED
@@ -1,6 +1,9 @@
 import gradio as gr
 import torch
 from ultralyticsplus import YOLO, render_result
+import os
+
+HF_TOKEN = os.getenv('HF_TOKEN')
 
 yolo_model = YOLO('eeshawn11/naruto_hand_seal_detection')
 yolo_model.overrides['max_det'] = 10
@@ -28,9 +31,9 @@ def clear():
     """
     Reset inputs.
     """
-    image_upload = gr.update(value=None)
-    conf_slider = gr.update(value=0.5)
-    return image_upload, conf_slider
+    return gr.update(value=None), gr.update(value=0.5)
+
+callback = gr.HuggingFaceDatasetSaver(hf_token=HF_TOKEN, dataset_name="crowdsourced_hand_seals", private=True)
 
 with gr.Blocks() as demo:
     gr.Markdown("# Naruto Hand Seal Detection with YOLOv8")
@@ -43,25 +46,25 @@ with gr.Blocks() as demo:
 
         Hand seals are an integral part of the Naruto universe, used by characters to activate powerful techniques. There are twelve basic seals, each named after an animal in the Chinese Zodiac, and different sequences of hand seals are required for different techniques.
 
-        As a fan of the series, I knew that accurately detecting and classifying hand seals would be a difficult but rewarding challenge, and I was excited to tackle it using my expertise in machine learning and computer vision. One key challenge to overcome would be the lack of a good dataset of labelled images for training, so I had to develop my own. Besides capturing images of myself performing the seals, I augmented my dataset with a YouTube screenshots consisting of both real persons and anime characters performing the seals.
+        As a fan of the series, I knew that accurately detecting and classifying hand seals would be a difficult but rewarding challenge, and I was excited to tackle it using my expertise in machine learning and computer vision. One key challenge to overcome would be the lack of a good dataset of labelled images for training, so I had to develop my own. Besides capturing images of myself performing the seals, I augmented my dataset with YouTube screenshots consisting of both real persons and anime characters performing the seals.
 
         ### Problem Statement
 
         The challenge was to develop a model that could accurately identify the hand seal being performed.
 
-        In this project, I leveraged transfer learning from the <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a> model to customize an object detection model specifically for the hand seals.
+        In this project, I leveraged transfer learning from the <a href="https://github.com/ultralytics/ultralytics" target="_blank">YOLOv8</a> model to customize an object detection model specifically for the hand seals. Developed by the Ultralytics team, YOLOv8 is the latest addition to the YOLO family and offers high performance while being easy to train and use.
         """
     )
     with gr.Row():
         with gr.Column():
-            inputs = [
-                gr.Image(source="upload", type="pil", label="Image Upload", interactive=True),
-                gr.Slider(minimum=0.05, maximum=1.0, value=0.5, step=0.05, label="Confidence Threshold"),
-            ]
+            img_input = gr.Image(source="upload", type="pil", label="Image Upload", interactive=True)
+            conf_input = gr.Slider(minimum=0.05, maximum=1.0, value=0.5, step=0.05, label="Confidence Threshold")
             with gr.Row():
                 clear_form = gr.Button("Reset")
                 submit = gr.Button("Predict")
-    outputs = gr.Image(type="filepath", label="Output Image", interactive=False)
+        with gr.Column():
+            outputs = gr.Image(type="filepath", label="Output Image", interactive=False)
+            flag = gr.Button("Flag")
 
     gr.Markdown(
         """
@@ -70,8 +73,12 @@ with gr.Blocks() as demo:
         """
     )
 
-    clear_form.click(fn=clear, inputs=None, outputs=inputs, show_progress=False)
-    submit.click(fn=seal_detection, inputs=inputs, outputs=outputs)
+    callback.setup([img_input, conf_input, outputs], "flagged_data")
+
+    clear_form.click(fn=clear, inputs=None, outputs=[img_input, conf_input], show_progress=False)
+    submit.click(fn=seal_detection, inputs=[img_input, conf_input], outputs=outputs)
+    flag.click(lambda *args: callback.flag(args), [img_input, conf_input, outputs], None, preprocess=False)
 
-demo.queue(api_open=False, max_size=10)
-demo.launch()
+if __name__ == "__main__":
+    demo.queue(api_open=False, max_size=10)
+    demo.launch()
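
The substantive change in this commit is the crowdsourced flagging hook: a `gr.HuggingFaceDatasetSaver` is created from the Space's `HF_TOKEN`, registered against the input and output components via `callback.setup(...)`, and wired to a new Flag button so that flagged examples are appended to a private Hub dataset. Below is a minimal sketch of that pattern, assuming Gradio 3.x (the flagging API used in this diff) and a write-capable `HF_TOKEN`; the component names and the `demo_flags` dataset name are illustrative placeholders, not taken from this Space.

```python
import os
import gradio as gr

# Sketch of the flagging pattern adopted in this commit (Gradio 3.x flagging API).
# "demo_flags" and the Textbox components are placeholders for illustration only.
callback = gr.HuggingFaceDatasetSaver(hf_token=os.getenv("HF_TOKEN"), dataset_name="demo_flags", private=True)

with gr.Blocks() as demo:
    text_in = gr.Textbox(label="Input")
    text_out = gr.Textbox(label="Output")
    flag = gr.Button("Flag")

    # Register which components become columns of the flagged dataset and the
    # local directory where the dataset files are staged.
    callback.setup([text_in, text_out], "flagged_data")

    # preprocess=False passes the raw component values straight to the callback,
    # so each press of Flag appends one row to the Hub dataset.
    flag.click(lambda *args: callback.flag(args), [text_in, text_out], None, preprocess=False)

if __name__ == "__main__":
    demo.launch()
```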