yellowdolphin committed
Commit c437056 · 1 Parent(s): 26cb6e7

add app description

Files changed (1): app.py +35 -1
app.py CHANGED
@@ -155,6 +155,40 @@ def pred_fn(image, fake=False):
 
 examples = [str(image_root / f'negative{i:03d}.jpg') for i in range(3)]
 
+description = """
+Is it possible to identify and track individual marine mammals based on
+community photos taken by tourist whale-watchers on their cameras or
+smartphones?
+
+Researchers have used [photographic identification](https://whalescientists.com/photo-id/)
+(photo-ID) of individual whales for
+decades to study their migration, population, and behavior. Since this is a
+tedious and costly process, it is tempting to leverage the huge amount of
+image data collected by the whale-watching community and private encounters around
+the globe. Organizations like [Flukebook](https://www.flukebook.org) or
+[Happywhale](https://www.happywhale.com) develop AI models for automated identification at
+scale. To push the state of the art, Happywhale hosted two competitions on Kaggle,
+the 2018 [Humpback Whale Identification](https://www.kaggle.com/c/humpback-whale-identification)
+and the 2022 [Happywhale](https://www.kaggle.com/competitions/happy-whale-and-dolphin)
+competition, which included 28 whale and dolphin species.
+
+Top solutions used a two-step process: crop the raw image with an
+object detector like [YOLOv5](https://pytorch.org/hub/ultralytics_yolov5),
+then present the high-resolution crops to an identifier trained with an
+ArcFace-based loss function. The detector had to be fine-tuned on the
+competition images with automatically or manually generated labels.
+
+Below you can test a cut-down version of my solution on your own images.
+The detector is an ensemble of five YOLOv5 models; the identifier ensembles three
+models with EfficientNet-B7, EfficientNetV2-XL, and ConvNeXt-Base backbones.
+"""  # appears between title and input/output
+
+article = """
+"""  # appears below input/output
+
 demo = gr.Interface(fn=pred_fn, inputs="image", outputs=["image", "text"],
-                    examples=examples)
+                    examples=examples,
+                    title='Happywhale: Individual Identification for Marine Animals',
+                    description=description,
+                    article=None)
 demo.launch()
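The two-step pipeline named in the description (detect, crop, then identify) is not itself part of this commit. Below is a minimal sketch of the crop step, assuming the standard torch.hub interface of YOLOv5 and a single stock model standing in for the fine-tuned five-model ensemble; the helper name crop_largest_detection and the padding margin are illustrative, not taken from the app.

import torch
from PIL import Image

# Stock YOLOv5 detector from torch.hub (downloads weights on first use).
# The solution fine-tuned YOLOv5 on whale/dolphin images instead.
detector = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def crop_largest_detection(path, margin=0.05):
    # Crop the highest-confidence detection, padded by a small margin.
    image = Image.open(path).convert('RGB')
    boxes = detector(image).xyxy[0]   # (n, 6): x1, y1, x2, y2, conf, class
    if len(boxes) == 0:
        return image                  # no detection: fall back to full frame
    x1, y1, x2, y2 = boxes[0, :4].tolist()  # rows sorted by confidence
    pad_x, pad_y = margin * (x2 - x1), margin * (y2 - y1)
    left = max(0, int(x1 - pad_x))
    top = max(0, int(y1 - pad_y))
    right = min(image.width, int(x2 + pad_x))
    bottom = min(image.height, int(y2 + pad_y))
    return image.crop((left, top, right, bottom))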
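The identifier's ArcFace-based training objective is likewise only named, not shown. Here is a minimal PyTorch sketch of an ArcFace-style margin head; the class name ArcMarginHead, the embedding size, and the scale/margin values are illustrative assumptions, not the solution's actual settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    # ArcFace-style head: add an angular margin to the target-class angle
    # before scaling, so same-individual embeddings cluster more tightly
    # than with a plain softmax classifier.
    def __init__(self, embed_dim, num_ids, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_ids, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale
        self.margin = margin

    def forward(self, embeddings, labels):
        # cosine of the angle between each embedding and each identity center
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # apply the margin only to each row's target-class logit
        target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cosine)
        return logits * self.scale

# toy usage: 512-d backbone embeddings, 100 known individuals
head = ArcMarginHead(embed_dim=512, num_ids=100)
embeddings = torch.randn(8, 512)
labels = torch.randint(0, 100, (8,))
loss = F.cross_entropy(head(embeddings, labels), labels)

At inference time the margin head is dropped: the L2-normalized embedding of a cropped photo is compared by cosine similarity against embeddings of known individuals, so new photos can be matched without retraining the classifier.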