HugoHE committed on
Commit 4f7f5ba · Parent: b6623a0

Update app.py

Files changed (1): app.py (+3, -2)
app.py CHANGED
@@ -180,14 +180,15 @@ with gr.Blocks(theme='gradio/monochrome') as demo:
 gr.Markdown(
 """This interactive demo is based on the box abstraction-based monitors for the Faster R-CNN model. The model is trained using the [Detectron2](https://github.com/facebookresearch/detectron2) library on the in-distribution dataset [Berkeley DeepDrive-100k](https://www.bdd100k.com/), which contains objects within the autonomous driving domain. The monitors are constructed by abstracting features extracted from the training data. The demo showcases the monitors' capacity to reject problematic detections caused by out-of-distribution (OOD) objects.
 
- To utilize the demo, upload an image and click on "Infer" to view the following results:
+ To utilize the demo, upload an image and click on *"Infer"* to view the following results:
 
 - **Detection**: outputs of the Object Detector
 - **Detection summary**: a summary of the detection outputs
 - **Verdict**: verdicts from the Monitors (problematic detections caused by out-of-distribution (OOD) objects will be identified as OOD objects)
 - **Explainable AI**: a visual explanation generated by the [grad-cam](https://github.com/jacobgil/pytorch-grad-cam) library, which is based on the Class Activation Mapping (CAM) method.
 
- You can also select an image from the cached **examples** to quickly try out.
+ You can also select an image from the cached **Examples** to quickly try it out. Without clicking *"Infer"*, the cached outputs will be loaded automatically.
+
 In case the output image seems too small, simply right-click on the image and choose "Open image in new tab" to view it in full size.
 """
 )
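For context, the hunk above only edits the description Markdown; below is a minimal, hypothetical sketch of how such a description could sit inside the Gradio Blocks layout it refers to (the *"Infer"* button, the four outputs, and cached Examples). The `infer` stub, component names, and placeholder example image are assumptions for illustration, not the actual app.py, which runs a Detectron2 Faster R-CNN model, box-abstraction monitors, and grad-cam.

```python
import gradio as gr
from PIL import Image

# Placeholder example image so the cached-examples flow below is runnable.
# In the real Space the examples would be road-scene photos (assumption).
Image.new("RGB", (640, 480), color="gray").save("example_street.jpg")

def infer(image):
    """Placeholder for the real pipeline: Faster R-CNN detection,
    box-abstraction monitor verdicts, and a grad-cam explanation."""
    detection = image                                # image with predicted boxes drawn
    summary = "0 objects detected (placeholder)"     # textual summary of detections
    verdict = "ID (no OOD detections, placeholder)"  # monitor verdict
    explanation = image                              # CAM heat-map overlay
    return detection, summary, verdict, explanation

with gr.Blocks(theme="gradio/monochrome") as demo:
    gr.Markdown("Demo description goes here (see the diff above).")
    with gr.Row():
        image_in = gr.Image(type="pil", label="Input image")
        detection_out = gr.Image(label="Detection")
    summary_out = gr.Textbox(label="Detection summary")
    verdict_out = gr.Textbox(label="Verdict")
    cam_out = gr.Image(label="Explainable AI")
    outputs = [detection_out, summary_out, verdict_out, cam_out]

    # Clicking "Infer" runs the pipeline on the uploaded image.
    gr.Button("Infer").click(fn=infer, inputs=image_in, outputs=outputs)

    # cache_examples=True precomputes the outputs for each example, so
    # selecting an example loads cached results without clicking "Infer".
    gr.Examples(examples=["example_street.jpg"], inputs=image_in,
                outputs=outputs, fn=infer, cache_examples=True)

if __name__ == "__main__":
    demo.launch()
```

Under this assumed wiring, `cache_examples=True` is what produces the behavior the new Markdown line describes: selecting a cached example loads precomputed outputs without pressing *"Infer"*.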