Matteo Sirri committed on
Commit dbe481d · 1 Parent(s): 169e11c

style: add readme

Files changed (1):
  1. README.md +11 -75
README.md CHANGED
@@ -1,75 +1,11 @@
- # School in AI Project Work
-
- This repository contains the code to train and evaluate a pedestrian detector for
- the "School in AI 2° edition" @ [UNIMORE](https://www.unimore.it/)
-
- ## Installation
-
- N.B.: Installation is only available in win64 environments.
-
- Create and activate an environment with all required packages:
-
- ```
- conda create --name ped_detector --file deps/win/conda_environment.txt
- # or conda env create -f deps/win/conda_environment.yml
- conda activate ped_detector
- pip install -r deps/win/pip_requirements.txt
- ```
-
- ## Dataset download and preparation
- ### Solution 1 - From Google Drive
- Download the storage folder directly from Google Drive [here](link google drive)
- and place it in the root dir of the project.
- After running this step, your storage directory should look like this:
- ```text
- storage
- ├── MOTChallenge
- │   ├── MOT17
- │   └── motcha_coco_annotations
- ├── MOTSynth
- │   ├── annotations
- │   ├── comb_annotations
- │   └── frames
- └── motsynth_output
- ```
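If you take the Drive route, a quick sanity check of the downloaded layout can save a failed training run later. Below is a minimal stdlib sketch; the `check_storage_layout` helper and the exact nesting it expects are our assumptions based on the tree above, not code from this repo:

```python
from pathlib import Path

# Expected subdirectories, inferred from the tree above (assumed nesting).
EXPECTED_DIRS = [
    "MOTChallenge/MOT17",
    "MOTChallenge/motcha_coco_annotations",
    "MOTSynth/annotations",
    "MOTSynth/comb_annotations",
    "MOTSynth/frames",
]

def check_storage_layout(root):
    """Return the list of expected subdirectories missing under `root`."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = check_storage_layout("storage")
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("storage layout looks complete")
```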
35
- ### Solution 2 - From scratch
36
- #### Prepare MOTSynth dataset
37
- 1. Download MOTSynth_1.
38
- ```
39
- wget -P ./storage/MOTSynth https://motchallenge.net/data/MOTSynth_1.zip
40
- unzip ./storage/MOTSynth/MOTSynth_1.zip
41
- rm ./storage/MOTSynth/MOTSynth_1.zip
42
- ```
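The wget/unzip/rm pattern above repeats for every archive in this guide; a stdlib-only Python equivalent is sketched below (`fetch_and_extract` is a hypothetical helper of ours, not part of the repo):

```python
import urllib.request
import zipfile
from pathlib import Path

def fetch_and_extract(url, dest_dir):
    """Download a zip from `url` into `dest_dir`, extract it there,
    then delete the archive (mirroring wget + unzip + rm)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    zip_path = dest / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, zip_path)  # also accepts file:// URLs
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    zip_path.unlink()

# Usage (the real archive is many GB):
# fetch_and_extract("https://motchallenge.net/data/MOTSynth_1.zip",
#                   "./storage/MOTSynth")
```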
43
- 2. Delete video from 123 to 256
44
- 2. Extract frames from the videos
45
- ```
46
- python tools/anns/to_frames.py --motsynth-root ./storage/MOTSynth
47
-
48
- # now you can delete other videos
49
- rm -r ./storage/MOTSynth/MOTSynth_1
50
- ```
51
- 3. Download and extract annotations
52
- ```
53
- wget -P ./storage/MOTSynth https://motchallenge.net/data/MOTSynth_coco_annotations.zip
54
- unzip ./storage/MOTSynth/MOTSynth_coco_annotations.zip
55
- rm ./storage/MOTSynth/MOTSynth_coco_annotations.zip
56
- ```
57
- 4. Prepare combined annotations for MOTSynth from the original coco annotations
58
- ```
59
- python tools/anns/combine_anns.py --motsynth-path ./storage/MOTSynth
60
- ```
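For intuition, combining several COCO annotation files essentially means concatenating their `images` and `annotations` lists while keeping ids unique across files. A stdlib sketch of that idea follows; it is our illustration, not the actual logic of tools/anns/combine_anns.py:

```python
import json
from pathlib import Path

def combine_coco_annotations(ann_paths, out_path):
    """Merge COCO-style annotation files into one, re-numbering
    image and annotation ids so they stay unique (assumed scheme)."""
    combined = {"images": [], "annotations": [], "categories": None}
    next_img_id, next_ann_id = 1, 1
    for p in ann_paths:
        data = json.loads(Path(p).read_text())
        if combined["categories"] is None:
            combined["categories"] = data.get("categories", [])
        id_map = {}  # old image id -> new globally unique id
        for img in data.get("images", []):
            id_map[img["id"]] = next_img_id
            combined["images"].append({**img, "id": next_img_id})
            next_img_id += 1
        for ann in data.get("annotations", []):
            combined["annotations"].append(
                {**ann, "id": next_ann_id, "image_id": id_map[ann["image_id"]]})
            next_ann_id += 1
    Path(out_path).write_text(json.dumps(combined))
    return combined
```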
61
- #### Prepare MOT17 dataset
62
-
63
-
64
- ## Colab Usage
65
-
66
- You can also use [Google Colab](https://colab.research.google.com) if you need remote resources like GPUs.
67
- In the notebook folder you can find some useful .ipynb files and remember to load the storage folder in your GDrive before usage.
68
-
69
- ## Object Detection
70
-
71
- An adaption of torchvision's detection reference code is done to train Faster R-CNN on a portion of the MOTSynth dataset. To train the model you can run:
72
- ```
73
- ./scripts/train_detector
74
- ```
75
-
 
+ ---
+ title: School in AI Project Work
+ emoji: 🐨
+ colorFrom: blue
+ colorTo: yellow
+ sdk: gradio
+ sdk_version: 3.9
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
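The new README body is pure Hugging Face Spaces configuration: a YAML frontmatter block between `---` markers that tells Spaces to launch `app.py` with the Gradio SDK. For a flat `key: value` block like this one, reading the metadata can be sketched with the stdlib alone (a toy illustration; Spaces itself uses a full YAML parser):

```python
def parse_frontmatter(text):
    """Parse a `---`-delimited block of flat `key: value` pairs
    (no nesting) into a dict of strings."""
    lines = text.strip().splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

readme = """---
title: School in AI Project Work
sdk: gradio
sdk_version: 3.9
app_file: app.py
---"""
print(parse_frontmatter(readme)["sdk"])  # gradio
```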