---
license: mit
tags:
- object-detection
- object-tracking
- video
- video-object-segmentation
inference: false
---

# unicorn_track_large_mot_challenge_mask

## Table of Contents
- [unicorn_track_large_mot_challenge_mask](#unicorn_track_large_mot_challenge_mask)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Uses](#uses)
- [Evaluation Results](#evaluation-results)

<model_details>

## Model Details

Unicorn unifies the network architecture and the learning paradigm across four object-tracking tasks, and achieves new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters. This model has an input size of 800x1280.

- License: This model is licensed under the MIT license
- Resources for more information:
  - [Research Paper](https://arxiv.org/abs/2111.12085)
  - [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn)

</model_details>

<uses>

## Uses

#### Direct Use

This model can be used for:

* Single Object Tracking (SOT)
* Multiple Object Tracking (MOT)
* Video Object Segmentation (VOS)
* Multi-Object Tracking and Segmentation (MOTS)

The same set of weights handles SOT, MOT17, VOS, and the MOTS Challenge simultaneously.

</uses>
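Unicorn's actual detection and association logic lives in the GitHub repo linked above. Purely as an illustration of what the MOT task involves (not the model's algorithm), a minimal greedy matcher that links per-frame detections to existing tracks by bounding-box IoU might look like this; `associate`, its threshold, and the box format are all hypothetical names chosen for this sketch:

```python
# Minimal illustration of the MOT association step (NOT Unicorn's
# actual algorithm): greedily link detections to tracks by IoU.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thresh=0.5):
    """Assign each detection a track ID; unmatched detections open new tracks.

    `tracks` maps track_id -> last seen box and is updated in place.
    Returns a dict mapping detection index -> track_id.
    """
    assignments = {}
    unmatched = set(tracks)
    next_id = max(tracks, default=-1) + 1
    for d_idx, det in enumerate(detections):
        best_id, best_iou = None, iou_thresh
        for t_id in unmatched:
            score = iou(tracks[t_id], det)
            if score > best_iou:
                best_id, best_iou = t_id, score
        if best_id is None:
            best_id = next_id
            next_id += 1
        else:
            unmatched.discard(best_id)
        assignments[d_idx] = best_id
        tracks[best_id] = det
    return assignments
```

For example, a detection heavily overlapping an existing track keeps that track's ID, while a detection with no overlap is given a fresh ID.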

<Eval_Results>

## Evaluation Results

| Benchmark | Metric | Score |
|---|---|---|
| MOT17 | MOTA (%) | 77.2 |
| MOTS | sMOTSA (%) | 65.3 |

</Eval_Results>
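The MOTA score above follows the standard CLEAR-MOT definition, which aggregates three error types (missed objects, false positives, and identity switches) over all frames, normalized by the total number of ground-truth objects. As a reference sketch of the metric itself (not code from the Unicorn repo):

```python
def mota(false_negatives, false_positives, id_switches, num_gt_objects):
    """CLEAR-MOT multi-object tracking accuracy.

    Returns a fraction that is at most 1.0 (and can go negative
    when errors outnumber ground-truth objects).
    """
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / num_gt_objects
```

For instance, 100 misses, 50 false positives, and 10 identity switches over 1000 ground-truth objects give `mota(100, 50, 10, 1000) == 0.84`, i.e. 84%.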

<Cite>

## Citation Information

```bibtex
@inproceedings{unicorn,
  title={Towards Grand Unification of Object Tracking},
  author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
  booktitle={ECCV},
  year={2022}
}
```

</Cite>