konsa15 commited on
Commit
9c341e3
verified
1 Parent(s): f6b8fd7

Update index.html

Files changed (1)
  1. index.html +75 -8
index.html CHANGED
@@ -7,13 +7,80 @@
7
  <link rel="stylesheet" href="style.css" />
8
  </head>
9
  <body>
10
- <div class="card">
11
- <h1>Welcome to your static Space!</h1>
12
- <p>You can modify this app directly by editing <i>index.html</i> in the Files and versions tab.</p>
13
- <p>
14
- Also don't forget to check the
15
- <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
16
- </p>
17
- </div>
18
  </body>
19
  </html>
 
7
  <link rel="stylesheet" href="style.css" />
8
  </head>
9
  <body>
10
+
11
+
12
+ <div align="center">
13
+ <h1>Adaptive Parametric Activation </h1>
14
+
15
+ <a href="https://kostas1515.github.io/">Konstantinos Panagiotis Alexandridis</a><sup>1</sup>,
16
+ <a href="https://jiankangdeng.github.io/">Jiankang Deng</a><sup>1</sup>,
17
+ <a href="https://cgi.csc.liv.ac.uk/~anguyen/">Anh Nguyen</a><sup>2</sup>,
18
+ <a href="https://shanluo.github.io/">Shan Luo</a><sup>3</sup>,
19
+
20
+ <sup>1</sup> Huawei Noah&#39;s Ark Lab,
21
+ <sup>2</sup> University of Liverpool,
22
+ <sup>3</sup> King&#39;s College London
23
+
24
+ <a href="https://link.springer.com/chapter/10.1007/978-3-031-72949-2_26"><img src="https://img.shields.io/badge/ECCV_2024-APA-blue" alt="Static Badge"></a>
25
+ <a href="https://arxiv.org/pdf/2407.08567"><img src="https://img.shields.io/badge/arxiv-2407.08567-blue" alt="Static Badge"></a>
26
+ <a href="https://paperswithcode.com/sota/long-tail-learning-on-places-lt?p=adaptive-parametric-activation"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/long-tail-learning-on-places-lt" alt="PWC"></a>
27
+ <a href="https://paperswithcode.com/sota/instance-segmentation-on-lvis-v1-0-val?p=adaptive-parametric-activation"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/instance-segmentation-on-lvis-v1-0-val" alt="PWC"></a>
28
+ <a href="https://paperswithcode.com/sota/long-tail-learning-on-imagenet-lt?p=adaptive-parametric-activation"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/long-tail-learning-on-imagenet-lt" alt="PWC"></a>
29
+ <a href="https://paperswithcode.com/sota/long-tail-learning-on-inaturalist-2018?p=adaptive-parametric-activation"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/long-tail-learning-on-inaturalist-2018" alt="PWC"></a>
30
+
31
+ </div>
32
+
33
+ <p>This is the official implementation of Adaptive Parametric Activation (APA), accepted as an ECCV 2024 paper.</p>
34
+ <p> <img src="./assets/unified_activations_combined.jpg"
35
+ alt="APA unifies most activation functions under the same formula."
36
+ style="float: left; margin-right: 10px;"
37
+ /></p>
38
+ <h3>Abstract</h3>
39
+
40
+ <p>The activation function plays a crucial role in model optimisation, yet the optimal choice remains unclear. For example, the Sigmoid activation is the de-facto activation in balanced classification tasks; however, in imbalanced classification it proves inappropriate due to bias towards frequent classes. In this work, we delve deeper into this phenomenon by performing a comprehensive statistical analysis of the classification and intermediate layers of both balanced and imbalanced networks, and we empirically show that aligning the activation function with the data distribution enhances performance in both balanced and imbalanced tasks. To this end, we propose the Adaptive Parametric Activation (APA) function, a novel and versatile activation function that unifies most common activation functions under a single formula. APA can be applied in both intermediate layers and attention layers, significantly outperforming the state-of-the-art on several imbalanced benchmarks such as ImageNet-LT, iNaturalist2018, Places-LT, CIFAR100-LT and LVIS, and balanced benchmarks such as ImageNet1K, COCO and V3DET.</p>
41
+ <h3>Definition</h3>
42
+
43
+ <p>APA is defined as: $APA(z,λ,κ) = (λ \exp(-κz) + 1)^{\frac{1}{-λ}}$. APA unifies most activation functions under the same formula.</p>
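As a quick sanity check of the definition above (a minimal sketch, not the repository's code; the helper names `apa` and `sigmoid` are illustrative), the formula can be evaluated directly, and with λ = κ = 1 it reduces to the Sigmoid:

```python
import math

def apa(z: float, lam: float, kappa: float) -> float:
    """APA(z, λ, κ) = (λ·exp(−κz) + 1)^(1/(−λ))."""
    return (lam * math.exp(-kappa * z) + 1.0) ** (1.0 / -lam)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# With λ = κ = 1 the exponent becomes −1, giving (exp(−z) + 1)^(−1) = Sigmoid(z).
for z in (-2.0, 0.0, 3.0):
    assert abs(apa(z, 1.0, 1.0) - sigmoid(z)) < 1e-12
```

Varying λ and κ away from 1 then interpolates towards other common activations, which is the sense in which APA "unifies" them.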
44
+ <p>APA can be used inside the intermediate layers using the Adaptive Generalised Linear Unit (AGLU): $AGLU(z,λ,κ) = z \, APA(z,λ,κ)$. The derivatives of AGLU with respect to κ (top), λ (middle) and z (bottom) are shown below:
45
+ <img src="./assets/derivative_visualisations.jpg"
46
+ alt="Derivatives of AGLU with respect to κ, λ and z."
47
+ style="float: left; margin-right: 10px;"
48
+ /></p>
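AGLU is just the input gated by APA. A small sketch (again with illustrative helper names, not the repository's API): with λ = κ = 1, where APA reduces to the Sigmoid, AGLU correspondingly reduces to SiLU/Swish:

```python
import math

def apa(z: float, lam: float, kappa: float) -> float:
    return (lam * math.exp(-kappa * z) + 1.0) ** (1.0 / -lam)

def aglu(z: float, lam: float, kappa: float) -> float:
    """AGLU(z, λ, κ) = z · APA(z, λ, κ)."""
    return z * apa(z, lam, kappa)

def silu(z: float) -> float:
    return z / (1.0 + math.exp(-z))

# At λ = κ = 1, AGLU(z) = z · Sigmoid(z) = SiLU(z).
for z in (-1.5, 0.5, 2.0):
    assert abs(aglu(z, 1.0, 1.0) - silu(z)) < 1e-12
```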
49
+ <h3>Simple code implementation</h3>
50
+
51
+ <pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Unified</span><span class="hljs-params">(nn.Module)</span>:</span>
52
+ <span class="hljs-string">"""Unified activation function module."""</span>
53
+
54
+ <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span><span class="hljs-params">(self, device=None, dtype=None)</span> -&gt; <span class="hljs-keyword">None</span>:</span>
55
+ <span class="hljs-string">"""Initialize the Unified activation function."""</span>
56
+ factory_kwargs = {<span class="hljs-string">"device"</span>: device, <span class="hljs-string">"dtype"</span>: dtype}
57
+ super().__init__()
58
+ lambda_param = torch.nn.init.uniform_(torch.empty(<span class="hljs-number">1</span>, **factory_kwargs))
59
+ kappa_param = torch.nn.init.uniform_(torch.empty(<span class="hljs-number">1</span>, **factory_kwargs))
60
+ self.softplus = nn.Softplus(beta=<span class="hljs-number">-1.0</span>)
61
+ self.lambda_param = nn.Parameter(lambda_param)
62
+ self.kappa_param = nn.Parameter(kappa_param)
63
+
64
+ <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span><span class="hljs-params">(self, input: torch.Tensor)</span> -&gt; torch.Tensor:</span>
65
+ <span class="hljs-string">"""Compute the forward pass of the Unified activation function."""</span>
66
+ l = torch.clamp(self.lambda_param, min=<span class="hljs-number">0.0001</span>)
67
+ p = torch.exp((<span class="hljs-number">1</span> / l) * self.softplus((self.kappa_param * input) - torch.log(l)))
68
+ <span class="hljs-keyword">return</span> p <span class="hljs-comment"># for AGLU simply return p*input</span>
69
+ </code></pre>
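Note that the module computes APA through `nn.Softplus(beta=-1.0)` rather than the closed form: since PyTorch's Softplus with β = −1 is −log(1 + exp(−x)), we get exp((1/λ)·softplus(κz − log λ)) = (1 + λ·exp(−κz))^(−1/λ), which matches the definition. A pure-Python sketch checking this identity (helper names are mine, not from the repository):

```python
import math

def softplus_neg(x: float) -> float:
    # PyTorch nn.Softplus(beta=-1.0): (1/β)·log(1 + exp(β·x)) = −log(1 + exp(−x))
    return -math.log(1.0 + math.exp(-x))

def apa_via_softplus(z: float, lam: float, kappa: float) -> float:
    """Mirrors the forward pass of the Unified module above."""
    return math.exp((1.0 / lam) * softplus_neg(kappa * z - math.log(lam)))

def apa_closed_form(z: float, lam: float, kappa: float) -> float:
    return (lam * math.exp(-kappa * z) + 1.0) ** (1.0 / -lam)

# The two formulations agree across a range of inputs and λ values.
for z in (-1.0, 0.25, 2.0):
    for lam in (0.5, 1.0, 2.0):
        assert abs(apa_via_softplus(z, lam, 1.0) - apa_closed_form(z, lam, 1.0)) < 1e-9
```

Working in log space via Softplus avoids exponentiating a large base directly, which is presumably why the official forward pass is written this way.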
70
+
71
+ <h2 id="bibtex">BibTeX</h2>
72
+ <pre><code class="lang-bibtex">@inproceedings{alexandridis2024adaptive,
73
+ <span class="hljs-attr">title={Adaptive</span> Parametric Activation},
74
+ <span class="hljs-attr">author={Alexandridis,</span> Konstantinos Panagiotis <span class="hljs-literal">and</span> Deng, Jiankang <span class="hljs-literal">and</span> Nguyen, Anh <span class="hljs-literal">and</span> Luo, Shan},
75
+ <span class="hljs-attr">booktitle={European</span> Conference on Computer Vision},
76
+ <span class="hljs-attr">pages={455--476},</span>
77
+ <span class="hljs-attr">year={2024},</span>
78
+ <span class="hljs-attr">organization={Springer}</span>
79
+ }
80
+ </code></pre>
81
+ <h2 id="acknowledgements">Acknowledgements</h2>
82
+ <p>This code uses <a href='https://pytorch.org/'>PyTorch</a> and the <a href='https://github.com/open-mmlab/mmdetection'>mmdet</a> framework. Thank you for your wonderful work!</p>
83
+
84
+
85
  </body>
86
  </html>