Update index.html
index.html  +61 -43  CHANGED

@@ -9,74 +9,92 @@
- <div …
- <a href="https://kostas1515.github.io/">Konstantinos Panagiotis Alexandridis</a><sup>1</sup>,
- <a href="https://jiankangdeng.github.io/">Jiankang Deng</a><sup>1</sup>,
- <a href="https://cgi.csc.liv.ac.uk/~anguyen/">Anh Nguyen</a><sup>2</sup>,
- <a href="https://shanluo.github.io/">Shan Luo</a><sup>3</sup>…
- <sup>1</sup> Huawei Noah's Ark Lab,
- <sup>3</sup> King…
- </div>
- <p>APA is defined as: …
- <p>APA can be used inside the intermediate layers using Adaptive Generalised Linear Unit (AGLU): …
- <pre><code class="…
- super().__init__()
- self.softplus = nn.Softplus(beta…
- l = torch.clamp(self.lambda_param, min…
- p = torch.exp((1 / l) * self.softplus((self.kappa_param * input) - torch.log(l)))
<body>

<div class="relative min-h-[100px] rounded-b-lg border border-t-0 leading-tight dark:border-gray-800 dark:bg-gray-925">

<div class="py-4 px-4 sm:px-6 prose hf-sanitized hf-sanitized-LkzsSYSqbDSFqH1JQZWuD"><div class="not-prose bg-linear-to-t -mx-6 -mt-4 mb-8 max-h-[300px] min-w-full overflow-auto border-b from-gray-50 px-6 pb-5 pt-4 font-mono text-xs transition-all dark:from-gray-900 dark:to-gray-950"><div class="mb-2 inline-block rounded-lg border px-2 py-1 font-mono text-xs leading-none">metadata</div>

<pre><!-- HTML_TAG_START -->title: AGLU
emoji: 🏆
colorFrom: green
colorTo: red
sdk: static
pinned: false
<!-- HTML_TAG_END --></pre></div>
<!-- HTML_TAG_START --><div align="center">
<h1>Adaptive Parametric Activation</h1>

<p><a rel="nofollow" href="https://kostas1515.github.io/">Konstantinos Panagiotis Alexandridis</a><sup>1</sup>,
<a rel="nofollow" href="https://jiankangdeng.github.io/">Jiankang Deng</a><sup>1</sup>,
<a rel="nofollow" href="https://cgi.csc.liv.ac.uk/~anguyen/">Anh Nguyen</a><sup>2</sup>,
<a rel="nofollow" href="https://shanluo.github.io/">Shan Luo</a><sup>3</sup></p>

<p><sup>1</sup> Huawei Noah's Ark Lab,
<sup>2</sup> University of Liverpool,
<sup>3</sup> King's College London</p>

<p><a rel="nofollow" href="https://link.springer.com/chapter/10.1007/978-3-031-72949-2_26"><img alt="Static Badge" src="https://img.shields.io/badge/ECCV_2024-APA-blue"></a>
<a rel="nofollow" href="https://arxiv.org/pdf/2407.08567"><img alt="Static Badge" src="https://img.shields.io/badge/arxiv-2407.08567-blue"></a>
<a rel="nofollow" href="https://paperswithcode.com/sota/long-tail-learning-on-places-lt?p=adaptive-parametric-activation"><img alt="PWC" src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/long-tail-learning-on-places-lt"></a>
<a rel="nofollow" href="https://paperswithcode.com/sota/instance-segmentation-on-lvis-v1-0-val?p=adaptive-parametric-activation"><img alt="PWC" src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/instance-segmentation-on-lvis-v1-0-val"></a>
<a rel="nofollow" href="https://paperswithcode.com/sota/long-tail-learning-on-imagenet-lt?p=adaptive-parametric-activation"><img alt="PWC" src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/long-tail-learning-on-imagenet-lt"></a>
<a rel="nofollow" href="https://paperswithcode.com/sota/long-tail-learning-on-inaturalist-2018?p=adaptive-parametric-activation"><img alt="PWC" src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/adaptive-parametric-activation/long-tail-learning-on-inaturalist-2018"></a></p>
</div>
<figure class="image text-center">
<img alt="APA activation" src="https://huggingface.co/spaces/konsa15/AGLU/resolve/main/assets/unified_activations_combined.jpg">
<figcaption>Figure 1: APA unifies most activation functions under the same formula.</figcaption>
</figure>
<h3>Abstract</h3>
<p>The activation function plays a crucial role in model optimisation, yet the optimal choice remains unclear. For example, the Sigmoid activation is the de-facto activation in balanced classification tasks; however, in imbalanced classification it proves inappropriate due to bias towards frequent classes. In this work, we delve deeper into this phenomenon by performing a comprehensive statistical analysis of the classification and intermediate layers of both balanced and imbalanced networks, and we empirically show that aligning the activation function with the data distribution enhances the performance in both balanced and imbalanced tasks. To this end, we propose the Adaptive Parametric Activation (APA) function, a novel and versatile activation function that unifies most common activation functions under a single formula. APA can be applied in both intermediate layers and attention layers, significantly outperforming the state-of-the-art on several imbalanced benchmarks such as ImageNet-LT, iNaturalist2018, Places-LT, CIFAR100-LT and LVIS, and balanced benchmarks such as ImageNet1K, COCO and V3DET.</p>
<h3>Definition</h3>
<p>The Adaptive Parametric Activation (APA) is defined as \(APA(z, \lambda, \kappa) = (\lambda e^{-\kappa z} + 1)^{\frac{1}{-\lambda}}\), where \(\lambda\) and \(\kappa\) are learnable parameters. APA unifies most activation functions under the same formula, as shown in Figure 1.</p>
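<p>Two special cases make the unification concrete (worked directly from the formula above): with \(\lambda = 1\), \(APA(z, 1, \kappa) = (e^{-\kappa z} + 1)^{-1} = Sigmoid(\kappa z)\), and in the limit \(\lambda \to 0\), APA tends to the Gompertz function \(e^{-e^{-\kappa z}}\).</p>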
<p>APA can be used inside the intermediate layers via the Adaptive Generalised Linear Unit (AGLU): \(AGLU(z, \lambda, \kappa) = z \cdot APA(z, \lambda, \kappa)\).
The derivatives of AGLU with respect to κ (top), λ (middle) and z (bottom) are shown in Figure 2:</p>
<figure class="image text-center">
<img alt="AGLU derivatives" src="https://huggingface.co/spaces/konsa15/AGLU/resolve/main/assets/derivative_visualisations.jpg">
<figcaption>Figure 2: The derivatives of AGLU with respect to κ (top), λ (middle) and z (bottom).</figcaption>
</figure>
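<p>For reference, the z-derivative of APA follows from a direct chain-rule computation on the formula above: \(\frac{\partial APA}{\partial z} = \kappa e^{-\kappa z}\,(\lambda e^{-\kappa z} + 1)^{\frac{1}{-\lambda} - 1}\); by the product rule, \(\frac{\partial AGLU}{\partial z} = APA(z, \lambda, \kappa) + z\,\frac{\partial APA}{\partial z}\).</p>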
<h3>Simple Code implementation</h3>

<pre><code class="language-python">import torch
import torch.nn as nn


class Unified(nn.Module):
    """Unified activation function module."""

    def __init__(self, device=None, dtype=None) -> None:
        """Initialize the Unified activation function."""
        factory_kwargs = {"device": device, "dtype": dtype}
        super().__init__()
        # lambda and kappa are learnable scalars, initialised uniformly in [0, 1).
        lambda_param = torch.nn.init.uniform_(torch.empty(1, **factory_kwargs))
        kappa_param = torch.nn.init.uniform_(torch.empty(1, **factory_kwargs))
        # Softplus with beta=-1.0 computes -log(1 + exp(-x)).
        self.softplus = nn.Softplus(beta=-1.0)
        self.lambda_param = nn.Parameter(lambda_param)
        self.kappa_param = nn.Parameter(kappa_param)

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        """Compute the forward pass of the Unified activation function."""
        # Clamp lambda away from zero so 1/lambda and log(lambda) stay finite.
        l = torch.clamp(self.lambda_param, min=0.0001)
        # Equivalent to (lambda * exp(-kappa * input) + 1) ** (-1 / lambda), i.e. APA.
        p = torch.exp((1 / l) * self.softplus((self.kappa_param * input) - torch.log(l)))
        return p  # for AGLU simply return p*input
</code></pre>
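<p>A minimal usage sketch (illustrative only; it assumes the Unified class above is in scope):</p>

<pre><code class="language-python">import torch

act = Unified()            # randomly initialised lambda and kappa
x = torch.randn(4, 8)      # a hypothetical batch of pre-activations
p = act(x)                 # APA response, same shape as x
aglu = p * x               # AGLU, as noted in the return comment above

# Sanity check: with lambda fixed to 1, APA reduces to Sigmoid(kappa * z).
with torch.no_grad():
    act.lambda_param.fill_(1.0)
    assert torch.allclose(act(x), torch.sigmoid(act.kappa_param * x), atol=1e-6)
</code></pre>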
<h2 id="bibtex">BibTeX</h2>
<pre><code class="language-bibtex">@inproceedings{alexandridis2024adaptive,
  title={Adaptive Parametric Activation},
  author={Alexandridis, Konstantinos Panagiotis and Deng, Jiankang and Nguyen, Anh and Luo, Shan},
  booktitle={European Conference on Computer Vision},
  pages={455--476},
  year={2024},
  organization={Springer}
}
</code></pre>
<!-- HTML_TAG_END --></div>
</div>

</body>
</html>