Update index.html

index.html (+3 −13)
@@ -8,18 +8,7 @@
 </head>
 <body>
 
-
-<div class="relative min-h-[100px] rounded-b-lg border border-t-0 leading-tight dark:border-gray-800 dark:bg-gray-925">
-
-<div class="py-4 px-4 sm:px-6 prose hf-sanitized hf-sanitized-LkzsSYSqbDSFqH1JQZWuD"><div class="not-prose bg-linear-to-t -mx-6 -mt-4 mb-8 max-h-[300px] min-w-full overflow-auto border-b from-gray-50 px-6 pb-5 pt-4 font-mono text-xs transition-all dark:from-gray-900 dark:to-gray-950"><div class="mb-2 inline-block rounded-lg border px-2 py-1 font-mono text-xs leading-none">metadata</div>
-<pre>title: AGLU
-emoji: 🏆
-colorFrom: green
-colorTo: red
-sdk: static
-pinned: false
-</pre></div>
-<div align="center">
+<div align="center">
 <h1>Adaptive Parametric Activation</h1>
 
 <p><a href="https://kostas1515.github.io/">Konstantinos Panagiotis Alexandridis</a><sup>1</sup>,
@@ -41,9 +30,10 @@
 </div>
 
 
-
+<h3>Abstract</h3>
 
 <p>The activation function plays a crucial role in model optimisation, yet the optimal choice remains unclear. For example, the Sigmoid activation is the de-facto activation in balanced classification tasks; however, in imbalanced classification it proves inappropriate due to its bias towards frequent classes. In this work, we delve deeper into this phenomenon by performing a comprehensive statistical analysis of the classification and intermediate layers of both balanced and imbalanced networks, and we empirically show that aligning the activation function with the data distribution enhances the performance in both balanced and imbalanced tasks. To this end, we propose the Adaptive Parametric Activation (APA) function, a novel and versatile activation function that unifies most common activation functions under a single formula. APA can be applied in both intermediate layers and attention layers, significantly outperforming the state-of-the-art on several imbalanced benchmarks such as ImageNet-LT, iNaturalist2018, Places-LT, CIFAR100-LT and LVIS, and on balanced benchmarks such as ImageNet1K, COCO and V3DET.</p>
+<p>The arXiv link is <a href="https://arxiv.org/abs/2407.08567">here</a>.</p>
 <h3>Definition</h3>
 
 <p>The Adaptive Parametric Activation (APA) is defined as: APA(z, λ, κ) = (λ exp(−κz) + 1)^(1/(−λ)). APA unifies most activation functions under the same formula, as shown in Figure 1.</p>
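As a rough illustration of the definition in the diff above (not code shipped with this Space), the APA formula can be sketched in a few lines of Python. The reduction to the Sigmoid at λ = 1, κ = 1 follows directly from the formula, since (exp(−z) + 1)^(−1) = 1 / (1 + exp(−z)):

```python
import math

def apa(z: float, lam: float, kappa: float) -> float:
    """Adaptive Parametric Activation: APA(z, λ, κ) = (λ·exp(−κz) + 1)^(1/(−λ))."""
    return (lam * math.exp(-kappa * z) + 1.0) ** (-1.0 / lam)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Sanity check: with λ = 1 and κ = 1, APA coincides with the Sigmoid.
for z in (-2.0, 0.0, 3.5):
    assert abs(apa(z, 1.0, 1.0) - sigmoid(z)) < 1e-12
```

Other choices of λ and κ bend the curve away from the Sigmoid, which is what lets the single formula cover the family of activations referred to in Figure 1.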