_id
stringlengths 36
36
| text
stringlengths 5
665k
| marker
stringlengths 3
6
| marker_offsets
sequence | label
stringlengths 28
32
|
---|---|---|---|---|
86f8517d-2113-4639-9069-746f5fea6e5d | Network compression [1]} is a common technique to reduce the number of operations, model size, energy consumption, and over-training of deep neural networks. As neural network synapses and neurons can be redundant, compression techniques attempt to reduce the total number of them, effectively reducing multipliers. Several approaches have been successfully deployed without much loss in accuracy, including parameter pruning [2]}, [3]}, [4]} (selective removal of parameters based on a particular ranking and regularization), low-rank factorisation [5]}, [6]}, [7]} (using matrix decomposition to estimate informative parameters), compact network architectures [8]}, [9]}, [10]}, [11]}, and knowledge distillation [12]} (training a compact network with distilled knowledge of a large network).
| [11] | [
[
681,
685
]
] | https://openalex.org/W2279221249 |
ed66cc46-4099-42be-9576-cd663367564a | The Jet-SSD architecture is shown in Figure REF . We modified the original architecture [1]}. Due to target hardware constraints, all filters in convolution layers are of size \(3\times 3\) with no dilatation and all pooling layers have \(2\times 2\) filters. Each convolution block is followed by batch normalization [2]}, [3]} and parametric rectified linear unit (PReLU) layers. To compress the model we used half of the channels of the VGG-16 in each layer and removed bias from all convolution layers. We also noticed that the extra layers proposed by the original paper do not contribute to accurate detection due to the size of jets and thus we removed them already at the training time. Retaining the deeper layers in the base network does not show improvements in the final detection results either, but we noticed that they are critical during training due to additional signal during back-propagation. Hence, we only purge them at inference.
| [1] | [
[
88,
91
]
] | https://openalex.org/W3106250896 |
6708efd9-8b2c-4892-bef2-41d51511a72b | The Jet-SSD architecture is shown in Figure REF . We modified the original architecture [1]}. Due to target hardware constraints, all filters in convolution layers are of size \(3\times 3\) with no dilatation and all pooling layers have \(2\times 2\) filters. Each convolution block is followed by batch normalization [2]}, [3]} and parametric rectified linear unit (PReLU) layers. To compress the model we used half of the channels of the VGG-16 in each layer and removed bias from all convolution layers. We also noticed that the extra layers proposed by the original paper do not contribute to accurate detection due to the size of jets and thus we removed them already at the training time. Retaining the deeper layers in the base network does not show improvements in the final detection results either, but we noticed that they are critical during training due to additional signal during back-propagation. Hence, we only purge them at inference.
| [2] | [
[
320,
323
]
] | https://openalex.org/W2949117887 |
ca190aa0-ba89-411b-9c97-2fa7351399cc | FL at mobile edge networks suffers from high training latency due to limited communication bandwidth. To address that, various communication-efficient distributed learning algorithms have been proposed to improve the communication efficiency of FL. Specifically, FedAvg was proposed in [1]} to reduce the number of communication rounds by running multiple steps of SGD update on devices before aggregating their updates at the edge server to compute the new model. Various communication compression techniques such as sparsification [2]} and quantization [3]} were also designed in FedAvg to reduce the size of messages transmitted between edge server and devices in each communication round of FL. Considering the resource constraints of mobile edge networks, learning and resource allocation are jointly optimized in [4]}, [5]}, [6]}, [7]}, [8]}, [9]} to minimize the training latency of FedAvg at mobile edge networks. All of the aforementioned studies assume a single edge server that aggregates model updates from all devices in each communication round. However, since the coverage of a single edge server is inherently limited, the proposed solutions cannot scale to a large number of devices.
| [2] | [
[
533,
536
]
] | https://openalex.org/W2964267428 |
2f073529-4c9a-4177-a0b3-338ab7667385 | FL at mobile edge networks suffers from high training latency due to limited communication bandwidth. To address that, various communication-efficient distributed learning algorithms have been proposed to improve the communication efficiency of FL. Specifically, FedAvg was proposed in [1]} to reduce the number of communication rounds by running multiple steps of SGD update on devices before aggregating their updates at the edge server to compute the new model. Various communication compression techniques such as sparsification [2]} and quantization [3]} were also designed in FedAvg to reduce the size of messages transmitted between edge server and devices in each communication round of FL. Considering the resource constraints of mobile edge networks, learning and resource allocation are jointly optimized in [4]}, [5]}, [6]}, [7]}, [8]}, [9]} to minimize the training latency of FedAvg at mobile edge networks. All of the aforementioned studies assume a single edge server that aggregates model updates from all devices in each communication round. However, since the coverage of a single edge server is inherently limited, the proposed solutions cannot scale to a large number of devices.
| [4] | [
[
819,
822
]
] | https://openalex.org/W2920095265 |
a26f5ab3-b663-453d-b2d1-0e42980f237b | FL at mobile edge networks suffers from high training latency due to limited communication bandwidth. To address that, various communication-efficient distributed learning algorithms have been proposed to improve the communication efficiency of FL. Specifically, FedAvg was proposed in [1]} to reduce the number of communication rounds by running multiple steps of SGD update on devices before aggregating their updates at the edge server to compute the new model. Various communication compression techniques such as sparsification [2]} and quantization [3]} were also designed in FedAvg to reduce the size of messages transmitted between edge server and devices in each communication round of FL. Considering the resource constraints of mobile edge networks, learning and resource allocation are jointly optimized in [4]}, [5]}, [6]}, [7]}, [8]}, [9]} to minimize the training latency of FedAvg at mobile edge networks. All of the aforementioned studies assume a single edge server that aggregates model updates from all devices in each communication round. However, since the coverage of a single edge server is inherently limited, the proposed solutions cannot scale to a large number of devices.
| [5] | [
[
825,
828
]
] | https://openalex.org/W3090615085 |
3b0a99aa-e41d-4186-a540-4815b109ca04 | We now present a runtime analysis of CE-FedAvg. Here, the communication time of downloading models from the edge server by each device is ignored because the download bandwidth is usually much larger than upload bandwidth for the device-to-edge communication in practice [1]}. Similarly, the computation time for model aggregation at edge servers is ignored because the involved computation workload is rather small compared to the computation capabilities of edge servers.
| [1] | [
[
271,
274
]
] | https://openalex.org/W2995022099 |
f797df59-b65b-40cc-b257-aa07c758468b | Before stating our results, we make the following assumptions that are common in the literature[1]}, [2]} to facilitate our convergence analysis.
| [1] | [
[
95,
98
]
] | https://openalex.org/W4283798991 |
d5c726a5-d086-466c-97b4-7dbc4eaf19c4 | For CE-FedAvg, we set the mixing matrix \(\mathbf {H}\) following Assumption REF and the number of gossip steps in each global aggregation round \(\pi = 10\) in all experiments. To demonstrate the effectiveness of CE-FedAvg, we compare it with three baselines: FedAvg [1]}, Hier-FAvg [2]} and Local-Edge. For fair comparison, the baseline algorithms are adapted as follows:
| [2] | [
[
287,
290
]
] | https://openalex.org/W3045638580 |
d18f78c3-25f3-42dc-8209-d749836805d0 | In [1]} (based on the previous work [2]} in the case of quadratic nonlinearities), we have proposed a new variational approach to characterize
the periodic waves of the stationary equation (REF ) as minimizers of the following constrained variational problem:
\(r_{c,m}:= \inf _{u\in H^{1}_{\rm per}} \left\lbrace \mathcal {B}_c(u): \quad \oint u^4 dx=1, \quad \frac{1}{2\pi } \oint u dx = m \right\rbrace ,\)
| [2] | [
[
36,
39
]
] | https://openalex.org/W2954400949 |
e9e0fdda-6e20-4234-9ae8-be60f8f13cdb | We also recall that since \(\partial _x \psi \) has exactly two zeros on the \(2\pi \) -periodic domain, Sturm's nodal theory from [1]} (see also Proposition 2.4 in [2]}) implies that \(\mathcal {L}\) is not positive and admits at least one negative eigenvalue: \(n(\mathcal {L}) \ge 1\) .
Since \(\psi \) is related to a minimizer of the variational problem (REF ) with two constraints: \(n(\mathcal {L}) \le 2\) . Hence, we
have \(1 \le n(\mathcal {L}) \le 2\) .
| [1] | [
[
132,
135
]
] | https://openalex.org/W2137311213 |
3b1260e0-137f-4f94-8a32-62524dc0e4b5 | Once nodal forces are calculated, the set of nodal velocities \(v_i\) are computed and the equation of motion (REF ) is integrated during which the nodal positions \(\lbrace r_i\rbrace \) are advanced in time.
The time-step size is typically selected based on an error tolerance scheme [1]}, or by employing more sophisticated techniques such as multi-stepping or subcycling algorithms [2]}, [3]}.
| [1] | [
[
288,
291
]
] | https://openalex.org/W1990117573 |
df5b8feb-7ef3-4502-beed-cbfda90a606c | Eq. (REF ) is essentially a map from graph to vector per node, which is computationally similar to the task of constructing force fields or interatomic potentials, a very active research field of materials science [1]}.
Such interatomic potentials, especially when trained from quantum mechanical calculations, allow for orders of magnitude speed-ups in molecular dynamics (MD) simulations compared to ab initio MD with similar accuracy, e.g. [2]}.
Tremendous progress has been made utilizing either traditional force field models based on physical intuition of the underlying materials or ML methods that often take inspirations from rapid developments in deep neural networks. While the former often offer better efficiency, interpretability and generalizability, they may be more time-consuming to develop, especially when applied to complicated structures with a lack of quantitative physical justifications [3]}, [4]}, [5]}. Modern ML-based models, on the other hand, might be less efficient, interpretable or generalizable, but could be trained with enough data even in the absence of deep physical knowledge [1]}, [7]}, [8]}, [9]}, [10]}. Given our goal of mapping a relatively complicated target property – nodal displacements integrated over multiple steps – we take the latter approach.
| [2] | [
[
443,
446
]
] | https://openalex.org/W1978183953 |
596bcc86-e481-4bd6-865c-9013cf669323 | This section is due to Vakhnenko and Parkes (see Appendix A in
[1]}). For completeness and readability, we state it in the
following.
| [1] | [
[
63,
66
]
] | https://openalex.org/W2026231496 |
bc42b160-80da-4f30-a4f9-0b141fee84d7 | Let \({C}\) be a pointed Weierstrass model of \((C, P)\) . Let us recall
the definition of the pointed Weierstrass class \([\mathfrak {u}] \in \mathrm {Pic}(S)\) of \({C}\) similarly to [1]}, Definition 2.7.
Let
\(y^2+Q(x)y=P(x)\)
| [1] | [
[
189,
192
]
] | https://openalex.org/W2022862710 |
7da499f0-5343-4a6e-b364-ea35e4538f56 | This work provides considerable progress in this direction. By using simplified yet practical point-to-point information theoretical tools, namely by using standard linearly precoded Gaussian codes, treating interference as noise, and the non-coherent ergodic rate bounds popularized by the massive MIMO literature [1]}, [2]}, we propose a novel distributed precoding design, coined team MMSE (TMMSE) precoding, generalizing classical centralized MMSE precoding [2]} to systems with distributed CSIT. Its optimality in terms of achievable ergodic rates under a sum-power constraint is formally established by revisiting the uplink-downlink (UL-DL) duality principle [2]} in light of the distributed CSIT assumption. Our first main resultThe preliminary version of this work [5]} focuses on a simplified cell-free massive MIMO setup. This work extends [5]} to more general networks, Gaussian fading, and channel estimation errors; provides complete theoretical derivations; improves the comparison with the previous literature. is showing that the problem of optimal TMMSE precoding design can be solved by means of a useful set of optimality conditions in the form of an infinite dimensional linear system of equations, for which many standard solution tools exist. The key novelty lies in the introduction of previously unexplored elements from the theory of teams, a mathematical framework for multi-agent coordinated decision making in presence of asymmetry of information. This framework was pioneered in theoretical economics by Marschak and Radner [7]}, [8]}, and then further developed in the control theoretical literature (see the excellent survey in [9]}). Early applications of team theory to wireless communication, including the problem of distributed precoding design, are reported in [10]}, [11]}. However, compared to previous attempts for distributed precoding design, this work is the first exploiting (and partially extending) known results for the class of quadratic teams [8]}, [9]}, which is one of the few cases where solid globally optimal solution approaches are available.
| [10] | [
[
1799,
1803
]
] | https://openalex.org/W2155174176 |
572ca752-d06d-417e-9ea5-a945ce3e453f | In the second part of this work, the aforementioned optimality conditions are specialized to cell-free massive MIMO networks. To the best of the authors' knowledge, this is the first work connecting cell-free massive MIMO to the theory of teams. The first non-trivial application is the derivation of the optimal TMMSE precoders based on local CSIT only, improving upon previous local precoding strategies studied, e.g., in [1]}, [2]}, [3]}. We then consider a cell-free massive MIMO network with serial fronthaul, an efficient architecture also known as a radio stripe [4]}, [5]}. We derive optimal TMMSE precoders by assuming that CSIT is shared unidirectionally along the stripe. The proposed scheme can be efficiently implemented in a sequential fashion, an idea that has been explored already in [4]}, [7]}, [5]} for UL processing, and in [9]} under a different cellular context. As a byproduct, we also obtain a novel distributed implementation of classical centralized MMSE precoding tailored to radio stripes. Finally, we present extensive numerical results comparing the effects of different CSIT sharing patterns in a radio stripe system and evaluating the suboptimality of the competing schemes. Interestingly, our numerical results suggest that unidirectional information sharing is a promising candidate for enlarging the domain of applications of radio stripes beyond the regimes supported by centralized or local precoding; for instance, it may allow effective interference management for a wider range of mobility patterns. Moreover, we show that the known local MMSE precoding scheme studied, e.g., in [3]}, [7]}, is optimal in a non line-of-sight (NLoS) scenario, while it may be significantly outperformed by the TMMSE solution for local CSIT in the presence of line-of-sight (LoS) components.
| [1] | [
[
424,
427
]
] | https://openalex.org/W2286275639 |
92d2da84-0100-4c55-a786-3ab372525a56 | In the second part of this work, the aforementioned optimality conditions are specialized to cell-free massive MIMO networks. To the best of the authors' knowledge, this is the first work connecting cell-free massive MIMO to the theory of teams. The first non-trivial application is the derivation of the optimal TMMSE precoders based on local CSIT only, improving upon previous local precoding strategies studied, e.g., in [1]}, [2]}, [3]}. We then consider a cell-free massive MIMO network with serial fronthaul, an efficient architecture also known as a radio stripe [4]}, [5]}. We derive optimal TMMSE precoders by assuming that CSIT is shared unidirectionally along the stripe. The proposed scheme can be efficiently implemented in a sequential fashion, an idea that has been explored already in [4]}, [7]}, [5]} for UL processing, and in [9]} under a different cellular context. As a byproduct, we also obtain a novel distributed implementation of classical centralized MMSE precoding tailored to radio stripes. Finally, we present extensive numerical results comparing the effects of different CSIT sharing patterns in a radio stripe system and evaluating the suboptimality of the competing schemes. Interestingly, our numerical results suggest that unidirectional information sharing is a promising candidate for enlarging the domain of applications of radio stripes beyond the regimes supported by centralized or local precoding; for instance, it may allow effective interference management for a wider range of mobility patterns. Moreover, we show that the known local MMSE precoding scheme studied, e.g., in [3]}, [7]}, is optimal in a non line-of-sight (NLoS) scenario, while it may be significantly outperformed by the TMMSE solution for local CSIT in the presence of line-of-sight (LoS) components.
| [3] | [
[
436,
439
],
[
1619,
1622
]
] | https://openalex.org/W2965123227 |
93ae3a3c-464d-4b61-9dd5-ab22eb9cec25 | The long-term sum power constraint is chosen because it allows for strong analytical results and simplifies system design. This constraint may be directly relevant for systems such as the radio stripes, treated in Section REF , where all the TXs share the same power supply [1]}. However, note that many simple heuristic methods (such as power scaling factors) can be applied to adapt systems designed under a long-term sum power constraint to more restrictive cases such as per-TX power constraints. Further analyses on different power constraints are left for future work.
| [1] | [
[
274,
277
]
] | https://openalex.org/W3099019646 |
08f68feb-38a0-4471-b42b-522f97cb316b | The proof is based on connecting Problem () to the problem of ergodic rate maximization in a dual UL channel, where \(\mathbf {w}\) is an UL power allocation vector, \(\mathbb {t}_k\) is a distributed UL combiner, and where achievable rates are measured by using the so-called use-and-then-forget (UatF) bound [1]}. The details are given in Appendix REF .
Theorem REF states that the Pareto boundary of \(\mathcal {R}^{\mathrm {hard}}\) can be parametrized by \(K-1\) nonnegative real parameters, i.e., by the weights \(\mathbf {w}\in \mathcal {W}\) . A similar parametrization was already known for deterministic channels (see, e.g., [2]}), or, equivalently, for fading channels with perfect CSIT and CSIR. This work extends the aforementioned results to imperfect and possibly distributed CSIT, and no CSIR. In theory, \(\mathbf {w}\) should be selected according to some network utility (e.g., the sum-rate or the max-min rate). In practice, \(\mathbf {w}\) is often fixed heuristically (e.g., from the real UL powers), while the network utility is optimized a posteriori by varying the DL power allocation policy \(\lbrace p_k\rbrace _{k=1}^K\) .
| [2] | [
[
641,
644
]
] | https://openalex.org/W2022800466 |
d5ca3ed3-1450-411d-88b1-5444cc5dc5af | Problem () belongs to the known family of team decision problems [1]}, [2]}, which are generally difficult to solve for general information constraints \(\mathbb {t}_k \in \mathcal {T}\) . However, by rewriting the objective as \(\mathrm {MSE}_k(\mathbb {t}_k)=\mathsf {E}[c_k(\mathbb {H},\mathbb {t}_{1,k},\ldots ,\mathbb {t}_{L,k})]\) ,
\(c_k(\mathbf {H},\mathbf {t}_{1,k},\ldots ,\mathbf {t}_{L,k}):=\mathbf {t}_k^\mathsf {H}\mathbf {Q}\mathbf {t}_k -2\Re \left(\mathbf {g}_k^\mathsf {H}\mathbf {t}_k\right) +1,\)
| [1] | [
[
65,
68
]
] | https://openalex.org/W2092792265 |
49f22b00-e11d-42d7-b608-77081c74014f | Problem () belongs to the known family of team decision problems [1]}, [2]}, which are generally difficult to solve for general information constraints \(\mathbb {t}_k \in \mathcal {T}\) . However, by rewriting the objective as \(\mathrm {MSE}_k(\mathbb {t}_k)=\mathsf {E}[c_k(\mathbb {H},\mathbb {t}_{1,k},\ldots ,\mathbb {t}_{L,k})]\) ,
\(c_k(\mathbf {H},\mathbf {t}_{1,k},\ldots ,\mathbf {t}_{L,k}):=\mathbf {t}_k^\mathsf {H}\mathbf {Q}\mathbf {t}_k -2\Re \left(\mathbf {g}_k^\mathsf {H}\mathbf {t}_k\right) +1,\)
| [2] | [
[
71,
74
]
] | https://openalex.org/W585548487 |
67c5856c-f306-4795-98fc-a95a135a4070 | where the last equation follows from the definition of \(\bar{\mathbb {V}}_l\) , and where we identify another recursive structure among the remaining terms. By continuing until termination, we finally obtain \(\prod _{i=1}^{l-1}\bar{\mathbb {V}}_i +\sum _{j< l}\mathbb {P}_j\mathbb {V}_j\prod _{i=1}^{j-1}\bar{\mathbb {V}}_i = \mathbf {I}\) , which proves the main statement under the assumption that all the matrix inverses involved exist. This assumption is indeed always satisfied, as shown in Appendix REF .
By locally computing precoders based on \(S_l\) only, and at the expense of some performance loss, the scheme in (REF ) eliminates the additional overhead required by centralized precoding to share back the computed \(K\times LN\) precoding matrix from the CPU to the TXs. Furthermore, inspired by the schemes proposed in [1]}, [2]}, [3]} for UL processing exploiting the peculiarity of a serial fronthaul, the CSIT sharing overhead can be further reduced as follows:
| [2] | [
[
844,
847
]
] | https://openalex.org/W2974046131 |
6000306b-f498-4f0c-8e30-96f806d4360a | where the last equation follows from the definition of \(\bar{\mathbb {V}}_l\) , and where we identify another recursive structure among the remaining terms. By continuing until termination, we finally obtain \(\prod _{i=1}^{l-1}\bar{\mathbb {V}}_i +\sum _{j< l}\mathbb {P}_j\mathbb {V}_j\prod _{i=1}^{j-1}\bar{\mathbb {V}}_i = \mathbf {I}\) , which proves the main statement under the assumption that all the matrix inverses involved exist. This assumption is indeed always satisfied, as shown in Appendix REF .
By locally computing precoders based on \(S_l\) only, and at the expense of some performance loss, the scheme in (REF ) eliminates the additional overhead required by centralized precoding to share back the computed \(K\times LN\) precoding matrix from the CPU to the TXs. Furthermore, inspired by the schemes proposed in [1]}, [2]}, [3]} for UL processing exploiting the peculiarity of a serial fronthaul, the CSIT sharing overhead can be further reduced as follows:
| [3] | [
[
850,
853
]
] | https://openalex.org/W3183789091 |
077cc417-cc96-4801-adbb-630b3fc07ec8 | The solution to Problem () corresponds to the projection of \(\mathbb {t}_0\in \mathcal {H}\) onto the closed linear subspace \(\mathcal {T} \subseteq {\mathcal {H}}\) . By the Hilbert projection theorem, this projection is unique and always exists [1]}.
| [1] | [
[
250,
253
]
] | https://openalex.org/W1534416612 |
7c34f4c0-9576-464f-8bda-45a42db1c3d2 | [Order of the truncation [1]}]
Let \(z(t)\) be a continuously differentiable function in \(t\) and let \(h=t_{n}-t_{n-1}\) . Then,
\(\sum _{i=0}^m (-1)^i\alpha _{i,l,m} h^i\left. \frac{\mathrm {d}^iz(t)}{\mathrm {d}t^i}\right|_{t=t_n}- \sum _{i=0}^l \alpha _{i,m,l} h^i \left. \frac{\mathrm {d}^iz(t)}{\mathrm {d}t^i}\right|_{t=t_{n-1}} = \mathcal {O}\left(h^{l+m+1}\right)\)
| [1] | [
[
25,
28
]
] | https://openalex.org/W2151807147 |
afd558c0-3ec9-4fe5-a936-c74d8c1d4f17 | However, there exist various motivated BSM extensions where the new physics is desirable to be at the low scale. Light new physics, for example, can account for the explanation of existing anomalies such as the longstanding anomalous magnetic moment of muon [1]}, [2]}. Moreover, concerning the dark sector, several dark matter models require the presence of light mediators to account for dark matter self interaction [3]}, [4]} and/or the recent XENON1T anomaly [5]}, [6]}, [7]}. Finally, the Peccei-Quinn solutions of strong CP-problem also implies the presence of a light Nambu-Goldstone boson called axion [8]}, [9]}, [10]}, while neutrino mass models involving dynamical lepton number breaking often lead to a light pseudoscalar boson called Majoron (Majorana Neutrinos) or Diracon (Dirac neutrinos) [11]}, [12]}.
| [1] | [
[
258,
261
]
] | https://openalex.org/W4235490860 |
7042dc71-17d2-42c0-a394-9b5578ed0f88 | The resulting projected sensitivities in the (\(m_X, g_X\) ) parameter space are illustrated at 90% C.L. in the left and right panels of Fig. REF , respectively.
As can be seen, in a future CE\(\nu \) NS (E\(\nu \) ES) measurement among the different interaction channels the scalar (tensor) interaction will be constrained with maximum sensitivity, while for both CE\(\nu \) NS and E\(\nu \) ES the pseudoscalar interaction will be the least constrained. A direct comparison of CE\(\nu \) NS and E\(\nu \) ES sensitivities leads to the conclusion that future E\(\nu \) ES measurements will be a more powerful probe for the investigation of novel light mediators.
At this point it is interesting to compare our projected sensitivities with existing constraints in the literature. First, by focusing on the universal light vector mediator scenario, in the left panel of Fig. REF we compare our present results at 90% C.L. with existing constraints from the analysis of COHERENT-CsI data [1]} as well as with constraints coming from the recent CONNIE [2]} and CONUS [3]} data and the anomalous magnetic moment of the muon reported in Ref. [4]}. Also shown are the corresponding 90% C.L. constraints coming from the XENON1T excess using E\(\nu \) ES [5]}. As can be seen, the latter constraints will be overridden by the future CE\(\nu \) NS measurements at direct detection dark matter detectors. Finally, it becomes evident that the E\(\nu \) ES channel dominates over CE\(\nu \) NS in the low mass region for \(m_V \le 2 \) MeV.
| [4] | [
[
1138,
1141
]
] | https://openalex.org/W3049342414 |
9e5e6f26-ea5b-4345-92a0-42ff52b832dc | Left and right panels of Fig. REF illustrate the projected sensitivity at 90% C.L. for the studied \(B-L\) and \(L_\mu - L_\tau \) model, respectively. The results are shown for both CE\(\nu \) NS and E\(\nu \) ES at a future dark matter direct detection experiment with the same general conclusions as discussed previously. Moreover, as expected, the exclusion curves corresponding to the \(B-L\) model are more stringent. In order to compare with other experimental probes, existing limits placed by
dielectron resonances at ATLAS [1]}, electron beam-dump fixed target experiments [2]}, [3]} as well as Dark Photon searches at BaBar [4]}, [5]} and LHCb [6]}, are superimposed. Also shown are constraints derived from the analysis of the COHERENT data [7]} and the XENON1T excess [8]}.
As can be seen, our projected sensitivities for CE\(\nu \) NS dominate in mass range \(0.1 \le m_V \le 1\) GeV, being complementary to Babar and fixed target experiments. In the same vein, our projected sensitivities obtained using E\(\nu \) ES, are complementary to fixed target experiments and particularly relevant for \(m_V \le 1\) MeV.
| [8] | [
[
785,
788
]
] | https://openalex.org/W3049092069 |
2a3b2fd4-0f61-425b-9760-b6d30ccbad55 | To define the optimal number of cloudlets to have a trade-off between the QoS and the cost for the service provider, Peng et al. [1]} used an improved affinity propagation algorithm. In their clustering, they considered user movement and load balancing. They divided the density of mobile users before and after moving into three clusters: sparse, discrete, and dense. The place of cloudlets would be changed by the density to cover more users.
The authors generated a dataset for different numbers of mobile users, and their algorithm outperformed K-means and mini-batch K-means.
Wang et al. [2]} have proposed the first study for the edge server placement. They used mixed integer programming (MIP) for edge servers placement and base station allocation by considering the access delay and load balancing as the two objectives for their formulation. It is considered that the fixed number of identical edge servers is given. Their results, on Shanghai Telecom's dataset, are compared with the K-means, top-K, and random approaches for different numbers of base stations on a large scale, for 300 to 3000 base stations. Although their experiments showed K-means has less delay and top-K creates a more balanced cluster, in total, their approach has better results. In [3]}, for the same dataset, they have used the MIP and K-means combination for 20 to 200 base stations. Authors in [4]} used a three-step approach to find the edge server placement to minimize the delay while considering the workload balancing. They used a decision tree in the first level with the help of a genetic algorithm and multiple criteria decision-making techniques in the following steps. They compared their proposed approach with the two other greedy methods, where the nearest edge server or the edge server with the most available computing power is selected.
To find the optimal placement and resource allocation in MEC, various approaches are investigated, including mainly: clustering [1]}, top-K [6]}, MIP [2]}, combination of MIP and K-means [3]}, heuristics [9]}, [4]}, ILP and game theory [11]}.
| [6] | [
[
1984,
1987
]
] | https://openalex.org/W2525400794 |
9fb1cb91-1e1b-4850-9c6e-697251eb99fd | The performance in terms of accuracy for face identification and verification is defined as follows [1]}:
\(\texttt {Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}\)
| [1] | [
[
100,
103
]
] | https://openalex.org/W2158698691 |
8be16442-6f00-486e-b3a2-3a0afada2514 | For this experiment, our model was trained for 12 epochs using a mini-batch size of 2 and the Stochastic Gradient Descent (SGD) optimizer, with the parameters learning rate of \(0.002\) , \(\beta _1=0.9\) , and \(\beta _2=0.0001\) . The learning rate is a tuning hyper-parameter that determines the step size at each iteration while minimizing the loss function. The \(\beta _1\) stands for the momentum, which adds a fraction of the previous weight update to the current one to avoid local minima and speeds up the training time. The \(\beta _2\) stands for the weight decay, a regularization technique that adds a small penalty to the loss function. As an object detector, the model applies two loss functions: the binary cross-entropy loss [1]} for classification and the smooth-L1 loss [2]} for the bounding box regression (localization).
| [1] | [
[
745,
748
]
] | https://openalex.org/W1503398984 |
aeaf8468-f9d9-45dd-b430-76fb4f468cc4 | In our experiment, we applied the ResNet [1]} backbone and the ResNext [2]}. We compared the ResNet-50 and Resnet-101 with 50 and 101 layers, respectively. For the ResNeXt, we considered the ResNeXt-101-32x4d, which stands for the architecture with 101 layers, 32 parallel pathways, and a bottleneck width of 4 dimensions. We also applied the ResNeXt-101-64x4d with higher cardinality of 64.
| [2] | [
[
71,
74
]
] | https://openalex.org/W2549139847 |
1f120e42-a445-40ce-8674-ed7fc5f002fb | A sequence of several CNN layers usually leads to an increase in the semantic value of feature maps, while the spatial dimension (resolution) decreases. To overcome the low resolution of the feature maps in the upper layers, we applied the Feature Pyramid Network (FPN) [1]}. It takes an image as an input and outputs the feature maps at multiple levels (different sizes) in a fully convolutional fashion, which improves the detection of small objects.
| [1] | [
[
270,
273
]
] | https://openalex.org/W2565639579 |
ea26ca2c-f7c6-4172-8bbb-0b742405a88f | Let us consider the annihilator bundle of \(S\) , \(\mathcal {A}S\) , namely the vector bundle of rank \(k\) , whose fibers are given by
\(\mathcal {A}_qS=\lbrace \lambda \in T^*_qM\mid \langle \lambda ,T_qS\rangle =0\rbrace ,\qquad \forall q\in S.\)
At a point \(q\in S\) , let us fix a basis of the fiber \(\mathcal {A}_qS\) , say \(\lbrace \lambda _1,\ldots ,\lambda _k\rbrace \) and define, for any \(j=1,\ldots ,k\) , the element \(v_j\in q\) dual to \(\lambda _i\) via the Hamiltonian \(H\) , i.e.
\(v_j=\pi _*\vec{H}(\lambda _j)=\sum _{i=1}^N\langle \lambda _j,X_i(q)\rangle X_i(q)\qquad j=1,\ldots ,k.\)
(Step 1) If \(q\) is non-characteristic, then the set \(\lbrace v_1,\ldots ,v_k\rbrace \) is linearly independent.
Indeed assume there exists constants \(\alpha _i\) for \(i=1,\ldots ,k\) , such that \(\sum _{i=1}^k\alpha _iv_i=0\) . Then,
\(0 =\sum _{j=1}^k\alpha _jv_j=\sum _{j=1}^k\alpha _j\sum _{i=1}^N\langle \lambda _j,X_i(q)\rangle X_i(q)=\sum _{i=1}^N\langle \sum _{j=1}^k\alpha _j\lambda _j,X_i(q)\rangle X_i(q)=\pi _*\vec{H}(\lambda ),\)
having set \(\lambda =\sum _{j=1}^k\alpha _j\lambda _j\in \mathcal {A}_qS\) . Notice that, by the Lagrange multiplier rule, denoting by \(v_\lambda =\pi _*\vec{H}(\lambda )\) , for any \(\lambda \in T_q^*M\) , we have
\(\Vert v_\lambda \Vert ^2_g=\inf \left\lbrace \sum _{i=1}^Nu_i^2\mid v_\lambda =\sum _{i=1}^Nu_iX_i(q)\right\rbrace =\sum _{i=1}^N\langle \lambda ,X_i(q)\rangle ^2=2H(\lambda ).\)
Therefore, (REF ) implies that \(\Vert \pi _*\vec{H}(\lambda )\Vert ^2_g=2H(\lambda )=0\) , or equivalently:
\(\langle \lambda ,q\rangle = 0.\)
Since \(\lambda \in \mathcal {A}_qS\) and \(q\) is non-characteristic, by (REF ), we deduce that \(\lambda =0\) . Thus:
\(0=\lambda =\sum _{j=1}^k\alpha _j\lambda _j\qquad \Rightarrow \qquad \alpha _j=0,\quad \text{for any }j=1,\ldots ,k,\)
since \(\lbrace \lambda _1,\ldots ,\lambda _k\rbrace \) was a basis of the fiber of \(\mathcal {A}S\) . This concludes the proof of the first step.
Define now the sub-Riemannian exponential map from \(S\) , i.e. the map
\(E\colon D\cap \mathcal {A}S\rightarrow M;\qquad E(\lambda )=\pi \circ e^{\vec{H}}(\lambda ),\)
where \(D\subset T^*M\) is the open set where the flow of \(\vec{H}\) is defined up to time 1. Consider also the zero section of the annihilator bundle, namely
\(i\colon S\rightarrow \mathcal {A}S; \qquad i(q)=(q,0)\in \mathcal {A}_qS.\)
(Step 2) \(E\) is a local diffeomorphism at points of \(i(S)\) .
To prove the claim, we consider a point \((q,0)\in i(S)\) and verify that \(d_{(q,0)}E\) is invertible. Identifying \(T_{(q,0)}(D\cap \mathcal {A}S)\cong T_qS\oplus \mathcal {A}_qS\) , we have, on the one hand \(E\circ i=Id_S\) , therefore for a vector \(v=(v,0)\in T_qS\oplus \mathcal {A}_qS\) ,
\(d_{(q,0)}E(v)=\left.\frac{d}{dt}\right|_{t=0}E(\lambda (t))=\left.\frac{d}{dt}\right|_{t=0}E\circ i(\gamma (t))=\left.\frac{d}{dt}\right|_{t=0}\gamma (t)=v,\)
since \(\lambda (t)=(\gamma (t),0)\) , with \(\gamma \colon (-\varepsilon ,\varepsilon )\rightarrow S\) , such that \(\gamma (0)=q\) and \(\dot{\gamma }(0)=v\) . On the other hand, take an element \(\lambda =(0,\lambda )\in T_qS\oplus \mathcal {A}_qS\) , then by definition, we obtain
\(d_{(q,0)}E(\lambda )=\left.\frac{d}{dt}\right|_{t=0}E(q,t\lambda )=\left.\frac{d}{dt}\right|_{t=0}E(q,t\lambda )=\left.\frac{d}{dt}\right|_{t=0}\pi \circ e^{t\vec{H}}(\lambda )=\pi _*\vec{H}(\lambda )=v_\lambda .\)
Thus, choosing any basis for \(T_qS\) and the basis \(\lbrace \lambda _1,\ldots ,\lambda _k\rbrace \) for \(\mathcal {A}_qS\) , as before, we may write the \(n\times n\) matrix representing the differential of \(E\) as
\(d_{(q,0)}E=\begin{pNiceArray}{cc|c}[margin]{2-2}{Id_{n-k}} & & \\& & v_1,\ldots ,v_k\\\qquad \mathbf {0} & &\end{pNiceArray},\)
where the vectors \(v_j\) are defined in (REF ). Since, by the previous step, the set \(\lbrace v_1,\ldots ,v_k\rbrace \) is linearly independent in \(q\) , we conclude that \(dE\) is invertible at \(i(S)\) .
(Step 3) There exists \(U\subset D\cap \mathcal {A}S\) , such that \(E\vert _U\) is a diffeomorphism on its image. Moreover, \(U\) can be chosen of the form:
\(U=\lbrace \lambda \in \mathcal {A}S\mid \sqrt{2H(\lambda )}<r_0\rbrace , \qquad \text{for some }r_0>0.\)
The proof of this step follows verbatim what has been done in [1]}, cf. also [2]}, once we have verified that \(\sqrt{2H(\cdot )}\) is a fiber-wise norm on the annihilator bundle. Since \(H\) is quadratic on fibers, it immediately follows that \(\sqrt{2H(\cdot )}\) is positive, 1-homogeneous and sub-additive. We are left to prove that, for \(\lambda \in \mathcal {A}_qS\) ,
\(\sqrt{2H(\lambda )}=0\qquad \Leftrightarrow \qquad \lambda =0.\)
As already remarked in (REF ), an element \(\lambda \in \mathcal {A}_qS\) , such that \(\sqrt{2H((q,\lambda ))}=0\) , annihilates both the distribution and \(T_qS\) , thus, being \(q\) non-characteristic, \(\lambda =0\) .
(Step 4) \(E(U)=\lbrace p\in M\mid \delta _\mathrm {S}(p)<r_0\rbrace =S_{r_0}\cup S\) and, for elements \((q,\lambda )\in U\) we have \(\delta _\mathrm {S}(E(q,\lambda ))=\sqrt{2H(\lambda )}\) . In particular, \(\delta _\mathrm {S}\in C^\infty (S_{r_0})\) .
Firstly, we recall that, for an element \(\lambda \in U\) , the length of the curve
\([0,1]\ni t\mapsto \pi \circ e^{t\vec{H}}(\lambda )\in M\)
is equal to \(\sqrt{2H(\lambda )}<r_0\) , as one can check using (REF ). Thus, \(E(U)\subset S_{r_0}\cup S\) . Secondly, we prove the opposite inclusion: up to restricting \(r_0\) , we may assume that \(S_{r_0}\subset K\) , for a compact set \(K\subset M\) . Therefore, for an element \(p\in S_{r_0}\) , there exists a minimizing geodesic \(\gamma \colon [0,1]\rightarrow M\) such that
\(\gamma (0)=q\in S,\qquad \gamma (1)=p\qquad \text{and}\quad \ell (\gamma )=\delta _\mathrm {S}(p).\)
Applying Lemma REF , we deduce that \(\gamma \) is not an abnormal geodesic, meaning that there exists a unique normal lift for \(\gamma \) , with initial covector given by \(\lambda \in T^*_qM\) , which implies
\(\gamma (t)=\pi \circ e^{t\vec{H}}(\lambda ),\)
and in particular, \(E(q,\lambda )=p\) . Moreover, \(\lambda \in U\) as, by optimality, it satisfies the transversality condition (REF ), and also
\(\ell (\gamma )=\sqrt{2H(\lambda )}<r_0,\)
being \(p\in S_{r_0}\) . Finally, we conclude that \(p\in E(U)\) and \(\delta _\mathrm {S}(E(q,\lambda ))=\sqrt{2H(\lambda )}\) , by (REF ). Since \(\sqrt{2H(\cdot )}\) is smooth, as long as \(H(\lambda )\ne 0\) , we also have that \(\delta _\mathrm {S}\) is smooth on the set \(E(U\setminus i(S))=S_{r_0}\) .
(Step 5) There exists a diffeomorphism \(G\colon (0,r_0)\times \lbrace \delta _\mathrm {S}=r_0\rbrace \rightarrow S_{r_0}\) satisfying item \((ii)\) of the statement. Moreover, \(\Vert \nabla \delta _\mathrm {S}\Vert _g=1\) in \(S_{r_0}\) .
Once again, this part of the proof follows verbatim [1]}.
| [1] | [
[
4345,
4348
],
[
6914,
6917
]
] | https://openalex.org/W3102844574 |
57f60507-12d9-4394-9c94-f4b9a44ca95a | Interestingly, as shown in [1]}, the new (smoothed) classifier is provably robust at \(x\) to \(\ell _2\) -bounded perturbations if the base classifier \(f\) is confident enough at \(x\) . However, the proof of certification heavily exploits the fact that classifiers are restricted to map an input to a fixed number of class probabilities. Thus, directly applying randomized smoothing to classifiers in metric space, such as in few-shot learning, is a challenging task.
| [1] | [
[
27,
30
]
] | https://openalex.org/W2911634294 |
da88c4d2-a563-4642-9c1e-69ea3e500cef | We consider a few-shot classification problem where we are given a set of labeled objects \((x_1, y_1), \dots , (x_m, y_m)\) where \(x_i \in \mathbb {R}^n\) and \(y_i \in \lbrace 1, \dots , K\rbrace \) are corresponding labels. We follow the notation from [1]} and denote \(S_k\) as the set of objects of class \(k.\)
| [1] | [
[
259,
262
]
] | https://openalex.org/W2601450892 |
0606a7d0-55af-4ce3-bd12-a70621c488b3 | In the original literature [1]}, [2]} the randomized smoothing is described as a technique of convolving a base classifier \(f\) with an isotropic Gaussian noise such that the new classifier \(g(x)\) returns the most probable prediction of \(f\) of a random variable \(\xi \sim \mathcal {N}(x, \sigma ^2I)\) , where the choice of Gaussian distribution is motivated by the restriction on \(g\) to be robust against additive perturbations of bounded norm. In this case, given a classifier \(f: \mathbb {R}^n \rightarrow [0,1]\) and smoothing distribution \(\mathcal {N}(0, \sigma ^2I)\) , the classifier \(g\) looks as follows:
\(g(x) = \frac{1}{(2\pi \sigma ^2)^\frac{n}{2}}\int _{\mathbb {R}^n}f(x + \varepsilon ) \exp \left(-\frac{\Vert \varepsilon \Vert ^2_2}{2\sigma ^2}\right)d\varepsilon .\)
| [1] | [
[
27,
30
]
] | https://openalex.org/W2963952467 |
2d46d3bc-2337-4c72-be4f-4cfae31e5c3f | Symmetric positive definite matrices are not new in the Machine Learning literature. They have been used in a plethora of applications [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}), although not always respecting the intrinsic structure or the positive definiteness constraint [18]}, [19]}, [20]}, [21]}, [17]}.
The alternative has been to map manifold points onto a tangent space and employ Euclidean-based tools. Unfortunately, this mapping distorts the metric structure in regions far
from the origin of the tangent space affecting the performance [23]}, [24]}.
| [1] | [
[
135,
138
]
] | https://openalex.org/W1986964250 |
93cc9eae-21ca-4906-b755-26cbd6cb7401 | Symmetric positive definite matrices are not new in the Machine Learning literature. They have been used in a plethora of applications [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}), although not always respecting the intrinsic structure or the positive definiteness constraint [18]}, [19]}, [20]}, [21]}, [17]}.
The alternative has been to map manifold points onto a tangent space and employ Euclidean-based tools. Unfortunately, this mapping distorts the metric structure in regions far
from the origin of the tangent space affecting the performance [23]}, [24]}.
| [10] | [
[
189,
193
]
] | https://openalex.org/W1983496390 |
b2b4713b-bbe0-4fb6-93a3-b75d1dd1e1e9 | Previous work has proposed alternatives to the basic neural building blocks respecting the geometry of the space. For example, transformation layers [1]}, [2]}, [3]}, alternate convolutional layers based on SPDs
[4]} and Riemannian means [5]}, or appended after the convolution [6]}, recurrent models [7]}, projections onto Euclidean spaces [8]}, [9]} and batch normalization [10]}. Our work follows this line, providing explicit formulas for translating Euclidean arithmetic notions into SPDs.
| [1] | [
[
149,
152
]
] | https://openalex.org/W2604865066 |
e3da80a5-3f63-44cc-90b2-36638ae8023c | Our general view, using the vector-valued distance function, allows us to treat Riemannian and Finsler metrics on SPD in a unified framework. Finsler metrics have previously been applied
in compressed sensing [1]}, information geometry [2]}, for clustering categorical distributions [3]}, and in robotics [4]}.
With regard to optimization, matrix backpropagation techniques have been explored [5]}, [6]}, [7]}, with some of them accounting for different Riemannian geometries [8]}, [9]}. Nonetheless, we opt for tangent space optimization [10]} by exploiting the explicit formulations of the exponential and logarithmic map.
| [7] | [
[
405,
408
]
] | https://openalex.org/W2204257188 |
af1ee2b9-f18c-4ad0-9a6f-f4ab3d889781 | Our general view, using the vector-valued distance function, allows us to treat Riemannian and Finsler metrics on SPD in a unified framework. Finsler metrics have previously been applied
in compressed sensing [1]}, information geometry [2]}, for clustering categorical distributions [3]}, and in robotics [4]}.
With regard to optimization, matrix backpropagation techniques have been explored [5]}, [6]}, [7]}, with some of them accounting for different Riemannian geometries [8]}, [9]}. Nonetheless, we opt for tangent space optimization [10]} by exploiting the explicit formulations of the exponential and logarithmic map.
| [9] | [
[
482,
485
]
] | https://openalex.org/W2963728031 |
59d81a91-45a8-44bf-9391-2d883de1f211 | Training:
We follow the standard data augmentation protocol by adding inverse relations to the datasets [1]}.
We optimize the cross-entropy loss with uniform negative sampling defined in Equation REF ,
where \(\mathcal {T}\) is the set of training triples, and \(Y_{t} = -1\) if \(t\) is a factual triple or \(Y_{t} = 1\) if \(t\) is a negative sample.
We employ the AdamW optimizer [2]}.
We conduct a grid search with matrices of dimension \(n \times n\) where \(n \in \lbrace 14, 20, 24\rbrace \) (this is the equivalent of \(\lbrace 105, 210, 300\rbrace \) degrees of freedom respectively) to select optimal dimensions, learning rate and weight decay, using the validation set. More details and set of hyperparameters in Appendix REF .
| [2] | [
[
388,
391
]
] | https://openalex.org/W2908510526 |
db39fc79-66cb-4bb1-b2a7-66e138186602 | Baselines: We compare our models with their respective equivalents in different metric spaces, which are also state-of-the-art models for the task.
For the scaling model, these are MuRE and MuRP [1]}, which perform the scaling operation in Euclidean and hyperbolic space respectively. For the isometric models, we compare to RotC [2]}, RotE and RotH, [3]} (rotations in Complex, Euclidean and hyperbolic space respectively), and RefE and RefH [3]} (reflections in Euclidean and hyperbolic space).
Baseline results are taken from the original papers.
We do not compare to previous work on SPD given that they lack the definition of an arithmetic operation in the space, thus a vis-a-vis comparison is not possible.
| [3] | [
[
351,
354
],
[
443,
446
]
] | https://openalex.org/W3035134435 |
4c235525-f5d0-4a2e-aee2-ad77ec36f005 | Baselines: We compare to TransE [1]}, RotC [2]}, MuRE and MuRP [3]} trained with 55 dimensions.
<TABLE> | [1] | [
[
32,
35
]
] | https://openalex.org/W2127795553 |
5ea0985b-ec42-4766-ba26-ca3feb097e99 | Baselines: We compare to TransE [1]}, RotC [2]}, MuRE and MuRP [3]} trained with 55 dimensions.
<TABLE> | [3] | [
[
63,
66
]
] | https://openalex.org/W2947871958 |
e9b051ee-da16-4439-aa72-658dcd23fd3b | Optimization in Riemannian manifolds normally requires Riemannian Stochastic Gradient Descent (RSGD) [1]} or other Riemannian techniques [2]}. We performed initial tests converting the Euclidean gradient into its Riemannian form, but found it to be less numerically stable and also slower than tangent space optimization [3]}.
With tangent space optimization, we can use standard Euclidean optimization techniques, and respect the geometry of the manifold.
Note that tangent space optimization is an exact procedure, which does not incur losses in representational power. This is the case in \(\operatorname{SPD}_n\) specifically because of a completeness property given by the choice of \(I \in \operatorname{SPD}_n\) as the basepoint: there is always a global bijection between the tangent space and the manifold.
| [3] | [
[
321,
324
]
] | https://openalex.org/W2970757764 |
0256fa52-8938-4ad8-8cde-2e05472adc12 | We train for 300 epochs, with 2 negative samples and early stopping based on the MRR if the model does not improve after 20 epochs. We use the burn-in strategy [1]} training with a 10 times smaller learning rate for the first 10 epochs. We report average \(\pm \) standard deviation of 3 runs.
We experiment with matrices of dimension \(14 \times 14\) (equivalent of 105 degrees of freedom respectively), batch size from \(\lbrace 512, 1024\rbrace \) , learning rate from \(\lbrace 1\rm {e-}4, 5\rm {e-}5, 1\rm {e-}5\rbrace \) and weight decays of \(1\rm {e-}3\) .
Same grid search was applied to baselines.
| [1] | [
[
160,
163
]
] | https://openalex.org/W2962936818 |
a10f407c-8fc5-4c52-9fac-38d9f6d71e08 | This subspace \(\mathcal {D}\) is in fact a maximal flat for \(\operatorname{SPD}_n\) , the largest dimensional totally geodesic Euclicean submanifold embedded in \(\operatorname{SPD}_n\) . For more information on the general theory of symmetric spaces from which the notion of maximal flats arises, see Helgason [1]}. For our purposes, it is only important to note the following fact.
| [1] | [
[
314,
317
]
] | https://openalex.org/W1983894121 |
7a6ca994-fafd-4bee-b77a-c224c6ad3797 | It is computed in [1]}
that the group of isometries of \((\operatorname{SPD}_n, d_{BW})\) is reduced to \(O(n)\) . As a result, once again, \(d_{BW}\) cannot be reconstructed from \(d_{vv}\) .
| [1] | [
[
18,
21
]
] | https://openalex.org/W2964068697 |
572999a7-40b4-4eab-89cf-56f32e1a73ff | Already in the definition of a Bishop-Cheng integration space a similar problem arises. Namely, the integral is supposed to
be defined on a subset \(L\) of the proper class \(\mathfrak {F}^{se}(X)\) , without specifying though, how such
a subset can be defined i.e., how a subclass of \(\mathfrak {F}^{se}(X)\) can be considered to be a set. It seems that
both in [1]} and in [2]}
the totality \(\mathfrak {F}^{se}(X)\) is taken to be a set. This fundamental impredicativity built in \(\mathrm {BCMT}\) directed the
subsequent constructive studies of measure theory to different
directionsOutside Bishop's constructivism there are various approaches
to measure theory.
The theory of measure [3]} within Brouwer's intuitionism contradicts the classical
theory, while measure theory [4]} within the computability framework of Type-2 Theory of Effectivity
is based on classical logic. Measure theory [5]}, [6]} within Russian constructivism
employs Markov's principle of unbounded
search. In intuitionisitic Martin-Löf type theory \((\mathrm {MLTT})\) [7]} the interest lies
mainly in probabilistic programming [8]}, while in homotopy type theory [9]}
univalent techniques, such as higher inductive types,
are applied to probabilistic programming too [10]}..
| [1] | [
[
366,
369
]
] | https://openalex.org/W2080360328 |
b9c32a81-0ee8-46bc-9891-777cf3379f9a | Already in the definition of a Bishop-Cheng integration space a similar problem arises. Namely, the integral is supposed to
be defined on a subset \(L\) of the proper class \(\mathfrak {F}^{se}(X)\) , without specifying though, how such
a subset can be defined i.e., how a subclass of \(\mathfrak {F}^{se}(X)\) can be considered to be a set. It seems that
both in [1]} and in [2]}
the totality \(\mathfrak {F}^{se}(X)\) is taken to be a set. This fundamental impredicativity built in \(\mathrm {BCMT}\) directed the
subsequent constructive studies of measure theory to different
directionsOutside Bishop's constructivism there are various approaches
to measure theory.
The theory of measure [3]} within Brouwer's intuitionism contradicts the classical
theory, while measure theory [4]} within the computability framework of Type-2 Theory of Effectivity
is based on classical logic. Measure theory [5]}, [6]} within Russian constructivism
employs Markov's principle of unbounded
search. In intuitionisitic Martin-Löf type theory \((\mathrm {MLTT})\) [7]} the interest lies
mainly in probabilistic programming [8]}, while in homotopy type theory [9]}
univalent techniques, such as higher inductive types,
are applied to probabilistic programming too [10]}..
| [2] | [
[
378,
381
]
] | https://openalex.org/W4239316923 |
ea90686f-356f-495f-b035-be643122fd57 | This indexisation method, roughly sketched in [1]}, is elaborated
within Bishop Set Theory \((\mathrm {BST})\) in [2]}.
Based on this, we present here the first crucial steps to
a predicative reconstruction \((\textnormal {\texttt {PBCMT}})\) of \(\mathrm {BCMT}\) . Following Bishop's explanations in [3]},
we replace a
totality of strongly extensional, real-valued, partial functions \(L\) in the original definition of a
Bishop-Cheng integration space by a set-indexed family \(\Lambda \) of such partial functions.
Applying tools and results from [2]}, we recover the concept of an integration space in
an indexised form. The predicative advantage of the indexisation method within \(\textnormal {\texttt {PBCMT}}\)
is that crucial quantifications
are over an index-set and not proper classes. Following [2]}, we elaborate the concept of
a pre-integration space in which the index-set \(I\) is equipped with all necessary
operations so that a pre-integral \(\int \) can be defined on \(I\) . A
pre-integration space induces a predicative integration space, the integral \(\int ^*\) of which on
the partial function \(f_i\) is given, for every \(i \in I\) , by
\(\int ^* f_i := \int i.\)
| [1] | [
[
46,
49
]
] | https://openalex.org/W2067914884 |
3b0bb1eb-702f-4b64-b7d3-640700f67853 | We work within \(\mathrm {BST}\) , which behaves as a high-level programming language.
For all notions and results of Bishop set theory that are used here without definition
or proof we refer to [1]}, in this journalIn [1]} the theory of spectra of Bishop spaces
(see [3]}-[4]} and [5]}-[6]}) is developed within \(\mathrm {BST}\) ., and to [7]}, [8]}.
For all notions and results
of constructive real analysis that are used here without definition
or proof we refer to [9]}. The type-theoretic interpretation of Bishop's set theory into the theory
of setoids (see especially the work of Palmgren [10]}-[11]}) has become nowadays the standard
way to understand Bishop setsFor an analysis of the relation between intensional \(\mathrm {MLTT}\) and Bishop's
theory of
sets see [7]}, Chapter 1.. Other formal systems for \(\mathrm {BISH}\) are Myhill's Constructive Set Theory \((\mathrm {CST})\) ,
introduced in [13]}, and Aczel's system \(\mathrm {CZF}\) (see [14]}).
| [7] | [
[
341,
344
],
[
776,
779
]
] | https://openalex.org/W3197179860 |
ee231161-138a-43c8-b701-655ac1141f41 | The advantage of \(\mathfrak {B}_2(X)\) over \(\mathfrak {B}_1(X)\) lies on the first two clauses of the following fact (also shown
in full detail in [1]}).
| [1] | [
[
152,
155
]
] | https://openalex.org/W4285136677 |
66a66c7b-4d18-4ef1-b610-3f6959e7bb70 | In this section we present the basic notions and facts on set-indexed families of subsets that are going to be used
in the rest of the paper.
Roughly speaking, a family of subsets of a set \(X\) indexed by some set \(I\) is an assignment routine
\(\lambda _0 : I \rightsquigarrow P̏(X)\) that behaves like a function i.e., if \(i =_I j\) , then \(\lambda _0(i) =_{P̏(X)}\lambda _0 (j)\) . The following definition is a formulation of this rough description that reveals the witnesses of the
equality \(\lambda _0(i) =_{P̏(X)} \lambda _0 (j)\) . This is done “internally”, through the embeddings of the subsets
into \(X\) . The equality \(\lambda _0(i) =_{V̥_0} \lambda _0 (j)\) , which is defined “externally” through
the transport maps (see [1]}, Definition 3.1), follows, and a family of subsets
is also a family of sets. We start by introducing some notation. For details we refer to [1]}.
| [1] | [
[
745,
748
],
[
890,
893
]
] | https://openalex.org/W3158407126 |
7e34f71a-07f2-4547-8578-df988aec518d | instead, we would need countable choice to express the limit to infinity of the terms
\(\mu \big (\lambda _0 (k)\big )\) .
Next we define the notion of a pre-measure space, giving an explicit formulation of Bishop's idea,
expressed in [1]}, p. 67, with respect to Definition REF .
The main idea is to define operations on \(I\) that correspond to the operations on complemented subsets,
and reformulate accordingly the clauses for the measure \(\mu \) . The fact that \(\mu \) is defined on
the index-set is already expressed in the definition of the set \(\lambda _0I (X)\) .
| [1] | [
[
235,
238
]
] | https://openalex.org/W1540066904 |
36371d7b-d804-4e44-9dd6-eec602654d5e | This stochastic background noise was calculated for the first time in
[1]} and it was found to be really strong. In fact, based on the isolated BH populations considered in Brito:2017wnc , the background was found to be loud enough (e.g., \(\Omega _{GW}\sim 10^{-6}\) for a boson of mass \(\sim 3.16\times 10^{-13}\) eV) for detection in LIGO data from the first observing run (O1).
| [1] | [
[
70,
73
]
] | https://openalex.org/W2624674605 |
cc633533-4dc1-45a2-baec-c88c9e3b7902 | For long-lasting signals that are shorter than the observation time of a detector, event rates can be calculated using [1]} ,
\(N = T_{\textrm {obs}}\int _{\rho >\rho _{\textrm {th}}}\frac{1}{1+z}\frac{d^2 \dot{n}}{dMd\chi }\frac{d V_c}{dz}dz~dM~d\chi .\)
| [1] | [
[
119,
122
]
] | https://openalex.org/W2685642551 |
08551b94-ba3e-4820-98c7-fe7cfad8e161 |
where \(\gamma _j > 0, j~\in ~\mathcal {N},\) are time constants, \(p^c_j\) is a local power command variable
available for design
(see Section REF ),
and \(A_j> 0\) and \(\kappa _j > 0,\) \(j~\in ~\mathcal {N},\) are damping and droop coefficients respectively.
Note that the analysis carried in this paper is valid for more general generation and demand dynamics, including cases of nonlinear and higher order dynamics, provided certain input-output conditions hold, following the analysis in e.g. [1]}, [2]}, [3]}, [4]}.
In this paper, we consider first-order generation and static uncontrollable demand dynamics for simplicity and to retain the focus of the paper on on-off loads.
| [1] | [
[
507,
510
]
] | https://openalex.org/W2963847880 |
c2899551-3fa3-4cec-ba44-4aea1c61898f |
where \(\gamma _j > 0, j~\in ~\mathcal {N},\) are time constants, \(p^c_j\) is a local power command variable
available for design
(see Section REF ),
and \(A_j> 0\) and \(\kappa _j > 0,\) \(j~\in ~\mathcal {N},\) are damping and droop coefficients respectively.
Note that the analysis carried in this paper is valid for more general generation and demand dynamics, including cases of nonlinear and higher order dynamics, provided certain input-output conditions hold, following the analysis in e.g. [1]}, [2]}, [3]}, [4]}.
In this paper, we consider first-order generation and static uncontrollable demand dynamics for simplicity and to retain the focus of the paper on on-off loads.
| [3] | [
[
519,
522
]
] | https://openalex.org/W2593031663 |
bab7d6da-cdec-47e6-b190-d83960f27845 | Remark 1
An issue of implementability may be raised due to the requirement for demand measurements in the Primal-Dual scheme (REF ).
This issue have been extensively considered in the literature [1]}, [2]}, [3]}, [4]}, [5]}, where schemes with different information structures have been proposed to relax the load measurements requirement.
Note that exact knowledge of the demand is not required for the stability properties of the power system, presented below, to hold.
In addition, demand estimates may be obtained from historical data and frequency measurements.
However, an inaccurate estimate of the demand will affect the equilibrium properties of the power system and might result in a suboptimal power allocation.
Hence, there exists a trade-off between the accuracy in knowledge of the demand and optimality.
Nevertheless, a reasonable demand estimate can be easily obtained, which will allow a close to optimal allocation.
| [3] | [
[
208,
211
]
] | https://openalex.org/W2520314131 |
dd56e2c1-b545-4760-81a2-ca6654b6ea4f | Remark 7 The presented scheme enables decentralized stability guarantees to be provided, as demonstrated in the proof of Theorem REF .
The latter enables to incorporate additional functionalities on on-off loads without compromising the stability of the power network.
A notable such example, considered in [1]}, is the use of on-off loads for the provision of ancillary services to the power network by switching when certain frequency thresholds are exceeded.
It can be shown that the presented stability properties are retained when the on-off load scheme in [1]} is implemented in conjunction with Algorithm REF .
This extension is omitted for brevity in presentation and to keep the focus of the paper on the optimality aspect of on-off loads.
| [1] | [
[
307,
310
],
[
562,
565
]
] | https://openalex.org/W2968889925 |
41f74b79-2f36-4bba-ba6e-5f8bd8e2f3c5 | Proof of Theorem REF :
First note that from Theorem REF , it follows that Algorithm REF converges after a finite amount of iterations, \(\hat{k}\) , and hence there exists some finite time \(T = t_{\hat{k}}\) such that \(\sigma (t)\) is constant for \(t \ge T\) . Below, we let \(\sigma ^* = \sigma (t), t \ge T\) .
We then consider a Lyapunov candidate function \(V\) which is demonstrated to be non-increasing within some compact set \(S\) for all \(t \ge T\) . Then, the results in [1]} allow to deduce global convergence to the set of equilibria within \(S\) , characterized by Lemma REF .
| [1] | [
[
490,
493
]
] | https://openalex.org/W1552094772 |
68699987-3aed-41e3-9dda-de2e59eacb85 | In the literature, TL problems are categorized in many different ways. Traditionally, TL problems are categorized based on the similarity between domains and on the availability of labeled and unlabeled data [1]} into Inductive TL, Transductive TL and Unsupervised TL. When labeled data is available only in source domain, it is called Transductive TL and when labeled data is available in both source and target domain, it is called Inductive TL. When there is no labeled data in both source and target domain, it would be called Unsupervised TL. However, in the recent years, a flexible taxonomy [2]} has emerged, which is based on the domain similarity irrespective of the availability of labeled and unlabeled data, as Homogeneous TL and Heterogeneous TL. In Homogeneous TL, the source and the target domain both have the same feature space (\(\mathcal {X}_s = \mathcal {X}_t\) and \(\mathcal {Y}_s = \mathcal {Y}_t\) ), whereas in Heterogeneous TL, source and target domains have different feature spaces(\(\mathcal {X}_s \ne \mathcal {X}_t\) and/or \(\mathcal {Y}_s \ne \mathcal {Y}_t\) ).
In surveys [1]} and [2]}, the authors divide the general TL methods based on the relationship between the source and the target domain and give a summary of the literature on TL including details of classic TL methods.
| [1] | [
[
208,
211
],
[
1110,
1113
]
] | https://openalex.org/W2165698076 |
bc9c0d5b-d80f-4ee7-88f3-a1512d381137 | The answer to this question involves the actual methods to transfer knowledge from source to target, which is done by adapting existing source task models to the target tasks.
The general TL methods are instance transfer, parameter transfer, and feature representation transfer [1]}. These methods are applicable to all TL scenarios.
| [1] | [
[
278,
281
]
] | https://openalex.org/W2395579298 |
fc23874b-2149-4ee3-b3c5-2f11e38c342a | In this paper, we adapt the object proposal generation system AttentionMask [1]} to apple localization. Applying an object proposal generation system to this task is reasonable since no classification is required. With its strong performance on localizing small objects, AttentionMask is well-suited for apple localization in orchard environments. From AttentionMask, we derive two variations to improve the localization of very small apples based on a new module for such apples and a tiling approach. Our evaluation reveals an improved performance for both variations compared to object proposal generation methods. Furthermore, a characterization of differences between the variations allows a task-specific choice.
| [1] | [
[
76,
79
]
] | https://openalex.org/W2962772163 |
3de7ad69-b9cf-487f-a894-99ef995937b2 | Our second approach for localizing small and very small apples, coined Tiled AttentionMask, embeds AttentionMask into a tiling framework. We extract 26 overlapping tiles from the input image, reducing the input image size from \(1280 \times 720\) (MinneApple dataset) to \(320\times 240\) . Since an input image is rescaled to a predefined size in AttentionMask, the relative size of the apples is essential. Thus, effectively each tile will be upsampled before being processed by the system leading to a larger representation of small apples. Consequentially, after subsampling in the backbone, small apples are still visible in the feature pyramid's base level. We use this tiling approach during training and testing similar to [1]}, [2]}.
| [1] | [
[
732,
735
]
] | https://openalex.org/W2963523428 |
fa0b2dc5-a320-4530-93f1-459a1aed1488 | For evaluating the effect of our proposed approaches for apple localization, we use a standard object proposal evaluation pipeline [1]}, [2]}, [3]}, [4]}, [5]}, [6]} with the MinneApple dataset [7]}. Thus, we use Average Recall (AR) [8]} as the evaluation measure. AR takes a predefined amount of proposals per image and determines how many annotated objects are localized and how well they are localized. We report AR for the first 10 (AR@10) and the first 100 (AR@100) proposals. Additionally, we follow [9]} and report AR for different absolute sizes of objects. We use two standard size categories M as well as S, and add a third category (XS) to account for the large number of very small objects (51% in the MinneApple dataset). An annotated object fits category M if its area is larger than \(32^2\) pixels. Category S covers objects between \(32^2\) pixels and \(22.5^2\) pixels, while XS covers all objects smaller than \(22.5^2\) pixels.
| [1] | [
[
131,
134
]
] | https://openalex.org/W2963721076 |
312a3143-40f4-44b3-9fa1-f5b63f3b588b | Depending on the disorder strength \(W/J\) and driving frequency \(\omega /J\) , the system can be in three distinct regimes [1]}, [2]}.
At low disorder strengths, the system thermalizes under its own dynamics if given enough time and follows the eigenstate thermalization hypothesis [1]}.
Under the assumption that the system can be considered closed over the entire process, the constant energy input provided by the drive leads to an effective temperature that is infinite [4]}, [1]}, [6]}, [7]}.
The time-scale over which this thermalization process takes place strongly depends on the drive frequency [1]}, [9]}.
For low-frequency drives, the system can efficiently respond and rapidly reaches this infinite-temperature limit corresponding to what is known as the driven thermalized phase [4]}, [1]}, [6]}, [7]}.
In the case where the driving frequency exceeds all relevant energy scales of the system however, the time required for thermalizing can be greatly extended leading to a long-lived prethermalization regime [1]}, [9]}.
Finally, in presence of large disorders, the system fails to thermalize at any time and is said to be in the driven MBL phase [16]}, [17]}, [7]}, [19]}, [6]}.
The two phases and the prethermalized regime are depicted in Fig. REF (a).
<FIGURE> | [1] | [
[
126,
129
],
[
285,
288
],
[
483,
486
],
[
607,
610
],
[
801,
804
],
[
1025,
1028
]
] | https://openalex.org/W2780647994 |
e8eefb39-6844-47d5-9e29-dd8e9c2b8aac | Depending on the disorder strength \(W/J\) and driving frequency \(\omega /J\) , the system can be in three distinct regimes [1]}, [2]}.
At low disorder strengths, the system thermalizes under its own dynamics if given enough time and follows the eigenstate thermalization hypothesis [1]}.
Under the assumption that the system can be considered closed over the entire process, the constant energy input provided by the drive leads to an effective temperature that is infinite [4]}, [1]}, [6]}, [7]}.
The time-scale over which this thermalization process takes place strongly depends on the drive frequency [1]}, [9]}.
For low-frequency drives, the system can efficiently respond and rapidly reaches this infinite-temperature limit corresponding to what is known as the driven thermalized phase [4]}, [1]}, [6]}, [7]}.
In the case where the driving frequency exceeds all relevant energy scales of the system however, the time required for thermalizing can be greatly extended leading to a long-lived prethermalization regime [1]}, [9]}.
Finally, in presence of large disorders, the system fails to thermalize at any time and is said to be in the driven MBL phase [16]}, [17]}, [7]}, [19]}, [6]}.
The two phases and the prethermalized regime are depicted in Fig. REF (a).
<FIGURE> | [9] | [
[
613,
616
],
[
1031,
1034
]
] | https://openalex.org/W3099772585 |
afa7239e-360d-457c-9f26-713c64dc76f3 | One of the standard approach to distinguish the different phases is based on the notion of level statistics of the unitary operator \(\hat{U}\) [1]}, [2]}.
Let \(|\phi _n\rangle \) be an eigenstate of the Floquet Hamiltonian with eigenvalue \(\epsilon _n\) , i.e. \(\hat{H}_F|\phi _n\rangle = \epsilon _n|\phi _n\rangle \) , it follows that
\(\hat{U}=\sum _n e^{i\theta _n}|\phi _n\rangle \langle \phi _n|,\)
| [1] | [
[
145,
148
]
] | https://openalex.org/W3106216759 |
ef71edf8-267a-4a2b-98da-6471743a55d0 | One of the standard approach to distinguish the different phases is based on the notion of level statistics of the unitary operator \(\hat{U}\) [1]}, [2]}.
Let \(|\phi _n\rangle \) be an eigenstate of the Floquet Hamiltonian with eigenvalue \(\epsilon _n\) , i.e. \(\hat{H}_F|\phi _n\rangle = \epsilon _n|\phi _n\rangle \) , it follows that
\(\hat{U}=\sum _n e^{i\theta _n}|\phi _n\rangle \langle \phi _n|,\)
| [2] | [
[
151,
154
]
] | https://openalex.org/W2135781929 |
c1e2fa48-2d59-443c-803a-ebdd8e1a0531 | The scenario is completely different for systems in the driven MBL phase where the level statistics is described by a Poisson distribution [1]},
\(\text{Pr}_{\rm POI}(r) = \frac{2}{(1+r)^2}.\)
| [1] | [
[
139,
142
]
] | https://openalex.org/W3099067495 |
9f2ccf4d-504b-493b-8cbb-dd65d9a73ad6 | In the case where the dynamics is such that the output state \(| \psi _m \rangle \) has an equal probability of being anywhere in the Hilbert space, its output distribution Pr\((p_m({\bf z}))\) follows the PT distribution [1]},
\( \text{PT}(p) = Ne^{-Np} \qquad {\rm for} \qquad N\gg 1,\)
| [1] | [
[
224,
227
]
] | https://openalex.org/W2482126025 |
1b5d7eaf-5272-4cfd-b0a7-6a571ee5278e | and the dynamics satisfies the anti-concentration condition Eq. (REF ) with \(\delta = 1\) and \(\gamma = 1/e\) .
As a consequence, the convergence of the output distribution toward the PT distribution is a key signature of quantum supremacy [1]}, [2]}, [3]}. The difference between these two distributions can be measured by the KLD [4]}, defined as (\(p = p_m({\bf z})\) for readability)
\( \text{KLD}(\text{Pr}(p)\parallel \text{PT}(p))\equiv \sum _{p}\text{Pr}(p)\log \left( \frac{ \text{Pr}(p) }{ \text{PT}(p)}\right)\ge 0.\)
| [3] | [
[
255,
258
]
] | https://openalex.org/W2758259983 |
f0ceff13-1221-4858-afd7-8f76a54e652e | In an isolated quantum system \(\mathcal {S}_{\rm tot}\) described by a pure state \(| \Psi \rangle \) , the concept of thermalization is usually understood as the emergence of homogeneous statistical properties of the reduced density matrices \(\hat{\rho }_\mathcal {S} = {\rm Tr}_{\mathcal {B}} | \Psi \rangle \langle \Psi | \equiv \exp {(-\hat{H}_\mathcal {S}/k_BT_{\rm eff})}/\mathcal {Z}\) .
Here, \(\mathcal {S}\) denotes any “small" subsystems with corresponding Hamiltonian \(\hat{H}_\mathcal {S}\) of dimensions \(N_\mathcal {S}\) and partition function \(\mathcal {Z}\) such that \(\mathcal {S}_{\rm tot} = \mathcal {S} + \mathcal {B}\) and \(N_\mathcal {B} \gg N_\mathcal {B}\) .
The effective temperature \(T_{\rm eff}\) is defined from the reduced density matrices and should become independent of frequency and of the subsystem choice as the full system reaches thermalization [1]}, [2]}.
By writing \(| \Psi \rangle = \sum _i^{N_{\mathcal {S}}} \sum _j^{N_{\mathcal {B}}} c_{ij} | i_\mathcal {S} \rangle | j_\mathcal {B} \rangle \) , where \(| j_\mathcal {S} \rangle \) and \(| i_\mathcal {B} \rangle \) are basis states of the subsystem and the bath respectively with \(c_{ij} \in {\mathbb {C}}\) , we obtain
\( \hat{\rho }_{\mathcal {S}} = \sum _{i,j = 1}^{N_{\mathcal {S}}} \left[ \sum _{k=1}^{N_{\mathcal {B}}} c_{ik} c^*_{jk} \right] | i_\mathcal {S} \rangle \langle j_\mathcal {S} |.\)
| [1] | [
[
901,
904
]
] | https://openalex.org/W3103855590 |
ed492159-59b0-449e-bb97-8cc80bf5b6a0 | Recently, there has been much study of the global convergence properties of policy gradient methods. It has been shown that PG and NPG methods can achieve \(\mathcal {O}(1/T)\) convergence [1]}, [2]} for unregularized MDPs. When entropy regularization is used, both PG and NPG methods can guarantee \(\text{exp}(-T)\) convergence [1]}, [4]} to the optimal solution of the regularized problem. NPG methods can be interpreted as mirror descent [5]}, [6]}, thereby enabling the adaptation of mirror descent techniques to analyze NPG-based methods.
| [2] | [
[
196,
199
]
] | https://openalex.org/W3039845099 |
1ed79568-b50f-4a18-a952-660247391496 | The global convergence analysis of the PG methods for MDPs has also been extended to CMDPs. [1]} proposed the NPG-PD algorithm which uses a primal-dual approach with NPG and showed that it can achieve \(\mathcal {O}(1/\sqrt{T})\) global convergence for both the optimality gap and the constraint violation. [2]} proposed a primal approach called constrained-rectified policy optimization (CRPO), which updates the policy alternatively between optimizing objective and decreasing constraint violation, and enjoys the same \(\mathcal {O}(1/\sqrt{T})\) global convergence. Our work focuses on achieving a faster convergence rate for the CMDP problem, motivated by the results for MDPs with a convergence rate faster than \(\mathcal {O}(1/\sqrt{T})\) (see Table REF ).
| [2] | [
[
308,
311
]
] | https://openalex.org/W3168269639 |
cf0ddcee-1aa5-45b0-8e81-2b1048106490 | In the work conducted concurrently with ours, but with different results, Ying et al. [1]} and Li et al. [2]} address the same question of developing PG-based algorithms for the CMDP problem. Ying et al. [1]} propose an NPG-aided dual approach, where the dual function is smoothed by entropy regularization in the objective function. They show an \(\tilde{\mathcal {O}}(1/T)\) convergence rate to the optimal policy of the entropy-regularized CMDP, but not to the true optimal policy, for which with a slow \(\mathcal {O}(1/\sqrt{T})\) convergence rate. They also make an additional strong assumption that the initial state distribution covers the entire state space. While such an assumption was initially used in the analysis of the global convergence of PG methods for MDPs [4]}, [5]}, it is not required when analyzing the global convergence of NPG methods [4]}, [7]}. Moreover, this assumption does not necessarily hold for safe RL or CMDP, since the algorithm needs to avoid dangerous states even at initialization and the optimal policy will depend on the initial state distribution. Li et al. [2]} propose a primal-dual approach with an \(\mathcal {O}(\log ^2(T)/T)\) convergence rate to the true optimal policy by smoothing the Lagrangian with suitable regularization on both primal and dual variables. However, they assume that the Markov chain induced by any stationary policy is ergodic in order to ensure the smoothness of the dual function. This assumption, though weaker than the assumption made by [1]}, will generally not hold in problems where one wants to avoid unsafe states altogether. In this work, we propose an algorithm with a faster \(\mathcal {O}(\log (T)/T)\) convergence rate to the true optimal policy without such assumptions. Moreover, we also present two important extensions of our approach to the settings with zero constraint violation and sample-based estimation.
| [5] | [
[
785,
788
]
] | https://openalex.org/W3034426742 |
6e0cb496-02ae-484e-b773-c5d75877f1ab | In the work conducted concurrently with ours, but with different results, Ying et al. [1]} and Li et al. [2]} address the same question of developing PG-based algorithms for the CMDP problem. Ying et al. [1]} propose an NPG-aided dual approach, where the dual function is smoothed by entropy regularization in the objective function. They show an \(\tilde{\mathcal {O}}(1/T)\) convergence rate to the optimal policy of the entropy-regularized CMDP, but not to the true optimal policy, for which with a slow \(\mathcal {O}(1/\sqrt{T})\) convergence rate. They also make an additional strong assumption that the initial state distribution covers the entire state space. While such an assumption was initially used in the analysis of the global convergence of PG methods for MDPs [4]}, [5]}, it is not required when analyzing the global convergence of NPG methods [4]}, [7]}. Moreover, this assumption does not necessarily hold for safe RL or CMDP, since the algorithm needs to avoid dangerous states even at initialization and the optimal policy will depend on the initial state distribution. Li et al. [2]} propose a primal-dual approach with an \(\mathcal {O}(\log ^2(T)/T)\) convergence rate to the true optimal policy by smoothing the Lagrangian with suitable regularization on both primal and dual variables. However, they assume that the Markov chain induced by any stationary policy is ergodic in order to ensure the smoothness of the dual function. This assumption, though weaker than the assumption made by [1]}, will generally not hold in problems where one wants to avoid unsafe states altogether. In this work, we propose an algorithm with a faster \(\mathcal {O}(\log (T)/T)\) convergence rate to the true optimal policy without such assumptions. Moreover, we also present two important extensions of our approach to the settings with zero constraint violation and sample-based estimation.
| [7] | [
[
869,
872
]
] | https://openalex.org/W3041970508 |
b9fde22a-9cb7-4de0-9818-c3bd556e2ce5 | This assumption is quite standard in the optimization literature for analyzing primal-dual algorithms [1]}. In particular, many related works in the CMDP literature (see, e.g., [2]}, [3]}, [4]}, [5]}) make the same strict feasibility assumption. Note that unlike previous primal-dual algorithms [2]}, [3]} for CMDPs, where \(\xi \) is required to be known a priori for the projection of dual variables, our proposed algorithm does not require the knowledge of \(\xi \) , and this assumption is made only for the analysis.
| [2] | [
[
177,
180
],
[
295,
298
]
] | https://openalex.org/W3101517963 |
c9bb5c61-6cfa-4e3e-933f-f3548441c2f6 | The constrained optimization problem in (REF ) can be reparameterized by using the discounted state-action visitation distribution as decision variables, as follows [1]}:
\(\min _{d \in \mathcal {D}} ~~ \frac{1}{1 - \gamma } \langle d, c_0 \rangle \quad \text{s.t.}~~ \frac{1}{1-\gamma } \langle d, c_i\rangle \le 0, \quad \forall i \in [m],\)
| [1] | [
[
165,
168
]
] | https://openalex.org/W1518931405 |
2098e07c-f3ac-4ed0-8369-1a84975a81dc |
where \(Z_t(s) = \sum _{a} \pi ^{(t)}(a|s) \exp (-\eta Q_{c_0 + \sum _{i=1}^m \lambda _i c_i}^{\pi ^{(t)}}(s, a))\) . It was shown that (REF ) is equivalent to a mirror descent update [1]}
\(\pi ^{(t+1)}(\cdot |s) = \arg \min _{\pi } &\left\lbrace \langle Q_{c_0 + \sum _{i=1}^m \lambda _i c_i}^{\pi ^{(t)}}(s, \cdot ), \pi (\cdot |s) \rangle + \frac{1}{\eta } D(\pi (\cdot |s) || \pi ^{(t)}(\cdot |s))\right\rbrace .\)
| [1] | [
[
185,
188
]
] | https://openalex.org/W3164106810 |
7c5d0bed-d689-46fa-9c94-341e630ece9f | The CMDP formalism is often used to model control problems with safety constraints [1]}, [2]}, [3]}. In many of these problems, it is important to ensure that the cumulative constraint violation is zero while finding the optimal policy. While the PMD-PD algorithm described in the previous section gives provable convergence to the optimal policy, it may incur a positive cumulative constraint violation during the implementation of the algorithm. Indeed, () in Theorem REF only gives an upper bound on the cumulative constraint violations. One important question in this context is: Can we design a policy gradient-based algorithm for CMDPs that can provably achieve fast global convergence while ensuring that the cumulative constraint violation is zero?
| [2] | [
[
89,
92
]
] | https://openalex.org/W3121342653 |
1b285b16-c6a6-4710-9037-d1db8da427d3 | In this section, we demonstrate the performance advantage of the sample-based PMD-PD algorithm (Algorithm ) in the same tabular CMDP described in Section and in a more complex environment Acrobot-v1 [1]}.
| [1] | [
[
200,
203
]
] | https://openalex.org/W3037207827 |
bf7aef13-1636-4e4c-94a1-6cee00d48281 | Lemma 3.3 ([1]})
There exists an absolute constant \(a>0\) such that the following holds for all \(\varepsilon \in (0,\frac{1}{2}]\) and every positive integer \(N\) . Any two-coloring of \(E(K_N)\) which is not \(\varepsilon \) -balanced contains a monochromatic clique of order \(\frac{a}{\varepsilon \log \frac{1}{\varepsilon }} \log N\) .
| [1] | [
[
11,
14
]
] | https://openalex.org/W2045774391 |
11de33f3-c96a-41b0-b2d9-9716cd00b6dc | Lemma 4.1 ([1]})
In any \(\varepsilon \) -balanced coloring of \(E(K_N)\) , at least \(\frac{\varepsilon }{2}N\) vertices of \(K_N\) have at least \(\frac{\varepsilon }{4} N\) neighbors in both colors.
| [1] | [
[
11,
14
]
] | https://openalex.org/W2340026689 |
613a5a53-f6aa-40e2-a1db-7c7904c53fd1 | In fact, it seems possible that \(r(H) \ge c\cdot r(G)\) holds for all but \(o(\vert D(G)\vert )\) of the graphs in \(D(G)\) . If this holds with appropriate control on the little-\(o\) , then it suffices for the original application of Conlon, Fox, and Sudakov; namely, such a result would show that \(\log r(G(n,p))\) is concentrated in an interval of length \(O(\sqrt{n})\) , by mimicking the proof of [1]}.
| [1] | [
[
408,
411
]
] | https://openalex.org/W3081713386 |
e29712e3-e09f-4ec1-8757-b44945390909 | In general, the discrepancy between the indirect metric (FLOPs) and the direct metric (speed) in recent ViTs can be attributed to two main reasons.
First, although self-attention is efficient on low-resolution feature maps, the quadratic complexity in both memory and time makes it much slower on high-resolution images due to intensive memory access cost [1]}, where fetching data from off-chip DRAM can be speed-consuming.
Second, some efficient attention mechanisms in ViTs have low theoretical complexity guarantee but are actually slow on GPUs due to particular operations that are not hardware-friendly or cannot be parallelized,
such as the multi-scale window partition [2]}, recursion [3]} and dilated window [4]}.
<FIGURE> | [1] | [
[
356,
359
]
] | https://openalex.org/W2883780447 |
5ff41272-7c70-403e-9a2d-29357de275b1 | Vision Transformers.
Vision Transformers are neural networks that adopt self-attention mechanisms into computer vision tasks. In [1]}, Dosovitskiy et al. propose a ViT for image classification, which inherits the similar architecture from a standard Transformer [2]} in natural language processing (NLP) tasks. Since then, subsequent works have been proposed to improve ViT by incorporating more convolutional layers [3]}, [4]}, introducing pyramid feature maps [5]}, [6]}, enhancing the locality [7]}, as well as automatically searching a well-performed architecture [8]}, [9]} with neural architecture search (NAS). Some others also seek for token pruning to accelerate the inference speed of ViTs [10]}. Compared to existing works, this paper focuses on a general ViT-based backbone for computer vision (CV) tasks and aims to achieve better efficiency on GPUs while maintaining competitive performance.
| [6] | [
[
468,
471
]
] | https://openalex.org/W3138516171 |
71059a2a-a076-4f8c-93c2-92fc85774497 | Efficient attention mechanisms.
Efficient attention mechanisms aim to reduce the quadratic complexity of standard MSAs. Existing efforts in NLP can be roughly categories into low-rank decomposition [1]}, kernelization [2]}, [3]}, memory [4]} and sparsity mechanism [5]}. However, simply adopting these method usually performs suboptimally in CV tasks [6]}, [7]}. In CV, representative efficient self-attention mechanisms includes spatial reduction attention (SRA) [8]}, local window attention [6]} and Twins attention [10]}. However, they only focus on either local or global attention at the same layer, which neglects the another. Some works consider both simultaneously, such as Focal attention [11]} and QuadTree [12]}. However, due to the inefficient operations which are not hardware-friendly and cannot be reflected in FLOPs (e.g., multi-scale window partition, recursion), they are slow on GPUs even compared to standard MSA. To this end, the proposed HiLo attention simultaneously captures rich local-global information at the same MSA layer and is faster and more memory-efficient compared to the existing works.
| [8] | [
[
464,
467
]
] | https://openalex.org/W3131500599 |
43189325-6b7e-405f-a8c6-1b3c13d3d024 | Efficient attention mechanisms.
Efficient attention mechanisms aim to reduce the quadratic complexity of standard MSAs. Existing efforts in NLP can be roughly categories into low-rank decomposition [1]}, kernelization [2]}, [3]}, memory [4]} and sparsity mechanism [5]}. However, simply adopting these method usually performs suboptimally in CV tasks [6]}, [7]}. In CV, representative efficient self-attention mechanisms includes spatial reduction attention (SRA) [8]}, local window attention [6]} and Twins attention [10]}. However, they only focus on either local or global attention at the same layer, which neglects the another. Some works consider both simultaneously, such as Focal attention [11]} and QuadTree [12]}. However, due to the inefficient operations which are not hardware-friendly and cannot be reflected in FLOPs (e.g., multi-scale window partition, recursion), they are slow on GPUs even compared to standard MSA. To this end, the proposed HiLo attention simultaneously captures rich local-global information at the same MSA layer and is faster and more memory-efficient compared to the existing works.
| [11] | [
[
698,
702
]
] | https://openalex.org/W3176153963 |
f1479ab8-0de9-47b8-811e-b5ba9bb3d1b3 | A standard vision Transformer as described in [1]} consists of a patch embedding layer, several blocks and a prediction head. Let \(l\) be the index of a block. Then each block contains an MSA layer and a position-wise feed-forward network (FFN), which can expressed as
\({\bf X}^{^{\prime }}_{l-1} &= {\bf X}_{l-1} + \mathrm {MSA}(\mathrm {LN}({\bf X}_{l-1})), \\{\bf X}_l &= {\bf X}^{^{\prime }}_{l-1} + \mathrm {FFN}(\mathrm {LN}({\bf X}^{^{\prime }}_{l-1})),\)
| [1] | [
[
46,
49
]
] | https://openalex.org/W3094502228 |
96351012-b85a-4443-80d7-693d192361aa |
In this section we conduct experiments to validate the effectiveness of the proposed LITv2.
Following common practice [1]}, [2]}, [3]}, [4]}, we experiment LITv2 on three tasks, including image classification on ImageNet-1K [5]}, object detection and instance segmentation on COCO [6]} and semantic segmentation on ADE20K [7]}.
<TABLE> | [5] | [
[
225,
228
]
] | https://openalex.org/W2117539524 |
7e6839a6-fc6f-4051-a7ad-b960d73b3ff2 | We conduct image classification experiments on ImageNet-1K [1]}, a large-scale image dataset which contains \(\sim \) 1.2M training images and 50K validation images from 1K categories. We measure the model performance by Top-1 accuracy. Furthermore, we report the FLOPs, throughput, as well as training/test memory consumption on GPUs.
We compare with two CNN-based models [2]}, [3]} and several representative SoTA ViTs [4]}, [5]}, [6]}, [7]}, [8]}. Note that this paper does not consider mobile-level architectures [9]}, [10]}. Instead, we focus on models with the similar model size. Besides, we are also not directly comparable with NAS-based methods [11]}, [12]} as LITv2 is manually designed.
| [8] | [
[
445,
448
]
] | https://openalex.org/W3211432419 |
1786bc21-7bde-4640-9829-54430b01a8d2 | Implementation details.
All backbones are initialized with pretrained weights on ImageNet-1K. We train each model on 8 GPUs with 1\(\times \) schedule (12 epochs) and a total batch size of 16. For a fair comparison, we adopt the same training strategy and hyperparameter settings as in LITv1 [1]}.
Note that we pretrain LITv2 with a local window size of 2 and \(\alpha = 0.9\) on ImageNet-1K. Under the same \(\alpha \) , a larger window size helps to achieve lower complexity and thus improves the speed at high resolution, as explained in Section REF . In this case, we also train models with a slightly larger window size of \(s=4\) for better efficiency, which we denote with “*”.
By default, FLOPs is evaluated based on the input resolution of \(1280\times 800\) . FPS is measured on one RTX 3090 GPU based on the mmdetection [2]} framework.
| [1] | [
[
293,
296
]
] | https://openalex.org/W3170642968 |
d527fcc5-1a36-4e88-ad24-58365f0f0d61 | The overall framework of LITv2 is depicted in Figure REF .
We also provide detailed architecture specifications of LITv2 in Table REF . In general, we set the same network depth and width as LITv1. It is worth noting that recent works [1]}, [2]}, [3]}, [4]}, [5]} usually adopt standard MSAs at the last stage, including LITv1. Following common practice, we set \(\alpha =1.0\) and \(s = 1\) at the last stage to make HiLo behave as a standard MSA. LITv2 also excludes MSAs in the first two stages due to the tiny receptive field of attention heads, as visualized in Figure 3 of LITv1 [6]}.
| [5] | [
[
259,
262
]
] | https://openalex.org/W3139773203 |
25309474-5f80-44fd-a4e0-24b215637738 | Based on LITv2-B, we conduct the experiments on COCO under the framework of RetinaNet [1]} and Mask R-CNN [2]}. The results are reported in Table REF . Similar to the results of LITv2-S and LITv2-M, the proposed LITv2-B significantly outperforms ResNeXt [3]} under both frameworks while being slightly slower on Mask R-CNN. Moreover, LITv2-B achieves advantage over SoTA ViTs in terms of both speed and accuracy. Finally, adopting a larger window size (i.e., \(s=4\) ) also helps LITv2-B achieve better efficiency with a slightly performance drop.
<FIGURE><FIGURE> | [1] | [
[
86,
89
]
] | https://openalex.org/W2963351448 |