context (stringlengths 80–2.5k) | A (stringlengths 80–2.59k) | B (stringlengths 80–1.95k) | C (stringlengths 80–3.07k) | D (stringlengths 80–3.07k) | label (stringclasses, 4 values) |
---|---|---|---|---|---|
For the task, our paper is highly relevant to the problems of lyrics and poetry writing, as well as multi-modal generation. | To verify the superiority of the proposed model, we manually label a new dataset for the task of sequential multi-modal AI creation. | For the task, our paper is highly relevant to the problems of lyrics and poetry writing, as well as multi-modal generation. | In the following, we introduce the classical models and recent advances in these fields more in detail. | For the technique, our method is built upon the prosperity of recent multi-modal representation models. | D |
We perform a set of experiments to compare the four widely discussed multimodal VAE approaches - MVAE (Wu & Goodman, 2018), with the Product-of-Experts multimodal integration, and MMVAE (Shi et al., 2019), presenting the Mixture-of-Experts strategy, MoPoE (Sutter et al., 2021) and DMVAE (Lee & Pavlovic, 2021). Firstly, to verify that the implementations are correct, we replicated some experiments presented in the original papers inside our toolkit. We used the same encoder and decoder networks and training hyperparameters as in the original implementation and compared the final performance. The results are shown in Appendix Sec. A.3. We then unified the implementations for all models so that they only differ in the modality mixing and trained them on the CelebA and MNIST-SVHN datasets. The results are shown in Tables 2 and 3. | Next, we trained all four models on the CdSprites+ dataset consecutively on all 5 levels of complexity and performed a hyperparameter grid search over the dimensionality of the latent space. You can find the used encoder and decoder architectures (fixed for all models) as well as the specific training details in Appendix Sec. A.2.1. We show the qualitative and quantitative results and discuss them in Section 5.2. | We perform a set of experiments to compare the four widely discussed multimodal VAE approaches - MVAE (Wu & Goodman, 2018), with the Product-of-Experts multimodal integration, and MMVAE (Shi et al., 2019), presenting the Mixture-of-Experts strategy, MoPoE (Sutter et al., 2021) and DMVAE (Lee & Pavlovic, 2021). Firstly, to verify that the implementations are correct, we replicated some experiments presented in the original papers inside our toolkit. We used the same encoder and decoder networks and training hyperparameters as in the original implementation and compared the final performance. The results are shown in Appendix Sec. A.3. We then unified the implementations for all models so that they only differ in the modality mixing and trained them on the CelebA and MNIST-SVHN datasets. The results are shown in Tables 2 and 3. | The detailed results for all experiments (including other datasets such as Sprites, MNIST-SVHN etc.) can be found in Appendix Sec. A.2.4. Here we demonstrate the utility of our toolkit and dataset by comparing the MVAE, MMVAE, MoPoE and DMVAE models on the CdSprites+ dataset consecutively on 5 levels of difficulty. For an overview of what attributes each of the levels includes, please see Section 3. In Fig. 2, we show the qualitative results for levels 1, 3 and 5 of the dataset for the four models. We show both the reconstructed and cross-generated (conditioned on the other modality) samples for images and text. Moreover, we report the cross- ($Img\rightarrow Txt$ and $Txt\rightarrow Img$) and joint- ($Joint$) coherency accuracies for all levels in Table 4. The Strict metrics show the percentage of completely correct samples, while Features and Letters show the average proportion of correct words (or visual features for image) or letters per sample. | In this work, we present a benchmarking toolkit and a CdSprites+ (Captioned disentangled Sprites+) dataset for a systematic evaluation and comparison of multimodal variational autoencoders. 
The tool enables the user to easily configure the experimental setup by specifying the dataset, encoder and decoder architectures, multimodal integration strategy and the desired training hyperparameters all in one config. The framework can be easily extended for new models, datasets, loss functions or the encoder and decoder architectures without the need to restructure the whole environment. In its current form, it includes 4 state-of-the-art models and 6 commonly used datasets. | A |
Table I presents the average F1-score, accuracy, and the respective standard deviations, over the training and test sets for the proposed MBURST and the baseline architectures. In this scenario, one can notice that the Unimodel does not have the capacity to provide competitive results, which is expected since the visual context is essential to introduce context to the noisy signal and boost the reconstruction. On the other hand, both the Multimodal and MBURST approaches obtained similar results, with the Multimodal approach achieving slightly better values (less than 2% on average). This can possibly be explained by the activation sparsity induced by the burst rate mechanism. Such results reinforce the relevance of visual context for clean speech reconstruction. | TABLE II: Area under the curve considering the average energy rate during training and testing for the Unimodal, Multimodal, and the proposed MBURST. | TABLE I: F1-score and Accuracy concerning the Unimodal, Multimodal, and the proposed MBURST for clean audio mask reconstruction over the train and test sets. | Table II analytically reinforces MBURST’s robustness in the context of energy efficiency by presenting the area under the curve over the average energy rate measured during the training and testing procedures. In this context, MBURST shows itself 48% and 58%, on average over train and test sets, more efficient than Unimodal and Multimodal, respectively. | Table I presents the average F1-score, accuracy, and the respective standard deviations, over the training and test sets for the proposed MBURST and the baseline architectures. In this scenario, one can notice that the Unimodel does not have the capacity to provide competitive results, which is expected since the visual context is essential to introduce context to the noisy signal and boost the reconstruction. On the other hand, both the Multimodal and MBURST approaches obtained similar results, with the Multimodal approach achieving slightly better values (less than 2% on average). This can possibly be explained by the activation sparsity induced by the burst rate mechanism. Such results reinforce the relevance of visual context for clean speech reconstruction. | B |
Because $\bm{H}(\hat{\rho},\hat{\bm{\lambda}})$ plays such a crucial role, we will refer to it as the ‘certificate matrix’. | First, we study the effect of noise in a quantitative analysis in Figure 4. We vary the measurement noise $\sigma_{d}$ and report the RMSE between the estimated and ground-truth trajectory as a function of the effective distance noise. Since we lack an optimal solver, we label solutions as globally optimal when they correspond to the smallest cost (up to numerical tolerance) for a given setup, and locally optimal otherwise. This method is reliable for small noise levels, as a big gap exists between the cost of the local and global solutions, but less so for higher noise. Using this method, we can identify that the majority of solutions (96%) are true positives — global solutions where the certificate holds. Even more importantly, we observe that there are no false positives — the certificate never holds for a suboptimal solution. Out of the uncertified solutions, 20% are false negatives, meaning the certificate misses an optimal solution. However, this happens only at the highest noise levels considered, suggesting the existence of a high noise threshold up to which strong duality holds. For lower noise levels, we conclude that although the certificate is only a sufficient condition in theory, it is effectively a necessary condition in practice: when the certificate does not hold, the solution is usually suboptimal. We also note that the method and certificate are robust to model mismatch: although the real trajectory is of the constant-velocity type, all motion priors yield satisfactory results. A study of the setups leading to local optima, given in the supplementary material [39], reveals that local minima are usually the result of poor (i.e., almost co-linear) anchor placement. | Note that the conditions are sufficient, but not necessary – if a solution does not satisfy all conditions it may still be an optimal solution. However, related works have shown that for sufficiently low noise levels, strong duality holds, and the certificate becomes both sufficient and necessary [37]. This is confirmed and further discussed in the simulated experiments in Section IV-A. | With the proposed solution, we can be both efficient and optimal: we use a continuous-time batch approach, but exploit sparsity to keep the cost low enough for online inference, both in solving and certifying the solution. We place no assumptions on noise or uniqueness of the solution – as long as the certificate holds, the solution is optimal. | In this section, we show the effectiveness of the proposed method in simulation and on a real-world dataset. We first study the certificate in simulation, showing that our local solver finds the optimal solution in the majority of cases for random setups, and when it fails to do so, the certificate does not hold for all but the highest considered noise levels. Incorrect local solutions are typically only found for few random initializations, suggesting that in practice, randomly reinitializing until the certificate holds is a viable strategy. | B |
The results provided in Section VII indicate the practical relevance of maximum-modularity partitions in comparison with 29 other alternatives for CD. Despite the well-document theoretical flaws of modularity [11], the high accuracy and stability of globally maximum-modularity partitions (shown in this study) are not challenged by other existing methods for community detection. | Community detection (CD), the data-driven process of partitioning nodes within a network [1], is a core problem in several fields including physics, mathematics, computer science, biology, and other computational sciences [2]. Among common approaches for CD are the algorithms which are designed to maximize a utility function, modularity [3], across all possible ways that the nodes of the input network can be partitioned into an unspecified number of communities. Modularity measures the fraction of edges within communities minus the expected fraction under a random degree-preserving distribution of the edges. Despite their name and design philosophy, current modularity maximization algorithms, which are used by no less than tens of thousands of peer-reviewed studies [4], generally fail to maximize modularity or guarantee proximity to a globally maximum-modularity partition (an optimal partition) [5]. They have a high risk of failing to obtain relevant communities [6] and may result in degenerate partitions [7] that are provably far from the underlying community structure [8, 9]. | While it is tempting to interpret our results as suggesting modularity is a better objective function than the objective functions of other optimization-based algorithms considered, we refrain from drawing such a conclusion because our comparative results of algorithms in Figures 1–2 and Figures 4 – 9 are confounded by the difference in the objective functions and the difference between mathematically rigorous optimization vs. local/greedy/heuristic optimization. We simply recommend using mathematically rigorous optimization in optimization-based tasks for small input. Our results justify the exact optimization approach for modularity where optimality matters [25] and sub-optimality has a disproportionate cost [5]. Future research may investigate the potential advantages of using global optimization and guaranteed approximations for other objective functions (e.g., Markov stability [42], description length [16], modularity density [21], surprise [20, 22], or a new objective function inspired by the map equation [14]) for CD. To the best of our knowledge, exact optimization of these alternative objectives are underexplored, and therefore each has the potential to outperform maximum-modularity partitions (and in turn outperform most other algorithms considered in this study) in retrieval tests and partition quality measures. We hope that our publicly available data and algorithm facilitate these prospective advances in the field. | The results from Figures 1 – 2 and Figures 4 – 9 offer additional insights when we focus on the performance of the nine algorithms that attempt to maximize modularity. The comparative performance of nine modularity-based algorithms (Bayan, greedy, Louvain, LN, Combo, Belief, Paris, Leiden, and EdMot) shows the practical benefits of global optimization in the context of using modularity for CD. The practical benefit of global optimization is in both achieving better performance in retrieving planted communities and achieving better partition quality measures (other than modularity). 
The key lesson learned is that in the context of community detection through modularity optimization, guaranteed closeness to optimality matters. This crucial, yet often overlooked [4], lesson was foreshadowed in [25, pp. 012811-5] which reads “to stress the importance of looking for even the minor gains in the modularity score, […] relatively small changes in this partition quality function can be reflected by macroscopic variation of the communities involved”. | The results provided in Sections VII–VIII allow us to answer the second question from a practical standpoint. The results in Subsections VII.1–VII.2 demonstrate the comparative suitability of Bayan based on retrieving planted partitions of LFR and ABCD benchmarks. Bayan was observed to be among the top algorithms in retrieving planted partitions across different mixing parameters. The results in Sections VII.3 show that, for networks where node labels are aligned with the structure, Bayan retrieves partitions that are similar with node labels. Overall, the results in Section VII demonstrate the suitability of Bayan as a community detection method for networks with up to 3000 edges. Our results in Section VIII show Bayan’s competitive advantages in comparison with 29 other methods based on five partition quality measures other than modularity: description length, average conductance, coverage, performance, and well clusteredness. In what follows, we discuss five key insights from our results. | C |
Recently, Yang et al. [31] proposed training a cohort of sub-(convolutional) networks with different configurations of network widths and input resolutions via DML to achieve accuracy-efficiency tradeoffs. | A total of 200 epochs were run with a batch size of 128, and the learning rate dropped by 0.2 every 60th epoch. | Recently, Yang et al. [31] proposed training a cohort of sub-(convolutional) networks with different configurations of network widths and input resolutions via DML to achieve accuracy-efficiency tradeoffs. | Park et al. [20] applied DML to deep metric learning beyond classification tasks and demonstrated its effectiveness over individual models. | Among these techniques is Deep Mutual Learning (DML) [34], an empirically powerful paradigm that is, despite its conceptual simplicity, highly effective in practice. | C |
Although as aforementioned, our contribution is not limited to proposing the TSFool approach, someone may still doubt whether it is indeed motivated enough to specifically design for RNN-based TSC, as more state-of-the-art solutions for TSC tasks are based on convolutional NNs or transformers [14]. Nevertheless, the fact is that to date RNN-based TSC applications are still popular in real-world practice [20, 53, 21], without an effective approach to measuring their robustness [25, 21], which leaves potential threats to the public. TSFool can be viewed as a “gray-box” method. In short, considering the impact of specific modes of RNN running to i-WFA extraction in different applications, TSFool may either be implemented in a black-box way, or rely on a part of white-box information. We leave a more detailed explanation in Section A.3. Please notice that this is just about the property of TSFool, instead of the background setting of this paper. Another point to be noticed is that as TSFool relies on existing vulnerable samples wrongly predicted by the target RNN classifier, a natural requirement is that such samples must exist. Generally, this should not be a serious concern, as they can be any real-world sample at inference time instead of just from the supervised dataset. In practice, TSFool provides several hyper-parameters for fine-tuning as shown in Section B.4 and supports both targeted and untargeted attacks, which makes it widely applicable. We also showcase its potential for adversarial training in Section C.2. | In this paper, given the lack of research and applicable approach in the field of adversarial samples for RNN-based TSC, we propose the TSFool attack, significantly outperforming existing methods in effectiveness, efficiency and imperceptibility. What’s important, the novel global optimization objective "Camouflage Coefficient" proposed to refine the adversarial attack as a multi-objective optimization problem may be instructive for the improvement of the current theory, and the methodology proposed based on latent manifold to heuristically approximate the solution of such a new optimization problem can also be easily transferred to other types of models and data, providing a new feasible way to craft imperceptible adversarial samples. For future works, further exploring the newly defined multi-objective optimization problem to find better approximation solutions is an interesting topic, and the attempt to realize our methodology in other kinds of real-world tasks is in progress at present. | In this paper, we propose an efficient method named TSFool to craft highly-imperceptible adversarial samples for RNN-based TSC. With an argument that local optimal perturbation under the conventional objective does not always lead to imperceptible adversarial samples, we propose a novel global optimization objective named Camouflage Coefficient, and add it to reduce the adversarial attack problem to a multi-objective optimization problem. In this way, we can take the relative position between adversarial samples and class clusters into consideration, to measure the imperceptibility of adversarial samples from the perspective of class distribution. Since the full gradient information of an RNN is not directly available, to efficiently approximate the optimization solution, we introduce a representation model built only upon the classifier’s outputs. 
It can fit the manifold hyperplane of a classifier but distinguish samples by their features like humans, and capture deeply embedded vulnerable samples whose features deviate from the latent manifold as guidance. Then we can pick target samples to craft perturbation in the direction of their interpolation, while imperceptibly crossing the classification hyperplane. | We propose a general methodology based on Manifold Hypothesis to solve the new optimization problem, and accordingly realize TSFool, the first method to our best knowledge, for RNN-based TSC, to craft real-world imperceptible adversarial time series. | We propose a novel optimization objective named “Camouflage Coefficient" to enhance the global imperceptibility of adversarial samples, with which we reduce the adversarial attack problem to a multi-objective optimization problem; and | A |
$\sum_{i=1}^{l}n_{i}T_{i}=\widetilde{\mathcal{O}}(n^{p+1})=\widetilde{\mathcal{O}}(n^{2+1/k})$. | biased, noisy subgradient method to bound the excess risk of our algorithm (Theorem 12). We obtain our strongly convex bound (Theorem 18) by a reduction to the convex case, ala [HK14, FKT20]. | By using these results with the proof technique of [FKT20], we can obtain Theorem 12. | Following [FKT20], we use a folklore reduction to the convex case (detailed in Section G.3) in order to obtain the following upper bound via Theorem 12: | The excess risk bound in Theorem 12 nearly resembles the lower bound in Theorem 7, except that the upper bound scales with $\widetilde{r}_{2k}$ instead of $\widetilde{r}_{k}$. In fact, the term $\widetilde{r}_{2k}$ in Theorem 12 can be replaced by a smaller term, which is $\mathcal{O}(r_{2k})$ as $n\to\infty$ under mild assumptions. See Section G.2 and Section G.4. We conjecture that an appropriate modification of our algorithm and techniques can be used to get the optimal bound depending on $\widetilde{r}_{k}$. We leave this as an interesting direction for future work. | C |
$(a)~r(x,u,\boldsymbol{\mu},\boldsymbol{\nu})=r(x,u,\boldsymbol{\mu})$, $(b)~c(x,u,\boldsymbol{\mu},\boldsymbol{\nu})=c(x,u,\boldsymbol{\mu})$, $(c)~P(x,u,\boldsymbol{\mu},\boldsymbol{\nu})=P(x,u,\boldsymbol{\mu})$ | We consider an $N$-agent CMARL problem where at each instant the agents receive rewards as well as incur costs depending on their joint states, and actions. The goal is to maximize the time-discounted sum of rewards (called reward value or simple value) while ensuring that the discounted cumulative cost lies below a certain threshold. We show that the stated $N$-agent CMARL problem can be well approximated by a CMFC problem with an appropriately adjusted constraint. In particular, our result (Lemma 1) states that the optimal value of the stated CMFC is at most at the distance of $\mathcal{O}(e)$ from the optimal CMARL value where $e=\frac{1}{\sqrt{N}}[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}]$. The terms $|\mathcal{X}|,|\mathcal{U}|$ denote the sizes of state, and action spaces of individual agents respectively. We also show that if the optimal policy obtained by solving the CMFC is adopted into the $N$-agent system, then it does not violate the constraint of CMARL, and yields an $N$-agent cumulative reward that is $\mathcal{O}(e)$ error away from the optimal $N$-agent value (Theorem 1). In a special case where the reward, cost, and state transition functions are independent of the action distribution of the population, we prove that the error improves to, $e=\sqrt{|\mathcal{X}|}/\sqrt{N}$ (Theorem 2). | Note that, the dependence of the approximation error on $N$ is still $\mathcal{O}(1/\sqrt{N})$. However, its dependence on the sizes of state, and action spaces has been reduced to $\mathcal{O}(\sqrt{|\mathcal{X}|})$ from $\mathcal{O}(\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|})$ stated in Lemma 1. Therefore, the stated approximation result may be useful in those situations where reward, cost, and transition functions are independent of the action distribution, and the size of action space of individual agents is large. Interestingly, we could not derive an approximation error that is independent of the size of the state space, $|\mathcal{X}|$, by imposing the restriction that $r,c$, and $P$ are independent of the state distribution. This indicates an inherent asymmetry between the roles played by state, and action spaces in mean-field approximation. | Assumption 1(a), and 1(b) states that the reward, $r$, and the cost function, $c$ are bounded. The very definition of the transition function, $P$ makes it bounded. Hence, it is not listed as an assumption. On the other hand, Assumption 1(c)-(e) dictate that the functions $r$, $c$, and $P$ are Lipschitz continuous with respect to their state, and action distribution arguments. Assumption 1 is common in the literature (Angiuli et al., 2022; Carmona et al., 2018). Our next assumption is on the class of admissible policies, $\Pi$. | Assumption 4 removes the dependence of $r,c$, and $P$ on the action distribution. However, for each agent, those functions still take the action executed by the same agent as an argument. Below we present our improved approximation result. | D |
The size of a Muller acceptor is the sum of the size of its automaton and $k-1$. | Recall that if $\mathcal{A}$ is an acceptor and $q$ is a state of $\mathcal{A}$, | The set of words accepted by an acceptor $\mathcal{A}$ is denoted by $\llbracket\mathcal{A}\rrbracket$. | We show that for each type of acceptor $\mathcal{A}$, if the input automaton $\mathcal{M}$ is isomorphic to the automaton of $\mathcal{A}$ and the sample $T$ is consistent with $\mathcal{A}$ and subsumes the $T_{Acc}$ for $\mathcal{A}$, then the learning algorithm returns an acceptor that is equivalent to $\mathcal{A}$. | Let $\mathcal{A}$ be an ICA. Because $\mathcal{A}$ is deterministic and complete, if we let $\mathcal{A}^{\prime}$ denote the IBA with the same components as $\mathcal{A}$, then $\mathcal{A}^{\prime}$ accepts the complement of the language $\mathcal{A}$, by Claim 2.2 (4). | B |
As supported in Section 4, our findings have shown that MONA generalizes well across different benchmark datasets (i.e., ACDC, LiTS, MMWHS) with diverse labeled settings (i.e., 1%, 5%, 10%). In the following subsection, we further demonstrate that our proposed principles (i.e., tailness, consistency, diversity) are beneficial to various state-of-the-art CL-based frameworks (i.e., MoCov2 [7], $k$NN-MoCo [21], SimCLR [4], BYOL [6], and ISD [24]) with different label settings. More details about these three principles can be found in Section 3.2. | Of note, MONA can consistently outperform the semi-supervised methods on diverse benchmark datasets with only 10% labeled ratio. | We then evaluate MONA on LiTS, using 1%, 5%, 10% labeled ratios. The results are summarized in Table I and Figure 7. The conclusions are highly consistent with the above ACDC case: First, at the different label ratios (i.e., 1%, 5%, 10%), MONA consistently outperforms all the other SSL methods, which again demonstrates the effectiveness of learning representations for the inter-class correlations and intra-class invariances under imbalanced class-distribution scenarios. In particular, our MONA, trained on a 1% labeled ratio (i.e., extremely limited labels), dramatically improves the previous best averaged Dice score from 59.3% to 64.1% by a large margin, and even performs on par with previous SSL methods using 10% labeled ratio. Second, our method consistently outperforms all the evaluated SSL methods under different label ratios (i.e., 1%, 5%, 10%). Third, as shown in Figure 7, we observe that MONA is able to produce more accurate results compared to the previous best schemes. | As is shown, MONA trained at the 1% labeled ratio significantly outperforms all other methods trained at the 1% labeled ratio, even over the 5% labeled ratio. Concretely, MONA trained at only 1% labeled ratio outperforms the second-best method (i.e., GCL) both at the 1% and 5% labeled, yielding 12.3% and 2.8% gains in Dice. We also observe the similar patterns that, MONA performs better or on par with all the other methods at 10% labeled, which again demonstrates the superiority of MONA in extremely limited labeled data regimes. | TABLE V: Ablation study of different contrastive learning frameworks on ACDC under three labeled ratio settings (1%, 5%, 10%). We compare two settings: with or without fine-tuning on the segmentation performance (DSC[%]/ASD[mm]). We denote “without fine-tuning” to mean only pretraining. On all three labeled settings, our methods (i.e., GLCon and MONA) significantly outperform all the state-of-the-art methods by a significant margin. All the experiments are run with three different random seeds. The best results are in bold. | A |
30: Set $\tau_{(k,i)}=\tau_{(k-1,i)}$; | 3: Send $\tau_{(k,i)}$ and $\boldsymbol{w}_{k}$ to all clients; | When the parameter server receives $G_{(k,i)}$ from each client, it updates $\boldsymbol{w}_{k+1}$ (Line 7 of Algorithm 1) according to the second term in (6) where $\tau_{k}$ is calculated based on $\tau_{(k,i)}$ in Line 23 of Algorithm 1. With the value of $F_{i}\big(\boldsymbol{w}_{(k,i)}^{\lambda=\tau_{i}}\big)$ received from the clients (Line 4 of Algorithm 1) and Equation (4), we can estimate the value of the loss function of $\boldsymbol{w}_{k+1}$ (Line 5 of Algorithm 1). Compare it with the set minimum loss function value $F_{m}$, if it is less than, then accept this global update and assign the value to $F_{m}$, and if not, then do not accept this global update (Line 8-12 of Algorithm 1). When the program proceeds to the set number of rounds $k$, the final model parameter $\boldsymbol{w}_{K}$ is obtained at the server in Lines 32-34 of Algorithm 1. Then we set the $STOP$ flag to stop the server-side program and send the flag to all clients to stop the local program. | 33: Set $STOP$ flag, and send it to all clients; | 14: Send $\nabla F\left(\boldsymbol{w}_{k-1}\right)$ to all clients; | C |
Other works have realized that cloud and edge computing presents resilience challenges to resource allocation, including ultra-reliable offloading mechanisms using two-tier optimization [20], proactive failure mitigation for edge-based NFV [21], failure aware workflow scheduling for cloud computing [22], RL for replica placement in edge computing [23], and graphical models to learn spatio-temporal dependencies between edge server and link failures [11]. The above works did not consider resilience in light of user mobility. | Researchers have been actively investigating how to jointly allocate computing and radio resources to computing tasks offloaded to edge servers [1, 3]. Nevertheless, user mobility across geographical areas presents a challenge towards task offloading [4, 5, 6]. Service migration, or moving service profiles to access points nearer the user as they move, has been proposed to maintain service continuity [5, 6]. | As multiple users share the resources at an edge node, users will experience a compute delay. Modelling the process at each AP as an M/M/1 queue, the total computing delay experienced across all APs at time $t$ is [8]: | The migration of users at overlapping coverage areas has been considered in [24], but here, users at the center of coverage areas still experience a higher latency as they are connected to the cloud. | Single User Service Profile Migrations: In our experiments, we have 9 base stations (edge access points). The user is highly mobile across these coverage areas, and its mobility pattern follows a transition matrix. | C |
Remark: Non-linear Categorical Parameterization. Although the above optimization stability conclusions are established on the linear categorical parameterization on $Z^{\pi}$, similar conclusions with a non-linear categorical parameterization can be naturally expected by non-convex optimization techniques proposed in [14]. We empirically validate our theoretical conclusions by directly applying practical neural network parameterized distributional RL algorithms. | Return Density Function Decomposition. To decompose the optimization impact of return distribution into its expectation and the remaining distribution part, we apply the return density function decomposition to decompose the target histogram density function $p^{s,a}$. This decomposition was successfully applied to derive the distributional regularization effect of distributional RL and was rigorously justified in [29]. Based on the categorical parameterization, we denote $\Delta_{E}$ as the interval that $\mathbb{E}\left[Z^{\pi}(s,a)\right]$ falls into, i.e., $\mathbb{E}\left[Z^{\pi}(s,a)\right]\in\Delta_{E}$, and the categorical parameterized $p^{s,a}(x)=\sum_{i=1}^{N}f_{i}^{\theta}\mathds{1}(x\in\Delta_{i})/\Delta$ can be decomposed as | To characterize the acceleration effect of distributional RL, we additionally leverage the recently proposed return density function decomposition [29]. | In this paper, we study the optimization advantages of distributional RL over classical RL. Within the Neural FZI framework, our optimization analysis can not only sufficiently characterize offline distributional RL behaviors but also approximate the online setting. Within this framework, we study the uniform stability of distributional loss based on categorical parameterization. Owing to the smoothness properties of distributional loss, distributional RL algorithms tend to satisfy the uniform stability in the optimization process, thus enjoying stable gradient behaviors in the input space. In addition to the optimization stability, we also elaborate on the acceleration effect of distributional RL algorithms based on the return density decomposition technique proposed recently. Distributional RL can speed up the convergence and perform favorably if the return distribution is approximated appropriately, measured by the gradient estimates’ variance. Empirical results corroborate that distributional RL possesses stable gradient behaviors and acceleration effects by suggesting smaller gradient norms concerning the states and model parameters. 
Our study opens up many exciting research pathways in this domain through the lens of optimization, paving the way for future investigations to reveal more advantages of distributional RL. Our contributions can be summarized as follows: | The acceleration effects of distributional RL have also been demonstrated through the return density decomposition. We show that distributional RL can speed up convergence if the parameterization error of the return distribution is appropriate. | B |
With the aid of the Marcinkiewicz-Zygmund property [3], one can bypass the quadrature exactness as in [4], which can break the restriction of the application of hard thresholding hyperinterpolation. Once the quadrature exactness is not required, there are many quadrature rules that we can take, such as (Quasi) Monte-Carlo rules [15]. Furthermore, one may combine the springback penalty [6] with the weighted least squares problem (2.7) to obtain a more stable and effective approximation scheme. In addition, it seems promising to discuss the relation between different types of noise and denoising ability of hard thresholding hyperinterpolation. | In the following, we shall consider in particular an efficient spherical design proposed in [31] with a relatively small number of quadrature points $N$ and a uniformly bounded ratio of the covering radius to the packing radius. In our tests, since we intend to compute several type of hyperinterpolants of total degree at most $n=15$, it is necessary to adopt a rule with algebraic degree of exactness $2n=30$, consisting of $N=482$ points. | The error of the best approximation to $f\in\mathcal{C}(\Omega)$ by polynomials of degree at most $n$ is defined by | The first author (C. An) of the research is partially supported by Tianfu Emei talent plan (No.1914) and National Natural Science Foundation of China (No. 12371099). We express our sincere thanks to Prof. Alvise Sommariva at University of Padova for his helpful suggestions. | Concisely speaking, hard thresholding hyperinterpolation is the unique solution to an $\ell_{0}$-regularized weighted discrete least square problem, and has been proved to be an effective tool in denoising from the numerical examples. Hard thresholding hyperinterpolation satisfies the Pythagorean theorem with respect to the discrete (semi) inner product, which is an important geometric property that Lasso hyperinterpolation [2] and hybrid hyperinterpolation [1] do not possess. Then we use the reciprocal of Christoffel function to prove that the upper bound of the uniform norm of hard thresholding hyperinterpolation operator is not greater than that of hyperinterpolation operator. What’s more, a practical criterion, using the sum of the difference between the regularization parameter and the product of noise coefficients and signs of hyperinterpolation coefficients, is established to judge the denoising abilities of hard thresholding hyperinterpolation and Lasso hyperinterpolation. | C |
To provide a broader picture of these models, we summarized the primary findings in Table 5, emphasizing the key advantages and shortcomings of various forecasting models. In this table, we classified the models into four main categories. The first category comprises deep learning-based models, which directly learn spatiotemporal features using 2D/3D convolutions and variants of recurrent neural networks (ConvLSTM, PhyDNet, PredRNN, etc.). The second category includes state-space type models (TT-DMD and MAR), which employ matrix or tensor-based spatiotemporal autoregression. The third category consists of models that squash all spatial features into a vector (snapshot) and then learn the temporal evolution of the snapshots using RNNs, LSTMs, etc. In this category, we do not utilize inter-spatial correlations in modelling. The last category is clustering-based models, which are hybrid; they incorporate local spatial correlations and model temporal evolutions using vector-based models. | DMD-based forecasting has received a lot of attention and has been investigated in several articles. In [66], researchers have proposed the optimized DMD algorithm with statistical bagging, and it was found that it can make stable forecasts. Researchers in [67] have used a DMD-based forecasting model for power load predictions and have shown how the proposed model learns a stochastic Gaussian process regression (see, e.g., [68, 69, 70, 71]). Multivariate forecasting with DMD using a block Hankel matrix was proposed in [72]. The main idea of DMD forecasting approaches is to learn an (approximate) linear model in the time-delay basis representation, i.e., it is assumed that the snapshot data is generated by a high-dimensional linear dynamic model or by a nonlinear model (which can be approximated with a high-dimensional linear model). Under this assumption, we learn this linear operator from the time-shifted copies of input data. After learning this linear dynamic operator, we evolve the states by feeding the most recent past state to generate forward point forecasts so it evolves as a linear dynamical system. However, these models cannot forecast matrices or higher-order tensors since they learn vector autoregression. For example, we cannot use these models directly to predict the temperature distribution over a spatial grid (a matrix where spatial coordinates enumerate the rows and columns); instead, a generalization of the forecasting approach that can inherently learn a multidimensional linear dynamic operator that can be used to predict the temperature distribution over spatial grids should be developed. This is the main motivation for this work. Based on the preceding section, we develop the forecasting method, which predicts 2D weather maps (temperature distributions over spatial grids) using a learned multilinear dynamic operator ($\underline{\boldsymbol{\Phi}}$). | We discussed various data-driven approaches for spatiotemporal forecasting, and we showed how to condition these models for forecasting weather states over multiple locations simultaneously by exploiting spatiotemporal correlations. We observed that incorporating spatiotemporal correlations improves performance and reduces the computational cost. We proposed an algebraic forecasting method based on tensor-train DMD. We found that its predictions are comparable to those of the state-of-the-art models without the need for training, and it significantly reduces computational time. 
We evaluated the proposed methods for short-term and long-term forecasting to ensure they can predict temperature fields multiple steps ahead. Moreover, the proposed model is an excellent choice for quick and short-term spatiotemporal predictions. It should be noted that TT-DMD may struggle to learn spatiotemporal correlations from massive, incomplete, or irregularly sampled datasets. Therefore, Future work should take care of these two issues, and it may also be possible to generalize this approach for other variants of DMDs like Extended DMD [76], Multi-Resolution DMD[77] and higher order DMD [78]. | We adapt tensor-based dynamic mode decomposition (DMD) for spatiotemporal forecasting; the proposed methodology extends the classical DMD forecasting approach to learn and predict from 2D matrices or higher-order tensors rather than vectors. We evaluated and compared the proposed model on the real-world weather datasets to standard baselines. | To draw a general conclusion about these models, we compared their performance for seven-step predictions, recorded their training and inference times and other error metrics, and presented the results in Table 1. Based on the results presented in Table 1, we can infer that ConvSTM performed the best while MAR performed the worst in terms of the error metric. The rest show roughly similar behaviour. TT-DMD is optimal in terms of the total time taken for data fitting and predictions. TT-DMD prediction ability is good even though we used the simple variant of DMD; it can be improved significantly, for example, by employing Extended DMD [76], Multi-Resolution DMD[77] and higher order DMD [78]. Models based on clustering and sampling took significantly longer to train and evaluate, suggesting they are not best suited for long-term predictions. In conclusion, spatiotemporal models that work directly with the original dimensions work significantly better since they capture both spatial and temporal correlations. Furthermore, TT-DMD is a good choice when quick spatiotemporal predictions are required. | B |
Our findings reveal that while both global and local features are crucial in explaining discussion trees’ structural properties, local features collectively explain significantly more variation than global features. This suggests that users primarily decide to interact with a post based on its specific attributes rather than the characteristics of the subreddit. Nevertheless, several global properties of subreddits also significantly influence discussion tree structures, emphasizing the need to consider both feature types. | Previous social media studies have emphasized the importance of local features, such as popularity and novelty, in determining post spread (Gómez et al., 2008; Lumbreras, 2016; Aragón et al., 2017a). Our study addresses a gap in the literature by considering both local and global features, drawing inspiration from related work exploring how these features influence various aspects of online communication (Zhang et al., 2017; Lambert et al., 2022; Coletto et al., 2017). By quantifying the relative importance of global versus local features in shaping discussion tree structures, we provide a more comprehensive understanding of the factors driving information spread on social media platforms. | In the broader field of computer-supported cooperative work (CSCW), related studies have examined other aspects of conversational structure, such as the impact of comment display formats on user retention (Budak et al., 2017) and the role of social catalysts in online interactions (Saveski et al., 2021a). By synthesizing these diverse strands of research and introducing a novel framework that considers both global and local features, our work aims to provide a more comprehensive understanding of the factors shaping online discussion structures across different community contexts. | The rapid expansion of the Internet in recent years has significantly reduced barriers to computer-mediated communication, ushering in a new era of online interaction. Today, online discussions via social media platforms—such as Reddit, Facebook, Instagram, and X—have become the dominant medium for conversations about events, ideas, and knowledge (Anderson and Caumont, 2014; Leonardi, 2014; Kittur et al., 2007; Diakopoulos and Naaman, 2011; Chancellor et al., 2018). These platforms have transformed the communication landscape, allowing users to engage in dialogues that transcend geographical boundaries. The exponential growth in online discussion participants has led to a corresponding increase in the volume and diversity of shared content. This surge in content creation and sharing necessitates effective content moderation strategies to manage and filter information (Berry and Taylor, 2017; Heatherly et al., 2017; Rhue and Sundararajan, 2014; Chandrasekharan et al., 2017; Cheng et al., 2017; Kriplean et al., 2012), which is crucial for maintaining a digital space free from misinformation (Del Vicario et al., 2016), online hate speech (Chetty and Alathur, 2018), and other undesirable online behaviors. | Our work builds upon a rich body of literature on studying online discussion threads or trees (Medvedev et al., 2019b), particularly focusing on generative models developed to explain their structural properties (see a comprehensive survey by Aragón et al. (2017b)). These models typically employ factors such as popularity, novelty, and reciprocity to analyze and predict the size, depth, and degrees of discussion trees (Kumar et al., 2010; Wang et al., 2012; Gómez et al., 2013; Aragón et al., 2017a). 
Some studies also incorporate a root-bias feature, which explicitly differentiates the root node (i.e., the original poster) and other commentators (Gómez et al., 2008; Lumbreras, 2016; Aragón et al., 2017a). Backstrom et al. (2013) explored how the arrival patterns of users influence the size of the discussion trees. Lumbreras (2016) investigated the distinct roles users take and their effects on the shape and size of the discussion trees. (Kim et al., 2023) examined the continuity of existing Reddit discussion trees by predicting whether a new comment will arrive or not. Nishi et al. (2016) analyzed reply trees on Twitter with a branching process model, particularly on the effects of segments without branching. Medvedev et al. (2019a) predicted the growth of discussion trees on Reddit over time using the Hawkes process. Horawalavithana et al. (2022) used a generative algorithm to forecast the discussion tree structure of groups of Reddit posts that share similar topics. Another line of research aimed to explain the depth of the tree, which is the longest path from the root of the tree to a leaf, or the size of the tree, which is the total number of participating comments (Kumar et al., 2010; Backstrom et al., 2013). Tree depth and size are proxies for general interest in the post since longer and larger trees indicate more participants. | A |
Now, observe that the reduction described above does not immediately yield a bipartite graph and in addition the maximum degree of the constructed graph is at most nine (this can happen if the vertex that represents the output of an AND gadget, is the input to a different AND gadget as well). | We begin this proof overview by defining a very weak version of Pure-Circuit. An instance of the problem consists of a Boolean circuit using the standard gates NOT, AND, and OR, but with the following tweak: the circuit is allowed to have cycles. A solution to the problem is an assignment of values to each node of the circuit, so that all gates are satisfied. If we are only allowed to assign values in $\{0,1\}$ to the nodes, then it is easy to see that the problem is not a total search problem, i.e., some instances do not have a solution. For example, there is no way to assign consistent values to a cycle of three consecutive NOT gates. | However, it is not difficult to tweak the construction and get these two properties too. | The introduction of the PURIFY gate makes the problem non-trivial: if a PURIFY gate appears in the circuit, then assigning the “garbage” value $\bot$ to all nodes is no longer a solution. However, it turns out that one more modification is needed to make the problem PPAD-hard: we have to make the logical gates robust. For the AND gate, this means the following: if one of its two inputs is $0$, then the output is $0$, no matter what the other input is (even if it is not a pure bit, i.e., if it is $\bot$). Similarly, for the OR gate we require that the output be $1$ when at least one of the two inputs is $1$. Robustness is defined analogously for NAND and NOR. | For a gate $g=(\textup{NOT},u,v)$, we use the following construction, which | B |
7: New error: ${\rm error}={\rm metric}(\mathbf{y}^{i+1},\mathbf{y}^{i})$ | 4: while ${\rm iter}\leq{\rm maxiter}$ and ${\rm error}>{\rm tolerance}$ do | 11: Compute loss wrt observations: ${\rm loss}(\mathbf{y}(\mathbf{x},t),{\rm obs})$ | 4: while ${\rm iter}\leq{\rm maxiter}$ and ${\rm error}>{\rm tolerance}$ do | 10: Compute loss wrt observations: ${\rm loss}(\mathbf{y}(t),{\rm obs})$ | D |
For the former, it mainly focuses on hand-designed texture-based feature extraction for in-the-lab FER datasets, on which the recognition accuracy is very high. | In recent years, FER has achieved impressive performance on laboratory-controlled datasets, such as CK+ [6], JAFFE [7] and RaFD [8], which are collected under ideal conditions and annotated by experts. | These samples are from the FERPlus dataset, with the corresponding labels in green, which are obtained by voting on the results of the crowdsourcing annotations. This label conversion strategy can introduce artificial noise. | Recently, with the demands of real-world applications, large-scale FER datasets in unconstrained environments, such as FERPlus [9], RAF-DB [10] and AffectNet [11], continue to emerge. | Since then, most studies tried to tackle the in-the-wild FER task. These large-scale datasets are collected in an unconstrained environment and annotated by crowdsourcing. | D |
$\min_{p\in\mathbb{P}_{J}}\sum_{k=0}^{K}|\langle f\circ x-p\circ x,\psi_{k}\rangle|^{2}$ | The SINDy-type methods have been shown to be very efficient at system identification under the assumption that the underlying dynamics is a linear combination of the chosen finite set of basis functions. In this paper, we will abandon this assumption and rephrase the SINDy-type methods in the context of surrogate modeling. For system $\dot{x}=f(x)$, if $f$ is not in the span of the chosen finite set of basis functions $\varphi_{j}$, the SINDy-type methods will return a model, which is shown in the preceding section to be a projection of the underlying dynamics in the case of weak-SINDy. In this section, we study the viability of the surrogate model and investigate the difference between the solution to the projected surrogate model and the original solution. | From the results in Figure 3, we observe that the total error (L) still closely resembles the behavior of the projection error (R2) in the test space. In the case of Fourier test functions, the value of (R2) still converges spectrally when $J\leq K$ as when using Legendre polynomials as test functions (cf. Figure 2), since the convergence rate of (R2) is independent to the choice of test functions, as shown in Proposition 3.13. | This paper asserts in the case of weak-SINDy that if a reasonable basis, composed of functions known to be dense in an underlying Hilbert space, is chosen, then the returned system is a good approximation of the original and has solutions reasonably close to the original. Section 2 gives an overview of SINDy-type techniques, illuminates a projection property of weak-SINDy, and exemplifies the structural similarities between these SINDy-type techniques giving possible avenues for extensions of this work. In Section 3, we give an in-depth error analysis of the surrogate models generated by a simplified version of weak-SINDy. In particular, in the scalar ordinary differential equation (ODE) case, we show that (i) the surrogate dynamics converges towards the true dynamics and (ii) the solution of the surrogate model is reasonably close to the true solution, under the assumption of a bounded composition operator. In Section 4, this error analysis is extended to systems of ODEs, and the results are applied to identify a surrogate ODE system over coordinates defined by proper orthogonal decomposition (POD) for solutions to partial differential equations (PDEs), similar to the approach taken in [18]. Section 5 contains a variety of numerical examples exemplifying the main results of this paper. | It is clear that the minimal value of Eq. (33) converges to the one of Eq. (32) as $K\to\infty$. However, it is necessary to show that the minimizers converge as well. The convergence proof of the minimizers, i.e., the polynomials, is given in Appendix A, which makes use of strong convexity arguments of the objective functions in Eqs. (32) and (33). | D |
This function is monotone since the inequality $\bm{A}\leq\bm{B}$ yields the inequality $\operatorname{tr}(\bm{A}^{i})\leq\operatorname{tr}(\bm{B}^{i})$ for each integer $i\geq 0$, and therefore $\operatorname{Tr}(\bm{A})\leq\operatorname{Tr}(\bm{B})$. It retains the cyclic property of trace, which implies that $\operatorname{Tr}(\bm{A}\bm{B})=\operatorname{Tr}(\bm{B}\bm{A})$. | We consider the Kleene star matrix $\bm{C}^{\ast}$, which generates the solutions, and note that the existence condition at (4) yields the inequality $\bm{C}^{\ast}\geq\bm{C}^{i}$, which is valid for all integers $i\geq 0$. Therefore, with $l=\max(n,k)-1$, we can write | The application of these equalities leads to the Kleene matrix in the following form (see also [29]): | The Kleene star operator takes the matrix $\bm{A}$ to the Kleene star matrix | Suppose that the condition $\operatorname{Tr}(\bm{A})\leq\mathbb{1}$ holds. It is not difficult to verify that under this condition, the Kleene star matrix becomes the finite sum (we assume below that the condition is satisfied whenever a Kleene matrix is defined): | C
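The row above manipulates the Kleene star of a matrix under a trace condition. As an illustration only, and assuming the idempotent max-plus (tropical) semiring, which the excerpt does not state explicitly, a minimal sketch of the finite-sum Kleene star could look like this (function names are ours):

```python
import numpy as np

NEG_INF = -np.inf  # additive identity ("zero") of the max-plus semiring


def maxplus_matmul(A, B):
    """Max-plus 'product': C[i, j] = max_k (A[i, k] + B[k, j])."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), NEG_INF)
    for i in range(n):
        for j in range(m):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C


def kleene_star(A):
    """Finite-sum Kleene star A* = I (+) A (+) A^2 (+) ... (+) A^(n-1),
    where (+) is element-wise max. In max-plus the unit element is the
    number 0, so the trace condition of the excerpt roughly amounts to
    requiring no positive-weight cycles (an assumption on our part)."""
    n = A.shape[0]
    I = np.full((n, n), NEG_INF)
    np.fill_diagonal(I, 0.0)            # multiplicative identity matrix
    star, power = I.copy(), I.copy()
    for _ in range(n - 1):
        power = maxplus_matmul(power, A)  # A, A^2, ..., A^(n-1)
        star = np.maximum(star, power)    # accumulate the (+) sum
    return star


A = np.array([[-1.0, -3.0],
              [ 0.0, -2.0]])
print(kleene_star(A))
```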
Furthermore, once a vertex is pushed into the priority queue, its cost remains unchanged. As a result, we can ensure that when the two search trees from $s_{0}$ and $s_{1}$ converge at a pair of adjacent physical qubits, they will indeed form the optimal routing path with the smallest occupied time. This is because if there were an alternative sequence that could be completed earlier, it would have been visited from the priority queue and halted the search process earlier. The proof of the optimality of the priority queue is provided in Section 4.1. | In summary, the LE Search approach explores all possible gate sequences up to a specified depth $d$. It has the capability to close in on global optimum solutions for smaller designs when the depth $d$ is set sufficiently large. Conversely, the SP Estimation approach prioritizes better scalability, even if it may sacrifice some performance. As a result, the SP Estimation approach can be effectively applied to larger circuits, thus paving the way toward achieving quantum advantage. | While the Limitedly-Exhaustive (LE) Search heuristic may provide an approximation close to the global optimum solution when employing a large search depth $d$, its practical applicability remains limited due to the extensive calculations involved in the path enumeration. Consequently, it can only be effectively applied to small-scale designs. To address this challenge, and inspired by Li et al. [li_tackling_2019], we propose an alternative heuristic approach that considers the concept of the shortest path [floyd_algorithm_1962] between the physical qubits of the gate. | The scheduler then proceeds to select the sequence with the minimal occupied time, determining its corresponding gate in the Waitlist for execution. Subsequently, the Waitlist is updated, and the process is repeated for the subsequent depth-$d$ sequences. To illustrate, if the depth limit is set to 2, the Limitedly-Exhaustive Search will first enumerate all the gates in the Waitlist, finding the optimal routing path for each gate using Duostra, and updating the corresponding occupied time and Waitlists accordingly. Next, for each gate in the previous iteration and its updated Waitlist, the gates in the updated Waitlist are enumerated, and the above process is repeated again. Finally, the gate that results in the smallest occupied time among these depth-2 sequences will be selected for execution, and the Limitedly-Exhaustive search process will be restarted. This approach offers an efficient strategy to tackle the scheduling problem by limiting the search depth while reducing computational complexity. | 3.3. Scheduling: Limitedly-Exhaustive (LE) Search and Shortest-Path (SP) Estimation | D
A speed based prediction scheme is proposed in [29] which uses a weighted average of current bus speed and historically averaged section speed as inputs. As with the previous method, it ignores | A dynamic SVR based prediction scheme is proposed in [9] which exploits spatio-temporal (ST) correlations in a minimal manner. In particular, it considers current bus | approximators capturing ST correlations was proposed in [11]. Recently, a CNN approach capturing ST correlations | which is also distinct from all existing BATP approaches (in particular from ED based BATP approaches also [13, 14]). It exploits current real-time | incorporating both current spatio-temporal correlations and seasonal correlations. | A
All regularized GPHLVMs with 2-dimensional latent spaces outperform their Euclidean counterparts. | In general, we observe a prominent stress reduction for the Euclidean and hyperbolic 3-dimensional latent spaces compared to the 2-dimensional ones. | In the case of the support pose taxonomy, the Euclidean models with 3-dimensional latent space slightly outperform the 3-dimensional hyperbolic embeddings. We attribute this to the cyclic graph structure of the taxonomy. Such type of structure has been shown to be better embedded in spherical or Euclidean spaces (Gu et al., 2019). Interestingly, despite the cyclic graph structure of the support pose taxonomy, the Euclidean models are still outperformed by the hyperbolic embeddings in the 2-dimensional case (see Table 1). This suggests that the increase of volume available to match the graph structure in hyperbolic spaces compared to Euclidean spaces leads to better low-dimensional representations of taxonomy data, including those with cyclic graph structure. | We embed the taxonomy data of the aforementioned taxonomies into 2-dimensional hyperbolic and Euclidean spaces using GPHLVM and GPLVM. | This is due to the increase of volume available to match the graph structure in 3-dimensional spaces relative to 2-dimensional ones. | A
$|\mathbb{E}[S]-V|=\exp(-\Omega(\min\{t_{1}-\kappa_{1},\gamma-\kappa_{2}\}))$. | When the initialization is poor, using the sample variance of the checkpoints as an estimator gives a computational improvement over the sample variance of $k$ independent runs of a training algorithm. | Among the inference aggregations, OPA provides the maximum absolute accuracy gains over the DP-SGD baseline of 7.45% and 17.37%, respectively, for both $\varepsilon\in\{1,8\}$. From the rightmost two plots (Figure 4), we see that DP-SGD baseline models exhibit very large variance with PDS CIFAR10 across training steps, but all the inference aggregation methods completely eliminate the variance. | Here, the expectation $\mathbb{E}[\cdot]$ and the variance $\mathbf{Var}[\cdot]$ are over the randomness of DP-SGD. | We perform an empirical study of using the checkpoint variance estimator. We consider running DP-SGD on a 1-dimensional quadratic loss; we ignore clipping for simplicity, and assume the training rounds/privacy budget are fixed such that we can do exactly 128 rounds of DP-SGD. We set the learning rate $\eta=.07$, set the Gaussian variance such that the distribution of the final iterate has variance exactly 1, and set the initialization to be a random point drawn from $\mathcal{N}(0,\sigma^{2}=100^{2})$. Since $(1-\eta)^{64}\approx 1/100$, under these parameters it takes roughly 64 rounds for DP-SGD to converge to within distance 1 of the minimizer. This reflects the setting where the burn-in time is a significant fraction of the training time, i.e. where 5.1 offers improvements over independent runs. We vary the burn-in time (i.e. round number of the first checkpoint) and the number of rounds between each checkpoint (i.e., the total number of checkpoints used) used in the variance estimator, and compute the error of the variance estimator across 1000 runs. | C
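The 1-dimensional experiment described above is concrete enough to sketch. The snippet below is our own illustrative reading, not the authors' code: the per-step noise scale is simply chosen so that the stationary variance of the iterate is 1, which is one way to realize the stated condition on the final iterate, and the burn-in and checkpoint gap are example values.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, T = 0.07, 128                       # learning rate and number of DP-SGD rounds
sigma_step = np.sqrt(1 - (1 - eta) ** 2)  # per-step noise; stationary variance ~= 1 (assumed reading)


def dp_sgd_run(burn_in, gap):
    """One noisy-gradient run on the quadratic loss x^2 / 2 (gradient = x);
    returns the checkpoints collected from round `burn_in` on, every `gap` rounds."""
    x = rng.normal(0.0, 100.0)            # poor initialization N(0, 100^2)
    checkpoints = []
    for t in range(1, T + 1):
        x = (1 - eta) * x + sigma_step * rng.normal()
        if t >= burn_in and (t - burn_in) % gap == 0:
            checkpoints.append(x)
    return np.array(checkpoints)


# Error of the checkpoint-based variance estimator, relative to the target variance 1,
# averaged over 1000 runs (vary burn_in and gap to reproduce the described sweep).
errors = [np.var(dp_sgd_run(burn_in=64, gap=8), ddof=1) - 1.0 for _ in range(1000)]
print("mean abs error:", np.mean(np.abs(errors)))
```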
In addition to comparing the entire PCKRF framework, we also conduct experiments on the point cloud completion method. For our pipeline, we are particularly concerned with the details around keypoints, so we designed a keypoint detector as a decoder to enhance the details around keypoints. However, we also found that our detector did not perform well for overly complex completion frameworks, and even had a counterproductive effect. Based on this, we chose PCN as our backbone. | Our refinement method mainly contains two modules. Firstly, we propose a point cloud completion network to fully utilize the point cloud and RGB data. Our composite encoder of the network has two branches: the local branch fuses the RGB and point cloud information at each corresponding pixel, and the global branch extracts the feature of the whole point cloud. The decoder of the network follows [16] and employs a multistage point generation structure. Additionally, we add a keypoint detection module to the point cloud completion network during the training process to improve the sensitivity of the completed point cloud to pose accuracy, leading to better pose optimization. Secondly, to use color and point cloud data in registration and to enhance method stability, we propose a novel method named Color supported Iterative KeyPoint (CIKP), which samples the point cloud surrounding each key point and leverages both RGB and point cloud information to refine object keypoints iteratively. However, the CIKP method will make it hard to refine all key points when the point cloud is incomplete, which limits its performance. To address this issue, we introduce a combination of our completion network and the CIKP method, referred to as Point Cloud Completion and Keypoint Refinement with Fusion (PCKRF). This integrated approach enables the refinement of the initial pose prediction from the pose estimation network. We further conduct extensive experiments on YCB-Video[10] and Occlusion LineMOD[6] datasets to evaluate our method. The results demonstrate that our method can be effectively integrated with most existing pose estimation techniques, leading to improved performance in most cases. | Table VIII is performed on the YCB dataset. We employed SeedFormer [51], a Transformer-based point cloud completion method, along with the traditional PCN [16] as the backbone while keeping other modules unchanged apart from the completion network. Experimental results demonstrated that the inclusion of SeedFormer for point cloud completion had minimal impact on the final outcome. Surprisingly, the direct application of PCN even showed adverse effects on the results. However, when combined with our custom-designed completion network, it positively influenced the accuracy. We postulate that this is primarily due to PCN’s superior adaptability compared to Transformer in handling bidirectional fusion of RGB and point cloud information for the task of point cloud completion. | Figure 2: The upper diagram features the PCKRF pipeline and the lower diagram is the architecture of our point cloud completion network. In the preprocessing step, we utilize the segmentation result and pose of the target object given by the pose estimation network to obtain the partial point cloud in the object coordinate system. The PCKRF pipeline first completes the partial point cloud by the point completion network and then refines the initial pose by our CIKP method. 
In the point cloud completion network, the Feature Extractor fuses the point cloud and RGB color at each corresponding pixel, and the Keypoint Detector predicts the offset from each point to each keypoint to improve the sensitivity of the completed point cloud to pose accuracy. The loss function of the completion network is a joint optimization of the keypoint detector Loss $L_{kp}$ and the completion decoder Loss $L_{cd}$. | Table VII shows the experimental results of applying point cloud completion methods to point cloud registration on the YCB-Video dataset, with ADD(S) AUC serving as the evaluation metric. We evaluate the impact of three methods: not using any point cloud completion (None), using the PCN method, and using our proposed method (Ours) with the point-to-point ICP, Colored 6D ICP, and CIKP methods, respectively. The results show that regardless of whether the initial pose estimation results of FFB6D or PVN3D are used, using the PCN method as the point cloud completion method is similar to the optimization results without using point cloud completion for point-to-point ICP and Colored 6D ICP methods. However, for the CIKP method, the result is the opposite and PCN gets lower performance than without completion. On the contrary, after applying our proposed point cloud completion method, the results of the three methods are improved to varying degrees compared to those without a point cloud. In particular, the CIKP method registers the point cloud near the keypoint. Without completion, only visible points near the keypoints participate in optimization. With completion, all keypoints (visible or not) participate in optimization. As a result, the impact of completed point clouds on the CIKP method is more significant than on the ICP and Colored 6D ICP methods. Therefore, the effect of the CIKP method drops significantly when the PCN method is adopted, while the other methods are not affected as much. | B
The section is motivated by the BHK interpretation of intuitionism (Section 2.3) in which | This distinction between realizability and proof-theoretic validity is why IPL is not ‘structurally’ complete — see, for example, Pogorzelski [45]. That is, while the horizontal bar in natural deduction corresponds to realizability — that is, the existence of realizers for the things above it guarantees the existence of the things below it — the implication corresponds to proof-theoretic validity — that is, there is a $\mathscr{B}$-valid argument for $\varphi\to\psi$ iff there is a $\mathscr{B}$-valid argument from $\varphi$ to $\psi$. | While realizability and proof-theoretic validity are deeply connected, they are not the same thing. The realizability interpretation takes place at an essentially classical meta-level, while proof-theoretic validity takes place at an essentially intuitionistic meta-level. | In parallel, Gheorghiu and Pym [21] have shown that the key factor driving the proof-theoretic semantic distinction between intuitionistic and classical logic lies in the interpretation of disjunction (cf. Dummett [11]). Moreover, just as intuitionistic logic corresponds to the (simply typed) $\lambda$-calculus as a canonical instantiation of the realizability (i.e., BHK) interpretation, classical logic corresponds to the $\lambda\mu$-calculus (see Parigot [38]). In this context, Pym and Ritter [52] have shown that one can give two natural interpretations of disjunction, both of which are constructive, through $\lambda\mu$-terms, one corresponding to intuitionistic disjunction and the other corresponding to classical disjunction. Investigating the concept of proof-theoretic validity for classical logic relative to these findings remains future work. | This Lemma simplifies the presentation of the argument that proof-theoretic validity is equivalent to support; that is, the proof of Theorem 33. | B
Otherwise, the algorithm chooses a component $x_{i}$ for which the solution value $\hat{x}_{i}$ is not integral. It then creates two new subproblems, by either adding the constraint $x_{i}\leq\lfloor\hat{x}_{i}\rfloor$ or $x_{i}\geq\lceil\hat{x}_{i}\rceil$. This operation is called branching. The tree of subproblems, in which the children of a problem are created by the branching operation, is called the branch-and-bound tree. Because a subproblem contains more constraints than its parent, its objective value is greater than or equal to that of its parent. The algorithm can also be used to solve mixed-integer linear programs (MIPs), where some of the variables are allowed to be continuous. | Not much theoretical research has been done on the choice of the node selection rule. | problem is still relevant for the choice of a node selection rule, even if all nodes | At the core, the algorithm consists of two important components: the branching rule and the node selection rule. | the node label to the time that the node is explored. Note that this distinction does not really matter for the application of the algorithm as a node selection rule in branch-and-bound, since there the node labels are fixed because they are derived from the integer program and branching rule. | C
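The branching operation described in the first cell can be sketched directly; representing a subproblem as a list of added bound constraints is our own simplification, not a specific solver API:

```python
import math


def branch(subproblem, x_hat, integer_vars, tol=1e-6):
    """Split a subproblem on a variable whose relaxation value is fractional.

    `subproblem` is a list of bound constraints (var_index, sense, bound);
    `x_hat` maps variable index -> value in the relaxation solution.
    Returns two child subproblems, or None if all integer variables are integral.
    """
    for i in integer_vars:
        v = x_hat[i]
        if abs(v - round(v)) > tol:                       # fractional component found
            left = subproblem + [(i, "<=", math.floor(v))]   # add x_i <= floor(v)
            right = subproblem + [(i, ">=", math.ceil(v))]   # add x_i >= ceil(v)
            return left, right
    return None   # integral solution: candidate incumbent, no branching needed


children = branch(subproblem=[], x_hat={0: 2.4, 1: 3.0}, integer_vars=[0, 1])
print(children)   # ([(0, '<=', 2)], [(0, '>=', 3)])
```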
This might be explained by IIP being a strict subset of the input-dependent methods, as these could zero-out the input and instead supply constant prompts. | Third, we find that, as in the case of CLIP, the PGN approach vastly outperforms the baseline IIP approach in adapting these models to the datasets, e.g., showing gains of 40% for CIFAR100. | Note however that the PGN is not doing the heavy-lifting in terms of classifying the images by itself, as its output is not well-aligned with the ground-truth, as demonstrated in Figure 3 of the main paper. | This is further demonstrated in fig. 4, where we compare the similarities of the representations of the various components. | The benefits of input-dependency are further demonstrated by using convolutional neural networks as backbones with gains of up to +5.8% for CIFAR100 (77.9 vs. 72.1). | D
The reliability diagrams in \figureref{figure: reliability diagrams} display the correspondence or discrepancy between a model prediction uncertainty and the observed frequencies in a test set of data. Ideal calibration corresponds to the $y=x$ diagonal: we see that PBNN mini-batch size can help calibrate a model prediction which is overconfident. | The reliability diagrams in \figureref{figure: reliability diagrams} display the correspondence or discrepancy between a model prediction uncertainty and the observed frequencies in a test set of data. Ideal calibration corresponds to the $y=x$ diagonal: we see that PBNN mini-batch size can help calibrate a model prediction which is overconfident. | where it is evident that $\langle\delta(\theta^{\prime},\theta)\rangle=\Delta(\theta^{\prime},\theta)$. In this case, varying the size $n$ of the mini-batches corresponds to changing the target posterior distribution that we wish to sample. This relates to empirical Bayes and tempering techniques shown in the appendix (see \sectionref{apd:first}). However, these well-established techniques do not target the distribution defined by \equationref{equation: expected loss}, resulting in qualitatively different results compared to PBNN (cf. \figureref{figure: tempered} in \appendixref{apd:first}). As shown in \equationref{equation: mini-batch loss}, tempering the posterior distribution requires the computation of the likelihood over the full data set, a process we aim to avoid. | Note that partial BNNs (Sharma et al., 2023) are used in the experiments shown in \figureref{figure: reliability diagrams} to induce overconfidence. Only the last layer is stochastic, such that the usual posterior distribution (\equationref{equation: posterior distribution}), given the full available dataset, does not capture the total uncertainty (see misspecification in the \appendixref{apd:first}). In \figureref{figure: reliability diagrams}, BNN and PBNN share the same non-stochastic layers. | As shown in \figureref{figure: penalty_linear_regression}, it is possible to target the usual posterior distribution as defined by the loss given in \equationref{equation: BNN loss} using the noise penalty. Since the noise penalty requires the computation of $\sigma^{2}(\theta,\theta^{\prime})$, i.e. the expected variance of the loss differences over multiple mini-batches, we would like to show in the following that it is more interesting to target the posterior distribution defined by the mean loss $\mathcal{L}_{n}(\theta)$. This loss corresponds to the average loss given an infinite number of mini-batches. | C
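A reliability diagram of the kind referred to above can be computed generically as binned confidence versus empirical frequency. This sketch is not the paper's implementation; the bin count and the synthetic toy data are arbitrary choices:

```python
import numpy as np


def reliability_curve(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare the mean confidence of each bin
    with the observed accuracy in that bin; perfect calibration lies on y = x."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(confidences, edges[1:-1])   # indices 0 .. n_bins-1, 1.0 included
    xs, ys = [], []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            xs.append(confidences[mask].mean())       # mean predicted confidence
            ys.append(correct[mask].mean())           # empirical frequency of correctness
    return np.array(xs), np.array(ys)


# Toy usage with synthetic, slightly overconfident predictions.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=2000)
correct = (rng.uniform(size=2000) < np.clip(conf - 0.1, 0.0, 1.0)).astype(float)
print(reliability_curve(conf, correct))   # observed frequency sits below the diagonal
```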
The Speaker-listener Label Propagation Algorithm (SLPA) [67] is a general framework to analyze overlapping communities in social networks. According to this algorithm, nodes exchange information (label) based on dynamic interaction rules. This framework is designed to analyze both individual overlapping nodes and the whole community. SLPA is an extension of the previous label propagation algorithm (LPA) [43]. Each node in the LPA has a single label. This label is iteratively updated by the majority of labels in the neighborhood. After completion of the algorithm, non-overlapping (disjoint) communities are discovered. To allow overlap, each node is allowed to have multiple labels. | Community detection using the BigClam model is the reverse problem of generating the graph. The community affiliation matrix $F$ is determined by the underlying graph $G(V,E)$. The number of communities $k$ is given, and the BigClam model finds the most likely affiliation matrix $\hat{F}$ maximizing log-likelihood. | NOCD stands for neural overlapping community detection [50]. The model consists of an encoder and a decoder. The encoder is a shallow graph convolution network (GCN) with two layers. The decoder is based on the Bernoulli-Poisson model [69]. The output of the encoder is a community affiliation matrix. The decoder attempts to reconstruct the original graph. Finally, a loss is generated based on the reconstruction error. This loss is used to train the GCN model. We use a similar framework in this study. | BigClam was designed for overlapping community detection in large networks [69]. BigClam uses gradient ascent to create an embedding, which is later used to determine nodes’ community affiliation. MNMF [60] uses modularity-based regularization with non-negative matrix factorization. DANMF uses a telescopic non-negative matrix factorization approach based on a deep auto-encoder to learn membership distribution over nodes [73]. Some works [7, 72] performed factorization of modularity matrices using neural nets, while other works considered adversarial learning [28, 11] or deep belief networks [26] to learn community affiliation. However, they did not use graphs in their neural network architecture, which is crucial to achieving superior performance [50]. Finally, NOCD [50] used a graph neural networks (GNN) based encoder-decoder model to detect overlapping communities. Unlike NMF-based methods, it optimizes the weights of a GNN to generate a better community affiliation matrix. | BigClam (Cluster Affiliation Model for Big Networks) was originally designed for large networks [69]. BigClam is a probabilistic generative model for graphs to capture network community structure based on the community distribution of each node. The whole idea of BigClam is based on a bipartite community affiliation network where | D
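The plain LPA update summarized above (each node repeatedly adopts the majority label among its neighbors) can be sketched as follows. This is the basic single-label LPA, not the SLPA extension, and the random tie-breaking rule is an assumption on our part:

```python
import random
from collections import Counter


def label_propagation(adj, n_iters=20, seed=0):
    """Basic LPA: every node repeatedly adopts the most frequent label among
    its neighbors; `adj` maps node -> list of neighbor nodes."""
    random.seed(seed)
    labels = {v: v for v in adj}                  # start with unique labels
    nodes = list(adj)
    for _ in range(n_iters):
        random.shuffle(nodes)                     # asynchronous updates in random order
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            # break ties randomly among the most frequent neighbor labels
            labels[v] = random.choice([l for l, c in counts.items() if c == best])
    return labels                                 # equal labels = same (disjoint) community


# Two triangles joined by a single edge collapse into two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```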
To implement the query, we pick one sketching matrix and compute its corresponding vector $\mathsf{B}^{\top}(\mathsf{B}\mathsf{B}^{\top})^{-1}\mathsf{B}R^{\top}Rh$. One of the main challenges in our data structure is to implement this sophisticated step with a Kronecker product-based projection matrix. We show that as long as $W=U\Lambda U^{\top}$ with only the diagonal matrix $\Lambda$ changing, we can leverage the matrix Woodbury identity and still implement this step relatively fast. For query, we note that a naive approach will be just multiplying the $n^{2}\times n^{2}$ projection matrix with a vector, which will take $O(n^{4})$ time. To break this quadruple barrier, however, is non-trivial. While using sketching seemingly speeds up the matrix-vector product, this is not enough: since we update the projection in a lazy fashion, during query we are required to “complete” the low rank update. Since $W$ is positive semi-definite, the orthonormal eigenbasis might be dense, causing a dense $n^{2}\times n^{2}$ multiplication with an $n^{2}$ vector. Hence, we require the eigenbasis $U\in\mathbb{R}^{n\times n}$ to be relatively sparse, i.e., $\mathrm{nnz}(U)=O(n^{1.5+a/2})$ for $a\in(0,1)$. Equivalently, we can seek a simultaneous diagonalization using a sparse matrix. In this work, we keep this assumption, and leave removing it as a future direction. | The second main result uses techniques from differential privacy to develop robust data structures against an adaptive adversary. The intuition of such a data structure is to protect the privacy of internal randomness (i.e., sketching matrices) from the adversary. | To improve the runtime efficiency and space usage of Monte Carlo data structures, randomness is typically exploited and made internal to the data structure. Examples include re-using sketching matrices and locality-sensitive hashing [IM98]. To utilize the efficiency brought by internal randomness, these data structures assume the query sequence is chosen oblivious to its pre-determined randomness. This assumption, however, is not sufficient when incorporating a data structure in an iterative process, where oftentimes the input query is chosen based on the output from the data structure over prior iterations. Since the query is no longer independent of the internal randomness of the data structure, the success probability guaranteed by the Monte Carlo data structure usually fails.
| To make our data structure both robust against an adaptive adversary and reduce the number of sketches to use, we adapt a differential privacy framework as in the fantastic work [BKM+22]. Given a data structure against an oblivious adversary that outputs a real number, the pioneering work [BKM+22] proves that it is enough to use $\widetilde{O}(\sqrt{T})$ data structures instead of $T$ for an adaptive adversary, while the runtime is only blown up by a polylogarithmic factor. However, their result is not immediately useful for our applications, since we need an approximate vector with $n^{2}$ numbers. We generalize their result to $n^{2}$ dimensions by applying the strong composition theorem, which gives rise to $\widetilde{O}(n\sqrt{T})$. While not directly applicable to the SDP problem, we hope the differential privacy framework we develop could be useful for applications when $n<\sqrt{T}$, i.e., problems that require a large number of iterations. Due to the tightness of strong composition [KOV15], we conjecture our result is essentially tight. If one wants to remove the $n$ dependence in the number of sketches, one might need to resort to much more sophisticated machinery such as differentially private mean estimation in nearly-linear time. We further abstract the result as a generic set query data structure, with the number of sketches required scaling with the number of coordinates one wants to output. | previous outputs. This means that our data structure should be robust against an adaptive adversary. Such an adversary can infer the randomness from observing the output of the data structure and design new input to the data structure. Prior works combat this issue by using a uniformly-chosen sketching matrix that won’t be used again. | A
Monotone systems are a class of dynamical systems characterized by preserving a partial ordering along their trajectories. | The framework of monotone systems has been successfully used to model complex systems in nature such as biochemical cascade reactions [1] as well as engineered systems such as transportation networks [2]. | Monotone system theory has been successfully used for scalable control design in cooperative systems [5] and in systems with rectangular safety constraints [37]. However, in many applications, due to the nature of the problem, estimating the safe set using hyper-rectangles can either make the control design infeasible or can lead to overly-conservative results. In this subsection, we develop a scalable approach for state feedback design with safety guarantees, where we under-approximate the safe set using polytopes. Consider the following control system: | In the literature, the translation-invariance property (11) is sometimes referred to as plus-homogeneity [35] and has been shown to play a critical role in the stability of $K$-monotone dynamical systems [20]. | cooperative systems, i.e., systems that are monotone with respect to the positive orthant. | A
We introduced PoseScript, the first dataset to map 3D human poses and descriptions in natural language. | We provided multi-modal applications to text-to-pose retrieval, to text-conditioned human pose generation and pose description generation. | We next study the problem of text-conditioned human pose generation, i.e., generating possible matching poses for a given text query. Our proposed model is based on Variational Auto-Encoders (VAEs) [61]. | This section addresses the problem of text-to-pose retrieval, which involves ranking a large collection of poses based on their relevance to a given textual query. This task is also relevant for the inverse problem of pose-to-text retrieval. To tackle cross-modal retrieval problems, it is common practice to encode both modalities into a shared latent space. | Future works. The PoseScript dataset could be extended to account for multi-people interactions. One could also leverage knowledge from large multi-modal models (e.g. text-to-image) to help filling in the gaps of the collected data in some aspects (e.g. activity concepts). One could further explore the use of a text-based pose prior (i.e. with body semantics awareness) for other applications, e.g. action recognition. | A |
We adapt various transition systems from the constituency parsing literature to handle TOP annotations and conduct a comprehensive comparison against the original top-down approach, demonstrating the superiority of the in-order algorithm across all scenarios. | Lastly, we incorporate the recent state-of-the-art approach by Do et al. [33], which employs a language model enhanced with semantic structured information, into both high-resource and low-resource comparisons. | We extensively evaluate the three proposed algorithms across high-resource and low-resource settings, as well as multiple domains of the widely-used Facebook TOP benchmark. This marks the first evaluation of a shift-reduce approach in low-resource task-oriented parsing, to the best of our knowledge. Through these experiments, we demonstrate that the in-order transition system emerges as the most accurate alternative, surpassing all existing shift-reduce parsers not enhanced with re-ranking. Furthermore, it advances the state of the art in both high-resource and low-resource settings, surpassing all top-performing sequence-to-sequence baselines, including those employing larger pre-trained language models like BART. | We evaluate our approach on both low-resource and high-resource settings of the Facebook TOP datasets, pushing the boundaries of the state of the art in task-oriented parsing and narrowing the divide with sequence-to-sequence models. | Overall, our top-down and in-order shift-reduce parsers deliver competitive accuracies on the main Facebook TOP benchmark, surpassing the state of the art in both high-resource and low-resource settings in most cases. Furthermore, shift-reduce parsers ensure that the resulting structure is a well-formed tree in any setting, whereas sequence-to-sequence models may produce invalid trees due to the absence of grammar constraints during parsing. For instance, Rongali et al. [11] reported that 2% of generated trees for the TOP test split were not well-formed. Although Chen et al. [19] did not document this information, we anticipate a significant increase in invalid trees in the low-resource setting. Finally, it is worth mentioning that techniques such as ensembling, re-ranking, or fine-tuning pre-trained language models are orthogonal to our approach and, while they may consume more resources, they can be directly implemented to further enhance performance. | C |
If the number of triangles is at least $\alpha n$ for some $\alpha>0$, then Theorem 12 implies the result. Otherwise, the number of matchings of size 2 is at least $(1-o(1))n$, where Theorem 5 implies the result. | We thank Lutz Warnke for meaningful discussion inspiring the simplified proof of Theorem 6. | To prove the lemma, we shall apply Chebyshev’s inequality (Theorem 4). For this purpose we have to estimate $\operatorname{Var}X$. We have | We thank Michael Krivelevich for pointing out an alternative proof of a theorem that we cite. | We thank Ron Aharoni for helpful suggestions during the preparation of this note. | D
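For reference, the standard form of Chebyshev's inequality invoked above is (a textbook statement, not quoted from the excerpt):

```latex
\[
  \Pr\bigl( |X - \mathbb{E}X| \ge t \bigr) \;\le\; \frac{\operatorname{Var}X}{t^{2}},
  \qquad t > 0 ,
\]
```

so an upper bound on $\operatorname{Var}X$, relative to the scale of $\mathbb{E}X$, is exactly what is needed to show that $X$ concentrates around its mean.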
Our contributions bridge a theoretical gap in justifying heuristics commonly used in practical implementations of the proximal distance algorithm. While the findings are largely theoretical in nature, the resulting stochastic algorithm naturally enables large-scale data to be analyzed that would be intractable under the full batch version of the method. The remainder of this manuscript is organized as follows: we begin with an overview and necessary background in Section 2. Next, Section 3 proposes our stochastic version of the proximal distance algorithm, and discusses connections and differences to related stochastic proximal methods. In Section 4, we present our theoretical analysis of the algorithm, establishing finite error bounds, revealing convergence rates and providing a rigorous justification for mechanisms to increase $\rho$. In Section 5, we conduct an empirical study to validate our analysis and investigate behavior in practice for both convex and non-convex settings, while demonstrating computational gains over the full version. We conclude and discuss future directions in Section 6. | $\operatorname{dist}(\bm{\theta},C)=\inf_{\bm{\theta}^{\prime}\in C}\|\bm{\theta}-\bm{\theta}^{\prime}\|_{2}$ denotes the Euclidean distance between $\bm{\theta}$ and the constraint set $C$. Using the idea of distance majorization (Chi et al., 2014), this reformulation can be solved via iterative algorithms from the perspective of Majorization-Minimization, or MM (Hunter and Lange, 2004; Mairal, 2015) as long as the projection operators are practical to evaluate. This is the case for many common constraints including sparsity, rank, and shape restrictions; a broad variety of examples are considered in Xu et al. (2017); Keys et al. (2019); Landeros et al. (2022). | Many powerful iterative algorithms for optimization involve the proximal operator (Bauschke et al., 2011; Parikh et al., 2014), defined as | Fortunately, there is a large literature on related stochastic proximal methods, which are increasingly prominent in statistical and machine learning problems involving large data sets (Toulis et al., 2020; Lee et al., 2022). In this paper, we draw new connections between the proximal distance algorithm and these studies, leveraging techniques for studying their convergence toward rigorously resolving the open questions regarding the convergence of the proximal distance algorithm under various $\rho_{k}$ schedules. Our point of departure is to propose a stochastic version of the proximal distance algorithm. By evaluating the loss at only a subsample of the data, we break from the original geometry of majorization from which the method was derived. However, we take a different view by drawing analogies to implicit gradient methods. Noting the relation between the penalty parameter and the step size or learning rate in proximal/implicit algorithms, we establish connections to implicit SGD (Toulis et al., 2014; Toulis and Airoldi, 2017; Lee et al., 2022), the proximal Robbins-Monro algorithm (Toulis et al., 2020), and incremental proximal algorithms (Bertsekas, 2011).
We provide new theoretical analyses that reveal convergence rates in several regimes under various polynomial learning rate schedules under weaker assumptions than previous studies. | A fundamental example is the proximal point algorithm (Rockafellar, 1976; Parikh et al., 2014), which minimizes an objective $F$ with successive proximal operations: | B
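The proximal point iteration mentioned in the last cell admits a tiny numerical illustration. For a convex quadratic the proximal operator has a closed form, so the iteration can be written in a few lines; the quadratic and the step parameter below are arbitrary examples of ours, not taken from the cited works:

```python
import numpy as np


def proximal_point_quadratic(A, b, theta0, lam=1.0, n_iters=50):
    """Proximal point iterations theta_{k+1} = prox_{lam * F}(theta_k)
    for F(theta) = 0.5 * theta^T A theta - b^T theta, with A symmetric
    positive definite. Here prox_{lam*F}(x) = (I + lam*A)^{-1} (x + lam*b)."""
    n = len(b)
    M = np.linalg.inv(np.eye(n) + lam * A)   # factor once; reused every iteration
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iters):
        theta = M @ (theta + lam * b)        # one proximal operation
    return theta


A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
theta = proximal_point_quadratic(A, b, theta0=np.zeros(2))
print(theta, np.linalg.solve(A, b))          # both approach the minimizer A^{-1} b
```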
The results demonstrate a clear correlation between batch size and learning rate. | The batch size and learning rate are hyper-parameters that define the NN’s ability to adapt and should be carefully chosen. | The results demonstrate a clear correlation between batch size and learning rate. | Conversely, when a high learning rate is paired with small batch size, the model may undergo severe changes, which could potentially cause instability and even crashes. | By varying batch sizes and learning rates, we could analyze their relationship with control performance. | C |
Let $L$ be a line through a point $r\in V$ between $c$ and $c^{\prime}$ with direction $\vec{m}=(m_{1},1)$, $0<m_{1}<1$. | of the weighted bottleneck distance of the line $L$ through the point $r\in V$ with direction $\vec{m}=(m_{1},1)$, $0<m_{1}<1$, as $L$ rotates toward the vertical direction, does not depend on the point $r$. | As can be seen in the proof of Lemma 3.7, it is not immediately apparent that the bottleneck distance for horizontal or vertical lines is irrelevant for the overall matching distance computation - in other words, this lemma does not discount the possibility that the matching distance is achieved as a limit to such a line. To understand better the role that horizontal and vertical lines play in the computation of the matching distance, we must more formally derive the cost of a vertical line (Definition 3.13); the correctness of this definition relies on Theorem 3.12. | The cost of a vertical line $V$ is the limit of the weighted bottleneck distance of a line $L$ through a fixed point $r\in V$ as it rotates toward the vertical direction: | Then the limit of the weighted bottleneck distance along $L$ as it rotates around $r$ toward the vertical direction, written as | D
We leverage equivariant denoising diffusion probabilistic models (DDPMs) [23, 24] to generate molecules and binding conformations jointly with respect to a specific protein target. | Figure 1A schematically depicts the 3D diffusion procedure. During training, varying amounts of random noise are applied to 3D structures of real ligands and a neural network learns to predict the noise-less features of the molecules. For sampling, these predictions are used to parameterize denoising transition probabilities which allow us to gradually move a sample from a standard normal distribution onto the data manifold. | Denoising diffusion probabilistic models (DDPMs) [51, 23] are a class of generative models inspired by non-equilibrium thermodynamics. Briefly, they define a Markovian chain of random diffusion steps by slowly adding noise to sample data and then learning the reverse of this process (typically via a neural network) to reconstruct data samples from noise. | (D) Iterative procedure to tune molecular features. We find variations of a starting molecule by applying small amounts of noise and running an appropriate number of denoising steps. The new set of molecules is ranked by an arbitrary oracle and the procedure is repeated for the strongest candidates. | In this work, we investigated the distribution learning capabilities of 3D-conditional diffusion models as an alternative to the autoregressive paradigm. To this end, we developed DiffSBDD, an $SE(3)$-equivariant 3D-conditional diffusion model that generates small molecule ligands for given target pocket structures, with chemical properties that closely match those of native ligands. | A
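The forward (noising) half of a DDPM, which the excerpt describes as applying varying amounts of random noise to real samples, is standard and can be sketched independently of the molecular application. The linear beta schedule and array shapes below are generic choices, not DiffSBDD's:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative product of (1 - beta_t)


def q_sample(x0, t, rng):
    """Forward process q(x_t | x_0): corrupt a clean sample at step t.
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps                         # eps is the denoiser's regression target


rng = np.random.default_rng(0)
x0 = rng.normal(size=(16, 3))              # e.g. 16 point coordinates in 3D
xt, eps = q_sample(x0, t=500, rng=rng)     # a partially noised sample and its noise
```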
Timestamped linear velocities (m/s) of mobile platform along [east, north, up] | Table VII: ATE [m] for C-SLAM in the S3Ev2.0 outdoor | Available in all sequences of S3Ev1.0. ² Available in outdoor sequences of S3Ev1.0. ³ Available in all sequences of S3Ev2.0. | The position $t_{a\in\{x,y,z\}}$ denotes the robot’s location in | S3Ev1.0 outdoor environment without UWB measurement. $\alpha$, $\beta$, | B
Under mild assumptions, we prove that the iterates of a broad range of Langevin-based sampling schemes converge | Wasserstein distances for a wide spectrum of sampling schemes, thus resolving all the | all measures other than $\pi$). Thus, all requirements of [6, Prop. 6.4] are | In this paper, we provided a new, unified framework for analyzing a wide range of sampling schemes, | Table 1: Comparison to existing works on convergence of LRM schemes. All methods, except for [4], require bounded second moments of the iterates. | A
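A minimal example of a Langevin-based sampling scheme of the kind analyzed above is the unadjusted Langevin algorithm; the Gaussian target and the step size below are illustrative choices only:

```python
import numpy as np


def ula(grad_log_pi, theta0, step, n_steps, rng):
    """Unadjusted Langevin algorithm:
    theta_{k+1} = theta_k + step * grad log pi(theta_k) + sqrt(2 * step) * xi_k."""
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        theta = theta + step * grad_log_pi(theta) \
            + np.sqrt(2.0 * step) * rng.normal(size=theta.shape)
        samples.append(theta.copy())
    return np.array(samples)


# Example target: standard Gaussian, so grad log pi(x) = -x.
rng = np.random.default_rng(0)
samples = ula(lambda x: -x, theta0=[5.0], step=0.05, n_steps=5000, rng=rng)
# After burn-in, the empirical mean is near 0 and the variance near 1
# (up to the small discretization bias of the unadjusted scheme).
print(samples[1000:].mean(), samples[1000:].var())
```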
The remainder of this paper is organized as follows: the details of the CT reconstruction problem and the related works are introduced in Section II. The proposed framework is introduced in Section III, and compared with conventional methods, pre-trained and untrained methods with a similar network structure on real CT images under various imaging conditions in Section IV. The results from our experiments are discussed in Section V followed by the conclusion Section VI. | The proposed framework can handle multiple geometries and non-ideal factors. Such versatility exists thanks to the U-net architecture’s widespread use in 2D and 3D images and the thorough analysis of IR algorithms in the RBP connection. In fact, the RBP-DIP framework is not limited to CT and has the potential to solve other reconstruction problems, as it only requires an untrained model for DIP image generation and a conventional IR method for residual back projection. Thanks to its concise architecture, further improvements such as employing more delicate IR methods and neural networks are also possible. Additional priors can also be incorporated as needed in both the RBP connection and the objective function. | This study shows that the RBP-DIP framework shows improvements over the original DIP method and two pre-trained methods with similar network structures. Also, the RBP-DIP framework has great potential for further improvement. The RBP connection in this paper is implemented by a basic IR algorithm with the objective function $\|\boldsymbol{g}-\boldsymbol{A}\boldsymbol{x}\|^{2}$ without any additional constraints or regularizations. However, it is evident that RBP-DIP is compatible with all other IR methods, constraints, and regularizations. Notably, these techniques can be incorporated into both the objective function and the RBP connection. In contrast to pre-trained models, these adjustments can be adapted on demand (e.g., augmenting the total variation regularization’s weight when dealing with noisy input data). Moreover, the proposed framework employs a basic U-net structure, but more sophisticated models such as U-net++[55] and ResUnet[56] might be integrated for further improvement. | IR approaches frame image reconstruction as an optimization problem, minimizing the inconsistency between the forward projection of the iterate and the measurements. X-ray physics, non-ideal effects, and image priors can be incorporated into the reconstruction algorithms by employing various constraints, regularizations, and forward models. The objective function for the CT optimization problem is often expressed by fidelity and regularization terms: | Also, Fig.5 and Fig.6, particularly Fig.6b, show that an increasing number of views does not sufficiently lead to improved performance of pre-trained models. The reason is that pre-trained models aim to learn the mapping between the input and the corresponding output from training datasets, rather than actually solve the corresponding inverse problem. Increasing the number of views cannot directly strengthen this mapping. Conversely, the proposed RBP-DIP directly minimizes the inconsistency between the ground truth and reconstructed images under the same measurements.
Increasing the number of views reduces the dimensions of the solution space and thus benefits both network optimization and the iterative reconstruction (IR) algorithms integrated in the RBP connection. This outcome is further verified in the last row of Fig.7, and Fig.8, where RBP-DIP considerably outperforms other methods. | C |
Looking at prevalence of latent configuration over time in 4(b), we initially see lower prevalence as the pool has unused IP addresses to allocate. Beyond that, prevalence for Random and LRU allocation approaches $p_{c}$ (note that prevalence can exceed $p_{c}$ as multiple tenants have the opportunity to associate configuration with a given IP address). LRU unsurprisingly outperforms Random slightly, due to the higher time between reuse of IP addresses. While higher time between reuse most clearly reduces aggregate exposure of latent configuration under our posited exponential distribution, cloud providers could also use EIPSim with other models of latent configuration to validate against their unique scenarios. We expect similar results from any monotonic distribution of $d_{v}$. | The way in which cloud IP addresses are allocated has a substantial impact on the security of hosted applications. Our work proposes new defenses for cloud IP allocation, and evaluates these defenses through a comprehensive model of tenant and adversarial behavior. Our proposed new policy, IP scan segmentation, successfully reduces an adversary’s ability to scan the IP pool even if they can create new cloud tenants without limit. We anticipate that new IP allocation policies, such as IP scan segmentation, will prove useful to providers in protecting their customers and address pools. To that end, we release both our models and policies as open source for use by providers. We are also hopeful that modeling of IP allocation, such as that implemented in EIPSim, will enable further improvements in the security of cloud provider offerings. | IP addresses are a scarce resource, so cloud providers should aim to achieve the best security against latent configurations while incurring minimal pool size overhead. In 4(d), we see that the studied allocation policies have differing behavior as pool size changes. While both Tagged and Segmented outperform the Random and LRU policies, Tagged performs slightly better. This is because benign tenants preferentially receive IPs from other tenants exhibiting benign behavior, making other IPs available for segmentation to heuristically malicious tenants. Our experiments demonstrate that allocation policies can have marked impact on overall latent configuration exposure even for high IP allocation ratios. | 5(b) displays the adversary’s ability to discover new IPs across policies and allocation ratio. The Random and LRU policies exhibit roughly identical behavior: IP yield is reduced as the pool gets smaller ($AR_{max}$ gets higher) because the adversary is more likely to receive the same IPs back. Likewise, Tagged and Segmented both almost completely eliminate the single tenant adversary’s ability to discover new IPs. This is unsurprising, as both strategies tag IPs to the most recent tenant and reallocate those IPs back to the tenant. Segmented exhibits a slight increase in adversarial IP yield at very high allocation ratios, as other tenant allocations interfere with the IPs tagged to the adversary–this does not occur in Tagged because the LRU backup queue prevents the tenant’s tagged IPs from being taken.
| While an adversary might directly seek to observe a high number of IPs, the end goal is to discover IPs that actually have associated configuration. Our results (5(c)) demonstrate a marked difference here as well, with both Tagged and Segmented performing equivalently well against the single-tenant adversary. As seen in the non-adversarial scenario, LRU also slightly outperforms Random as IP addresses are held in the pool longer before reuse, though this effect is diminished as the allocation ratio increases since the policies are best effort and must allocate some available IP to the adversary. | B |
Leading text-based ZSL methods map class descriptions or images to a shared representation, but that mapping is constant for all classification tasks. | Figure 5 shows that T2M-HN captures the complex semantic distinctions of our task better than baselines. We attribute this to its ability to draw new classifiers for each new description. | Figure 1: The text-to-model (T2M) setup. (a) Classification tasks are described in rich language. (b) Traditional zero-shot methods produce static representations, shared for all tasks. (c) T2M generates task-specific representations and classifiers. This allows T2M to extract task-specific discriminative features. | To better understand the results, consider an important distinction between our approach and previous shared-representation approaches. These approaches aim to learn class representations that would generalize to new classification tasks. In contrast, our approach aims to build task-specific representations and classifiers. For easy tasks, task-dependent representation may not be important because the input contains a sufficient signal for accurate classification. In contrast, in hard tasks, a model would benefit from task-dependent representation to focus on the few existing discriminative features of the input examples. Indeed, as demonstrated in Figure 4, in the easy tasks, although our model is superior on the seen classes, it is outperformed by the GAN-based baselines on unseen classes. In contrast, for the hard tasks, where task-specific class representation is more valuable, our model is superior on both seen and unseen classes. | Our T2M-HN is designed to use information about the classes of each specific classification task. | D |
In this work, we design a Bi-level Contrastive Learning scheme (Bi-CL) to learn discriminative representations of tangled multi-party dialogue utterances. It not only learns utterance level differences across sessions, but more importantly, it encodes session level structures discovered by clustering into the learned embedding space. Specifically, we introduce session prototypes to represent each session for capturing global dialogue structure and encourage each utterance to be closer to their assigned prototypes. Since the prototypes can be estimated via performing clustering on the utterance representations, it also supports unsupervised conversation disentanglement under an Expectation-Maximization framework. We evaluate the proposed model under both supervised and unsupervised settings across several public datasets. It achieves new state-of-the-art on both. | In this work, we design a Bi-level Contrastive Learning scheme (Bi-CL) to learn discriminative representations of tangled multi-party dialogue utterances. It not only learns utterance level differences across sessions, but more importantly, it encodes session level structures discovered by clustering into the learned embedding space. Specifically, we introduce session prototypes to represent each session for capturing global dialogue structure and encourage each utterance to be closer to their assigned prototypes. Since the prototypes can be estimated via performing clustering on the utterance representations, it also supports unsupervised conversation disentanglement under an Expectation-Maximization framework. We evaluate the proposed model under both supervised and unsupervised settings across several public datasets. It achieves new state-of-the-art on both. | We design a bi-level contrastive learning scheme to learn better utterance level and session level representations for disentanglement. | Figure 2: Overview of the proposed Bi-CL framework. It incorporates utterance level contrastive loss to discriminate utterances, and uses session level contrastive loss to encourage them flocking around session centers. | Despite their improved performance, these instance discrimination methods share a common weakness: the representation is not encouraged to encode the global semantic structure of data Caron et al. (2020). This is because it treats two samples as a negative pair as long as they are from different instances, regardless of their semantic similarity Li et al. (2020a). Hence, there are methods which simultaneously conduct contrastive learning at both the instance- and cluster-level Li et al. (2021); Shen et al. (2021). Likewise, we emphasize leveraging bi-level contrastive objects to learn better utterance level and session level representations. | B |
This section presents recent literature focusing on duration modelling in E2E training. In FastSpeech [6], duration information is obtained from a Transformer TTS [20], which is considered a teacher model. External aligners are used in a few papers– MFA [7, 18], HMM-based [21], and connectionist temporal classification (CTC) based [22]. TTS systems trained in [9, 11, 12, 23, 24] learn duration information internally using HMM-based approaches. Soft and hard alignments are learnt with monotonicity constraint in [9, 11, 12]. Glow-TTS [13] uses normalizing flows and dynamic programming to determine the most probable monotonic alignments between text and the latent audio representation. Variational autoencoder with adversarial learning text-to-speech system (VITS) [14] also uses the monotonic alignment search (MAS) proposed in [13]. In [25], word-level hard alignments are obtained from an external aligner, and soft phone alignments are learnt using a word-to-phone attention network. A recently developed network called SoftSpeech [26] proposes a soft length regulator for unsupervised duration modelling within the FastSpeech2 network. Among the presented literature, [24, 26] demonstrate the capability of their TTS systems in low resource scenarios. | Our observations from the experiments conducted so far are summarised here. Although the alignments from the teacher model are poor (but mostly consistent) in many places, the FastSpeech2 student model still learns to generate good-quality speech, given enough amount of training data. But more accurate alignments are required to further improve the pronunciation of sounds in the generated output, especially in low-resource scenarios. In this context, signal processing cues, such as GD and SBSF, in tandem with deep learning techniques, aid in providing accurate alignments. It is to be noted that the accuracy of alignments also depends on the accuracy of transcriptions in correspondence with the training utterances and the accuracy of the word-to-phone lexicon. | HMM-based alignments and MAS are both statistical-based approaches. In [27], it is seen that forced alignment using HMMs does not always provide accurate alignments, especially for fricatives, affricates and nasals. The boundaries of these classes of sounds are refined using signal processing cues. Hence, in the current work, we use the hybrid segmentation algorithm [16] to obtain accurate phone boundaries for the training data. Hybrid segmentation is an external aligner that combines the complementary features of signal processing and neural network-based techniques. | In the context of HMM-based systems, [15, 16] have studied the effect of accurate alignments on synthesis quality and intelligibility, highlighting the importance of accurate boundaries for training. Current E2E TTS architectures employ machine learning based alignments for system building. Signal processing primarily depends on the acoustic characteristics of the speech signal and is agnostic to the transcriptions. Does combining their complementary features also help in E2E training, as already evidenced in the HTS and conventional neural network-based frameworks? Such a study is very important to produce good quality speech as duration is a vital prosody marker. We employ an external aligner, the hybrid segmentation (HS) algorithm, which combines signal processing cues in tandem with deep learning techniques [16], to obtain accurate alignments for the training data. 
We use the FastSpeech2 architecture [7], and the HiFi-GAN v1 vocoder [17] for E2E training (in this paper, the two-stage pipeline of generating mel-spectrograms and then reconstructing waveforms is also considered as E2E training). | Hybrid segmentation is an alignment technique that combines the complementary features of machine learning and signal processing-based approaches to generate accurate phone boundaries [15, 16]. HMM-based forced alignment does not accurately model the location of phone boundaries. Hence, in [15], these boundaries are corrected using signal processing-based cues. Specifically, a group delay (GD) based algorithm is used to obtain accurate syllable boundaries. However, the drawback of the GD-based technique is that it doesn’t capture the correct number of syllable boundaries as it is agnostic to the text. Hence, spurious GD boundaries are estimated, and the GD boundary closest to an HMM boundary is considered the correct syllable boundary [15]. Then the phone boundaries are re-estimated within these syllable boundaries instead of re-estimating across the entire utterance. | B
We train a meta-learning Reptile Nichol et al. (2018) model on our auxiliary data and use it as another baseline. This allows us to directly assess the added advantage of utilising SLPs in MetaSLP-REPTILE. | We use ProtoNet Snell et al. (2017) as another baseline for both the high and low-resource settings. ProtoNets use Euclidean distance as a measure of similarity between points and clusters, which is similar to DeepSLP and MetaSLP that assign test points to the closest line. | We presented a novel few-shot learning paradigm that is based on soft-label prototypes capturing the simultaneous membership of data points over several classes, and demonstrated its effectiveness in low and high-resource settings. We evaluated our approach on 48 different tasks / settings and showed that it outperforms a range of strong baselines. In the future, we plan to use meta-learning algorithms such as PACMAML Ding et al. (2021) and Bayesian MAML Kim et al. (2018) that relax assumptions with respect to train–test set distributions and thus alleviate this current limitation in our work. | Similar to Bansal et al. (2020a), we use GLUE (Wang et al., 2018) to train our models in the high-resource setting. This dataset consists of a range of natural language tasks such as entailment, classification and textual similarity, which are used for model training and evaluation. We use only the training split for meta-learning. Similar to Bansal et al. (2020a), the MNLI Williams et al. (2018) and SNLI Bowman et al. (2015) entailment tasks, which are three-label classification problems, are split in a pairwise manner such that they are included as multiple two-label datasets during training. Following Bansal et al. (2020a), we also train for detecting the sentiment contained within phrases of a sentence by using the phrase-level annotations in SST2 Wang et al. (2018). | First, we compute the centroid of each class in the input training data. Then, we find and fit class centroids on the minimum number of lines using recursive regression Sucholutsky et al. (2021). This method clusters centroids hierarchically to group similar (interval) centroids together, and fits a regression line on the centroids. The similarity of centroids within a single cluster is judged by how well all the centroids fit on a regression line. If the Euclidean distance of a particular centroid is beyond a pre-defined tolerance threshold $\epsilon$ from a line, it is removed from that cluster and assigned to another one. We use this method for all our experiments, as we experimentally find (on our dev data) that it performs well on high-dimensional data spread across many classes such as the ones we test here. In Appendix A.1, Figure 3(a), we present an example set of lines connecting all centroids. | A
By setting $\alpha<0.5$, Tournesol favors entities that are consensually good, | the Community Notes have been argued to be infiltrated by disinformation groups, | This measure is aligned, in spirit, with some measures of Twitter’s Community Notes. | This is far from being the case in general, and in particular in the case of Tournesol. | The governance of the Community Notes is transparent and fully community-driven, | B
The solution algorithm for the multiscale mortar MFE method has the performance advantage over the similar method for matching grids discussed in [34] that a coarse mortar mesh could be used to obtain a smaller interface problem due to the reduction in the mortar degrees of freedom. Moreover, as discussed in Theorem 5.1 and Remark 5.1, optimal order accuracy on the fine scale can be maintained with a suitable choice of the mortar space polynomial degree. | The solution algorithm for the multiscale mortar MFE method has the performance advantage over the similar method for matching grids discussed in [34] that a coarse mortar mesh could be used to obtain a smaller interface problem due to the reduction in the mortar degrees of freedom. Moreover, as discussed in Theorem 5.1 and Remark 5.1, optimal order accuracy on the fine scale can be maintained with a suitable choice of the mortar space polynomial degree. | We presented a multiscale mortar mixed finite element method for the Biot system of poroelasticity in a five-field fully mixed formulation. The method allows for non-matching subdomain grids at the interfaces, using a composite mortar Lagrange multiplier space that approximates the displacement and pressure on a (possibly coarse) mortar interface grid to impose weakly stress and flux continuity. We established the well-posedness of the method and carried out a multiscale a priori error analysis. The results are robust in the limit of small storativity coefficient. We further presented a non-overlapping domain decomposition algorithm based on a Schur complement reduction of the global system to a (coarse scale) mortar interface problem, which is solved with a Krylov space iterative method. Each iteration requires solving Dirichlet type subdomain problems, which can be performed in parallel. A series of numerical tests illustrates the stability and convergence properties of the method, as well as its computational efficiency. We observed, both theoretically and numerically, that fine scale order convergence can be obtained even for a coarse mortar mesh with a suitable choice of the mortar polynomial degree. An application of the method to a highly heterogeneous benchmark problem illustrates that the multiscale mortar method can achieve comparable accuracy to the fine scale method at a highly reduced computational cost. Moreover, the use of a pre-computed multiscale stress–flux basis further increases the efficiency, making the computational cost independent of the global number of interface degrees of freedom and weakly dependent on the number of time steps. | As noted in Remark 6.1, a coarser mortar mesh can lead to a smaller interface problem, but even in that case the number of subdomain solves of the type (6.14)–(6.18) | an inf-sup condition for the mortar space, as well as inf-sup conditions for the stress and velocity spaces with weak interface continuity of normal components. Next, we establish a priori error estimates for the stress, displacement, rotation, pressure, and Darcy velocity, as well as the displacement and pressure mortar variables in their natural norms. We then consider a fully-discrete method based on the backward Euler time discretization. We show that the solution of the algebraic system at each time step can be reduced to solving a positive definite interface problem for the composite displacement–pressure mortar variable.
Motivated by the multiscale flux basis from [24] and the multiscale stress basis from [35], we propose the construction and use of a multiscale stress–flux basis, which makes the number of subdomain solves independent of the number of iterations required for the interface problem. Moreover, since the basis can be reused at each time step, the total number of subdomain solves depends weakly on the number of time steps. This illustrates that the multiscale basis results in a significant reduction of computational cost in the case of time-dependent problems. Finally, we present the results of several numerical tests designed to illustrate the well-posedness, stability, and accuracy of the proposed MMMFE method. We also consider a test based on data from the Society of Petroleum Engineers SPE10 benchmark, illustrating the multiscale capabilities of the method and the advantages of using a multiscale basis. | C |
(a) MOS results with 95% CI for naturalness without and with accent conversion; and accent and speaker similarity after accent conversion. | This paper introduces a novel framework for accented TTS, which fuses a Conditional VAE with the Tacotron2. The proposed framework allows for efficient synthesis of any chosen speaker’s speech converted to any of the target accents. We conducted extensive objective and subjective tests to evaluate the efficacy of the proposed approach. The results show a strong performance in terms of the model’s ability to synthesize natural-sounding speech in a converted accent, which pushes the state-of-the-art boundaries. We discuss the accent-identity balance and sketch out possible improvements in the development of accented TTS. Overall, the proposed framework has the potential to improve the quality and flexibility of TTS models and could play a significant role in the development of more advanced TTS systems. | We assessed the proposed method’s performance in terms of accent and speaker similarity using MOS to quantify the perceived trade-off between accent and speaker identity after conversion. In the accent similarity test, listeners were given two reference samples: one of the source speaker (S1) to understand the original accent (A1) and one of the target accent (A2) represented by a different speaker (S2). They then rated the similarity of an audio sample of S1 in accent A2 to the reference accent A2. A paired t-test showed a statistically significant difference in accent similarity between CVAE-L and GST, and GMVAE (both with $p<0.001$). All models differed significantly in accent similarity (all $p<0.05$). The results indicate that the proposed methods outperform the baselines in accent conversion, with CVAE-L being the best performer, while the GMVAE baseline performed poorly. | CVAE shows promising results in both objective and subjective evaluations. Section IV-C highlights that while the embedding space supports strong accent conversion, it may disrupt speaker identity. Balancing accent and identity is challenging, as accent is a part of one’s identity. The GMVAE captures speaker identity well but limits accent change. This issue is exacerbated by the dataset having only 4 speakers per accent as we can imagine a case of just 1 speaker per accent, where distinguishing between accent and speaker identity becomes impossible. In future work, we will focus on designing stronger mechanisms to further separate accent from speaker identity. | We visualized the embedding space of the CVAE-NL variant by encoding reference audio of the validation set and performing a t-distributed stochastic neighbor embedding analysis (t-SNE) for 12 of the 24 speakers. In Fig. 3(a), we can see that the speaker embeddings clearly form clusters per speaker. In Fig. 3(b), we observe overlap between speakers of the same specific accents. Interestingly, the combined (concatenated) embeddings in Fig. 3(c) show even more compact clusters. This shows that the identity of each speaker is determined by both of the embeddings to a certain degree. One can imagine that if we move inside the combined embedding space by changing accent embeddings only, we might get a representation of a different speaker too. Naturally, we have observed this phenomenon in some of the synthesized audio samples with accent conversion, which influenced the design of the subjective evaluation tests, described in Section IV-E. | D
We now extend the above hypergraph-based results to develop a behavioral analytic capability assisting with real time detection of port scan activities. | In real-world settings, almost none of the NIDS models can claim to detect all the cyber-attacks at a 100% confidence level to avoid over-fitting their model. In addition, there is generally no prior knowledge of malicious or benign nature of network traffic to each user or computer node. To mitigate this vulnerability in the context of port scanning activities, we use the hypergraph construct discussed above to monitor the simulated network traffic and conduct on-line analysis to detect if the signature pattern of port scan activities exists. | Case 6 extends Case 5 to simulate production environment with no prior knowledge of the nature of network traffic. It uses hypergraph construct to detect if there exists any signature pattern of port scanning activities. Given no prior knowledge of the nature of network traffic, selecting the retrained NIDS with higher detection rate to update the active NIDS is in alignment with the commonly adopted zero-trust practice in cybersecurity. Its detection performance is the same as that of Case 5, namely, volatile initially but attaining 100% detection level after updating the ML Ensemble NIDS at all the threshold choices in $TH$. | Network traffic flow data typically includes source and destination IPs, source and destination ports, and the communication protocol, as found in Network Intrusion Detection Evaluation Dataset CIC-IDS2017 [27] used for this study. This data was then abstracted into a hypergraph to capture the pattern of port scan activities. Mirroring the concept of using closeness centrality to measure the proximity of one node to all others in a network, the abstracted hypergraph enables us to formulate a set of the $s$-closeness centrality to quantify the pattern. These results create a set of hypergraph metrics to use in training the NIDS models, thereby becoming the building blocks of the proposed ML Ensemble NIDS. We then extend these results to develop a behavioral analytic capability monitoring the network activities with no prior knowledge of their activity types to assist with online detection of port scan activities. It is worth noting that modeling the multi-dimensional cyber-security data as hypergraphs, the visualization of hypergraphs and the computation of hypergraph metric features were accomplished using the HyperNetX library [29, 30]. | ML models are applied across a variety of domains to include cybersecurity, with varying effectiveness. In the context of digital world, the attacker launches an attack on victims through the computer network; therefore, it is natural to research cybersecurity from the perspective of a network problem [12]. One can model the network traffic data from source and destination internet protocols (IPs), source and destination ports as nodes in a network, while the protocols and attacks can be modeled as links. Network science based approaches have proven effective in understanding the system traffic to find attack information, detect abnormal traffic, or devise network detection and prevention approaches [13]. | A
Road and context attributes (1 attribute) contain the attribute Carriageway label along with twelve attributes related to data acquisition and annotation metadata. | The attributes which get the largest relative improvement from loss weighting are Pedestrian crossing - inspected road, Median Type, Pedestrian observed flow along the road passenger-side, Roadside severity - driver-side object, and Bicycle facility. | Intersection attributes (5 attributes) capture various intersection characteristics. | Observed flow attributes (5 attributes) record the flow of motorcycles, bicycles, and pedestrians | Roadside attributes (7 attributes) involve dual-side (passenger and driver) attributes | C |
Code to run agent-based and ODE simulations as well as create the phase diagrams listed in this paper is available on Github at github.com/arphysics/optimal-shepherding. Due to the large file sizes the outputs of all raw simulation (400-2000 simulations per phase diagram) are not listed on the Github but are available upon request. | Inspired by observations of shepherding across the animal kingdom, here we have proposed and studied a minimal model of the dynamics of a shepherd guiding a herd towards a desired location or along a path. By investigating two complementary formulations of the underlying dynamics, we have shown the emergence of three fundamentally distinct families of optimal shepherding behavior—mustering, droving, and driving—that arise from our model in different parameter ranges, and characterized by a phase diagram of herding regimes (Fig. 4) in terms of the scaled herd size $\sqrt{N}l_{a}/l_{s}$, and the scaled shepherd velocity $v_{a}/v_{s}$. Moving beyond the strategies to guide a herd to a target, we also considered the benefits and trade-offs in each strategy. A minimal model of efficiency measured by the time taken by the shepherd to transport a single agent show that droving is the most efficient way to transport small herds quickly while driving is most efficient for large cohesive herds. While our analysis is a step towards understanding herding strategies, it also raises a number of questions. Accounting for the effects of complex spatial geometry (e.g. the presence of boundaries and obstacles), shepherd training, and non-local strategies that sacrifice short-term control for long-term gain, the presence of multiple shepherds, and applications of shepherding to different fields Turgut et al. (2008) are all natural questions for future study. | A.R.: investigation, simulation, methodology, visualization, writing, review and editing. D.G.: simulation, visualization, writing. A.H.: methodology, visualization, review and editing. A.G.: investigation, methodology, writing, review and editing. L.M.: conceptualization, investigation, methodology, administration, resources, writing, review and editing. All authors give approval for publication. | Here the $\lambda_{(.)}$ set the inverse timescales of relaxation for the herd position, area, and aspect ratio, with $A_{0}$ and $Q_{0}$ being the relaxed areas and aspect ratios of the herd.
Here $Q_{0}=1$ corresponds to a circular herd in the absence of a shepherd, $\epsilon$ and $\omega$ set the sensitivity of the flock’s area and aspect ratio to the shepherd repulsive force which we take to be of the form $f=f_{0}e^{-R/l_{s}}$, where $l_{s}$ is the repulsion length-scale as before, and the shepherd’s radial and angular velocity relative to the flock are given by $u_{R}$ and $u_{\theta}$, which are free to be manipulated by the shepherd. In the penultimate equation for the evolution of the herd size, we account for the observations seen in our numerical simulations that a shepherd encircling the flock tends to squeeze it periodically by using a phenomenological “centrifugal” term of the form $\zeta\bar{u}_{\theta}^{2}/R$ which captures the radial squeezing effect. | Code to run agent-based and ODE simulations as well as create the phase diagrams listed in this paper is available on Github at github.com/arphysics/optimal-shepherding. Due to the large file sizes the outputs of all raw simulation (400-2000 simulations per phase diagram) are not listed on the Github but are available upon request. | B
Fundamental Principles: We discuss the fundamental principles underlying the occurrence of different types of anomalies in time series data. This discussion aids in understanding the nature of anomalies and how they can be effectively detected. | Benchmarks and Datasets: We compile and describe the primary benchmarks and datasets used in this field. Additionally, we categorise the datasets into a set of domains and provide hyperlinks to these datasets, facilitating easy access for researchers and practitioners. | This section summarises datasets and benchmarks for TSAD, which provides a rich resource for researchers in TSAD. Some of these datasets are single-purpose datasets for anomaly detection, and some are general-purpose time series datasets that we can use in anomaly detection model evaluation with some assumptions or customisation. We can characterise each dataset or benchmark based on multiple aspects and their natural features. Here, we collect 48 well-known and/or highly-cited datasets examined by classic and state-of-the-art (SOTA) deep models for anomaly detection in time series. These datasets are characterised based on the below attributes: | Guidelines for Practitioners: Our survey includes practical guidelines for readers on selecting appropriate deep learning architectures, datasets, and models. These guidelines are designed to assist researchers and practitioners in making informed choices based on their specific needs and the context of their work. | Evaluation Metrics and Interpretability: We provide an extensive discussion on evaluation metrics together with guidelines for metric selection. Additionally, we include a detailed discussion on model interpretability to help practitioners understand and explain the behaviour and decisions of TSAD models. | D |
Algorithm 2 starts by generating new training labels (residuals) using a pre-trained network $\mathcal{N}$ as demonstrated in line 4 of Algorithm 2. A shallow network $\mathcal{Q}_{2}$ is then trained for a limited number of epochs as shown in line 6 of Algorithm 2. Note that this is essential for preventing overfitting on the residuals and promoting $\delta$-stability as will be discussed later in Section 3.2. The procedure is then repeated until a termination criteria is satisfied (line 2 in Algorithm 2). | Recall Definition 3.6 for $\delta$-stable function. By tuning parameter $\gamma$ in (24), and parameter $\zeta$ in (36), each network $\mathcal{N}$ and $\mathcal{Q}$ is $\delta$-stable. More information on tuning the parameters for $\delta$-stability are provided in Section E.2. Lastly, | Remark 3.27 (Approximate $\delta$-robustness by Algorithm 2). | 3.2.1 $\varepsilon$-$\delta$ stability and approximate $\delta$-robustness | In this section, we introduce key concepts—$\delta$-stability and approximate $\delta$-robustness—that drive the development of our approach. In particular, we prove how the design of Algorithm 1 promotes $\delta$-stability for each hidden layer (Section 3.1.1) while also analyzing training saturation problems that might arise from certain hyperparameter settings (Section 3.1.2). Lastly, Section 3.2.1 analyzes the role of Algorithm 2 in promoting robustness. | D
We train each model with an Adam optimizer with $\beta_{1}=0.9,\beta_{2}=0.999$ | The learning rate starts at $10^{-4}$ and drops to $3\times10^{-5}$ at 1.5M steps, | drops to $10^{-5}$ at 1.8M steps, and drops to $3\times10^{-6}$ at 1.9M steps, | drops to $10^{-6}$ at 1.95M steps. | We set learning rate to $10^{-4}$ and batch size to 8. We crop images to $256\times256$ | A
$\frac{S_{n}}{n\log n}\rightarrow_{p}1.$ | Though subtle difference in practice, the so-called strong consistency has attracted mathematicians and statisticians for decades and it is based on a family of results known as strong law of large numbers (SLLN) [12]. One of the most celebrated SLLN in statistics is the Glivenko-Cantelli theorem. In fact, it asserts a little more: the strong consistency is uniform. Later the result was extended by Dvoretzky, Kiefer and Wolfowitz in 1956 to get an exponential-type convergence bound [13]. | A good estimator should, at least in the asymptotic sense, be close to the true quantity that it wishes to estimate and we should be able to give uncertainty measure based on a finite sample size. An estimator with well-behaved asymptotic properties can help clinicians in many ways such as reducing the number of patients needed in a trial, cutting down the budget for toxicology studies and providing insightful findings for late phase trials. Suggested by Sr. Fisher [1], generations of statisticians have worked on the so-called "consistency" and "asymptotic normality" of estimators. The former is based on different versions of law of large numbers (LLN) and the latter is based on various types of central limit theorems (CLT) [2]. In addition to these two main tools, statisticians also apply other important but less well-known results in probability theory and other mathematical fields. To name a few, extreme value theory for distributions of maxima and minima [3], convex analysis for checking the optimality of a statistical design [4], asymptotic relative efficiency (ARE) of an estimator [5], concentration inequalities for finite sample properties and selection consistency [6] and other non-normal limits, robustness and simultaneous confidence bands of common statistical estimators [7, 8]. | The CLT used to be the central part of mathematics before 1940s. The definitive answer to CLT is given by William Feller in 1940s. However, the Lindenberg-Lévy CLT is the most well-known and widely-used theorem among statisticians and practitioners. Other versions of CLT such as De Moivre-Laplace CLT and Hajék-Sidak CLT are also used in literature. They have paramount influence in both theory and practice. We provide an example below. | The weak and strong consistency are based on weak law of large numbers (WLLN) and strong law of large numbers (SLLN), respectively. The naive example is the consistency of sample mean: we measure the height of a subject multiple times and then take the average of all measurements. The theoretical foundation is based on Chebyshev’s inequality and is known as the Khinchin’s WLLN. | A
Then an event encoder $f$ converts the sequence to an embedding $\mathbf{m}_{i}$, which is passed to the event aggregator $g$, which then makes a prediction $\hat{\mathbf{y}}$. | Researchers have been working on alternatives to overcome heterogeneity in EHR without CDM, which is considered as one of the main challenges in the modeling of medical data. | Several studies on EHR-based prediction have attempted to fully utilize medical domain knowledge. | Our framework presents a method for embedding any form of EHR systems for prediction tasks without requiring domain-knowledge-based pre-processing, such as medical code mapping and feature selection. | First, we compare UniHPF§ with Benchmark§ to see how absence of domain knowledge affects prediction performance. | B
We use $G$ to denote the input graph and $H$ to denote the pattern graph, both $G=(V(G),E(G))$ and $H=(V(H),E(H))$ are simple, undirected and connected graphs. We denote $|V(G)|$ and $|E(G)|$ by $n$ and $m$ respectively and $|V(H)|$ by $k$. | A pattern graph $H$ is divided into orbits, we use the definition from Bondy and Murty (Chapter 1, Section 2 [12]): | We use $G$ to denote the input graph and $H$ to denote the pattern graph, both $G=(V(G),E(G))$ and $H=(V(H),E(H))$ are simple, undirected and connected graphs. We denote $|V(G)|$ and $|E(G)|$ by $n$ and $m$ respectively and $|V(H)|$ by $k$. | is the treewidth of the pattern graph $H$ and $k$ the number of vertices of $H$. | A common formalism used for graph pattern counting is homomorphism counting. The pattern graph is denoted $H=(V(H),E(H))$ and is assumed | A
We propose a causal model depicting the relationship between landscape features and urban/rural property of images, to unravel the type of implicit bias that a model might learn from data. This framework enables us to identify and address unique disparity challenges in deep learning application for satellite images. | We perform satellite image feature extraction with the studied contrastive self-supervised learning (SSL) method, MOCO-v2 (Chen et al. 2020b), and report the semantic segmentation fine-tuning results in Figure 1. There are several major disparities visible, especially for the class of “Forest” and “Agricultural”. To further expose the issue, we evaluate with a general-purpose feature encoder, Segmenting Anything Model (SAM) (Kirillov et al. 2023). It is a vision foundation model trained on a large image dataset (11 millions) of wide geographic coverage for learning comprehensive features. Therefore, the model can transfer zero-shot to image segmentation for our dataset. The results show similar disparities to SSL for each land-cover class (Figure 1). Motivated by the problem, we propose a causal model to unravel feature relationships in satellite images and the design to utilize robust features to mitigate disparity. | For the described bias scenario, we design a fair representation learning method which regularizes the statistical association between pixel-level image features and sensitive variables, termed FairDCL. The method includes a novel feature map based local mutual information estimation module which incorporates layer-wise fairness regularization into the contrastive optimization objective. Given characteristics of satellite images, this work serves to mitigate performance disparities in downstream landscape segmentation tasks. | The idea of constraining mutual information between representation and sensitive attribute, also referred to as bias, to achieve attribute-invariant predictions has multiple applications (Zhu et al. 2021; Ragonesi et al. 2021; Kim et al. 2019), which all operate on a global representation $\mathbf{z}=F(\mathbf{d})$, output from image encoder $F$. However, invariance constraints only on the global output layer do not guarantee that sensitive information is omitted from representation hierarchies of intermediate layers or blocks in a model (herein we use the term “multi-level representation” for simplicity). As has been shown, the distribution of bias in terms of its category, number and strength is not constant across layers in contrastive self-supervised models (Sirotkin, Carballeira, and Escudero-Viñolo 2022). Besides, layer-wise regularization is necessary to constrain the underlying representation space (Jin et al. 2016; Jiang et al. 2017; Li et al. 2019). Pixel-level image features in representation hierarchies are important (O Pinheiro et al. 2020; Wang et al. 2021b), especially when transferring to dense downstream tasks such as semantic segmentation, where representations are aggregated at different resolution scales in order to identify objects in pixel space. Given the evidences in sum, we design a feature map based local mutual information estimation module and incorporate layer-wise regularization into the contrastive optimization objective. | Here we define the scenario with a causal graph, showing that contrastive self-supervised pre-training can utilize spurious land-cover object features, thus accumulate urban/rural attribute-correlated bias.
The biased image representations will result in disparate downstream segmentation accuracy between subgroups within a specific geographic area. Then, we address the problem via a mutual information training objective to learn robust local features with minimal spurious representations. Experimental results show fairer segmentation results pre-trained with the proposed method on real-world satellite datasets. In addition to disparity reduction, the method consistently avoids a trade-off between model fairness and accuracy. | B |
$.01\cdot 10^{-3}$ | $.12\cdot 10^{-2}$ | $.46\cdot 10^{-1}$ | $.28\cdot 10^{-6}$ | $.10\cdot 10^{-4}$ | B
It is also generally unclear how the design decisions impact the performance on downstream tasks. | In this work, we thoroughly compare speech representations learned by HuBERT and MelHuBERT | However, this loss function of predicting the $k$-means of the input also has | after most matrix multiplications, such as after query, key, and value multiplication, | In this work, we identify and study several key components, such as the loss function, | D
2) It relies on large antenna arrays at the PS to achieve vanishing optimality error, which may not be available, e.g., in small UAVs. | Instead, in decentralized settings, multiple receivers receive signals through different channels. This diversity of channels makes simultaneous coherent alignment across multiple receivers impossible. | Scheduling & signaling overhead: Mitigating the impact of unreliable wireless links requires scheduling of transmissions to manage interference, and the acquisition of channel state information (CSI, average or instantaneous) to compensate for signal fluctuations and link outages caused by fading. | In fully decentralized wireless systems lacking centralized coordination, this tension is further exacerbated by the signaling overhead associated with tasks such as scheduling, channel estimation, link monitoring, and topology awareness: | 3) It hinges on average channel inversion to ensure unbiased gradient estimation at the PS. This is impractical in decentralized systems due to the signaling overhead associated with average CSI acquisition, and the diversity of channels across multiple receivers. | D |
Since the preconditioner in Theorem 3.3 is applicable for one dimension only, in this example, $a$-minimization problems are solved by $n$ iterations of the unpreconditioned conjugate gradient method. | For numerical integration, we use the trapezoidal rule on 10,000 uniformly sampled points; see Appendix C. | For this example with $2^{8}$ neurons, we utilize 20,000 sampling points for numerical integration due to the oscillatory nature of the problem that necessitates a more accurate quadrature. | We use the quasi-Monte Carlo method [6] with 10,000 sampling points for numerical integration; see Appendix C for details. | Same as in (29), we use the trapezoidal rule on 10,000 uniformly sampled points for numerical integration. | C
In contrast, our proposed video instance shadow detection approach requires no reliance on video instance annotations, concentrating on detecting and tracking foreground objects along with their associated shadows. | During the training phase, our framework uses an association cycle consistency loss to ensure the consistency of tracking embeddings across frames. This loss helps maintain the identity of shadow-object pairs even when they temporarily disappear. In the testing phase, the bidirectional retrieval mechanism becomes crucial. If an instance (shadow or object) disappears in a frame, the mechanism stores its last known embedding. When the instance reappears, the retrieval mechanism uses the stored embeddings to re-associate the shadow and object, maintaining the continuity of their association. This approach effectively handles occlusions and temporary disappearances, ensuring robust tracking of shadow-object pairs throughout the video sequence. | previous works [1, 2, 3] solely focus on detecting shadow-object pairs and overlook the individual shadows or objects (as shown in the bottom of Fig. 2). Consequently, these methods fail to track frames with occlusion or instances outside the field of view. Therefore, it is necessary to retrieve these single parts of pairs in these scenarios. | They treat shadows merely as background or part of the objects, which limits their ability to handle tasks that specifically require shadow-object pairing. Also, due to the limited categories, they cannot find all objects from the videos. Moreover, standard video segmentation methods do not track the temporal continuity of shadow-object pairs across frames, which is crucial for applications like shadow editing or video inpainting where maintaining the relationship between objects and their shadows over time is essential. | Most importantly, even the shadow or object instance is disappeared in several frames, our method is still to detect and track the left part of shadow-object association, as the comparisons shown in Fig. 2. | D |
We conclude by remarking that the unknown input is characterized by two symmetries. Hence, $\beta(t)$ is unidentifiable. | We have two generators of $\mathcal{O}^{\bot}$ and we discuss the two cases separately. | 5: Compute $\mathcal{O}^{\bot}$. | we have two independent symmetries of the state, which are the generators of $\mathcal{O}^{\bot}$ in (7.4), and, | we have two distinct independent symmetries that are the two generators of $\mathcal{O}^{\bot}$ | A
AL has been widely used in Deep Learning [23] and NLP [36, 8, 25, 4, 21], and it has been shown to be helpful for MRQA as well: | While [16] learn whether samples are annotated automatically or manually, others make use of pool based sampling strategies, based on heuristics, to score samples for annotation [21]. | We consider pool based sampling for our AL scenario, where samples are iteratively selected for annotation. | Table 3: Context and question length in tokens on TechQA for samples selected with different strategies: AL strategies on the data generation model prefer samples with long contexts. | Therefore, a pool of unlabeled samples is ranked using a scoring function based on context $c$. | A
Table 2: Accuracies on MNIST-USPS benchmark. Each column is averaged over the 45 binary classification tasks. M and U indicate MNIST and USPS. Ratios indicate MNIST tasks where one digit is subsampled. In tasks with severe subsampling the proposed algorithm improves the accuracy and achieves the highest performance. DANN performs worse than a regular neural network under subsampling. | Table 3 shows the average source domain accuracy and domain classifier accuracy of DANN. Average source accuracy remains $\geq 95\%$ and average domain classifier remains $\approx 50\%$, indicating that DANN has managed to learn a representation that is suitable for the source domain and maps the points from the source and target domain close to each other. The large drop in DANN’s performance can be attributed to the fact that the representation maps positive points in the source domain close to negative points in the target domain and vice-versa and therefore the joint error of the best hypothesis on the two domains (as described in Section 5) is large. We verify this by training a nearest neighbour (1-NN) classifier on the learned representation in the subsampled settings. The 1-NN classifier uses the source domain representations as the training data and the target domain representations as the test data. The accuracy of this classifier will suffer if in the learned representation the source domain points from one class are mapped close to the target domain points from the other class. The third row in the table, which is also averaged over the 45 tasks, shows a noticeable drop in the performance of the 1-NN classifier and indicates that this problem is present in the learned embeddings. | Recall that a domain-adversarial method consists of a domain classifier (discriminator) that predicts whether a data point is from the source or the target domain, a generator that learns a shared embedding between the two domains, and a label predictor that performs classification on the task of interest using the generator’s embedding. CDAN improves on DANN by conditioning the domain classifier on the label predictor’s prediction. The motivation is the improvements observed by incorporating this modification to generative adversarial networks (Goodfellow et al., 2020; Mirza and Osindero, 2014). | We then investigate why DANN hurts performance under subsampling. A domain-adversarial network like DANN has three components: a domain classifier (discriminator) that predicts whether a data point is from the source or the target domain, a generator that learns a shared embedding between the two domains, and a label predictor that performs classification on the task of interest using the generator’s embedding. The label predictor uses the labeled source data to increase source accuracy, i.e. the label predictor’s accuracy on the source domain. The ultimate goal is to have the label predictor achieve high accuracy on the target domain. The discriminator’s accuracy on the other hand shows how successful the discriminator is in recognizing whether a point is from the source or the target domain. In an ideal case this accuracy should be close to that of a random classifier since the data points from the two domains are mapped close to each other in the shared embedding. | Table 3: Source accuracy and domain classifier accuracy of DANN on MNIST-USPS. The drop in source accuracy under severe subsampling is minimal compared to the drop in target accuracy in the previous table.
The domain classifier accuracy is near random regardless of the amount of subsampling. The performance of a nearest neighbour classifier trained on the mapped source data points and evaluated on the mapped target data points degrades to a large extent with more subsampling. | C |
Furthermore, random masking without focused areas always requires substantial computing resources for pretraining in MIM, which leads to massive training costs. | Figure 4: Visualization of the attention map of the last layer in the encoder after 400 epochs pre-training. From left to right, there is the original image, the attention map from the last layer of | As shown in Fig. 4, the FAMT strategy improves the model’s ability to focus on salient regions while decreasing its attention on the background. Furthermore, the FAMT approach promotes a smoother transition between salient and common features, such as the head and body of the object. | obtain an attention map as a constraint for masking, which we refer to as semantic information extraction, in a completely unsupervised manner. We then introduce frequency domain information to supplement the semantic information provided by the attention map. To be specific, we low-pass filter the image token and then assign a weight to each token corresponding to the image block according to the component. The image patch that has more low-pass components will get a higher weight. After that, we combine the attention values to the final weights of each image patch. The greater the weight of the image patch is, the more likely the patch will be masked. | To tackle the aforementioned issues, it is natural to utilize the attention map as guidance for masking during the pre-training phase, which achieves effective semantic information extraction. However, the attention map alone cannot sufficiently help models focus on objects of interest, where some salient features might be either overly focused or ignored. | D |
The ensemble of forecasts from a physics-based model (e.g., NCEP-CFSv2 or NASA-GMAO) contains information salient to precipitation and temperature forecasting besides their mean, and ML models that leverage the full ensemble generally outperform methods that rely on the ensemble mean alone (Section 8.1). | The target variable $y$ is observed from 1985 to 2020. Data from January 1985 to September 2005 are used for training (249 time steps), and data from October 2005 to December 2010 are used for validation and model selection (63 time steps). Data from 2011 to 2020 (or from 2011 to 2018 in the case of NASA-GMAO data) are used to test our methods after all model development, selection, and parameter tuning are completed. | The NCEP-CFSv2 model has two different products available in the NMME archive: we use its hindcasts from 1982 to 2010 for training and validation of our models, and we use its forecasts from April 2011 to December 2020 for the final evaluation of our models. | Finally, we emphasize that the final validation of our approach was conducted on data from 2011 to 2020 that was not used during any of the training, model development, parameter tuning, or model selection steps. We only conducted our final assessment of the predictive skill for 2011 to 2020 after we had completed all other aspects of this manuscript. Because of this, our final empirical results accurately reflect the anticipated performance of our methods on new data. | validation (i.e., model selection and hyperparameter tuning). Test data spanning 2011 to 2020 was not viewed at any point of the model development and training process and only used to evaluate the predictive skill of our trained models on previously unseen data; we refer to this period as the “test period”. | C
The hyperparameter to increase the model regularization effect is $\lambda$. | Meanwhile, the discriminator evaluates the credibility of the generated images based on the input images and the reference target images (the Ground-Truth images). | In the generator, the distances between the generated images and the Ground-Truth images in each color channel are minimized by the loss function $\mathbb{L}_{G}$ in Eq. (5). | $\mathbb{L}_{G}=\mathbb{E}_{x}[D(G(x),x)]+\lambda\,\mathbb{E}_{G(x),y}[y\log G(x)+(1-y)\log(1-G(x))]$ | Generator ($G$) generates the input images as the generated images $G(x)$. | D
MCMC is the Markov Chain Monte Carlo method | The GitHub dataset is from 1/1/2015 to 8/30/2017 including 2 million users and 13 million projects. We selected the 100 most popular repositories because we aimed to characterize top GitHub repositories, and the impact of influencers on the popularity of repositories using the TTERGM model. The number of repositories is a hyperparameter which can be adjusted, depending on the goal of the model. We selected the top 10 users, in terms of number of followers, as the influencers in this study. This threshold was chosen because the number of followers drops off sharply after that point, but can be chosen arbitrarily for different datasets. The data was acquired using the API from GitHub [Gousios and Spinellis, 2012]. The API can be used to stream GitHub repository interactions with customized formats. Optionally, meta data from user relation events can be retrieved as well. The dataset contains 14 types of events which are listed in Table 1. | Influencer-follower networks $N_{t}$ were identified and constructed by connecting users by events in Table 1. In the subsequent module, temporal features and triadic features listed in Table 2 were extracted from the influencer-follower networks. Network characteristics were then estimated using the Markov chain Monte Carlo (MCMC) method. In the pattern analysis module, the TTERGM model was applied to model the data. The general form of TTERGM can be written as in Equation (3). $P(N)$ represents the probability of a given network architecture, $N$; $Z$ represents the normalization constant as done in classic TERGM models; $\gamma$ is a vector of network characteristics (e.g., number of edges, triangles, 2-stars), $t$ represents the temporal sequence of network observations, and $\sigma$ is the vector of social-theory driven temporal network characteristics such as homophily, transitivity, reciprocity, etc. To fit the TTERGM model to the data, algorithm 1 is used to initialize and traverse each node in the network to construct the generative influencer-follower networks. Finally, we used these observed features to simulate real-world influencer-follower networks and compared the predictive performance of TTERGM with the classic TERGM model and the Block Model using left-out validation data. | We implemented a social-theory driven temporal exponential random graph model to infer the temporal pattern of edge formation and elimination in complex networks (e.g., social networks), and examine the effect of influencers and triadic relationships on predicting future network dynamics. When popular repositories are formed or influencers act, the structure of the social network alters, affecting network metrics. The TTERGM technique builds upon previous statistical models by incorporating information flow across hierarchical configuration features. We represent social network learning theory as an additional probability distribution that optimizes Markov chains in the graph vector space. The new parameters are then approximated via Monte Carlo maximum likelihood estimation. The TTERGM model is capable of reproducing the dynamics observed empirically in large-scale social network data, and produced more accurate predictions on left-out data compared to the classic TERGM and block models.
However, the TTERGM model imposes additional computational burden during parameter estimation, which may hinder its ability to scale to larger datasets. Future work may include expanding this approach to model the influence of more "distant" users in the network, or those that do not directly follow an influencer. | $P(N)$ represents the probability of the network $N$; $Z$ represents the normalization constant that is usually difficult to compute; $\gamma$ is the vector of network characteristics such as number of edges, triangles, 2-stars, etc. $t$ represents the sequence of network observations; $\sigma$ is the vector of social-theory driven temporal network characteristics such as homophily, transitivity, reciprocity, etc. Compared to ERGMs, TERGMs are able to model the distribution on time series data (either embedded in the network or in separate timesteps), hence certain temporal patterns can be captured and reflected in the parameter values for phenomenon interpretation. Examples of these dynamic patterns of triadic effects in influencer networks are shown in Figure 1. TERGM provides significant advantages over ERGMs, since certain static patterns can be enriched in higher dimensions when the sequence order is considered. TERGMs are also capable of modeling observed friendship networks with bootstrap methods estimated by maximum pseudolikelihood [Leifeld et al., 2018], or networks of infectious disease transmission using statistical methods in network analysis [Jenness et al., 2018]. The flexibility of TERGMs makes it possible to adapt to a variety of input data types, such as cross-sectional or longitudinal data [Henry et al., 2016][Block et al., 2018]. | A
We call $\psi^{\pi}$ in (2.11) the advantage function. | It is well-known that any differentiable function with Lipschitz continuous gradients | It is different from the advantage function used in the RL literature in that it deals with deterministic policies and | different sense than PMD) depending on the curvature of the action-value function. | (rather than $Q^{\pi_{k}}(s,a)$) used to define the objective function in (3.15). | B
For $|\mathcal{A}|=4$ and $s=3$ a code of size $31$ is given by | $\{00000,00111,00222,01012,01120,02201,10021,$ | $\tau(c_{2})$ | $\{00000,32222,21202,31010,31133,22311,20323,01320,03131,30120,$ | $\{c^{\prime}\in C\mid\exists c_{1}\in\mathcal{A}^{t};\,d(c^{\prime},(c_{1},c_{2}))=t\})$ | C
Traditional model training leverages the GPU for its high parallel computation capability. When performing on-device training, there normally are no powerful GPUs, but auxiliary hardware is still available for workload offloading. For example, DSPs are particularly suitable for integer operations and are ubiquitously available on modern SoCs. Mandheling leverages a DSP to enable model training on smartphones. | The rest of the paper is organized as follows. Section 2 provides an overview of current on-device training systems and the target devices of these systems. We present an analysis from the Machine Learning perspective in Section 3, and Section 4 presents optimization techniques to enable on-device model training. Section 5 discusses the results reported by current SOTA systems. Before presenting the conclusions, Section 6 elaborates on the methods and ideas presented in the current SOTA system works for on-device training. Finally, Section 7 discusses potential future directions and concludes this article. | The efficiency of DNN training is a critical area of research, given the extensive hardware resources (both memory and computation) required for such tasks. This section discusses the prevalent techniques for high-efficiency DNN training and the distinctions between them and on-device training. Specifically, the efficiency metrics for DL consist of two representative types: computation-related and memory-related metrics. Hence, general speeding-up techniques for training aim to optimize either one or both of these. Concretely, these approaches aim to reduce training time and resource usage while maintaining (or even improving) model performance. In contrast, while on-device training systems generally share similar rationale and goals, they face an additional layer of constraints, including energy, memory, and computation constraints, which pose additional challenges for on-device training. | This section presents the performance gains of the surveyed systems. Overall, researchers use one or several of the following four metrics to measure their systems: accuracy improvement, memory usage reduction, energy consumption reduction, and training acceleration. Section 5.1 summarizes the baselines against which the systems compare their performance. Section 5.2 describes the accuracy improvement brought by these on-device training methods. One of the goals of on-device training is to keep improving the accuracy of the deployed model. Therefore, this is the key outcome of on-device training. Section 5.3 presents the optimized memory usage of the different systems. Limited memory is the main bottleneck of on-device training systems. Hence, this is one of the most important metrics for evaluating the system. Section 5.4 shows the energy consumption of model training, which is another concern for IoT devices that are normally battery-powered. Section 5.5 describes the training speed improvement, which is another important evaluation metric for on-device training. The training tasks are expected to be completed within a reasonable time. Table 4 summarizes the performance gain of the discussed on-device training systems. | This approach refers to applying various techniques to reduce resource consumption during training, which consists of two directions. The first one is using memory consumption optimization techniques to reduce peak memory consumption. 
Researchers exploring this direction conclude that memory is the main bottleneck for on-device training due to backpropagation causing a large number of intermediate variables. Hence, several works, including POET, Melon, and Sage, incorporate memory optimization techniques, such as paging and materialization (Section 2.5). The second direction is to reduce resource consumption from the model perspective. For example, model quantization techniques can significantly reduce the size of the model and, as a result, decrease the resource requirement. However, quantized models are more challenging to train naively due to the reduced precision (e.g., from float32 to int8). Minilearn and TTE, respectively, present solutions for this problem to allow for effective on-device training (Section 2.4 and Section 2.5). In addition, selectively training the parameters of the model can also reduce the training time and resource consumption. Unfortunately, this leads to limited generalization ability. TinyTL addresses this issue by designing a lighter residual model (Section 2.4). | C
The purpose of wash trading is to influence a specific asset’s pricing or trading activities, generating interest in the asset and attracting external traders. | One of these manipulations is called wash trading. In the standard stock market, this has been extensively studied in different works [cao2014detecting, cao2015detecting, palshikar2008collusion, franke2008analysis], but only recently has this manipulation attracted attention in the crypto market. | This practice has been illegal for almost a century in the U.S. stock markets since the federal Commodity Exchange Act in 1936 [commodityact]. | However, where money flows, malicious actors enter the scene and try to exploit the technology to their advantage. Indeed, due to the unregulated nature of blockchains, market manipulations, illegal on the traditional stock market, are common in the crypto market. | We find that 7,564 out of 11,690 (64.7%) activities in the three marketplaces are not followed by a transaction that sells the NFT to an external entity. In some cases, the NFT has been transferred for zero crypto, so these are probably just internal movements between accounts of the same owner. | B
$\rotatebox[origin={c}]{180.0}{$\Lambda$}\;\{\mathsf{NF}\}^{*}$ | $\rotatebox[origin={c}]{180.0}{$\Lambda$}\;\{\Lambda\}^{*}$ | $\lambda\rotatebox[origin={c}]{180.0}{$\Lambda$}.\Lambda$ | $\lambda\rotatebox[origin={c}]{180.0}{$\Lambda$}.\Lambda$ | $\rotatebox[origin={c}]{180.0}{$\Lambda$}\ \Lambda\ \{\Lambda\}^{*}$ | B
If we remove the penalty associated with each miscoordination, i.e., $p=0$, the corresponding task is likely to be free from RO. | Inspired by this, to create a curriculum, we change only the reward function of the target task to control the probability of RO occurring, without modifying any other components (e.g., the state space and action space) of the target task Dec-POMDP. | In buffer transfer, after training the agent on the source tasks, the state-action-reward experience tuples stored in the replay buffer are used to initialize the replay buffer for the target task. | In CURO, to create a curriculum (i.e., a sequence of source tasks with increasing difficulty), we fine-tune the reward function of the target task to control the probability of RO occurring, in order to find suitable source tasks that are as similar as possible to the target task, while not exhibiting a strong RO problem. | More specifically, to determine the reward function of the source tasks, we generate a sequence of candidate reward functions by reducing the magnitude of the miscoordination penalty term in the reward function of the target task, while keeping other components of the reward function the same. $<R_{1},R_{2},\dots,R_{n}>$ is the resulting sequence of candidate reward functions, which are ordered from the smallest miscoordination penalty term to the largest one. | A
PTB-XL was labelled by 12 nurses, naturally forming data sources. However, since this data is high quality, synthetic noise is required. We add Gaussian noise to sources’ ECG recordings (simulating electromagnetic interference as in Wong et al. [35]) and label flipping to simulate human error in labelling. This also allows us to test the setting with multiple noise types. | Figure 2: Effect of the introduced parameters on training. Section 3 introduces three parameters that control the effects of LAP. $1-d_{s}$ is multiplied by the gradient (equivalently, loss) contribution from a given source before the model is updated. Here, we show these values for each source (the different coloured lines) during model training on synthetic data (Appendix A.5). Unless stated in the title of a given plot, the parameters of LAP were set to $H=25$, $\delta=1.0$, $\lambda=1.0$. We had 5 sources with noise levels of 0.0, 0.025, 0.05, 0.25, and 1.0 (a darker colour indicates a higher noise rate). | Figure 3: LAP performance with a varied number of sources. In 3(a) we show the area under the precision-recall curve for standard training and using LAP on PTB-XL with label noise and simulated ECG interference noise for 12 total sources. In 3(b) we show the accuracy on CIFAR-10N with real human labelling noise when using RRL and RRL + LAP, with 10 sources. In both, the noise of the sources increases linearly from 25% to 100% for each number of noisy sources. The lines and error bands represent the mean and standard deviation of the maximum value for each of the 5 repeats. These figures illustrate that LAP maintains higher performance as noise rates increase. | Although CIFAR-10N contains real-world noise, we must still split the data into sources. For each experiment, we assign sources so as to evaluate varied levels of noise and numbers of noisy sources. As is done for PTB-XL, we linearly increase the number of noisy sources from 1 to 7 (out of 10 in total), and for each set of noisy sources we linearly increase the noise level from 0.25 to 1.0. | Here, data from sources are upsampled so that each source contains the same number of observations. For experiments with PTB-XL, we linearly increase the number of noisy sources from 1 to 8 (out of 12 in total), and for each number of noisy sources we set the noise level for each source linearly from 0.25 to 1.0. For example, when training with 4 noisy sources, sources have noise levels of 0.25, 0.50, 0.75, and 1.0. | D
Statistical agencies and other custodians of secure facilities such as trusted research environments (TREs) [1] provide researchers with access to confidential data under the ‘Five Safes’ data governance framework [2]. This enforces five orthogonal layers of safety procedures with the last requiring explicit checking of research outputs for disclosure risk. This can be a time-consuming and costly task, requiring skilled staff. This article discusses the development of a free and open source tool for automating the statistical disclosure control (SDC) of routine research outputs such as tables, plots, and statistical models. The goal is to make the clearance process more efficient and timely, and to allow the skilled checkers to focus their attention on the less straightforward cases. | DataSHIELD (https://datashield.org) is an infrastructure and suite of R packages [20] that aims to enable remote and non-disclosive analysis of distributed sensitive research data while avoiding the normal practice of human involvement in output checking. It operates under the principle that researchers never see or have access to the underlying data, even for manipulation. This is achieved by providing a set of restricted bespoke commands for performing data manipulation and querying. This approach implements rules-based rather than principles-based SDC and some rules (e.g., allowing min/max values, and not suppressing table cells with zero counts) differ significantly from the standard practice as described in the SDC handbook [10]. | More specifically, ACRO (automatic checking of research outputs) assists researchers and output checkers by distinguishing between research output that is safe to publish, output that requires further analysis, and output that cannot be published because of substantial disclosure risk. This is achieved through the use of Python’s capacity to override standard commands for creating tables, regressions, and other queries. This keeps the syntax identical while simultaneously augmenting analysis commands by running and reporting appropriate disclosure risk assessments. A schematic illustration of the ACRO workflow is shown in Figure LABEL:fig:schematic. | Statistical agencies and other custodians of secure facilities such as trusted research environments (TREs) [1] provide researchers with access to confidential data under the ‘Five Safes’ data governance framework [2]. This enforces five orthogonal layers of safety procedures with the last requiring explicit checking of research outputs for disclosure risk. This can be a time-consuming and costly task, requiring skilled staff. This article discusses the development of a free and open source tool for automating the statistical disclosure control (SDC) of routine research outputs such as tables, plots, and statistical models. The goal is to make the clearance process more efficient and timely, and to allow the skilled checkers to focus their attention on the less straightforward cases. | Transparent: applying disclosure mitigation where required, but with clear and rapid explanations that enable (i) researchers to understand and revise their requests; and (ii) an auditable record for TRE staff with simple summary reports to streamline their workflow; ACRO provides researchers with immediate feedback by displaying the results of both the checks and the output to the researcher, with the option for automatic suppression where appropriate. 
Details of the queries and results are stored in a list, which may subsequently be written to file for review by a human output checker. ACRO gives researchers control over the outputs that are submitted for review, e.g., the removal of unwanted outputs and choice of output format; currently JSON [5] or Microsoft Excel®. | B |
The coefficient of $S_{\delta}(F)$ in the term $x^{\varepsilon}$ is called the leading coefficient of $S_{\delta}(F)$, denoted by $s_{\delta}(F)$. If no ambiguity occurs, we can write $S_{\delta}(F)$ as $S_{\delta}$ for simplicity. | The above rational expression for $S_{k}$ in roots should be interpreted as follows; otherwise, the denominator will vanish when $A$ is not squarefree. | Convert the $\delta$th subresultant polynomial in Newton basis from a rational expression in roots to a determinant expression in coefficients (see Lemma 26). To achieve the goal, we use the companion matrix of a polynomial in Newton basis as the bridge to connect the expression in roots and that in coefficients together. | It is emphasized that the rational expression (4) in Definition 7 should be interpreted the same way as that for interpreting Equation (3). | It is noted that the rational expression (7) in Lemma 23 should be interpreted the same way as that for interpreting Equation (3) and the rational expression (4) for $S_{\delta}$ in | C
Table 3: Number of prompts for the different tasks: prompts after step 0 (creating prompts manually), prompts after step 1 (GPT3 paraphrasing), and prompts after step 2 (backtranslation). | In this section, we describe our method for automatically expanding a seed set of manually created prompts using paraphrasing. | Table 1: Example prompts for the task AG News (news classification) that vary considerably in accuracy. | Table 4 lists all 4 manually created prompts we use for the AG News task (news classification), alongside a few sampled prompts created automatically using our method. As was typically the case, we are able to get prompts that are rather different in phrasing and structure from those included in the seed set. | Table 4: Prompts for the task AG News (news classification): the manually created prompts and a sample of automatically created prompts using our method. | D |
We compare GT-CausIn with various popular traffic forecasting models in recent years, including (1) DCRNN: Diffusion Convolutional Recurrent Neural Network [10]; (2) Graph Wavenet [33], which develops an adaptive dependency matrix based on node embedding; (3) GMAN: Graph Multi-Attention Network for Traffic Prediction [11], of which the main structure is an encoder and a decoder both with several attention blocks; (4) ST-GRAT: Spatio-temporal Graph Attention Networks for Accurately Forecasting Dynamically Changing Road Speed [34], which includes spatial attention, temporal attention, and spatial sentinel vectors; (5) SLCNN: Structure Learning Convolution Neural Network [35], which learns the graph structure information with convolutional methods; (6) DGCRN: Dynamic Graph Convolutional Recurrent Network (DGCRN) [36], which filters node embedding to generate a dynamic graph at each time step; (7) DMSTGCN: Dynamic and Multi-faceted Spatio-temporal Deep Learning for Traffic Speed Forecasting [37], which provides a multi-faceted fusion module to incorporate the hidden states learned at different stages; (8) PGCN: Progressive Graph Convolutional Networks for Spatial-temporal Traffic Forecasting [23], which constructs progressive adjacency matrices by learning in training and test phases. | Our approach is different from all the approaches above: we first use a causal discovery program to discover a general rule between node neighbors of different orders, then build a model named GT-CausIn to enhance prediction performance. Although the causality relation is discovered with the dataset PEMS-BAY, it generalizes well to the other dataset, METR-LA. Besides, the proposed causal insight layer is integrated with directed graph diffusion layers to model the spatial dependency. TCN layers and skip connections are further used to discover the temporal dependency. | In this paper, we implemented knowledge learned from a causal discovery program with deep learning models and proposed Graph Spatial-Temporal Network Based on Causal Insight to capture spatiotemporal dependencies. Specifically, we serialize graph diffusion layers and TCN layers to capture dependencies between spatial flow and temporal flow, and we also use skip connections to guarantee information propagation for long sequences. We creatively design a causal insight layer that focuses on stations, their first-order in-neighbors, and their first-order out-neighbors. When evaluating our model on two real-world traffic datasets, we achieved significantly better overall prediction than baselines. We also conducted ablation studies to show the effectiveness of causal knowledge and the influence of the stacking layer number. In the future, we plan to (1) apply the proposed model to other spatial-temporal forecasting tasks; (2) investigate scalable methods to apply to large-scale datasets. | The causal structure discovery analyzes the causal relation between different variables. In this work, we adopt ICD [18] as our causal discovery program. The input of ICD is a set of variable values, and the output is the graph adjacency matrix, in which each element represents the importance of the causal relation between variables. The algorithm is only implemented on the PEMS-BAY dataset since there are far more missing values in METR-LA (8.11%) than in PEMS-BAY (0.003%). The dataset will be further described in Section V-A. 
| We present an effective and efficient framework to capture spatial-temporal dependencies, which is named Graph Spatial-Temporal Network Based on Causal Insight (GT-CausIn). The core idea is to assemble causal insight, spatial dependency modeling, and temporal dependency modeling in a way that information can flexibly flow between different perspectives at different scales. | A |
The evaluation results using Set 3 of models trained on Set 1 or Set 2 are shown in Table 1. Our model was ranked second and achieved an AP score of 70.9% and an AUROC score of 89%. Hevi AI’s model achieved the highest AP score, 2.3% higher than ours, while the AUROC score is 0.2% lower than ours. Swangeese’s model achieved the highest AUROC score, 2.8% higher than ours, while the AP score is 6% lower than ours. All the methods developed using Set 2 (semi-supervised setting) performed better than the baseline models trained using only Set 1 (supervised setting), both for lesion-level detection and patient-level diagnosis. | The test results using Set 4 of models trained on Set 1 or Set 2 are shown in Table 2. Our model achieved the highest score compared to other methods, with the highest AP score of 63.3% and AUROC score of 88.1%. Our AUROC score is 0.8% lower than DataScientX’s and Hevi AI’s. All the methods developed using Set 2 showed better performance than the baseline models developed using Set 1, both for lesion-level detection and patient-level diagnosis. We visualized the lesion detection results of the baseline methods, the method developed by Kan et al. [37], and our Z-SSMNet trained on Set 2 in Fig. 3. The lesion-level PR and FROC curves and the patient-level ROC curve are presented in Appendix C. The visual results demonstrate the superior detection performance of our model. | Table 1: Evaluation results on Set 3. The models were trained on Set 1 or Set 2. | The evaluation results using Set 3 of models trained on Set 1 or Set 2 are shown in Table 1. Our model was ranked second and achieved an AP score of 70.9% and an AUROC score of 89%. Hevi AI’s model achieved the highest AP score, 2.3% higher than ours, while the AUROC score is 0.2% lower than ours. Swangeese’s model achieved the highest AUROC score, 2.8% higher than ours, while the AP score is 6% lower than ours. All the methods developed using Set 2 (semi-supervised setting) performed better than the baseline models trained using only Set 1 (supervised setting), both for lesion-level detection and patient-level diagnosis. | The separate test results provided by the PI-CAI challenge organizers in the Closed Testing Phase are shown in Table 3, where the models were trained on Set 5 and tested on Set 4. Our Z-SSMNet was ranked second, with an AP score 0.3% lower and an AUROC score 2.5% lower than the model of DataScientX. Here, nnDetection trained on the datasets with only manually labeled data performed better than the other settings of baseline methods. | B
The experimental results clearly demonstrate that the proposed TFA method achieves significant improvements in image-specific attack success rates, as well as universal and generalization performance. | This work proposes a new targeted attack method for object detection to mislead detectors to detect extra designated objects with specific target labels. We further design a novel attention-based feature space attack method (TFA) that drives the extracted features of the victim images towards the target objects’ features in the guided images, thus achieving the targeted attack of a specific class. | Generally, adversarial attacks for object detection can be grouped into untargeted attacks and targeted attacks. The former aims to mislead the detectors to predict objects as other arbitrary labels or no labels [11, 12], while the latter is used to fool the detectors into predicting certain specific wrong labels [13, 14]. Compared with the untargeted attack, the targeted attack for object detection is a more challenging task and there are relatively few methods designed for it. As shown in Fig. 1(a), current targeted attack methods typically attempt to deceive detectors by mislabeling an existing object as a specific wrong label. They add human-imperceptible perturbations to the victim images to change the features of the victim objects, consequently misleading the detectors into predicting the victim object as the designated incorrect label. Since the existing methods must rely on the detected objects to launch the attack, they have several limitations. | Compared with the existing targeted attack methods which mislabel detected objects as specific wrong labels, we propose a more flexible and effective attack method that can mislead detectors to ‘fabricate’ extra designated objects with specific target labels. | We can see that all the existing targeted attack methods launch the attack by mislabeling the detected objects. | C
Essentially, a TADD is a two-approximation of the set of all pairwise distances in $P\times Q$ and it can be used to determine, approximately, the (Fréchet) distance values for when the simplification of the input curve changes, or when the reachability of the free space matrix changes. | It is known how to compute a TADD from a Well-Separated Pair Decomposition (WSPD) in time linear in the size of the WSPD [22, Lemma 3.8]. | We can transform the decision variant of the Fréchet distance to the optimization variant, by using only a well-separated pair decomposition of $P$ (mapped to $\mathbb{R}^{1}$) with itself. | Using the 1:1 correspondence between a 1-dimensional WSPD and a TADD, we maintain a TADD of $M$ which is a translated version of $P^{\prime}$. | A downside of this approach [22] is that it is only known how to compute a WSPD for doubling metrics [48]. Moreover, for non-constant (doubling) dimensions $d$, computing the WSPD (and therefore the TADD) takes $O(2^{d}n+dn\log n)$ time [34, 48], which dominates the running time. | A
LIME Ribeiro et al. (2016) and its multimodal adaptation DIME Lyu et al. (2022) approximate the vicinity of the input with a linear function that is interpretable. But depending on the choice of the size of the vicinity, LIME can lead to very disparate results. Methods like RISE Petsiuk et al. (2018) and SHAP Lundberg and Lee (2017) compute importance scores by randomly masking parts of the input and determining the effect this has on the output. SHAP exhibits great theoretical properties that enable us to define a MM score, as | agrees that VL models are not as cross-modal as expected – but disagree on whether models rely | As a community, we are interested in improving model performance, and thus need to evaluate models using performance metrics such as accuracy. But | An accuracy-based MM score is limited when model performance on a task is very low, since the differences between a model’s accuracy with correct vs. permuted inputs are small in such cases | Our analyses show that degrees of MM contributions can be orthogonal to task performance, supporting the need for performance-agnostic metrics. | B |
For fine-tuning, we train models using MSMARCO (https://huggingface.co/datasets/sentence-transformers/embedding-training-data/blob/main/msmarco-triplets.jsonl.gz) for 10k steps with a batch size of 1,024, pairing each example with one hard negative example (1 positive + 2,047 negative), using a learning rate of $1e^{-5}$. | To assess the effectiveness of the proposed augmentation methods as pretraining measures, we present the fine-tuned results on MS MARCO in Table 3. We use basic fine-tuning settings without employing advanced techniques such as negative mining (contriever) or asynchronous index refresh (ance). We compare with multiple baselines reported in BEIR (beir) (BM25, DPR (dpr), ANCE, ColBERT (colbert)) and we fine-tune the other pretrained models under the same setting (Spider, LaPraDor, Condenser (condenser), CoCondenser (cocondenser), and Contriever). | Spar $\Lambda$ (spar): The dense lexical model $\Lambda$ is trained with questions or random sentences as queries, paired with the top K passages retrieved by BM25. | For a fair comparison, we fine-tune Spar $\Lambda$ using the query-encoder only. For all reproduced fine-tuning, we use pooling and vector normalization consistent with the way they were used in pretraining. | We present the main unsupervised results in Table 1 and will discuss certain details in Sec 4.5. Among all unsupervised baselines, BM25 still outperforms the other baselines by a significant margin. For dense retrievers, the lexical-oriented retriever Spar $\Lambda$ performs the best on BEIR14 and SQ&EQ, indicating that dense retrievers can achieve robust retrieval performance through a lexical teacher. Contriever performs comparably with Spar $\Lambda$ on BEIR. | C
KoPA [189] first pre-trains structural embeddings for the KGs (i.e., entities and relations) and then employs instruction tuning to fine-tune the LLM. | KoPA [189] first pre-trains structural embeddings for the KGs (i.e., entities and relations) and then employs instruction tuning to fine-tune the LLM. | KSL [188] retrieves a relevant knowledge subgraph and encodes it into a text prompt for further instruction tuning of LLMs. | Unlike GraphRAG, which does not involve any training, G-Retriever trains a GNN model to align the outputs of the GNN with text tokens. Specifically, G-Retriever first transforms nodes and edges into representations using a pretrained LM. It then uses the same LM to generate a query representation, which is employed to retrieve relevant nodes and edges, constructing a subgraph. G-Retriever applies a GAT to obtain the graph token and projects this token into the same vector space as the first layer of a frozen LLM. This alignment allows G-Retriever to effectively train the GNN to synchronize graph and text representations, leveraging the LLM’s capabilities to perform a variety of graph-related downstream tasks. | GraphGPT [53] adopts a similar approach. It first aligns graph tokens (i.e., nodes and their neighbors) and language tokens (i.e., instructions and node text). Then it constructs instructions consisting of graph tokens and human questions for self-supervised instruction tuning and task-specific instruction tuning. | D
$\boldsymbol{I}_{i,c,p}=\boldsymbol{J}_{c,p}\,e^{-\beta_{c}\boldsymbol{z}_{i,p}}+B_{c}(1-e^{-\gamma_{c}\boldsymbol{z}_{i,p}}).$ | Parameters of Eq. 6 are then estimated by fitting the model in a least squares manner: | Table 3 investigates the impact of using multiple views when estimating different parts of the complete model. The first row shows Sea-thru* results, where both the UIFM parameters ($\beta$, $B$ and $\gamma$) and the restored image $\boldsymbol{J}$ are estimated using a single image. In the second row, the UIFM parameters are fixed to those obtained with Sea-thru*, yet the restored image is obtained using multi-view observations to minimize the same error as SUCRe (Eq. 7). The last row shows SUCRe results, where all parameters are estimated in a multi-view setting. Results show that SSIM improves mainly through the recovery of low contrast areas when $\boldsymbol{J}$ is estimated using multiple views, while PSNR values indicate that using multi-view observations for estimating both the UIFM parameters and the restored image is required to obtain the peak performance exhibited by our approach. | is a state-of-the-art underwater image color restoration method that relies on images in a raw file format and their corresponding distance maps [2]. It focuses on inverting the UIFM described by Eq. 2. Given that the distance maps are generated using SfM, there is potential to leverage the scene’s 3D information in order to constrain the estimation of the parameters in Eq. 2. | In Appendix A, we offer insights that justify the selection of the least squares estimator for estimating the model’s parameters. | A
$11.80_{\pm 0.04}$ | $29.56_{\pm 0.06}$ | $4.56_{\pm 0.04}$ | $56.9\%_{p=1e-5}$ | $56.0\%_{p=2e-4}$ | A
Results for all models on the adjective-noun binding task, with the training epoch chosen by performance on the validation set. We report the average accuracy for all the methods on 5 random seeds and the standard error. | The correct label for the image is an adjective-noun label. Four distractors are sampled from the other possible adjective-noun combinations. | Here, Adj means the model predicted the adjective incorrectly but the noun correctly; Noun means the opposite error; and Both means the model predicted neither the adjective nor the noun correctly. | Percentages assigned to each type of error for the single-object color task, generalization split. Here, Adj means the model predicted the adjective incorrectly but the noun correctly; Noun means the opposite error; and Both means the model predicted neither the adjective nor the noun correctly. We report the average error proportions for all the methods on 5 random seeds and the standard error. | CLIP-FT, however, predicts the adjective (color) correctly but gets the noun wrong. | D
One of the main issues with Replay is the requirement for additional memory, which is called replay memory. | However, as noted before, the use of replay memory could be too heavy a burden for resource-constrained devices or when the consumed memory is a major constraint. | This can be a strong constraint, especially for resource-constrained devices that need to work with limited memory and processing power. | This can be very useful because it eliminates the need to save any data in memory. | One issue with Replay is the use of external memory, which could be too heavy for resource-constrained devices. | B
The simulations are conducted in the setting of IRT with multiple ray interactions (max. 2), with 10 m tiling length of the building elements. | All simulations were saved with a resolution of 1 m per pixel, as .png images. Furthermore, the data describing the simulation settings, i.e., the transmitter locations, city maps, and cars (when applicable), are provided as images, together with their corresponding coordinates/shapes (as polygons) in .json files. The roads are saved as images and as polygonal lines. | The same (as in the previous 2D cases) 701 $256\times 256$ city maps are used. Each building in a city map is assigned a height that lies between 2 and 6 stories, where a story is taken as 3.3 m. This range of 13.2 m (from the minimum of 6.6 m to the maximum of 19.8 m) is divided into 255 equal-length levels, and building heights are found by picking one of these levels uniformly. This data is provided in two image sets, one as black and white (BW) images of the pixels occupied by buildings, and one with their encoded height as gray levels. As in the previous datasets, the corresponding polygons (2.5D) in .json format are provided. | In this section, we present datasets in the 2D setting, i.e., Tx deployed at the street level, 1.5 m above ground, having the same height as the $256\times 256$ receiver pixels. | In Fig. 3, we show an example use of the presented dataset, where we trained RadioUNet with two versions of input: In the first one (naive), only the black-and-white 2D images of the buildings (city map) and Tx were used as input features, whereas in the second case, the height information of the buildings was used as additional input features, through an appropriate usage of the provided height-encoded city images. In particular, we have decomposed the 3D building maps into equal-length 2D horizontal slices, such that each slice corresponds to an interval $\left[\mathrm{H}_{\mathrm{slice,min}},\mathrm{H}_{\mathrm{slice,max}}\right]$ in the vertical direction, and the heights of the buildings are re-scaled in this interval, such that heights below and above this range acquire 0 and 1, respectively, while the intermediate values lie within $(0,1)$. We observed that 12 equal-length slices together with the BW (2D) city map and the Grid Anchor maps introduced in [13] yielded a good accuracy with RMSE 0.87 dB (cf. Fig. 3, right), outperforming the naive approach, which had resulted in an RMSE of 1.26 dB. | C
It should be noted that $\tilde{\mathbf{p}}_{i}$ and $\tilde{\mathbf{p}}_{\text{m},ij}$ are not related. | Then, two barrier functions used to ensure the $i$th robot keeps within the trapezoid virtual tube are defined as | Then, according to the single integrator model (1), the position of the $i$th robot is given by | In this work, the Lyapunov-like barrier function is used to represent the repulsive potential field. | Then, according to the nominal barrier function (7), the barrier function used to ensure collision avoidance among robots is defined as | D
For the Bregman distance, replacing $z=(x,y)$ by $(x^{*},y^{*})$, one can obtain the same sublinear convergence rate for the ergodic sequences. | The rest of the paper is organized as follows. In section 2, we give a brief introduction of the preconditioned deterministic DR and the proposed stochastic, over-relaxed, and preconditioned DR splitting method along with some necessary analysis tools. In section 3, we discuss the almost sure convergence of the proposed SRPDR splitting method to a fixed point using stochastic quasi-Fejér monotonicity [15] and Opial’s lemma [35]. In section 4, we show the sublinear convergence rate of ergodic sequences respecting the expectation of the restricted primal-dual gap and the primal error. In section 5, we extend these results to the stochastic relaxed and preconditioned DR method for quadratic-linear problems (henceforth abbreviated as SRPDRQ). In section 6, we present detailed numerical experiments for total generalized variation (TGV) regularized problems and binary classification with real datasets to show the competitive performance of the proposed stochastic and preconditioned DR methods. Finally, we give some conclusions in section 7. | Since taking the expectation of the supremum of the primal-dual gap concerning transitional variables is a very subtle issue [1], a detailed analysis is devoted to this part. Furthermore, we also show the SRPDR splitting method has an $\mathcal{O}(1/K)$ rate for the primal error when Lipschitz continuity is satisfied for the dual functions. Finally, we show the high efficiency of the proposed SRPDR compared to state-of-the-art methods including the stochastic primal-dual method [13] with some numerical experiments on synthetic and real datasets. | In the next theorem, we investigate the convergence rate of the expected primal error when Lipschitz smoothness is satisfied for $\mathcal{G}$. | We proposed a stochastic and relaxed PDR framework to solve general saddle-point problems for separable dual variables cases. We showed the almost sure convergence of SRPDR along with SRPDRQ. We gave the $\mathcal{O}(1/K)$ convergence rate of the ergodic iteration sequences regarding the restricted primal-dual gap functions and the primal errors. For SRPDR, we will consider local linear convergence involving metric subregularity as in [1] and design highly efficient preconditioners for more complicated operators in applications including the Radon transforms [29] for tomography. | C
A family of Copeland methods, $Copeland^{\alpha}$, was suggested by Faliszewski et al. [14]. | The score of candidate $c_{j}$ is | The difference is in tie-breaking the alternatives. The fraction of points each candidate receives in case of a tie between two candidates can be set to any rational number: $0\leq\alpha\leq 1$. The Copeland score of a candidate is thus the number of points it obtained plus the number of ties times $\alpha$. | Each candidate is compared to every other candidate. The candidate obtains one point when it is preferred over another candidate by the majority of the voters. Adding up the points yields the Copeland score of each candidate. | The unique number of rankings in each method was computed, which refers to the number of projects not tied in their order. | B
We propose wide-residual weighting units for SISR, which consist of WIRW and WCRW. They effectively mitigate the negative impact of intermediate feature loss by incorporating a wide residual mechanism. | The effectiveness of WDIB. Firstly, we analyze the internal components of WDIB in Table II, including our proposed WRDC (Case 1), SCF (Case 2), and $\otimes$ (adaptive multiplier, Case 4). By comparing case 1, case 2, and the baseline, we observe that our proposed WRDC module achieves slightly better performance than the baseline while saving approximately 13% of the number of parameters and 37% of the Multi-adds. The SCF module further improves the PSNR value by 0.18 dB with a parametric gain of less than 10K. The adaptive multiplier improves the PSNR by 0.04 dB compared to the baseline and does not introduce additional computational load or slow down the inference process. Overall, the integration of these submodules significantly enhances the performance. | To reduce the model size, many existing approaches have focused on designing efficient model structures, which include weight sharing [5], multi-scale structures [6], strategies for neural structure search [7], and grouped convolution [8]. However, existing approaches often overlook the loss of intermediate information caused by activation functions like ReLU. This issue has been demonstrated by MobileNetV2 [9], where the reduction in intermediate information as network depth increases can negatively affect the quality of image reconstruction. We propose the Feature Interaction Weighted Hybrid Network (FIWHN) to address this concern while maintaining lightweight models. Specifically, our Convolutional Neural Network (CNN) part incorporates wide-residual attention-weighted units, which consist of Wide Identical Residual Weight (WIRW) and Wide Convolutional Residual Weighting (WCRW). These units help compensate for the lost intermediate features by obtaining a broader feature map before applying the activation function. Additionally, we adopt Wide-residual Distillation Interaction Blocks (WDIB) with a lattice structure [10]. The WDIB includes two paired skip connections and adaptive combinations of wide residual blocks that utilize attention-based connection weights. As a result, we achieve a compact network with strong expressive power. The Wide-Residual Distillation Connection (WRDC) framework and the Self-Calibrating Fusion (SCF) unit facilitate the distillation and fusion of split features from different classes within the WDIB, thereby enhancing its generalization capability. Multiple WDIBs are combined to form a Feature Shuffle Weighted Group (FSWG), which leverages information from the middle layers at the group level through blending, fusion, and weighting of the output features from each WDIB. | We introduce WRDC, which enhances information flow by leapfrogging features at different levels within the WDIB. Additionally, we propose an SCF, allowing for a more precise and adaptive feature combination. | where $F_{SCF}$ represents the SCF module. Within the SCF module, the outputs of the upper and lower branches perform weighted connections. Subsequently, different levels of refinement are applied to the fused features, resulting in a diverse range of information being incorporated into the final fused features. 
The module adjusts its weights during training, leading to improved performance compared to standard fusion techniques. | C |
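The rows above follow the column schema shown at the top of the page: one context cell, four candidate option cells A, B, C, and D, and a letter label naming the correct option. As a minimal sketch of how rows with this shape can be loaded and resolved using the Hugging Face datasets library (the file name train.jsonl and the local JSON export are illustrative assumptions, not part of this page), consider:

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Hypothetical local export of rows with columns: context, A, B, C, D, label.
ds = load_dataset("json", data_files={"train": "train.jsonl"}, split="train")

OPTION_COLUMNS = ["A", "B", "C", "D"]

def resolve_gold(row):
    """Return the full text of the option selected by the letter in `label`."""
    assert row["label"] in OPTION_COLUMNS
    return row[row["label"]]

row = ds[0]
print("context:", row["context"][:80], "...")
print("gold option", row["label"], "->", resolve_gold(row)[:80], "...")
```

Storing only the letter in the label column and resolving it against the option columns, as above, avoids duplicating the answer text in each row.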