Dataset schema (one record per paper: a title, an abstract, and six binary topic labels):
- title: string (length 7 to 239)
- abstract: string (length 7 to 2.76k)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)
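A minimal sketch of loading and inspecting records with this schema, assuming the data is exported as a CSV file; the file name "arxiv_topics.csv" and the pandas workflow are illustrative assumptions, not part of this card:

```python
# Load the multi-label records and count papers tagged with several topics.
# "arxiv_topics.csv" is a hypothetical export of the table below.
import pandas as pd

LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_topics.csv")

# Each label column is a 0/1 flag; a paper may carry several topics at once,
# so this is a multi-label (not multi-class) classification dataset.
multi_topic = df[df[LABELS].sum(axis=1) > 1]
print(f"{len(multi_topic)} of {len(df)} records carry more than one topic")
```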
A critical analysis of string APIs: The case of Pharo
Most programming languages, besides C, provide a native abstraction for character strings, but string APIs vary widely in size, expressiveness, and subjective convenience across languages. In Pharo, while at first glance the API of the String class seems rich, it often feels cumbersome in practice; to improve its usability, we faced the challenge of assessing its design. However, we found hardly any guideline about design forces and how they structure the design space, and no comprehensive analysis of the expected string operations and their different variations. In this article, we first analyse the Pharo 4 String library, then contrast it with its Haskell, Java, Python, Ruby, and Rust counterparts. We harvest criteria to describe a string API, and reflect on features and design tensions. This analysis should help language designers in understanding the design space of strings, and will serve as a basis for a future redesign of the string library in Pharo.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Power Maxwell distribution: Statistical Properties, Estimation and Application
In this article, we propose a new probability distribution named the power Maxwell distribution (PMaD). It is an extension of the Maxwell distribution (MaD) that provides more flexibility for analyzing data with a non-monotone failure rate. Different statistical properties such as reliability characteristics, moments, quantiles, mean deviation, generating function, conditional moments, stochastic ordering, residual lifetime function and various entropy measures have been derived. The estimation of the parameters for the proposed probability distribution has been addressed by the maximum likelihood estimation method and the Bayes estimation method. The Bayes estimates are obtained under a gamma prior using the squared error loss function. Lastly, the real-life applicability of the proposed distribution is illustrated through different lifetime data sets.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Deep Graphs
We propose an algorithm for deep learning on networks and graphs. It relies on the notion that many graph algorithms, such as PageRank, Weisfeiler-Lehman, or Message Passing, can be expressed as iterative vertex updates. Unlike previous methods, which rely on the ingenuity of the designer, Deep Graphs are adaptive to the estimation problem. Training and deployment are both efficient, since the cost is $O(|E| + |V|)$, where $E$ and $V$ are the sets of edges and vertices respectively. In short, we learn the recurrent update functions rather than positing their specific functional form. This yields an algorithm that achieves excellent accuracy on both graph labeling and regression tasks.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
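The "Deep Graphs" abstract above treats algorithms like PageRank as iterative vertex updates costing $O(|E| + |V|)$ per sweep. A minimal sketch of classical PageRank in exactly that form (the textbook algorithm, not the paper's learned update functions):

```python
# PageRank as an iterative vertex update: each sweep costs O(|E| + |V|).
def pagerank(adj, num_iters=50, d=0.85):
    """adj maps every vertex to the list of vertices it links to."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(num_iters):
        new_rank = {v: (1.0 - d) / n for v in adj}
        for v, out in adj.items():
            if not out:                      # dangling vertex: spread uniformly
                for u in adj:
                    new_rank[u] += d * rank[v] / n
            else:
                for u in out:                # push rank along outgoing edges
                    new_rank[u] += d * rank[v] / len(out)
        rank = new_rank
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```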
On the optimal investment-consumption and life insurance selection problem with an external stochastic factor
In this paper, we study a stochastic optimal control problem with stochastic volatility. We prove the sufficient and necessary maximum principle for the proposed problem. Then we apply the results to solve an investment, consumption and life insurance problem with stochastic volatility, that is, we consider a wage earner who invests in one risk-free asset and one risky asset described by a jump-diffusion process, and who has to make decisions concerning consumption and life insurance purchase. We assume that the life insurance for the wage earner is bought from a market composed of $M>1$ life insurance companies offering pairwise distinct life insurance contracts. The goal is to maximize the expected utilities derived from the consumption, the legacy in the case of a premature death and the investor's terminal wealth.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
An Introduction to Adjoints and Output Error Estimation in Computational Fluid Dynamics
In recent years, the use of adjoint vectors in Computational Fluid Dynamics (CFD) has seen a dramatic rise. Their utility in numerous applications, including design optimization, data assimilation, and mesh adaptation has sparked the interest of both researchers and practitioners alike. In many of these fields, the concept of an adjoint is explained differently, with various notations and motivations employed. Further complicating matters is the existence of two seemingly different types of adjoints -- "continuous" and "discrete" -- as well as the more formal definition of adjoint operators employed in linear algebra and functional analysis. These issues can make the fundamental concept of an adjoint difficult to pin down. In these notes, we hope to clarify some of the ideas surrounding adjoint vectors and to provide a useful reference for both continuous and discrete adjoints alike. In particular, we focus on the use of adjoints within the context of output-based mesh adaptation, where the goal is to achieve accuracy in a particular quantity (or "output") of interest by performing targeted adaptation of the computational mesh. While this is our application of interest, the ideas discussed here apply directly to design optimization, data assimilation, and many other fields where adjoints are employed.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Incremental Sharpe and other performance ratios
We present a new methodology for computing the incremental contribution to portfolio performance ratios such as the Sharpe, Treynor, Calmar or Sterling ratios. Using Euler's homogeneous function theorem, we are able to decompose these performance ratios as a linear combination of individual modified performance ratios. This allows us to understand the drivers of these performance ratios as well as to derive a condition for a new asset to provide incremental performance for the portfolio. We provide various numerical examples of this performance ratio decomposition.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
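For reference, the identity from Euler's homogeneous function theorem that the "Incremental Sharpe" abstract invokes, with a standard degree-1 example (portfolio volatility); the paper's modified performance ratios are its own contribution and are not reproduced here:

```latex
% If f(\lambda w) = \lambda^{k} f(w) for all \lambda > 0, Euler's theorem gives
\sum_{i} w_{i}\,\frac{\partial f}{\partial w_{i}}(w) = k\, f(w).
% Degree-1 example: \sigma(w) = \sqrt{w^{\top}\Sigma w} satisfies
% \sigma(\lambda w) = \lambda\,\sigma(w), hence the per-asset decomposition
\sigma(w) = \sum_{i} w_{i}\,\frac{(\Sigma w)_{i}}{\sqrt{w^{\top}\Sigma w}}.
```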
Saliency Detection by Forward and Backward Cues in Deep-CNNs
As prior knowledge of objects or object features helps us make relations for similar objects on attentional tasks, pre-trained deep convolutional neural networks (CNNs) can be used to detect salient objects in images regardless of whether the object class is in the network's knowledge or not. In this paper, we propose a top-down saliency model using a CNN, a weakly supervised CNN model trained for a 1000-class object labelling task from RGB images. The model detects attentive regions based on their objectness scores predicted by selected features from CNNs. To estimate the salient objects effectively, we combine both forward and backward features, while demonstrating that partially-guided backpropagation provides sufficient information for selecting the features from a forward run of the CNN model. Finally, these top-down cues are enhanced with a state-of-the-art bottom-up model to complement the overall saliency. As the proposed model is an effective integration of forward and backward cues through objectness without any supervision or regression to ground truth data, it gives promising results compared to state-of-the-art models on two different datasets.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Active sorting of orbital angular momentum states of light with cascaded tunable resonators
Light carrying orbital angular momentum (OAM) has been shown to be of use in a disparate range of fields ranging from astronomy to optical trapping, and as a promising new dimension for multiplexing signals in optical communications and data storage. A challenge to many of these applications is a reliable and dynamic method that sorts incident OAM states without altering them. Here we report a wavelength-independent technique capable of dynamically filtering individual OAM states based on the resonant transmission of a tunable optical cavity. The cavity length is piezo-controlled to facilitate dynamic reconfiguration, and the sorting process leaves both the transmitted and reflected signals in their original states for subsequent processing. As a result, we also show that a reconfigurable sorting network can be constructed by cascading such optical resonators to handle multiple OAM states simultaneously. This approach to sorting OAM states is amenable to integration into optical communication networks and has implications in quantum optics, astronomy, optical data storage and optical trapping.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Factorization tests and algorithms arising from counting modular forms and automorphic representations
A theorem of Gekeler compares the number of non-isomorphic automorphic representations associated with the space of cusp forms of weight $k$ on $\Gamma_0(N)$ to a simpler function of $k$ and $N$, showing that the two are equal whenever $N$ is squarefree. We prove the converse of this theorem (with one small exception), thus providing a characterization of squarefree integers. We also establish a similar characterization of prime numbers in terms of the number of Hecke newforms of weight $k$ on $\Gamma_0(N)$. It follows that a hypothetical fast algorithm for computing the number of such automorphic representations for even a single weight $k$ would yield a fast test for whether $N$ is squarefree. We also show how to obtain bounds on the possible square divisors of a number $N$ that has been found to not be squarefree via this test, and we show how to probabilistically obtain the complete factorization of the squarefull part of $N$ from the number of such automorphic representations for two different weights. If in addition we have the number of such Hecke newforms for even a single weight $k$, then we show how to probabilistically factor $N$ entirely. All of these computations could be performed quickly in practice, given the number(s) of automorphic representations and modular forms as input.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
ADAPT: Zero-Shot Adaptive Policy Transfer for Stochastic Dynamical Systems
Model-free policy learning has enabled robust performance of complex tasks with relatively simple algorithms. However, this simplicity comes at the cost of requiring an oracle and arguably very poor sample complexity, which renders such methods unsuitable for physical systems. Variants of model-based methods address this problem through the use of simulators; however, this gives rise to the problem of policy transfer from the simulated to the physical system. Model mismatch due to systematic parameter shift and unmodelled dynamics errors may cause sub-optimal or unsafe behavior upon direct transfer. We introduce the Adaptive Policy Transfer for Stochastic Dynamics (ADAPT) algorithm that achieves provably safe and robust, dynamically feasible zero-shot transfer of RL policies to new domains with dynamics error. ADAPT combines the strengths of offline policy learning in a black-box source simulator with online tube-based MPC to attenuate bounded model mismatch between the source and target dynamics. ADAPT allows online transfer of a policy, trained solely in simulation offline, to a family of unknown targets without fine-tuning. We also formally show that (i) ADAPT guarantees state and control safety through state-action tubes under the assumption of Lipschitz continuity of the divergence in dynamics and (ii) ADAPT results in a bounded loss of reward accumulation relative to a policy trained and evaluated in the source environment. We evaluate ADAPT on two continuous, non-holonomic simulated dynamical systems with four different disturbance models, and find that ADAPT performs between 50% and 300% better on mean reward accrual than direct policy transfer.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Learning the Kernel for Classification and Regression
We investigate a series of kernel learning problems with polynomial combinations of base kernels, which help us solve regression and classification problems. We also perform numerical experiments with polynomial kernels on regression and classification tasks over different datasets.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
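A minimal sketch of the kind of polynomial combination of base kernels that the "Learning the Kernel" abstract refers to; the base kernels, the combination form, and the coefficients below are illustrative assumptions, as the abstract does not specify them:

```python
# Sums and entrywise products of positive semidefinite kernels are again
# positive semidefinite, so this polynomial combination is a valid kernel.
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear(X, Y):
    return X @ Y.T

def poly_combination(X, Y, coeffs=(0.5, 0.3, 0.2)):
    # K = c0*K_lin + c1*K_rbf + c2*(K_lin * K_rbf); coefficients illustrative.
    K_lin, K_rbf = linear(X, Y), rbf(X, Y)
    c0, c1, c2 = coeffs
    return c0 * K_lin + c1 * K_rbf + c2 * (K_lin * K_rbf)

# Usage with scikit-learn via a precomputed Gram matrix:
# from sklearn.svm import SVC
# clf = SVC(kernel="precomputed").fit(poly_combination(X_tr, X_tr), y_tr)
```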
Critical system involving fractional Laplacian
In this paper, we study the following critical system with fractional Laplacian: \begin{equation*} \begin{cases} (-\Delta)^{s}u= \mu_{1}|u|^{2^{\ast}-2}u+\frac{\alpha\gamma}{2^{\ast}}|u|^{\alpha-2}u|v|^{\beta} & \text{in } \mathbb{R}^{n}, \\ (-\Delta)^{s}v= \mu_{2}|v|^{2^{\ast}-2}v+\frac{\beta\gamma}{2^{\ast}}|u|^{\alpha}|v|^{\beta-2}v & \text{in } \mathbb{R}^{n}, \\ u,v\in D_{s}(\mathbb{R}^{n}). \end{cases} \end{equation*} By using the Nehari manifold, under proper conditions, we establish the existence and nonexistence of a positive least energy solution of the system.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Skin Lesion Classification Using Hybrid Deep Neural Networks
Skin cancer is one of the major types of cancers and its incidence has been increasing over the past decades. Skin lesions can arise from various dermatologic disorders and can be classified into various types according to their texture, structure, color and other morphological features. The accuracy of diagnosis of skin lesions, specifically the discrimination of benign and malignant lesions, is paramount to ensure appropriate patient treatment. Machine learning-based classification approaches are among the popular automatic methods for skin lesion classification. While there are many existing methods, convolutional neural networks (CNN) have been shown to be superior to other classical machine learning methods for object detection and classification tasks. In this work, a fully automatic computerized method is proposed, which employs well established pre-trained convolutional neural networks and ensemble learning to classify skin lesions. We trained the networks using 2000 skin lesion images available from the ISIC 2017 challenge, which has three main categories and includes 374 melanoma, 254 seborrheic keratosis and 1372 benign nevi images. The trained classifier was then tested on 150 unlabeled images. The results, evaluated by the challenge organizer and based on the area under the receiver operating characteristic curve (AUC), were 84.8% and 93.6% for the melanoma and seborrheic keratosis binary classification problems, respectively. The proposed method achieved results competitive with those of experienced dermatologists. Further improvement and optimization of the proposed method with a larger training dataset could lead to a more precise, reliable and robust method for skin lesion classification.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nice derivations over principal ideal domains
In this paper we investigate to what extent the results of Z. Wang and D. Daigle on nice derivations of the polynomial ring in three variables over a field k of characteristic zero extend to the polynomial ring over a PID R containing the field of rational numbers. One of our results shows that the kernel of a nice derivation of rank at most three on the polynomial ring in four variables over k is a polynomial ring over k.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Cooperative Estimation via Altruism
A novel approach to cooperative estimation, based on the notion of altruism, is presented for a system comprising two information-sharing estimators. The underlying assumption is that the system's global mission can be accomplished even if only one of the estimators achieves satisfactory performance. The notion of altruism motivates a new definition of cooperative estimation optimality that generalizes the common definition of minimum mean square error optimality. Fundamental equations are derived for two types of altruistic cooperative estimation problems, corresponding to heterarchical and hierarchical setups. Although these equations are hard to solve in the general case, their solution in the Gaussian case is straightforward and only entails the largest eigenvalue of the conditional covariance matrix and its corresponding eigenvector. Moreover, in that case the performance improvement of the two altruistic cooperative estimation techniques over the conventional (egoistic) estimation approach is shown to depend on the problem's dimensionality and statistical distribution. In particular, the performance improvement grows with the dispersion of the spectrum of the conditional covariance matrix, rendering the new estimation approach especially appealing in ill-conditioned problems. The performance of the new approach is demonstrated using a numerical simulation study.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Multi-GPU maximum entropy image synthesis for radio astronomy
The maximum entropy method (MEM) is a well known deconvolution technique in radio-interferometry. This method solves a non-linear optimization problem with an entropy regularization term. Other heuristics such as CLEAN are faster but highly user dependent. Nevertheless, MEM has the following advantages: it is unsupervised, it has a statistical basis, and it has better resolution and better image quality under certain conditions. This work presents a high performance GPU version of non-gridding MEM, which is tested using real and simulated data. We propose a single-GPU and a multi-GPU implementation for single and multi-spectral data, respectively. We also make use of the Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to be exploited transparently and efficiently. Several ALMA data sets are used to demonstrate the effectiveness in imaging and to evaluate GPU performance. The results show that a speedup of 1000 to 5000 times over a sequential version can be achieved, depending on data and image size. This allows us to reconstruct the HD142527 CO(6-5) short baseline data set in 2.1 minutes, instead of the 2.5 days taken by a sequential version on CPU.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Green function for linearized Navier-Stokes around a boundary layer profile: away from critical layers
In this paper, we construct the Green function for the classical Orr-Sommerfeld equations, which are the linearized Navier-Stokes equations around a boundary layer profile. As an immediate application, we derive uniform sharp bounds on the semigroup of the linearized Navier-Stokes problem around unstable profiles in the vanishing viscosity limit.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
SETI in vivo: testing the we-are-them hypothesis
After it was proposed that life on Earth might descend from seeding by an earlier civilization, some authors noted that this alternative offers a testable aspect: the seeds could be supplied with a signature that might be found in extant organisms. In particular, it was suggested that the optimal location for such an artifact is the genetic code, as the least evolving part of cells. However, as the mainstream view goes, this scenario is too speculative and cannot be meaningfully tested because encoding/decoding a signature within the genetic code is ill-defined, so any retrieval attempt is doomed to guesswork. Here we refresh the seeded-Earth hypothesis and discuss the motivation for inserting a signature. We then show that "biological SETI" involves even weaker assumptions than traditional SETI and admits a well-defined methodological framework. After assessing the possibility in terms of molecular and evolutionary biology, we formalize the approach and, adopting the guideline of SETI that encoding/decoding should follow from first principles and be convention-free, develop a retrieval strategy. Applied to the canonical code, it reveals a nontrivial precision structure of interlocked systematic attributes. To assess this result in view of the initial assumption, we perform statistical, comparison, interdependence, and semiotic analyses. Statistical analysis reveals no causal connection to evolutionary models of the code, interdependence analysis precludes overinterpretation, and comparison analysis shows that known code variations lack any precision-logic structures, in agreement with these variations being post-seeding deviations from the canonical code. Finally, semiotic analysis shows that not only are the found attributes consistent with the initial assumption, but they also make perfect sense from the SETI perspective, as they maintain some of the most universal codes of culture.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Explicit Time Integration of Transient Eddy Current Problems
Transient eddy current problems are commonly integrated in time with implicit methods, where in every time step one or several nonlinear systems of equations have to be linearized with the Newton-Raphson method because of the ferromagnetic materials involved. In this paper, a generalized Schur complement is applied to the magnetic vector potential formulation, which converts a differential-algebraic equation system of index 1 into a system of ordinary differential equations (ODE) with reduced stiffness. For the time integration of this ODE system, the explicit Euler method is applied. The Courant-Friedrichs-Lewy (CFL) stability criterion of explicit time integration methods may result in small time steps. Applying a pseudo-inverse of the discrete curl-curl operator in nonconducting regions of the problem is required in every time step. For the computation of the pseudo-inverse, the preconditioned conjugate gradient (PCG) method is used. The Cascaded Subspace Extrapolation method (CSPE) is presented to produce suitable start vectors for these PCG iterations. The resulting scheme is validated using the TEAM 10 benchmark problem.
Labels: cs=1, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Transversality for local Morse homology with symmetries and applications
We prove the transversality result necessary for defining local Morse chain complexes with finite cyclic group symmetry. Our arguments use special regularized distance functions constructed using classical covering lemmas, and an inductive perturbation process indexed by the strata of the isotropy set. A global existence theorem for symmetric Morse-Smale pairs is also proved. Regarding applications, we focus on Hamiltonian dynamics and rigorously establish a local contact homology package based on discrete action functionals. We prove a persistence theorem, analogous to the classical shifting lemma for geodesics, asserting that the iteration map is an isomorphism for good and admissible iterations. We also consider a Chas-Sullivan product on non-invariant local Morse homology, which plays the role of pair-of-pants product, and study its relationship to symplectically degenerate maxima. Finally, we explore how our invariants can be used to study bifurcation of critical points (and periodic points) under additional symmetries.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the maximum principle for the Riesz transform
Let $\mu$ be a measure in $\mathbb R^d$ with compact support and continuous density, and let $$ R^s\mu(x)=\int\frac{y-x}{|y-x|^{s+1}}\,d\mu(y),\ \ x,y\in\mathbb R^d,\ \ 0<s<d. $$ We consider the following conjecture: $$ \sup_{x\in\mathbb R^d}|R^s\mu(x)|\le C\sup_{x\in\text{supp}\,\mu}|R^s\mu(x)|,\quad C=C(d,s). $$ This relation was known for $d-1\le s<d$, and is still an open problem in the general case. We prove the maximum principle for $0< s<1$, and also for $0<s<d$ in the case of radial measure. Moreover, we show that this conjecture is incorrect for non-positive measures.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The complex case of Schmidt's going-down Theorem
In 1967, Schmidt wrote a seminal paper [10] on heights of subspaces of $\mathbb{R}^n$ or $\mathbb{C}^n$ defined over a number field $K$, and diophantine approximation problems. The going-down Theorem -- one of the main theorems he proved in his paper -- remains valid in two cases depending on whether the embedding of $K$ in the complex field $\mathbb{C}$ is a real or a complex non-real embedding. For the latter, and more generally as soon as $K$ is not totally real, at some point of the proof the arguments in [10] do not exactly work as announced. In this note, Schmidt's ideas are worked out in detail and his proof of the complex case is presented, solving the aforementioned problem. Some definitions of Schmidt are reformulated in terms of multilinear algebra and wedge products, following the approaches of Laurent [5], Bugeaud and Laurent [1] and Roy [7], [8]. In [5] Laurent introduces, in the case $K = \mathbb{Q}$, a family of exponents and gives a series of inequalities relating them. In Section 5 these exponents are defined for an arbitrary number field $K$. Using the going-up and the going-down Theorems, Laurent's inequalities are generalized to this setting.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Optimal rates of estimation for multi-reference alignment
In this paper, we establish optimal rates of adaptive estimation of a vector in the multi-reference alignment model, a problem with important applications in fields such as signal processing, image processing, and computer vision, among others. We describe how this model can be viewed as a multivariate Gaussian mixture model under the constraint that the centers belong to the orbit of a group. This enables us to derive matching upper and lower bounds that feature an interesting dependence on the signal-to-noise ratio of the model. Both upper and lower bounds are articulated around a tight local control of Kullback-Leibler divergences that showcases the central role of moment tensors in this problem.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Distribution System Voltage Control under Uncertainties using Tractable Chance Constraints
Voltage control plays an important role in the operation of electricity distribution networks, especially with high penetration of distributed energy resources. These resources introduce significant and fast varying uncertainties. In this paper, we focus on reactive power compensation to control voltage in the presence of uncertainties. We adopt a chance constraint approach that accounts for arbitrary correlations between renewable resources at each of the buses. We show how the problem can be solved efficiently using historical samples via a stochastic quasi gradient method. We also show that this optimization problem is convex for a wide variety of probabilistic distributions. Compared to conventional per-bus chance constraints, our formulation is more robust to uncertainty and more computationally tractable. We illustrate the results using standard IEEE distribution test feeders.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Overcomplete Frame Thresholding for Acoustic Scene Analysis
In this work, we derive a generic overcomplete frame thresholding scheme based on risk minimization. Since overcomplete frames are favored for analysis tasks such as classification, regression or anomaly detection, we provide a way to leverage those optimal representations in real-world applications through the use of thresholding. We validate the method on a large scale bird activity detection task via the scattering network architecture, computed by means of continuous wavelets, which are known to be an adequate dictionary in audio environments.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
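A minimal sketch of thresholding in an overcomplete representation, the operation underlying the "Overcomplete Frame Thresholding" abstract: analyze, soft-threshold the coefficients, synthesize. The random frame and the fixed threshold are illustrative assumptions; the paper's risk-minimizing threshold choice is not reproduced:

```python
import numpy as np

def soft_threshold(c, tau):
    # Shrink coefficients toward zero; small ones are set exactly to zero.
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

rng = np.random.default_rng(0)
F = rng.standard_normal((128, 64))      # 128 frame atoms in R^64: overcomplete
x = rng.standard_normal(64)             # a signal
coeffs = F @ x                          # analysis coefficients
den = soft_threshold(coeffs, tau=0.5)   # tau is an arbitrary illustrative value
x_hat = np.linalg.pinv(F) @ den         # synthesis via the pseudo-inverse
```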
A Hybrid Approach for Trajectory Control Design
This work presents a methodology to design trajectory tracking feedback control laws which embed non-parametric statistical models, such as Gaussian Processes (GPs). The aim is to minimize unmodeled dynamics such as undesired slippages. The proposed approach has the benefit of avoiding complex terramechanics analysis by directly estimating the robot dynamics from data on a wide class of trajectories. Experiments in both real and simulated environments show that the proposed methodology is promising.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Achieving and Managing Availability SLAs with ITIL Driven Processes, DevOps, and Workflow Tools
System and application availability continues to be a fundamental characteristic of IT services. In recent years the IT Operations team at Wolters Kluwer CT Corporation has placed special focus on this area. Using a combination of goals, metrics, processes, organizational models, communication methods, corrective maintenance, root cause analysis, preventative engineering, automated alerting, and workflow automation, significant progress has been made in meeting availability SLAs (Service Level Agreements). This paper presents the background of this work, the approach, the details of its implementation, and the results. A special focus is provided on the use of a classical ITIL view as operationalized in an Agile and DevOps environment. Keywords: System Availability, Software Reliability, ITIL, Workflow Automation, Process Engineering, Production Support, Customer Support, Product Support, Change Management, Release Management, Incident Management, Problem Management, Organizational Design, Scrum, Agile, DevOps, Service Level Agreements, Software Measurement, Microsoft SharePoint.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Foresight: Rapid Data Exploration Through Guideposts
Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through "guideposts." A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the "strongest" instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at "nearby" guideposts by issuing "guidepost queries" containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
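A minimal sketch of one Foresight-style "guidepost" descriptor: rank attribute pairs by the strength of their linear correlation and surface the strongest instances. Foresight itself uses fast approximate sketching over several descriptors, so this exact-computation scan is illustrative only:

```python
import itertools
import pandas as pd

def correlation_guideposts(df: pd.DataFrame, top_k: int = 5):
    # Score every numeric attribute pair by |Pearson r| (the ranking metric)
    # and return the top_k "strongest" pairs as candidate guideposts.
    numeric = df.select_dtypes("number")
    scored = []
    for a, b in itertools.combinations(numeric.columns, 2):
        r = numeric[a].corr(numeric[b])
        if pd.notna(r):
            scored.append((abs(r), a, b))
    return sorted(scored, reverse=True)[:top_k]
```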
Transport by Lagrangian Vortices in the Eastern Pacific
Rotationally coherent Lagrangian vortices (RCLVs) are identified from satellite-derived surface geostrophic velocities in the Eastern Pacific (180$^\circ$-130$^\circ$ W) using the objective (frame-invariant) finite-time Lagrangian-coherent-structure detection method of Haller et al. (2016) based on the Lagrangian-averaged vorticity deviation. RCLVs are identified for 30, 90, and 270 day intervals over the entire satellite dataset, beginning in 1993. In contrast to structures identified using Eulerian eddy-tracking methods, the RCLVs maintain material coherence over the specified time intervals, making them suitable for material transport estimates. Statistics of RCLVs are compared to statistics of eddies identified from sea-surface height (SSH) by Chelton et al. 2011. RCLVs and SSH eddies are found to propagate westward at similar speeds at each latitude, consistent with the Rossby wave dispersion relation. However, RCLVs are uniformly smaller and shorter-lived than SSH eddies. A coherent eddy diffusivity is derived to quantify the contribution of RCLVs to meridional transport; it is found that RCLVs contribute less than 1% to net meridional dispersion and diffusion in this sector, implying that eddy transport of tracers is mostly due to incoherent motions, such as swirling and filamentation outside of the eddy cores, rather than coherent meridional translation of eddies themselves. These findings call into question prior estimates of coherent eddy transport based on Eulerian eddy identification methods.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Online Adaptive Methods, Universality and Acceleration
We present a novel method for convex unconstrained optimization that, without any modifications, ensures: (i) accelerated convergence rate for smooth objectives, (ii) standard convergence rate in the general (non-smooth) setting, and (iii) standard convergence rate in the stochastic optimization setting. To the best of our knowledge, this is the first method that simultaneously applies to all of the above settings. At the heart of our method is an adaptive learning rate rule that employs importance weights, in the spirit of adaptive online learning algorithms (Duchi et al., 2011; Levy, 2017), combined with an update that linearly couples two sequences, in the spirit of (Allen-Zhu and Orecchia, 2017). An empirical examination of our method demonstrates its applicability to the above mentioned scenarios and corroborates our theoretical findings.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Note on a Quantitative Form of the Solovay-Kitaev Theorem
The problem of finding good approximations of arbitrary 1-qubit gates is identical to that of finding a dense group generated by a universal subset of $SU(2)$ to approximate an arbitrary element of $SU(2)$. The Solovay-Kitaev Theorem is a well-known theorem that guarantees the existence of a finite sequence of 1-qubit quantum gates approximating an arbitrary unitary matrix in $SU(2)$ within specified accuracy $\varepsilon > 0$. In this note we study a quantitative description of this theorem in the following sense. We will work with a universal gate set $T$, a subset of $SU(2)$ such that the group generated by the elements of $T$ is dense in $SU(2)$. For $\varepsilon > 0$ small enough, we define $t_{\varepsilon}$ as the minimum reduced word length such that every point of $SU(2)$ lies within a ball of radius $\varepsilon$ centered at the points in the dense subgroup generated by $T$. For a measure of efficiency on T, which we denote $K(T)$, we prove the following theorem: Fix a $\delta$ in $[0, \frac{2}{3}]$. Choose $f: (0, \infty) \rightarrow (1, \infty)$ satisfying $\lim_{\varepsilon\to 0+}\dfrac{\log(f(t_{\varepsilon}))}{t_{\varepsilon}}$ exists with value $0$. Assume that the inequality $\varepsilon \leqslant f(t_{\varepsilon})\cdot 5^{\frac{-t_{\varepsilon}}{6-3\delta}}$ holds. Then $K(T) \leqslant 2-\delta$. Our conjecture implies the following: Let $\nu(5^{t_{\varepsilon}})$ denote the set of integer solutions of the quadratic form: $x_1^2+x_2^2+x_3^2+x_4^2=5^{t_{\varepsilon}}$. Let $M\equiv M_{S^3}(\mathcal{N})$ denote the covering radius of the points $\mathcal{N}=\nu(5^{t_{\varepsilon}})\cup\nu(5^{t_{\varepsilon}-1})$ on the sphere $S^{3}$ in $\mathbb{R}^{4}$. Then $M \sim f(\log N)N^{\frac{-1}{6-3\delta}}$. Here $N\equiv N(\varepsilon)=6\cdot5^{t_{\varepsilon}}-2$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Learning Steerable Filters for Rotation Equivariant CNNs
In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stochastic Multi-armed Bandits in Constant Space
We consider the stochastic bandit problem in the sublinear space setting, where one cannot record the win-loss record for all $K$ arms. We give an algorithm using $O(1)$ words of space with regret \[ \sum_{i=1}^{K}\frac{1}{\Delta_i}\log \frac{\Delta_i}{\Delta}\log T \] where $\Delta_i$ is the gap between the best arm and arm $i$ and $\Delta$ is the gap between the best and the second-best arms. If the rewards are bounded away from $0$ and $1$, this is within an $O(\log 1/\Delta)$ factor of the optimum regret possible without space constraints.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
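To illustrate only the memory model of the "Stochastic Multi-armed Bandits in Constant Space" abstract, a sketch that keeps statistics for just two arms at a time (a running champion and the current challenger); this is not the paper's algorithm and carries none of its regret guarantee:

```python
import random

def constant_space_best_arm(pull, K, pulls_per_duel=1000):
    """pull(i) -> random reward in [0, 1] for arm i; only O(1) state kept."""
    champ = 0
    champ_mean = sum(pull(champ) for _ in range(pulls_per_duel)) / pulls_per_duel
    for i in range(1, K):
        mean_i = sum(pull(i) for _ in range(pulls_per_duel)) / pulls_per_duel
        if mean_i > champ_mean:          # the challenger beats the champion
            champ, champ_mean = i, mean_i
    return champ                         # memory: two (index, mean) pairs

best = constant_space_best_arm(lambda i: random.random() * (i + 1) / 5, K=5)
print(best)                              # arm 4 has the highest mean reward
```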
Max K-armed bandit: On the ExtremeHunter algorithm and beyond
This paper is devoted to the study of the max K-armed bandit problem, which consists in sequentially allocating resources in order to detect extreme values. Our contribution is twofold. We first significantly refine the analysis of the ExtremeHunter algorithm carried out in Carpentier and Valko (2014), and next propose an alternative approach, showing that, remarkably, Extreme Bandits can be reduced to a classical version of the bandit problem to a certain extent. Beyond the formal analysis, these two approaches are compared through numerical experiments.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Comparing Different Models for Investigating Cascading Failures in Power Systems
This paper centers on the comparison of three different models that describe cascading failures of power systems. Specifically, these models differ in characterizing the physical properties of power networks and in computing the branch power flow. An optimal control approach is applied to these models to identify the critical disturbances that result in the worst-case cascading failures of power networks. Then we compare these models by analyzing the critical disturbances and cascading processes. Significantly, comparison results on the IEEE 9-bus system demonstrate that the physical and electrical properties of power networks play a crucial role in the evolution of cascading failures, and it is necessary to take these properties into account appropriately while applying the models in the analysis of cascading blackouts.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Initial-boundary value problems in a rectangle for two-dimensional Zakharov-Kuznetsov equation
Initial-boundary value problems in a bounded rectangle with different types of boundary conditions for the two-dimensional Zakharov-Kuznetsov equation are considered. Results on global well-posedness in the classes of weak and regular solutions are established. As applications of the developed technique, results on boundary controllability and long-time decay of weak solutions are also obtained.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Two-dimensional Fermi gases near a p-wave resonance: effect of quantum fluctuations
We study the stability of p-wave superfluidity against quantum fluctuations in two-dimensional Fermi gases near a p-wave Feshbach resonance. An analysis is carried out in the limit when the interchannel coupling is strong. By investigating the effective potential for the pairing field via the standard loop expansion, we show that a homogeneous p-wave pairing state becomes unstable when two-loop quantum fluctuations are taken into account. This is in contrast to the previously predicted $p + ip$ superfluid in the weak-coupling limit [V. Gurarie et al., Phys. Rev. Lett. 94, 230403 (2005)]. It implies a possible onset of instability at a certain intermediate interchannel coupling strength. Alternatively, the instability can also be driven by lowering the particle density. We also discuss the validity of our analysis.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimal Strong Rates of Convergence for a Space-Time Discretization of the Stochastic Allen-Cahn Equation with multiplicative noise
The stochastic Allen-Cahn equation with multiplicative noise involves the nonlinear drift operator ${\mathscr A}(x) = \Delta x - \bigl(\vert x\vert^2 -1\bigr)x$. We use the fact that ${\mathscr A}(x) = -{\mathcal J}^{\prime}(x)$ satisfies a weak monotonicity property to deduce uniform bounds in strong norms for solutions of the temporal, as well as of the spatio-temporal discretization of the problem. This weak monotonicity property then allows for the estimate $ \underset{1 \leq j \leq J}\sup {\mathbb E}\bigl[ \Vert X_{t_j} - Y^j\Vert_{{\mathbb L}^2}^2\bigr] \leq C_{\delta}(k^{1-\delta} + h^2)$ for all small $\delta>0$, where $X$ is the strong variational solution of the stochastic Allen-Cahn equation, while $\big\{Y^j:0\le j\le J\big\}$ solves a structure preserving finite element based space-time discretization of the problem on a temporal mesh $\{ t_j;\, 1 \leq j \leq J\}$ of size $k>0$ which covers $[0,T]$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A generalized quantum Slepian-Wolf
In this work we consider a quantum generalization of the task considered by Slepian and Wolf [1973] regarding distributed source compression. In our task Alice, Bob, Charlie and Reference share a joint pure state. Alice and Bob wish to send a part of their respective systems to Charlie without collaborating with each other. We give achievability bounds for this task in the one-shot setting and provide the asymptotic and i.i.d. analysis in the case when there is no side information with Charlie. Our result implies the result of Abeyesinghe, Devetak, Hayden and Winter [2009] who studied a special case of this problem. As another special case wherein Bob holds trivial registers, we recover the result of Devetak and Yard [2008] regarding quantum state redistribution.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The rational SPDE approach for Gaussian random fields with general smoothness
A popular approach for modeling and inference in spatial statistics is to represent Gaussian random fields as solutions to stochastic partial differential equations (SPDEs) of the form $L^{\beta}u = \mathcal{W}$, where $\mathcal{W}$ is Gaussian white noise, $L$ is a second-order differential operator, and $\beta>0$ is a parameter that determines the smoothness of $u$. However, this approach has been limited to the case $2\beta\in\mathbb{N}$, which excludes several important covariance models and makes it necessary to keep $\beta$ fixed during inference. We introduce a new method, the rational SPDE approach, which is applicable for any $\beta>0$ and therefore remedies the mentioned limitation. The presented scheme combines a finite element discretization in space with a rational approximation of the function $x^{-\beta}$ to approximate $u$. For the resulting approximation, an explicit rate of strong convergence to $u$ is derived and we show that the method has the same computational benefits as in the restricted case $2\beta\in\mathbb{N}$ when used for statistical inference and prediction. Several numerical experiments are performed to illustrate the accuracy of the method, and to show how it can be used for likelihood-based inference for all model parameters including $\beta$.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
An Analog of the Neumann Problem for the $1$-Laplace Equation in the Metric Setting: Existence, Boundary Regularity, and Stability
We study an inhomogeneous Neumann boundary value problem for functions of least gradient on bounded domains in metric spaces that are equipped with a doubling measure and support a Poincaré inequality. We show that solutions exist under certain regularity assumptions on the domain, but are generally nonunique. We also show that solutions can be taken to be differences of two characteristic functions, and that they are regular up to the boundary when the boundary is of positive mean curvature. By regular up to the boundary we mean that if the boundary data is $1$ in a neighborhood of a point on the boundary of the domain, then the solution is $-1$ in the intersection of the domain with a possibly smaller neighborhood of that point. Finally, we consider the stability of solutions with respect to boundary data.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Activating spin-forbidden transitions in molecules by the highly localized plasmonic field
Optical spectroscopy has been the primary tool to study the electronic structure of molecules. However, the strict spin selection rule has severely limited its ability to access states of different spin multiplicities. Here we propose a new strategy to activate spin-forbidden transitions in molecules by introducing a spatially highly inhomogeneous plasmonic field. The giant enhancement of the magnetic field strength resulting from the curl of the inhomogeneous vector potential makes the transition between states of different spin multiplicities naturally feasible. The dramatic effect of the inhomogeneity of the plasmonic field on the spin and symmetry selection rules is well illustrated by first-principles calculations of C$_{60}$. Remarkably, the intensity of singlet-triplet transitions can even be stronger than that of singlet-singlet transitions when the plasmon spatial distribution is comparable with the molecular size. This approach offers a powerful means to completely map out all excited states of molecules and to actively control their photochemical processes. The same concept can also be applied to study nano and biological systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the Essential Spectrum of Schrödinger Operators on Trees
It is known that the essential spectrum of a Schrödinger operator $H$ on $\ell^{2}\left(\mathbb{N}\right)$ is equal to the union of the spectra of right limits of $H$. The natural generalization of this relation to $\mathbb{Z}^{n}$ is known to hold as well. In this paper we generalize the notion of right limits to general infinite connected graphs and construct examples of graphs for which the essential spectrum of the Laplacian is strictly bigger than the union of the spectra of its right limits. As these right limits are trees, this result is complemented by the fact that the equality still holds for general bounded operators on regular trees. We prove this and characterize the essential spectrum in the spherically symmetric case.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The equilibrium of over-pressurised polytropes
We investigate the impact of an external pressure on the structure of self-gravitating polytropes for axially symmetric ellipsoids and rings. The confinement of the fluid by photons is accounted for through a boundary condition on the enthalpy $H$. Equilibrium configurations are determined numerically from a generalised "Self-Consistent-Field" method. The new algorithm incorporates an intra-loop re-scaling operator ${\cal R}(H)$, which is essential both for convergence and for getting self-normalised solutions. The main control parameter is the external-to-core enthalpy ratio. In the case of uniform rotation rate and uniform surrounding pressure, we compute the mass, the volume, the rotation rate and the maximum enthalpy. This is repeated for a few polytropic indices $n$. For a given axis ratio, over-pressurization globally increases all output quantities, and this is more pronounced for large $n$. Density profiles are flatter than in the absence of an external pressure. When the control parameter asymptotically tends to unity, the fluid converges toward the incompressible solution, whatever the index, but becomes geometrically singular. Equilibrium sequences, obtained by varying the axis ratio, are built. States of critical rotation are greatly exceeded or even disappear. The same trends are observed with differential rotation. Finally, the typical response to a photon point source is presented. Strong irradiation favours sharp edges. Applications concern star forming regions and matter orbiting young stars and black holes.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Charge and pairing dynamics in the attractive Hubbard model: mode coupling and the validity of linear-response theory
Pump-probe experiments have turned out to be a powerful tool for studying the dynamics of competing orders in a large variety of materials. The corresponding analysis of the data often relies on standard linear-response theory generalized to non-equilibrium situations. Here we examine the validity of such an approach within the attractive Hubbard model, for which the dynamics of pairing and charge-density wave orders is computed using the time-dependent Hartree-Fock approximation (TDHF). Our calculations reveal that the `linear-response assumption' is justified for small to moderate non-equilibrium situations (i.e., pump pulses) when the symmetry of the pump-induced state differs from that of the external field. This is the case when we consider the pairing response in a charge-ordered state or the charge-order response in a superconducting state. The situation is very different when the non-equilibrium state and the external probe field have the same symmetry. In this case, we observe significant changes of the response in magnitude, but also due to mode coupling, when moving away from an equilibrium state, indicating the failure of the linear-response assumption.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
From voids to filaments: environmental transformations of galaxies in the SDSS
We investigate the impact of filament and void environments on galaxies, looking for residual effects beyond the known relations with environment density. We quantified the host environment of galaxies as the distance to the spine of the nearest filament, and compared various galaxy properties within 12 bins of this distance. We considered galaxies up to 10 $h^{-1}$Mpc from filaments, i.e. deep inside voids. The filaments were defined by a point process (the Bisous model) from the Sloan Digital Sky Survey data release 10. In order to remove the dependence of galaxy properties on the environment density and redshift, we applied weighting to normalise the corresponding distributions of galaxy populations in each bin. After the normalisation with respect to environment density and redshift, several residual dependencies of galaxy properties still remain. Most notable is the trend of morphology transformations, resulting in a higher elliptical-to-spiral ratio while moving from voids towards filament spines, bringing along a corresponding increase in the $g-i$ colour index and a decrease in star formation rate. After separating elliptical and spiral subsamples, some of the colour index and star formation rate evolution still remains. The mentioned trends are characteristic only for galaxies brighter than about $M_{r} = -20$ mag. Unlike some other recent studies, we do not witness an increase in the galaxy stellar mass while approaching filaments. The detected transformations can be explained by an increase in the galaxy-galaxy merger rate and/or the cut-off of extragalactic gas supplies (starvation) near and inside filaments. Unlike voids, large-scale galaxy filaments are not a mere density enhancement, but have their own specific impact on the constituent galaxies, reducing the star formation rate and raising the chances of elliptical morphology also at a fixed environment density level.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Distributed Algorithms Made Secure: A Graph Theoretic Approach
In the area of distributed graph algorithms, a number of network entities with local views solve some computational task by exchanging messages with their neighbors. Quite unfortunately, an inherent property of most existing distributed algorithms is that throughout the course of their execution, the nodes get to learn not only their own output but also quite a lot about the inputs or outputs of many other entities. This leakage of information might be a major obstacle in settings where the output (or input) of an individual node is private information. In this paper, we introduce a new framework for \emph{secure distributed graph algorithms} and provide the first \emph{general compiler} that takes any "natural" non-secure distributed algorithm that runs in $r$ rounds, and turns it into a secure algorithm that runs in $\widetilde{O}(r \cdot D \cdot poly(\Delta))$ rounds where $\Delta$ is the maximum degree in the graph and $D$ is its diameter. The security of the compiled algorithm is information-theoretic but holds only against a semi-honest adversary that controls a single node in the network. This compiler is made possible due to a new combinatorial structure called \emph{private neighborhood trees}: a collection of $n$ trees $T(u_1),\ldots,T(u_n)$, one for each vertex $u_i \in V(G)$, such that each tree $T(u_i)$ spans the neighbors of $u_i$ {\em without going through $u_i$}. Intuitively, each tree $T(u_i)$ allows all neighbors of $u_i$ to exchange a \emph{secret} that is hidden from $u_i$, which is the basic graph infrastructure of the compiler. In a $(d,c)$-private neighborhood trees each tree $T(u_i)$ has depth at most $d$ and each edge $e \in G$ appears in at most $c$ different trees. We show a construction of private neighborhood trees with $d=\widetilde{O}(\Delta \cdot D)$ and $c=\widetilde{O}(D)$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Interpretation of Neural Networks is Fragile
In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of several widely-used feature-importance interpretation methods (saliency maps, relevance propagation, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Dipolar phonons and electronic screening in monolayer FeSe on SrTiO$_3$
Monolayer films of FeSe grown on SrTiO$_3$ substrates exhibit significantly higher superconducting transition temperatures than those of bulk FeSe. Interaction of electrons in the FeSe layer with dipolar SrTiO$_3$ phonons has been suggested as the cause of the enhanced transition temperature. In this paper we systematically study the coupling of SrTiO$_3$ longitudinal optical phonons to the FeSe electrons, including also electron-electron Coulomb interactions at the random phase approximation level. We find that the electron-phonon interaction between FeSe and the SrTiO$_3$ substrate is almost entirely screened by the electronic fluctuations in the FeSe monolayer, so that the net electron-phonon interaction is very weak and unlikely to lead to superconductivity.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Supervised Approach to Extractive Summarisation of Scientific Papers
Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Channel Feedback Based on AoD-Adaptive Subspace Codebook in FDD Massive MIMO Systems
Channel feedback is essential in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. Unfortunately, previous work on multiuser MIMO has shown that the codebook size for channel feedback should scale exponentially with the number of base station (BS) antennas, which is greatly increased in massive MIMO systems. To reduce the codebook size and feedback overhead, we propose an angle-of-departure (AoD)-adaptive subspace codebook for channel feedback in FDD massive MIMO systems. Our key insight is to leverage the observation that path AoDs vary more slowly than the path gains. Within the angle coherence time, by utilizing the constant AoD information, the proposed AoD-adaptive subspace codebook is able to quantize the channel vector in a more accurate way. We also provide performance analysis of the proposed codebook in the large-dimensional regime, where we prove that to limit the capacity degradation within an acceptable level, the required number of feedback bits only scales linearly with the number of resolvable (path) AoDs, which is much smaller than the number of BS antennas. Moreover, we compare quantized channel feedback using the proposed AoD-adaptive subspace codebook with analog channel feedback. Extensive simulations that verify the analytical results are provided.
1
0
0
0
0
0
Thermalization in simple metals: The role of electron-phonon and phonon-phonon scatterings
We study the electron and phonon thermalization in simple metals excited by a laser pulse. The thermalization is investigated numerically by solving the Boltzmann transport equation taking into account all the relevant scattering mechanisms: the electron-electron, electron-phonon (e-ph), phonon-electron (ph-e), and phonon-phonon (ph-ph) scatterings. In the initial stage of the relaxation, most of the excitation energy is transferred from the electrons to phonons through the e-ph scattering. This creates hot high-frequency phonons due to the ph-e scatterings, followed by an energy redistribution between phonon subsystems through the ph-ph scatterings. This yields an overshoot of the total longitudinal-acoustic phonon energy at a certain time, across which a crossover occurs from a nonequilibrium state, where the e-ph and ph-e scatterings frequently occur, to a state where ph-ph scattering drives the system toward thermal equilibrium. This picture is quite different from the scenario of the well-known two-temperature model (2TM). The behavior of the relaxation dynamics is compared with those calculated by several models, including the 2TM, the four-temperature model, and nonequilibrium electron or phonon models. The relationship between the relaxation time and the initial distribution function is also discussed.
0
1
0
0
0
0
Convergence of the free Boltzmann quadrangulation with simple boundary to the Brownian disk
We prove that the free Boltzmann quadrangulation with simple boundary and fixed perimeter, equipped with its graph metric, natural area measure, and the path which traces its boundary converges in the scaling limit to the free Boltzmann Brownian disk. The topology of convergence is the so-called Gromov-Hausdorff-Prokhorov-uniform (GHPU) topology, the natural analog of the Gromov-Hausdorff topology for curve-decorated metric measure spaces. From this we deduce that a random quadrangulation of the sphere decorated by a $2l$-step self-avoiding loop converges in law in the GHPU topology to the random curve-decorated metric measure space obtained by gluing together two independent Brownian disks along their boundaries.
0
0
1
0
0
0
Canonical quantization of nonlinear sigma models with theta term, with applications to symmetry-protected topological phases
We canonically quantize $O(D+2)$ nonlinear sigma models (NLSMs) with theta term on arbitrary smooth, closed, connected, oriented $D$-dimensional spatial manifolds $\mathcal{M}$, with the goal of proving the suitability of these models for describing symmetry-protected topological (SPT) phases of bosons in $D$ spatial dimensions. We show that in the disordered phase of the NLSM, and when the coefficient $\theta$ of the theta term is an integer multiple of $2\pi$, the theory on $\mathcal{M}$ has a unique ground state and a finite energy gap to all excitations. We also construct the ground state wave functional of the NLSM in this parameter regime, and we show that it is independent of the metric on $\mathcal{M}$ and given by the exponential of a Wess-Zumino term for the NLSM field, in agreement with previous results on flat space. Our results show that the NLSM in the disordered phase and at $\theta=2\pi k$, $k\in\mathbb{Z}$, has a symmetry-preserving ground state but no topological order (i.e., no topology-dependent ground state degeneracy), making it an ideal model for describing SPT phases of bosons. Thus, our work places previous results on SPT phases derived using NLSMs on solid theoretical ground. To canonically quantize the NLSM on $\mathcal{M}$ we use Dirac's method for the quantization of systems with second class constraints, suitably modified to account for the curvature of space. In a series of four appendices we provide the technical background needed to follow the discussion in the main sections of the paper.
0
1
0
0
0
0
A Nonparametric Bayesian Approach to Copula Estimation
We propose a novel Dirichlet-based Pólya tree (D-P tree) prior on the copula and, based on this prior, a nonparametric Bayesian inference procedure. Through theoretical analysis and simulations, we show that the flexibility of the D-P tree prior ensures its consistency in copula estimation, making it able to detect more subtle and complex copula structures than earlier nonparametric Bayesian models, such as a Gaussian copula mixture. Further, the continuity of the imposed D-P tree prior leads to a more favorable smoothing effect in copula estimation over classic frequentist methods, especially with small sets of observations. We also apply our method to the copula prediction between the S\&P 500 index and the IBM stock prices during the 2007-08 financial crisis, finding that D-P tree-based methods enjoy strong robustness and flexibility over classic methods under such irregular market behaviors.
0
0
0
1
0
0
Model Predictive Control for Autonomous Driving Based on Time Scaled Collision Cone
In this paper, we present a Model Predictive Control (MPC) framework based on the path velocity decomposition paradigm for autonomous driving. The optimization underlying the MPC has a two-layer structure wherein first, an appropriate path is computed for the vehicle, followed by the computation of the optimal forward velocity along it. The very nature of the proposed path velocity decomposition allows for seamless compatibility between the two layers of the optimization. A key feature of the proposed work is that it offloads most of the responsibility of collision avoidance to the velocity optimization layer, for which computationally efficient formulations can be derived. In particular, we extend our previously developed concept of time scaled collision cone (TSCC) constraints and formulate the forward velocity optimization layer as a convex quadratic programming problem. We perform validation on autonomous driving scenarios wherein the proposed MPC repeatedly solves both optimization layers in a receding horizon manner to compute lane change, overtaking and merging maneuvers among multiple dynamic obstacles.
1
0
0
0
0
0
A Grouping Genetic Algorithm for Joint Stratification and Sample Allocation Designs
Predicting the cheapest sample size for the optimal stratification in multivariate survey design is a problem in cases where the population frame is large. A solution exists that iteratively searches for the minimum sample size necessary to meet accuracy constraints in partitions of atomic strata created by the Cartesian product of auxiliary variables into larger strata. The optimal stratification can be found by testing all possible partitions. However, the number of possible partitions grows exponentially with the number of initial strata. There are alternative ways of modelling this problem, one of the most natural being Genetic Algorithms (GAs). These evolutionary algorithms use recombination, mutation and selection to search for optimal solutions. They often converge on optimal or near-optimal solutions more quickly than exact methods. We propose a new GA approach to this problem using grouping genetic operators instead of traditional operators. The results show a significant improvement in solution quality for similar computational effort, corresponding to large monetary savings.
0
0
0
1
0
0
Murmur Detection Using Parallel Recurrent & Convolutional Neural Networks
In this article, we propose a novel technique for classification of murmurs in heart sounds. We introduce a novel deep neural network architecture using a parallel combination of a Recurrent Neural Network (RNN) based Bidirectional Long Short-Term Memory (BiLSTM) and a Convolutional Neural Network (CNN) to learn visual and time-dependent characteristics of murmurs in PCG waveforms. A set of acoustic features is presented to our proposed deep neural network to discriminate between the Normal and Murmur classes. The proposed method was evaluated on a large dataset using 5-fold cross-validation, resulting in a sensitivity of 96 ± 0.6%, a specificity of 100 ± 0%, and an F1 score of 98 ± 0.3%.
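A minimal PyTorch sketch of such a parallel CNN + BiLSTM architecture is given below; the feature dimension, layer sizes, and sequence length are illustrative assumptions and do not reproduce the authors' exact network.

# Sketch of a parallel CNN + BiLSTM classifier over acoustic feature frames,
# in the spirit of the architecture described above. Feature dimension,
# layer sizes, and sequence length are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelCRNN(nn.Module):
    def __init__(self, n_feats=40, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                 # "visual" branch over the
            nn.Conv1d(n_feats, 64, 5, padding=2), # time-frequency features
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.LSTM(n_feats, 64, batch_first=True,
                           bidirectional=True)    # time-dependent branch
        self.head = nn.Linear(64 + 2 * 64, n_classes)

    def forward(self, x):                         # x: (batch, time, n_feats)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)  # (batch, 64)
        r, _ = self.rnn(x)
        r = r[:, -1, :]                              # last time step, (batch, 128)
        return self.head(torch.cat([c, r], dim=1))  # fuse both branches

logits = ParallelCRNN()(torch.randn(8, 100, 40))  # 8 clips, 100 frames each
print(logits.shape)                               # torch.Size([8, 2])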
1
0
0
1
0
0
Time-of-Flight Electron Energy Loss Spectroscopy by Longitudinal Phase Space Manipulation with Microwave Cavities
The possibility to perform high-resolution time-resolved electron energy loss spectroscopy has the potential to impact a broad range of research fields. Resolving small energy losses with ultrashort electron pulses, however, is an enormous challenge due to the low average brightness of a pulsed beam. In this letter, we propose to use time-of-flight measurements combined with longitudinal phase space manipulation using resonant microwave cavities. This allows for both an accurate detection of energy losses with a high current throughput, and efficient monochromation. First, a proof-of-principle experiment is presented, showing that with the incorporation of a compression cavity the flight time resolution can be improved significantly. Then, it is shown through simulations that by adding a cavity-based monochromation technique, a full-width-at-half-maximum energy resolution of 22 meV can be achieved with 3.1 ps pulses at a beam energy of 30 keV with currently available technology. By combining state-of-the-art energy resolutions with a pulsed electron beam, the technique proposed here opens up the way to detecting short-lived excitations within the regime of highly collective physics.
0
1
0
0
0
0
Learning across scales - A multiscale method for Convolution Neural Networks
In this work we establish the relation between optimal control and training deep Convolutional Neural Networks (CNNs). We show that the forward propagation in CNNs can be interpreted as a time-dependent nonlinear differential equation and learning as controlling the parameters of the differential equation such that the network approximates the data-label relation for given training data. Using this continuous interpretation we derive two new methods to scale CNNs with respect to two different dimensions. The first class of multiscale methods connects low-resolution and high-resolution data through prolongation and restriction of CNN parameters. We demonstrate that this enables classifying high-resolution images using CNNs trained with low-resolution images and vice versa, and warm-starting the learning process. The second class of multiscale methods connects shallow and deep networks and leads to new training strategies that gradually increase the depth of the CNN while re-using parameters for initializations.
1
0
0
0
0
0
Capacitated Covering Problems in Geometric Spaces
In this article, we consider the following capacitated covering problem. We are given a set $P$ of $n$ points and a set $\mathcal{B}$ of balls from some metric space, and a positive integer $U$ that represents the capacity of each of the balls in $\mathcal{B}$. We would like to compute a subset $\mathcal{B}' \subseteq \mathcal{B}$ of balls and assign each point in $P$ to some ball in $\mathcal{B}'$ that contains it, such that the number of points assigned to any ball is at most $U$. The objective function that we would like to minimize is the cardinality of $\mathcal{B}'$. We consider this problem in arbitrary metric spaces as well as Euclidean spaces of constant dimension. In the metric setting, even the uncapacitated version of the problem is hard to approximate to within a logarithmic factor. In the Euclidean setting, the best known approximation guarantee in dimensions $3$ and higher is logarithmic in the number of points. Thus we focus on obtaining "bi-criteria" approximations. In particular, we are allowed to expand the balls in our solution by some factor, but optimal solutions do not have that flexibility. Our main result is that allowing constant factor expansion of the input balls suffices to obtain constant approximations for these problems. In fact, in the Euclidean setting, only $(1+\epsilon)$ factor expansion is sufficient for any $\epsilon > 0$, with the approximation factor being a polynomial in $1/\epsilon$. We obtain these results using a unified scheme for rounding the natural LP relaxation; this scheme may be useful for other capacitated covering problems. We also complement these bi-criteria approximations by obtaining hardness of approximation results that shed light on our understanding of these problems.
1
0
0
0
0
0
Irregular Oscillatory-Patterns in the Early-Time Region of Coherent Phonon Generation in Silicon
Coherent phonon (CP) generation in an undoped Si crystal is theoretically investigated to shed light on unexplored quantum-mechanical effects in the early-time region immediately after the irradiation of an ultrashort laser pulse. One examines time signals attributed to an induced charge density of an ionic core, placing the focus on the effects of the Rabi frequency $\Omega_{0cv}$ on the signals; this frequency corresponds to the peak electric-field of the pulse. It is found that at specific $\Omega_{0cv}$'s where the energy of plasmon caused by photoexcited carriers coincides with the longitudinal-optical phonon energy, the energetically resonant interaction between these two modes leads to striking anticrossings, revealing irregular oscillations with anomalously enhanced amplitudes in the observed time signals. Also, the oscillatory pattern is subject to the Rabi flopping of the excited carrier density that is controlled by $\Omega_{0cv}$. These findings show that the early-time region is enriched with quantum-mechanical effects inherent in the CP generation, though experimental signals are more or less masked by the so-called coherent artifact due to nonlinear optical effects.
0
1
0
0
0
0
Energy-Efficient Hybrid Stochastic-Binary Neural Networks for Near-Sensor Computing
Recent advances in neural networks (NNs) exhibit unprecedented success at transforming large, unstructured data streams into compact higher-level semantic information for tasks such as handwriting recognition, image classification, and speech recognition. Ideally, systems would employ near-sensor computation to execute these tasks at sensor endpoints to maximize data reduction and minimize data movement. However, near-sensor computing presents its own set of challenges such as operating power constraints, energy budgets, and communication bandwidth capacities. In this paper, we propose a stochastic-binary hybrid design which splits the computation between the stochastic and binary domains for near-sensor NN applications. In addition, our design uses a new stochastic adder and multiplier that are significantly more accurate than existing adders and multipliers. We also show that retraining the binary portion of the NN computation can compensate for precision losses introduced by shorter stochastic bit-streams, allowing faster run times at minimal accuracy losses. Our evaluation shows that our hybrid stochastic-binary design can achieve 9.8x energy efficiency savings, and application-level accuracies within 0.05% compared to conventional all-binary designs.
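For background, the snippet below illustrates plain unipolar stochastic computing, the substrate the paper builds on (it does not reproduce the paper's new adder and multiplier designs): a value p in [0,1] is encoded as a random bit-stream with P(bit=1)=p, multiplication reduces to a bitwise AND, and accuracy grows only slowly with stream length, which is why retraining to tolerate short streams matters.

# Background illustration of stochastic computing (not the paper's new
# adder/multiplier designs): in unipolar coding a value p in [0,1] is a
# random bit-stream with P(bit=1)=p, and multiplication is a bitwise AND.
import numpy as np

rng = np.random.default_rng(1)

def to_stream(p, n_bits):
    return rng.random(n_bits) < p       # Bernoulli(p) bit-stream

def from_stream(bits):
    return bits.mean()                  # decoded value = fraction of ones

a, b = 0.8, 0.4
prod = from_stream(to_stream(a, 4096) & to_stream(b, 4096))   # AND = multiply
print(f"stochastic {prod:.3f} vs exact {a * b:.3f}")

# Accuracy improves only like ~1/sqrt(n); this is why the paper retrains the
# binary part of the network to tolerate shorter streams.
for n in (64, 256, 1024, 4096):
    est = from_stream(to_stream(a, n) & to_stream(b, n))
    print(n, round(abs(est - a * b), 4))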
1
0
0
0
0
0
Position-based coding and convex splitting for private communication over quantum channels
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender $X$, a legitimate quantum receiver $B$, and a quantum eavesdropper $E$. The goal of a private communication protocol that uses such a channel is for the sender $X$ to transmit a message in such a way that the legitimate receiver $B$ can decode it reliably, while the eavesdropper $E$ learns essentially nothing about which message was transmitted. The $\varepsilon$-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than $\varepsilon\in(0,1)$. The present paper provides a lower bound on the $\varepsilon$-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between $X$ and $B$ and the "alternate" smooth max-information between $X$ and $E$. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
1
0
0
0
0
0
Geometric vulnerability of democratic institutions against lobbying: a sociophysics approach
An alternative voting scheme is proposed to fill the democratic gap between a president elected democratically via universal suffrage (a deterministic outcome, where the actual majority decides) and a president elected by one person randomly selected from the population (a probabilistic outcome depending on the respective supports). Moving from one voting agent to a group of r randomly selected voting agents reduces the probabilistic character of the outcome. Building r such groups, each electing its own president, and having the r local presidents elect a higher-level president, reduces the probabilistic aspect of the outcome further. Repeating the process n times leads to an n-level bottom-up pyramidal structure. The top president of the hierarchy is still elected with a probability, but the distance from a deterministic outcome shrinks quickly with increasing n. At a critical value n_{c,r} the outcome turns deterministic, recovering the same result a universal suffrage would yield. The scheme yields several social advantages, such as the distribution of local power to the competing minority, making the structure more resilient yet preserving the presidency allocation to the actual majority. Around fifty percent support, an area is produced in which the president is elected almost equiprobably, slightly biased in favor of the actual majority. However, our results reveal the existence of a severe geometric vulnerability to lobbying. A tiny lobbying group is able to kill the democratic balance by seizing the presidency democratically: it is sufficient to complete a correlated distribution of a few agents at the bottom of the hierarchy. Moreover, at the present stage, identifying an actual killing distribution is not feasible, which sheds a disturbing light on the devastating effect geometric lobbying can have on democratic hierarchical institutions.
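The sharpening effect described above is easy to reproduce with a toy Monte Carlo simulation of the bottom-up pyramid; the group size r, number of levels n, and support values below are illustrative assumptions.

# Toy Monte Carlo of the bottom-up voting pyramid described above: groups of
# r agents elect local presidents by majority, the local presidents form the
# next level, and so on for n levels. Values of r, n, p are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def top_win_probability(p, r=3, n=5, trials=2000):
    """Probability that party A (bottom-level support p) wins the top post."""
    wins = 0
    for _ in range(trials):
        level = rng.random(r ** n) < p          # bottom-level A-voters
        while level.size > 1:
            groups = level.reshape(-1, r)
            level = groups.sum(axis=1) * 2 > r  # majority in each group
        wins += int(level[0])
    return wins / trials

# Small advantages at the bottom are strongly amplified toward the top.
for p in (0.45, 0.50, 0.55, 0.60):
    print(f"support {p:.2f} -> P(top president is A) = "
          f"{top_win_probability(p):.3f}")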
0
1
0
0
0
0
Strongly correlated double Dirac fermions
Double Dirac fermions have recently been identified as possible quasiparticles hosted by three-dimensional crystals with particular non-symmorphic point group symmetries. Applying a combined approach of ab-initio methods and dynamical mean field theory, we investigate how interactions and double Dirac band topology conspire to form the electronic quantum state of Bi$_2$CuO$_4$. We derive a downfolded eight-band model of the pristine material at low energies around the Fermi level. By tuning the model parameters from the free band structure to the realistic strongly correlated regime, we find a persistence of the double Dirac dispersion until its constituting time reversal symmetry is broken due to the onset of magnetic ordering at the Mott transition. We analyze pressure as a promising route to realize a double-Dirac metal in Bi$_2$CuO$_4$.
0
1
0
0
0
0
A Phase Variable Approach for Improved Volitional and Rhythmic Control of a Powered Knee-Ankle Prosthesis
Although there has been recent progress in control of multi-joint prosthetic legs for periodic tasks such as walking, volitional control of these systems for non-periodic maneuvers is still an open problem. In this paper, we develop a new controller that is capable of both periodic walking and common volitional leg motions based on a piecewise holonomic phase variable through a finite state machine. The phase variable is constructed by measuring the thigh angle, and the transitions in the finite state machine are formulated through sensing foot contact along with attributes of a nominal reference gait trajectory. The controller was implemented on a powered knee-ankle prosthesis and tested with a transfemoral amputee subject, who successfully performed a wide range of periodic and non-periodic tasks, including low- and high-speed walking, quick start and stop, backward walking, walking over obstacles, and kicking a soccer ball. Use of the powered leg resulted in significant reductions in amputee compensations including vaulting and hip circumduction when compared to use of the take-home passive leg. The proposed approach is expected to provide better understanding of volitional motions and lead to more reliable control of multi-joint prostheses for a wider range of tasks.
1
0
0
0
0
0
New estimates for the $n$th prime number
In this paper we establish new explicit upper and lower bounds for the $n$-th prime number, which improve the current best estimates given by Dusart in 2010. As the main tool we use some recently obtained explicit estimates for the prime counting function. A further main tool is the use of estimates concerning the reciprocal of $\log p_n$. As an application we derive refined estimates for $\vartheta(p_n)$ in terms of $n$, where $\vartheta(x)$ is Chebyshev's $\vartheta$-function.
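For orientation, the snippet below numerically checks classical explicit bounds of the type this paper refines (the well-known Rosser-type bounds, not the paper's new ones): $n\log n < p_n < n(\log n + \log\log n)$ for $n \geq 6$.

# Numeric sanity check of classical explicit bounds on the n-th prime
# (Rosser-type bounds, shown here only as background for the abstract above):
#   n*log(n) < p_n < n*(log(n) + log(log(n)))  for n >= 6.
from math import log
from sympy import prime

for n in (6, 100, 10_000, 100_000):
    pn = prime(n)                     # the n-th prime
    lower = n * log(n)
    upper = n * (log(n) + log(log(n)))
    assert lower < pn < upper
    print(f"n={n:>6}: {lower:.0f} < p_n={pn} < {upper:.0f}")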
0
0
1
0
0
0
Unbiased Markov chain Monte Carlo for intractable target distributions
Performing numerical integration when the integrand itself cannot be evaluated point-wise is a challenging task that arises in statistical analysis, notably in Bayesian inference for models with intractable likelihood functions. Markov chain Monte Carlo (MCMC) algorithms have been proposed for this setting, such as the pseudo-marginal method for latent variable models and the exchange algorithm for a class of undirected graphical models. As with any MCMC algorithm, the resulting estimators are justified asymptotically in the limit of the number of iterations, but exhibit a bias for any fixed number of iterations due to the Markov chains starting outside of stationarity. This "burn-in" bias is known to complicate the use of parallel processors for MCMC computations. We show how to use coupling techniques to generate unbiased estimators in finite time, building on recent advances for generic MCMC algorithms. We establish the theoretical validity of some of these procedures by extending existing results to cover the case of polynomially ergodic Markov chains. The efficiency of the proposed estimators is compared with that of standard MCMC estimators, with theoretical arguments and numerical experiments including state space models and Ising models.
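The coupling construction is easiest to see on a toy tractable target; the sketch below implements a lag-one coupled-chain estimator with a maximal coupling of random-walk Metropolis-Hastings proposals for a one-dimensional standard normal target. The target, step size, and burn-in k are illustrative assumptions, and the paper's actual setting (intractable likelihoods) would replace the Metropolis-Hastings kernel accordingly.

# Sketch of the unbiased-MCMC idea via coupled chains on a toy 1-D normal
# target with random-walk MH. Target, step size, and k are illustrative.
import numpy as np

rng = np.random.default_rng(3)
log_pi = lambda v: -0.5 * v * v              # unnormalized log N(0,1) target
SIG = 1.0                                    # proposal std

def norm_pdf(z, m):
    # Unnormalized density is fine: both proposals share the same constant.
    return np.exp(-0.5 * (z - m) ** 2 / SIG**2)

def max_coupling(mx, my):
    """Sample a maximal coupling of N(mx, SIG^2) and N(my, SIG^2)."""
    x = rng.normal(mx, SIG)
    if rng.uniform(0, norm_pdf(x, mx)) <= norm_pdf(x, my):
        return x, x                          # proposals coincide
    while True:
        y = rng.normal(my, SIG)
        if rng.uniform(0, norm_pdf(y, my)) > norm_pdf(y, mx):
            return x, y

def mh_step(x):
    p = rng.normal(x, SIG)
    return p if np.log(rng.random()) < log_pi(p) - log_pi(x) else x

def coupled_step(x, y):
    px, py = max_coupling(x, y)
    u = np.log(rng.random())                 # common uniform for both chains
    return (px if u < log_pi(px) - log_pi(x) else x,
            py if u < log_pi(py) - log_pi(y) else y)

def unbiased_estimate(h, k=20, max_iter=100_000):
    xs, ys = [rng.normal(5.0, 1.0)], [rng.normal(5.0, 1.0)]  # over-dispersed
    xs.append(mh_step(xs[0]))                # X leads Y by one step
    t = 1
    while xs[t] != ys[t - 1] and t < max_iter:
        xn, yn = coupled_step(xs[t], ys[t - 1])
        xs.append(xn); ys.append(yn); t += 1
    tau = t                                  # meeting time
    while len(xs) <= k:                      # ensure X_k exists if tau < k
        xs.append(mh_step(xs[-1]))
    # H_k = h(X_k) + sum_{t=k+1}^{tau-1} [h(X_t) - h(Y_{t-1})]
    return h(xs[k]) + sum(h(xs[t]) - h(ys[t - 1]) for t in range(k + 1, tau))

ests = [unbiased_estimate(lambda v: v) for _ in range(200)]
print(f"estimate of E[X] = {np.mean(ests):+.3f} "
      f"(se {np.std(ests) / np.sqrt(len(ests)):.3f})")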
0
0
0
1
0
0
An Observer for an Occluded Reaction-Diffusion System With Spatially Varying Parameters
Spatially dependent parameters of a two-component chaotic reaction-diffusion PDE model describing ocean ecology are observed by sampling a single species. We estimate the model parameters and the other species in the system by applying autosynchronization, in which quantities of interest are evolved according to the misfit between model and observations, to only partially observed data. Our motivating example comes from oceanic ecology as viewed by remote sensing data, where noisy occluded data are realized in the form of cloud cover. We demonstrate a method to learn a large-scale coupled synchronizing system that represents spatio-temporal dynamics and apply a network approach to analyze manifold stability.
0
0
1
0
0
0
D4M 3.0
The D4M tool is used by hundreds of researchers to perform complex analytics on unstructured data. Over the past few years, the D4M toolbox has evolved to support connectivity with a variety of database engines, graph analytics in the Apache Accumulo database, and an implementation using the Julia programming language. In this article, we describe some of our latest additions to the D4M toolbox and our upcoming D4M 3.0 release.
1
0
0
0
0
0
Human-in-the-Loop SLAM
Building large-scale, globally consistent maps is a challenging problem, made more difficult in environments with limited access, sparse features, or when using data collected by novice users. For such scenarios, where state-of-the-art mapping algorithms produce globally inconsistent maps, we introduce a systematic approach to incorporating sparse human corrections, which we term Human-in-the-Loop Simultaneous Localization and Mapping (HitL-SLAM). Given an initial factor graph for pose graph SLAM, HitL-SLAM accepts approximate, potentially erroneous, and rank-deficient human input, infers the intended correction via expectation maximization (EM), back-propagates the extracted corrections over the pose graph, and finally jointly optimizes the factor graph including the human inputs as human correction factor terms, to yield globally consistent large-scale maps. We thus contribute an EM formulation for inferring potentially rank-deficient human corrections to mapping, and human correction factor extensions to the factor graphs for pose graph SLAM that result in a principled approach to joint optimization of the pose graph while simultaneously accounting for multiple forms of human correction. We present empirical results showing the effectiveness of HitL-SLAM at generating globally accurate and consistent maps even when given poor initial estimates of the map.
1
0
0
0
0
0
What drives gravitational instability in nearby star-forming spirals? The impact of CO and HI velocity dispersions
The velocity dispersion of cold interstellar gas, sigma, is one of the quantities that most radically affect the onset of gravitational instabilities in galaxy discs, and the quantity that is most drastically approximated in stability analyses. Here we analyse the stability of a large sample of nearby star-forming spirals treating molecular gas, atomic gas and stars as three distinct components, and using radial profiles of sigma_CO and sigma_HI derived from HERACLES and THINGS observations. We show that the radial variations of sigma_CO and sigma_HI have a weak effect on the local stability level of galaxy discs, which remains remarkably flat and well above unity, but is low enough to ensure (marginal) instability against non-axisymmetric perturbations and gas dissipation. More importantly, the radial variation of sigma_CO has a strong impact on the size of the regions over which gravitational instabilities develop, and results in a characteristic instability scale that is one order of magnitude larger than the Toomre length of molecular gas. Disc instabilities are driven, in fact, by the self-gravity of stars at kpc scales. This is true across the entire optical disc of every galaxy in the sample, with few exceptions. In the linear phase of the disc instability process, stars and molecular gas are strongly coupled, and it is such a coupling that ultimately triggers local gravitational collapse/fragmentation in the molecular gas.
0
1
0
0
0
0
An improved belief propagation algorithm for detecting meso-scale structure in complex networks
The framework of statistical inference has been successfully used to detect meso-scale structures in complex networks, such as community structure and core-periphery (CP) structure. The main principle is that a stochastic block model (SBM) is fitted to the observed network, and the learnt parameters indicate the group assignment; the parameters of the model are often calculated via an expectation-maximization (EM) algorithm, with a belief propagation (BP) algorithm implemented to compute the decomposition itself. In the derivation of the BP algorithm, approximations were made by omitting the effects of a node's neighbors; these approximations do not hold if the network is dense or if some nodes have large degrees. As a result, the BP algorithm cannot reliably detect CP structure in networks, and may even yield wrong detections, because the nodal degrees in the core group are very large. To address this, we propose an improved BP algorithm that solves the problem in the original BP algorithm without increasing the computational complexity. Comparing the improved BP algorithm with the original BP algorithm on community detection and CP detection, we find that the two algorithms yield the same performance on community detection when the network is sparse; for community structure in dense networks or CP structure, our improved BP algorithm is much better and more stable. The improved BP algorithm may help us correctly partition different types of meso-scale structures in networks.
1
0
0
0
0
0
On Least Squares Linear Regression Without Second Moment
If X and Y are real valued random variables such that the first moments of X, Y, and XY exist and the conditional expectation of Y given X is an affine function of X, then the intercept and slope of the conditional expectation equal the intercept and slope of the least squares linear regression function, even though Y may not have a finite second moment. As a consequence, the affine-in-X form of the conditional expectation together with zero covariance implies mean independence.
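A quick numerical illustration of the claim, under assumed values of the intercept and slope: with E[Y|X] = a + bX and Student-t noise with 1.5 degrees of freedom (finite first moment, infinite variance), the least-squares line computed from first moments still approximately recovers (a, b), though convergence is slow without a second moment.

# Toy check: the least-squares line matches the conditional expectation
# E[Y|X] = a + bX even when Y has no finite second moment (t noise, df=1.5).
# The values of a and b are made up for illustration.
import numpy as np

rng = np.random.default_rng(4)
a, b, n = 1.0, 2.0, 1_000_000
x = rng.uniform(-1, 1, n)
y = a + b * x + rng.standard_t(df=1.5, size=n)   # infinite-variance noise

slope = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)
intercept = y.mean() - slope * x.mean()
print(f"slope ~ {slope:.3f} (true {b}), intercept ~ {intercept:.3f} (true {a})")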
0
0
1
1
0
0
Combinatorics of involutive divisions
The classical involutive division theory by Janet decomposes in the same way both the ideal and the escalier. The aim of this paper, following Janet's approach, is to discuss the combinatorial properties of involutive divisions, when defined on the set of all terms in a fixed degree D, postponing the discussion of ideal membership and related tests. We adapt the theory by Gerdt and Blinkov, introducing relative involutive divisions and then, given a complete description of the combinatorial structure of a relative involutive division, we turn our attention to the problem of membership. In order to deal with this problem, we introduce two graphs as tools: one is strictly related to Seiler's L-graph, whereas the second generalizes it, to cover the case of "non-continuous" (in the sense of Gerdt-Blinkov) relative involutive divisions. Indeed, given an element in the ideal (resp. escalier), walking backwards (resp. forward) in the graph, we can identify all the other generators of the ideal (resp. elements of degree D in the escalier).
0
0
1
0
0
0
Reducing asynchrony to synchronized rounds
Synchronous computation models simplify the design and the verification of fault-tolerant distributed systems. For efficiency reasons such systems are designed and implemented using an asynchronous semantics. In this paper, we bridge the gap between these two worlds. We introduce a (synchronous) round-based computational model and we prove a reduction for a class of asynchronous protocols to our new model. The reduction is based on properties of the code that can be checked with sequential methods. We apply the reduction to state machine replication systems, such as Paxos, Zab, and Viewstamped Replication.
1
0
0
0
0
0
A wearable general-purpose solution for Human-Swarm Interaction
Swarms of robots will revolutionize many industrial applications, from targeted material delivery to precision farming. Controlling the motion and behavior of these swarms presents unique challenges for human operators, who cannot yet effectively convey their high-level intentions to a group of robots in practice. This work proposes a new human-swarm interface based on novel wearable gesture-control and haptic-feedback devices. This work seeks to combine a wearable gesture recognition device that can detect high-level intentions, a portable device that can detect Cartesian information and finger movements, and a wearable advanced haptic device that can provide real-time feedback. This project is the first to envisage a wearable Human-Swarm Interaction (HSI) interface that separates the input and feedback components of the classical control loop (input, output, feedback), as well as being the first of its kind suitable for both indoor and outdoor environments.
1
0
0
0
0
0
Numerical simulation of oxidation processes in a cross-flow around tube bundles
An oxidation process is simulated for a bundle of metal tubes in a cross-flow. A fluid flow is governed by the incompressible Navier-Stokes equations. To describe the transport of oxygen, the corresponding convection-diffusion equation is applied. The key point of the model is related to the description of oxidation processes taking into account the growth of a thin oxide film in the quasi-stationary approximation. Mathematical modeling of oxidant transport in a tube bundle is carried out in the 2D approximation. The numerical algorithm employed in the work is based on the finite-element discretization in space and the fully implicit discretization in time. The tube rows of a bundle can be either in-line or staggered in the direction of the fluid flow velocity. The growth of the oxide film on tube walls is predicted for various bundle structures using the developed oxidation model.
1
1
0
0
0
0
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core GPUs, exploiting aggressive data-parallelism and delivering higher performances for streaming computing applications. In this scenario, code portability (and performance portability) become necessary for easy maintainability of applications; this is very relevant in scientific computing where code changes are very frequent, making it tedious and prone to error to keep different code versions aligned. In this work we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance-portability can be reached.
0
1
0
0
0
0
NodeTrix Planarity Testing with Small Clusters
We study the NodeTrix planarity testing problem for flat clustered graphs when the maximum size of each cluster is bounded by a constant $k$. We consider both the case when the sides of the matrices to which the edges are incident are fixed and the case when they can be arbitrarily chosen. We show that NodeTrix planarity testing with fixed sides can be solved in $O(k^{3k+\frac{3}{2}} n^3)$ time for every flat clustered graph that can be reduced to a partial 2-tree by collapsing its clusters into single vertices. In the general case, NodeTrix planarity testing with fixed sides can be solved in $O(n^3)$ time for $k = 2$, but it is NP-complete for any $k \geq 3$. NodeTrix planarity testing remains NP-complete also in the free side model when $k > 4$.
1
0
0
0
0
0
Exact results for directed random networks that grow by node duplication
We present exact analytical results for the degree distribution and for the distribution of shortest path lengths (DSPL) in a directed network model that grows by node duplication. Such models are useful in the study of the structure and growth dynamics of gene regulatory and scientific citation networks. Starting from an initial seed network, at each time step a random node, referred to as a mother node, is selected for duplication. Its daughter node is added to the network and duplicates, with probability p, each one of the outgoing links of the mother node. In addition, the daughter node forms a directed link to the mother node itself. Thus, the model is referred to as the corded directed-node-duplication (DND) model. We obtain analytical results for the in-degree distribution, $P_t(K_{in})$, and for the out-degree distribution, $P_t(K_{out})$, of the network at time t. It is found that the in-degrees follow a shifted power-law distribution, so the network is asymptotically scale free. In contrast, the out-degree distribution is a narrow distribution that converges to a Poisson distribution in the sparse limit and to a Gaussian distribution in the dense limit. Using these distributions we calculate the mean degree, $\langle K_{in} \rangle_t = \langle K_{out} \rangle_t$. To calculate the DSPL we derive a master equation for the time evolution of the probability $P_t(L=\ell)$, $\ell=1,2,\dots$, that for two nodes, i and j, selected randomly at time t, the shortest path from i to j is of length $\ell$. Solving the master equation, we obtain a closed form expression for $P_t(L=\ell)$. It is found that the DSPL at time t consists of a convolution of the initial DSPL, $P_0(L=\ell)$, with a Poisson distribution and a sum of Poisson distributions. The mean distance, $\langle L \rangle_t$, is found to depend logarithmically on the network size, $N_t$, namely the corded DND network is a small-world network.
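The growth rule is simple to simulate; the following pure-Python sketch grows a corded DND network (parameters are illustrative) and exhibits the heavy-tailed in-degrees and narrow out-degree distribution described above.

# Simulation sketch of the corded directed-node-duplication (DND) model:
# each new daughter node copies each outgoing link of its mother with
# probability p and always adds the "cord" link to the mother itself.
import random
from collections import Counter

random.seed(5)

def grow_dnd(p=0.3, steps=20_000):
    out_links = {0: set(), 1: {0}}          # seed: node 1 points to node 0
    for new in range(2, steps + 2):
        mother = random.randrange(new)       # pick a random existing node
        kept = {v for v in out_links[mother] if random.random() < p}
        kept.add(mother)                     # the cord to the mother
        out_links[new] = kept
    return out_links

g = grow_dnd()
in_deg = Counter(v for targets in g.values() for v in targets)
out_deg = [len(t) for t in g.values()]
print("mean out-degree:", sum(out_deg) / len(out_deg))  # narrow distribution
print("max in-degree:", max(in_deg.values()))           # heavy-tailed in-degrees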
1
0
0
0
0
0
Investigating the past history of EXors: the cases of V1118 Ori, V1143 Ori, and NY Ori
EXor objects are young variables that show episodic variations of brightness commonly associated with enhanced accretion outbursts. With the aim of investigating the long-term photometric behaviour of a few EXor sources, we present here data from the archival plates of the Asiago Observatory, showing the Orion field where the three EXors V1118, V1143, and NY are located. A total of 484 plates were investigated, providing a total of more than 1000 magnitudes for the three stars, covering a period of about 35 years, from 1959 to 1993. We then compared our data with literature data. Apart from a newly discovered flare-up of V1118, we identify the same outbursts already known, but we provide two added values: (i) a long-term sampling of the quiescence phase; and (ii) repeated multi-colour observations (BVRI bands). The former allows us to give a reliable characterisation of the quiescence, which represents a unique reference for studies that will analyze future outbursts and the physical changes induced by these events. The latter is useful for confirming whether the intermittent increases of brightness are accretion-driven (as in the case of V1118) or extinction-driven (as in the case of V1143). Accordingly, doubts arise about the V1143 classification as a pure EXor object. Finally, although our plates do not separate NY Ori and the star very close to it, they indicate that this EXor did not undergo any major outbursts during our 40 years of monitoring.
0
1
0
0
0
0
Some estimates for $θ$-type Calderón-Zygmund operators and linear commutators on certain weighted amalgam spaces
In this paper, we first introduce some new kinds of weighted amalgam spaces. Then we discuss the strong type and weak type estimates for a class of Calderón--Zygmund type operators $T_\theta$ in these new weighted spaces. Furthermore, the strong type estimate and endpoint estimate of linear commutators $[b,T_{\theta}]$ formed by $b$ and $T_{\theta}$ are established. Also we study related problems about two-weight, weak type inequalities for $T_{\theta}$ and $[b,T_{\theta}]$ in the weighted amalgam spaces and give some results.
0
0
1
0
0
0
A Cluster Elastic Net for Multivariate Regression
We propose a method for estimating coefficients in multivariate regression when there is a clustering structure to the response variables. The proposed method includes a fusion penalty, to shrink the difference in fitted values from responses in the same cluster, and an L1 penalty for simultaneous variable selection and estimation. The method can be used when the grouping structure of the response variables is known or unknown. When the clustering structure is unknown the method will simultaneously estimate the clusters of the response and the regression coefficients. Theoretical results are presented for the penalized least squares case, including asymptotic results allowing for p >> n. We extend our method to the setting where the responses are binomial variables. We propose a coordinate descent algorithm for both the normal and binomial likelihood, which can easily be extended to other generalized linear model (GLM) settings. Simulations and data examples from business operations and genomics are presented to show the merits of both the least squares and binomial methods.
0
0
0
1
0
0
Achieving robust and high-fidelity quantum control via spectral phase optimization
Achieving high-fidelity control of quantum systems is of fundamental importance in physics, chemistry and quantum information sciences. However, the successful implementation of a high-fidelity quantum control scheme also requires robustness against control field fluctuations. Here, we demonstrate a robust optimization method for control of quantum systems by optimizing the spectral phase of an ultrafast laser pulse, which is accomplished in the framework of frequency domain quantum optimal control theory. By incorporating a filtering function of frequency into the optimization algorithm, our numerical simulations in an abstract two-level quantum system as well as in three-level atomic rubidium show that the optimization procedure can be enforced to search for optimal solutions while achieving remarkable robustness against control field fluctuations, providing an efficient approach to optimizing the spectral phase of an ultrafast laser pulse to achieve a desired final quantum state of the system.
0
1
0
0
0
0
Help Me Find a Job: A Graph-based Approach for Job Recommendation at Scale
Online job boards are one of the central components of the modern recruitment industry. With millions of candidates browsing through job postings every day, the need for accurate, effective, meaningful, and transparent job recommendations is more apparent than ever. While recommendation systems are successfully advancing in a variety of online domains by creating social and commercial value, the job recommendation domain is less explored. Existing systems are mostly focused on content analysis of resumes and job descriptions, relying heavily on the accuracy and coverage of the semantic analysis and modeling of the content; as a result, they usually end up suffering from rigidity and from missing the implicit semantic relations that can be uncovered from users' behavior and captured by Collaborative Filtering (CF) methods. The few works which utilize CF do not address the scalability challenges of real-world systems or the problem of cold-start. In this paper, we propose a scalable item-based recommendation system for online job recommendations. Our approach overcomes the major challenges of sparsity and scalability by leveraging a directed graph of jobs connected by multi-edges representing various behavioral and contextual similarity signals. The short-lived nature of the items (jobs) in the system and the rapid rate at which new users and jobs enter the system make cold-start a serious problem hindering CF methods. We address this problem by harnessing the power of deep learning in addition to user behavior to serve hybrid recommendations. Our technique has been leveraged by CareerBuilder.com, one of the largest job boards in the world, to generate high-quality recommendations for millions of users.
1
0
0
0
0
0
Bundle Optimization for Multi-aspect Embedding
Understanding semantic similarity among images is the core of a wide range of computer vision applications. An important step towards this goal is to collect and learn human perceptions. Interestingly, the semantic context of images is often ambiguous as images can be perceived with emphasis on different aspects, which may be contradictory to each other. In this paper, we present a method for learning the semantic similarity among images, inferring their latent aspects and embedding them into multi-spaces corresponding to their semantic aspects. We consider the multi-embedding problem as an optimization function that evaluates the embedded distances with respect to the qualitative clustering queries. The key idea of our approach is to collect and embed qualitative measures that share the same aspects in bundles. To ensure similarity aspect sharing among multiple measures, image classification queries are presented to, and solved by users. The collected image clusters are then converted into bundles of tuples, which are fed into our bundle optimization algorithm that jointly infers the aspect similarity and multi-aspect embedding. Extensive experimental results show that our approach significantly outperforms state-of-the-art multi-embedding approaches on various datasets, and scales well for large multi-aspect similarity measures.
1
0
0
0
0
0
Glitch Classification and Clustering for LIGO with Deep Transfer Learning
The detection of gravitational waves with LIGO and Virgo requires a detailed understanding of the response of these instruments in the presence of environmental and instrumental noise. Of particular interest is the study of anomalous non-Gaussian noise transients known as glitches, since their high occurrence rate in LIGO/Virgo data can obscure or even mimic true gravitational wave signals. Therefore, successfully identifying and excising glitches is of utmost importance to detect and characterize gravitational waves. In this article, we present the first application of Deep Learning combined with Transfer Learning for glitch classification, using real data from LIGO's first discovery campaign labeled by Gravity Spy, showing that knowledge from pre-trained models for real-world object recognition can be transferred for classifying spectrograms of glitches. We demonstrate that this method enables the optimal use of very deep convolutional neural networks for glitch classification given small unbalanced training datasets, significantly reduces the training time, and achieves state-of-the-art accuracy above 98.8%. Once trained via transfer learning, we show that the networks can be truncated and used as feature extractors for unsupervised clustering to automatically group together new classes of glitches and anomalies. This novel capability is of critical importance to identify and remove new types of glitches which will occur as the LIGO/Virgo detectors gradually attain design sensitivity.
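A minimal sketch of the "truncate and cluster" step described above, using ResNet-18 as a stand-in for the paper's pretrained networks and random tensors as stand-ins for glitch spectrograms:

# Sketch: use a pretrained CNN as a feature extractor (truncated head) and
# cluster the features with k-means, as in the unsupervised grouping step.
# ResNet-18 and the random inputs are illustrative stand-ins.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Identity()        # truncate the classification head
net.eval()

with torch.no_grad():
    spectrograms = torch.randn(32, 3, 224, 224)   # stand-ins for glitch images
    feats = net(spectrograms).numpy()             # (32, 512) feature vectors

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print(labels)                       # candidate glitch groupings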
1
1
0
1
0
0
Provable Smoothness Guarantees for Black-Box Variational Inference
Black-box variational inference tries to approximate a complex target distribution through a gradient-based optimization of the parameters of a simpler distribution. Provable convergence guarantees require structural properties of the objective. This paper shows that for location-scale family approximations, if the target is M-Lipschitz smooth, then so is the objective, if the entropy is excluded. The key proof idea is to describe gradients in a certain inner-product space, thus permitting use of Bessel's inequality. This result gives insight into how to parameterize distributions, gives bounds on the location of the optimal parameters, and is a key ingredient for convergence guarantees.
1
0
0
1
0
0
A k-means procedure based on a Mahalanobis type distance for clustering multivariate functional data
This paper proposes a clustering procedure for samples of multivariate functions in $(L^2(I))^{J}$, with $J\geq1$. This method is based on a k-means algorithm in which the distance between the curves is measured with a metric that generalizes the Mahalanobis distance in Hilbert spaces, considering the correlation and the variability along all the components of the functional data. The proposed procedure has been studied in simulation and compared with the k-means based on other distances typically adopted for clustering multivariate functional data. In these simulations, it is shown that the k-means algorithm with the generalized Mahalanobis distance provides the best clustering performances, both in terms of mean and standard deviation of the number of misclassified curves. Finally, the proposed method has been applied to two real case studies, concerning ECG signals and growth curves, where the results obtained in simulation are confirmed and strengthened.
0
0
0
1
0
0
Tidal Dissipation in WASP-12
WASP-12 is a hot Jupiter system with an orbital period of $P= 1.1\textrm{ day}$, making it one of the shortest-period giant planets known. Recent transit timing observations by Maciejewski et al. (2016) and Patra et al. (2017) find a decreasing period with $P/|\dot{P}| = 3.2\textrm{ Myr}$. This has been interpreted as evidence of either orbital decay due to tidal dissipation or a long term oscillation of the apparent period due to apsidal precession. Here we consider the possibility that it is orbital decay. We show that the parameters of the host star are consistent with either a $M_\ast \simeq 1.3 M_\odot$ main sequence star or a $M_\ast \simeq 1.2 M_\odot$ subgiant. We find that if the star is on the main sequence, the tidal dissipation is too inefficient to explain the observed $\dot{P}$. However, if it is a subgiant, the tidal dissipation is significantly enhanced due to nonlinear wave breaking of the dynamical tide near the star's center. The subgiant models have a tidal quality factor $Q_\ast'\simeq 2\times10^5$ and an orbital decay rate that agrees well with the observed $\dot{P}$. It would also explain why the planet survived for $\simeq 3\textrm{ Gyr}$ while the star was on the main sequence and yet is now inspiraling on a 3 Myr timescale. Although this suggests that we are witnessing the last $\sim 0.1\%$ of the planet's life, the probability of such a detection is a few percent given the observed sample of $\simeq 30$ hot Jupiters in $P<3\textrm{ day}$ orbits around $M_\ast>1.2 M_\odot$ hosts.
0
1
0
0
0
0
Label Sanitization against Label Flipping Poisoning Attacks
Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data in the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective at significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
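One simple instantiation of such a detect-and-relabel mechanism is a k-nearest-neighbor relabeling rule, sketched below as an illustration (the paper's exact detection mechanism may differ): a training point whose label disagrees with a strong majority of its k nearest neighbors is flagged and relabeled.

# Illustrative label-sanitization sketch (a kNN relabeling rule; not
# necessarily the paper's exact mechanism): relabel points whose label
# disagrees with a strong majority of their k nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize(X, y, k=10, eta=0.8):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    y_clean = y.copy()
    for i in range(len(y)):
        frac = y[idx[i, 1:]].mean()          # fraction of label-1 neighbors
        if frac >= eta and y[i] == 0:        # strong majority disagrees
            y_clean[i] = 1
        elif frac <= 1 - eta and y[i] == 1:
            y_clean[i] = 0
    return y_clean

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
flip = rng.choice(400, 40, replace=False)    # 10% label-flipping attack
y_poisoned = y.copy(); y_poisoned[flip] ^= 1

recovered = sanitize(X, y_poisoned)
print("wrong labels before:", (y_poisoned != y).sum(),
      "after:", (recovered != y).sum())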
0
0
0
1
0
0
DeepCCI: End-to-end Deep Learning for Chemical-Chemical Interaction Prediction
Chemical-chemical interaction (CCI) plays a key role in predicting candidate drugs, toxicity, therapeutic effects, and biological functions. In various types of chemical analyses, computational approaches are often required due to the amount of data that needs to be handled. The recent remarkable growth and outstanding performance of deep learning have attracted considerable research attention. However, even in state-of-the-art drug analysis methods, deep learning continues to be used only as a classifier, although deep learning is capable of not only simple classification but also automated feature extraction. In this paper, we propose the first end-to-end learning method for CCI, named DeepCCI. Hidden features are derived from a simplified molecular input line entry system (SMILES), which is a string notation representing the chemical structure, instead of learning from crafted features. To discover hidden representations for the SMILES strings, we use convolutional neural networks (CNNs). To guarantee the commutative property for homogeneous interaction, we apply model sharing and hidden representation merging techniques. The performance of DeepCCI was compared with a plain deep classifier and conventional machine learning methods. The proposed DeepCCI showed the best performance in all seven evaluation metrics used. In addition, the commutative property was experimentally validated. The automatically extracted features through end-to-end SMILES learning alleviate the significant effort required for manual feature engineering. It is expected to improve prediction performance in drug analyses.
1
0
0
0
0
0
Intervals between numbers that are sums of two squares
In this paper, we improve the moment estimates for the gaps between numbers that can be represented as a sum of two squares of integers. We consider a certain sum of Bessel functions and prove an upper bound for its weighted mean value. This bound provides estimates for the $\gamma$-th moments of gaps for all $\gamma\leq 2$.
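The objects in question are easy to explore numerically; the snippet below lists the numbers up to N that are sums of two squares and computes empirical $\gamma$-th moments of the gaps (N and the exponents are illustrative).

# Quick numeric look at the gaps studied above: sieve the numbers up to N
# that are sums of two integer squares and examine the gaps between them.
import numpy as np

N = 1_000_000
is_s2 = np.zeros(N + 1, dtype=bool)
for a in range(int(N**0.5) + 1):
    b2 = np.arange(0, int((N - a * a) ** 0.5) + 1) ** 2
    is_s2[a * a + b2] = True             # mark a^2 + b^2 <= N

s2 = np.flatnonzero(is_s2)
gaps = np.diff(s2)
for gamma in (0.5, 1.0, 2.0):
    print(f"gamma={gamma}: mean gap^gamma = "
          f"{np.mean(gaps.astype(float) ** gamma):.3f}")
print("max gap below N:", gaps.max())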
0
0
1
0
0
0
On a family of Caldero-Chapoton algebras that have the Laurent phenomenon
We realize a family of generalized cluster algebras as Caldero-Chapoton algebras of quivers with relations. Each member of this family arises from an unpunctured polygon with one orbifold point of order 3, and is realized as a Caldero-Chapoton algebra of a quiver with relations naturally associated to any triangulation of the alluded polygon. The realization is done by defining for every arc $j$ on the polygon with orbifold point a representation $M(j)$ of the referred quiver with relations, and by proving that for every triangulation $\tau$ and every arc $j\in\tau$, the product of the Caldero-Chapoton functions of $M(j)$ and $M(j')$, where $j'$ is the arc that replaces $j$ when we flip $j$ in $\tau$, equals the corresponding exchange polynomial of Chekhov-Shapiro in the generalized cluster algebra. Furthermore, we show that there is a bijection between the set of generalized cluster variables and the isomorphism classes of $E$-rigid indecomposable decorated representations of $\Lambda$.
0
0
1
0
0
0
Ricean K-factor Estimation based on Channel Quality Indicator in OFDM Systems using Neural Network
The Ricean channel model is widely used in wireless communications to characterize channels with a line-of-sight path. The Ricean K factor, defined as the ratio of the direct path power to the power of the scattered paths, provides a good indication of the link quality. Most existing works estimate the K factor based on either the maximum-likelihood criterion or higher-order moments, and they target K-factor estimation at the receiver side. In this work, a novel approach is proposed: cast as a classification problem, the estimation of the K factor by a neural network provides high accuracy. Moreover, the proposed K-factor estimation is done at the transmitter side for transmit processing, thus saving the limited feedback bandwidth.
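For reference, the snippet below implements the classical moment-based K-factor estimator that such higher-order-moment approaches rely on, using the identities $E[R^2] = P + N$ and $E[R^4] = P^2 + 4PN + 2N^2$ for a Rician envelope with line-of-sight power P and scatter power N; the simulation parameters are illustrative.

# Classical moment-based K-factor estimator from Rician envelope samples:
#   E[R^2] = P + N,  E[R^4] = P^2 + 4PN + 2N^2
#   =>  P = sqrt(2*E[R^2]^2 - E[R^4]),  K = P / (E[R^2] - P).
import numpy as np

rng = np.random.default_rng(7)

def rician_envelope(K, n, omega=1.0):
    p = omega * K / (K + 1)                  # line-of-sight power
    s2 = omega / (2 * (K + 1))               # per-dimension scatter variance
    z = (np.sqrt(p) + rng.normal(0, np.sqrt(s2), n)) \
        + 1j * rng.normal(0, np.sqrt(s2), n)
    return np.abs(z)

def k_moment_estimate(r):
    m2, m4 = np.mean(r**2), np.mean(r**4)
    p = np.sqrt(max(2 * m2**2 - m4, 0.0))
    return p / (m2 - p)

for K_true in (1, 5, 10):
    r = rician_envelope(K_true, 100_000)
    print(f"K={K_true}: moment estimate {k_moment_estimate(r):.2f}")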
0
0
0
1
0
0
Augmented Lagrangian two-stage algorithm for LP and SCQP
In this paper, we consider a framework of projected gradient iterations for linear programming (LP) and an augmented Lagrangian two-stage algorithm for strongly convex quadratic programming (SCQP). Based on the framework of projected gradients, the LP problem is transformed into a finite number of SCQP problems. Furthermore, we give an estimate of the number of SCQP problems. We use the augmented Lagrangian method (ALM) to solve SCQP, and each augmented Lagrangian subproblem is solved exactly by a two-stage algorithm, which ensures the superlinear convergence of the ALM for SCQP. The two-stage algorithm consists of the accelerated proximal gradient algorithm as the first-stage algorithm, which provides an approximate solution, and a simplified parametric active-set method as the second-stage algorithm, which gives an exact solution. Moreover, we improve the parametric active-set method by introducing a sorting technique to update the Cholesky factorization. Finally, numerical experiments on randomly generated and real-world test problems indicate that our algorithm is effective, especially for random problems.
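As an illustration of the first-stage solver, the sketch below runs an accelerated projected (proximal) gradient method on a randomly generated strongly convex QP with box constraints and checks the approximate KKT conditions; the problem data and iteration count are illustrative assumptions, and the second-stage active-set refinement is not shown.

# Sketch of an accelerated projected gradient (FISTA-style) first stage for
#   min 0.5 x'Qx + c'x  s.t.  lo <= x <= hi,
# on randomly generated data; the exact second-stage step is omitted.
import numpy as np

rng = np.random.default_rng(8)
n = 50
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)            # strongly convex Hessian
c = rng.normal(size=n)
lo, hi = -np.ones(n), np.ones(n)

L = np.linalg.eigvalsh(Q)[-1]          # Lipschitz constant of the gradient
proj = lambda z: np.clip(z, lo, hi)    # projection onto the box

x = np.zeros(n)
x_prev, t = x.copy(), 1.0
for _ in range(500):                   # accelerated iterations
    t_next = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
    y = x + ((t - 1) / t_next) * (x - x_prev)
    x_prev, t = x, t_next
    x = proj(y - (Q @ y + c) / L)

grad = Q @ x + c                       # check approximate KKT conditions:
kkt = np.where(np.isclose(x, lo), np.minimum(grad, 0),   # at lo: grad >= 0
               np.where(np.isclose(x, hi), np.maximum(grad, 0), grad))
print("KKT residual:", np.linalg.norm(kkt))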
0
0
1
0
0
0
Exact relations between homoclinic and periodic orbit actions in chaotic systems
Homoclinic and unstable periodic orbits in chaotic systems play central roles in various semiclassical sum rules. The interferences between terms are governed by the action functions and Maslov indices. In this article, we identify geometric relations between homoclinic and unstable periodic orbits, and derive exact formulae expressing the periodic orbit classical actions in terms of corresponding homoclinic orbit actions plus certain phase space areas. The exact relations provide a basis for approximations of the periodic orbit actions as action differences between homoclinic orbits with well-estimated errors. This makes possible the explicit study of relations between periodic orbits, which results in an analytic expression for the action differences between long periodic orbits and their shadowing decomposed orbits in the cycle expansion.
0
1
0
0
0
0
Subexponentially growing Hilbert space and nonconcentrating distributions in a constrained spin model
Motivated by recent experiments with two-component Bose-Einstein condensates, we study fully-connected spin models subject to an additional constraint. The constraint is responsible for the Hilbert space dimension to scale only linearly with the system size. We discuss the unconventional statistical physical and thermodynamic properties of such a system, in particular the absence of concentration of the underlying probability distributions. As a consequence, expectation values are less suitable to characterize such systems, and full distribution functions are required instead. Sharp signatures of phase transitions do not occur in such a setting, but transitions from singly peaked to doubly peaked distribution functions of an "order parameter" may be present.
0
1
0
0
0
0