Dataset schema: each record below consists of a paper title, its abstract, and six binary subject labels.
- title: string (7 to 239 characters)
- abstract: string (7 to 2.76k characters)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)
K-closedness of weighted Hardy spaces on the two-dimensional torus
It is proved that, under certain restrictions on the weights, a pair of weighted Hardy spaces on the two-dimensional torus is K-closed in the pair of the corresponding weighted Lebesgue spaces. Until now, K-closedness of Hardy spaces on the two-dimensional torus had been considered either in the unweighted case or for weights that split into a product of two functions of one variable (so-called "split weights"). Here the case of certain nonsplit weights is studied.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Self-paced Convolutional Neural Network for Computer Aided Detection in Medical Imaging Analysis
Tissue characterization has long been an important component of Computer Aided Diagnosis (CAD) systems for automatic lesion detection and further clinical planning. Motivated by the superior performance of deep learning methods on various computer vision problems, there has been increasing work applying deep learning to medical image analysis. However, developing a robust and reliable deep learning model for computer-aided diagnosis remains highly challenging, due to the combination of high heterogeneity in medical images and a relative lack of training samples. In particular, annotating and labeling medical images is much more expensive and time-consuming than in other applications, and often involves manual labor from multiple domain experts. In this work, we propose a multi-stage, self-paced learning framework utilizing a convolutional neural network (CNN) to classify Computed Tomography (CT) image patches. The key contribution of this approach is that we augment the size of the training set by refining the unlabeled instances with a self-paced learning CNN. Implementing the framework on high-performance computing servers, including the NVIDIA DGX-1 machine, we obtained experimental results showing that the self-paced boosted network consistently outperformed the original network even with very scarce manual labels. The performance gain indicates that applications with limited training samples, such as medical image analysis, can benefit from the proposed framework (a simplified sketch of the self-paced pseudo-labeling loop follows this record).
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
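A minimal sketch of the self-paced pseudo-labeling idea referenced above, with scikit-learn logistic regression standing in for the paper's CNN; the dataset, staged confidence thresholds, and sample sizes are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Self-paced-style loop: each stage pseudo-labels the unlabeled samples
    # the current model finds "easiest" (most confident), lowering the bar.
    X, y = make_classification(n_samples=600, random_state=0)
    labeled, unlabeled = np.arange(50), np.arange(50, 600)

    X_tr, y_tr = X[labeled], y[labeled]
    for confidence in (0.95, 0.9, 0.8):           # easy examples first
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        proba = clf.predict_proba(X[unlabeled]).max(axis=1)
        take = unlabeled[proba >= confidence]      # confident pseudo-labels
        X_tr = np.vstack([X_tr, X[take]])
        y_tr = np.concatenate([y_tr, clf.predict(X[take])])
        unlabeled = unlabeled[proba < confidence]
    print(f"final training set: {len(y_tr)} samples")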
PS-DBSCAN: An Efficient Parallel DBSCAN Algorithm Based on Platform Of AI (PAI)
We present PS-DBSCAN, a communication-efficient parallel DBSCAN algorithm that combines the disjoint-set data structure and the Parameter Server framework in the Platform of AI (PAI). Since data points within the same cluster may be distributed over different workers, which results in several disjoint-sets, merging them incurs large communication costs. In our algorithm, we employ a fast global union approach to merge the disjoint-sets and alleviate the communication burden (a minimal union-find sketch follows this record). Experiments over datasets of different scales demonstrate that PS-DBSCAN outperforms PDSDBSCAN with a 2-10x speedup in communication efficiency. We have released PS-DBSCAN in an algorithm platform called Platform of AI (PAI - this https URL) in Alibaba Cloud. We have also demonstrated how to use the method in PAI.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
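For illustration, a minimal disjoint-set (union-find) structure of the kind the abstract refers to; the per-worker merge lists are hypothetical, and PS-DBSCAN's actual distributed Parameter-Server implementation is not reproduced here.

    # Disjoint-set with path halving (a form of path compression) and union
    # by rank; PS-DBSCAN merges such sets across workers to unify clusters.
    class DisjointSet:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

    # Two workers discovered overlapping pieces of the same cluster:
    ds = DisjointSet(6)
    for a, b in [(0, 1), (1, 2)]:   # worker A's merges
        ds.union(a, b)
    for a, b in [(2, 3), (4, 5)]:   # worker B's merges
        ds.union(a, b)
    print({i: ds.find(i) for i in range(6)})  # points 0-3 share one root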
Non-singular spacetimes with a negative cosmological constant: IV. Stationary black hole solutions with matter fields
We use an elliptic system of equations with complex coefficients for a set of complex-valued tensor fields as a tool to construct infinite-dimensional families of non-singular stationary black holes, real-valued Lorentzian solutions of the Einstein-Maxwell-dilaton-scalar fields-Yang-Mills-Higgs-Chern-Simons-$f(R)$ equations with a negative cosmological constant. The families include an infinite-dimensional family of solutions with the usual AdS conformal structure at conformal infinity.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Non-Negative Matrix Factorization Test Cases
Non-negative matrix factorization (NMF) is a problem with many applications, ranging from facial recognition to document clustering. However, due to the variety of algorithms that solve NMF, the randomness involved in these algorithms, and the somewhat subjective nature of the problem, there is no clear "correct answer" to any particular NMF problem, and as a result, it can be hard to test new algorithms. This paper suggests test cases for NMF algorithms derived from matrices with enumerable exact non-negative factorizations and from perturbations of these matrices. Three algorithms using widely divergent approaches to NMF all give similar solutions over these test cases, suggesting that the test cases could be used for implementations of these existing NMF algorithms as well as for potentially new ones (a small worked test case follows this record). This paper also describes how the proposed test cases could be used in practice.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
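A tiny test case in the spirit described above, with scikit-learn's NMF solver assumed as one candidate implementation: the matrix is built from a known exact rank-2 non-negative factorization, so a correct algorithm should reach near-zero reconstruction error.

    import numpy as np
    from sklearn.decomposition import NMF

    # Build a matrix with a known exact rank-2 non-negative factorization.
    W_true = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
    H_true = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]])
    X = W_true @ H_true

    model = NMF(n_components=2, init="nndsvd", max_iter=2000, tol=1e-10)
    W = model.fit_transform(X)
    H = model.components_
    print(np.linalg.norm(X - W @ H))  # should be close to 0 for a correct solver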
Inequalities for the fundamental Robin eigenvalue of the Laplacian for box-shaped domains
This document consists of two papers, both submitted, and supplementary material. The submitted papers are given here as Parts I and II. Part I establishes results, used in Part II, on functions and inverses that are positive, decreasing and convex. Part II uses the results from Part I to establish inequalities for the fundamental Robin eigenvalue of the Laplacian on N-dimensional boxes.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Non-Hamiltonian isotopic Lagrangians on the one-point blow-up of CP^2
We show that two Hamiltonian isotopic Lagrangians in $(\mathbb{CP}^2,\omega_{\mathrm{FS}})$ induce two Lagrangian submanifolds in the one-point blow-up $(\widetilde{\mathbb{CP}}^2,\widetilde{\omega}_\rho)$ that are not Hamiltonian isotopic. Furthermore, we show that for any integer $k>1$ there are $k$ Hamiltonian isotopic Lagrangians in $(\mathbb{CP}^2,\omega_{\mathrm{FS}})$ that induce $k$ Lagrangian submanifolds in the one-point blow-up such that no two of them are Hamiltonian isotopic.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Berezinskii-Kosterlitz-Thouless Type Scenario in Molecular Spin Liquid $A$Cr$_2$O$_4$
The spin relaxation in chromium spinel oxides $A$Cr$_{2}$O$_{4}$ ($A=$ Mg, Zn, Cd) is investigated in the paramagnetic regime by electron spin resonance (ESR). The temperature dependence of the ESR linewidth indicates an unconventional spin-relaxation behavior, similar to spin-spin relaxation in the two-dimensional (2D) chromium-oxide triangular lattice antiferromagnets. The data can be described in terms of a generalized Berezinskii-Kosterlitz-Thouless (BKT) type scenario for 2D systems with additional internal symmetries. Based on the characteristic exponents obtained from the evaluation of the ESR linewidth, short-range order with a hidden internal symmetry is suggested.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Continuum percolation theory of epimorphic regeneration
A biophysical model of epimorphic regeneration based on a continuum percolation process of fully penetrable disks in two dimensions is proposed. All cells within a randomly chosen disk of the regenerating organism are assumed to receive a signal in the form of a circular wave as a result of the action/reconfiguration of neoblasts and neoblast-derived mesenchymal cells in the blastema. These signals trigger the growth of the organism, whose cells read, on a faster time scale, the electric polarization state responsible for their differentiation and the resulting morphology. In the long time limit, the process leads to a morphological attractor that depends on experimentally accessible control parameters governing the blockage of cellular gap junctions and, therefore, the connectivity of the multicellular ensemble. When this connectivity is weakened, positional information is degraded leading to more symmetrical structures. This general theory is applied to the specifics of planaria regeneration. Computations and asymptotic analyses made with the model show that it correctly describes a significant subset of the most prominent experimental observations, notably anterior-posterior polarization (and its loss) or the formation of four-headed planaria.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multiplicity of solutions for a nonhomogeneous quasilinear elliptic problem with critical growth
We establish some existence and multiplicity results for a quasilinear elliptic problem driven by the $\Phi$-Laplacian operator. One of these solutions is built as a ground state solution. In order to prove our main results we apply the Nehari method combined with the concentration-compactness theorem in an Orlicz-Sobolev framework. One of the difficulties in dealing with this kind of operator is the loss of homogeneity properties.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Mahonian STAT on rearrangement class of words
In 2000, Babson and Steingrímsson generalized the notion of permutation patterns to the so-called vincular patterns, and they showed that many Mahonian statistics can be expressed as sums of vincular pattern occurrence statistics. STAT is one such Mahonian statistic discovered by them. In 2016, Kitaev and the third author introduced an analogue of STAT for words and proved a joint equidistribution result involving two sextuple statistics on the whole set of words with fixed length and alphabet. Moreover, their computer experiments hinted at a finer involution on $R(w)$, the rearrangement class of a given word $w$. We construct such an involution in this paper, which yields a comparable joint equidistribution between two sextuple statistics over $R(w)$. Our involution builds on Burstein's involution and Foata-Schützenberger's involution, which utilizes the celebrated RSK algorithm.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A bibliometric approach to Systematic Mapping Studies: The case of the evolution and perspectives of community detection in complex networks
Critical analysis of the state of the art is a necessary task when identifying new research lines worth pursuing. To that end, all the available work related to the field of interest must be taken into account. The key point is how to organize, analyze, and make sense of the huge amount of scientific literature available today on any topic. To tackle this problem, we present a bibliometric approach to Systematic Mapping Studies (SMS): a modified SMS protocol that relies on the metadata of scientific references to extract, process and interpret the wealth of information contained in today's research literature. As a test case, the procedure is applied to determine the current state and perspectives of community detection in complex networks. Our results show that community detection is still an active, developing field, far from exhausted. In addition, we find that, by far, the most exploited methods are those related to determining hierarchical community structures. On the other hand, the results show that fuzzy clustering techniques, despite their interest, are underdeveloped, as is the adaptation of existing algorithms to parallel or, more specifically, distributed computational systems.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
SPEW: Synthetic Populations and Ecosystems of the World
Agent-based models (ABMs) simulate interactions between autonomous agents in constrained environments over time. ABMs are often used for modeling the spread of infectious diseases. In order to simulate disease outbreaks or other phenomena, ABMs rely on "synthetic ecosystems," or information about agents and their environments that is representative of the real world. Previous approaches for generating synthetic ecosystems have some limitations: they are not open-source, cannot be adapted to new or updated input data sources, and do not allow for alternative methods for sampling agent characteristics and locations. We introduce a general framework for generating Synthetic Populations and Ecosystems of the World (SPEW), implemented as an open-source R package. SPEW allows researchers to choose from a variety of sampling methods for agent characteristics and locations when generating synthetic ecosystems for any geographic region. SPEW can produce synthetic ecosystems for any agent (e.g. humans, mosquitoes, etc), provided that appropriate data is available. We analyze the accuracy and computational efficiency of SPEW given different sampling methods for agent characteristics and locations and provide a suite of diagnostics to screen our synthetic ecosystems. SPEW has generated over five billion human agents across approximately 100,000 geographic regions in about 70 countries, available online.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Bayesian hierarchical weighting adjustment and survey inference
We combine Bayesian prediction and weighted inference as a unified approach to survey inference. The general principles of Bayesian analysis imply that models for survey outcomes should be conditional on all variables that affect the probability of inclusion. We incorporate the weighting variables under the framework of multilevel regression and poststratification, as a byproduct generating model-based weights after smoothing (a toy poststratification step is sketched after this record). We investigate deep interactions and introduce structured prior distributions for smoothing and stability of estimates. The computation is done via Stan and implemented in the open source R package "rstanarm", ready for public use. Simulation studies illustrate that model-based prediction and weighting inference outperform classical weighting. We apply the proposal to the New York Longitudinal Study of Wellbeing. The new approach generates robust weights and increases efficiency for finite population inference, especially for subsets of the population.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
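A toy illustration of the poststratification step in the framework above: cell-level predictions, which in the paper would come from the multilevel model fit via Stan/rstanarm, are averaged with known population cell counts as weights. All names and numbers here are made up for illustration.

    import numpy as np
    import pandas as pd

    # Hypothetical poststratification cells with model predictions E[y | cell]
    # and census cell sizes; the population estimate is the weighted average.
    cells = pd.DataFrame({
        "age":       ["18-34", "18-34", "35+", "35+"],
        "education": ["hs", "college", "hs", "college"],
        "pred_mean": [0.42, 0.55, 0.48, 0.61],   # from the multilevel model
        "pop_count": [900, 600, 1200, 1300],     # known population counts
    })
    estimate = np.average(cells["pred_mean"], weights=cells["pop_count"])
    print(f"poststratified population estimate: {estimate:.3f}")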
Comparison of the h-index for Different Fields of Research Using Bootstrap Methodology
An important disadvantage of the h-index is that it typically cannot take into account the specific field of research of a researcher. Usually, sample point estimates of the average and median h-index values for the various fields are reported; these are highly variable and dependent on the specific samples, and it would be useful to provide confidence intervals instead. In this paper we apply the non-parametric bootstrap technique to construct confidence intervals for the h-index for different fields of research. In this way no specific assumptions about the distribution of the empirical h-index are required, nor large samples, since the methodology is based on resampling from the initial sample (a minimal percentile-bootstrap sketch follows this record). The results of the analysis show important differences between the various fields. The performance of the bootstrap intervals for the mean and median h-index for most fields seems rather satisfactory, as revealed by the simulation performed.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
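A minimal sketch of the percentile-bootstrap interval described above, applied to a hypothetical sample of h-indices from one field; the sample values, replicate count, and percentile method are illustrative choices, not the paper's data.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical h-indices of researchers sampled from one field.
    sample = np.array([4, 7, 12, 5, 9, 15, 3, 8, 11, 6, 10, 2])

    # Nonparametric bootstrap: resample with replacement, recompute the median.
    boots = np.array([
        np.median(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(10_000)
    ])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"95% bootstrap CI for the median h-index: [{lo:.1f}, {hi:.1f}]")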
Impact of Intervals on the Emotional Effect in Western Music
Every art form ultimately aims to invoke an emotional response in the audience, and music is no different. While the precise perception of music is a highly subjective topic, there is broad agreement about the "feeling" of a piece of music. Based on this observation, in this study we aim to determine the emotional feeling associated with short passages of music, specifically by analyzing their melodic aspects. We use the dataset put together by Eerola et al., which comprises labeled short passages of film music. Our initial survey of the dataset indicated that passages with labels other than "happy" and "sad" do not possess a distinctive melodic structure. We transcribed the main melody of the happy and sad tracks and used the intervals between the notes to classify them. Our experiments show that treating a melody as a bag of intervals has no predictive power whatsoever, whereas counting intervals with respect to the key of the melody yields a classifier with 85% accuracy (both representations are illustrated after this record).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
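The two melodic representations compared above, computed on a toy melody; the MIDI pitches and tonic are hypothetical, and the paper's transcription and classification pipeline is not reproduced.

    from collections import Counter

    # A toy melody as MIDI pitches in C major (tonic = 60, middle C).
    melody = [60, 62, 64, 67, 64, 62, 60]
    tonic = 60

    # Bag of intervals: consecutive-note intervals, ignoring the key.
    bag_of_intervals = Counter(b - a for a, b in zip(melody, melody[1:]))
    # Key-relative intervals: each note's pitch class relative to the tonic.
    intervals_vs_key = Counter((p - tonic) % 12 for p in melody)

    print(bag_of_intervals)   # e.g. {2: 2, 3: 1, -3: 1, -2: 2}
    print(intervals_vs_key)   # e.g. {0: 2, 2: 2, 4: 2, 7: 1}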
Nonlocal Perturbations of Fractional Choquard Equation
We study the equation \begin{equation} (-\Delta)^{s}u+V(x)u= (I_{\alpha}*|u|^{p})|u|^{p-2}u+\lambda(I_{\beta}*|u|^{q})|u|^{q-2}u \quad\mbox{ in } \mathbb{R}^{N}, \end{equation} where $I_\gamma(x)=|x|^{-\gamma}$ for any $\gamma\in (0,N)$, $p, q >0$, $\alpha,\beta\in (0,N)$, $N\geq 3$ and $\lambda \in \mathbb{R}$. First, the existence of a ground state solution is obtained using a minimization method on the associated Nehari manifold. Next, the existence of least energy sign-changing solutions is investigated by considering the Nehari nodal set.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Clustering Patients with Tensor Decomposition
In this paper we present a method for the unsupervised clustering of high-dimensional binary data, with a special focus on electronic healthcare records. We present a robust and efficient heuristic for this problem based on tensor decomposition, and explain why this approach is preferable to more commonly used distance-based methods for tasks such as clustering patient records. We run the algorithm on two datasets of healthcare records, obtaining clinically meaningful results.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Posterior distribution existence and error control in Banach spaces in the Bayesian approach to UQ in inverse problems
We generalize the results of \cite{Capistran2016} on expected Bayes factors (BF) for controlling the numerical error in the posterior distribution to an infinite-dimensional setting, considering Banach functional spaces and now in a prior setting. The main result is a bound on the absolute global error to be tolerated by the numerical solver of the forward map in order to keep the BF of the numerical vs. the theoretical model close to 1, now in this more general setting, possibly including a truncated, finite-dimensional approximate prior measure. In doing so, we find a far more general setting in which to define and prove existence of the infinite-dimensional posterior distribution than that depicted in, for example, \cite{Stuart2010}. Discretization consistency and rates of convergence are also investigated in this general setting for the Bayesian inverse problem.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
High-$T_c$ mechanism through analysis of diverging effective mass for YBa$_2$Cu$_3$O$_{6+x}$ and pairing symmetry in cuprate superconductors
In order to clarify the high-$T_c$ mechanism in inhomogeneous cuprate-layer superconductors, we deduce the previously unrevealed correlation strength contributing to the formation of the Cooper pair and the 2-D density of states, and we address the pairing symmetry in these superconductors, which is still controversial. The fitting and analysis of the diverging effective mass with decreasing doping, extracted from quantum-oscillation data acquired on underdoped YBa$_2$Cu$_3$O$_{6+x}$ superconductors, can provide solutions to these open questions. The results of the fitting, using the extended Brinkman-Rice (BR) picture, reveal a nodal constant Fermi energy with the maximum carrier density, a constant Coulomb correlation strength $k_{BR}$=$U/U_c$>0.90, and a Fermi arc growing from the nodal Fermi point to the isotropic Fermi surface with increasing $x$. The growth of the Fermi arc indicates that a superconducting gap develops with $x$ from the node to the anti-node. The large $k_{BR}$ results from the $d$-wave MIT for the pseudogap phase in lightly doped superconductors, which can be direct evidence for high-$T_c$ superconductivity. The quantum critical point is regarded as the nodal Fermi point satisfying the BR picture. The experimentally measured mass-diverging behavior is an average effect, and the true effective mass is constant. As an application of the nodal constant carrier density, ARPES and tunneling data are analyzed in order to find a superconducting node gap. The superconducting node gap is a precursor of $s$-wave symmetry in underdoped cuprates. The half-flux quantum, induced by the circulation of $d$-wave supercurrent and observed by phase-sensitive Josephson pi-junction experiments, is not seen, due to anisotropic or asymmetric effects appearing in superconductors with trapped flux. The absence of $d$-wave superconducting pairing symmetry is also revealed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Long Term Fréchet distribution: Estimation, Properties and its Application
In this paper a new long-term survival distribution is proposed. The so-called long-term Fréchet distribution allows us to fit data where part of the population is not susceptible to the event of interest. This model may be used, for example, in clinical studies where a portion of the population can be cured during treatment. We give an account of mathematical properties of the new distribution, such as its moments and survival properties (the implied survival function is sketched after this record), and present the maximum likelihood estimators (MLEs) of the parameters. A numerical simulation is carried out to verify the performance of the MLEs. Finally, an important application related to leukemia-free survival times for transplant patients is discussed to illustrate the proposed distribution.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
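A sketch of the survival function implied by the long-term (cure-rate) formulation above, using the standard mixture form S_pop(t) = p + (1-p)S(t) together with SciPy's Fréchet distribution (invweibull); the parameterization and values are assumptions for illustration and need not match the paper's exact definition.

    import numpy as np
    from scipy.stats import invweibull  # the Fréchet distribution in SciPy

    def long_term_frechet_survival(t, p, alpha, sigma):
        """S_pop(t) = p + (1 - p) * S_Frechet(t): a fraction p is 'cured'
        (never experiences the event); the rest follows a Fréchet law."""
        return p + (1 - p) * invweibull.sf(t, alpha, scale=sigma)

    t = np.linspace(0.1, 20, 5)
    print(long_term_frechet_survival(t, p=0.3, alpha=2.0, sigma=5.0))
    # As t grows, the survival plateaus at p, the cured fraction.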
Active Expansion Sampling for Learning Feasible Domains in an Unbounded Input Space
Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small will fail to cover all feasible domains; while bounds that are too large will waste query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods -- the Neighborhood-Voronoi algorithm and the straddle heuristic -- that operate over fixed input variable bounds.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On diagrams of simplified trisections and mapping class groups
A simplified trisection is a trisection map on a 4-manifold such that, in its critical value set, there is no double point and cusps only appear in triples on innermost fold circles. We give a necessary and sufficient condition for a 3-tuple of systems of simple closed curves in a surface to be a diagram of a simplified trisection in terms of mapping class groups. As an application of this criterion, we show that trisections of spun 4-manifolds due to Meier are diffeomorphic (as trisections) to simplified ones. Baykur and Saeki recently gave an algorithmic construction of a simplified trisection from a directed broken Lefschetz fibration. We also give an algorithm to obtain a diagram of a simplified trisection derived from their construction.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Look No Further: Adapting the Localization Sensory Window to the Temporal Characteristics of the Environment
Many localization algorithms use a spatiotemporal window of sensory information in order to recognize spatial locations, and the length of this window is often a sensitive parameter that must be tuned to the specifics of the application. This letter presents a general method for environment-driven variation of the length of the spatiotemporal window based on searching for the most significant localization hypothesis, to use as much context as is appropriate but not more. We evaluate this approach on benchmark datasets using visual and Wi-Fi sensor modalities and a variety of sensory comparison front-ends under in-order and out-of-order traversals of the environment. Our results show that the system greatly reduces the maximum distance traveled without localization compared to a fixed-length approach while achieving competitive localization accuracy, and our proposed method achieves this performance without deployment-time tuning.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Gaia-ESO Survey: Dynamical models of flattened, rotating globular clusters
We present a family of self-consistent axisymmetric rotating globular cluster models which are fitted to spectroscopic data for NGC 362, NGC 1851, NGC 2808, NGC 4372, NGC 5927 and NGC 6752 to provide constraints on their physical and kinematic properties, including their rotation signals. They are constructed by flattening Modified Plummer profiles, which have the same asymptotic behaviour as classical Plummer models, but can provide better fits to young clusters due to a slower turnover in the density profile. The models are in dynamical equilibrium as they depend solely on the action variables. We employ a fully Bayesian scheme to investigate the uncertainty in our model parameters (including mass-to-light ratios and inclination angles) and evaluate the Bayesian evidence ratio for rotating to non-rotating models. We find convincing levels of rotation only in NGC 2808. In the other clusters, there is just a hint of rotation (in particular, NGC 4372 and NGC 5927), as the data quality does not allow us to draw strong conclusions. Where rotation is present, we find that it is confined to the central regions, within radii of $R \leq 2 r_h$. As part of this work, we have developed a novel q-Gaussian basis expansion of the line-of-sight velocity distributions, from which general models can be constructed via interpolation on the basis coefficients.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A two-dimensional data-driven model for traffic flow on highways
Based on experimental traffic data obtained from German and US highways, we propose a novel two-dimensional first-order macroscopic traffic flow model. The goal is to reproduce a detailed description of traffic dynamics for the real road geometry. In our approach, both the dynamics along the road and across the lanes are continuous. The closure relations, which are necessary to complete the hydrodynamic equation, are obtained by regression on fundamental diagram data. Comparison with predictions of one-dimensional models shows the improved performance of the novel model.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Cut Finite Element Methods for Elliptic Problems on Multipatch Parametric Surfaces
We develop a finite element method for the Laplace--Beltrami operator on a surface described by a set of patchwise parametrizations. The patches provide a partition of the surface and each patch is the image by a diffeomorphism of a subdomain of the unit square which is bounded by a number of smooth trim curves. A patchwise tensor product mesh is constructed by using a structured mesh in the reference domain. Since the patches are trimmed we obtain cut elements in the vicinity of the interfaces. We discretize the Laplace--Beltrami operator using a cut finite element method that utilizes Nitsche's method to enforce continuity at the interfaces and a consistent stabilization term to handle the cut elements. Several quantities in the method are conveniently computed in the reference domain where the mappings impose a Riemannian metric. We derive a priori estimates in the energy and $L^2$ norm and also present several numerical examples confirming our theoretical results.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The rationality problem for forms of $\overline{M_{0, n}}$
Let $X$ be a del Pezzo surface of degree $5$ defined over a field $F$. A theorem of Yu. I. Manin and P. Swinnerton-Dyer asserts that every del Pezzo surface of degree $5$ is rational. In this paper we generalize this result as follows. Recall that del Pezzo surfaces of degree $5$ over a field $F$ are precisely the twisted $F$-forms of the moduli space $\overline{M_{0, 5}}$ of stable curves of genus $0$ with $5$ marked points. Suppose $n \geq 5$ is an integer, and $F$ is an infinite field of characteristic $\neq 2$. It is easy to see that every twisted $F$-form of $\overline{M_{0, n}}$ is unirational over $F$. We show that (a) if $n$ is odd, then every twisted $F$-form of $\overline{M_{0, n}}$ is rational over $F$; and (b) if $n$ is even, there exists a field extension $F/k$ and a twisted $F$-form $X$ of $\overline{M_{0, n}}$ such that $X$ is not retract rational over $F$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The p-adic Kummer-Leopoldt constant - Normalized p-adic regulator
The p-adic Kummer-Leopoldt constant $\kappa_K$ of a number field $K$ is (assuming the Leopoldt conjecture) the least integer $c$ such that for all $n \gg 0$, any global unit of $K$ which is locally a $p^{n+c}$th power at the $p$-places is necessarily the $p^n$th power of a global unit of $K$. This constant has been computed by Assim & Nguyen Quang Do using Iwasawa's techniques, after intricate studies and calculations by many authors. We give an elementary p-adic proof and an improvement of these results, then a class field theory interpretation of $\kappa_K$. We give some applications (including generalizations of Kummer's lemma on regular pth cyclotomic fields) and a natural definition of the normalized p-adic regulator for any $K$ and any $p \ge 2$. This is done without analytical computations, using only class field theory and especially the properties of the so-called p-torsion group $T_K$ of Abelian p-ramification theory over $K$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Regularization Learning Networks: Deep Learning for Tabular Datasets
Despite their impressive performance, Deep Neural Networks (DNNs) typically underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning tasks. We propose that applying a different regularization coefficient to each weight might boost the performance of DNNs by allowing them to make more use of the more relevant inputs. However, this leads to an intractable number of hyperparameters. Here, we introduce Regularization Learning Networks (RLNs), which overcome this challenge by introducing an efficient hyperparameter tuning scheme that minimizes a new Counterfactual Loss (a minimal per-weight penalty sketch follows this record). Our results show that RLNs significantly improve DNNs on tabular datasets and achieve comparable results to GBTs, with the best performance achieved by an ensemble that combines GBTs and RLNs. RLNs produce extremely sparse networks, eliminating up to 99.8% of the network edges and 82% of the input features, thus providing more interpretable models and revealing the importance that the network assigns to different inputs. RLNs could efficiently learn a single network on datasets that comprise both tabular and unstructured data, such as in the setting of medical imaging accompanied by electronic health records. An open source implementation of RLN can be found at this https URL.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
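A minimal PyTorch sketch of the core idea above, a separate (here trainable, log-scale) L1 coefficient for every weight; the paper's Counterfactual-Loss tuning scheme is not reproduced, and all shapes and values are illustrative assumptions.

    import torch

    # One linear layer with a per-weight regularization coefficient.
    layer = torch.nn.Linear(10, 1)
    log_lambda = torch.zeros_like(layer.weight, requires_grad=True)  # per weight

    x, y = torch.randn(32, 10), torch.randn(32, 1)
    pred_loss = torch.nn.functional.mse_loss(layer(x), y)
    reg = (log_lambda.exp() * layer.weight.abs()).sum()   # per-weight L1 penalty
    loss = pred_loss + reg
    loss.backward()   # gradients flow to the weights and to log_lambda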
Numerical dimension and locally ample curves
In the paper \cite{Lau16}, it was shown that the restriction of a pseudoeffective divisor $D$ to a subvariety $Y$ with nef normal bundle is pseudoeffective. Assuming the normal bundle is ample and that $D|_Y$ is not big, we prove that the numerical dimension of $D$ is bounded above by that of its restriction, i.e. $\kappa_{\sigma}(D)\leq \kappa_{\sigma}(D|_Y)$. The main motivation is to study the cycle classes of "positive" curves: we show that the cycle class of a curve with ample normal bundle lies in the interior of the cone of curves, and the cycle class of an ample curve lies in the interior of the cone of movable curves. We do not impose any condition on the singularities on the curve or the ambient variety. For locally complete intersection curves in a smooth projective variety, this is the main result of Ottem \cite{Ott16}. The main tool in this paper is the theory of $q$-ample divisors.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Local-global principles in circle packings
We generalize work of Bourgain-Kontorovich and Zhang, proving an almost local-to-global property for the curvatures of certain circle packings, to a large class of Kleinian groups. Specifically, we associate in a natural way an infinite family of integral packings of circles to any Kleinian group $\mathcal A\leq\textrm{PSL}_2(K)$ satisfying certain conditions, where $K$ is an imaginary quadratic field, and show that the curvatures of the circles in any such packing satisfy an almost local-to-global principle. A key ingredient in the proof of this is that $\mathcal A$ possesses a spectral gap property, which we prove for any infinite-covolume, geometrically finite, Zariski dense Kleinian group in $\textrm{PSL}_2(\mathcal{O}_K)$ containing a Zariski dense subgroup of $\textrm{PSL}_2(\mathbb{Z})$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Underscreening in concentrated electrolytes
Screening of a surface charge by electrolyte and the resulting interaction energy between charged objects is of fundamental importance in scenarios from bio-molecular interactions to energy storage. The conventional wisdom is that the interaction energy decays exponentially with object separation and the decay length is a decreasing function of ion concentration; the interaction is thus negligible in a concentrated electrolyte. Contrary to this conventional wisdom, we have shown by surface force measurements that the decay length is an increasing function of ion concentration and Bjerrum length for concentrated electrolytes. In this paper we report surface force measurements to test directly the scaling of the screening length with Bjerrum length. Furthermore, we identify a relationship between the concentration dependence of this screening length and empirical measurements of activity coefficient and differential capacitance. The dependence of the screening length on the ion concentration and the Bjerrum length can be explained by a simple scaling conjecture based on the physical intuition that solvent molecules, rather than ions, are charge carriers in a concentrated electrolyte.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Asymptotics for Small Nonlinear Price Impact: a PDE Approach to the Multidimensional Case
We provide an asymptotic expansion of the value function of a multidimensional utility maximization problem from consumption with small non-linear price impact. In our model cross-impacts between assets are allowed. In the limit for small price impact, we determine the asymptotic expansion of the value function around its frictionless version. The leading order correction is characterized by a nonlinear second order PDE related to an ergodic control problem and a linear parabolic PDE. We illustrate our result on a multivariate geometric Brownian motion price model.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Data-driven Probabilistic Atlases Capture Whole-brain Individual Variation
Probabilistic atlases provide essential spatial contextual information for image interpretation, Bayesian modeling, and algorithmic processing. Such atlases are typically constructed by grouping subjects with similar demographic information. Importantly, use of the same scanner minimizes inter-group variability. However, the generalizability and spatial specificity of such approaches are more limited than one might like. Inspired by Commowick's "Frankenstein's creature paradigm", which builds a subject-specific anatomical atlas, we propose a data-driven framework to build a subject-specific probabilistic atlas under a large-scale data scheme. The framework clusters regions with similar features using a point distribution model to learn different anatomical phenotypes. Regional structural atlases and corresponding regional probabilistic atlases are used as indices and targets in the dictionary. By indexing the dictionary, the whole-brain probabilistic atlases adapt to each new subject quickly and can be used as spatial priors for visualization and processing. The novelties of this approach are: (1) it provides a new perspective for generating subject-specific whole-brain probabilistic atlases (132 regions) under a data-driven scheme across sites; (2) the framework employs a large amount of heterogeneous data (2349 images); (3) the proposed framework achieves low computational cost, since only one affine registration and a Pearson correlation operation are required for a new subject. Our method matches individual regions better, with higher Dice similarity values, when testing the probabilistic atlases. Importantly, the advantage of the large-scale scheme is demonstrated by the better performance obtained with large-scale training data (1888 images) than with a smaller training set (720 images).
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Learning to Communicate: A Machine Learning Framework for Heterogeneous Multi-Agent Robotic Systems
We present a machine learning framework for multi-agent systems to learn both the optimal policy for maximizing the rewards and the encoding of the high dimensional visual observation. The encoding is useful for sharing local visual observations with other agents under communication resource constraints. The actor-encoder encodes the raw images and chooses an action based on local observations and messages sent by the other agents. The machine learning agent generates not only an actuator command to the physical device, but also a communication message to the other agents. We formulate a reinforcement learning problem, which extends the action space to consider the communication action as well. The feasibility of the reinforcement learning framework is demonstrated using a 3D simulation environment with two collaborating agents. The environment provides realistic visual observations to be used and shared between the two agents.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimal investment-consumption problem post-retirement with a minimum guarantee
We study the optimal investment-consumption problem for a member of a defined contribution plan during the decumulation phase. For a fixed annuitization time, to achieve a higher final annuity, we consider a variable consumption rate. Moreover, to eliminate the possibility of ruin and to guarantee a minimum final annuity, we consider a safety level for the wealth process, which consequently yields a Hamilton-Jacobi-Bellman (HJB) equation on a bounded domain. We apply the policy iteration method to find approximations of the solution of the HJB equation. Finally, we give simulation results for the optimal investment-consumption strategies, the optimal wealth process and the final annuity for different ranges of admissible consumption. Furthermore, by calculating the present market value of the future cash flows before and after annuitization, we compare the results for different consumption policies.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Parametrices for the light ray transform on Minkowski spacetime
We consider restricted light ray transforms arising from an inverse problem of finding cosmic strings. We construct a relative left parametrix for the transform on two-tensors, which recovers the space-like and some of the light-like singularities of the two-tensor.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Derivative Principal Component Analysis for Representing the Time Dynamics of Longitudinal and Functional Data
We propose a nonparametric method to explicitly model and represent the derivatives of smooth underlying trajectories for longitudinal data. This representation is based on a direct Karhunen-Loève expansion of the unobserved derivatives and leads to the notion of derivative principal component analysis, which complements functional principal component analysis, one of the most popular tools of functional data analysis. The proposed derivative principal component scores can be obtained for irregularly spaced and sparsely observed longitudinal data, as typically encountered in biomedical studies, as well as for functional data which are densely measured. Novel consistency results and asymptotic convergence rates for the proposed estimates of the derivative principal component scores and other components of the model are derived under a unified scheme for sparse or dense observations and mild conditions. We compare the proposed representations for derivatives with alternative approaches in simulation settings and also in a wallaby growth curve application. It emerges that representations using the proposed derivative principal component analysis recover the underlying derivatives more accurately than principal component analysis-based approaches, especially in settings where the functional data are represented with only a very small number of components or are densely sampled. In a second wheat spectra classification example, derivative principal component scores were found to be more predictive of the protein content of wheat than the conventional functional principal component scores (a dense-design sketch of derivative PCA follows this record).
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
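A dense-design sketch of the idea referenced above: numerically differentiate densely observed curves on a common grid, then take principal components of the centered derivatives via SVD. The paper's estimator additionally handles sparse, irregular designs, which this omits; the simulated curves are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 101)
    scores = rng.standard_normal((50, 2))
    curves = (scores[:, :1] * np.sin(2 * np.pi * t)
              + scores[:, 1:] * t**2)           # 50 curves on a common grid

    derivs = np.gradient(curves, t, axis=1)     # d/dt of each curve
    derivs_c = derivs - derivs.mean(axis=0)     # center across subjects
    U, s, Vt = np.linalg.svd(derivs_c, full_matrices=False)
    dpc_scores = U[:, :2] * s[:2]               # derivative PC scores
    print(dpc_scores.shape)                     # (50, 2)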
Stokes phenomenon and confluence in non-autonomous Hamiltonian systems
This article studies the confluence of a pair of regular singular points into an irregular one in a generic family of time-dependent Hamiltonian systems in dimension 2. This is a general setting for understanding the degeneration of the sixth Painlevé equation to the fifth one. The main result is a theorem on sectoral normalization of the family to an integrable formal normal form, through which the relation between the local monodromy operators at the two regular singularities and the non-linear Stokes phenomenon at the irregular singularity of the limit system is explained. The problem of analytic classification is also addressed. Key words: non-autonomous Hamiltonian systems; irregular singularity; non-linear Stokes phenomenon; wild monodromy; confluence; local analytic classification; Painlevé equations.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A General Sequential Delay-Doppler Estimation Scheme for Sub-Nyquist Pulse-Doppler Radar
Sequential estimation of the delay and Doppler parameters for sub-Nyquist radars by analog-to-information conversion (AIC) systems has received wide attention recently. However, the estimation methods reported are AIC-dependent and perform poorly for off-grid targets. This paper develops a general estimation scheme, in the sense that it is applicable to all AICs regardless of whether the targets are on or off the grid. The proposed scheme estimates the delay and Doppler parameters sequentially: the delay estimation is formulated as a beamspace direction-of-arrival problem, and the Doppler estimation is translated into a line spectrum estimation problem. Well-known spatial and temporal spectrum estimation techniques are then used to provide efficient and high-resolution estimates of the delay and Doppler parameters. In addition, sufficient conditions on the AIC to guarantee successful estimation of off-grid targets are provided, whereas existing conditions mostly concern on-grid targets. Theoretical analyses and numerical experiments show the effectiveness and correctness of the proposed scheme.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The independence number of the Birkhoff polytope graph, and applications to maximally recoverable codes
Maximally recoverable codes are codes designed for distributed storage which combine quick recovery from single node failure and optimal recovery from catastrophic failure. Gopalan et al. [SODA 2017] studied the alphabet size needed for such codes in grid topologies and gave a combinatorial characterization for it. Consider a labeling of the edges of the complete bipartite graph $K_{n,n}$ with labels coming from $F_2^d$ that satisfies the following condition: for any simple cycle, the sum of the labels over its edges is nonzero. The minimal $d$ for which this is possible controls the alphabet size needed for maximally recoverable codes in $n \times n$ grid topologies. Prior to the current work, it was known that $d$ is between $(\log n)^2$ and $n\log n$. We improve both bounds and show that $d$ is linear in $n$. The upper bound is a recursive construction which beats the random construction. The lower bound follows by first relating the problem to the independence number of the Birkhoff polytope graph, and then providing tight bounds for it using the representation theory of the symmetric group.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Spin Angular Momentum of Proton Spin Puzzle in Complex Octonion Spaces
The paper focuses on treating some special precessional motions as spin motions, separating the octonion angular momentum of a proton into six components, and elucidating the contributions to the proton angular momentum in the proton spin puzzle, especially the proton spin, its decomposition, quarks and gluons, and polarization. J. C. Maxwell was the first to use quaternions to study electromagnetic fields. Subsequently, complex octonions have been utilized to describe the electromagnetic field, the gravitational field, quantum mechanics, and so forth. In the complex octonion space, the precessional equilibrium equation yields the angular velocity of precession. An external electromagnetic strength may induce a new precessional motion, generating a new term of angular momentum, even if the orbital angular momentum is zero. This new term of angular momentum can be regarded as the spin angular momentum, and its angular velocity of precession differs from the angular velocity of revolution. The study reveals that the angular momentum of the proton must be separated into more components than previously assumed. In the proton spin puzzle, the orbital angular momentum and the magnetic dipole moment are independent of each other, and they should be measured and calculated separately.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Physics-Informed Regularization of Deep Neural Networks
This paper presents a novel physics-informed regularization method for training deep neural networks (DNNs). In particular, we focus on the DNN representation of the response of a physical or biological system for which a set of governing laws is known. These laws often appear in the form of differential equations, derived from first principles, empirically validated laws, and/or domain expertise. We propose a DNN training approach that utilizes these known differential equations in addition to the measurement data, by introducing a penalty term in the training loss function that penalizes divergence from the governing laws (a minimal sketch of such a penalty follows this record). Through three numerical examples, we show that the proposed regularization produces surrogates that are physically interpretable, with smaller generalization errors compared to other common regularization methods.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
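A minimal PyTorch sketch of the kind of penalty described above, using the toy governing law u'(x) = -u(x); the network size, collocation points, and penalty weight are illustrative assumptions, not the paper's examples.

    import torch

    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))

    x_data = torch.rand(20, 1)
    y_data = torch.exp(-x_data) + 0.01 * torch.randn(20, 1)   # noisy data
    x_col = torch.rand(100, 1, requires_grad=True)            # collocation points

    u = net(x_col)
    du = torch.autograd.grad(u, x_col, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    physics_residual = ((du + u) ** 2).mean()   # u' + u = 0 holds for u = e^{-x}
    data_loss = ((net(x_data) - y_data) ** 2).mean()
    loss = data_loss + 1.0 * physics_residual   # penalty weight is a free choice
    loss.backward()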
Gradient-based Filter Design for the Dual-tree Wavelet Transform
The wavelet transform has seen success when incorporated into neural network architectures, such as in wavelet scattering networks. More recently, it has been shown that the dual-tree complex wavelet transform can provide better representations than the standard transform. With this in mind, we extend our previous method for learning filters for the 1D and 2D wavelet transforms into the dual-tree domain. We show that with few modifications to our original model, we can learn directional filters that leverage the properties of the dual-tree wavelet transform.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Assimilated LVEF: A Bayesian technique combining human intuition with machine measurement for sharper estimates of left ventricular ejection fraction and stronger association with outcomes
The cardiologist's main tool for measuring systolic heart failure is left ventricular ejection fraction (LVEF). Trained cardiologists report both a visual and a machine-guided measurement of LVEF, but only the machine-guided measurement is used in analysis. We use a Bayesian technique to combine visual and machine-guided estimates from the PARTNER-IIA Trial, a cohort of patients with aortic stenosis at moderate risk treated with bioprosthetic aortic valves, and find that our combined estimate reduces measurement errors and improves the association between LVEF and a 1-year composite endpoint (a precision-weighting sketch follows this record).
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
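A conjugate-normal sketch of combining the two readings discussed above by precision weighting; the measurement standard deviations and example values are assumptions, and the paper's actual Bayesian model may differ.

    import numpy as np

    def combine(visual, machine, sd_visual=6.0, sd_machine=4.0):
        """Textbook normal-normal pooling: weight each reading by its
        precision (inverse variance); the pooled sd is always smaller."""
        w_v, w_m = 1 / sd_visual**2, 1 / sd_machine**2
        mean = (w_v * visual + w_m * machine) / (w_v + w_m)
        sd = np.sqrt(1 / (w_v + w_m))
        return mean, sd

    print(combine(visual=55.0, machine=48.0))  # pooled LVEF estimate and sd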
DBSCAN: Optimal Rates For Density Based Clustering
We study the problem of optimal estimation of the density cluster tree under various assumptions on the underlying density. Building on the seminal work of Chaudhuri et al. [2014], we formulate a new notion of clustering consistency better suited to smooth densities, and derive minimax rates of consistency for cluster tree estimation for Hölder smooth densities of arbitrary degree $\alpha$. We present a computationally efficient, rate-optimal cluster tree estimator based on a straightforward extension of the popular density-based clustering algorithm DBSCAN of Ester et al. [1996]. The procedure relies on a kernel density estimator, with an appropriate choice of kernel and bandwidth, to produce a sequence of nested random geometric graphs whose connected components form a hierarchy of clusters (a sketch of these ingredients follows this record). The resulting optimal rates for cluster tree estimation depend on the degree of smoothness of the underlying density and, interestingly, match minimax rates for density estimation under the supremum norm. Our results complement and extend the analysis of the DBSCAN algorithm in Sriperumbudur and Steinwart [2012]. Finally, we consider level set estimation and cluster consistency for densities with jump discontinuities, where the sizes of the jumps and the distances among clusters are allowed to vanish as the sample size increases. We demonstrate that our DBSCAN-based algorithm remains minimax rate optimal in this setting as well.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
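A sketch of the ingredients named above, assuming scikit-learn and SciPy: a kernel density estimate, then, for each level, a radius graph on the points above that level whose connected components give the clusters at that level. Bandwidth, radius, and levels are ad hoc choices for illustration, not the paper's tuned values.

    import numpy as np
    from scipy.sparse.csgraph import connected_components
    from sklearn.neighbors import KernelDensity, radius_neighbors_graph

    rng = np.random.default_rng(0)
    # Two well-separated Gaussian blobs in the plane.
    X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])

    dens = np.exp(KernelDensity(bandwidth=0.4).fit(X).score_samples(X))
    for level in np.quantile(dens, [0.1, 0.5, 0.9]):
        keep = X[dens >= level]                       # points above the level
        G = radius_neighbors_graph(keep, radius=0.5)  # random geometric graph
        n_clusters, _ = connected_components(G, directed=False)
        print(f"level {level:.3f}: {n_clusters} cluster(s)")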
Steganographic Generative Adversarial Networks
Steganography is a collection of methods for hiding secret information ("payload") within non-secret information ("container"). Its counterpart, steganalysis, is the practice of determining whether a message contains a hidden payload, and recovering it if possible. The presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows us to generate more steganalysis-secure message embedding using standard steganography algorithms. Experimental results demonstrate that the new model successfully deceives the steganography analyzer and, for this reason, can be used in steganographic applications.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Minimal Sum Labeling of Graphs
A graph $G$ is called a sum graph if there is a so-called sum labeling of $G$, i.e. an injective function $\ell: V(G) \rightarrow \mathbb{N}$ such that for every $u,v\in V(G)$ it holds that $uv\in E(G)$ if and only if there exists a vertex $w\in V(G)$ such that $\ell(u)+\ell(v) = \ell(w)$. We say that a sum labeling $\ell$ is minimal if there is a vertex $u\in V(G)$ such that $\ell(u)=1$ (a small checker for the sum-labeling condition follows this record). In this paper, we show that if we relax the conditions (either allow non-injective labelings or consider graphs with loops) then there are sum graphs without a minimal labeling, which partially answers the question posed by Miller, Ryan and Smyth in 1998.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
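A small checker for the sum-labeling condition defined above; the example graph and labels are chosen by hand (note the extra vertex carrying the sum 5, consistent with sum graphs needing isolated vertices).

    from itertools import combinations

    def is_sum_labeling(labels, edges):
        """Check: uv is an edge iff some vertex w has
        label(u) + label(v) == label(w). `labels` maps vertex -> positive int."""
        if len(set(labels.values())) != len(labels):
            return False                      # labeling must be injective
        label_set = set(labels.values())
        E = {frozenset(e) for e in edges}
        for u, v in combinations(labels, 2):
            has_edge = frozenset((u, v)) in E
            sum_hit = labels[u] + labels[v] in label_set
            if has_edge != sum_hit:
                return False
        return True

    # A path a-b-c plus isolated vertex d: 1+2=3 (edge ab), 2+3=5 (edge bc).
    labels = {"a": 1, "b": 2, "c": 3, "d": 5}
    print(is_sum_labeling(labels, [("a", "b"), ("b", "c")]))
    # True; the labeling is also minimal since some vertex has label 1.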
Provably efficient RL with Rich Observations via Latent State Decoding
We study the exploration problem in episodic MDPs with rich observations generated from a small number of latent states. Under certain identifiability assumptions, we demonstrate how to estimate a mapping from the observations to latent states inductively through a sequence of regression and clustering steps---where previously decoded latent states provide labels for later regression problems---and use it to construct good exploration policies. We provide finite-sample guarantees on the quality of the learned state decoding function and exploration policies, and complement our theory with an empirical evaluation on a class of hard exploration problems. Our method exponentially improves over $Q$-learning with naïve exploration, even when $Q$-learning has cheating access to latent states.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Markov Chain Lifting and Distributed ADMM
The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space. For a class of quadratic objectives, we show an analogous behavior in which a distributed ADMM algorithm can be seen as a lifting of the Gradient Descent algorithm. This provides deep insight into its faster convergence rate under optimal parameter tuning. We conjecture that this gain is always present, as opposed to the lifting of a Markov chain, which sometimes only provides a marginal speedup.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Deep Encoder-Decoder Models for Unsupervised Learning of Controllable Speech Synthesis
Generating versatile and appropriate synthetic speech requires control over the output expression separate from the spoken text. Important non-textual speech variation is seldom annotated, in which case output control must be learned in an unsupervised fashion. In this paper, we perform an in-depth study of methods for unsupervised learning of control in statistical speech synthesis. For example, we show that popular unsupervised training heuristics can be interpreted as variational inference in certain autoencoder models. We additionally connect these models to VQ-VAEs, another, recently-proposed class of deep variational autoencoders, which we show can be derived from a very similar mathematical argument. The implications of these new probabilistic interpretations are discussed. We illustrate the utility of the various approaches with an application to acoustic modelling for emotional speech synthesis, where the unsupervised methods for learning expression control (without access to emotional labels) are found to give results that in many aspects match or surpass the previous best supervised approach.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Fast failover of multicast sessions in software-defined networks
With the rapid growth of services that stream to groups of users comes an increased importance of, and demand for, reliable multicast. In this paper, we turn to software-defined networking and develop a novel general-purpose multi-failure protection algorithm to provide quick failure recovery, via Fast Failover (FF) groups, for dynamic multicast groups. This extends previous research, which either could not realize fast failover, worked only for single link failures, or was only applicable to static multicast groups. However, while FF is known to be fast, it requires pre-installing back-up rules. These additional memory requirements, which in a multicast setting are even more pronounced than for unicast, are often mentioned as a big disadvantage of using FF. We develop an OpenFlow application for resilient multicast, with which we study FF resource usage, in an attempt to better understand the trade-off between recovery time and resource usage. Our tests on a realistic network suggest that using FF groups can reduce the recovery time of the network significantly compared to other methods, especially when the latency between the controller and the switches is relatively large.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
AI Safety Gridworlds
We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Maximality of Galois actions for abelian varieties
Let $\{\rho_\ell\}_\ell$ be the system of $\ell$-adic representations arising from the $i$th $\ell$-adic cohomology of a complete smooth variety $X$ defined over a number field $K$. Denote the image of $\rho_\ell$ by $\Gamma_\ell$ and its Zariski closure, which is a linear algebraic group over $\mathbb{Q}_\ell$, by $\mathbf{G}_\ell$. We prove that $\mathbf{G}_\ell^{red}$, the quotient of $\mathbf{G}_\ell^\circ$ by its unipotent radical, is unramified over a totally ramified extension of $\mathbb{Q}_\ell$ for all sufficiently large $\ell$. We give a sufficient condition on $\{\rho_\ell\}_\ell$ such that for all sufficiently large $\ell$, $\Gamma_\ell$ is in some sense maximal compact in $\mathbf{G}_\ell(\mathbb{Q}_\ell)$. Since the condition is satisfied when $X$ is an abelian variety by the Tate conjecture, we obtain maximality of Galois actions for abelian varieties.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An Application of Multi-band Forced Photometry to One Square Degree of SERVS: Accurate Photometric Redshifts and Implications for Future Science
We apply The Tractor image modeling code to improve upon existing multi-band photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS). SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 micron over five well-studied deep fields spanning 18 square degrees. In concert with data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to provide a census of the properties of massive galaxies out to z ~ 5. To accomplish this, we are using The Tractor to perform "forced photometry." This technique employs prior measurements of source positions and surface brightness profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic Observations (VIDEO) survey to model and fit the fluxes at lower-resolution bands. We discuss our implementation of The Tractor over a square degree test region within the XMM-LSS field with deep imaging in 12 NIR/optical bands. Our new multi-band source catalogs offer a number of advantages over traditional position-matched catalogs, including 1) consistent source cross-identification between bands, 2) de-blending of sources that are clearly resolved in the fiducial band but blended in the lower-resolution SERVS data, 3) a higher source detection fraction in each band, 4) a larger number of candidate galaxies in the redshift range 5 < z < 6, and 5) a statistically significant improvement in the photometric redshift accuracy as evidenced by the significant decrease in the fraction of outliers compared to spectroscopic redshifts. Thus, forced photometry using The Tractor offers a means of improving the accuracy of multi-band extragalactic surveys designed for galaxy evolution studies. We will extend our application of this technique to the full SERVS footprint in the future.
0
1
0
0
0
0
Stacked Structure Learning for Lifted Relational Neural Networks
Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks. While previous work has shown that using LRNNs can lead to state-of-the-art results in various ILP tasks, these results depended on hand-crafted rules. In this paper, we extend the framework of LRNNs with structure learning, thus enabling a fully automated learning process. Similarly to many ILP methods, our structure learning algorithm proceeds in an iterative fashion by top-down searching through the hypothesis space of all possible Horn clauses, considering the predicates that occur in the training examples as well as invented soft concepts entailed by the best weighted rules found so far. In the experiments, we demonstrate the ability to automatically induce useful hierarchical soft concepts leading to deep LRNNs with a competitive predictive power.
1
0
0
1
0
0
Spec-QP: Speculative Query Planning for Joins over Knowledge Graphs
Organisations store huge amounts of data from multiple heterogeneous sources in the form of Knowledge Graphs (KGs). One of the ways to query these KGs is to use SPARQL queries over a database engine. Since SPARQL follows exact match semantics, the queries may return too few or no results. Recent works have proposed query relaxation, where the query engine judiciously replaces a query predicate with similar predicates using weighted relaxation rules mined from the KG. The space of possible relaxations is potentially too large to fully explore and users are typically interested in only the top-k results, so such query engines use top-k algorithms for query processing. However, they may still process all the relaxations, many of whose answers do not contribute towards the top-k answers. This leads to computation overheads and delayed response times. We propose Spec-QP, a query planning framework that speculatively determines which relaxations will have their results in the top-k answers. Only these relaxations are processed using the top-k operators. We, therefore, reduce the computation overheads and achieve faster response times without adversely affecting the quality of results. We tested Spec-QP over two datasets, XKG and Twitter, to demonstrate the efficiency of our planning framework at reducing runtimes with reasonable accuracy for query engines supporting relaxations.
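A schematic rendering of the speculative step, assuming each relaxation carries optimistic (upper) and guaranteed (lower) bounds on the best answer score it can contribute; these bounds and names are assumptions for illustration, not the paper's planner:

    def speculative_plan(relaxations, k):
        """relaxations: list of (name, upper_bound, lower_bound) triples."""
        # The k-th best guaranteed score is a conservative top-k threshold.
        guaranteed = sorted((lb for _, _, lb in relaxations), reverse=True)
        threshold = guaranteed[k - 1] if len(guaranteed) >= k else float("-inf")
        # Speculate: process only relaxations that can still beat it.
        return [name for name, ub, _ in relaxations if ub >= threshold]

    # Example: with k=2, 'relax-2' is pruned before any processing.
    print(speculative_plan([("exact", 1.0, 0.9), ("relax-1", 0.8, 0.5),
                            ("relax-2", 0.4, 0.1)], k=2))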
1
0
0
0
0
0
SurfClipse: Context-Aware Meta Search in the IDE
Despite the various debugging supports that existing IDEs offer for programming errors and exceptions, software developers often look to the web for working solutions or up-to-date information. Traditional web search does not consider the context of the problems for which developers seek solutions, and thus it often does not help much in problem solving. In this paper, we propose a context-aware meta search tool, SurfClipse, that analyzes an encountered exception and its context in the IDE, and recommends not only suitable search queries but also relevant web pages for the exception (and its context). The tool collects results from three popular search engines and a programming Q & A site against the exception in the IDE, refines the results for relevance against the context of the exception, and then ranks them before recommendation. It provides two working modes, interactive and proactive, to meet the versatile needs of developers, and one can browse the result pages using a customized embedded browser provided by the tool. Tool page: www.usask.ca/~masud.rahman/surfclipse
1
0
0
0
0
0
Chomp on numerical semigroups
We consider the two-player game chomp on posets associated to numerical semigroups and show that the analysis of strategies for chomp is strongly related to classical properties of semigroups. We characterize which player has a winning strategy for symmetric semigroups, semigroups of maximal embedding dimension, and several families of numerical semigroups generated by arithmetic sequences. Furthermore, we show that the question of which player wins on a given numerical semigroup is decidable. Finally, we extend several of our results to the more general setting of subsemigroups of $\mathbb{N} \times T$, where $T$ is a finite abelian group.
1
0
1
0
0
0
Single Index Latent Variable Models for Network Topology Inference
A semi-parametric, non-linear regression model in the presence of latent variables is applied towards learning network graph structure. These latent variables can correspond to unmodeled phenomena or unmeasured agents in a complex system of interacting entities. This formulation jointly estimates non-linearities in the underlying data generation, the direct interactions between measured entities, and the indirect effects of unmeasured processes on the observed data. The learning is posed as regularized empirical risk minimization. Details of the algorithm for learning the model are outlined. Experiments demonstrate the performance of the learned model on real data.
0
0
0
1
0
0
Quadratically-Regularized Optimal Transport on Graphs
Optimal transportation provides a means of lifting distances between points on a geometric domain to distances between signals over the domain, expressed as probability distributions. On a graph, transportation problems can be used to express challenging tasks involving matching supply to demand with minimal shipment expense; in discrete language, these become minimum-cost network flow problems. Regularization typically is needed to ensure uniqueness for the linear ground distance case and to improve optimization convergence; state-of-the-art techniques employ entropic regularization on the transportation matrix. In this paper, we explore a quadratic alternative to entropic regularization for transport over a graph. We theoretically analyze the behavior of quadratically-regularized graph transport, characterizing how regularization affects the structure of flows in the regime of small but nonzero regularization. We further exploit elegant second-order structure in the dual of this problem to derive an easily-implemented Newton-type optimization algorithm.
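The quadratically-regularized problem on a graph is a small quadratic program over edge flows. Below is a toy instance posed with a generic solver rather than the paper's Newton-type method; the graph, costs, and regularization weight are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    # Path graph 0 -> 1 -> 2 with incidence matrix B: (B @ J) is net inflow.
    B = np.array([[-1.0, 0.0], [1.0, -1.0], [0.0, 1.0]])
    c = np.array([1.0, 1.0])                 # edge costs
    rho0 = np.array([1.0, 0.0, 0.0])         # source distribution
    rho1 = np.array([0.0, 0.0, 1.0])         # target distribution
    alpha = 0.1                              # quadratic regularization weight

    obj = lambda J: c @ J + 0.5 * alpha * J @ J
    cons = {"type": "eq", "fun": lambda J: B @ J - (rho1 - rho0)}
    res = minimize(obj, x0=np.ones(2), bounds=[(0, None)] * 2,
                   constraints=[cons], method="SLSQP")
    print(res.x)                             # one unit of flow on each edge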
1
0
1
0
0
0
Discussion on "Random-projection ensemble classification" by T. Cannings and R. Samworth
Discussion on "Random-projection ensemble classification" by T. Cannings and R. Samworth. We believe that the proposed approach can find many applications in economics such as credit scoring (e.g. Altman (1968)) and can be extended to more general type of classifiers. In this discussion we would like to draw authors attention to the copula-based discriminant analysis (Han et al. (2013) and He et al. (2016)).
0
0
1
1
0
0
Deep Asymmetric Multi-task Feature Learning
We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL) which can learn deep representations shared across multiple tasks while effectively preventing negative transfer that may happen in the feature sharing process. Specifically, we introduce an asymmetric autoencoder term that allows reliable predictors for the easy tasks to have high contribution to the feature learning while suppressing the influences of unreliable predictors for more difficult tasks. This allows the learning of less noisy representations, and enables unreliable predictors to exploit knowledge from the reliable predictors via the shared latent features. Such asymmetric knowledge transfer through shared features is also more scalable and efficient than inter-task asymmetric transfer. We validate our Deep-AMTFL model on multiple benchmark datasets for multitask learning and image classification, on which it significantly outperforms existing symmetric and asymmetric multitask learning models, by effectively preventing negative transfer in deep feature learning.
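Illustrative only: one way to realize the asymmetric feedback described above, assuming that tasks with low training loss are treated as reliable and drive the reconstruction of the shared features; the shapes, the softmax weighting, and the abstracted architecture are assumptions, not the paper's exact objective:

    import torch

    def amtfl_style_loss(task_losses, feat, recon_per_task, lam=0.1):
        # task_losses: (T,) per-task losses; feat: (batch, d) shared
        # features; recon_per_task: (T, batch, d) reconstructions of
        # `feat` from each task's predictor (architecture left abstract).
        reliability = torch.softmax(-task_losses, dim=0)           # (T,)
        recon = torch.einsum("t,tbd->bd", reliability, recon_per_task)
        recon_err = ((feat - recon) ** 2).mean()
        # Easy (reliable) tasks dominate the reconstruction term, so
        # unreliable predictors cannot pollute the shared features.
        return task_losses.sum() + lam * recon_err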
1
0
0
1
0
0
Sub-nanometre resolution of atomic motion during electronic excitation in phase-change materials
Phase-change materials based on Ge-Sb-Te alloys are widely used in industrial applications such as nonvolatile memories, but reaction pathways for crystalline-to-amorphous phase-change on picosecond timescales remain unknown. Femtosecond laser excitation and an ultrashort x-ray probe are used to show the temporal separation of electronic and thermal effects in a long-lived ($>$100 ps) transient metastable state of Ge$_{2}$Sb$_{2}$Te$_{5}$ with muted interatomic interaction induced by a weakening of resonant bonding. Due to a specific electronic state, the lattice undergoes a reversible nondestructive modification over a nanoscale region, remaining cold for 4 ps. An independent time-resolved x-ray absorption fine structure experiment confirms the existence of an intermediate state with disordered bonds. This newly unveiled effect allows the utilization of non-thermal ultra-fast pathways enabling artificial manipulation of the switching process, ultimately leading to a redefined speed limit, and improved energy efficiency and reliability of phase-change memory technologies.
0
1
0
0
0
0
A Density Result for Real Hyperelliptic Curves
Let $\{\infty^+, \infty^-\}$ be the two points above $\infty$ on the real hyperelliptic curve $H: y^2 = (x^2 - 1) \prod_{i=1}^{2g} (x - a_i)$. We show that the divisor $([\infty^+] - [\infty^-])$ is torsion in $\operatorname{Jac} H$ for a dense set of $(a_1, a_2, \ldots, a_{2g}) \in (-1, 1)^{2g}$. In fact, we prove by degeneration to a nodal $\mathbb{P}^1$ that an associated period map has derivative generically of full rank.
0
0
1
0
0
0
On a class of integrable systems of Monge-Ampère type
We investigate a class of multi-dimensional two-component systems of Monge-Ampère type that can be viewed as generalisations of heavenly-type equations appearing in self-dual Ricci-flat geometry. Based on the Jordan-Kronecker theory of skew-symmetric matrix pencils, a classification of normal forms of such systems is obtained. All two-component systems of Monge-Ampère type turn out to be integrable, and can be represented as the commutativity conditions of parameter-dependent vector fields. Geometrically, systems of Monge-Ampère type are associated with linear sections of the Grassmannians. This leads to an invariant differential-geometric characterisation of the Monge-Ampère property.
0
1
1
0
0
0
When is the mode functional the Bayes classifier?
In classification problems, the mode of the conditional probability distribution, i.e., the most probable category, is the Bayes classifier under zero-one or misclassification loss. Under any other cost structure, the mode fails to persist as the Bayes classifier.
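The first claim has a one-line proof: for any classifier $c$, the zero-one risk is $R(c) = \mathbb{E}[1 - P(Y = c(X) \mid X)] \ge \mathbb{E}[1 - \max_y P(Y = y \mid X)]$, and the lower bound is attained exactly by the conditional mode $c^*(x) = \arg\max_y P(Y = y \mid X = x)$.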
0
0
1
1
0
0
Unsupervised Motion Artifact Detection in Wrist-Measured Electrodermal Activity Data
One of the main benefits of a wrist-worn computer is its ability to collect a variety of physiological data in a minimally intrusive manner. Among these data, electrodermal activity (EDA) is readily collected and provides a window into a person's emotional and sympathetic responses. EDA data collected using a wearable wristband are easily influenced by motion artifacts (MAs) that may significantly distort the data and degrade the quality of analyses performed on the data if not identified and removed. Prior work has demonstrated that MAs can be successfully detected using supervised machine learning algorithms on a small data set collected in a lab setting. In this paper, we demonstrate that unsupervised learning algorithms perform competitively with supervised algorithms for detecting MAs on EDA data collected in both a lab-based setting and a real-world setting comprising about 23 hours of data. We also find, somewhat surprisingly, that incorporating accelerometer data as well as EDA improves detection accuracy only slightly for supervised algorithms and significantly degrades the accuracy of unsupervised algorithms.
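A minimal unsupervised detector in the spirit of the comparison above: window the EDA signal, extract simple summary features, and flag outlying windows. The feature set, window length, and contamination rate are illustrative assumptions, not the paper's configuration:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def detect_artifacts(eda, fs=4, win_s=5, contamination=0.05):
        win = fs * win_s
        n = len(eda) // win
        segs = eda[: n * win].reshape(n, win)
        # Per-window features: level, variability, largest jump.
        feats = np.column_stack([segs.mean(1), segs.std(1),
                                 np.abs(np.diff(segs, axis=1)).max(1)])
        flags = IsolationForest(contamination=contamination,
                                random_state=0).fit_predict(feats)
        return flags == -1   # True where a window looks like an artifact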
1
0
0
0
0
0
CredSaT: Credibility Ranking of Users in Big Social Data incorporating Semantic Analysis and Temporal Factor
The widespread use of big social data has pointed the research community in several significant directions. In particular, the notion of social trust has attracted a great deal of attention from information processors (computer scientists) and information consumers (formal organizations). This is evident in various applications such as recommendation systems, viral marketing and expertise retrieval. Hence, it is essential to have frameworks that can temporally measure users' credibility in all domains categorised under big social data. This paper presents CredSaT (Credibility incorporating Semantic analysis and Temporal factor): a fine-grained users' credibility analysis framework for big social data. A novel metric that includes both new and current features, as well as the temporal factor, is harnessed to establish the credibility ranking of users. Experiments on a real-world dataset demonstrate the effectiveness and applicability of our model in identifying highly domain-based trustworthy users. Further, CredSaT shows a capacity for capturing spammers and other anomalous users.
1
0
0
0
0
0
On singular Finsler foliation
In this paper we introduce the concept of singular Finsler foliation, which generalizes the concepts of Finsler actions, Finsler submersions and (regular) Finsler foliations. We show that if $\mathcal{F}$ is a singular Finsler foliation on a Randers manifold $(M,Z)$ with Zermelo data $(\mathtt{h},W)$, then $\mathcal{F}$ is a singular Riemannian foliation on the Riemannian manifold $(M,\mathtt{h})$. As a direct consequence we infer that the regular leaves are equifocal submanifolds (a generalization of isoparametric submanifolds) when the wind $W$ is an infinitesimal homothety of $\mathtt{h}$ (e.g., when $W$ is a Killing vector field or $M$ has constant Finsler curvature). We also present a slice theorem that relates local singular Finsler foliations on Finsler manifolds to singular Finsler foliations on Minkowski spaces.
0
0
1
0
0
0
Early Experiences with Crowdsourcing Airway Annotations in Chest CT
Measuring airways in chest computed tomography (CT) images is important for characterizing diseases such as cystic fibrosis, yet very time-consuming to perform manually. Machine learning algorithms offer an alternative, but need large sets of annotated data to perform well. We investigate whether crowdsourcing can be used to gather airway annotations which can serve directly for measuring the airways, or as training data for the algorithms. We generate image slices at known locations of airways and request untrained crowd workers to outline the airway lumen and airway wall. Our results show that the workers are able to interpret the images, but that the instructions are too complex, leading to many unusable annotations. After excluding unusable annotations, quantitative results show medium to high correlations with expert measurements of the airways. Based on this positive experience, we describe a number of further research directions and provide insight into the challenges of crowdsourcing in medical images from the perspective of first-time users.
1
0
0
0
0
0
Convergent Iteration in Sobolev Space for Time Dependent Closed Quantum Systems
Time dependent quantum systems have become indispensable in science and its applications, particularly at the atomic and molecular levels. Here, we discuss the approximation of closed time dependent quantum systems on bounded domains, via iterative methods in Sobolev space based upon evolution operators. Recently, existence and uniqueness of weak solutions were demonstrated by a contractive fixed point mapping defined by the evolution operators. Convergent successive approximation is then guaranteed. This article uses the same mapping to define quadratically convergent Newton and approximate Newton methods. Estimates for the constants used in the convergence estimates are provided. The evolution operators are ideally suited to serve as the framework for this operator approximation theory, since the Hamiltonian is time dependent. In addition, the hypotheses required to guarantee quadratic convergence of the Newton iteration build naturally upon the hypotheses used for the existence/uniqueness theory.
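Schematically, with $\Phi$ the contractive fixed-point map defined by the evolution operators and assuming Fréchet differentiability, the Newton iteration for $F(u) = u - \Phi(u) = 0$ reads $u_{k+1} = u_k - (I - \Phi'(u_k))^{-1}(u_k - \Phi(u_k))$; quadratic convergence then requires the additional hypotheses on $\Phi'$ discussed in the paper.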
0
0
1
0
0
0
Prototyping and Experimentation of a Closed-Loop Wireless Power Transmission with Channel Acquisition and Waveform Optimization
A systematic design of adaptive waveform for Wireless Power Transfer (WPT) has recently been proposed and shown through simulations to lead to significant performance benefits compared to traditional non-adaptive and heuristic waveforms. In this study, we design the first prototype of a closed-loop wireless power transfer system with adaptive waveform optimization based on Channel State Information acquisition. The prototype consists of three important blocks, namely the channel estimator, the waveform optimizer, and the energy harvester. Software Defined Radio (SDR) prototyping tools are used to implement a wireless power transmitter and a channel estimator, and a voltage doubler rectenna is designed to work as an energy harvester. A channel adaptive waveform with 8 sinewaves is shown through experiments to improve the average harvested DC power at the rectenna output by 9.8% to 36.8% over a non-adaptive design with the same number of sinewaves.
1
0
0
0
0
0
Image-based immersed boundary model of the aortic root
Each year, approximately 300,000 heart valve repair or replacement procedures are performed worldwide, including approximately 70,000 aortic valve replacement surgeries in the United States alone. This paper describes progress in constructing anatomically and physiologically realistic immersed boundary (IB) models of the dynamics of the aortic root and ascending aorta. This work builds on earlier IB models of fluid-structure interaction (FSI) in the aortic root, which previously achieved realistic hemodynamics over multiple cardiac cycles, but which also were limited to simplified aortic geometries and idealized descriptions of the biomechanics of the aortic valve cusps. By contrast, the model described herein uses an anatomical geometry reconstructed from patient-specific computed tomography angiography (CTA) data, and employs a description of the elasticity of the aortic valve leaflets based on a fiber-reinforced constitutive model fit to experimental tensile test data. Numerical tests show that the model is able to resolve the leaflet biomechanics in diastole and early systole at practical grid spacings. The model is also used to examine differences in the mechanics and fluid dynamics yielded by fresh valve leaflets and glutaraldehyde-fixed leaflets similar to those used in bioprosthetic heart valves. Although there are large differences in the leaflet deformations during diastole, the differences in the open configurations of the valve models are relatively small, and nearly identical hemodynamics are obtained in all cases considered.
1
1
0
0
0
0
Large-Scale Cox Process Inference using Variational Fourier Features
Gaussian process modulated Poisson processes provide a flexible framework for modelling spatiotemporal point patterns. So far this has been restricted to one dimension, binning to a pre-determined grid, or small data sets of up to a few thousand data points. Here we introduce Cox process inference based on Fourier features. This sparse representation induces global rather than local constraints on the function space and is computationally efficient. This allows us to formulate a grid-free approximation that scales well with the number of data points and the size of the domain. We demonstrate that this allows MCMC approximations to the non-Gaussian posterior. We also find that, in practice, Fourier features have more consistent optimization behavior than previous approaches. Our approximate Bayesian method can fit over 100,000 events with complex spatiotemporal patterns in three dimensions on a single GPU.
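A toy one-dimensional version of the idea: with the intensity modelled through a Fourier feature basis as $\lambda(x) = (w \cdot \phi(x))^2$, the integral term of the Poisson process likelihood becomes the quadratic form $w^\top M w$ with $M = \int \phi \phi^\top$. The sketch below computes $M$ by simple quadrature for brevity (the paper exploits the structure of the features instead) and is not the authors' variational scheme:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    T, m = 10.0, 20                          # domain [0, T], basis size
    omega = rng.normal(scale=2.0, size=m)
    b = rng.uniform(0, 2 * np.pi, size=m)
    phi = lambda x: np.cos(np.outer(x, omega) + b)     # (n, m) features

    events = rng.uniform(0, T, size=60)                # stand-in event data
    grid = np.linspace(0, T, 2000)
    M = phi(grid).T @ phi(grid) * (T / len(grid))      # ~ integral of phi phi^T

    def neg_loglik(w):
        f = phi(events) @ w
        return -(np.log(f ** 2 + 1e-12).sum() - w @ M @ w)

    w_hat = minimize(neg_loglik, x0=rng.normal(size=m)).x
    intensity = (phi(grid) @ w_hat) ** 2               # fitted rate on the grid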
0
0
0
1
0
0
Buildings-to-Grid Integration Framework
This paper puts forth a mathematical framework for Buildings-to-Grid (BtG) integration in smart cities. The framework explicitly couples power grid and building's control actions and operational decisions, and can be utilized by buildings and power grids operators to simultaneously optimize their performance. Simplified dynamics of building clusters and building-integrated power networks with algebraic equations are presented---both operating at different time-scales. A model predictive control (MPC)-based algorithm that formulates the BtG integration and accounts for the time-scale discrepancy is developed. The formulation captures dynamic and algebraic power flow constraints of power networks and is shown to be numerically advantageous. The paper analytically establishes that the BtG integration yields a reduced total system cost in comparison with decoupled designs where grid and building operators determine their controls separately. The developed framework is tested on standard power networks that include thousands of buildings modeled using industrial data. Case studies demonstrate building energy savings and significant frequency regulation, while these findings carry over in network simulations with nonlinear power flows and mismatch in building model parameters. Finally, simulations indicate that the performance does not significantly worsen when there is uncertainty in the forecasted weather and base load conditions.
1
0
1
0
0
0
Oxygen Partial Pressure during Pulsed Laser Deposition: Deterministic Role on Thermodynamic Stability of Atomic Termination Sequence at SrRuO3/BaTiO3 Interface
With recent trends toward miniaturizing oxide-based devices, the need for atomic-scale control of surface/interface structures by pulsed laser deposition (PLD) has increased. In particular, realizing uniform atomic termination at the surface/interface is highly desirable. However, a lack of understanding of the surface formation mechanism in PLD has limited deliberate control of surface/interface atomic stacking sequences. Here, taking the prototypical SrRuO3/BaTiO3/SrRuO3 (SRO/BTO/SRO) heterostructure as a model system, we investigated the formation of different interfacial termination sequences (BaO-RuO2 or TiO2-SrO) with oxygen partial pressure (PO2) during PLD. We found that a uniform SrO-TiO2 termination sequence at the SRO/BTO interface can be achieved by lowering the PO2 to 5 mTorr, regardless of the total background gas pressure (Ptotal), growth mode, or growth rate. Our results indicate that the thermodynamic stability of the BTO surface at the low-energy kinetics stage of PLD can play an important role in surface/interface termination formation. This work paves the way for realizing termination engineering in functional oxide heterostructures.
0
1
0
0
0
0
A Data-Driven Analysis of the Influence of Care Coordination on Trauma Outcome
OBJECTIVE: To test the hypothesis that variation in care coordination is related to LOS. DESIGN: We applied a spectral co-clustering methodology to simultaneously infer groups of patients and care coordination patterns, in the form of interaction networks of health care professionals, from electronic medical record (EMR) utilization data. The care coordination pattern for each patient group was represented by standard social network characteristics and its relationship with hospital LOS was assessed via a negative binomial regression with a 95% confidence interval. SETTING AND PATIENTS: This study focuses on 5,588 adult patients hospitalized for trauma at the Vanderbilt University Medical Center. The EMRs were accessed by healthcare professionals from 179 operational areas during 158,467 operational actions. MAIN OUTCOME MEASURES: Hospital LOS for trauma inpatients, as an indicator of care coordination efficiency. RESULTS: Three general types of care coordination patterns were discovered, each of which was affiliated with a specific patient group. The first patient group exhibited the shortest hospital LOS and was managed by a care coordination pattern that involved the smallest number of operational areas (102 areas, as opposed to 125 and 138 for the other patient groups), but exhibited the largest number of collaborations between operational areas (e.g., an average of 27.1 connections per operational area compared to 22.5 and 23.3 for the other two groups). The hospital LOS for the second and third patient groups was 14 hours (P = 0.024) and 10 hours (P = 0.042) longer than the first patient group, respectively.
1
0
0
0
0
0
Construction of Non-asymptotic Confidence Sets in 2-Wasserstein Space
In this paper, we consider a probabilistic setting where the probability measures are considered to be random objects. We propose a procedure for constructing non-asymptotic confidence sets for empirical barycenters in 2-Wasserstein space and develop the idea further into the construction of a non-parametric two-sample test that is then applied to the detection of structural breaks in data with complex geometry. Both procedures mainly rely on the idea of multiplier bootstrap (Spokoiny and Zhilova (2015), Chernozhukov et al. (2014)). The main focus lies on probability measures that have commuting covariance matrices and belong to the same scatter-location family: we prove the validity of a bootstrap procedure that allows one to compute confidence sets and critical values for a Wasserstein-based two-sample test.
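In one dimension the construction is easy to visualize: the 2-Wasserstein barycenter of empirical measures has quantile function equal to the average of the sample quantile functions, and multipliers reweight the summands to mimic the fluctuation of the barycenter. A simplified sketch, with scaling constants omitted (the paper works with scatter-location families in general dimension):

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, B = 30, 200, 1000               # measures, grid size, bootstrap reps
    qs = np.linspace(0.01, 0.99, m)
    samples = [rng.normal(loc=rng.normal(), size=500) for _ in range(n)]
    Q = np.array([np.quantile(s, qs) for s in samples])  # quantile functions
    bary = Q.mean(axis=0)                                # barycenter quantiles

    stats = np.empty(B)
    for i in range(B):
        xi = rng.normal(size=n)                          # bootstrap multipliers
        pert = (xi[:, None] * (Q - bary)).mean(axis=0)
        stats[i] = np.sqrt(np.mean(pert ** 2))           # W2-type norm
    radius = np.quantile(stats, 0.95)  # 95% confidence radius around `bary`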
0
0
1
1
0
0
Limits on statistical anisotropy from BOSS DR12 galaxies using bipolar spherical harmonics
We measure statistically anisotropic signatures imprinted in three-dimensional galaxy clustering using bipolar spherical harmonics (BipoSHs) in both Fourier space and configuration space. We then constrain a well-known quadrupolar anisotropy parameter $g_{2M}$ in the primordial power spectrum, parametrized by $P(\vec{k}) = \bar{P}(k) [ 1 + \sum_{M} g_{2M} Y_{2M}(\hat{k}) ]$, with $M$ determining the direction of the anisotropy. Such an anisotropic signal is easily contaminated by artificial asymmetries due to the specific survey geometry. We precisely estimate the contaminated signal and finally subtract it from the data. Using the galaxy samples obtained by the Baryon Oscillation Spectroscopic Survey Data Release 12, we find no evidence for a violation of statistical isotropy: $g_{2M}$ is consistent with zero within the $2\sigma$ level for all $M$. The $g_{2M}$-type anisotropy can originate from a primordial curvature power spectrum involving a direction-dependent modulation $g_* (\hat{k} \cdot \hat{p})^2$. The bound on $g_{2M}$ is translated into $g_*$ as $-0.09 < g_* < 0.08$ at the $95\%$ confidence level when $\hat{p}$ is marginalized over.
0
1
0
0
0
0
A Central Limit Theorem for Wasserstein type distances between two different laws
This article is dedicated to the estimation of Wasserstein distances and Wasserstein costs between two distinct continuous distributions $F$ and $G$ on $\mathbb{R}$. The estimator is based on the order statistics of (possibly dependent) samples of $F$ and of $G$, respectively. We prove the consistency and the asymptotic normality of our estimators. Keywords: central limit theorems, generalized Wasserstein distances, empirical processes, strong approximation, dependent samples.
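For equal sample sizes the estimator reduces to matching order statistics: sort both samples and average $|X_{(i)} - Y_{(i)}|^p$. A direct implementation:

    import numpy as np

    def wasserstein_p(x, y, p=2):
        x, y = np.sort(x), np.sort(y)        # order statistics
        return (np.abs(x - y) ** p).mean() ** (1 / p)

    rng = np.random.default_rng(0)
    d = wasserstein_p(rng.normal(size=1000), rng.normal(1.0, size=1000))
    # d estimates W_2(N(0,1), N(1,1)) = 1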
0
0
1
1
0
0
Confidence Intervals for Stochastic Arithmetic
Quantifying errors and losses due to the use of Floating-Point (FP) calculations in industrial scientific computing codes is an important part of the Verification, Validation and Uncertainty Quantification (VVUQ) process. Stochastic Arithmetic is one way to model and estimate FP losses of accuracy, which scales well to large, industrial codes. It exists in different flavors, such as CESTAC or MCA, implemented in various tools such as CADNA, Verificarlo or Verrou. These methodologies and tools are based on the idea that FP losses of accuracy can be modeled via randomness. Therefore, they share the same need to perform a statistical analysis of program results in order to estimate the significance of the results. In this paper, we propose a framework to perform a solid statistical analysis of Stochastic Arithmetic. This framework unifies all existing definitions of the number of significant digits (CESTAC and MCA), and also proposes a new quantity of interest: the number of digits contributing to the accuracy of the results. Sound confidence intervals are provided for all estimators, both in the case of normally distributed results and in the general case. The use of this framework is demonstrated by two case studies of large, industrial codes: Europlexus and code_aster.
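Given repeated runs of a program under random perturbations (the CESTAC/MCA setting), a common point estimate of the number of significant decimal digits is $s = -\log_{10}(\hat\sigma / |\hat\mu|)$. The sketch below computes this estimate only; the paper's contribution, sound confidence intervals around such estimates, is not reproduced here:

    import numpy as np

    def significant_digits(results):
        r = np.asarray(results, dtype=float)
        mu, sigma = r.mean(), r.std(ddof=1)
        if sigma == 0.0:
            return np.inf                 # all runs agree exactly
        return -np.log10(sigma / abs(mu))

    # e.g. three stochastic-arithmetic runs of the same computation:
    print(significant_digits([0.9999712, 1.0000034, 0.9999845]))  # ~4.8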
1
0
0
1
0
0
Contaminated speech training methods for robust DNN-HMM distant speech recognition
Despite the significant progress made in the last years, state-of-the-art speech recognition technologies provide a satisfactory performance only in the close-talking condition. Robustness of distant speech recognition in adverse acoustic conditions, on the other hand, remains a crucial open issue for future applications of human-machine interaction. To this end, several advances in speech enhancement, acoustic scene analysis, and acoustic modeling have recently contributed to improving the state-of-the-art in the field. One of the most effective approaches to deriving robust acoustic modeling is based on using contaminated speech, which has proved helpful in reducing the acoustic mismatch between training and testing conditions. In this paper, we revise this classical approach in the context of modern DNN-HMM systems, and propose the adoption of three methods, namely asymmetric context windowing, close-talk based supervision, and close-talk based pre-training. The experimental results, obtained using both real and simulated data, show a significant advantage in using these three methods, overall providing a 15% error rate reduction compared to the baseline systems. The same performance trend is confirmed whether using a small, high-quality training set or a large one.
1
0
0
0
0
0
Co-segmentation for Space-Time Co-located Collections
We present a co-segmentation technique for space-time co-located image collections. These prevalent collections capture various dynamic events, usually by multiple photographers, and may contain multiple co-occurring objects which are not necessarily part of the intended foreground object, resulting in ambiguities for traditional co-segmentation techniques. Thus, to disambiguate what the common foreground object is, we introduce a weakly-supervised technique, where we assume only a small seed, given in the form of a single segmented image. We take a distributed approach, where local belief models are propagated and reinforced with similar images. Our technique progressively expands the foreground and background belief models across the entire collection. The technique exploits the power of the entire image set without building a global model, and thus successfully overcomes large variability in the appearance of the common foreground object. We demonstrate that our method outperforms previous co-segmentation techniques on challenging space-time co-located collections, including dense benchmark datasets which were adapted for our novel problem setting.
1
0
0
0
0
0
Speaker Diarization with LSTM
For many years, i-vector based audio embedding techniques were the dominant approach for speaker verification and speaker diarization applications. However, mirroring the rise of deep learning in various domains, neural network based audio embeddings, also known as d-vectors, have consistently demonstrated superior speaker verification performance. In this paper, we build on the success of d-vector based speaker verification systems to develop a new d-vector based approach to speaker diarization. Specifically, we combine LSTM-based d-vector audio embeddings with recent work in non-parametric clustering to obtain a state-of-the-art speaker diarization system. Our system is evaluated on three standard public datasets, suggesting that d-vector based diarization systems offer significant advantages over traditional i-vector based systems. We achieved a 12.0% diarization error rate on NIST SRE 2000 CALLHOME, while our model is trained with out-of-domain data from voice search logs.
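A schematic version of the clustering stage: given per-segment d-vectors (the LSTM encoder is not shown), cluster a cosine affinity matrix to obtain speaker labels. The random embeddings and the use of scikit-learn's spectral clustering are stand-ins for illustration, not the paper's exact clustering:

    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    d_vectors = rng.normal(size=(40, 256))        # 40 speech segments
    X = d_vectors / np.linalg.norm(d_vectors, axis=1, keepdims=True)
    affinity = (X @ X.T + 1.0) / 2.0              # cosine mapped to [0, 1]
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    # labels[i] is the speaker index assigned to segment i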
1
0
0
1
0
0
Almost h-conformal semi-invariant submersions from almost quaternionic Hermitian manifolds
As a generalization of Riemannian submersions, horizontally conformal submersions, semi-invariant submersions, h-semi-invariant submersions, almost h-semi-invariant submersions, conformal semi-invariant submersions, we introduce h-conformal semi-invariant submersions and almost h-conformal semi-invariant submersions from almost quaternionic Hermitian manifolds onto Riemannian manifolds. We study their properties: the geometry of foliations, the conditions for total manifolds to be locally product manifolds, the conditions for such maps to be totally geodesic, etc. Finally, we give some examples of such maps.
0
0
1
0
0
0
Use of Source Code Similarity Metrics in Software Defect Prediction
In recent years, defect prediction has received a great deal of attention in the empirical software engineering world. Predicting software defects before the maintenance phase is very important, not only to decrease maintenance costs but also to increase the overall quality of a software product. Different types of product-, process-, and developer-based software metrics have been proposed so far to measure the defectiveness of a software system. This paper suggests using a novel set of software metrics which are based on the similarities detected among the source code files in a software project. To find source code similarities among different files of a software system, plagiarism and clone detection techniques are used. Two simple similarity metrics are calculated for each file, considering its overall similarity to the defective and non-defective files in the project. Using these similarity metrics, we predict whether a specific file is defective or not. Our experiments on 10 open source data sets show that, depending on the amount of detected similarity, the proposed metrics can achieve significantly better performance compared to the existing static code metrics in terms of the area under the curve (AUC).
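A sketch of the two per-file metrics, assuming a precomputed pairwise similarity matrix from a clone/plagiarism detector and binary defect labels for the training files; aggregating by maximum is an assumption, the paper fixes the exact form:

    import numpy as np

    def similarity_features(S, defective):
        """S: (n, n) symmetric similarities; defective: (n,) bool labels."""
        n = S.shape[0]
        feats = np.zeros((n, 2))
        for i in range(n):
            others = np.arange(n) != i          # exclude self-similarity
            feats[i, 0] = S[i, others & defective].max(initial=0.0)
            feats[i, 1] = S[i, others & ~defective].max(initial=0.0)
        return feats   # columns: similarity to defective / clean files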
1
0
0
0
0
0
Connecting pairwise spheres by depth: DCOPS
We extend the classical notion of the spherical depth in $\mathbb{R}^k$ to the important setup of data on a Riemannian manifold. We show that this notion of depth satisfies a set of desirable properties. For the empirical version of this depth function, both uniform consistency and the asymptotic distribution are studied. Consistency is also shown for functional data. The behaviour of the depth is illustrated through several examples for Riemannian manifold data.
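In the Euclidean case the definition is concrete: the spherical depth of $x$ is the probability that $x$ falls in the closed ball whose diameter is the segment $[X_1, X_2]$ for an i.i.d. pair, which happens iff $(X_1 - x) \cdot (X_2 - x) \le 0$. The empirical version in $\mathbb{R}^k$ follows directly (the manifold case in the paper replaces these inner products with geodesic constructions):

    import numpy as np
    from itertools import combinations

    def spherical_depth(x, X):
        diffs = X - x                        # recentre the sample at x
        pairs = list(combinations(range(len(X)), 2))
        inside = sum((diffs[i] @ diffs[j]) <= 0 for i, j in pairs)
        return inside / len(pairs)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    print(spherical_depth(np.zeros(3), X))   # near the centre: about 0.5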
0
0
1
1
0
0
Estimating the spectral gap of a trace-class Markov operator
The utility of a Markov chain Monte Carlo algorithm is, in large part, determined by the size of the spectral gap of the corresponding Markov operator. However, calculating (and even approximating) the spectral gaps of practical Monte Carlo Markov chains in statistics has proven to be an extremely difficult and often insurmountable task, especially when these chains move on continuous state spaces. In this paper, a method for accurate estimation of the spectral gap is developed for general state space Markov chains whose operators are non-negative and trace-class. The method is based on the fact that the second largest eigenvalue (and hence the spectral gap) of such operators can be bounded above and below by simple functions of the power sums of the eigenvalues. These power sums often have nice integral representations. A classical Monte Carlo method is proposed to estimate these integrals, and a simple sufficient condition for finite variance is provided. This leads to asymptotically valid confidence intervals for the second largest eigenvalue (and the spectral gap) of the Markov operator. The efficiency of the method is studied. For illustration, the method is applied to Albert and Chib's (1993) data augmentation (DA) algorithm for Bayesian probit regression, and also to a DA algorithm for Bayesian linear regression with non-Gaussian errors (Liu, 1996).
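To see why power sums control the second eigenvalue, write $s_p = \sum_i \lambda_i^p$ for the eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge 0$ of the operator. Then $(s_p - \lambda_1^p)/(s_{p-1} - \lambda_1^{p-1}) \le \lambda_2 \le (s_p - \lambda_1^p)^{1/p}$, and both sides converge to $\lambda_2$ as $p$ grows (assuming $\lambda_2 > 0$). This is the standard sandwich such power sums yield; the paper's precise bounds and their Monte Carlo estimation may differ in detail.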
0
0
1
1
0
0
Tidal synchronization of an anelastic multi-layered body: Titan's synchronous rotation
This paper presents an analytical tidal theory for a viscoelastic multi-layered body with an arbitrary number of homogeneous layers. Starting with the static equilibrium figure, modified to include tide and differential rotation, and using the Newtonian creep approach, we find the dynamical equilibrium figure of the deformed body, which allows us to calculate the tidal potential and the forces acting on the tide-generating body, as well as the variations of the rotation and orbital elements. In the particular case of the two-layer model, we study tidal synchronization when gravitational coupling and friction at the interface between the layers are added. For high relaxation factors (low viscosity), the stationary solution of each layer is synchronous with the orbital mean motion (n) when the orbit is circular, but the spin rates increase if the orbital eccentricity increases. For low relaxation factors (high viscosity), as in planetary satellites, if friction remains low, each layer can be trapped in different spin-orbit resonances with frequencies n/2, n, 3n/2, .... We apply the theory to Titan. The main results are: i) the rotational constraint does not allow us to confirm or reject the existence of a subsurface ocean in Titan; and ii) the crust-atmosphere exchange of angular momentum can be neglected. Using the rotation estimate based on Cassini's observation, we limit the possible value of the shell relaxation factor, when a subsurface ocean is assumed, to 10^-9 Hz, which corresponds to a shell viscosity of 10^18 Pa s, depending on the ocean's thickness and viscosity values. In the case in which the ocean does not exist, the maximum shell relaxation factor is one order of magnitude smaller and the corresponding minimum shell viscosity is one order of magnitude higher.
0
1
0
0
0
0
On Lie algebras responsible for zero-curvature representations of multicomponent (1+1)-dimensional evolution PDEs
Zero-curvature representations (ZCRs) are one of the main tools in the theory of integrable $(1+1)$-dimensional PDEs. According to the preprint arXiv:1212.2199, for any given $(1+1)$-dimensional evolution PDE one can define a sequence of Lie algebras $F^p$, $p=0,1,2,3,\dots$, such that representations of these algebras classify all ZCRs of the PDE up to local gauge equivalence. ZCRs depending on derivatives of arbitrary finite order are allowed. Furthermore, these algebras provide necessary conditions for existence of Backlund transformations between two given PDEs. The algebras $F^p$ are defined in arXiv:1212.2199 in terms of generators and relations. In the present paper, we describe some methods to study the structure of the algebras $F^p$ for multicomponent $(1+1)$-dimensional evolution PDEs. Using these methods, we compute the explicit structure (up to non-essential nilpotent ideals) of the Lie algebras $F^p$ for the Landau-Lifshitz, nonlinear Schrodinger equations, and for the $n$-component Landau-Lifshitz system of Golubchik and Sokolov for any $n>3$. In particular, this means that for the $n$-component Landau-Lifshitz system we classify all ZCRs (depending on derivatives of arbitrary finite order), up to local gauge equivalence and up to killing nilpotent ideals in the corresponding Lie algebras. The presented methods to classify ZCRs can be applied also to other $(1+1)$-dimensional evolution PDEs. Furthermore, the obtained results can be used for proving non-existence of Backlund transformations between some PDEs, which will be described in forthcoming publications.
0
1
1
0
0
0
TRAMP: Tracking by a Real-time AMbisonic-based Particle filter
This article presents a multiple sound source localization and tracking system, fed by the Eigenmike array. The First Order Ambisonics (FOA) format is used to build a pseudointensity-based spherical histogram, from which the source position estimates are deduced. These instantaneous estimates are processed by a well-known tracking system relying on a set of particle filters. While the novelty within localization and tracking is incremental, the fully functional, complete, real-time system based on these algorithms is proposed for the first time. As such, it could serve as an additional baseline method for the LOCATA challenge.
1
0
0
0
0
0
Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity
In this paper we present a scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps of point density (in our experiments three orders of magnitude). The backbone of our approach is a combination of octree data partitioning, local Delaunay tetrahedralization and graph cut optimization. Graph cut optimization is used twice, once to extract surface hypotheses from local Delaunay tetrahedralizations and once to merge overlapping surface hypotheses even when the local tetrahedralizations do not share the same topology. This formulation allows us to obtain a constant memory consumption per sub-problem while at the same time retaining the density independent interpolation properties of the Delaunay-based optimization. On multiple public datasets, we demonstrate that our approach is highly competitive with the state-of-the-art in terms of accuracy, completeness and outlier resilience. Further, we demonstrate the multi-scale potential of our approach by processing a newly recorded dataset with 2 billion points and a point density variation of more than four orders of magnitude, requiring less than 9 GB of RAM per process.
1
0
0
0
0
0
General Bounds for Incremental Maximization
We propose a theoretical framework to capture incremental solutions to cardinality constrained maximization problems. The defining characteristic of our framework is that the cardinality/support of the solution is bounded by a value $k\in\mathbb{N}$ that grows over time, and we allow the solution to be extended one element at a time. We investigate the best-possible competitive ratio of such an incremental solution, i.e., the worst ratio over all $k$ between the incremental solution after $k$ steps and an optimum solution of cardinality $k$. We define a large class of problems that contains many important cardinality constrained maximization problems like maximum matching, knapsack, and packing/covering problems. We provide a general $2.618$-competitive incremental algorithm for this class of problems, and show that no algorithm can have competitive ratio below $2.18$ in general. In the second part of the paper, we focus on the inherently incremental greedy algorithm that increases the objective value as much as possible in each step. This algorithm is known to be $1.58$-competitive for submodular objective functions, but it has unbounded competitive ratio for the class of incremental problems mentioned above. We define a relaxed submodularity condition for the objective function, capturing problems like maximum (weighted) ($b$-)matching and a variant of the maximum flow problem. We show that the greedy algorithm has competitive ratio (exactly) $2.313$ for the class of problems that satisfy this relaxed submodularity condition. Note that our upper bounds on the competitive ratios translate to approximation ratios for the underlying cardinality constrained problems.
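The inherently incremental greedy algorithm is the obvious one: at each step, add the element that increases the objective most, so the length-$k$ prefix of its output is the incremental solution for every cardinality $k$ simultaneously. A sketch on a toy monotone coverage function:

    def incremental_greedy(universe, f, k_max):
        chosen, order = set(), []
        for _ in range(k_max):
            best = max((e for e in universe if e not in chosen),
                       key=lambda e: f(frozenset(chosen | {e})), default=None)
            if best is None:
                break
            chosen.add(best)
            order.append(best)
        return order

    # Toy coverage instance: f(S) = number of ground elements covered.
    sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
    print(incremental_greedy(set(sets), f, k_max=3))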
1
0
1
0
0
0
Arrangements of pseudocircles on surfaces
A pseudocircle is a simple closed curve on some surface. Arrangements of pseudocircles were introduced by Grünbaum, who defined them as collections of pseudocircles that pairwise intersect in exactly two points, at which they cross. There are several variations on this notion in the literature, one of which requires that no three pseudocircles have a point in common. Working under this definition, Ortner proved that an arrangement of pseudocircles is embeddable into the sphere if and only if all of its subarrangements of size at most $4$ are embeddable into the sphere. Ortner asked if an analogous result held for embeddability into a compact orientable surface $\Sigma_g$ of genus $g>0$. In this paper we answer this question, under an even more general definition of an arrangement, in which the pseudocircles in the collection are not required to intersect each other, or that the intersections are crossings: it suffices to have one pseudocircle that intersects all other pseudocircles in the collection. We show that under this more general notion, an arrangement of pseudocircles is embeddable into $\Sigma_g$ if and only if all of its subarrangements of size at most $4g+5$ are embeddable into $\Sigma_g$, and that this can be improved to $4g+4$ under the concept of an arrangement used by Ortner. Our framework also allows us to generalize this result to arrangements of other objects, such as arcs.
1
0
1
0
0
0
Scalable methods for Bayesian selective inference
Modeled along the truncated approach in Panigrahi (2016), selection-adjusted inference in a Bayesian regime is based on a selective posterior. Such a posterior is determined together by a generative model imposed on data and the selection event that enforces a truncation on the assumed law. The effective difference between the selective posterior and the usual Bayesian framework is reflected in the use of a truncated likelihood. The normalizer of the truncated law in the adjusted framework is the probability of the selection event; this is typically intractable and it leads to the computational bottleneck in sampling from such a posterior. The current work lays out a primal-dual approach to solving an approximating optimization problem in order to provide valid post-selective Bayesian inference. The selection procedures are posed as data-queries that solve a randomized version of a convex learning program; such randomized queries have the advantage of preserving more leftover information for inference. We propose a randomization scheme under which the optimization has separable constraints that result in a partially separable objective in lower dimensions for many commonly used selective queries to approximate the otherwise intractable selective posterior. We show that the approximating optimization under a Gaussian randomization gives a valid exponential rate of decay for the selection probability on a large deviation scale. We offer a primal-dual method to solve the optimization problem leading to an approximate posterior; this allows us to exploit the usual merits of a Bayesian machinery in both low and high dimensional regimes where the underlying signal is effectively sparse. We show that the adjusted estimates empirically demonstrate better frequentist properties in comparison to the unadjusted estimates based on the usual posterior, when applied to a wide range of constrained, convex data queries.
0
0
0
1
0
0
Modified Recursive Cholesky (Rchol) Algorithm: An Explicit Estimation and Pseudo-inverse of Correlation Matrices
The Cholesky decomposition plays an important role in finding the inverse of correlation matrices, as it is fast and numerically stable for linear system solving, inversion, and factorization compared to singular value decomposition (SVD), QR factorization, and LU decomposition. Different methods exist to find the Cholesky decomposition of a given matrix. This paper presents a comparative study of the proposed RChol algorithm against these conventional methods. The RChol algorithm is an explicit way to estimate the modified Cholesky factors of a dynamic correlation matrix.
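For context, the standard route from a Cholesky factor to the inverse of a correlation matrix: factor once, then solve against the identity. This is the conventional baseline the abstract compares against (shown with SciPy); the RChol recursion itself is not reproduced here:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    R = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])          # a valid correlation matrix
    c, low = cho_factor(R)                   # R = L L^T, packed factor
    R_inv = cho_solve((c, low), np.eye(3))   # solve R X = I
    assert np.allclose(R @ R_inv, np.eye(3))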
0
0
1
0
0
0
A Utility-Driven Multi-Queue Admission Control Solution for Network Slicing
The combination of recent emerging technologies such as network function virtualization (NFV) and network programmability (SDN) gave birth to the Network Slicing revolution. 5G networks consist of multi-tenant infrastructures capable of offering leased network "slices" to new customers (e.g., vertical industries), enabling a new telecom business model: Slice-as-a-Service (SlaaS). In this paper, we aim i) to study the slicing admission control problem by means of a multi-queuing system for heterogeneous tenant requests, ii) to derive its statistical behavior model, and iii) to provide a utility-based admission control optimization. Our results analyze the extent to which the proposed SlaaS system is approximately Markovian and evaluate its performance as compared to legacy solutions.
1
0
0
0
0
0
Statistical inference for network samples using subgraph counts
We consider that a network is an observation, and a collection of observed networks forms a sample. In this setting, we provide methods to test whether all observations in a network sample are drawn from a specified model. We achieve this by deriving, under the null of the graphon model, the joint asymptotic properties of average subgraph counts as the number of observed networks increases but the number of nodes in each network remains finite. In doing so, we do not require that each observed network contains the same number of nodes, or is drawn from the same distribution. Our results yield joint confidence regions for subgraph counts, and therefore methods for testing whether the observations in a network sample are drawn from: a specified distribution, a specified model, or from the same model as another network sample. We present simulation experiments and an illustrative example on a sample of brain networks where we find that highly creative individuals' brains present significantly more short cycles.
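A stripped-down version of the test statistic, using edge density as the subgraph count and a plug-in normal interval; the paper treats joint vectors of subgraph counts under general graphon nulls:

    import numpy as np
    import networkx as nx

    def edge_density_test(graphs, null_density, z_crit=1.96):
        dens = np.array([nx.density(g) for g in graphs])
        se = dens.std(ddof=1) / np.sqrt(len(dens))
        z = (dens.mean() - null_density) / se
        return abs(z) > z_crit         # reject at ~5% under normality

    sample = [nx.gnp_random_graph(20, 0.3, seed=s) for s in range(50)]
    print(edge_density_test(sample, null_density=0.3))   # usually False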
1
0
0
1
0
0