Dataset columns (each record below gives these fields in this order):

  title                   string   lengths 7 to 239
  abstract                string   lengths 7 to 2.76k
  cs                      int64    0 or 1
  phy                     int64    0 or 1
  math                    int64    0 or 1
  stat                    int64    0 or 1
  quantitative biology    int64    0 or 1
  quantitative finance    int64    0 or 1
Objective Bayesian Analysis for Change Point Problems
In this paper we present a loss-based approach to change point analysis. In particular, we look at the problem from two perspectives. The first focuses on the definition of a prior when the number of change points is known a priori. The second estimates the number of change points by using a loss-based approach recently introduced in the literature, which treats change point estimation as a model selection exercise. We show the performance of the proposed approach on simulated data and real data sets.
labels: cs=0  phy=0  math=1  stat=1  quantitative biology=0  quantitative finance=0
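The abstract above frames change point estimation as model selection. As a loose illustration of that framing, here is a generic penalized least-squares baseline in Python, not the authors' loss-based prior; the dynamic program, the BIC-style penalty, and the simulated data are assumptions for the demo:

```python
import numpy as np

def best_sse_per_k(y, max_k):
    """Minimum residual sum of squares of a piecewise-constant fit of y,
    for each number of change points k = 0..max_k (dynamic programming)."""
    n = len(y)
    c1 = np.concatenate([[0.0], np.cumsum(y)])
    c2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    def sse(i, j):  # SSE of a constant fit to y[i:j], requires j > i
        s, m = c1[j] - c1[i], j - i
        return c2[j] - c2[i] - s * s / m
    dp = np.full((max_k + 1, n + 1), np.inf)
    dp[0, 1:] = [sse(0, j) for j in range(1, n + 1)]
    for k in range(1, max_k + 1):
        for j in range(k + 1, n + 1):
            dp[k, j] = min(dp[k - 1, t] + sse(t, j) for t in range(k, j))
    return dp[:, n]

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])  # one true change
sse_k = best_sse_per_k(y, max_k=3)
n = len(y)
# BIC-style score: n*log(SSE/n) plus a per-change-point penalty (an assumption)
scores = [n * np.log(s / n) + 2 * k * np.log(n) for k, s in enumerate(sse_k)]
print("selected number of change points:", int(np.argmin(scores)))  # expect 1
```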
On the Mechanism of Large Amplitude Flapping of Inverted Foil in a Uniform Flow
An elastic foil interacting with a uniform flow with its trailing edge clamped, also known as the inverted foil, exhibits a wide range of complex self-induced flapping regimes such as large amplitude flapping (LAF), deformed flapping and flipped flapping. Here, we perform three-dimensional numerical experiments to examine the role of vortex shedding and vortex-vortex interaction on the LAF response at Reynolds number Re=30,000. We investigate the dynamics of the inverted foil in a novel configuration wherein we introduce a fixed splitter plate at the trailing edge to suppress the vortex shedding from the trailing edge and inhibit the interaction between the counter-rotating vortices. We find that inhibiting this interaction has an insignificant effect on the transverse flapping amplitudes, due to a relatively weak coupling between the counter-rotating vortices emanating from the leading and trailing edges. However, the inhibition of the trailing edge vortex reduces the streamwise flapping amplitude, the flapping frequency and the net strain energy of the foil. To further generalize our understanding of the LAF, we next perform low-Reynolds number (Re$\in[0.1,50]$) simulations for identical foil properties to assess the impact of vortex shedding on large amplitude flapping. Due to the absence of the vortex shedding process in the low-$Re$ regime, the inverted foil no longer exhibits periodic flapping. However, the flexible foil still loses its stability through divergence instability and undergoes a large static deformation. Finally, we introduce an analogous analytical model for the LAF based on the dynamics of an elastically mounted flat plate undergoing flow-induced pitching oscillations in a uniform stream.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Ultrafast Epitaxial Growth of Metre-Sized Single-Crystal Graphene on Industrial Cu Foil
A foundation of the modern technology that uses single-crystal silicon has been the growth of high-quality single-crystal Si ingots with diameters up to 12 inches or larger. For many applications of graphene, large-area high-quality (ideally single-crystal) material will be enabling. Since the first growth on copper foil a decade ago, inch-sized single-crystal graphene has been achieved. We present here the growth, in 20 minutes, of a 5 x 50 cm2 graphene film with > 99% ultra-highly oriented grains. This growth was achieved by: (i) synthesis of sub-metre-sized single-crystal Cu(111) foil as substrate; (ii) epitaxial growth of graphene islands on the Cu(111) surface; (iii) seamless merging of such graphene islands into a graphene film with high single crystallinity and (iv) the ultrafast growth of graphene film. These achievements were realized by a temperature-driven annealing technique to produce single-crystal Cu(111) from industrial polycrystalline Cu foil and the marvellous effects of a continuous oxygen supply from an adjacent oxide. The as-synthesized graphene film, with very few misoriented grains (if any), has a mobility up to ~ 23,000 cm2V-1s-1 at 4 K and a room temperature sheet resistance of ~ 230 ohm/square. It is very likely that this approach can be scaled up to achieve exceptionally large and high-quality graphene films with single crystallinity, and thus realize various industrial-level applications at a low cost.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Multidimensional Sampling of Isotropically Bandlimited Signals
A new lower bound on the average reconstruction error variance of multidimensional sampling and reconstruction is presented. It applies to sampling on arbitrary lattices in arbitrary dimensions, assuming a stochastic process with constant, isotropically bandlimited spectrum and reconstruction by the best linear interpolator. The lower bound is exact for any lattice at sufficiently high and low sampling rates. The two threshold rates where the error variance deviates from the lower bound give two optimality criteria for sampling lattices. It is proved that at low rates, near the first threshold, the optimal lattice is the dual of the best sphere-covering lattice, which for the first time establishes a rigorous relation between optimal sampling and optimal sphere covering. A previously known result is confirmed at high rates, near the second threshold, namely, that the optimal lattice is the dual of the best sphere-packing lattice. Numerical results quantify the performance of various lattices for sampling and support the theoretical optimality criteria.
labels: cs=1  phy=0  math=1  stat=1  quantitative biology=0  quantitative finance=0
Asymptotic structure of almost eigenfunctions of drift Laplacians on conical ends
We use a weighted variant of the frequency functions introduced by Almgren to prove sharp asymptotic estimates for almost eigenfunctions of the drift Laplacian associated to the Gaussian weight on an asymptotically conical end. As a consequence, we obtain a purely elliptic proof of a result of L. Wang on the uniqueness of self-shrinkers of the mean curvature flow asymptotic to a given cone. Another consequence is a unique continuation property for self-expanders of the mean curvature flow that flow from a cone.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Palomar Optical Spectrum of Hyperbolic Near-Earth Object A/2017 U1
We present optical spectroscopy of the recently discovered hyperbolic near-Earth object A/2017 U1, taken on 25 Oct 2017 at Palomar Observatory. Although our data are of very low signal-to-noise, they indicate a very red surface at optical wavelengths without significant absorption features.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Electrically driven quantum light emission in electromechanically-tuneable photonic crystal cavities
A single quantum dot deterministically coupled to a photonic crystal environment constitutes an indispensable elementary unit to both generate and manipulate single photons in next-generation quantum photonic circuits. To date, the scaling of the number of these quantum nodes on a fully-integrated chip has been prevented by the use of optical pumping strategies that require a bulky off-chip laser, along with the lack of methods to control the energies of nano-cavities and emitters. Here, we concurrently overcome these limitations by demonstrating electrical injection of single excitonic lines within a nano-electro-mechanically tuneable photonic crystal cavity. When an electrically-driven dot line is brought into resonance with a photonic crystal mode, its emission rate is enhanced. Anti-bunching experiments reveal the quantum nature of these on-demand sources emitting in the telecom range. These results represent an important step forward in the realization of integrated quantum optics experiments featuring multiple electrically-triggered Purcell-enhanced single-photon sources embedded in a reconfigurable semiconductor architecture.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
The Odyssey Approach for Optimizing Federated SPARQL Queries
Answering queries over a federation of SPARQL endpoints requires combining data from more than one data source. Optimizing queries in such scenarios is particularly challenging not only because of (i) the large variety of possible query execution plans that correctly answer the query but also because (ii) there is only limited access to statistics about schema and instance data of remote sources. To overcome these challenges, most federated query engines rely on heuristics to reduce the space of possible query execution plans or on dynamic programming strategies to produce optimal plans. Nevertheless, these plans may still exhibit a high number of intermediate results or high execution times because of heuristics and inaccurate cost estimations. In this paper, we present Odyssey, an approach that uses statistics that allow for a more accurate cost estimation for federated queries and therefore enables Odyssey to produce better query execution plans. Our experimental results show that Odyssey produces query execution plans that are better in terms of data transfer and execution time than state-of-the-art optimizers. Our experiments using the FedBench benchmark show execution time gains of at least 25 times on average.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
A variant of Gromov's problem on Hölder equivalence of Carnot groups
It is unknown if there exists a locally $\alpha$-Hölder homeomorphism $f:\mathbb{R}^3\to \mathbb{H}^1$ for any $\frac{1}{2}< \alpha\le \frac{2}{3}$, although the identity map $\mathbb{R}^3\to \mathbb{H}^1$ is locally $\frac{1}{2}$-Hölder. More generally, Gromov asked: Given $k$ and a Carnot group $G$, for which $\alpha$ does there exist a locally $\alpha$-Hölder homeomorphism $f:\mathbb{R}^k\to G$? Here, we equip a Carnot group $G$ with the Carnot-Carathéodory metric. In 2014, Balogh, Hajlasz, and Wildrick considered a variant of this problem. These authors proved that if $k>n$, there does not exist an injective, $(\frac{1}{2}+)$-Hölder mapping $f:\mathbb{R}^k\to \mathbb{H}^n$ that is also locally Lipschitz as a mapping into $\mathbb{R}^{2n+1}$. For their proof, they use the fact that $\mathbb{H}^n$ is purely $k$-unrectifiable for $k>n$. In this paper, we will extend their result from the Heisenberg group to model filiform groups and Carnot groups of step at most three. We will now require that the Carnot group is purely $k$-unrectifiable. The main key to our proof will be showing that $(\frac{1}{2}+)$-Hölder maps $f:\mathbb{R}^k\to G$ that are locally Lipschitz into Euclidean space, are weakly contact. Proving weak contactness in these two settings requires understanding the relationship between the algebraic and metric structures of the Carnot group. We will use coordinates of the first and second kind for Carnot groups.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
The Trees of Hanoi
The game of the Towers of Hanoi is generalized to binary trees. First, a straightforward solution of the game is discussed. Second, a shorter solution is presented, which is then shown to be optimal.
labels: cs=1  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
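For context, the classical three-peg game that the paper generalizes to binary trees has the textbook recursive solution below (standard material, not the tree variant studied in the paper); it uses the provably optimal 2^n - 1 moves:

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source to target, using spare as the auxiliary peg."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves: 2**3 - 1
```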
On the risk of convex-constrained least squares estimators under misspecification
We consider the problem of estimating the mean of a noisy vector. When the mean lies in a convex constraint set, the least squares projection of the random vector onto the set is a natural estimator. Properties of the risk of this estimator, such as its asymptotic behavior as the noise tends to zero, have been well studied. We instead study the behavior of this estimator under misspecification, that is, without the assumption that the mean lies in the constraint set. For appropriately defined notions of risk in the misspecified setting, we prove a generalization of a low noise characterization of the risk due to Oymak and Hassibi in the case of a polyhedral constraint set. An interesting consequence of our results is that the risk can be much smaller in the misspecified setting than in the well-specified setting. We also discuss consequences of our result for isotonic regression.
labels: cs=0  phy=0  math=1  stat=1  quantitative biology=0  quantitative finance=0
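A minimal sketch of the estimator discussed above, with the nonnegative orthant standing in for a generic convex constraint set and a deliberately misspecified mean (outside the set). Measuring error against the projection of the mean is one possible notion of risk here, an assumption of this demo rather than the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma, n_rep = 50, 1.0, 2000
mu = np.full(d, -0.5)            # true mean lies OUTSIDE the constraint set
proj_mu = np.maximum(mu, 0.0)    # projection of the mean onto the set

risk = 0.0
for _ in range(n_rep):
    y = mu + sigma * rng.normal(size=d)
    est = np.maximum(y, 0.0)     # least squares projection of y onto the orthant
    risk += np.sum((est - proj_mu) ** 2) / n_rep

print("Monte Carlo risk per coordinate:", risk / d)
```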
Benchmarks for Image Classification and Other High-dimensional Pattern Recognition Problems
A good classification method should yield more accurate results than simple heuristics. But there are classification problems, especially high-dimensional ones like the ones based on image/video data, for which simple heuristics can work quite accurately; the structure of the data in such problems is easy to uncover without any sophisticated or computationally expensive method. On the other hand, some problems have a structure that can only be found with sophisticated pattern recognition methods. We are interested in quantifying the difficulty of a given high-dimensional pattern recognition problem. We consider the case where the patterns come from two pre-determined classes and where the objects are represented by points in a high-dimensional vector space. However, the framework we propose is extendable to an arbitrarily large number of classes. We propose classification benchmarks based on simple random projection heuristics. Our benchmarks are 2D curves parameterized by the classification error and computational cost of these simple heuristics. Each curve divides the plane into a "positive-gain" and a "negative-gain" region. The latter contains methods that are ill-suited for the given classification problem. The former is divided into two by the curve asymptote; methods that lie in the small region under the curve but right of the asymptote merely provide a computational gain but no structural advantage over the random heuristics. We prove that the curve asymptotes are optimal (i.e. at Bayes error) in some cases, and thus no sophisticated method can provide a structural advantage over the random heuristics. Such classification problems, an example of which we present in our numerical experiments, provide poor ground for testing new pattern classification methods.
labels: cs=0  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
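One plausible reading of the "simple random projection heuristics" used as benchmarks, sketched under assumptions (a single projected direction plus a scalar threshold, scored on training error only; the paper's curves also track computational cost, which this omits):

```python
import numpy as np

def random_projection_error(X, y, n_dirs=100, seed=0):
    """Smallest training error of 'project onto a random direction, then
    threshold' over n_dirs random directions."""
    rng = np.random.default_rng(seed)
    best = 1.0
    for _ in range(n_dirs):
        z = X @ rng.normal(size=X.shape[1])          # 1D random projection
        for t in np.quantile(z, np.linspace(0.05, 0.95, 19)):
            err = min(np.mean((z > t) != y), np.mean((z <= t) != y))
            best = min(best, err)
    return best

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (500, 100)),   # class 0
               rng.normal(0.3, 1, (500, 100))])  # class 1: mean shift only
y = np.repeat([False, True], 500)
print("heuristic error:", random_projection_error(X, y))
```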
On the Binary Lossless Many-Help-One Problem with Independently Degraded Helpers
Although the rate region for the lossless many-help-one problem with independently degraded helpers is already "solved", its solution is given in terms of a convex closure over a set of auxiliary random variables. Thus, for any such problem, an optimization over the set of auxiliary random variables is required to truly solve the rate region. Providing the solution is surprisingly difficult even for an example as basic as binary sources. In this work, we derive a simple and tight inner bound on the rate region's lower boundary for the lossless many-help-one problem with independently degraded helpers when specialized to sources that are binary, uniformly distributed, and interrelated through symmetric channels. This scenario finds important applications in emerging cooperative communication schemes in which the direct-link transmission is assisted via multiple lossy relaying links. Numerical results indicate that the derived inner bound proves increasingly tight as the helpers become more degraded.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
Efficient Correlated Topic Modeling with Topic Embedding
Correlated topic modeling has been limited to small model and problem sizes due to its high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t. the topic size. We further speed up variational inference with a fast sampler to exploit the sparsity of topic occurrence. Extensive experiments show that our approach is capable of handling model and data scales which are several orders of magnitude larger than existing correlation results, without sacrificing modeling quality, providing competitive or superior performance in document classification and retrieval.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
Structure-aware error bounds for linear classification with the zero-one loss
We prove risk bounds for binary classification in high-dimensional settings when the sample size is allowed to be smaller than the dimensionality of the training set observations. In particular, we prove upper bounds for both 'compressive learning' by empirical risk minimization (ERM) (that is, when the ERM classifier is learned from data that have been projected from high dimensions onto a randomly selected low-dimensional subspace) as well as uniform upper bounds in the full high-dimensional space. A novel tool we employ in both settings is the 'flipping probability' of Durrant and Kaban (ICML 2013), which we use to capture benign geometric structures that make a classification problem 'easy' in the sense of demanding a relatively low sample size for guarantees of good generalization. Furthermore, our bounds also enable us to explain or draw connections between several existing successful classification algorithms. Finally, we show empirically that our bounds are informative enough in practice to serve as the objective function for learning a classifier (by using them to do so).
labels: cs=0  phy=0  math=1  stat=1  quantitative biology=0  quantitative finance=0
Dimensional reduction and the equivariant Chern character
We propose a dimensional reduction procedure in the Stolz--Teichner framework of supersymmetric Euclidean field theories (EFTs) that is well-suited in the presence of a finite gauge group or, more generally, for field theories over an orbifold. As an illustration, we give a geometric interpretation of the Chern character for manifolds with an action by a finite group.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Practical Processing of Mobile Sensor Data for Continual Deep Learning Predictions
We present a practical approach for processing mobile sensor time series data for continual deep learning predictions. The approach comprises data cleaning, normalization, capping, time-based compression, and finally classification with a recurrent neural network. We demonstrate the effectiveness of the approach in a case study with 279 participants. On the basis of sparse sensor events, the network continually predicts whether the participants would attend to a notification within 10 minutes. Compared to a random baseline, the classifier achieves a 40% performance increase (AUC of 0.702) on a withheld test set. This approach allows us to forgo resource-intensive, domain-specific, error-prone feature engineering, which may drastically increase the applicability of machine learning to mobile phone sensor data.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
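A minimal sketch of the capping and normalization steps named in the pipeline above (the 0.99 quantile cap and z-scoring are assumptions for illustration; cleaning, time-based compression, and the recurrent network are omitted):

```python
import numpy as np

def cap_and_normalize(x, cap_quantile=0.99):
    """Cap extreme sensor values at a high quantile, then z-score."""
    capped = np.minimum(x, np.quantile(x, cap_quantile))
    return (capped - capped.mean()) / (capped.std() + 1e-8)

x = np.random.default_rng(2).exponential(1.0, 10_000)  # heavy-tailed readings
print(cap_and_normalize(x)[:5])
```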
Formal Privacy for Functional Data with Gaussian Perturbations
Motivated by the rapid rise in statistical tools in Functional Data Analysis, we consider the Gaussian mechanism for achieving differential privacy with parameter estimates taking values in a (potentially infinite-dimensional) separable Banach space. Using classic results from probability theory, we show how densities over function spaces can be utilized to achieve the desired differential privacy bounds. This extends prior results of Hall et al. (2013) to a much broader class of statistical estimates and summaries, including "path level" summaries, nonlinear functionals, and full function releases. By focusing on Banach spaces, we provide a deeper picture of the challenges for privacy with complex data, especially the role regularization plays in balancing utility and privacy. Using an application to penalized smoothing, we explicitly highlight this balance in the context of mean function estimation. Simulations and an application to diffusion tensor imaging are briefly presented, with extensive additions included in a supplement.
labels: cs=0  phy=0  math=1  stat=1  quantitative biology=0  quantitative finance=0
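For orientation, here is the classical finite-dimensional Gaussian mechanism that the paper extends to Banach spaces, using the textbook noise calibration for (epsilon, delta)-differential privacy with epsilon < 1; the bounded-data mean and its sensitivity are illustrative assumptions:

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, seed=None):
    """Release value + N(0, sigma^2 I) with
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon."""
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    rng = np.random.default_rng(seed)
    return value + rng.normal(0.0, sigma, size=np.shape(value))

x = np.random.default_rng(3).uniform(size=(1000, 5))   # records in [0, 1]^5
sens = np.sqrt(5) / 1000                               # L2 sensitivity of the mean
print(gaussian_mechanism(x.mean(axis=0), sens, epsilon=0.5, delta=1e-5))
```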
An algorithm to reconstruct convex polyhedra from their face normals and areas
A well-known result in the study of convex polyhedra, due to Minkowski, is that a convex polyhedron is uniquely determined (up to translation) by the directions and areas of its faces. The theorem guarantees existence of the polyhedron associated to given face normals and areas, but does not provide a constructive way to find it explicitly. This article provides an algorithm to reconstruct 3D convex polyhedra from their face normals and areas, based on a method by Lasserre to compute the volume of a convex polyhedron in $\mathbb{R}^n$. A Python implementation of the algorithm is available at this https URL.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
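A small sanity check of Minkowski's necessary closure condition on the input data, a precondition test rather than the reconstruction algorithm described in the article:

```python
import numpy as np

def closure_residual(normals, areas):
    """Norm of the area-weighted sum of unit face normals; this is zero for
    data that can come from a closed convex polyhedron."""
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return float(np.linalg.norm((np.asarray(areas)[:, None] * n).sum(axis=0)))

# Unit cube: six unit-area faces with axis-aligned normals
cube_normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(closure_residual(cube_normals, [1.0] * 6))  # 0.0
```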
Electron affinities of water clusters from density-functional and many-body-perturbation theory
In this work, we assess the accuracy of dielectric-dependent hybrid density functionals and many-body perturbation theory methods for the calculation of electron affinities of small water clusters, including hydrogen-bonded water dimer and water hexamer isomers. We show that many-body perturbation theory in the G$_0$W$_0$ approximation starting with the dielectric-dependent hybrid functionals predicts electron affinities of clusters within 0.1 eV of the coupled-cluster results with single, double, and perturbative triple excitations.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Grid-converged Solution and Analysis of the Unsteady Viscous Flow in a Two-dimensional Shock Tube
The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru & Tenaud (Daru, V. & Tenaud, C. 2001 Evaluation of TVD high resolution schemes for unsteady viscous shocked flows. Computers & Fluids 30, 89-113) proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers have tried this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at a Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly-developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analysed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
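A standard ingredient of such grid-convergence studies is the observed order of accuracy computed from three systematically refined grids. Here is a minimal sketch with made-up values; the paper's actual data come from its gas-kinetic scheme:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order p from solutions on three grids refined by a constant
    ratio r: p = ln((f_c - f_m) / (f_m - f_f)) / ln(r)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Hypothetical values of a flow functional on three grids (refinement ratio 2)
print(observed_order(1.1000, 1.0250, 1.0063, 2.0))  # ~2: second-order convergence
```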
Exact zero modes in twisted Kitaev chains
We study the Kitaev chain under generalized twisted boundary conditions, for which both the amplitudes and the phases of the boundary couplings can be tuned at will. We explicitly show the presence of exact zero modes for large chains belonging to the topological phase in the most general case, in spite of the absence of "edges" in the system. For specific values of the phase parameters, we rigorously obtain the condition for the presence of the exact zero modes in finite chains, and show that the zero modes obtained are indeed localized. The full spectrum of the twisted chains with zero chemical potential is analytically presented. Finally, we demonstrate the persistence of zero modes (level crossing) even in the presence of disorder or interactions.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Generative Temporal Models with Memory
We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models' operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
Global research collaboration: Networks and partners in South East Asia
This is an empirical paper that addresses the role of bilateral and multilateral international co-authorships in the six leading science systems among the ASEAN group of countries (ASEAN6). The paper highlights the different ways that bilateral and multilateral co-authorships structure global networks and the collaborations of the ASEAN6. The paper looks at the influence of the collaboration styles of major collaborating countries of the ASEAN6, particularly the USA and Japan. It also highlights the role of bilateral and multilateral co-authorships in the production of knowledge in the leading specialisations of the ASEAN6. The discussion section offers some tentative explanations for major dynamics evident in the results and summarises the next steps in this research.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
Spontaneous and stimulus-induced coherent states of dynamically balanced neuronal networks
How the information microscopically processed by individual neurons is integrated and used in organising the macroscopic behaviour of an animal is a central question in neuroscience. Coherence of dynamics over different scales has been suggested as a clue to the mechanisms underlying this integration. Balanced excitation and inhibition amplify microscopic fluctuations to a macroscopic level and may provide a mechanism for generating coherent dynamics over the two scales. Previous theories of brain dynamics, however, have been restricted to cases in which population-averaged activities have been constrained to constant values, that is, to cases with no macroscopic degrees of freedom. In the present study, we investigate balanced neuronal networks with a nonzero number of macroscopic degrees of freedom coupled to microscopic degrees of freedom. In these networks, amplified microscopic fluctuations drive the macroscopic dynamics, while the macroscopic dynamics determine the statistics of the microscopic fluctuations. We develop a novel type of mean-field theory applicable to this class of interscale interactions, for which an analytical approach has previously been unknown. Irregular macroscopic rhythms similar to those observed in the brain emerge spontaneously as a result of such interactions. Microscopic inputs to a small number of neurons effectively entrain the whole network through the amplification mechanism. Neuronal responses become coherent as the magnitude of either the balanced excitation and inhibition or the external inputs is increased. Our mean-field theory successfully predicts the behaviour of the model. Our numerical results further suggest that the coherent dynamics can be used for selective read-out of information. In conclusion, our results show a novel form of neuronal information processing that bridges different scales, and advance our understanding of the brain.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Parallelized Kendall's Tau Coefficient Computation via SIMD Vectorized Sorting On Many-Integrated-Core Processors
Pairwise association measures are an important operation in data analytics. Kendall's tau coefficient is one widely used correlation coefficient for identifying non-linear relationships between ordinal variables. In this paper, we investigate a parallel algorithm that accelerates all-pairs Kendall's tau coefficient computation via single instruction multiple data (SIMD) vectorized sorting on Intel Xeon Phis, taking advantage of many processing cores and 512-bit SIMD vector instructions. To facilitate workload balancing and overcome the on-chip memory limitation, we propose a generic framework for symmetric all-pairs computation by building provable bijective functions between job identifiers and the coordinate space. Performance evaluation demonstrates that our algorithm on one 5110P Phi achieves two orders-of-magnitude speedups over 16-threaded MATLAB and three orders-of-magnitude speedups over sequential R, both running on high-end CPUs. Our algorithm also exhibits good distributed computing scalability with respect to the number of Phis. Source code and datasets are publicly available at this http URL.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
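A serial reference for the computation being accelerated (SciPy per pair). The flat job enumeration mirrors, in spirit, the job-id-to-coordinate bijection the paper constructs for load balancing, but none of the SIMD machinery is shown:

```python
import itertools
import numpy as np
from scipy.stats import kendalltau

def all_pairs_kendall(X):
    """Symmetric all-pairs Kendall's tau over the columns of X."""
    m = X.shape[1]
    tau = np.eye(m)
    # One flat job index per unordered pair (i, j), i < j
    for job, (i, j) in enumerate(itertools.combinations(range(m), 2)):
        tau[i, j] = tau[j, i] = kendalltau(X[:, i], X[:, j])[0]
    return tau

X = np.random.default_rng(4).normal(size=(200, 5))
print(np.round(all_pairs_kendall(X), 2))
```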
Effect of Blast Exposure on Gene-Gene Interactions
Repeated exposure to low-level blast may initiate a range of adverse health problems such as traumatic brain injury (TBI). Although many studies have successfully identified genes associated with TBI, the cellular mechanisms underpinning TBI are not fully elucidated. In this study, we investigated the underlying relationships among genes through constructing transcript Bayesian networks using RNA-seq data. The data for pre- and post-blast transcripts, which were collected on 33 individuals in an Army training program, combined with our systems approach provide a unique opportunity to investigate the effect of blast-wave exposure on gene-gene interactions. Digging into the networks, we identified four subnetworks related to the immune system and inflammatory process that are disrupted due to the exposure. Among genes with relatively high fold change in their transcript expression level, ATP6V1G1, B2M, BCL2A1, PELI, S100A8, TRIM58 and ZNF654 showed major impact on the dysregulation of the gene-gene interactions. This study reveals how repeated exposures to traumatic conditions increase the level of fold change of transcript expression and hypothesizes new targets for further experimental studies.
labels: cs=0  phy=0  math=0  stat=0  quantitative biology=1  quantitative finance=0
A sparse linear algebra algorithm for fast computation of prediction variances with Gaussian Markov random fields
Gaussian Markov random fields are used in a large number of disciplines in machine vision and spatial statistics. The models take advantage of sparsity in matrices introduced through the Markov assumptions, and all operations in inference and prediction use sparse linear algebra operations that scale well with dimensionality. Yet, for very high-dimensional models, exact computation of predictive variances of linear combinations of variables is generally computationally prohibitive, and approximate methods (generally interpolation or conditional simulation) are typically used instead. A set of conditions is established under which the variances of linear combinations of random variables can be computed exactly using the Takahashi recursions. The ensuing computational simplification has wide applicability and may be used to enhance several software packages where model fitting is seated in a maximum-likelihood framework. The resulting algorithm is ideal for use in a variety of spatial statistical applications, including \emph{LatticeKrig} modelling, statistical downscaling, and fixed rank kriging. It can compute hundreds of thousands of exact predictive variances of linear combinations on a standard desktop with ease, even when large spatial GMRF models are used.
labels: cs=0  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
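The quantity at stake, sketched for a toy precision matrix: the predictive variance of a linear combination a'x for x ~ N(0, Q^{-1}) is a' Q^{-1} a, computable with one sparse solve per combination. This brute-force baseline, not the Takahashi recursions the paper analyzes, is what becomes prohibitive at scale:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
# Toy 1D GMRF precision matrix (tridiagonal), standing in for a large spatial model
Q = sp.diags([-1.0, 2.01, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

a = np.zeros(n)
a[10:20] = 0.1                  # a local average as the linear combination
var = a @ splu(Q).solve(a)      # var(a'x) = a' Q^{-1} a, via one sparse solve
print(var)
```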
Rational motivic path spaces and Kim's relative unipotent section conjecture
We initiate a study of path spaces in the nascent context of "motivic dga's", under development in doctoral work by Gabriella Guzman. This enables us to reconstruct the unipotent fundamental group of a pointed scheme from the associated augmented motivic dga, and provides us with a factorization of Kim's relative unipotent section conjecture into several smaller conjectures with a homotopical flavor. Based on a conversation with Joseph Ayoub, we prove that the path spaces of the punctured projective line over a number field are concentrated in degree zero with respect to Levine's t-structure for mixed Tate motives. This constitutes a step in the direction of Kim's conjecture.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Convergence Rates of Latent Topic Models Under Relaxed Identifiability Conditions
In this paper we study the frequentist convergence rate for Latent Dirichlet Allocation (Blei et al., 2003) topic models. We show that the maximum likelihood estimator converges to one of the finitely many equivalent parameters in the Wasserstein distance metric at a rate of $n^{-1/4}$ without assuming separability or non-degeneracy of the underlying topics and/or the existence of more than three words per document, thus generalizing the previous works of Anandkumar et al. (2012, 2014) from an information-theoretical perspective. We also show that the $n^{-1/4}$ convergence rate is optimal in the worst case.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
Subspace Clustering of Very Sparse High-Dimensional Data
In this paper we consider the problem of clustering collections of very short texts using subspace clustering. This problem arises in many applications such as product categorisation, fraud detection, and sentiment analysis. The main challenge lies in the fact that the vectorial representation of short texts is both high-dimensional, due to the large number of unique terms in the corpus, and extremely sparse, as each text contains a very small number of words with no repetition. We propose a new, simple subspace clustering algorithm that relies on linear algebra to cluster such datasets. Experimental results on identifying product categories from product names obtained from the US Amazon website indicate that the algorithm can be competitive against state-of-the-art clustering algorithms.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
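A generic linear-algebra baseline for the task described (TF-IDF, truncated SVD, then k-means via scikit-learn). This illustrates the sparse high-dimensional representation the abstract describes, not the authors' subspace clustering algorithm, and the toy product names are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

texts = ["apple iphone 12 case", "samsung galaxy screen protector",
         "usb c charging cable", "organic green tea bags",
         "earl grey loose leaf tea", "herbal chamomile tea"]

X = TfidfVectorizer().fit_transform(texts)      # high-dimensional, very sparse
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z))
```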
Quarnet inference rules for level-1 networks
An important problem in phylogenetics is the construction of phylogenetic trees. One way to approach this problem, known as the supertree method, involves inferring a phylogenetic tree with leaves consisting of a set $X$ of species from a collection of trees, each having leaf-set some subset of $X$. In the 1980s, characterizations were given, in terms of certain inference rules, for when a collection of 4-leaved trees, one for each 4-element subset of $X$, can all be simultaneously displayed by a single supertree with leaf-set $X$. Recently, it has become of interest to extend such results to phylogenetic networks. These are a generalization of phylogenetic trees which can be used to represent reticulate evolution (where species can come together to form a new species). It has been shown that a certain type of phylogenetic network, called a level-1 network, can essentially be constructed from 4-leaved trees. However, the problem of providing appropriate inference rules for such networks remains unresolved. Here we show that by considering 4-leaved networks, called quarnets, as opposed to 4-leaved trees, it is possible to provide such rules. In particular, we show that these rules can be used to characterize when a collection of quarnets, one for each 4-element subset of $X$, can all be simultaneously displayed by a level-1 network with leaf-set $X$. The rules are an intriguing mixture of tree inference rules and an inference rule for building up a cyclic ordering of $X$ from orderings on subsets of $X$ of size 4. This opens up several new directions of research for inferring phylogenetic networks from smaller ones, which could yield new algorithms for solving the supernetwork problem in phylogenetics.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
21 cm Angular Power Spectrum from Minihalos as a Probe of Primordial Spectral Runnings
Measurements of 21 cm line fluctuations from minihalos have been discussed as a powerful probe of a wide range of cosmological models. However, previous studies have taken into account only the pixel variance, where contributions from different scales are integrated. In order to sort out information from different scales, we formulate the angular power spectrum of 21 cm line fluctuations from minihalos at different redshifts, which can enhance the constraining power enormously. By adopting this formalism, we investigate expected constraints on parameters characterizing the primordial power spectrum, particularly focusing on the spectral index $n_s$ and its runnings $\alpha_s$ and $\beta_s$. We show that future observations of 21 cm line fluctuations from minihalos, in combination with cosmic microwave background, can potentially probe these runnings as $\alpha_s \sim {\cal O}(10^{-3})$ and $\beta_s \sim {\cal O}(10^{-4})$. Its implications to the test of inflationary models are also discussed.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
The IRX-Beta Dust Attenuation Relation in Cosmological Galaxy Formation Simulations
We utilise a series of high-resolution cosmological zoom simulations of galaxy formation to investigate the relationship between the ultraviolet (UV) slope, beta, and the ratio of the infrared luminosity to UV luminosity (IRX) in the spectral energy distributions (SEDs) of galaxies. We employ dust radiative transfer calculations in which the SEDs of the stars in galaxies propagate through the dusty interstellar medium. Our main goals are to understand the origin of, and scatter in the IRX-beta relation; to assess the efficacy of simplified stellar population synthesis screen models in capturing the essential physics in the IRX-beta relation; and to understand systematic deviations from the canonical local IRX-beta relations in particular populations of high-redshift galaxies. Our main results follow. Galaxies that have young stellar populations with relatively cospatial UV and IR emitting regions and a Milky Way-like extinction curve fall on or near the standard Meurer relation. This behaviour is well captured by simplified screen models. Scatter in the IRX-beta relation is dominated by three major effects: (i) older stellar populations drive galaxies below the relations defined for local starbursts due to a reddening of their intrinsic UV SEDs; (ii) complex geometries in high-z heavily star forming galaxies drive galaxies toward blue UV slopes owing to optically thin UV sightlines; (iii) shallow extinction curves drive galaxies downward in the IRX-beta plane due to lowered NUV/FUV extinction ratios. We use these features of the UV slopes of galaxies to derive a fitting relation that reasonably collapses the scatter back toward the canonical local relation. Finally, we use these results to develop an understanding for the location of two particularly enigmatic populations of galaxies in the IRX-beta plane: z~2-4 dusty star forming galaxies, and z>5 star forming galaxies.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Strict convexity of the Mabuchi functional for energy minimizers
This paper has two parts. First, we discover an explicit formula for the complex Hessian of the weighted log-Bergman kernel on a parallelogram domain, and utilise this formula to give a new proof of the strict convexity of the Mabuchi functional along a smooth geodesic. Second, when a C^{1,1}-geodesic connects two non-degenerate energy minimizers, we also prove this strict convexity, by showing that such a geodesic must be non-degenerate and smooth.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Anonymous Variables in Imperative Languages
In this paper, we bring anonymous variables into imperative languages. Anonymous variables represent don't-care values and have proven useful in logic programming. To bring the same level of benefits into imperative languages, we describe an extension to C with anonymous variables.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
Simultaneously constraining the astrophysics of reionisation and the epoch of heating with 21CMMC
The cosmic 21 cm signal is set to revolutionise our understanding of the early Universe, allowing us to probe the 3D temperature and ionisation structure of the intergalactic medium (IGM). It will open a window onto the unseen first galaxies, showing us how their UV and X-ray photons drove the cosmic milestones of the epoch of reionisation (EoR) and epoch of heating (EoH). To facilitate parameter inference from the 21 cm signal, we previously developed 21CMMC: a Monte Carlo Markov Chain sampler of 3D EoR simulations. Here we extend 21CMMC to include simultaneous modelling of the EoH, resulting in a complete Bayesian inference framework for the astrophysics dominating the observable epochs of the cosmic 21 cm signal. We demonstrate that second generation interferometers, the Hydrogen Epoch of Reionisation Array (HERA) and Square Kilometre Array (SKA) will be able to constrain ionising and X-ray source properties of the first galaxies with a fractional precision of order $\sim1$-10 per cent (1$\sigma$). The ionisation history of the Universe can be constrained to within a few percent. Using our extended framework, we quantify the bias in EoR parameter recovery incurred by the common simplification of a saturated spin temperature in the IGM. Depending on the extent of overlap between the EoR and EoH, the recovered astrophysical parameters can be biased by $\sim3-10\sigma$.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Non-penalized variable selection in high-dimensional linear model settings via generalized fiducial inference
Standard penalized methods of variable selection and parameter estimation rely on the magnitude of coefficient estimates to decide which variables to include in the final model. However, coefficient estimates are unreliable when the design matrix is collinear. To overcome this challenge, an entirely new perspective on variable selection is presented within a generalized fiducial inference framework. This new procedure is able to effectively account for linear dependencies among subsets of covariates in a high-dimensional setting where $p$ can grow almost exponentially in $n$, as well as in the classical setting where $p \le n$. It is shown that the procedure very naturally assigns small probabilities to subsets of covariates which include redundancies by way of explicit $L_{0}$ minimization. Furthermore, with a typical sparsity assumption, it is shown that the proposed method is consistent in the sense that the probability of the true sparse subset of covariates converges in probability to 1 as $n \to \infty$, or as $n \to \infty$ and $p \to \infty$. Very reasonable conditions are needed, and little restriction is placed on the class of possible subsets of covariates to achieve this consistency result.
labels: cs=0  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
DNN Filter Bank Cepstral Coefficients for Spoofing Detection
With the development of speech synthesis techniques, automatic speaker verification systems face the serious challenge of spoofing attacks. In order to improve the reliability of speaker verification systems, we develop a new filter bank based cepstral feature, deep neural network filter bank cepstral coefficients (DNN-FBCC), to distinguish between natural and spoofed speech. The deep neural network filter bank is automatically generated by training a filter bank neural network (FBNN) using natural and synthetic speech. By adding restrictions on the training rules, the learned weight matrix of FBNN is band-limited and sorted by frequency, similar to the normal filter bank. Unlike the manually designed filter bank, the learned filter bank has different filter shapes in different channels, which can capture the differences between natural and synthetic speech more effectively. The experimental results on the ASVspoof 2015 database show that the Gaussian mixture model maximum-likelihood (GMM-ML) classifier trained by the new feature performs better than the state-of-the-art linear frequency cepstral coefficients (LFCC) based classifier, especially on detecting unknown attacks.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
The existence of positive least energy solutions for a class of Schrodinger-Poisson systems involving critical nonlocal term with general nonlinearity
The present study is concerned with the following Schrödinger-Poisson system involving a critical nonlocal term with general nonlinearity: $$ \left\{ \begin{array}{ll} -\Delta u+V(x)u-\phi |u|^3u= f(u), & x\in\mathbb{R}^3,\\ -\Delta \phi= |u|^5, & x\in\mathbb{R}^3. \end{array} \right. $$ Under certain assumptions on non-constant $V(x)$, the existence of a positive least energy solution is obtained by using some new analytical skills and a Pohožaev type manifold. In particular, the Ambrosetti-Rabinowitz type condition or monotonicity assumption on the nonlinearity is not necessary.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Up-down colorings of virtual-link diagrams and the necessity of Reidemeister moves of type II
We introduce an up-down coloring of a virtual-link diagram. The colorabilities give a lower bound of the minimum number of Reidemeister moves of type II which are needed between two 2-component virtual-link diagrams. By using the notion of a quandle cocycle invariant, we determine the necessity of Reidemeister moves of type II for a pair of diagrams of the trivial virtual-knot. This implies that for any virtual-knot diagram $D$, there exists a diagram $D'$ representing the same virtual-knot such that any sequence of generalized Reidemeister moves between them includes at least one Reidemeister move of type II.
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Comparison of SMT and RBMT; The Requirement of Hybridization for Marathi-Hindi MT
We present in this paper our work on a comparison between Statistical Machine Translation (SMT) and rule-based machine translation for translation from Marathi to Hindi. Rule-based systems, although robust, take a long time to build. On the other hand, statistical machine translation systems are easier to create, maintain and improve upon. We describe the development of a basic Marathi-Hindi SMT system and evaluate its performance. Through a detailed error analysis, we point out the relative strengths and weaknesses of both systems. Effectively, we shall see that even with a small training corpus a statistical machine translation system has many advantages for high quality domain specific machine translation over that of a rule-based counterpart.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
Introduction to the Special Issue on Approaches to Control Biological and Biologically Inspired Networks
The emerging field at the intersection of quantitative biology, network modeling, and control theory has enjoyed significant progress in recent years. This Special Issue brings together a selection of papers on complementary approaches to observe, identify, and control biological and biologically inspired networks. These approaches advance the state of the art in the field by addressing challenges common to many such networks, including high dimensionality, strong nonlinearity, uncertainty, and limited opportunities for observation and intervention. Because these challenges are not unique to biological systems, it is expected that many of the results presented in these contributions will also find applications in other domains, including physical, social, and technological networks.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=1  quantitative finance=0
Towards Classification of Web ontologies using the Horizontal and Vertical Segmentation
The new era of the Web is known as the semantic Web or the Web of data. The semantic Web depends on ontologies that are seen as one of its pillars. The bigger these ontologies are, the greater their potential for exploitation. However, when these ontologies become too big, other problems may appear, such as the complexity of loading large files into memory, the time needed to download such files and, especially, the time needed to reason over them. We discuss in this paper approaches for segmenting such big Web ontologies as well as their usefulness. The segmentation method extracts from an existing ontology a segment that represents a layer or a generation in the existing ontology, i.e. a horizontal extraction. The extracted segment should itself be an ontology.
labels: cs=1  phy=0  math=0  stat=0  quantitative biology=0  quantitative finance=0
An Efficient Version of the Bombieri-Vaaler Lemma
In their celebrated paper "On Siegel's Lemma", Bombieri and Vaaler found an upper bound on the height of integer solutions of systems of linear Diophantine equations. Calculating the bound directly, however, requires exponential time. In this paper, we present the bound in a different form that can be computed in polynomial time. We also give an elementary (and arguably simpler) proof for the bound.
labels: cs=1  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
Personalized and Private Peer-to-Peer Machine Learning
The rise of connected personal devices together with privacy concerns call for machine learning algorithms capable of leveraging the data of a large number of agents to learn personalized models under strong privacy requirements. In this paper, we introduce an efficient algorithm to address the above problem in a fully decentralized (peer-to-peer) and asynchronous fashion, with provable convergence rate. We show how to make the algorithm differentially private to protect against the disclosure of information about the personal datasets, and formally analyze the trade-off between utility and privacy. Our experiments show that our approach dramatically outperforms previous work in the non-private case, and that under privacy constraints, we can significantly improve over models learned in isolation.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
Interacting Chaplygin gas revisited
The implications of considering interaction between Chaplygin gas and a barotropic fluid with constant equation of state have been explored. The unique feature of this work is that, assuming an interaction $Q \propto H\rho_d$, analytic expressions for the energy density and pressure have been derived in terms of the Hypergeometric $_2\text{F}_1$ function. It is worthwhile to mention that an interacting Chaplygin gas model was considered in 2006 by Zhang and Zhu; nevertheless, analytic solutions for the continuity equations could not be determined assuming an interaction proportional to $H$ times the sum of the energy densities of Chaplygin gas and dust. Our model can successfully explain the transition from the early decelerating phase to the present phase of cosmic acceleration. Choosing the free parameters of our model through trial and error shows that recent observational data strongly favor $w_m=0$ and $w_m=-\frac{1}{3}$ over the $w_m=\frac{1}{3}$ case. Interestingly, the present model also incorporates the transition of dark energy into the phantom domain; however, future deceleration is forbidden.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
GALARIO: a GPU Accelerated Library for Analysing Radio Interferometer Observations
We present GALARIO, a computational library that exploits the power of modern graphical processing units (GPUs) to accelerate the analysis of observations from radio interferometers like ALMA or the VLA. GALARIO speeds up the computation of synthetic visibilities from a generic 2D model image or a radial brightness profile (for axisymmetric sources). On a GPU, GALARIO is 150 times faster than standard Python and 10 times faster than serial C++ code on a CPU. Highly modular, easy to use and to adopt in existing code, GALARIO comes as two compiled libraries, one for Nvidia GPUs and one for multicore CPUs, where both have the same functions with identical interfaces. GALARIO comes with Python bindings but can also be directly used in C or C++. The versatility and the speed of GALARIO open new analysis pathways that otherwise would be prohibitively time consuming, e.g. fitting high resolution observations of a large number of objects, or entire spectral cubes of molecular gas emission. It is a general tool that can be applied to any field that uses radio interferometer observations. The source code is available online at this https URL under the open source GNU Lesser General Public License v3.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
Focused time-lapse inversion of radio and audio magnetotelluric data
Geoelectrical techniques are widely used to monitor groundwater processes, while surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., the model regularization is based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms using iteratively reweighted least-squares), (4) constraining model updates to predefined ranges (i.e., using Lagrange Multipliers to only allow either increases or decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non l2 model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints to yield models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l1-norm solutions as they can resolve both sharp and smooth interfaces within the same model.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
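Point (3) above relies on iteratively reweighted least squares to approximate l1-type norms. A minimal generic IRLS sketch follows; the robust-regression setting and the epsilon smoothing are illustrative assumptions, not the inversion code used in the study:

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """Approximate argmin ||Ax - b||_1 by iteratively reweighted least squares:
    weight each residual by 1/|r| (smoothed by eps) and re-solve."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((A @ x - b) ** 2 + eps)
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 5))
b = A @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.normal(0, 0.1, 100)
b[::10] += 5.0                  # outliers, where the l1 norm beats l2
print(np.round(irls_l1(A, b), 2))
```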
Deep Exploration via Randomized Value Functions
We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
Spectral Norm Regularization for Improving the Generalizability of Deep Learning
We investigate the generalizability of deep learning based on the sensitivity to input perturbation. We hypothesize that high sensitivity to the perturbation of data degrades the performance on that data. To reduce the sensitivity to perturbation, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes the high spectral norm of weight matrices in neural networks. We provide supportive evidence for the abovementioned hypothesis by experimentally confirming that the models trained using spectral norm regularization exhibit better generalizability than other baseline methods.
labels: cs=1  phy=0  math=0  stat=1  quantitative biology=0  quantitative finance=0
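The penalty being described, sketched in NumPy: estimate each weight matrix's spectral norm (largest singular value) by power iteration and add it, squared and scaled, to the loss. A sketch under assumptions; the regularization strength is arbitrary and the paper's training-time details are omitted:

```python
import numpy as np

def spectral_norm(W, n_iter=30, seed=0):
    """Largest singular value of W via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

W = np.random.default_rng(6).normal(size=(64, 32))
sigma = spectral_norm(W)
print(sigma, np.linalg.svd(W, compute_uv=False)[0])  # power iteration vs exact
penalty = 0.01 * sigma ** 2   # term added to the task loss, one per weight matrix
```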
Estimating the chromospheric magnetic field from a revised NLTE modeling: the case of HR7428
In this work we use the semi-empirical atmospheric modeling method to obtain the chromospheric temperature, pressure, density and magnetic field distribution versus height in the K2 primary component of the RS CVn binary system HR 7428. While temperature, pressure, and density are the standard output of the semi-empirical modeling technique, the chromospheric magnetic field estimation versus height comes from considering the possibility of not imposing hydrostatic equilibrium in the semi-empirical computation. The stability of the best non-hydrostatic equilibrium model implies the presence of an additive (toward the center of the star) pressure, which decreases in strength from the base of the chromosphere toward the outer layers. Interpreting the additive pressure as magnetic pressure, we estimated a magnetic field intensity of about 500 gauss at the base of the chromosphere.
labels: cs=0  phy=1  math=0  stat=0  quantitative biology=0  quantitative finance=0
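The final step presumably rests on the CGS magnetic pressure relation P = B^2/(8 pi). A one-line check with an illustrative pressure value, chosen here only to reproduce a ~500 gauss field:

```python
import math

def field_from_magnetic_pressure(p_dyn_per_cm2):
    """B (gauss) implied by reading p as magnetic pressure, P = B^2/(8*pi)."""
    return math.sqrt(8 * math.pi * p_dyn_per_cm2)

print(field_from_magnetic_pressure(9.95e3))  # ~500 gauss
```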
An accurate approximation formula for gamma function
In this paper, we present a very accurate approximation for the gamma function: \begin{equation*} \Gamma \left( x+1\right) \thicksim \sqrt{2\pi x}\left( \dfrac{x}{e}\right) ^{x}\left( x\sinh \frac{1}{x}\right) ^{x/2}\exp \left( \frac{7}{324}\frac{1}{x^{3}\left( 35x^{2}+33\right) }\right) =W_{2}\left( x\right) \end{equation*} as $x\rightarrow \infty $, and prove that the function $x\mapsto \ln \Gamma \left( x+1\right) -\ln W_{2}\left( x\right) $ is strictly decreasing and convex from $\left( 1,\infty \right) $ onto $\left( 0,\beta \right) $, where \begin{equation*} \beta =\frac{22\,025}{22\,032}-\ln \sqrt{2\pi \sinh 1}\approx 0.00002407. \end{equation*}
labels: cs=0  phy=0  math=1  stat=0  quantitative biology=0  quantitative finance=0
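The approximation is easy to check numerically; a short script evaluating $W_2$ against math.gamma (the test points are arbitrary):

```python
import math

def W2(x):
    """The approximation W2(x) from the abstract above."""
    return (math.sqrt(2 * math.pi * x) * (x / math.e) ** x
            * (x * math.sinh(1 / x)) ** (x / 2)
            * math.exp(7 / (324 * x ** 3 * (35 * x ** 2 + 33))))

for x in (1.0, 2.0, 5.0, 10.0):
    exact = math.gamma(x + 1)
    print(f"x={x}: relative error = {abs(exact - W2(x)) / exact:.2e}")
```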
Reconciling Bayesian and Total Variation Methods for Binary Inversion
A central theme in classical algorithms for the reconstruction of discontinuous functions from observational data is perimeter regularization. On the other hand, sparse or noisy data often demands a probabilistic approach to the reconstruction of images, to enable uncertainty quantification; the Bayesian approach to inversion is a natural framework in which to carry this out. The link between Bayesian inversion methods and perimeter regularization, however, is not fully understood. In this paper two links are studied: (i) the MAP objective function of a suitably chosen phase-field Bayesian approach is shown to be closely related to a least squares plus perimeter regularization objective; (ii) sample paths of a suitably chosen Bayesian level set formulation are shown to possess finite perimeter and to have the ability to learn about the true perimeter. Furthermore, the level set approach is shown to lead to faster algorithms for uncertainty quantification than the phase field approach.
labels: cs=0  phy=0  math=1  stat=1  quantitative biology=0  quantitative finance=0
Perception Driven Texture Generation
This paper investigates a novel task of generating texture images from perceptual descriptions. Previous work on texture generation focused on either synthesis from examples or generation from procedural models. Generating textures from perceptual attributes has not been well studied yet. Meanwhile, perceptual attributes, such as directionality, regularity and roughness, are important factors for human observers to describe a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, while only random noise and user-defined perceptual attributes are required as input. In this model, a pre-trained convolutional neural network is essentially integrated with the adversarial framework, which can drive the generated textures to possess given perceptual attributes. An important aspect of the proposed model is that, if we change one of the input perceptual features, the corresponding appearance of the generated textures will also be changed. We design several experiments to validate the effectiveness of the proposed method. The results show that the proposed method can produce high quality texture images with desired perceptual properties.
1
0
0
0
0
0
Computational modeling approaches in gonadotropin signaling
Follicle-stimulating hormone (FSH) and luteinizing hormone (LH) play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors (GPCRs). This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones.
0
0
0
0
1
0
SEP-Nets: Small and Effective Pattern Networks
While going deeper has been witnessed to improve the performance of convolutional neural networks (CNNs), going smaller has received increasing attention recently due to its attractiveness for mobile/embedded applications. How to design a small network while retaining the performance of large and deep CNNs (e.g., Inception Nets, ResNets) remains an active and important topic. Although there are already intensive studies on compressing the size of CNNs, the considerable drop in performance is still a key concern in many designs. This paper addresses this concern with several new contributions. First, we propose a simple yet powerful method for compressing the size of deep CNNs based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatment of $1\times 1$ convolutions and $k\times k$ convolutions ($k>1$): we only binarize $k\times k$ convolutions into binary patterns. The resulting networks are referred to as pattern networks. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with a marginal drop in performance. Second, in light of the different functionalities of $1\times 1$ convolutions (data projection/transformation) and $k\times k$ convolutions (pattern extraction), we propose a new block structure, codenamed the pattern residual block, that adds transformed feature maps generated by $1\times 1$ convolutions to the pattern feature maps generated by $k\times k$ convolutions, based on which we design a small network with $\sim 1$ million parameters. Combined with our parameter binarization, we achieve better performance on ImageNet than similarly sized networks, including the recently released Google MobileNets.
1
0
0
0
0
0
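The abstract above does not spell out its binarization rule, so the sketch below uses the common XNOR-Net-style scheme (a sign pattern plus one per-filter scale), applied only to the k x k kernels as the paper prescribes; it illustrates the general idea, not the authors' exact method.

import numpy as np

def binarize_kxk(weights):
    # weights: (out_ch, in_ch, k, k) conv kernels with k > 1.
    # Returns {-1,+1} patterns and one positive scale per output filter,
    # so that weights ~ scale * pattern (XNOR-Net-style approximation).
    patterns = np.sign(weights)
    patterns[patterns == 0] = 1.0                  # break ties deterministically
    scales = np.abs(weights).mean(axis=(1, 2, 3))  # per-filter mean magnitude
    return patterns, scales

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))                  # 8 filters of 3x3x3
patterns, scales = binarize_kxk(w)
approx = scales[:, None, None, None] * patterns
print("relative reconstruction error:",
      np.linalg.norm(w - approx) / np.linalg.norm(w))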
Fundamental Limitations of Cavity-assisted Atom Interferometry
Atom interferometers employing optical cavities to enhance the beam splitter pulses promise significant advances in science and technology, notably for future gravitational wave detectors. Long cavities, on the scale of hundreds of meters, have been proposed in experiments aiming to observe gravitational waves with frequencies below 1 Hz, where laser interferometers, such as LIGO, have poor sensitivity. Alternatively, short cavities have also been proposed for enhancing the sensitivity of more portable atom interferometers. We explore the fundamental limitations of two-mirror cavities for atomic beam splitting, and establish upper bounds on the temperature of the atomic ensemble as a function of cavity length and three design parameters: the cavity g-factor, the bandwidth, and the optical suppression factor of the first and second order spatial modes. A lower bound to the cavity bandwidth is found which avoids elongation of the interaction time and maximizes power enhancement. An upper limit to cavity length is found for symmetric two-mirror cavities, restricting the practicality of long baseline detectors. For shorter cavities, an upper limit on the beam size was derived from the geometrical stability of the cavity. These findings aim to aid the design of current and future cavity-assisted atom interferometers.
0
1
0
0
0
0
Markov chain aggregation and its application to rule-based modelling
Rule-based modelling allows one to represent molecular interactions in a compact and natural way. The underlying molecular dynamics, by the laws of stochastic chemical kinetics, behaves as a continuous-time Markov chain. However, this Markov chain enumerates all possible reaction mixtures, rendering the analysis of the chain computationally demanding and often prohibitive in practice. We describe here how it is possible to efficiently find a smaller, aggregate chain which preserves certain properties of the original one. Formal methods and lumpability notions are used to define algorithms for the automated and efficient construction of such smaller chains (without ever constructing the original ones). We illustrate the method on an example and discuss its applicability in the context of modelling large signalling pathways.
1
0
0
0
1
0
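As a concrete illustration of the lumpability notion invoked in the abstract above (not the paper's rule-based construction), the sketch below checks ordinary lumpability of a small transition matrix: within each block of the partition, every state must carry the same aggregated transition probability into every block.

import numpy as np

def is_ordinarily_lumpable(P, blocks, tol=1e-12):
    # P: (n, n) row-stochastic matrix; blocks: a partition of {0,...,n-1}.
    for block in blocks:
        for target in blocks:
            mass = P[np.ix_(block, target)].sum(axis=1)  # per-state mass into target
            if not np.allclose(mass, mass[0], atol=tol):
                return False
    return True

P = np.array([[0.5, 0.2, 0.3],
              [0.4, 0.3, 0.3],
              [0.1, 0.4, 0.5]])
print(is_ordinarily_lumpable(P, [[0, 1], [2]]))  # True: states 0,1 agree blockwise
print(is_ordinarily_lumpable(P, [[0, 2], [1]]))  # False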
Shape and Positional Geometry of Multi-Object Configurations
In previous work, we introduced a method for modeling a configuration of objects in 2D and 3D images using a mathematical "medial/skeletal linking structure." In this paper, we show how these structures allow us to capture positional properties of a multi-object configuration in addition to the shape properties of the individual objects. In particular, we introduce numerical invariants for positional properties which measure the closeness of neighboring objects, including identifying the parts of the objects which are close, and the "relative significance" of objects compared with the other objects in the configuration. Using these numerical measures, we introduce a hierarchical ordering and relations between the individual objects, and quantitative criteria for identifying subconfigurations. In addition, the invariants provide a "proximity matrix" which yields a unique set of weightings measuring overall proximity of objects in the configuration. Furthermore, we show that these invariants, which are volumetrically defined and involve external regions, may be computed via integral formulas in terms of "skeletal linking integrals" defined on the internal skeletal structures of the objects.
1
0
0
0
0
0
Interface mediated mechanisms of plastic strain recovery in AgCu alloy
Through the combination of transmission electron microscopy analysis of the deformed microstructure and molecular dynamics computer simulations of the deformation processes, the mechanisms of plastic strain recovery in bulk AgCu eutectic with either incoherent twin or cube-on-cube interfaces between the Ag and Cu layers and a bilayer thickness of 500 nm have been revealed. The character of the incoherent twin interfaces changed uniquely after dynamic compressive loading for samples that exhibited plastic strain recovery and was found to drive the recovery, which is due to dislocation retraction and rearrangement of the interfaces. The magnitude of the recovery decreased with increasing strain as dislocation tangles and dislocation cell structures formed. No change in the orientation relationship was found at cube-on-cube interfaces and these exhibited a lesser amount of plastic strain recovery in the simulations and none experimentally in samples with larger layer thicknesses with predominantly cube-on-cube interfaces. Molecular dynamics computer simulations verified the importance of the change in the incoherent twin interface structure as the driving force for dislocation annihilation at the interfaces and the plastic strain recovery.
0
1
0
0
0
0
Refounding legitimacy towards Aethogenesis
The fusion of humans and technology takes us into an unknown world, described by some authors as populated by quasi-living species that would relegate us, ordinary humans, to the rank of alienated agents emptied of our identity and consciousness. I argue instead that our world is woven of simple though invisible perspectives which, if we become aware of them, may renew our ability to make judgments and enhance our autonomy. I became aware of these invisible perspectives by observing and practicing a real-time collective net art experiment called the Poietic Generator. As the perspectives unveiled by this experiment are invisible, I have called them anoptical perspectives, i.e. non-optical, by analogy with the optical perspective of the Renaissance. Later I came to realize that these perspectives obtain their cognitive structure from the political origins of our language. Accordingly, it is possible to define certain cognitive criteria for assessing the legitimacy of the anoptical perspectives, just as some artists and architects of the Renaissance defined the geometrical criteria that established the legitimacy of the optical one.
1
0
0
0
0
0
FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of Pu240 and Fm256. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of Pu240, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
0
1
0
0
0
0
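Item (iv) in the abstract above contrasts implicit Crank-Nicolson stepping with an explicit approximation of the time propagator. Below is a toy comparison on a random Hermitian stand-in for the collective Hamiltonian, assuming SciPy; expm_multiply plays the role of the Krylov-style propagator here (it computes the same action of the matrix exponential, via a different algorithm), and none of this is FELIX's actual discretization.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n, dt = 200, 0.01
H = sp.random(n, n, density=0.05, random_state=1)
H = ((H + H.T) * 0.5).tocsc()            # symmetric toy "collective Hamiltonian"
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

I = sp.identity(n, format="csc")
A = (I + 0.5j * dt * H).tocsc()          # Crank-Nicolson: solve A psi_next = B psi
B = (I - 0.5j * dt * H).tocsc()
psi_cn = spla.spsolve(A, B @ psi0)

psi_ex = spla.expm_multiply(-1j * dt * H, psi0)   # action of exp(-i dt H)

print("CN norm drift     :", abs(np.linalg.norm(psi_cn) - 1.0))
print("CN vs exp(-i dt H):", np.linalg.norm(psi_cn - psi_ex))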
Scattering in the energy space for Boussinesq equations
In this note we show that all small solutions in the energy space of the generalized 1D Boussinesq equation must decay to zero as time tends to infinity, strongly on slightly proper subsets of the space-time light cone. Our result does not require any assumption on the power of the nonlinearity, working even for the supercritical range of scattering. No parity assumption on the initial data is needed.
0
0
1
0
0
0
A Data-Driven Supply-Side Approach for Measuring Cross-Border Internet Purchases
The digital economy is a highly relevant item on the European Union's policy agenda. Cross-border internet purchases are part of the digital economy, but their total value can currently not be accurately measured or estimated. Traditional approaches based on consumer surveys or business surveys are shown to be inadequate for this purpose, due to language bias and sampling issues, respectively. We address both problems by proposing a novel approach based on supply-side data, namely tax returns. The proposed data-driven record-linkage techniques and machine learning algorithms utilize two additional open data sources: European business registers and internet data. Our main finding is that the value of total cross-border internet purchases within the European Union by Dutch consumers was over EUR 1.3 billion in 2016. This is more than 6 times as high as current estimates. Our finding motivates the implementation of the proposed methodology in other EU member states. Ultimately, it could lead to more accurate estimates of cross-border internet purchases within the entire European Union.
0
0
0
1
0
0
Low Rank Magnetic Resonance Fingerprinting
Purpose: Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI measures using randomized acquisition. Extraction of physical quantitative tissue parameters is performed off-line, without the need of patient presence, based on acquisition with varying parameters and a dictionary generated according to the Bloch equation simulations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high undersampling ratio in the sampling domain (k-space) is required for a reasonable scanning time. This undersampling causes spatial artifacts that hamper the ability to accurately estimate the tissue's quantitative values. In this work, we introduce a new approach for quantitative MRI using MRF, called magnetic resonance Fingerprinting with LOw Rank (FLOR). Methods: We exploit the low rank property of the concatenated temporal imaging contrasts, on top of the fact that the MRF signal is sparsely represented in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Results: Experimental results consist of retrospective sampling, which allows comparison to a well-defined reference, and prospective sampling that shows the performance of FLOR in a real-data sampling scenario. Both experiments demonstrate improved parameter accuracy compared to other compressed-sensing and low-rank based methods for MRF at 5% and 9% sampling ratios, for the retrospective and prospective experiments, respectively. Conclusions: We have shown through retrospective and prospective experiments that by exploiting the low rank nature of the MRF signal, FLOR recovers the MRF temporal undersampled images and provides more accurate parameter maps compared to previous iterative methods.
1
1
0
0
0
0
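A generic sketch of the iteration described under Methods in the abstract above: a gradient step on the data-consistency term followed by a rank-r projection via truncated SVD. The sampling mask, rank and step size are hypothetical placeholders rather than the paper's MRF setup, and the sparse dictionary term is omitted.

import numpy as np

def low_rank_project(X, r):
    # Project X onto the set of rank-r matrices via truncated SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
m, n, r = 60, 40, 3
X_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # low-rank image series
mask = rng.random((m, n)) < 0.4                             # undersampling pattern
y = mask * X_true                                           # observed samples

X, step = np.zeros((m, n)), 1.0
for _ in range(200):
    grad = mask * (X - y)                 # gradient of 0.5*||mask*X - y||^2
    X = low_rank_project(X - step * grad, r)

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))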
SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems. Different from vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; compared to SAG/SAGA, SARAH does not require storage of past gradients. The linear convergence rate of SARAH is proven under the strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, a property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.
1
0
1
1
0
0
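A minimal sketch of the recursive estimator described in the abstract above, v_t = grad f_i(w_t) - grad f_i(w_{t-1}) + v_{t-1}, run on a least-squares finite sum; the step size and inner-loop length are illustrative choices, not tuned values from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    # Gradient of the component f_i(w) = 0.5 * (a_i^T w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

w, eta = np.zeros(d), 0.01
for epoch in range(30):
    v = A.T @ (A @ w - b) / n            # full gradient starts the outer loop
    w_prev, w = w, w - eta * v
    for _ in range(n):                   # inner loop: recursive gradient updates
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev, w = w, w - eta * v
    if epoch % 10 == 0:
        print(epoch, np.linalg.norm(A.T @ (A @ w - b)) / n)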
A criterion related to the Riemann Hypothesis
A crucial role in the Nyman-Beurling-Báez-Duarte approach to the Riemann Hypothesis is played by the distance \[ d_N^2:=\inf_{A_N}\frac{1}{2\pi}\int_{-\infty}^\infty\left|1-\zeta A_N\left(\frac{1}{2}+it\right)\right|^2\frac{dt}{\frac{1}{4}+t^2}\:, \] where the infimum is over all Dirichlet polynomials $$A_N(s)=\sum_{n=1}^{N}\frac{a_n}{n^s}$$ of length $N$. In this paper we investigate $d_N^2$ under the assumption that the Riemann zeta function has four non-trivial zeros off the critical line. Thus we obtain a criterion for the non-validity of the Riemann Hypothesis.
0
0
1
0
0
0
Non-Stationary Spectral Kernels
We propose non-stationary spectral kernels for Gaussian process regression. We propose to model the spectral density of a non-stationary kernel function as a mixture of input-dependent Gaussian process frequency density surfaces. We solve the generalised Fourier transform with such a model, and present a family of non-stationary and non-monotonic kernels that can learn input-dependent and potentially long-range, non-monotonic covariances between inputs. We derive efficient inference using model whitening and marginalized posterior, and show with case studies that these kernels are necessary when modelling even rather simple time series, image or geospatial data with non-stationary characteristics.
1
0
0
1
0
0
Spectral and Energy Efficiency of Uplink D2D Underlaid Massive MIMO Cellular Networks
One of the key 5G scenarios is the coexistence of device-to-device (D2D) and massive multiple-input multiple-output (MIMO) communication. However, interference in uplink D2D underlaid massive MIMO cellular networks needs to be coordinated, due to the vast number of cellular and D2D transmissions. To this end, this paper introduces a spatially dynamic power control solution for mitigating the cellular-to-D2D and D2D-to-cellular interference. In particular, the proposed D2D power control policy is rather flexible, including the special cases of no D2D links or using maximum transmit power. Under the considered power control, an analytical approach is developed to evaluate the spectral efficiency (SE) and energy efficiency (EE) in such networks. Exact expressions of the SE for a cellular user or D2D transmitter are derived, which quantify the impacts of key system parameters such as the number of massive MIMO antennas and the D2D density. Moreover, D2D scaling properties are obtained, which provide sufficient conditions for achieving the anticipated SE. Numerical results corroborate our analysis and show that the proposed power control solution can efficiently mitigate interference between the cellular and D2D tiers. The results demonstrate that there exists an optimal D2D density maximizing the area SE of the D2D tier. In addition, the achievable EE of a cellular user can be comparable to that of a D2D user.
1
0
1
0
0
0
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test. Recent work on explanations for black-box models has produced tools (e.g. LIME) to show the implicit rules behind predictions, which can help us identify when models are right for the wrong reasons. However, these methods do not scale to explaining entire datasets and cannot correct the problems they reveal. We introduce a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary. We apply these penalties both based on expert annotation and in an unsupervised fashion that encourages diverse models with qualitatively different decision boundaries for the same classification problem. On multiple datasets, we show our approach generates faithful explanations and models that generalize much better when conditions differ between training and test.
1
0
0
1
0
0
The ANTARES Collaboration: Contributions to ICRC 2017 Part II: The multi-messenger program
Papers on the ANTARES multi-messenger program, prepared for the 35th International Cosmic Ray Conference (ICRC 2017, Busan, South Korea) by the ANTARES Collaboration
0
1
0
0
0
0
Hamiltonian structure of peakons as weak solutions for the modified Camassa-Holm equation
The modified Camassa-Holm (mCH) equation is a bi-Hamiltonian system possessing $N$-peakon weak solutions, for all $N\geq 1$, in the setting of an integral formulation which is used in analysis for studying local well-posedness, global existence, and wave breaking for non-peakon solutions. Unlike the original Camassa-Holm equation, the two Hamiltonians of the mCH equation do not reduce to conserved integrals (constants of motion) for $2$-peakon weak solutions. This perplexing situation is addressed here by finding an explicit conserved integral for $N$-peakon weak solutions for all $N\geq 2$. When $N$ is even, the conserved integral is shown to provide a Hamiltonian structure with the use of a natural Poisson bracket that arises from reduction of one of the Hamiltonian structures of the mCH equation. But when $N$ is odd, the Hamiltonian equations of motion arising from the conserved integral using this Poisson bracket are found to differ from the dynamical equations for the mCH $N$-peakon weak solutions. Moreover, the lack of conservation of the two Hamiltonians of the mCH equation when they are reduced to $2$-peakon weak solutions is shown to extend to $N$-peakon weak solutions for all $N\geq 2$. The connection between this loss of integrability structure and related work by Chang and Szmigielski on the Lax pair for the mCH equation is discussed.
0
1
0
0
0
0
Orbital-dependent correlations in PuCoGa$_5$
We investigate the normal state of the superconducting compound PuCoGa$_5$ using the combination of density functional theory (DFT) and dynamical mean field theory (DMFT), with the continuous time quantum Monte Carlo (CTQMC) and the vertex-corrected one-crossing approximation (OCA) as the impurity solvers. Our DFT+DMFT(CTQMC) calculations suggest a strong tendency of the Pu-5$f$ orbitals to differentiate at low temperatures. The renormalized 5$f_{5/2}$ states exhibit Fermi-liquid behavior whereas one electron in the 5$f_{7/2}$ states is at the edge of a Mott localization. We find that the orbital differentiation is manifested in the removal of 5$f_{7/2}$ spectral weight from the Fermi level relative to DFT. We corroborate these conclusions with DFT+DMFT(OCA) calculations, which demonstrate that the 5$f_{5/2}$ electrons have a much larger Kondo scale than the 5$f_{7/2}$ electrons.
0
1
0
0
0
0
A variational derivation of the nonequilibrium thermodynamics of a moist atmosphere with rain process and its pseudoincompressible approximation
Irreversible processes play a major role in the description and prediction of atmospheric dynamics. In this paper, we present a variational derivation of the evolution equations for a moist atmosphere with rain process and subject to the irreversible processes of viscosity, heat conduction, diffusion, and phase transition. This derivation is based on a general variational formalism for nonequilibrium thermodynamics which extends Hamilton's principle to incorporate irreversible processes. It is valid for any equation of state and thus also covers the atmospheres of other planets. In this approach, the second law of thermodynamics is understood as a nonlinear constraint formulated with the help of new variables, called thermodynamic displacements, whose time derivative coincides with the thermodynamic force of the irreversible process. The formulation is written in both the Lagrangian and Eulerian descriptions and can be directly adapted to oceanic dynamics. We illustrate the efficiency of our variational formulation as a modeling tool in atmospheric thermodynamics by deriving a pseudoincompressible model for moist atmospheric thermodynamics with general equations of state and subject to the irreversible processes of viscosity, heat conduction, diffusion, and phase transition.
0
1
1
0
0
0
On certain weighted 7-colored partitions
Inspired by Andrews' 2-colored generalized Frobenius partitions, we consider certain weighted 7-colored partition functions and establish some interesting Ramanujan-type identities and congruences. Moreover, we provide combinatorial interpretations of some congruences modulo 5 and 7. Finally, we study the properties of weighted 7-colored partitions weighted by the parity of certain partition statistics.
0
0
1
0
0
0
Employee turnover prediction and retention policies design: a case study
This paper illustrates the similarities between the problems of customer churn and employee turnover. An example of an employee turnover prediction model leveraging classical machine learning techniques is developed. Model outputs are then discussed to design and test employee retention policies. This type of retention discussion is, to our knowledge, innovative and constitutes the main value of this paper.
1
0
0
1
0
0
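A minimal sketch of the kind of classical pipeline the abstract above alludes to, assuming scikit-learn and an entirely synthetic HR dataset; every feature name, coefficient and number here is hypothetical and stands in for the undisclosed case-study data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: tenure (years), salary percentile, overtime (h/week)
X = np.column_stack([rng.gamma(2.0, 2.0, n),
                     rng.uniform(0, 100, n),
                     rng.poisson(5, n)])
# Synthetic ground truth: short tenure, low pay and overtime raise attrition
logit = 0.5 - 0.4 * X[:, 0] - 0.02 * X[:, 1] + 0.25 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))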
Do You Want Your Autonomous Car To Drive Like You?
With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. A common answer is that they should be adopting their users' driving style. This makes the assumption that users want their autonomous cars to drive like they drive - aggressive drivers want aggressive cars, defensive drivers want defensive cars. In this paper, we put that assumption to the test. We find that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. We also find that preferences do depend on the specific driving scenario, opening the door for new ways of learning driving style preference.
1
0
0
0
0
0
Now Playing: Continuous low-power music recognition
Existing music recognition applications require a connection to a server that performs the actual recognition. In this paper we present a low-power music recognizer that runs entirely on a mobile device and automatically recognizes music without user interaction. To reduce battery consumption, a small music detector runs continuously on the mobile device's DSP chip and wakes up the main application processor only when it is confident that music is present. Once woken, the recognizer on the application processor is provided with a few seconds of audio which is fingerprinted and compared to the stored fingerprints in the on-device fingerprint database of tens of thousands of songs. Our presented system, Now Playing, has a daily battery usage of less than 1% on average, respects user privacy by running entirely on-device and can passively recognize a wide range of music.
1
0
0
0
0
0
COLA: Decentralized Linear Learning
Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy. We consider learning of linear classification and regression models, in the setting where the training data is decentralized over many user devices, and the learning algorithm must run on-device, on an arbitrary communication network, without a central coordinator. We propose COLA, a new decentralized training algorithm with strong theoretical guarantees and superior practical performance. Our framework overcomes many limitations of existing methods, and achieves communication efficiency, scalability, elasticity as well as resilience to changes in data and participating devices.
0
0
0
1
0
0
Improving the Expected Improvement Algorithm
The expected improvement (EI) algorithm is a popular strategy for information collection in optimization under uncertainty. The algorithm is widely known to be too greedy, but nevertheless enjoys wide use due to its simplicity and ability to handle uncertainty and noise in a coherent decision theoretic framework. To provide rigorous insight into EI, we study its properties in a simple setting of Bayesian optimization where the domain consists of a finite grid of points. This is the so-called best-arm identification problem, where the goal is to allocate measurement effort wisely to confidently identify the best arm using a small number of measurements. In this framework, one can show formally that EI is far from optimal. To overcome this shortcoming, we introduce a simple modification of the expected improvement algorithm. Surprisingly, this simple change results in an algorithm that is asymptotically optimal for Gaussian best-arm identification problems, and provably outperforms standard EI by an order of magnitude.
1
0
0
1
0
0
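For reference, the classical expected-improvement score that the paper above analyzes (and then modifies), applied to a toy Gaussian best-arm problem; this sketch shows the standard EI allocation, not the authors' asymptotically optimal variant.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # EI of a Gaussian posterior N(mu, sigma^2) over the incumbent value.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
means = np.array([0.0, 0.3, 0.5])       # unknown true arm means, unit noise
mu, n = np.zeros(3), np.ones(3)         # posterior means and pseudo-counts
for _ in range(500):
    sigma = 1.0 / np.sqrt(n)            # posterior std under a N(0,1) prior
    arm = int(np.argmax(expected_improvement(mu, sigma, mu.max())))
    reward = means[arm] + rng.normal()
    mu[arm] += (reward - mu[arm]) / (n[arm] + 1.0)   # posterior-mean update
    n[arm] += 1.0
print("pulls per arm:", n - 1, " posterior means:", mu.round(2))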
Performance Limits of Solutions to Network Utility Maximization Problems
We study performance limits of solutions to utility maximization problems (e.g., max-min problems) in wireless networks as a function of the power budget $\bar{p}$ available to transmitters. Special focus is devoted to the utility and the transmit energy efficiency (i.e., utility over transmit power) of the solution. Briefly, we show tight bounds for the general class of network utility optimization problems that can be solved by computing conditional eigenvalues of standard interference mappings. The proposed bounds, which are based on the concept of asymptotic functions, are simple to compute, provide us with good estimates of the performance of networks for any value of $\bar{p}$ in many real-world applications, and enable us to determine points in which networks move from a noise limited regime to an interference limited regime. Furthermore, they also show that the utility and the transmit energy efficiency scales as $\Theta(1)$ and $\Theta(1/\bar{p})$, respectively, as $\bar{p}\to\infty$.
1
0
0
0
0
0
Toeplitz Order
A new approach to problems of the Uncertainty Principle in Harmonic Analysis, based on the use of Toeplitz operators, has brought progress to some of the classical problems in the area. The goal of this paper is to develop and systematize the function theoretic component of the Toeplitz approach by introducing a partial order on the set of inner functions induced by the action of Toeplitz operators. We study connections of the new order with some of the classical problems and known results. We discuss remaining problems and possible directions for further research.
0
0
1
0
0
0
Definably compact groups definable in real closed fields. I
We study definably compact definably connected groups definable in a sufficiently saturated real closed field $R$. We introduce the notion of group-generic point for $\bigvee$-definable groups and show the existence of group-generic points for definably compact groups definable in a sufficiently saturated o-minimal expansion of a real closed field. We use this notion along with some properties of generic sets to prove that for every definably compact definably connected group $G$ definable in $R$ there are a connected $R$-algebraic group $H$, a definable injective map $\phi$ from a generic definable neighborhood of the identity of $G$ into the group $H\left(R\right)$ of $R$-points of $H$ such that $\phi$ acts as a group homomorphism inside its domain. This result is used in [2] to prove that the o-minimal universal covering group of an abelian connected definably compact group definable in a sufficiently saturated real closed field $R$ is, up to locally definable isomorphisms, an open connected locally definable subgroup of the o-minimal universal covering group of the $R$-points of some $R$-algebraic group.
0
0
1
0
0
0
The unreasonable effectiveness of small neural ensembles in high-dimensional brain
Despite the widespread consensus on the brain's complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother or concept cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality seemed for a long time to be an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality is gradually becoming more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. To explore and understand these reasons we revisit the background ideas of statistical physics, which in the course of the 20th century were developed into the concentration of measure theory. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can a high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems and the emergence of static and associative memories in ensembles of single neurons.
0
0
0
0
1
0
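A toy demonstration of the separability phenomenon behind the stochastic separation theorems mentioned above, using the Fisher-separability test from that literature: in high dimension a random point x is, with high probability, separated from every other sample y by the simple rule <x, y> < alpha * <x, x>.

import numpy as np

def fisher_separable_fraction(d, n=1000, alpha=0.8, trials=20, seed=0):
    # Fraction of points separable from all others by <x,y> < alpha*<x,x>.
    rng = np.random.default_rng(seed)
    frac = 0.0
    for _ in range(trials):
        X = rng.uniform(-1.0, 1.0, size=(n, d))     # i.i.d. points in a cube
        G = X @ X.T
        sq = np.einsum('ij,ij->i', X, X)            # squared norms <x,x>
        np.fill_diagonal(G, -np.inf)                # exclude self-comparisons
        frac += np.mean(G.max(axis=1) < alpha * sq)
    return frac / trials

for d in (2, 10, 50, 200):
    print(f"d={d:4d}  separable fraction ~ {fisher_separable_fraction(d):.3f}")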
Explicit cocycle formulas on finite abelian groups with applications to braided linear Gr-categories and Dijkgraaf-Witten invariants
We provide explicit and unified formulas for the cocycles of all degrees on the normalized bar resolutions of finite abelian groups. This is achieved by constructing a chain map from the normalized bar resolution to a Koszul-like resolution for any given finite abelian group. With the help of the obtained cocycle formulas, we determine all the braided linear Gr-categories and compute the Dijkgraaf-Witten invariants of the $n$-torus for all $n$.
0
0
1
0
0
0
Threshold Selection for Multivariate Heavy-Tailed Data
Regular variation is often used as the starting point for modeling multivariate heavy-tailed data. A random vector is regularly varying if and only if its radial part $R$ is regularly varying and is asymptotically independent of the angular part $\Theta$ as $R$ goes to infinity. The limiting conditional distribution of $\Theta$ given that $R$ is large characterizes the tail dependence of the random vector, and hence its estimation is the primary goal of applications. A typical strategy is to look at the angular components of the data for which the radial parts exceed some threshold. While a large class of methods has been proposed to model the angular distribution from these exceedances, the choice of threshold has been scarcely discussed in the literature. In this paper, we describe a procedure for choosing the threshold by formally testing the independence of $R$ and $\Theta$ using a measure of dependence called distance covariance. We generalize the limit theorem for distance covariance to our unique setting and propose an algorithm which selects the threshold for $R$. This algorithm incorporates a subsampling scheme that is also applicable to weakly dependent data. Moreover, it avoids the heavy computation in the calculation of the distance covariance, a typical limitation for this measure. The performance of our method is illustrated on both simulated and real data.
0
0
1
1
0
0
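A self-contained sketch of the sample distance covariance underlying the independence test of R and Theta described above, computed from doubly centered pairwise-distance matrices (the Székely-style V-statistic); the paper's subsampling scheme and threshold-selection loop are not reproduced, and the scalar angle below is a toy stand-in.

import numpy as np

def distance_covariance(x, y):
    # Sample distance covariance of two 1-D samples via double centering.
    def centered(a):
        D = np.abs(a[:, None] - a[None, :])
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A, B = centered(np.asarray(x, float)), centered(np.asarray(y, float))
    return np.sqrt(max((A * B).mean(), 0.0))

rng = np.random.default_rng(0)
m = 2000
r = rng.pareto(3.0, m)                                   # heavy-tailed radii
theta_dep = np.where(r > 1.0, rng.normal(1.0, 0.2, m),   # angle depends on R
                     rng.normal(0.0, 0.2, m))
theta_ind = rng.normal(0.0, 0.2, m)                      # angle independent of R
print("dependent  :", distance_covariance(r, theta_dep))
print("independent:", distance_covariance(r, theta_ind))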
An Orchestrated Empirical Study on Deep Learning Frameworks and Platforms
Deep learning (DL) has recently achieved tremendous success in a variety of cutting-edge applications, e.g., image recognition, speech and natural language processing, and autonomous driving. Besides the available big data and hardware evolution, DL frameworks and platforms play a key role in catalyzing the research, development, and deployment of DL intelligent solutions. However, the difference in computation paradigm, architecture design and implementation of existing DL frameworks and platforms brings challenges for DL software development, deployment, maintenance, and migration. To date, there is still no comprehensive study of how the diversity of current DL frameworks and platforms influences the DL software development process. In this paper, we initiate the first step towards investigating how existing state-of-the-art DL frameworks (i.e., TensorFlow, Theano, and Torch) and platforms (i.e., server/desktop, web, and mobile) support DL software development activities. We perform an in-depth and comparative evaluation on metrics such as learning accuracy, DL model size, robustness, and performance, on state-of-the-art DL frameworks across platforms using the two popular datasets MNIST and CIFAR-10. Our study reveals that existing DL frameworks still suffer from compatibility issues, which become even more severe when it comes to different platforms. We pinpoint the current challenges and opportunities towards developing high-quality and compatible DL systems. To ignite further investigation along this direction and to address urgent industrial demands for intelligent solutions, we make our assembled toolchain and dataset publicly available.
1
0
0
0
0
0
On the complexity of topological conjugacy of compact metrizable $G$-ambits
In this note, we analyze the classification problem for compact metrizable $G$-ambits for a countable discrete group $G$ from the point of view of descriptive set theory. More precisely, we prove that the topological conjugacy relation on the standard Borel space of compact metrizable $G$-ambits is Borel for every countable discrete group $G$.
0
0
1
0
0
0
Classical affine W-superalgebras via generalized Drinfeld-Sokolov reductions and related integrable systems
The purpose of this article is to investigate relations between W-superalgebras and integrable super-Hamiltonian systems. To this end, we introduce the generalized Drinfeld-Sokolov (D-S) reduction associated to a Lie superalgebra $g$ and its even nilpotent element $f$, and we find a new definition of the classical affine W-superalgebra $W(g,f,k)$ via the D-S reduction. This new construction allows us to find free generators of $W(g,f,k)$ as a differential superalgebra, and two independent Lie brackets on $W(g,f,k)/\partial W(g,f,k).$ Moreover, we describe super-Hamiltonian systems using the theory of Poisson vertex algebras. A W-superalgebra with certain properties can be understood as an underlying differential superalgebra of a series of integrable super-Hamiltonian systems.
0
0
1
0
0
0
Temporally Identity-Aware SSD with Attentional LSTM
Temporal object detection has attracted significant attention, but most popular detection methods can not leverage the rich temporal information in videos. Very recently, many different algorithms have been developed for the video detection task, but real-time online approaches are frequently deficient. In this paper, based on an attention mechanism and convolutional long short-term memory (ConvLSTM), we propose a temporal single-shot detector (TSSD) for real-world detection. Distinct from previous methods, we take aim at temporally integrating the pyramidal feature hierarchy using ConvLSTM, and design a novel structure including a low-level temporal unit as well as a high-level one (HL-TU) for multi-scale feature maps. Moreover, we develop a creative temporal analysis unit, namely, attentional ConvLSTM (AC-LSTM), in which a temporal attention module is specially tailored for background suppression and scale suppression while a ConvLSTM integrates attention-aware features through time. An association loss is designed for temporal coherence. Besides, online tubelet analysis (OTA) is exploited for identification. Finally, our method is evaluated on the ImageNet VID dataset and the 2DMOT15 dataset. Extensive comparisons on the detection and tracking capability validate the superiority of the proposed approach. Consequently, the developed TSSD-OTA is fairly fast and achieves an overall competitive performance in terms of detection and tracking. The source code will be made available.
1
0
0
0
0
0
Carleman Estimate for Surface in Euclidean Space at Infinity
This paper develops a Carleman-type estimate for immersed surfaces in Euclidean space at infinity. With this estimate, we obtain a unique continuation property for harmonic functions on immersed surfaces vanishing at infinity, which leads to rigidity results in geometry.
0
0
1
0
0
0
Formalizing Timing Diagram Requirements in Discrete Duration Calculus
Several temporal logics have been proposed to formalise timing diagram requirements over hardware and embedded controllers. These include LTL, discrete time MTL and the recent industry standard PSL. However, the succinctness and visual structure of a timing diagram are not adequately captured by their formulae. Interval temporal logic QDDC is a highly succinct and visual notation for specifying patterns of behaviours. In this paper, we propose a practically useful notation called SeCeCntnl which enhances the negation-free fragment of QDDC with features of nominals and limited liveness. We show that timing diagrams can be naturally (compositionally) and succinctly formalized in SeCeCntnl as compared with PSL and MTL. We give a linear time translation from timing diagrams to SeCeCntnl. As our second main result, we propose a linear time translation of SeCeCntnl into QDDC. This allows QDDC tools such as DCVALID and DCSynth to be used for checking consistency of timing diagram requirements as well as for automatic synthesis of property monitors and controllers. We give examples of a minepump controller and a bus arbiter to illustrate our tools. Giving a theoretical analysis, we show that for the proposed SeCeCntnl, satisfiability and model checking have elementary complexity, as compared to the non-elementary complexity for the full logic QDDC.
1
0
0
0
0
0
Cubical Covers of Sets in $\mathbb{R}^n$
Wild sets in $\mathbb{R}^n$ can be tamed through the use of various representations though sometimes this taming removes features considered important. Finding the wildest sets for which it is still true that the representations faithfully inform us about the original set is the focus of this rather playful, expository paper that we hope will stimulate interest in cubical coverings as well as the other two ideas we explore briefly: Jones' $\beta$ numbers and varifolds from geometric measure theory.
0
0
1
0
0
0
Mitochondrial network fragmentation modulates mutant mtDNA accumulation independently of absolute fission-fusion rates
Mitochondrial DNA (mtDNA) mutations cause severe congenital diseases but may also be associated with healthy aging. MtDNA is stochastically replicated and degraded, and exists within organelles which undergo dynamic fusion and fission. The role of the resulting mitochondrial networks in determining the time evolution of the cellular proportion of mutated mtDNA molecules (heteroplasmy), and cell-to-cell variability in heteroplasmy (heteroplasmy variance), remains incompletely understood. Heteroplasmy variance is particularly important since it modulates the number of pathological cells in a tissue. Here, we provide the first wide-reaching mathematical treatment which bridges mitochondrial network and genetic states. We show that, for a range of models, the rate of increase in heteroplasmy variance, and the rate of \textit{de novo} mutation, is proportionately modulated by the fraction of unfused mitochondria, independently of the absolute fission-fusion rate. In the context of selective fusion, we show that intermediate fusion/fission ratios are optimal for the clearance of mtDNA mutants. Our findings imply that modulating network state, mitophagy rate and copy number to slow down heteroplasmy dynamics when mean heteroplasmy is low, could have therapeutic advantages for mitochondrial disease and healthy aging.
0
0
0
0
1
0
Depth Separation for Neural Networks
Let $f:\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}\to\mathbb{R}$ be a function of the form $f(\mathbf{x},\mathbf{x}') = g(\langle\mathbf{x},\mathbf{x}'\rangle)$ for $g:[-1,1]\to \mathbb{R}$. We give a simple proof that shows that poly-size depth two neural networks with (exponentially) bounded weights cannot approximate $f$ whenever $g$ cannot be approximated by a low degree polynomial. Moreover, for many $g$'s, such as $g(x)=\sin(\pi d^3x)$, the number of neurons must be $2^{\Omega\left(d\log(d)\right)}$. Furthermore, the result holds w.r.t.\ the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$. As many functions of the above form can be well approximated by poly-size depth three networks with poly-bounded weights, this establishes a separation between depth two and depth three networks w.r.t.\ the uniform distribution on $\mathbb{S}^{d-1}\times \mathbb{S}^{d-1}$.
1
0
0
1
0
0
An Accurate Interconnect Test Structure for Parasitic Validation in On-Chip Machine Learning Accelerators
At nanometer technology nodes the feature size shrinks rapidly and wires become narrow and thin, which leads to high RC parasitics, especially resistance. Overall system performance is then dominated by the interconnect rather than the device. As such, it is imperative to accurately measure and model interconnect parasitics in order to predict interconnect performance on silicon. Although many test structures have been developed in the past to characterize device models and layout effects, only a few of them are available for interconnects. Moreover, they are either not suitable for real-chip implementation or too complicated to be embedded. A compact yet comprehensive test structure that captures all interconnect parasitics in a real chip is needed. To address this problem, this paper describes a set of test structures that can be used to study the timing performance (i.e. propagation delay and crosstalk) of various interconnect configurations. Moreover, an empirical model is developed to estimate the actual RC parasitics. Compared with state-of-the-art interconnect test structures, the new structure is compact in size and can be easily embedded on die as a parasitic variation monitor. We have validated the proposed structure on a test chip in the TSMC 28nm HPM process. Recently, the test structure was further modified to identify serious interconnect process issues for critical path design using the TSMC 7nm FF process.
1
0
0
0
0
0
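The sort of first-order estimate such test structures are calibrated against, assuming the textbook Elmore model for a distributed RC wire (delay ~ 0.5 * R_total * C_total, hence quadratic in wire length); the per-unit-length values below are generic illustrations, not TSMC process data.

def elmore_wire_delay(r_per_um, c_per_um, length_um):
    # First-order Elmore delay of a distributed RC wire: 0.5 * R * C.
    R = r_per_um * length_um       # total wire resistance (ohm)
    C = c_per_um * length_um       # total wire capacitance (F)
    return 0.5 * R * C             # seconds; grows quadratically with length

# Generic narrow-wire numbers, for illustration only.
for L in (10, 100, 1000):          # wire length in micrometres
    d = elmore_wire_delay(r_per_um=5.0, c_per_um=0.2e-15, length_um=L)
    print(f"L = {L:5d} um -> Elmore delay = {d * 1e12:.3f} ps")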
A New Compton-thick AGN in our Cosmic Backyard: Unveiling the Buried Nucleus in NGC 1448 with NuSTAR
NGC 1448 is one of the nearest luminous galaxies ($L_{8-1000\mu m} >$ 10$^{9} L_{\odot}$) to ours ($z$ $=$ 0.00390), and yet the active galactic nucleus (AGN) it hosts was only recently discovered, in 2009. In this paper, we present an analysis of the nuclear source across three wavebands: mid-infrared (MIR) continuum, optical, and X-rays. We observed the source with the Nuclear Spectroscopic Telescope Array (NuSTAR), and combined this data with archival Chandra data to perform broadband X-ray spectral fitting ($\approx$0.5-40 keV) of the AGN for the first time. Our X-ray spectral analysis reveals that the AGN is buried under a Compton-thick (CT) column of obscuring gas along our line-of-sight, with a column density of $N_{\rm H}$(los) $\gtrsim$ 2.5 $\times$ 10$^{24}$ cm$^{-2}$. The best-fitting torus models measured an intrinsic 2-10 keV luminosity of $L_{2-10\rm{,int}}$ $=$ (3.5-7.6) $\times$ 10$^{40}$ erg s$^{-1}$, making NGC 1448 one of the lowest luminosity CTAGNs known. In addition to the NuSTAR observation, we also performed optical spectroscopy for the nucleus in this edge-on galaxy using the European Southern Observatory New Technology Telescope. We re-classify the optical nuclear spectrum as a Seyfert on the basis of the Baldwin-Philips-Terlevich diagnostic diagrams, thus identifying the AGN at optical wavelengths for the first time. We also present high spatial resolution MIR observations of NGC 1448 with Gemini/T-ReCS, in which a compact nucleus is clearly detected. The absorption-corrected 2-10 keV luminosity measured from our X-ray spectral analysis agrees with that predicted from the optical [OIII]$\lambda$5007\AA\ emission line and the MIR 12$\mu$m continuum, further supporting the CT nature of the AGN.
0
1
0
0
0
0
Blowup constructions for Lie groupoids and a Boutet de Monvel type calculus
We present natural and general ways of building Lie groupoids, by using the classical procedures of blowups and of deformations to the normal cone. Our constructions are seen to recover many known ones involved in index theory. The deformation and blowup groupoids obtained give rise to several extensions of $C^*$-algebras and to full index problems. We compute the corresponding K-theory maps. Finally, the blowup of a manifold sitting in a transverse way in the space of objects of a Lie groupoid leads to a calculus, quite similar to the Boutet de Monvel calculus for manifolds with boundary.
0
0
1
0
0
0
Possible spin gapless semiconductor type behaviour in CoFeMnSi epitaxial thin films
Spin-gapless semiconductors, with their unique band structures, have recently attracted much attention due to their interesting transport properties that can be utilized in spintronics applications. We have successfully deposited thin films of the quaternary spin-gapless semiconductor Heusler alloy CoFeMnSi on MgO (001) substrates using a pulsed laser deposition system. These films show epitaxial growth along the (001) direction and display a uniform and smooth crystalline surface. The magnetic properties reveal that the film is ferromagnetically soft along the in-plane direction and that its Curie temperature is well above 400 K. The electrical conductivity of the film is low and exhibits a nearly temperature-independent semiconducting behaviour. The estimated temperature coefficient of resistivity for the film is $-7\times10^{-10}$ $\Omega$ m/K, which is comparable to the values reported for spin-gapless semiconductors.
0
1
0
0
0
0