Columns: text (string, lengths 138 to 2.38k); labels (sequence, length 6); Predictions (sequence, lengths 1 to 3)
Title: HONE: Higher-Order Network Embeddings, Abstract: This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs. The HONE framework is highly expressive and flexible with many interchangeable components. The experimental results demonstrate the effectiveness of learning higher-order network representations. In all cases, HONE outperforms recent embedding methods that are unable to capture higher-order structures with a mean relative gain in AUC of $19\%$ (and up to $75\%$ gain) across a wide variety of networks and embedding methods.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Linear Progress with Exponential Decay in Weakly Hyperbolic Groups, Abstract: A random walk $w_n$ on a separable, geodesic hyperbolic metric space $X$ converges to the boundary $\partial X$ with probability one when the step distribution supports two independent loxodromics. In particular, the random walk makes positive linear progress. Progress is known to be linear with exponential decay when (1) the step distribution has exponential tail and (2) the action on $X$ is acylindrical. We extend exponential decay to the non-acylindrical case.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Talbot-enhanced, maximum-visibility imaging of condensate interference, Abstract: Nearly two centuries ago, Talbot first observed the fascinating effect whereby light propagating through a periodic structure generates a `carpet' of image revivals in the near field. Here we report the first observation of the spatial Talbot effect for light interacting with periodic Bose-Einstein condensate interference fringes. The Talbot effect can lead to a dramatic loss of fringe visibility in images, degrading precision interferometry; however, we demonstrate how the effect can also be used as a tool to enhance visibility, as well as to extend the useful focal range of matter wave detection systems by orders of magnitude. We show that negative optical densities arise from matter-wave induced lensing of detuned imaging light -- yielding Talbot-enhanced single-shot interference visibility of >135% compared to the ideal visibility for resonant light.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Hydrogen bonding characterization in water and small molecules, Abstract: The prototypical Hydrogen bond in the water dimer and Hydrogen bonds in the protonated water dimer, in other small molecules, in water cyclic clusters, and in ice, covering a wide range of bond strengths, are theoretically investigated by first-principles calculations based on the Density Functional Theory, considering a standard Generalized Gradient Approximation functional but also, for the water dimer, hybrid and van-der-Waals corrected functionals. We compute structural, energetic, and electrostatic (induced molecular dipole moments) properties. In particular, Hydrogen bonds are characterized in terms of differential electron density distributions and profiles, and of the shifts of the centres of Maximally localized Wannier Functions. The information from the latter quantities can be conveyed into a single geometric bonding parameter that appears to be correlated to the Mayer bond order parameter and can be taken as an estimate of the covalent contribution to the Hydrogen bond. By considering the cyclic water hexamer and the hexagonal phase of ice we also elucidate the importance of cooperative/anticooperative effects in Hydrogen-bonding formation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Chemistry" ]
Title: NOOP: A Domain-Theoretic Model of Nominally-Typed OOP, Abstract: The majority of industrial-strength object-oriented (OO) software is written using nominally-typed OO programming languages. Extant domain-theoretic models of OOP developed to analyze OO type systems miss, however, a crucial feature of these mainstream OO languages: nominality. This paper presents the construction of NOOP as the first domain-theoretic model of OOP that includes full class/type name information found in nominally-typed OOP. The inclusion of nominal information in objects of NOOP, together with the assertion that type inheritance in statically-typed OO programming languages is an inherently nominal notion, allows readily proving that type inheritance and subtyping are completely identified in these languages. This conclusion is in full agreement with the intuitions of developers and language designers of these OO languages, and contrary to the belief that "inheritance is not subtyping," which came from assuming non-nominal (a.k.a., structural) models of OOP. To motivate the construction of NOOP, this paper briefly presents the benefits of nominal typing to mainstream OO developers and OO language designers, as compared to structural typing. After presenting NOOP, the paper further briefly compares NOOP to the most widely known domain-theoretic models of OOP. Leveraging the development of NOOP, the comparisons presented in this paper provide clear, brief and precise technical and mathematical accounts of the relation between nominal and structural OO type systems. NOOP thus provides a firmer semantic foundation for analyzing and progressing nominally-typed OO programming languages.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Modelling wave-induced sea ice breakup in the marginal ice zone, Abstract: A model of ice floe breakup under ocean wave forcing in the marginal ice zone (MIZ) is proposed to investigate how the floe size distribution (FSD) evolves under repeated wave breakup events. A three-dimensional linear model of ocean wave scattering by a finite array of compliant circular ice floes is coupled to a flexural failure model, which breaks a floe into two floes provided the two-dimensional stress field satisfies a breakup criterion. A closed-feedback loop algorithm is devised, which (i) solves the wave scattering problem for a given FSD under time-harmonic plane wave forcing, (ii) computes the stress field in all the floes, (iii) fractures the floes satisfying the breakup criterion, and (iv) generates an updated FSD, initialising the geometry for the next iteration of the loop. The FSD after 50 breakup events is uni-modal and near normal, or bi-modal. Multiple scattering is found to enhance breakup for long waves and thin ice, but to reduce breakup for short waves and thick ice. A breakup front marches forward in the latter regime, as wave-induced fracture weakens the ice cover, allowing waves to travel deeper into the MIZ.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Safe Open-Loop Strategies for Handling Intermittent Communications in Multi-Robot Systems, Abstract: In multi-robot systems where a central decision maker is specifying the movement of each individual robot, a communication failure can severely impair the performance of the system. This paper develops a motion strategy that allows robots to safely handle critical communication failures for such multi-robot architectures. For each robot, the proposed algorithm computes a time horizon over which collisions with other robots are guaranteed not to occur. These safe time horizons are included in the commands being transmitted to the individual robots. In the event of a communication failure, the robots execute the last received velocity commands for the corresponding safe time horizons, leading to a provably safe open-loop motion strategy. The resulting algorithm is computationally efficient and agnostic to the task that the robots are performing. The efficacy of the strategy is verified in simulation as well as on a team of differential-drive mobile robots.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Robotics" ]
Title: Scientific co-authorship networks, Abstract: The paper addresses the stability of co-authorship networks over time. The analysis is done on the networks of Slovenian researchers in two time periods (1991-2000 and 2001-2010). Two researchers are linked if they published at least one scientific bibliographic unit in a given time period. As proposed by Kronegger et al. (2011), the global network structures are examined by generalized blockmodeling with the assumed multi-core--semi-periphery--periphery blockmodel type. The term core denotes a group of researchers who published together in a systematic way with each other. The obtained blockmodels are comprehensively analyzed by visualizations and through considering several statistics regarding the global network structure. To measure the stability of the obtained blockmodels, different adjusted modified Rand and Wallace indices are applied. These make it possible to distinguish between the splitting and merging of cores when operationalizing the stability of cores. The adjusted modified indices can also be used when new researchers appear in the second time period (newcomers) and when some researchers are no longer present in the second time period (departures). The research disciplines are described and clustered according to the values of these indices. Considering the obtained clusters, the sources of instability of the research disciplines are studied (e.g., merging or splitting of cores, newcomers or departures). Furthermore, the differences in the stability of the obtained cores on the level of scientific disciplines are studied by linear regression analysis, where some personal characteristics of the researchers (e.g., age, gender) are also considered.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: Proofs as Relational Invariants of Synthesized Execution Grammars, Abstract: The automatic verification of programs that maintain unbounded low-level data structures is a critical and open problem. Analyzers and verifiers developed in previous work can synthesize invariants that only describe data structures of heavily restricted forms, or require an analyst to provide predicates over program data and structure that are used in a synthesized proof of correctness. In this work, we introduce a novel automatic safety verifier of programs that maintain low-level data structures, named LTTP. LTTP synthesizes proofs of program safety represented as a grammar of a given program's control paths, annotated with invariants that relate program state at distinct points within its path of execution. LTTP synthesizes such proofs completely automatically, using a novel inductive-synthesis algorithm. We have implemented LTTP as a verifier for JVM bytecode and applied it to verify the safety of a collection of verification benchmarks. Our results demonstrate that LTTP can be applied to automatically verify the safety of programs that are beyond the scope of previously-developed verifiers.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Coastal flood implications of 1.5 °C, 2.0 °C, and 2.5 °C temperature stabilization targets in the 21st and 22nd century, Abstract: Sea-level rise (SLR) is magnifying the frequency and severity of coastal flooding. The rate and amount of global mean sea-level (GMSL) rise is a function of the trajectory of global mean surface temperature (GMST). Therefore, temperature stabilization targets (e.g., 1.5 °C and 2.0 °C of warming above pre-industrial levels, as from the Paris Agreement) have important implications for coastal flood risk. Here, we assess differences in the return periods of coastal floods at a global network of tide gauges between scenarios that stabilize GMST warming at 1.5 °C, 2.0 °C, and 2.5 °C above pre-industrial levels. We employ probabilistic, localized SLR projections and long-term hourly tide gauge records to construct estimates of the return levels of current and future flood heights for the 21st and 22nd centuries. By 2100, under 1.5 °C, 2.0 °C, and 2.5 °C GMST stabilization, median GMSL is projected to rise 47 cm with a very likely range of 28-82 cm (90% probability), 55 cm (very likely 30-94 cm), and 58 cm (very likely 36-93 cm), respectively. As an independent comparison, a semi-empirical sea level model calibrated to temperature and GMSL over the past two millennia estimates median GMSL will rise within < 13% of these projections. By 2150, relative to the 2.0 °C scenario, GMST stabilization of 1.5 °C spares land currently occupied by roughly 5 million inhabitants from inundation, including 40,000 individuals currently residing in Small Island Developing States. Relative to a 2.0 °C scenario, the reduction in the amplification of the frequency of the 100-yr flood arising from a 1.5 °C GMST stabilization is greatest in the eastern United States and in Europe, with flood frequency amplification being reduced by about half.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Statistics" ]
Title: Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data, Abstract: We present a factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations from sequential data without supervision. Specifically, we exploit the multi-scale nature of information in sequential data by formulating it explicitly within a factorized hierarchical graphical model that imposes sequence-dependent priors and sequence-independent priors to different sets of latent variables. The model is evaluated on two speech corpora to demonstrate, qualitatively, its ability to transform speakers or linguistic content by manipulating different sets of latent variables; and quantitatively, its ability to outperform an i-vector baseline for speaker verification and reduce the word error rate by as much as 35% in mismatched train/test scenarios for automatic speech recognition tasks.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Room-Temperature Ionic Liquids Meet Bio-Membranes: the State-of-the-Art, Abstract: Room-temperature ionic liquids (RTILs) are a new class of organic salts whose melting temperature falls below the conventional limit of 100 °C. Their low vapor pressure, moreover, has made these ionic compounds the solvents of choice of the so-called green chemistry. For these and other peculiar characteristics, they are increasingly used in industrial applications. However, studies of their interaction with living organisms have highlighted mild to severe health hazards. Since their cytotoxicity shows a positive correlation with their lipophilicity, several chemical-physical studies of their interaction with biomembranes have been carried out in the last few years, aiming to identify the microscopic mechanisms behind their toxicity. Cation chain length and anion nature have been seen to affect the lipophilicity and, in turn, the toxicity of RTILs. The emerging picture, however, raises new questions, points to the need to assess toxicity on a case-by-case basis, but also suggests a potential positive role of RTILs in pharmacology, bio-medicine, and, more generally, bio-nano-technology. Here, we review this new subject of research and comment on the future and the potential importance of this new field of study.
[ 0, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology" ]
Title: HyperMinHash: MinHash in LogLog space, Abstract: In this extended abstract, we describe and analyze a lossy compression of MinHash from buckets of size $O(\log n)$ to buckets of size $O(\log\log n)$ by encoding using floating-point notation. This new compressed sketch, which we call HyperMinHash because we build it on a HyperLogLog scaffold, can be used as a drop-in replacement for MinHash. Unlike comparable Jaccard index fingerprinting algorithms in sub-logarithmic space (such as b-bit MinHash), HyperMinHash retains MinHash's features of streaming updates, unions, and cardinality estimation. For a multiplicative approximation error $1+\epsilon$ on a Jaccard index $t$, given a random oracle, HyperMinHash needs $O\left(\epsilon^{-2} \left( \log\log n + \log \frac{1}{t\epsilon} \right)\right)$ space. HyperMinHash allows estimating Jaccard indices of 0.01 for set cardinalities on the order of $10^{19}$ with a relative error of around 10\% using 64 KiB of memory; MinHash can only estimate Jaccard indices for cardinalities of $10^{10}$ with the same memory consumption.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Asynchronous stochastic price pump, Abstract: We propose a model for equity trading in a population of agents where each agent acts to achieve his or her target stock-to-bond ratio, and, as a feedback mechanism, follows a market adaptive strategy. In this model only a fraction of agents participates in buying and selling stock during a trading period, while the rest of the group accepts the newly set price. Using numerical simulations we show that the stochastic process settles on a stationary regime for the returns. The mean return can be greater or less than the return on the bond and it is determined by the parameters of the adaptive mechanism. When the number of interacting agents is fixed, the distribution of the returns follows the log-normal density. In this case, we give an analytic formula for the mean rate of return in terms of the rate of change of agents' risk levels and confirm the formula by numerical simulations. However, when the number of interacting agents per period is random, the distribution of returns can significantly deviate from the log-normal, especially as the variance of the distribution for the number of interacting agents increases.
[ 0, 0, 0, 0, 0, 1 ]
[ "Quantitative Finance", "Statistics" ]
Title: Improving Bi-directional Generation between Different Modalities with Variational Autoencoders, Abstract: We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. A major approach to achieving this objective is to train a model that integrates all the information of different modalities into a joint representation and then to generate one modality from the corresponding other modality via this joint representation. We simply applied this approach to variational autoencoders (VAEs), which we call a joint multimodal variational autoencoder (JMVAE). However, we found that when this model attempts to generate a high-dimensional modality that is missing at the input, the joint representation collapses and this modality cannot be generated successfully. Furthermore, we confirmed that this difficulty cannot be resolved even using a known solution. Therefore, in this study, we propose two models to prevent this difficulty: JMVAE-kl and JMVAE-h. Results of our experiments demonstrate that these methods can prevent the difficulty above and that they generate modalities bi-directionally with equal or higher likelihood than conventional VAE methods, which generate in only one direction. Moreover, we confirm that these methods can obtain the joint representation appropriately, so that they can generate various variations of a modality by traversing the joint representation or changing the value of another modality.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Matching neural paths: transfer from recognition to correspondence search, Abstract: Many machine learning tasks require finding per-part correspondences between objects. In this work we focus on low-level correspondences - a highly ambiguous matching problem. We propose to use a hierarchical semantic representation of the objects, coming from a convolutional neural network, to solve this ambiguity. Training it for low-level correspondence prediction directly might not be an option in some domains where the ground-truth correspondences are hard to obtain. We show how transfer from recognition can be used to avoid such training. Our idea is to mark parts as "matching" if their features are close to each other at all the levels of convolutional feature hierarchy (neural paths). Although the overall number of such paths is exponential in the number of layers, we propose a polynomial algorithm for aggregating all of them in a single backward pass. The empirical validation is done on the task of stereo correspondence and demonstrates that we achieve competitive results among the methods which do not use labeled target domain data.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: On the affine random walk on the torus, Abstract: Let $\mu$ be a Borel probability measure on $\mathbf{G}:=\mathrm{SL}_d(\mathbb{Z}) \ltimes \mathbb{T}^d$. Define, for $x\in \mathbb{T}^d$, a random walk starting at $x$ by setting, for $n\in \mathbb{N}$, \[ \left\{\begin{array}{rcl} X_0 &=&x\\ X_{n+1} &=& a_{n+1} X_n + b_{n+1} \end{array}\right. \] where $((a_n,b_n))\in \mathbf{G}^\mathbb{N}$ is an iid sequence of law $\mu$. Then, we denote by $\mathbb{P}_x$ the measure on $(\mathbb{T}^d)^\mathbb{N}$ that is the image of $\mu^{\otimes \mathbb{N}}$ under the map $\left((g_n) \mapsto (x,g_1 x, g_2 g_1 x, \dots , g_n \dots g_1 x, \dots)\right)$ and for any $\varphi \in \mathrm{L}^1((\mathbb{T}^d)^\mathbb{N}, \mathbb{P}_x)$, we set $\mathbb{E}_x \varphi((X_n)) = \int \varphi((X_n)) \mathrm{d}\mathbb{P}_x((X_n))$. Bourgain, Furman, Lindenstrauss and Mozes studied this random walk when $\mu$ is concentrated on $\mathrm{SL}_d(\mathbb{Z}) \ltimes\{0\}$, and their work makes it possible to study, for any Hölder-continuous function $f$ on the torus, the sequence $(f(X_n))$ when $x$ is not too well approximable by rational points. In this article, we are interested in the case where $\mu$ is not concentrated on $\mathrm{SL}_d(\mathbb{Z}) \ltimes \mathbb{Q}^d/\mathbb{Z}^d$ and we prove that, under assumptions on the group spanned by the support of $\mu$, the Lebesgue measure $\nu$ on the torus is the unique stationary probability measure and that for any Hölder-continuous function $f$ on the torus, $\mathbb{E}_x f(X_n)$ converges exponentially fast to $\int f\mathrm{d}\nu$. Then, we use this to prove the law of large numbers, a non-concentration inequality, the functional central limit theorem and its almost-sure version for the sequence $(f(X_n))$. In the appendix, we state a non-concentration inequality for products of random matrices without any irreducibility assumption.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Neutral evolution and turnover over centuries of English word popularity, Abstract: Here we test Neutral models against the evolution of English word frequency and vocabulary at the population scale, as recorded in annual word frequencies from three centuries of English language books. Against these data, we test both static and dynamic predictions of two neutral models, including the relation between corpus size and vocabulary size, frequency distributions, and turnover within those frequency distributions. Although a commonly used Neutral model fails to replicate all these emergent properties at once, we find that a modified two-stage Neutral model does replicate the static and dynamic properties of the corpus data. This two-stage model is meant to represent a relatively small corpus (population) of English books, analogous to a `canon', sampled by an exponentially increasing corpus of books in the wider population of authors. More broadly, this model -- a smaller neutral model within a larger neutral model -- could represent situations where mass attention is focused on a small subset of the cultural variants.
[ 1, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Statistics" ]
Title: On Statistically-Secure Quantum Homomorphic Encryption, Abstract: Homomorphic encryption is an encryption scheme that allows computations to be evaluated on encrypted inputs without knowledge of their raw messages. Recently Ouyang et al. constructed a quantum homomorphic encryption (QHE) scheme for Clifford circuits with statistical security (or information-theoretic security (IT-security)). It is natural to ask whether an information-theoretically-secure (ITS) quantum FHE exists. If not, what other nontrivial class of quantum circuits can be homomorphically evaluated with IT-security? For the first question, we prove a limitation: an ITS quantum FHE necessarily incurs exponential overhead. As for the second, we propose a QHE scheme for instantaneous quantum polynomial-time (IQP) circuits. Our QHE scheme for IQP circuits follows from the one-time pad.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Physics" ]
Title: Quantum torus algebras and B(C) type Toda systems, Abstract: In this paper, we construct a new even constrained B(C) type Toda hierarchy and derive its B(C) type Block type additional symmetry. We also generalize the B(C) type Toda hierarchy to the $N$-component B(C) type Toda hierarchy, which is proved to have symmetries of a coupled $\bigotimes^N QT_+$ algebra (the $N$-fold direct product of the positive half of the quantum torus algebra $QT$).
[ 0, 1, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: A Decidable Intuitionistic Temporal Logic, Abstract: We introduce the logic $\sf ITL^e$, an intuitionistic temporal logic based on structures $(W,\preccurlyeq,S)$, where $\preccurlyeq$ is used to interpret intuitionistic implication and $S$ is a $\preccurlyeq$-monotone function used to interpret temporal modalities. Our main result is that the satisfiability and validity problems for $\sf ITL^e$ are decidable. We prove this by showing that the logic enjoys the strong finite model property. In contrast, we also consider a `persistent' version of the logic, $\sf ITL^p$, whose models are similar to Cartesian products. We prove that, unlike $\sf ITL^e$, $\sf ITL^p$ does not have the finite model property.
[ 0, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: A simple proof that the $(n^2-1)$-puzzle is hard, Abstract: The 15 puzzle is a classic reconfiguration puzzle with fifteen uniquely labeled unit squares within a $4 \times 4$ board in which the goal is to slide the squares (without ever overlapping) into a target configuration. By generalizing the puzzle to an $n \times n$ board with $n^2-1$ squares, we can study the computational complexity of problems related to the puzzle; in particular, we consider the problem of determining whether a given end configuration can be reached from a given start configuration via at most a given number of moves. This problem was shown to be NP-complete by Ratner and Warmuth (1990). We provide an alternative, simpler proof of this fact by reduction from the rectilinear Steiner tree problem.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Is Proxima Centauri b habitable? -- A study of atmospheric loss, Abstract: We address the important question of whether the newly discovered exoplanet, Proxima Centauri b (PCb), is capable of retaining an atmosphere over long periods of time. This is done by adapting a sophisticated multi-species MHD model originally developed for Venus and Mars, and computing the ion escape losses from PCb. The results suggest that the ion escape rates are about two orders of magnitude higher than the terrestrial planets of our Solar system if PCb is unmagnetized. In contrast, if the planet does have an intrinsic dipole magnetic field, the rates are lowered for certain values of the stellar wind dynamic pressure, but they are still higher than the observed values for our Solar system's terrestrial planets. These results must be interpreted with due caution, since most of the relevant parameters for PCb remain partly or wholly unknown.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: A Geometric Perspective on the Power of Principal Component Association Tests in Multiple Phenotype Studies, Abstract: Joint analysis of multiple phenotypes can increase statistical power in genetic association studies. Principal component analysis (PCA), a popular dimension reduction method, has been proposed to analyze multiple correlated phenotypes, especially when the number of phenotypes is high. It has been empirically observed that the first PC, which summarizes the largest amount of variance, can be less powerful than higher-order PCs and other commonly used methods in detecting genetic association signals. In this paper, we investigate the properties of PCA-based multiple phenotype analysis from a geometric perspective by introducing a novel concept called the principal angle. A particular PC is powerful if its principal angle is $0^\circ$ and powerless if its principal angle is $90^\circ$. Without prior knowledge about the true principal angle, each PC can be powerless. We propose linear, non-linear and data-adaptive omnibus tests by combining PCs. We show that the omnibus PC test is robust and powerful in a wide range of scenarios. We study the properties of the proposed methods using power analysis and eigen-analysis. The subtle differences and close connections between these combined PC methods are illustrated graphically in terms of their rejection boundaries. Our proposed tests have convex acceptance regions and hence are admissible. The $p$-values for the proposed tests can be calculated efficiently and analytically, and the proposed tests have been implemented in a publicly available R package {\it MPAT}. We conduct simulation studies in both low- and high-dimensional settings with various signal vectors and correlation structures. We apply the proposed tests to the joint analysis of metabolic syndrome related phenotypes with data sets collected from four international consortia to demonstrate the effectiveness of the proposed combined PC testing procedures.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics", "Quantitative Biology" ]
Title: Development of ICA and IVA Algorithms with Application to Medical Image Analysis, Abstract: Independent component analysis (ICA) is a widely used blind source separation (BSS) method that can uniquely achieve source recovery, subject to only scaling and permutation ambiguities, through the assumption of statistical independence on the part of the latent sources. Independent vector analysis (IVA) extends the applicability of ICA by jointly decomposing multiple datasets through the exploitation of the dependencies across datasets. Though both ICA and IVA algorithms cast in the maximum likelihood (ML) framework enable the use of all available statistical information, in practice they often deviate from their theoretical optimality properties due to improper estimation of the probability density function (PDF). This motivates the development of flexible ICA and IVA algorithms that closely adhere to the underlying statistical description of the data. Although it is attractive to minimize the assumptions, important prior information about the data, such as sparsity, is usually available. If incorporated into the ICA model, use of this additional information can relax the independence assumption, resulting in an improvement in the overall separation performance. Therefore, the development of a unified mathematical framework that can take into account both statistical independence and sparsity is of great interest. In this work, we first introduce a flexible ICA algorithm that uses an effective PDF estimator to accurately capture the underlying statistical properties of the data. We then discuss several techniques to accurately estimate the parameters of the multivariate generalized Gaussian distribution, and how to integrate them into the IVA model. Finally, we provide a mathematical framework that enables direct control over the influence of statistical independence and sparsity, and use this framework to develop an effective ICA algorithm that can jointly exploit these two forms of diversity.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Mathematics" ]
Title: Observability of characteristic binary-induced structures in circumbinary disks, Abstract: Context: A substantial fraction of protoplanetary disks forms around stellar binaries. The binary system generates a time-dependent non-axisymmetric gravitational potential, inducing strong tidal forces on the circumbinary disk. This leads to a change in the basic physical properties of the circumbinary disk, which should in turn result in unique structures that are potentially observable with the current generation of instruments. Aims: The goal of this study is to identify these characteristic structures, to constrain the physical conditions that cause them, and to evaluate the feasibility of observing them in circumbinary disks. Methods: To achieve this, two-dimensional hydrodynamic simulations are performed first. The resulting density distributions are post-processed with a 3D radiative transfer code to generate re-emission and scattered light maps. Based on these, we study the influence of various parameters, such as the mass of the stellar components, the mass of the disk, and the binary separation, on observable features in circumbinary disks. Results: We find that the Atacama Large (sub-)Millimetre Array (ALMA) as well as the European Extremely Large Telescope (E-ELT) are capable of tracing asymmetries in the inner region of circumbinary disks, which are affected most by the binary-disk interaction. Observations at submillimetre/millimetre wavelengths will allow the detection of the density waves at the inner rim of the disk and the inner cavity. With the E-ELT one can partially resolve the innermost parts of the disk in the infrared wavelength range, including the disk's rim, accretion arms, and potentially the expected circumstellar disks around each of the binary components.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Chaotic zones around rotating small bodies, Abstract: Small bodies of the Solar system, like asteroids, trans-Neptunian objects, cometary nuclei, planetary satellites, with diameters smaller than one thousand kilometers usually have irregular shapes, often resembling dumb-bells, or contact binaries. The spinning of such a gravitating dumb-bell creates around it a zone of chaotic orbits. We determine its extent analytically and numerically. We find that the chaotic zone swells significantly if the rotation rate is decreased, in particular, the zone swells more than twice if the rotation rate is decreased ten times with respect to the "centrifugal breakup" threshold. We illustrate the properties of the chaotic orbital zones in examples of the global orbital dynamics about asteroid 243 Ida (which has a moon, Dactyl, orbiting near the edge of the chaotic zone) and asteroid 25143 Itokawa.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Structure and Evolution of Internally Heated Hot Jupiters, Abstract: Hot Jupiters receive strong stellar irradiation, producing equilibrium temperatures of $1000 - 2500 \ \mathrm{Kelvin}$. Incoming irradiation directly heats just their thin outer layer, down to pressures of $\sim 0.1 \ \mathrm{bars}$. In standard irradiated evolution models of hot Jupiters, predicted transit radii are too small. Previous studies have shown that deeper heating -- at a small fraction of the heating rate from irradiation -- can explain observed radii. Here we present a suite of evolution models for HD 209458b where we systematically vary both the depth and intensity of internal heating, without specifying the uncertain heating mechanism(s). Our models start with a hot, high entropy planet whose radius decreases as the convective interior cools. The applied heating suppresses this cooling. We find that very shallow heating -- at pressures of $1 - 10 \ \mathrm{bars}$ -- does not significantly suppress cooling, unless the total heating rate is $\gtrsim 10\%$ of the incident stellar power. Deeper heating, at $100 \ \mathrm{bars}$, requires heating at only $1\%$ of the stellar irradiation to explain the observed transit radius of $1.4 R_{\rm Jup}$ after 5 Gyr of cooling. In general, more intense and deeper heating results in larger hot Jupiter radii. Surprisingly, we find that heat deposited at $10^4 \ \mathrm{bars}$ -- which is exterior to $\approx 99\%$ of the planet's mass -- suppresses planetary cooling as effectively as heating at the center. In summary, we find that relatively shallow heating is required to explain the radii of most hot Jupiters, provided that this heat is applied early and persists throughout their evolution.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Inference-Based Distributed Channel Allocation in Wireless Sensor Networks, Abstract: Interference-aware resource allocation of time slots and frequency channels in single-antenna, half-duplex radio wireless sensor networks (WSNs) is challenging. Devising distributed algorithms for such a task further complicates the problem. This work studies WSN joint time and frequency channel allocation for a given routing tree, such that: a) allocation is performed in a fully distributed way, i.e., information exchange is only performed among neighboring WSN terminals, within communication of up to two hops, and b) detection of potential interfering terminals is simplified and can be practically realized. The algorithm imprints space, time, frequency and radio hardware constraints into a loopy factor graph and performs iterative message passing/loopy belief propagation (BP) with randomized initial priors. Sufficient conditions for convergence to a valid solution are offered, for the first time in the literature, exploiting the structure of the proposed factor graph. Based on the theoretical findings, modifications of BP are devised that i) accelerate convergence to a valid solution and ii) reduce computation cost. Simulations reveal promising throughput results for the proposed distributed algorithm, even though it utilizes simplified detection of the set of interfering terminals. Future work could modify the constraints such that other disruptive wireless technologies (e.g., full-duplex radios or network coding) could be accommodated within the same inference framework.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Schrödinger operators periodic in octants, Abstract: We consider Schrödinger operators with periodic potentials in the positive quadrant in dimension $>1$ with the Dirichlet boundary condition. We show that for any integer $N$ and any interval $I$ there exists a periodic potential such that the Schrödinger operator has $N$ eigenvalues, counted with multiplicity, on this interval and there is no other spectrum on the interval. Furthermore, to the right and to the left of it there is essential spectrum. Moreover, we prove similar results for Schrödinger operators on other domains. The proof is based on the inverse spectral theory for Hill operators on the real line.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Optimized Quantification of Spin Relaxation Times in the Hybrid State, Abstract: Purpose: The analysis of optimized spin ensemble trajectories for relaxometry in the hybrid state. Methods: First, we constructed visual representations to elucidate the differential equation that governs spin dynamics in the hybrid state. Subsequently, numerical optimizations were performed to find spin ensemble trajectories that minimize the Cramér-Rao bound for $T_1$-encoding, $T_2$-encoding, and their weighted sum, respectively, followed by a comparison of the Cramér-Rao bounds obtained with our optimized spin trajectories, as well as with Look-Locker and multi-spin-echo methods. Finally, we experimentally tested our optimized spin trajectories with in vivo scans of the human brain. Results: After a nonrecurring inversion segment on the southern hemisphere of the Bloch sphere, all optimized spin trajectories pursue repetitive loops on the northern half of the sphere, in which the beginning of the first and the end of the last loop deviate from the others. The numerical results obtained in this work align well with intuitive insights gleaned directly from the governing equation. Our results suggest that hybrid-state sequences outperform traditional methods. Moreover, hybrid-state sequences that balance $T_1$- and $T_2$-encoding still result in near optimal signal-to-noise efficiency. Thus, the second parameter can be encoded at virtually no extra cost. Conclusion: We provide insights regarding the optimal encoding processes of spin relaxation times in order to guide the design of robust and efficient pulse sequences. We find that joint acquisitions of $T_1$ and $T_2$ in the hybrid state are substantially more efficient than sequential encoding techniques.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Reviving and Improving Recurrent Back-Propagation, Abstract: In this paper, we revisit the recurrent back-propagation (RBP) algorithm, discuss the conditions under which it applies as well as how to satisfy them in deep neural networks. We show that RBP can be unstable and propose two variants based on conjugate gradient on the normal equations (CG-RBP) and Neumann series (Neumann-RBP). We further investigate the relationship between Neumann-RBP and back propagation through time (BPTT) and its truncated version (TBPTT). Our Neumann-RBP has the same time complexity as TBPTT but only requires constant memory, whereas TBPTT's memory cost scales linearly with the number of truncation steps. We examine all RBP variants along with BPTT and TBPTT in three different application domains: associative memory with continuous Hopfield networks, document classification in citation networks using graph neural networks and hyperparameter optimization for fully connected networks. All experiments demonstrate that RBPs, especially the Neumann-RBP variant, are efficient and effective for optimizing convergent recurrent neural networks.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: A family of Dirichlet-Morrey spaces, Abstract: To each weighted Dirichlet space $\mathcal{D}_p$, $0<p<1$, we associate a family of Morrey-type spaces ${\mathcal{D}}_p^{\lambda}$, $0< \lambda < 1$, constructed by imposing growth conditions on the norm of hyperbolic translates of functions. We indicate some of the properties of these spaces, mention the characterization in terms of boundary values, and study integration and multiplication operators on them.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Fast Characterization of Segmental Duplications in Genome Assemblies, Abstract: Segmental duplications (SDs), or low-copy repeats (LCR), are segments of DNA greater than 1 Kbp with high sequence identity that are copied to other regions of the genome. SDs are among the most important sources of evolution, a common cause of genomic structural variation, and several are associated with diseases of genomic origin. Despite their functional importance, SDs present one of the major hurdles for de novo genome assembly due to the ambiguity they cause in building and traversing both state-of-the-art overlap-layout-consensus and de Bruijn graphs. This causes SD regions to be misassembled, collapsed into a unique representation, or completely missing from assembled reference genomes for various organisms. In turn, this missing or incorrect information limits our ability to fully understand the evolution and the architecture of the genomes. Despite the essential need to accurately characterize SDs in assemblies, there is only one tool that has been developed for this purpose, called Whole Genome Assembly Comparison (WGAC). WGAC comprises several steps that employ different tools and custom scripts, which makes it difficult and time consuming to use. Thus there is still a need for algorithms to characterize within-assembly SDs quickly, accurately, and in a user-friendly manner. Here we introduce a SEgmental Duplication Evaluation Framework (SEDEF) to rapidly detect SDs through sophisticated filtering strategies based on Jaccard similarity and local chaining. We show that SEDEF accurately detects SDs while maintaining a substantial speedup over WGAC that translates into practical run times of minutes instead of weeks. Notably, our algorithm captures up to 25% pairwise error between segments, where previous studies focused on only 10%, allowing us to more deeply track the evolutionary history of the genome. SEDEF is available at this https URL
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Computer Science" ]
Title: Cross validation for locally stationary processes, Abstract: We propose an adaptive bandwidth selector via cross validation for local M-estimators in locally stationary processes. We prove asymptotic optimality of the procedure under mild conditions on the underlying parameter curves. The results are applicable to a wide range of locally stationary processes, such as linear and nonlinear processes. A simulation study shows that the method also works fairly well in misspecified situations.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Weyl nodes in Andreev spectra of multiterminal Josephson junctions: Chern numbers, conductances and supercurrents, Abstract: We consider mesoscopic four-terminal Josephson junctions and study emergent topological properties of the Andreev subgap bands. We use symmetry-constrained analysis for Wigner-Dyson classes of scattering matrices to derive band dispersions. When scattering matrix of the normal region connecting superconducting leads is energy-independent, the determinant formula for Andreev spectrum can be reduced to a palindromic equation that admits a complete analytical solution. Band topology manifests with an appearance of the Weyl nodes which serve as monopoles of finite Berry curvature. The corresponding fluxes are quantified by Chern numbers that translate into a quantized nonlocal conductance that we compute explicitly for the time-reversal-symmetric scattering matrix. The topological regime can be also identified by supercurrents as Josephson current-phase relationships exhibit pronounced nonanalytic behavior and discontinuities near Weyl points that can be controllably accessed in experiments.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: RPC: A Large-Scale Retail Product Checkout Dataset, Abstract: In recent years, interest has emerged in integrating computer vision technology into the retail industry. Automatic checkout (ACO) is one of the critical problems in this area, which aims to automatically generate the shopping list from the images of the products to purchase. The main challenge of this problem comes from the large scale and the fine-grained nature of the product categories as well as the difficulty of collecting training images that reflect realistic checkout scenarios due to the continuous update of the products. Despite its significant practical and research value, this problem is not extensively studied in the computer vision community, largely due to the lack of a high-quality dataset. To fill this gap, in this work we propose a new dataset to facilitate relevant research. Our dataset enjoys the following characteristics: (1) It is by far the largest dataset in terms of both product image quantity and product categories. (2) It includes single-product images taken in a controlled environment and multi-product images taken by the checkout system. (3) It provides different levels of annotations for the checkout images. Compared with the existing datasets, ours is closer to the realistic setting and can derive a variety of research problems. Besides the dataset, we also benchmark the performance on this dataset with various approaches. The dataset and related resources can be found at \url{this https URL}.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: KiDS-450: Tomographic Cross-Correlation of Galaxy Shear with {\it Planck} Lensing, Abstract: We present the tomographic cross-correlation between galaxy lensing measured in the Kilo Degree Survey (KiDS-450) with overlapping lensing measurements of the cosmic microwave background (CMB), as detected by Planck 2015. We compare our joint probe measurement to the theoretical expectation for a flat $\Lambda$CDM cosmology, assuming the best-fitting cosmological parameters from the KiDS-450 cosmic shear and Planck CMB analyses. We find that our results are consistent within $1\sigma$ with the KiDS-450 cosmology, with an amplitude re-scaling parameter $A_{\rm KiDS} = 0.86 \pm 0.19$. Adopting a Planck cosmology, we find our results are consistent within $2\sigma$, with $A_{\it Planck} = 0.68 \pm 0.15$. We show that the agreement is improved in both cases when the contamination to the signal by intrinsic galaxy alignments is accounted for, increasing $A$ by $\sim 0.1$. This is the first tomographic analysis of the galaxy lensing -- CMB lensing cross-correlation signal, and is based on five photometric redshift bins. We use this measurement as an independent validation of the multiplicative shear calibration and of the calibrated source redshift distribution at high redshifts. We find that constraints on these two quantities are strongly correlated when obtained from this technique, which should therefore not be considered as a stand-alone competitive calibration tool.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Astrophysics" ]
Title: Tidal disruptions by rotating black holes: relativistic hydrodynamics with Newtonian codes, Abstract: We propose an approximate approach for studying the relativistic regime of stellar tidal disruptions by rotating massive black holes. It combines an exact relativistic description of the hydrodynamical evolution of a test fluid in a fixed curved spacetime with a Newtonian treatment of the fluid's self-gravity. Explicit expressions for the equations of motion are derived for Kerr spacetime using two different coordinate systems. We implement the new methodology within an existing Newtonian Smoothed Particle Hydrodynamics code and show that including the additional physics involves very little extra computational cost. We carefully explore the validity of the novel approach by first testing its ability to recover geodesic motion, and then by comparing the outcome of tidal disruption simulations against previous relativistic studies. We further compare simulations in Boyer--Lindquist and Kerr--Schild coordinates and conclude that our approach allows accurate simulation even of tidal disruption events where the star penetrates deeply inside the tidal radius of a rotating black hole. Finally, we use the new method to study the effect of the black hole spin on the morphology and fallback rate of the debris streams resulting from tidal disruptions, finding that while the spin has little effect on the fallback rate, it does imprint heavily on the stream morphology, and can even be a determining factor in the survival or disruption of the star itself. Our methodology is discussed in detail as a reference for future astrophysical applications.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: A promise checked is a promise kept: Inspection Testing, Abstract: Occasionally, developers need to ensure that the compiler treats their code in a specific way that is only visible by inspecting intermediate or final compilation artifacts. This is particularly common with carefully crafted compositional libraries, where certain usage patterns are expected to trigger an intricate sequence of compiler optimizations -- stream fusion is a well-known example. The developer of such a library has to manually inspect build artifacts and check for the expected properties. Because this is too tedious to do often, it will likely go unnoticed if the property is broken by a change to the library code, its dependencies or the compiler. The lack of automation has led to released versions of such libraries breaking their documented promises. This indicates that there is an unrecognized need for a new testing paradigm, inspection testing, where the programmer declaratively describes non-functional properties of a compilation artifact and the compiler checks these properties. We define inspection testing abstractly, implement it in the context of Haskell and show that it increases the quality of such libraries.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Linear-Time Sequence Classification using Restricted Boltzmann Machines, Abstract: Classification of sequence data is a topic of interest for dynamic Bayesian models and Recurrent Neural Networks (RNNs). While the former can explicitly model the temporal dependencies between class variables, the latter are capable of learning representations. Several attempts have been made to improve performance by combining these two approaches or by increasing the processing capability of the hidden units in RNNs. This often results in complex models with a large number of learning parameters. In this paper, a compact model is proposed which offers both representation learning and temporal inference of class variables by rolling Restricted Boltzmann Machines (RBMs) and class variables over time. We address the key issue of intractability in this variant of RBMs by optimising a conditional distribution, instead of a joint distribution. Experiments reported in the paper on melody modelling and optical character recognition show that the proposed model can outperform the state-of-the-art. Also, the experimental results on optical character recognition, part-of-speech tagging and text chunking demonstrate that our model is comparable to recurrent neural networks with complex memory gates while requiring far fewer parameters.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A central $U(1)$-extension of a double Lie groupoid, Abstract: In this paper, we introduce a notion of a central $U(1)$-extension of a double Lie groupoid and show that it defines a cocycle in a certain triple complex.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: The NIEP, Abstract: The nonnegative inverse eigenvalue problem (NIEP) asks which lists of $n$ complex numbers (counting multiplicity) occur as the eigenvalues of some $n$-by-$n$ entry-wise nonnegative matrix. The NIEP has a long history and is a known hard (perhaps the hardest in matrix analysis?) and sought-after problem. Thus, there are many subproblems and relevant results in a variety of directions. We survey most work on the problem and its several variants, with an emphasis on recent results, and include 130 references. The survey is divided into: a) the single eigenvalue problems; b) necessary conditions; c) low dimensional results; d) sufficient conditions; e) appending 0's to achieve realizability; f) the graph NIEPs; g) Perron similarities; and h) the relevance of Jordan structure.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Synthesizing Bijective Lenses, Abstract: Bidirectional transformations between different data representations occur frequently in modern software systems. They appear as serializers and deserializers, as database views and view updaters, and more. Manually building bidirectional transformations---by writing two separate functions that are intended to be inverses---is tedious and error prone. A better approach is to use a domain-specific language in which both directions can be written as a single expression. However, these domain-specific languages can be difficult to program in, requiring programmers to manage fiddly details while working in a complex type system. To solve this, we present Optician, a tool for type-directed synthesis of bijective string transformers. The inputs to Optician are two ordinary regular expressions representing two data formats and a few concrete examples for disambiguation. The output is a well-typed program in Boomerang (a bidirectional language based on the theory of lenses). The main technical challenge involves navigating the vast program search space efficiently enough. Unlike most prior work on type-directed synthesis, our system operates in the context of a language with a rich equivalence relation on types (the theory of regular expressions). We synthesize terms of an equivalent language and convert those generated terms into our lens language. We prove the correctness of our synthesis algorithm. We also demonstrate empirically that our new language changes the synthesis problem from one that admits intractable solutions to one that admits highly efficient solutions. We evaluate Optician on a benchmark suite of 39 examples including both microbenchmarks and realistic examples derived from other data management systems including Flash Fill, a tool for synthesizing string transformations in spreadsheets, and Augeas, a tool for bidirectional processing of Linux system configuration files.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Embedding for bulk systems using localized atomic orbitals, Abstract: We present an embedding approach for semiconductors and insulators based on orbital rotations in the space of occupied Kohn-Sham orbitals. We have implemented our approach in the popular VASP software package. We demonstrate its power for defect structures in silicon and polaron formation in titania, two challenging cases for conventional Kohn-Sham density functional theory.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: Particle trapping and conveying using an optical Archimedes' screw, Abstract: Trapping and manipulation of particles using laser beams has become an important tool in diverse fields of research. In recent years, particular interest has been given to the problem of conveying optically trapped particles over extended distances either downstream or upstream of the direction of the photon momentum flow. Here, we propose and demonstrate experimentally an optical analogue of the famous Archimedes' screw where the rotation of a helical-intensity beam is transferred to the axial motion of optically trapped micrometer-scale airborne carbon-based particles. With this optical screw, particles were easily conveyed with controlled velocity and direction, upstream or downstream of the optical flow, over a distance of half a centimeter. Our results offer a very simple optical conveyor that could be adapted to a wide range of optical trapping scenarios.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Scalable Inference for Nested Chinese Restaurant Process Topic Models, Abstract: Nested Chinese Restaurant Process (nCRP) topic models are powerful nonparametric Bayesian methods to extract a topic hierarchy from a given text corpus, where the hierarchical structure is automatically determined by the data. Hierarchical Latent Dirichlet Allocation (hLDA) is a popular instance of nCRP topic models. However, hLDA has only been evaluated at small scale, because the existing collapsed Gibbs sampling and instantiated weight variational inference algorithms either are not scalable or sacrifice inference quality with mean-field assumptions. Moreover, an efficient distributed implementation of the data structures, such as dynamically growing count matrices and trees, is challenging. In this paper, we propose a novel partially collapsed Gibbs sampling (PCGS) algorithm, which combines the advantages of collapsed and instantiated weight algorithms to achieve good scalability as well as high model quality. An initialization strategy is presented to further improve the model quality. Finally, we propose an efficient distributed implementation of PCGS through vectorization, pre-processing, and a careful design of the concurrent data structures and communication strategy. Empirical studies show that our algorithm is 111 times more efficient than the previous open-source implementation for hLDA, with comparable or even better model quality. Our distributed implementation can extract 1,722 topics from a 131-million-document corpus with 28 billion tokens, which is 4-5 orders of magnitude larger than the previous largest corpus, with 50 machines in 7 hours.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Equations of $\,\overline{M}_{0,n}$, Abstract: Following work of Keel and Tevelev, we give explicit polynomials in the Cox ring of $\mathbb{P}^1\times\cdots\times\mathbb{P}^{n-3}$ that, conjecturally, determine $\overline{M}_{0,n}$ as a subscheme. Using Macaulay2, we prove that these equations generate the ideal for $n=5, 6, 7, 8$. For $n \leq 6$ we give a cohomological proof that these polynomials realize $\overline{M}_{0,n}$ as a projective variety, embedded in $\mathbb{P}^{(n-2)!-1}$ by the complete log canonical linear system.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Characterizing a CCD detector for astronomical purposes: OAUNI Project, Abstract: This work verifies the instrumental characteristics of the CCD detector which is part of the UNI astronomical observatory. We measured the linearity of the CCD detector of the SBIG STXL6303E camera, along with the associated gain and readout noise. The detector's response to incident light is highly linear (R^2 = 99.99%), its effective gain is 1.65 +/- 0.01 e-/ADU and its readout noise is 12.2 e-. These values are in agreement with the manufacturer's specifications. We confirm that this detector is sufficiently precise for astronomical measurements.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Testing small scale gravitational wave detectors with dynamical mass distributions, Abstract: The recent discovery of gravitational waves by the LIGO-Virgo collaboration created renewed interest in the investigation of alternative gravitational detector designs, such as small scale resonant detectors. In this article, it is shown how proposed small scale detectors can be tested by generating dynamical gravitational fields with appropriate distributions of moving masses. A series of interesting experiments will be possible with this setup. In particular, small scale detectors can be tested very early in the development phase and tests can be used to progress quickly in their development. This could contribute to the emerging field of gravitational wave astronomy.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Mining within-trial oscillatory brain dynamics to address the variability of optimized spatial filters, Abstract: Data-driven spatial filtering algorithms optimize scores such as the contrast between two conditions to extract oscillatory brain signal components. Most machine learning approaches for filter estimation, however, disregard within-trial temporal dynamics and are extremely sensitive to changes in training data and involved hyperparameters. This leads to highly variable solutions and impedes the selection of a suitable candidate for, e.g., neurotechnological applications. Fostering component introspection, we propose to embrace this variability by condensing the functional signatures of a large set of oscillatory components into homogeneous clusters, each representing specific within-trial envelope dynamics. The proposed method is exemplified by and evaluated on a complex hand force task with a rich within-trial structure. Based on electroencephalography data of 18 healthy subjects, we found that the components' distinct temporal envelope dynamics are highly subject-specific. On average, we obtained seven clusters per subject, which were strictly confined regarding their underlying frequency bands. As the analysis method is not limited to a specific spatial filtering algorithm, it could be utilized for a wide range of neurotechnological applications, e.g., to select and monitor functionally relevant features for brain-computer interface protocols in stroke rehabilitation.
[ 0, 0, 0, 1, 1, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Neural Architecture Search with Bayesian Optimisation and Optimal Transport, Abstract: Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function $f$ which is only accessible via point evaluations. It is typically used in settings where $f$ is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network \emph{architectures}. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: An improved parametric model for hysteresis loop approximation, Abstract: A number of improvements have been added to the existing analytical model of a hysteresis loop defined in parametric form. In particular, three phase shifts are included in the model, which makes it possible to tilt the hysteresis loop smoothly by the required angle at the split point and to change the curvature of the loop smoothly. As a result, the error of approximation of a hysteresis loop by the improved model does not exceed 1%, which is several times less than the error of the existing model. The improved model is capable of approximating most of the known types of rate-independent symmetrical hysteresis loops encountered in the practice of physical measurements. The model allows building smooth, piecewise-linear, hybrid, minor, mirror-reflected, inverse, reverse, double and triple loops. One of the possible applications of the model developed is linearization of a probe microscope piezoscanner. The improved model may prove useful for simulating scientific instruments that contain hysteresis elements.
[ 1, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: To prune, or not to prune: exploring the efficacy of pruning for model compression, Abstract: Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: Connecting Software Metrics across Versions to Predict Defects, Abstract: Accurate software defect prediction could help software practitioners allocate test resources to defect-prone modules effectively and efficiently. In the last decades, much effort has been devoted to building accurate defect prediction models, including developing quality defect predictors and modeling techniques. However, current widely used defect predictors such as code metrics and process metrics do not adequately describe how software modules change over the project's evolution, which we believe is important for defect prediction. In order to deal with this problem, in this paper, we propose to use the Historical Version Sequence of Metrics (HVSM) in continuous software versions as defect predictors. Furthermore, we leverage a Recurrent Neural Network (RNN), a popular modeling technique, taking HVSM as the input to build software prediction models. The experimental results show that, in most cases, the proposed HVSM-based RNN model has a significantly better effort-aware ranking effectiveness than the commonly used baseline models.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Constructive Euler hydrodynamics for one-dimensional attractive particle systems, Abstract: We review a (constructive) approach first introduced in [6] and further developed in [7, 8, 38, 9] for hydrodynamic limits of asymmetric attractive particle systems, in a weak or in a strong (that is, almost sure) sense, in a homogeneous or in a quenched disordered setting.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Cyber Insurance for Heterogeneous Wireless Networks, Abstract: Heterogeneous wireless networks (HWNs) composed of densely deployed base stations of different types with various radio access technologies have become a prevailing trend to accommodate ever-increasing traffic demand in enormous volume. Nowadays, users rely heavily on HWNs for ubiquitous network access that contains valuable and critical information such as financial transactions, e-health, and public safety. Cyber risks, representing one of the most significant threats to network security and reliability, are increasing in severity. To address this problem, this article introduces the concept of cyber insurance to transfer the cyber risk (i.e., service outage, as a consequence of cyber risks in HWNs) to a third party insurer. Firstly, a review of the enabling technologies for HWNs and their vulnerabilities to cyber risks is presented. Then, the fundamentals of cyber insurance are introduced, and subsequently, a cyber insurance framework for HWNs is presented. Finally, open issues are discussed and the challenges are highlighted for integrating cyber insurance as a service of next generation HWNs.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Finance" ]
Title: Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition, Abstract: This paper focuses on multi-scale approaches for variational methods and corresponding gradient flows. Recently, for convex regularization functionals such as total variation, new theory and algorithms for nonlinear eigenvalue problems via nonlinear spectral decompositions have been developed. Those methods open new directions for advanced image filtering. However, for an effective use in image segmentation and shape decomposition, a clear interpretation of the spectral response regarding size and intensity scales is needed but lacking in current approaches. In this context, $L^1$ data fidelities are particularly helpful due to their interesting multi-scale properties such as contrast invariance. Hence, the novelty of this work is the combination of $L^1$-based multi-scale methods with nonlinear spectral decompositions. We compare $L^1$ with $L^2$ scale-space methods in view of spectral image representation and decomposition. We show that the contrast invariant multi-scale behavior of $L^1-TV$ promotes sparsity in the spectral response providing more informative decompositions. We provide a numerical method and analyze synthetic and biomedical images at which decomposition leads to improved segmentation.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: PaccMann: Prediction of anticancer compound sensitivity with multi-modal attention-based neural networks, Abstract: We present a novel approach for the prediction of anticancer compound sensitivity by means of multi-modal attention-based neural networks (PaccMann). In our approach, we integrate three key pillars of drug sensitivity, namely, the molecular structure of compounds, transcriptomic profiles of cancer cells as well as prior knowledge about interactions among proteins within cells. Our models ingest a drug-cell pair consisting of the SMILES encoding of a compound and the gene expression profile of a cancer cell and predict an IC50 sensitivity value. Gene expression profiles are encoded using an attention-based encoding mechanism that assigns high weights to the most informative genes. We present and study three encoders for the SMILES strings of compounds: 1) bidirectional recurrent, 2) convolutional, and 3) attention-based encoders. We compare our devised models against a baseline model that ingests engineered fingerprints to represent the molecular structure. We demonstrate that using our attention-based encoders, we can surpass the baseline model. The use of attention-based encoders enhances interpretability and enables us to identify genes, bonds and atoms that were used by the network to make a prediction.
[ 0, 0, 0, 0, 1, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces", Abstract: Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Naturally occurring $^{32}$Si and low-background silicon dark matter detectors, Abstract: The naturally occurring radioisotope $^{32}$Si represents a potentially limiting background in future dark matter direct-detection experiments. We investigate sources of $^{32}$Si and the vectors by which it comes to reside in silicon crystals used for fabrication of radiation detectors. We infer that the $^{32}$Si concentration in commercial single-crystal silicon is likely variable, dependent upon the specific geologic and hydrologic history of the source (or sources) of silicon "ore" and the details of the silicon-refinement process. The silicon production industry is large, highly segmented by refining step, and multifaceted in terms of final product type, from which we conclude that production of $^{32}$Si-mitigated crystals requires both targeted silicon material selection and a dedicated refinement-through-crystal-production process. We review options for source material selection, including quartz from an underground source and silicon isotopically reduced in $^{32}$Si. To quantitatively evaluate the $^{32}$Si content in silicon metal and precursor materials, we propose analytic methods employing chemical processing and radiometric measurements. Ultimately, it appears feasible to produce silicon detectors with low levels of $^{32}$Si, though significant assay method development is required to validate this claim and thereby enable a quality assurance program during an actual controlled silicon-detector production cycle.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: A Syllable-based Technique for Word Embeddings of Korean Words, Abstract: Word embedding has become a fundamental component to many NLP tasks such as named entity recognition and machine translation. However, popular models that learn such embeddings are unaware of the morphology of words, so it is not directly applicable to highly agglutinative languages such as Korean. We propose a syllable-based learning model for Korean using a convolutional neural network, in which word representation is composed of trained syllable vectors. Our model successfully produces morphologically meaningful representation of Korean words compared to the original Skip-gram embeddings. The results also show that it is quite robust to the Out-of-Vocabulary problem.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: On discrete homology of a free pro-$p$-group, Abstract: For a prime $p$, let $\hat F_p$ be a finitely generated free pro-$p$-group of rank $\geq 2$. We show that the second discrete homology group $H_2(\hat F_p,\mathbb Z/p)$ is an uncountable $\mathbb Z/p$-vector space. This answers a problem of A.K. Bousfield.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Conceptual Modeling of Inventory Management Processes as a Thinging Machine, Abstract: A control model is typically classified into three forms: conceptual, mathematical and simulation (computer). This paper analyzes a conceptual modeling application with respect to an inventory management system. Today, most organizations utilize computer systems for inventory control that provide protection when interruptions or breakdowns occur within work processes. Modeling the inventory processes is an active area of research that utilizes many diagrammatic techniques, including data flow diagrams, Unified Modeling Language (UML) diagrams and Integration DEFinition (IDEF). We claim that current conceptual modeling frameworks lack uniform notions and fail to appeal to designers and analysts. We propose modeling an inventory system as an abstract machine, called a Thinging Machine (TM), with five operations: creation, processing, receiving, releasing and transferring. The paper provides side-by-side contrasts of some existing examples of conceptual modeling methodologies that apply to TM. Additionally, TM is applied in a case study of an actual inventory system that uses IBM Maximo. The resulting conceptual depictions point to the viability of TM as a valuable tool for developing a high-level representation of inventory processes.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: 11 T Dipole for the Dispersion Suppressor Collimators, Abstract: Chapter 11 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Spectral Analysis of Jet Substructure with Neural Networks: Boosted Higgs Case, Abstract: Jets from boosted heavy particles have a typical angular scale which can be used to distinguish them from QCD jets. We introduce a machine learning strategy for jet substructure analysis using a spectral function on the angular scale. The angular spectrum allows us to scan energy deposits over the angle between a pair of particles in a highly visual way. We set up an artificial neural network (ANN) to identify characteristic shapes of the spectra of the jets from heavy particle decays. By taking the Higgs jets and QCD jets as examples, we show that the ANN with the angular spectrum input has performance similar to existing taggers. In addition, some improvement is seen when extra radiation occurs. Notably, the new algorithm automatically combines the information of the multi-point correlations in the jet.
[ 0, 0, 0, 1, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: UTD-CRSS Submission for MGB-3 Arabic Dialect Identification: Front-end and Back-end Advancements on Broadcast Speech, Abstract: This study presents systems submitted by the University of Texas at Dallas, Center for Robust Speech Systems (UTD-CRSS) to the MGB-3 Arabic Dialect Identification (ADI) subtask. This task is defined to discriminate between five dialects of Arabic, including Egyptian, Gulf, Levantine, North African, and Modern Standard Arabic. We develop multiple single systems with different front-end representations and back-end classifiers. At the front-end level, feature extraction methods such as Mel-frequency cepstral coefficients (MFCCs) and two types of bottleneck features (BNF) are studied for an i-Vector framework. As for the back-end level, Gaussian back-end (GB), and Generative Adversarial Networks (GANs) classifiers are applied alternately. The best submission (contrastive) is achieved for the ADI subtask with an accuracy of 76.94% by augmenting the randomly chosen part of the development dataset. Further, with a post evaluation correction in the submitted system, final accuracy is increased to 79.76%, which represents the best performance achieved so far for the challenge on the test dataset.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Basis Adaptive Sample Efficient Polynomial Chaos (BASE-PC), Abstract: For a large class of orthogonal basis functions, there has been a recent identification of expansion methods for computing accurate, stable approximations of a quantity of interest. This paper presents, within the context of uncertainty quantification, a practical implementation using basis adaptation and coherence-motivated sampling, which under assumptions has satisfying guarantees. This implementation is referred to as Basis Adaptive Sample Efficient Polynomial Chaos (BASE-PC). A key component is the use of anisotropic polynomial order, which admits evolving global bases for approximation in an efficient manner, leading to consistently stable approximation for a practical class of smooth functionals. This fully adaptive, non-intrusive method requires no a priori information about the solution and has satisfying theoretical guarantees of recovery. A key contribution is a correction sampling scheme for coherence-optimal sampling that improves stability and accuracy within the adaptive basis scheme. Theoretically, the method may dramatically reduce the impact of dimensionality in function approximation, and numerically the method is demonstrated to perform well on problems with dimension up to 1000.
[ 0, 0, 1, 1, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Deep Learning Methods for Efficient Large Scale Video Labeling, Abstract: We present a solution to the "Google Cloud and YouTube-8M Video Understanding Challenge" that ranked in 5th place. The proposed model is an ensemble of three model families, two frame-level and one video-level. Training was performed on an augmented dataset with cross-validation.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: Low-level Active Visual Navigation: Increasing robustness of vision-based localization using potential fields, Abstract: This paper proposes a low-level visual navigation algorithm to improve visual localization of a mobile robot. The algorithm, based on artificial potential fields, associates each feature in the current image frame with an attractive or neutral potential energy, with the objective of generating a control action that drives the vehicle towards the goal, while still favoring feature rich areas within a local scope, thus improving the localization performance. One key property of the proposed method is that it does not rely on mapping, and therefore it is a lightweight solution that can be deployed on miniaturized aerial robots, in which memory and computational power are major constraints. Simulations and real experimental results using a mini quadrotor equipped with a downward looking camera demonstrate that the proposed method can effectively drive the vehicle to a designated goal through a path that prevents localization failure.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Robotics" ]
Title: Detecting Cyber-Physical Attacks in Additive Manufacturing using Digital Audio Signing, Abstract: Additive Manufacturing (AM, or 3D printing) is a novel manufacturing technology that is being adopted in industrial and consumer settings. However, the reliance of this technology on computerization has raised various security concerns. In this paper we address sabotage via tampering with the 3D printing process. We present an object verification system using side-channel emanations: sound generated by onboard stepper motors. The contributions of this paper are the following. We present two algorithms: one which generates a master audio fingerprint for the unmodified printing process, and one which computes the similarity between other print recordings and the master audio fingerprint. We then evaluate the deviation due to tampering, focusing on the detection of minimal tampering primitives. By detecting the deviation at the time of its occurrence, we can stop the printing process for compromised objects, thus saving time and preventing material waste. We discuss how aspects such as background noise or different audio-recorder positions affect the method. We further outline our vision with use cases incorporating our approach.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Short-wavelength out-of-band EUV emission from Sn laser-produced plasma, Abstract: We present the results of spectroscopic measurements in the extreme ultraviolet (EUV) regime (7-17 nm) of molten tin microdroplets illuminated by a high-intensity 3-J, 60-ns Nd:YAG laser pulse. The strong 13.5 nm emission from this laser-produced plasma is of relevance for next-generation nanolithography machines. Here, we focus on the shorter wavelength features between 7 and 12 nm which have so far remained poorly investigated despite their diagnostic relevance. Using flexible atomic code calculations and local thermodynamic equilibrium arguments, we show that the line features in this region of the spectrum can be explained by transitions from high-lying configurations within the Sn$^{8+}$-Sn$^{15+}$ ions. The dominant transitions for all ions but Sn$^{8+}$ are found to be electric-dipole transitions towards the $n$=4 ground state from the core-excited configuration in which a 4$p$ electron is promoted to the 5$s$ sub-shell. Our results resolve some long-standing spectroscopic issues and provide reliable charge state identification for Sn laser-produced plasma, which could be employed as a useful tool for diagnostic purposes.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Compact Cardinals and Eight Values in Cichoń's Diagram, Abstract: Assuming three strongly compact cardinals, it is consistent that \[ \aleph_1 < \mathrm{add}(\mathrm{null}) < \mathrm{cov}(\mathrm{null}) < \mathfrak{b} < \mathfrak{d} < \mathrm{non}(\mathrm{null}) < \mathrm{cof}(\mathrm{null}) < 2^{\aleph_0}.\] Under the same assumption, it is consistent that \[ \aleph_1 < \mathrm{add}(\mathrm{null}) < \mathrm{cov}(\mathrm{null}) < \mathrm{non}(\mathrm{meager}) < \mathrm{cov}(\mathrm{meager}) < \mathrm{non}(\mathrm{null}) < \mathrm{cof}(\mathrm{null}) < 2^{\aleph_0}.\]
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures, Abstract: Separating an audio scene into isolated sources is a fundamental problem in computer audition, analogous to image segmentation in visual scene analysis. Source separation systems based on deep learning are currently the most successful approaches for solving the underdetermined separation problem, where there are more sources than channels. Traditionally, such systems are trained on sound mixtures where the ground truth decomposition is already known. Since most real-world recordings do not have such a decomposition available, this limits the range of mixtures one can train on, and the range of mixtures the learned models may successfully separate. In this work, we use a simple blind spatial source separation algorithm to generate estimated decompositions of stereo mixtures. These estimates, together with a weighting scheme in the time-frequency domain, based on confidence in the separation quality, are used to train a deep learning model that can be used for single-channel separation, where no source direction information is available. This demonstrates how a simple cue such as the direction of origin of source can be used to bootstrap a model for source separation that can be used in situations where that cue is not available.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Origin of the pressure-dependent T$_c$ valley in superconducting simple cubic phosphorus, Abstract: Motivated by recent experiments, we investigate the pressure-dependent electronic structure and electron-phonon (\emph{e-ph}) coupling for simple cubic phosphorus by performing first-principles calculations within the full potential linearized augmented plane wave method. As a function of increasing pressure, our calculations show a valley feature in T$_c$, followed by an eventual decrease for higher pressures. We demonstrate that this T$_c$ valley at low pressures is due to two nearby Lifshitz transitions, as we analyze the band-resolved contributions to the \emph{e-ph} coupling. Below the first Lifshitz transition, the phonon hardening and shrinking of the $\gamma$ Fermi surface with $s$ orbital character result in a decreased T$_c$ with increasing pressure. After the second Lifshitz transition, the appearance of $\delta$ Fermi surfaces with $3d$ orbital character generates strong \emph{e-ph} inter-band couplings in $\alpha\delta$ and $\beta\delta$ channels, and hence leads to an increase of T$_c$. For higher pressures, the phonon hardening finally dominates, and T$_c$ decreases again. Our study reveals that the intriguing T$_c$ valley discovered in experiment can be attributed to Lifshitz transitions, while the plateau of T$_c$ detected at intermediate pressures appears to be beyond the scope of our analysis. This strongly suggests that besides \emph{e-ph} coupling, electronic correlations along with plasmonic contributions may be relevant for simple cubic phosphorus. Our findings hint at the notion that increasing pressure can shift the low-energy orbital weight towards $d$ character, and as such even trigger an enhanced importance of orbital-selective electronic correlations despite an increase of the overall bandwidth.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: V-cycle multigrid algorithms for discontinuous Galerkin methods on non-nested polytopic meshes, Abstract: In this paper we analyse the convergence properties of V-cycle multigrid algorithms for the numerical solution of the linear system of equations arising from discontinuous Galerkin discretization of second-order elliptic partial differential equations on polytopal meshes. Here, the sequence of spaces that underlies the multigrid scheme is possibly non-nested and is obtained by employing agglomeration with possible edge/face coarsening. We prove that the method converges uniformly with respect to the granularity of the grid and the polynomial approximation degree p, provided that the number of smoothing steps, which depends on p, is chosen sufficiently large.
[ 1, 0, 0, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Average Case Constant Factor Time and Distance Optimal Multi-Robot Path Planning in Well-Connected Environments, Abstract: Fast algorithms for optimal multi-robot path planning are sought after in many real-world applications. Known methods, however, generally do not simultaneously guarantee good solution optimality and fast run time (e.g., polynomial). In this work, we develop a low-polynomial running time algorithm, called SplitAndGroup (SAG), that solves the multi-robot path planning problem on grids and grid-like environments and produces constant-factor makespan-optimal solutions in the average case. That is, SAG is an average-case O(1)-approximation algorithm. SAG computes solutions with sub-linear makespan and is capable of handling cases when the density of robots is extremely high - in a graph-theoretic setting, the algorithm supports cases where all vertices of the underlying graph are occupied by robots. SAG attains its desirable properties through a careful combination of divide-and-conquer techniques and network-flow-based methods for routing the robots. Solutions from SAG are also, in a weaker sense, a constant-factor approximation with respect to total distance optimality.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Gaussian Process based Passivation of a Class of Nonlinear Systems with Unknown Dynamics, Abstract: The paper addresses the problem of passivation of a class of nonlinear systems where the dynamics are unknown. For this purpose, we use the highly flexible, data-driven Gaussian process regression for the identification of the unknown dynamics for feed-forward compensation. The closed loop system of the nonlinear system, the Gaussian process model and a feedback control law is guaranteed to be semi-passive with a specific probability. The predicted variance of the Gaussian process regression is used to bound the model error which additionally allows to specify the state space region where the closed-loop system behaves passive. Finally, the theoretical results are illustrated by a simulation.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: A Bayesian Method for Joint Clustering of Vectorial Data and Network Data, Abstract: We present a new model-based integrative method for clustering objects given both vectorial data, which describes the feature of each object, and network data, which indicates the similarity of connected objects. The proposed general model is able to cluster the two types of data simultaneously within one integrative probabilistic model, while traditional methods can only handle one data type or depend on transforming one data type to another. Bayesian inference of the clustering is conducted based on a Markov chain Monte Carlo algorithm. A special case of the general model combining the Gaussian mixture model and the stochastic block model is extensively studied. We used both synthetic data and real data to evaluate this new method and compare it with alternative methods. The results show that our simultaneous clustering method performs much better. This improvement is due to the power of the model-based probabilistic approach for efficiently integrating information.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Pointwise-generalized-inverses of linear maps between C$^*$-algebras and JB$^*$-triples, Abstract: We study pointwise-generalized-inverses of linear maps between C$^*$-algebras. Let $\Phi$ and $\Psi$ be linear maps between complex Banach algebras $A$ and $B$. We say that $\Psi$ is a pointwise-generalized-inverse of $\Phi$ if $\Phi(aba)=\Phi(a)\Psi(b)\Phi(a),$ for every $a,b\in A$. The pair $(\Phi,\Psi)$ is Jordan-triple multiplicative if $\Phi$ is a pointwise-generalized-inverse of $\Psi$ and the latter is a pointwise-generalized-inverse of $\Phi$. We study the basic properties of these maps in connection with Jordan homomorphisms, triple homomorphisms and strong preservers. We also determine conditions to guarantee the automatic continuity of the pointwise-generalized-inverse of a continuous operator between C$^*$-algebras. An appropriate generalization is introduced in the setting of JB$^*$-triples.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: A constrained control-planning strategy for redundant manipulators, Abstract: This paper presents an interconnected control-planning strategy for redundant manipulators, subject to system and environmental constraints. The method incorporates low-level control characteristics and high-level planning components into a robust strategy for manipulators acting in complex environments, subject to joint limits. This strategy is formulated using an adaptive control rule, the estimated dynamic model of the robotic system and the nullspace of the linearized constraints. A path is generated that takes into account the capabilities of the platform. The proposed method is computationally efficient, enabling its implementation on a real multi-body robotic system. Through experimental results with a 7 DOF manipulator, we demonstrate the performance of the method in real-world scenarios.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Lagrangian for RLC circuits using analogy with the classical mechanics concepts, Abstract: We study and formulate the Lagrangians for LC, RC, RL, and RLC circuits by analogy with mechanical problems in the classical mechanics formulation. We find that the Lagrangians for the LC and RLC circuits are governed by two terms, i.e., a kinetic-energy-like term and a potential-energy-like term. The Lagrangian for the RC circuit receives a contribution only from the potential-energy-like term, while the Lagrangian for the RL circuit comes only from the kinetic-energy-like term.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: A note on recent criticisms to Birnbaum's theorem, Abstract: In this note, we provide critical commentary on two articles that cast doubt on the validity and implications of Birnbaum's theorem: Evans (2013) and Mayo (2014). In our view, the proof is correct and the consequences of the theorem are alive and well.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics" ]
Title: On the radius of spatial analyticity for the quartic generalized KdV equation, Abstract: A lower bound on the rate of decrease in time of the uniform radius of spatial analyticity of solutions to the quartic generalized KdV equation is derived, improving an earlier result by Bona, Grujić and Kalisch.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Calibrating Noise to Variance in Adaptive Data Analysis, Abstract: Datasets are often used multiple times and each successive analysis may depend on the outcome of previous analyses. Standard techniques for ensuring generalization and statistical validity do not account for this adaptive dependence. A recent line of work studies the challenges that arise from such adaptive data reuse by considering the problem of answering a sequence of "queries" about the data distribution where each query may depend arbitrarily on answers to previous queries. The strongest results obtained for this problem rely on differential privacy -- a strong notion of algorithmic stability with the important property that it "composes" well when data is reused. However the notion is rather strict, as it requires stability under replacement of an arbitrary data element. The simplest algorithm is to add Gaussian (or Laplace) noise to distort the empirical answers. However, analysing this technique using differential privacy yields suboptimal accuracy guarantees when the queries have low variance. Here we propose a relaxed notion of stability that also composes adaptively. We demonstrate that a simple and natural algorithm based on adding noise scaled to the standard deviation of the query provides our notion of stability. This implies an algorithm that can answer statistical queries about the dataset with substantially improved accuracy guarantees for low-variance queries. The only previous approach that provides such accuracy guarantees is based on a more involved differentially private median-of-means algorithm and its analysis exploits stronger "group" stability of the algorithm.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: The Lyman-alpha forest power spectrum from the XQ-100 Legacy Survey, Abstract: We present the Lyman-$\alpha$ flux power spectrum measurements of the XQ-100 sample of quasar spectra obtained in the context of the European Southern Observatory Large Programme "Quasars and their absorption lines: a legacy survey of the high redshift universe with VLT/XSHOOTER". Using $100$ quasar spectra with medium resolution and signal-to-noise ratio we measure the power spectrum over a range of redshifts $z = 3 - 4.2$ and over a range of scales $k = 0.003 - 0.06\,\mathrm{s\,km^{-1}}$. The results agree well with the measurements of the one-dimensional power spectrum found in the literature. The data analysis used in this paper is based on the Fourier transform and has been tested on synthetic data. Systematic and statistical uncertainties of our measurements are estimated, with a total error (statistical and systematic) comparable to the one of the BOSS data in the overlapping range of scales, and smaller by more than $50\%$ for higher redshift bins ($z>3.6$) and small scales ($k > 0.01\,\mathrm{s\,km^{-1}}$). The XQ-100 data set has the unique feature of having signal-to-noise ratios and resolution intermediate between the two data sets that are typically used to perform cosmological studies, i.e. BOSS and high-resolution spectra (e.g. UVES/VLT or HIRES). More importantly, the measured flux power spectra span the high redshift regime which is usually more constraining for structure formation models.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Conditional Independence, Conditional Mean Independence, and Zero Conditional Covariance, Abstract: We investigate the reversibility of the directional hierarchy among the notions of conditional independence, conditional mean independence, and zero conditional covariance for two random variables X and Y given a conditioning element Z whose range is not constrained by any topological restriction. If the first moments of X, Y, and XY exist, then conditional independence implies conditional mean independence, and conditional mean independence implies zero conditional covariance; the direction of the hierarchy is not reversible in general. If the conditional expectation of Y given X and Z is "affine in X," which happens when X is Bernoulli, then the "intercept" and "slope" of the conditional expectation (that is, the nonparametric regression function) equal the "intercept" and "slope" of the "least-squares linear regression function", as a result of which zero conditional covariance implies conditional mean independence.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Gradient descent GAN optimization is locally stable, Abstract: Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still \emph{locally asymptotically stable} for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which \emph{is} able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.
[ 1, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Transition rates and radiative lifetimes of Ca I, Abstract: We tabulate spontaneous emission rates for all possible 811 electric-dipole-allowed transitions between the 75 lowest-energy states of Ca I. These involve the $4sns$ ($n=4-8$), $4snp$ ($n=4-7$), $4snd$ ($n=3-6$), $4snf$ ($n=4-6$), $4p^2$, and $3d4p$ electronic configurations. We compile the transition rates by carrying out ab initio relativistic calculations using the combined method of configuration interaction and many-body perturbation theory. The results are compared to the available literature values. The tabulated rates can be useful in various applications, such as optimizing laser cooling in magneto-optical traps, estimating various systematic effects in optical clocks and evaluating static or dynamic polarizabilities and long-range atom-atom interaction coefficients and related atomic properties.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Local and collective magnetism of EuFe$_2$As$_2$, Abstract: We present an experimental study of the local and collective magnetism of $\mathrm{EuFe_2As_2}$, that is isostructural with the high temperature superconductor parent compound $\mathrm{BaFe_2As_2}$. In contrast to $\mathrm{BaFe_2As_2}$, where only Fe spins order, $\mathrm{EuFe_2As_2}$ has an additional magnetic transition below 20 K due to the ordering of the Eu$^{2+}$ spins ($J =7/2$, with $L=0$ and $S=7/2$) in an A-type antiferromagnetic texture (ferromagnetic layers stacked antiferromagnetically). This may potentially affect the FeAs layer and its local and correlated magnetism. Fe-K$_\beta$ x-ray emission experiments on $\mathrm{EuFe_2As_2}$ single crystals reveal a local magnetic moment of 1.3$\pm0.15~\mu_B$ at 15 K that slightly increases to 1.45$\pm0.15~\mu_B$ at 300 K. Resonant inelastic x-ray scattering (RIXS) experiments performed on the same crystals show dispersive broad (in energy) magnetic excitations along $\mathrm{(0, 0)\rightarrow(1, 0)}$ and $\mathrm{(0, 0)\rightarrow(1, 1)}$ with a bandwidth on the order of 170-180 meV. These results on local and collective magnetism are in line with other parent compounds of the $\mathrm{AFe_2As_2}$ series ($A=$ Ba, Ca, and Sr), especially the well characterized $\mathrm{BaFe_2As_2}$. Thus, our experiments lead us to the conclusion that the effect of the high magnetic moment of Eu on the magnitude of both Fe local magnetic moment and spin excitations is small and confined to low energy excitations.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Newton slopes for twisted Artin--Schreier--Witt Towers, Abstract: We fix a monic polynomial $f(x) \in \mathbb F_q[x]$ over a finite field of characteristic $p$ of degree relatively prime to $p$. Let $a\mapsto \omega(a)$ be the Teichmüller lift of $\mathbb F_q$, and let $\chi:\mathbb{Z}\to \mathbb C_p^\times$ be a finite character of $\mathbb Z_p$. The $L$-function associated to the polynomial $f$ and the so-called twisted character $\omega^u\times \chi$ is denoted by $L_f(\omega^u,\chi,s)$. We prove that, when the conductor of the character is large enough, the $p$-adic Newton slopes of this $L$-function form arithmetic progressions.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: End-to-end distance and contour length distribution functions of DNA helices, Abstract: We present a computational method to evaluate the end-to-end and the contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, we derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length distance, providing a measure of the global molecule size, are used to examine the DNA flexibility at short length scales. It is found that, also in molecules with less than $\sim 60$ base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Physics" ]
Title: Online Estimation of Multiple Dynamic Graphs in Pattern Sequences, Abstract: Many time-series data including text, movies, and biological signals can be represented as sequences of correlated binary patterns. These patterns may be described by weighted combinations of a few dominant structures that underpin specific interactions among the binary elements. To extract the dominant correlation structures and their contributions to generating data in a time-dependent manner, we model the dynamics of binary patterns using the state-space model of an Ising-type network that is composed of multiple undirected graphs. We provide a sequential Bayes algorithm to estimate the dynamics of weights on the graphs while gaining the graph structures online. This model can uncover overlapping graphs underlying the data better than a traditional orthogonal decomposition method, and outperforms an original time-dependent full Ising model. We assess the performance of the method by simulated data, and demonstrate that spontaneous activity of cultured hippocampal neurons is represented by dynamics of multiple graphs.
[ 1, 0, 0, 1, 1, 0 ]
[ "Computer Science", "Statistics", "Quantitative Biology" ]
Title: Fluid Communities: A Competitive, Scalable and Diverse Community Detection Algorithm, Abstract: We introduce a community detection algorithm (Fluid Communities) based on the idea of fluids interacting in an environment, expanding and contracting as a result of that interaction. Fluid Communities is based on the propagation methodology, which represents the state-of-the-art in terms of computational cost and scalability. While being highly efficient, Fluid Communities is able to find communities in synthetic graphs with an accuracy close to the current best alternatives. Additionally, Fluid Communities is the first propagation-based algorithm capable of identifying a variable number of communities in a network. To illustrate the relevance of the algorithm, we evaluate the diversity of the communities found by Fluid Communities, and find them to be significantly different from the ones found by alternative methods.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Identification of Dynamic Systems with Interval Arithmetic, Abstract: This paper aims to identify three electrical systems: a series RLC circuit, a motor/generator coupled system, and the Duffing-Ueda oscillator. The error reduction ratio and the Akaike information criterion were used to obtain the systems' models. Our approach to handling the numerical errors was interval arithmetic, applied to the resolution of the least-squares estimation. The routines were implemented in Intlab, a Matlab toolbox devoted to interval arithmetic. Finally, the interval RMSE was calculated to verify the quality of the obtained models. The applied methodology was satisfactory, since the obtained intervals encompass the systems' data and demonstrate how the numerical errors affect the answers.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Safety-Aware Apprenticeship Learning, Abstract: Apprenticeship learning (AL) is a kind of Learning from Demonstration techniques where the reward function of a Markov Decision Process (MDP) is unknown to the learning agent and the agent has to derive a good policy by observing an expert's demonstrations. In this paper, we study the problem of how to make AL algorithms inherently safe while still meeting its learning objective. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure safety while retaining performance of the learnt policy. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Solitary wave solutions and their interactions for fully nonlinear water waves with surface tension in the generalized Serre equations, Abstract: Some effects of surface tension on fully-nonlinear, long, surface water waves are studied by numerical means. The differences between various solitary waves and their interactions in subcritical and supercritical surface tension regimes are presented. Analytical expressions for new peaked travelling wave solutions are presented in the case of critical surface tension. The numerical experiments were performed using a highly accurate finite element method based on smooth cubic splines and the four-stage, classical, explicit Runge-Kutta method of order four.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Relative weak mixing of W*-dynamical systems via joinings, Abstract: A characterization of relative weak mixing in W*-dynamical systems in terms of a relatively independent joining is proven.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: A note on minimal dispersion of point sets in the unit cube, Abstract: We study the dispersion of a point set, a notion closely related to the discrepancy. Given a real $r\in (0,1)$ and an integer $d\geq 2$, let $N(r,d)$ denote the minimum number of points inside the $d$-dimensional unit cube $[0,1]^d$ such that they intersect every axis-aligned box inside $[0,1]^d$ of volume greater than $r$. We prove an upper bound on $N(r,d)$, matching a lower bound of Aistleitner et al. up to a multiplicative constant depending only on $r$. This fully determines the rate of growth of $N(r,d)$ if $r\in(0,1)$ is fixed.
[ 1, 0, 0, 0, 0, 0 ]
[ "Mathematics" ]
Title: Snyder Like Modified Gravity in Newton's Spacetime, Abstract: This work is focused on seeking a geodesic interpretation of the dynamics of a particle under the effects of a Snyder-like deformation in the background of the Kepler problem. In order to accomplish that task, a Newtonian spacetime is used. Newtonian spacetime is not a metric manifold, but it allows one to introduce a torsion-free connection in order to interpret the dynamic equations of the deformed Kepler problem as geodesics in a curved spacetime. These geodesics and the curvature terms of the Riemann and Ricci tensors show a mass and a fundamental length dependence as expected, but are velocity independent. In this sense, the effect of introducing a deformed algebra is examined and the corresponding curvature terms calculated, as well as the modifications of the integrals of motion.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]