text: string (lengths 57 to 2.88k) | labels: sequence (length 6)
Title: Efficient algorithms to discover alterations with complementary functional association in cancer, Abstract: Recent large cancer studies have measured somatic alterations in an unprecedented number of tumours. These large datasets allow the identification of cancer-related sets of genetic alterations by identifying relevant combinatorial patterns. Among such patterns, mutual exclusivity has been employed by several recent methods that have shown its effectiveness in characterizing gene sets associated with cancer. Mutual exclusivity arises because of the complementarity, at the functional level, of alterations in genes which are part of a group (e.g., a pathway) performing a given function. The availability of quantitative target profiles, from genetic perturbations or from clinical phenotypes, provides additional information that can be leveraged to improve the identification of cancer-related gene sets by discovering groups with complementary functional associations with such targets. In this work we study the problem of finding groups of mutually exclusive alterations associated with a quantitative (functional) target. We propose a combinatorial formulation for the problem, and prove that the associated computational problem is computationally hard. We design two algorithms to solve the problem and implement them in our tool UNCOVER. We provide analytic evidence of the effectiveness of UNCOVER in finding high-quality solutions and show experimentally that UNCOVER finds sets of alterations significantly associated with functional targets in a variety of scenarios. In addition, our algorithms are much faster than the state-of-the-art, allowing the analysis of large datasets of thousands of target profiles from cancer cell lines. We show that on one such dataset from project Achilles our methods identify several significant gene sets with complementary functional associations with targets.
[ 0, 0, 0, 0, 1, 0 ]
Title: Laplace operators on holomorphic Lie algebroids, Abstract: The paper introduces Laplace-type operators for functions defined on the tangent space of a Finsler Lie algebroid, using a volume form on the prolongation of the algebroid. It also presents the construction of a horizontal Laplace operator for forms defined on the prolongation of the algebroid. All of the Laplace operators considered in the paper are also locally expressed using the Chern-Finsler connection of the algebroid.
[ 0, 0, 1, 0, 0, 0 ]
Title: Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction, Abstract: Despite the recent popularity of deep generative state space models, few comparisons have been made between network architectures and the inference steps of the Bayesian filtering framework -- with most models simultaneously approximating both state transition and update steps with a single recurrent neural network (RNN). In this paper, we introduce the Recurrent Neural Filter (RNF), a novel recurrent variational autoencoder architecture that learns distinct representations for each Bayesian filtering step, captured by a series of encoders and decoders. Testing this on three real-world time series datasets, we demonstrate that decoupling representations not only improves the accuracy of one-step-ahead forecasts while providing realistic uncertainty estimates, but also facilitates multistep prediction through the separation of encoder stages.
[ 1, 0, 0, 1, 0, 0 ]
Title: Counterexample-Guided k-Induction Verification for Fast Bug Detection, Abstract: Recently, the k-induction algorithm has proven to be a successful approach for both finding bugs and proving correctness. However, since the algorithm is an incremental approach, it might waste resources trying to prove incorrect programs. In this paper, we propose to extend the k-induction algorithm in order to reduce the number of steps required to find a property violation. We convert the algorithm into a meet-in-the-middle bidirectional search algorithm, using the counterexample produced from over-approximating the program. The preliminary results show that the number of steps required to find a property violation is reduced to $\lfloor\frac{k}{2} + 1\rfloor$ and the verification time for programs with large state spaces is reduced considerably.
[ 1, 0, 0, 0, 0, 0 ]
Title: Andreev Reflection without Fermi surface alignment in High T$_{c}$-Topological heterostructures, Abstract: We address the controversy over the proximity effect between topological materials and high T$_{c}$ superconductors. Junctions are produced between Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ and materials with different Fermi surfaces (Bi$_{2}$Te$_{3}$ \& graphite). Both cases reveal tunneling spectra consistent with Andreev reflection. This is confirmed by a magnetic field that shifts features via the Doppler effect. This is modeled with a single parameter that accounts for tunneling into a screening supercurrent. Thus the tunneling involves Cooper pairs crossing the heterostructure, showing that the Fermi surface mismatch does not hinder the ability to form transparent interfaces, which is accounted for by the extended Brillouin zone and different lattice symmetries.
[ 0, 1, 0, 0, 0, 0 ]
Title: Structural Data Recognition with Graph Model Boosting, Abstract: This paper presents a novel method for structural data recognition using a large number of graph models. In general, prevalent methods for structural data recognition have two shortcomings: 1) Only a single model is used to capture structural variation. 2) Naive recognition methods are used, such as the nearest neighbor method. In this paper, we propose strengthening the recognition performance of these models as well as their ability to capture structural variation. The proposed method constructs a large number of graph models and trains decision trees using the models. This paper makes two main contributions. The first is a novel graph model that can quickly perform calculations, which allows us to construct several models in a feasible amount of time. The second contribution is a novel approach to structural data recognition: graph model boosting. Comprehensive structural variations can be captured with a large number of graph models constructed in a boosting framework, and a sophisticated classifier can be formed by aggregating the decision trees. Consequently, we can carry out structural data recognition with powerful recognition capability in the face of comprehensive structural variation. The experiments show that the proposed method achieves impressive results and outperforms existing methods on datasets from the IAM graph database repository.
[ 1, 0, 0, 1, 0, 0 ]
Title: Polynomiality for the Poisson centre of truncated maximal parabolic subalgebras, Abstract: We show that the Poisson centre of truncated maximal parabolic subalgebras of a simple Lie algebra of type B, D and E_6 is a polynomial algebra. In roughly half of the cases the polynomiality of the Poisson centre was already known by a completely different method. For the rest of the cases, our approach is to construct an algebraic slice in the sense of Kostant given by an adapted pair and the computation of an improved upper bound for the Poisson centre.
[ 0, 0, 1, 0, 0, 0 ]
Title: Row-Centric Lossless Compression of Markov Images, Abstract: Motivated by the question of whether the recently introduced Reduced Cutset Coding (RCC) offers rate-complexity performance benefits over conventional context-based conditional coding for sources with two-dimensional Markov structure, this paper compares several row-centric coding strategies that vary in the amount of conditioning as well as whether a model or an empirical table is used in the encoding of blocks of rows. The conclusion is that, at least for sources exhibiting low-order correlations, 1-sided model-based conditional coding is superior to the method of RCC for a given constraint on complexity, and conventional context-based conditional coding is nearly as good as the 1-sided model-based coding.
[ 1, 0, 0, 0, 0, 0 ]
Title: Planetesimal formation by the streaming instability in a photoevaporating disk, Abstract: Recent years have seen growing interest in the streaming instability as a candidate mechanism to produce planetesimals. However, these investigations have been limited to small-scale simulations. We now present the results of a global protoplanetary disk evolution model that incorporates planetesimal formation by the streaming instability, along with viscous accretion, photoevaporation by EUV, FUV, and X-ray photons, dust evolution, the water ice line, and stratified turbulence. Our simulations produce massive (60-130 $M_\oplus$) planetesimal belts beyond 100 au and up to $\sim 20 M_\oplus$ of planetesimals in the middle regions (3-100 au). Our most comprehensive model forms 8 $M_\oplus$ of planetesimals inside 3 au, where they can give rise to terrestrial planets. The planetesimal mass formed in the inner disk depends critically on the timing of the formation of an inner cavity in the disk by high-energy photons. Our results show that the combination of photoevaporation and the streaming instability is efficient at converting the solid component of protoplanetary disks into planetesimals. Our model, however, does not form enough early planetesimals in the inner and middle regions of the disk to give rise to giant planets and super-Earths with gaseous envelopes. Additional processes such as particle pileups and mass loss driven by MHD winds may be needed to drive the formation of early planetesimal generations in the planet forming regions of protoplanetary disks.
[ 0, 1, 0, 0, 0, 0 ]
Title: Hierarchical Bloom Filter Trees for Approximate Matching, Abstract: Bytewise approximate matching algorithms have in recent years shown significant promise in detecting files that are similar at the byte level. This is very useful for digital forensic investigators, who are regularly faced with the problem of searching through a seized device for pertinent data. A common scenario is where an investigator is in possession of a collection of "known-illegal" files (e.g. a collection of child abuse material) and wishes to find whether copies of these are stored on the seized device. Approximate matching addresses shortcomings in traditional hashing, which can only find identical files, by also being able to deal with cases of merged files, embedded files, partial files, or if a file has been changed in any way. Most approximate matching algorithms work by comparing pairs of files, which is not a scalable approach when faced with large corpora. This paper demonstrates the effectiveness of using a "Hierarchical Bloom Filter Tree" (HBFT) data structure to reduce the running time of collection-against-collection matching, with a specific focus on the MRSH-v2 algorithm. Three experiments are discussed, which explore the effects of different configurations of HBFTs. The proposed approach dramatically reduces the number of pairwise comparisons required, and demonstrates substantial speed gains, while maintaining effectiveness.
[ 1, 0, 0, 0, 0, 0 ]
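Illustrative sketch for the preceding entry: a minimal Bloom filter in Python using double hashing. This is a generic, standalone Bloom filter, not the HBFT structure or the MRSH-v2 algorithm described in the abstract; the sizes and hash choices are assumptions made for the example.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter using double hashing (illustrative only)."""
    def __init__(self, m_bits=8192, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, item: bytes):
        # Derive k bit positions from two base hashes (Kirsch-Mitzenmacher trick).
        h1 = int.from_bytes(hashlib.sha256(item).digest()[:8], "big")
        h2 = int.from_bytes(hashlib.md5(item).digest()[:8], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[idx // 8] >> (idx % 8) & 1 for idx in self._indexes(item))

# Usage idea: insert byte chunks of "known" files, then probe chunks of a new file.
bf = BloomFilter()
bf.add(b"chunk-from-known-file")
print(b"chunk-from-known-file" in bf, b"unseen-chunk" in bf)  # True False (membership is probabilistic)
```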
Title: Pre-freezing transition in Boltzmann-Gibbs measures associated with log-correlated fields, Abstract: We consider Boltzmann-Gibbs measures associated with log-correlated Gaussian fields as potentials and study their multifractal properties which exhibit phase transitions. In particular, the pre-freezing and freezing phenomena of the annealed exponent, predicted by Fyodorov using a modified replica-symmetry-breaking ansatz, are generalised to arbitrary dimension and verified using results from Gaussian multiplicative chaos theory.
[ 0, 1, 0, 0, 0, 0 ]
Title: Learning Combinatorial Optimization Algorithms over Graphs, Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
[ 1, 0, 0, 1, 0, 0 ]
Title: Critical well-posedness and scattering results for fractional Hartree-type equations, Abstract: Scattering for the mass-critical fractional Schrödinger equation with a cubic Hartree-type nonlinearity for initial data in a small ball in the scale-invariant space of three-dimensional radial and square-integrable initial data is established. For this, we prove a bilinear estimate for free solutions and extend it to perturbations of bounded quadratic variation. This result is shown to be sharp by proving the unboundedness of a third order derivative of the flow map in the super-critical range.
[ 0, 0, 1, 0, 0, 0 ]
Title: Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning, Abstract: We consider the networked multi-agent reinforcement learning (MARL) problem in a fully decentralized setting, where agents learn to coordinate to achieve joint success. This problem is widely encountered in many areas including traffic control, distributed control, and smart grids. We assume that the reward function for each agent can be different and observed only locally by the agent itself. Furthermore, each agent is located at a node of a communication network and can exchange information only with its neighbors. Using softmax temporal consistency and a decentralized optimization method, we obtain a principled and data-efficient iterative algorithm. In the first step of each iteration, an agent computes its local policy and value gradients and then updates only policy parameters. In the second step, the agent propagates to its neighbors the messages based on its value function and then updates its own value function. Hence we name the algorithm value propagation. We prove a non-asymptotic convergence rate of 1/T with nonlinear function approximation. To the best of our knowledge, it is the first MARL algorithm with a convergence guarantee in the control, off-policy and non-linear function approximation setting. We empirically demonstrate the effectiveness of our approach in experiments.
[ 1, 0, 0, 1, 0, 0 ]
Title: Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible, Abstract: Non-interactive Local Differential Privacy (LDP) requires data analysts to collect data from users through a noisy channel at once. In this paper, we extend the frontiers of Non-interactive LDP learning and estimation from several aspects. For learning with smooth generalized linear losses, we propose an approximate stochastic gradient oracle estimated from the non-interactive LDP channel, using Chebyshev expansion. Combined with inexact gradient methods, we obtain an efficient algorithm with a quasi-polynomial sample complexity bound. For the high-dimensional world, we discover that under an $\ell_2$-norm assumption on data points, high-dimensional sparse linear regression and mean estimation can be achieved with logarithmic dependence on dimension, using random projection and approximate recovery. We also extend our methods to Kernel Ridge Regression. Our work is the first one that makes learning and estimation possible for a broad range of learning tasks under the non-interactive LDP model.
[ 1, 0, 0, 0, 0, 0 ]
Title: Efficiency versus instability in plasma accelerators, Abstract: Plasma wake-field acceleration is one of the main technologies being developed for future high-energy colliders. Potentially, it can create a cost-effective path to the highest possible energies for e+e- or {\gamma}-{\gamma} colliders and have a profound effect on developments in high-energy physics. Acceleration in a blowout regime, where all plasma electrons are swept away from the axis, is presently considered to be the primary choice for beam acceleration. In this paper, we derive a universal efficiency-instability relation between the power efficiency and the key instability parameter of the trailing bunch for beam acceleration in the blowout regime. We also show that the suppression of instability in the trailing bunch can be achieved through BNS damping by the introduction of a beam energy variation along the bunch. Unfortunately, in the high efficiency regime, the required energy variation is quite high, and is not presently compatible with collider-quality beams. We would like to stress that the development of the instability imposes a fundamental limitation on the acceleration efficiency, and it is unclear how it could be overcome for high-luminosity linear colliders. With minor modifications, the considered limitation on the power efficiency is applicable to other types of acceleration.
[ 0, 1, 0, 0, 0, 0 ]
Title: Minimal Exploration in Structured Stochastic Bandits, Abstract: This paper introduces and addresses a wide class of stochastic bandit problems where the function mapping the arm to the corresponding reward exhibits some known structural properties. Most existing structures (e.g. linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling, and rather aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments in the case of the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.
[ 1, 0, 0, 1, 0, 0 ]
Title: On Optimistic versus Randomized Exploration in Reinforcement Learning, Abstract: We discuss the relative merits of optimistic and randomized approaches to exploration in reinforcement learning. Optimistic approaches presented in the literature apply an optimistic boost to the value estimate at each state-action pair and select actions that are greedy with respect to the resulting optimistic value function. Randomized approaches sample from among statistically plausible value functions and select actions that are greedy with respect to the random sample. Prior computational experience suggests that randomized approaches can lead to far more statistically efficient learning. We present two simple analytic examples that elucidate why this is the case. In principle, there should be optimistic approaches that fare well relative to randomized approaches, but that would require intractable computation. Optimistic approaches that have been proposed in the literature sacrifice statistical efficiency for the sake of computational efficiency. Randomized approaches, on the other hand, may enable simultaneous statistical and computational efficiency.
[ 1, 0, 0, 1, 0, 0 ]
Title: Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations, Abstract: Size, weight, and power constrained platforms impose constraints on computational resources that introduce unique challenges in implementing localization algorithms. We present a framework to perform fast localization on such platforms enabled by the compressive capabilities of Gaussian Mixture Model representations of point cloud data. Given raw structural data from a depth sensor and pitch and roll estimates from an on-board attitude reference system, a multi-hypothesis particle filter localizes the vehicle by exploiting the likelihood of the data originating from the mixture model. We demonstrate analysis of this likelihood in the vicinity of the ground truth pose and detail its utilization in a particle filter-based vehicle localization strategy, and later present results of real-time implementations on a desktop system and an off-the-shelf embedded platform that outperform localization results from running a state-of-the-art algorithm on the same environment.
[ 1, 0, 0, 0, 0, 0 ]
Title: Generalized two-field $α$-attractor models from geometrically finite hyperbolic surfaces, Abstract: We consider four-dimensional gravity coupled to a non-linear sigma model whose scalar manifold is a non-compact geometrically finite surface $\Sigma$ endowed with a Riemannian metric of constant negative curvature. When the space-time is an FLRW universe, such theories produce a very wide generalization of two-field $\alpha$-attractor models, being parameterized by a positive constant $\alpha$, by the choice of a finitely-generated surface group $\Gamma\subset \mathrm{PSL}(2,\mathbb{R})$ (which is isomorphic with the fundamental group of $\Sigma$) and by the choice of a scalar potential defined on $\Sigma$. The traditional two-field $\alpha$-attractor models arise when $\Gamma$ is the trivial group, in which case $\Sigma$ is the Poincaré disk. We give a general prescription for the study of such models through uniformization in the so-called "non-elementary" case and discuss some of their qualitative features in the gradient flow approximation, which we relate to Morse theory. We also discuss some aspects of the SRST approximation in these models, showing that it is generally not well-suited for studying dynamics near cusp ends. When $\Sigma$ is non-compact and the scalar potential is "well-behaved" at the ends, we show that, in the {\em naive} local one-field truncation, our generalized models have the same universal behavior as ordinary one-field $\alpha$-attractors if inflation happens near any of the ends of $\Sigma$ where the extended potential has a local maximum, for trajectories which are well approximated by non-canonically parameterized geodesics near the ends. We also discuss spiral trajectories near the ends.
[ 0, 1, 1, 0, 0, 0 ]
Title: $\overline{M}_{1,n}$ is usually not uniruled in characteristic $p$, Abstract: Using etale cohomology, we define a birational invariant for varieties in characteristic $p$ that serves as an obstruction to uniruledness - a variant on an obstruction to unirationality due to Ekedahl. We apply this to $\overline{M}_{1,n}$ and show that $\overline{M}_{1,n}$ is not uniruled in characteristic $p$ as long as $n \geq p \geq 11$. To do this, we use Deligne's description of the etale cohomology of $\overline{M}_{1,n}$ and apply the theory of congruences between modular forms.
[ 0, 0, 1, 0, 0, 0 ]
Title: Continuum Limit of Posteriors in Graph Bayesian Inverse Problems, Abstract: We consider the problem of recovering a function input of a differential equation formulated on an unknown domain $M$. We assume to have access to a discrete domain $M_n=\{x_1, \dots, x_n\} \subset M$, and to noisy measurements of the output solution at $p\le n$ of those points. We introduce a graph-based Bayesian inverse problem, and show that the graph-posterior measures over functions in $M_n$ converge, in the large $n$ limit, to a posterior over functions in $M$ that solves a Bayesian inverse problem with known domain. The proofs rely on the variational formulation of the Bayesian update, and on a new topology for the study of convergence of measures over functions on point clouds to a measure over functions on the continuum. Our framework, techniques, and results may serve to lay the foundations of robust uncertainty quantification of graph-based tasks in machine learning. The ideas are presented in the concrete setting of recovering the initial condition of the heat equation on an unknown manifold.
[ 0, 0, 1, 1, 0, 0 ]
Title: Automatic Conflict Detection in Police Body-Worn Audio, Abstract: Automatic conflict detection has grown in relevance with the advent of body-worn technology, but existing metrics such as turn-taking and overlap are poor indicators of conflict in police-public interactions. Moreover, standard techniques to compute them fall short when applied to such diversified and noisy contexts. We develop a pipeline catered to this task combining adaptive noise removal, non-speech filtering and new measures of conflict based on the repetition and intensity of phrases in speech. We demonstrate the effectiveness of our approach on body-worn audio data collected by the Los Angeles Police Department.
[ 1, 0, 0, 1, 0, 0 ]
Title: LAMOST telescope reveals that Neptunian cousins of hot Jupiters are mostly single offspring of stars that are rich in heavy elements, Abstract: We discover a population of short-period, Neptune-size planets sharing key similarities with hot Jupiters: both populations are preferentially hosted by metal-rich stars, and both are preferentially found in Kepler systems with single transiting planets. We use accurate LAMOST DR4 stellar parameters for main-sequence stars to study the distributions of short-period (1 d < P < 10 d) Kepler planets as a function of host star metallicity. The radius distribution of planets around metal-rich stars is more "puffed up" as compared to that around metal-poor hosts. In two period-radius regimes, planets preferentially reside around metal-rich stars, while there are hardly any planets around metal-poor stars. One is the well-known hot Jupiters, and the other is a population of Neptune-size planets (2 R_Earth <~ R_p <~ 6 R_Earth), dubbed "Hoptunes". Also like hot Jupiters, Hoptunes occur more frequently in systems with single transiting planets, though the fraction of Hoptunes occurring in multiples is larger than that of hot Jupiters. About 1% of solar-type stars host "Hoptunes", and the frequencies of Hoptunes and hot Jupiters increase with consistent trends as a function of [Fe/H]. In the planet radius distribution, hot Jupiters and Hoptunes are separated by a "valley" at approximately Saturn size (in the range of 6 R_Earth <~ R_p <~ 10 R_Earth), and this "hot-Saturn valley" represents approximately an order-of-magnitude decrease in planet frequency compared to hot Jupiters and Hoptunes. The empirical "kinship" between Hoptunes and hot Jupiters suggests likely common processes (migration and/or formation) responsible for their existence.
[ 0, 1, 0, 0, 0, 0 ]
Title: Model enumeration in propositional circumscription via unsatisfiable core analysis, Abstract: Many practical problems are characterized by a preference relation over admissible solutions, where preferred solutions are minimal in some sense. For example, a preferred diagnosis usually comprises a minimal set of reasons that is sufficient to cause the observed anomaly. Alternatively, a minimal correction subset comprises a minimal set of reasons whose deletion is sufficient to eliminate the observed anomaly. Circumscription formalizes such preference relations by associating propositional theories with minimal models. The resulting enumeration problem is addressed here by means of a new algorithm taking advantage of unsatisfiable core analysis. Empirical evidence of the efficiency of the algorithm is given by comparing the performance of the resulting solver, CIRCUMSCRIPTINO, with HCLASP, CAMUS MCS, LBX and MCSLS on the enumeration of minimal models for problems originating from practical applications. This paper is under consideration for acceptance in TPLP.
[ 1, 0, 0, 0, 0, 0 ]
Title: Variations on a Visserian Theme, Abstract: A first order theory T is said to be "tight" if for any two deductively closed extensions U and V of T (both of which are formulated in the language of T), U and V are bi-interpretable iff U = V. By a theorem of Visser, PA (Peano Arithmetic) is tight. Here we show that Z_2 (second order arithmetic), ZF (Zermelo-Fraenkel set theory), and KM (Kelley-Morse theory of classes) are also tight theories.
[ 0, 0, 1, 0, 0, 0 ]
Title: Improved Query Reformulation for Concept Location using CodeRank and Document Structures, Abstract: During software maintenance, developers usually deal with a significant number of software change requests. As a part of this, they often formulate an initial query from the request texts, and then attempt to map the concepts discussed in the request to relevant source code locations in the software system (a.k.a., concept location). Unfortunately, studies suggest that they often perform poorly in choosing the right search terms for a change task. In this paper, we propose a novel technique --ACER-- that takes an initial query, identifies appropriate search terms from the source code using a novel term weight --CodeRank, and then suggests an effective reformulation of the initial query by exploiting the source document structures, query quality analysis and machine learning. Experiments with 1,675 baseline queries from eight subject systems report that our technique can improve 71% of the baseline queries, which is highly promising. Comparison with five closely related existing techniques in query reformulation not only validates our empirical findings but also demonstrates the superiority of our technique.
[ 1, 0, 0, 0, 0, 0 ]
Title: High-performance parallel computing in the classroom using the public goods game as an example, Abstract: The use of computers in statistical physics is common because the sheer number of equations that describe the behavior of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide documented source code for easy reproduction of the presented results and for further development of Monte Carlo simulations of similar systems.
[ 0, 1, 0, 0, 0, 0 ]
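A minimal, CPU-only sketch of the kind of Monte Carlo simulation the preceding entry describes: the spatial public goods game on a square lattice with Fermi-rule imitation, vectorized with NumPy rather than CUDA. The synchronous update, lattice size, synergy factor r and noise K below are illustrative assumptions, not the paper's setup (which typically uses asynchronous updates and GPU kernels).

```python
import numpy as np

L, r, K, steps = 64, 3.8, 0.5, 200              # lattice size, synergy factor, selection noise, MC sweeps
rng = np.random.default_rng(0)
S = rng.integers(0, 2, size=(L, L))              # 1 = cooperator, 0 = defector

def nsum(A):
    # Sum over the four von Neumann neighbors with periodic boundaries.
    return np.roll(A, 1, 0) + np.roll(A, -1, 0) + np.roll(A, 1, 1) + np.roll(A, -1, 1)

for t in range(steps):
    group_share = r * (S + nsum(S)) / 5.0          # each member of the group centered at a site gets r * n_c / G
    P = group_share + nsum(group_share) - 5.0 * S  # income from 5 overlapping groups, minus 5 contributions if cooperating
    # Synchronous imitation: every site compares itself with one random neighbor (Fermi rule).
    pick = rng.integers(0, 4, size=(L, L))
    rolls_S = [np.roll(S, 1, 0), np.roll(S, -1, 0), np.roll(S, 1, 1), np.roll(S, -1, 1)]
    rolls_P = [np.roll(P, 1, 0), np.roll(P, -1, 0), np.roll(P, 1, 1), np.roll(P, -1, 1)]
    nS, nP = np.choose(pick, rolls_S), np.choose(pick, rolls_P)
    adopt = rng.random((L, L)) < 1.0 / (1.0 + np.exp(np.clip((P - nP) / K, -50, 50)))
    S = np.where(adopt, nS, S)
    if t % 50 == 0:
        print(f"sweep {t}: cooperator fraction = {S.mean():.3f}")
```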
Title: Correlation decay in fermionic lattice systems with power-law interactions at non-zero temperature, Abstract: We study correlations in fermionic lattice systems with long-range interactions in thermal equilibrium. We prove a bound on the correlation decay between anti-commuting operators and generalize a long-range Lieb-Robinson type bound. Our results show that in these systems of spatial dimension $D$ with, not necessarily translation invariant, two-site interactions decaying algebraically with the distance with an exponent $\alpha \geq 2\,D$, correlations between such operators decay at least algebraically with an exponent arbitrarily close to $\alpha$ at any non-zero temperature. Our bound is asymptotically tight, which we demonstrate by a high temperature expansion and by numerically analyzing density-density correlations in the 1D quadratic (free, exactly solvable) Kitaev chain with long-range pairing.
[ 0, 1, 0, 0, 0, 0 ]
Title: Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization, Abstract: Stochastic optimization naturally arises in machine learning. Efficient algorithms with provable guarantees, however, are still largely missing when the objective function is nonconvex and the data points are dependent. This paper studies this fundamental challenge through a streaming PCA problem for stationary time series data. Specifically, our goal is to estimate the principal component of time series data with respect to the covariance matrix of the stationary distribution. Computationally, we propose a variant of Oja's algorithm combined with downsampling to control the bias of the stochastic gradient caused by the data dependency. Theoretically, we quantify the uncertainty of our proposed stochastic algorithm based on diffusion approximations. This allows us to prove the asymptotic rate of convergence and further implies near optimal asymptotic sample complexity. Numerical experiments are provided to support our analysis.
[ 0, 0, 0, 1, 0, 0 ]
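A minimal sketch of the core idea in the preceding entry, under assumed details: streaming Oja updates for the top principal component, with downsampling (keeping every s-th sample) to weaken temporal dependence. The synthetic AR(1) stream, step size and gap below are illustrative choices, not the paper's algorithmic settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, s, eta = 10, 50000, 5, 0.01                 # dimension, stream length, downsampling gap, step size

# Synthetic stationary AR(1) stream (dependent data): x_t = 0.8 * x_{t-1} + noise.
noise_std = np.sqrt(np.linspace(1.0, 3.0, d))     # stationary covariance is diagonal; top direction = last coordinate
x = np.zeros(d)
w = rng.normal(size=d)
w /= np.linalg.norm(w)                            # random unit-norm initialization

for t in range(T):
    x = 0.8 * x + noise_std * rng.normal(size=d)
    if t % s != 0:
        continue                                  # downsampling: skip samples to reduce temporal dependence
    w = w + eta * x * (x @ w)                     # Oja update with the rank-one sample second moment
    w /= np.linalg.norm(w)                        # project back onto the unit sphere

print("estimated top direction (abs):", np.round(np.abs(w), 2))
# The largest entry should sit in the last coordinate, which has the largest stationary variance.
```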
Title: Efficient tracking of a growing number of experts, Abstract: We consider a variation on the problem of prediction with expert advice, where new forecasters that were unknown until then may appear at each round. As often in prediction with expert advice, designing an algorithm that achieves near-optimal regret guarantees is straightforward, using aggregation of experts. However, when the comparison class is sufficiently rich, for instance when the best expert and the set of experts itself changes over time, such strategies naively require maintaining a prohibitive number of weights (typically exponential with the time horizon). By contrast, designing strategies that both achieve a near-optimal regret and maintain a reasonable number of weights is highly non-trivial. We consider three increasingly challenging objectives (simple regret, shifting regret and sparse shifting regret) that extend existing notions defined for a fixed expert ensemble; in each case, we design strategies that achieve tight regret bounds, adaptive to the parameters of the comparison class, while being computationally inexpensive. Moreover, our algorithms are anytime, agnostic to the number of incoming experts and completely parameter-free. Such remarkable results are made possible thanks to two simple but highly effective recipes: first, the "abstention trick" that comes from the specialist framework and enables handling of the least challenging notions of regret, but is limited when addressing more sophisticated objectives; second, the "muting trick" that we introduce to give more flexibility. We show how to combine these two tricks in order to handle the most challenging class of comparison strategies.
[ 1, 0, 0, 1, 0, 0 ]
Title: Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data, Abstract: Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios.
[ 1, 0, 1, 0, 0, 0 ]
Title: A stencil scaling approach for accelerating matrix-free finite element implementations, Abstract: We present a novel approach to fast on-the-fly low order finite element assembly for scalar elliptic partial differential equations of Darcy type with variable coefficients optimized for matrix-free implementations. Our approach introduces a new operator that is obtained by appropriately scaling the reference stiffness matrix from the constant coefficient case. Assuming sufficient regularity, an a priori analysis shows that solutions obtained by this approach are unique and have asymptotically optimal order convergence in the $H^1$- and the $L^2$-norm on hierarchical hybrid grids. For the pre-asymptotic regime, we present a local modification that guarantees uniform ellipticity of the operator. Cost considerations show that our novel approach requires roughly one third of the floating-point operations compared to a classical finite element assembly scheme employing nodal integration. Our theoretical considerations are illustrated by numerical tests that confirm the expectations with respect to accuracy and run-time. A large scale application with more than a hundred billion ($1.6\cdot10^{11}$) degrees of freedom executed on 14,310 compute cores demonstrates the efficiency of the new scaling approach.
[ 1, 0, 0, 0, 0, 0 ]
Title: Asymptotic behaviour methods for the Heat Equation. Convergence to the Gaussian, Abstract: In this expository work we discuss the asymptotic behaviour of the solutions of the classical heat equation posed in the whole Euclidean space. After an introductory review of the main facts on the existence and properties of solutions, we proceed with the proofs of convergence to the Gaussian fundamental solution, a result that holds for all integrable solutions, and represents in the PDE setting the Central Limit Theorem of probability. We present several methods of proof: first, the scaling method. Then several versions of the representation method. This is followed by the functional analysis approach that leads to the famous related equations, Fokker-Planck and Ornstein-Uhlenbeck. The analysis of this connection is also given in rather complete form here. Finally, we present the Boltzmann entropy method, coming from kinetic equations. The different methods are interesting because of the possible extension to prove the asymptotic behaviour or stabilization analysis for more general equations, linear or nonlinear. It all depends a lot on the particular features, and only one or some of the methods work in each case. Other settings of the Heat Equation are briefly discussed in Section 9 and a longer mention of results for different equations is done in Section 10.
[ 0, 0, 1, 0, 0, 0 ]
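For the preceding entry, the convergence result being summarized can be written explicitly; a standard formulation (in dimension $n$, for integrable initial data $u_0$ with mass $M = \int u_0$) is:

```latex
% Heat equation on R^n and its Gaussian fundamental solution
u_t = \Delta u \quad \text{in } \mathbb{R}^n \times (0,\infty), \qquad
G(x,t) = (4\pi t)^{-n/2} \, e^{-|x|^2/(4t)} .
% Convergence to the Gaussian (the PDE analogue of the Central Limit Theorem):
\lim_{t\to\infty} \bigl\| u(\cdot,t) - M\,G(\cdot,t) \bigr\|_{L^1(\mathbb{R}^n)} = 0,
\qquad
\lim_{t\to\infty} t^{n/2} \, \bigl\| u(\cdot,t) - M\,G(\cdot,t) \bigr\|_{L^\infty(\mathbb{R}^n)} = 0 .
```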
Title: Magnetization dynamics of weakly interacting sub-100 nm square artificial spin ices, Abstract: Artificial Spin Ice (ASI), consisting of a two dimensional array of nanoscale magnetic elements, provides a fascinating opportunity to observe the physics of out of equilibrium systems. Initial studies concentrated on the static, frozen state, whilst more recent studies have accessed the out-of-equilibrium dynamic, fluctuating state. This opens up exciting possibilities such as the observation of systems exploring their energy landscape through monopole quasiparticle creation, potentially leading to ASI magnetricity, and to directly observe unconventional phase transitions. In this work we have measured and analysed the magnetic relaxation of thermally active ASI systems by means of SQUID magnetometry. We have investigated the effect of the interaction strength on the magnetization dynamics at different temperatures in the range where the nanomagnets are thermally active and have observed that they follow an Arrhenius-type Néel-Brown behaviour. An unexpected negative correlation of the average blocking temperature with the interaction strength is also observed, which is supported by Monte Carlo simulations. The magnetization relaxation measurements show faster relaxation for more strongly coupled nanoelements with similar dimensions. The analysis of the stretching exponents obtained from the measurements suggest 1-D chain-like magnetization dynamics. This indicates that the nature of the interactions between nanoelements lowers the dimensionality of the ASI from 2-D to 1-D. Finally, we present a way to quantify the effective interaction energy of a square ASI system, and compare it to the interaction energy calculated from a simple dipole model and also to the magnetostatic energy computed with micromagnetic simulations.
[ 0, 1, 0, 0, 0, 0 ]
Title: Structured Connectivity Augmentation, Abstract: We initiate the algorithmic study of the following "structured augmentation" question: is it possible to increase the connectivity of a given graph G by superposing it with another given graph H? More precisely, graph F is the superposition of G and H with respect to injective mapping \phi: V(H)->V(G) if every edge uv of F is either an edge of G, or \phi^{-1}(u)\phi^{-1}(v) is an edge of H. We consider the following optimization problem. Given graphs G,H, and a weight function \omega assigning non-negative weights to pairs of vertices of V(G), the task is to find \phi of minimum weight \omega(\phi)=\sum_{xy\in E(H)}\omega(\phi(x)\phi(y)) such that the edge connectivity of the superposition F of G and H with respect to \phi is higher than the edge connectivity of G. Our main result is the following "dichotomy" complexity classification. We say that a class of graphs C has bounded vertex-cover number, if there is a constant t depending on C only such that the vertex-cover number of every graph from C does not exceed t. We show that for every class of graphs C with bounded vertex-cover number, the problems of superposing into a connected graph F and to 2-edge connected graph F, are solvable in polynomial time when H\in C. On the other hand, for any hereditary class C with unbounded vertex-cover number, both problems are NP-hard when H\in C. For the unweighted variants of structured augmentation problems, i.e. the problems where the task is to identify whether there is a superposition of graphs of required connectivity, we provide necessary and sufficient combinatorial conditions on the existence of such superpositions. These conditions imply polynomial time algorithms solving the unweighted variants of the problems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Transition probability of Brownian motion in the octant and its application to default modeling, Abstract: We derive a semi-analytic formula for the transition probability of three-dimensional Brownian motion in the positive octant with absorption at the boundaries. Separation of variables in spherical coordinates leads to an eigenvalue problem for the resulting boundary value problem in the two angular components. The main theoretical result is a solution to the original problem expressed as an expansion into special functions and an eigenvalue which has to be chosen to allow a matching of the boundary condition. We discuss and test several computational methods to solve a finite-dimensional approximation to this nonlinear eigenvalue problem. Finally, we apply our results to the computation of default probabilities and credit valuation adjustments in a structural credit model with mutual liabilities.
[ 0, 0, 0, 0, 0, 1 ]
Title: Block-Sparse Recurrent Neural Networks, Abstract: Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and high-end server processors. Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms. In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization to create blocks of weights with zeros. Using these techniques, we demonstrate that we can create block-sparse RNNs with sparsity ranging from 80% to 90% with small loss in accuracy. This allows us to reduce the model size by roughly 10x. Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count. Our technique works with a variety of block sizes up to 32x32. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.
[ 1, 0, 0, 1, 0, 0 ]
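A sketch of one of the two approaches described in the preceding entry, block pruning, reduced to its simplest form: zero out the weight tiles with the smallest norms in a dense matrix. NumPy only; the block size, sparsity level and matrix shape are illustrative assumptions, and the RNN training context of the paper is not reproduced here.

```python
import numpy as np

def block_prune(W, block=(32, 32), sparsity=0.9):
    """Zero out the fraction `sparsity` of (block x block) tiles with the smallest L2 norm."""
    rows, cols = W.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0, "matrix must tile evenly for this sketch"
    # View W as a grid of tiles and compute one norm per tile.
    tiles = W.reshape(rows // br, br, cols // bc, bc)
    norms = np.sqrt((tiles ** 2).sum(axis=(1, 3)))            # shape: (rows//br, cols//bc)
    cutoff = np.quantile(norms, sparsity)                     # keep only the strongest tiles
    mask = (norms > cutoff).astype(W.dtype)                   # 0/1 per tile
    pruned = tiles * mask[:, None, :, None]                   # broadcast the mask over each tile
    return pruned.reshape(rows, cols), mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))
W_pruned, mask = block_prune(W, block=(32, 32), sparsity=0.9)
print("block sparsity:", 1.0 - mask.mean(), "element sparsity:", (W_pruned == 0).mean())
```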
Title: Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe, Abstract: We consider the problem of bandit optimization, inspired by stochastic optimization and online learning problems with bandit feedback. In this problem, the objective is to minimize a global loss function of all the actions, not necessarily a cumulative loss. This framework allows us to study a very general class of problems, with applications in statistics, machine learning, and other fields. To solve this problem, we analyze the Upper-Confidence Frank-Wolfe algorithm, inspired by techniques for bandits and convex optimization. We give theoretical guarantees for the performance of this algorithm over various classes of functions, and discuss the optimality of these results.
[ 0, 0, 1, 1, 0, 0 ]
Title: Ages and structural and dynamical parameters of two globular clusters in the M81 group, Abstract: GC-1 and GC-2 are two globular clusters (GCs) in the remote halo of M81 and M82 in the M81 group discovered by Jang et al. using the {\it Hubble Space Telescope} ({\it HST}) images. These two GCs were observed as part of the Beijing--Arizona--Taiwan--Connecticut (BATC) Multicolor Sky Survey, using 14 intermediate-band filters covering a wavelength range of 4000--10000 \AA. We accurately determine these two clusters' ages and masses by comparing their spectral energy distributions (from 2267 to 20000~{\AA}, comprising photometric data in the near-ultraviolet of the {\it Galaxy Evolution Explorer}, 14 BATC intermediate-band, and Two Micron All Sky Survey near-infrared $JHK_{\rm s}$ filters) with theoretical stellar population-synthesis models, resulting in ages of $15.50\pm3.20$ Gyr for GC-1 and $15.10\pm2.70$ Gyr for GC-2. The masses of GC-1 and GC-2 obtained here are $1.77-2.04\times 10^6$ and $5.20-7.11\times 10^6 \rm~M_\odot$, respectively. In addition, the deep observations with the Advanced Camera for Surveys and Wide Field Camera 3 on the {\it HST} are used to provide the surface brightness profiles of GC-1 and GC-2. The structural and dynamical parameters are derived from fitting the profiles to three different models; in particular, the internal velocity dispersions of GC-1 and GC-2 are derived, which can be compared with ones obtained based on spectral observations in the future. For the first time, in this paper, the $r_h$ versus $M_V$ diagram shows that GC-2 is an ultra-compact dwarf in the M81 group.
[ 0, 1, 0, 0, 0, 0 ]
Title: Graphons: A Nonparametric Method to Model, Estimate, and Design Algorithms for Massive Networks, Abstract: Many social and economic systems are naturally represented as networks, from off-line and on-line social networks, to bipartite networks, like Netflix and Amazon, between consumers and products. Graphons, developed as limits of graphs, form a natural, nonparametric method to describe and estimate large networks like Facebook and LinkedIn. Here we describe the development of the theory of graphons, for both dense and sparse networks, over the last decade. We also review theorems showing that we can consistently estimate graphons from massive networks in a wide variety of models. Finally, we show how to use graphons to estimate missing links in a sparse network, which has applications from estimating social and information networks in development economics, to rigorously and efficiently doing collaborative filtering with applications to movie recommendations in Netflix and product suggestions in Amazon.
[ 1, 1, 0, 0, 0, 0 ]
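Illustrative sketch for the preceding entry: sampling an n-vertex random graph from a graphon W(x, y), here the assumed toy choice W(x, y) = x * y (not a model fitted to any real network), followed by a crude step-function (histogram/blockmodel) estimate of W from the sampled adjacency matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_graphon(W, n):
    """Sample an undirected simple graph: latent u_i ~ Uniform[0,1], edge ij with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    probs = W(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < probs).astype(int)
    A = np.triu(A, 1)                       # keep the upper triangle, no self-loops
    return A + A.T, u

W = lambda x, y: x * y                      # toy graphon
n, k = 2000, 10
A, u = sample_from_graphon(W, n)

# Histogram estimate of W: sort vertices by degree, then average edge indicators in k x k blocks.
order = np.argsort(A.sum(axis=1))
A_sorted = A[np.ix_(order, order)]
blocks = A_sorted.reshape(k, n // k, k, n // k).mean(axis=(1, 3))
print(np.round(blocks, 2))                  # values should grow toward roughly 0.9 in the high-degree corner
```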
Title: Bayesian Methods in Cosmology, Abstract: These notes aim at presenting an overview of Bayesian statistics, the underlying concepts and application methodology that will be useful to astronomers seeking to analyse and interpret a wide variety of data about the Universe. The level starts from elementary notions, without assuming any previous knowledge of statistical methods, and then progresses to more advanced, research-level topics. After an introduction to the importance of statistical inference for the physical sciences, elementary notions of probability theory and inference are introduced and explained. Bayesian methods are then presented, starting from the meaning of Bayes Theorem and its use as inferential engine, including a discussion on priors and posterior distributions. Numerical methods for generating samples from arbitrary posteriors (including Markov Chain Monte Carlo and Nested Sampling) are then covered. The last section deals with the topic of Bayesian model selection and how it is used to assess the performance of models, and contrasts it with the classical p-value approach. A series of exercises of various levels of difficulty are designed to further the understanding of the theoretical material, including fully worked out solutions for most of them.
[ 0, 1, 0, 1, 0, 0 ]
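A minimal sketch of one numerical method mentioned in the preceding entry (Markov Chain Monte Carlo, here random-walk Metropolis). The 2D Gaussian target, starting point and proposal scale are illustrative assumptions for the example, not anything taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))     # toy correlated 2D Gaussian target

def log_posterior(theta):
    # Unnormalized log-density of the toy target.
    return -0.5 * theta @ cov_inv @ theta

def metropolis(log_post, theta0, n_samples=20000, step=0.5):
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    chain, accepted = [], 0
    for _ in range(n_samples):
        proposal = theta + step * rng.normal(size=theta.shape)   # symmetric random-walk proposal
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:                 # Metropolis acceptance rule
            theta, lp = proposal, lp_prop
            accepted += 1
        chain.append(theta.copy())
    return np.array(chain), accepted / n_samples

chain, acc_rate = metropolis(log_posterior, theta0=[3.0, -3.0])
kept = chain[5000:]                                              # discard burn-in
print("acceptance rate:", round(acc_rate, 2))
print("posterior mean estimate:", np.round(kept.mean(axis=0), 2))   # should be close to (0, 0)
```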
Title: Information Extraction in Illicit Domains, Abstract: Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have `long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18\% F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Tutorial on Kernel Density Estimation and Recent Advances, Abstract: This tutorial provides a gentle introduction to kernel density estimation (KDE) and recent advances regarding confidence bands and geometric/topological features. We begin with a discussion of basic properties of KDE: the convergence rate under various metrics, density derivative estimation, and bandwidth selection. Then, we introduce common approaches to the construction of confidence intervals/bands, and we discuss how to handle bias. Next, we talk about recent advances in the inference of geometric and topological features of a density function using KDE. Finally, we illustrate how one can use KDE to estimate a cumulative distribution function and a receiver operating characteristic curve. We provide R implementations related to this tutorial at the end.
[ 0, 0, 0, 1, 0, 0 ]
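A minimal sketch of the basic object in the preceding tutorial entry: a one-dimensional Gaussian kernel density estimator with a rule-of-thumb bandwidth, written directly in NumPy. The bimodal sample and grid are illustrative; the tutorial's own R implementations are not reproduced here.

```python
import numpy as np

def kde_gaussian(x_grid, data, bandwidth=None):
    """Gaussian KDE: f_hat(x) = (1 / (n * h)) * sum_i K((x - X_i) / h), K = standard normal pdf."""
    data = np.asarray(data, dtype=float)
    n = data.size
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel in 1D.
        bandwidth = 1.06 * data.std(ddof=1) * n ** (-1 / 5)
    z = (x_grid[:, None] - data[None, :]) / bandwidth
    kernel = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])  # bimodal data
grid = np.linspace(-5, 5, 201)
density = kde_gaussian(grid, sample)
print("integrates to ~1:", round(float((density * (grid[1] - grid[0])).sum()), 3))
print("highest mode near:", round(float(grid[np.argmax(density)]), 2))        # should be near 1
```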
Title: Optimizing expected word error rate via sampling for speech recognition, Abstract: State-level minimum Bayes risk (sMBR) training has become the de facto standard for sequence-level training of speech recognition acoustic models. It has an elegant formulation using the expectation semiring, and gives large improvements in word error rate (WER) over models trained solely using cross-entropy (CE) or connectionist temporal classification (CTC). sMBR training optimizes the expected number of frames at which the reference and hypothesized acoustic states differ. It may be preferable to optimize the expected WER, but WER does not interact well with the expectation semiring, and previous approaches based on computing expected WER exactly involve expanding the lattices used during training. In this paper we show how to perform optimization of the expected WER by sampling paths from the lattices used during conventional sMBR training. The gradient of the expected WER is itself an expectation, and so may be approximated using Monte Carlo sampling. We show experimentally that optimizing WER during acoustic model training gives 5% relative improvement in WER over a well-tuned sMBR baseline on a 2-channel query recognition task (Google Home).
[ 1, 0, 0, 1, 0, 0 ]
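The key computation in the preceding entry is that the gradient of an expected error is itself an expectation, so it can be approximated by sampling. A toy score-function estimator over a softmax distribution illustrates this; the three hypotheses and their losses below are made up for the example and have nothing to do with lattices or real word error rates.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = np.array([0.0, 1.0, 3.0])         # toy per-hypothesis "error counts"
theta = np.zeros(3)                        # logits parameterizing p(hypothesis)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(500):
    p = softmax(theta)
    # Monte Carlo gradient of E_p[loss]: sample hypotheses, use the score function grad log p = onehot - p.
    idx = rng.choice(3, size=64, p=p)
    baseline = p @ losses                               # variance-reducing baseline (here the exact mean)
    onehot = np.eye(3)[idx]
    grad = ((losses[idx] - baseline)[:, None] * (onehot - p)).mean(axis=0)
    theta -= 0.5 * grad                                 # gradient descent on the expected loss

print("final distribution:", np.round(softmax(theta), 3))   # mass concentrates on the lowest-loss hypothesis
```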
Title: Stochastic Canonical Correlation Analysis, Abstract: We tightly analyze the sample complexity of CCA, provide a learning algorithm that achieves optimal statistical performance in time linear in the required number of samples (up to log factors), as well as a streaming algorithm with similar guarantees.
[ 1, 0, 0, 1, 0, 0 ]
Title: Status maximization as a source of fairness in a networked dictator game, Abstract: Human behavioural patterns exhibit selfish or competitive, as well as selfless or altruistic tendencies, both of which have demonstrable effects on human social and economic activity. In behavioural economics, such effects have traditionally been illustrated experimentally via simple games like the dictator and ultimatum games. Experiments with these games suggest that, beyond rational economic thinking, human decision-making processes are influenced by social preferences, such as an inclination to fairness. In this study we suggest that the apparent gap between competitive and altruistic human tendencies can be bridged by assuming that people are primarily maximising their status, i.e., a utility function different from simple profit maximisation. To this end we analyse a simple agent-based model, where individuals play the repeated dictator game in a social network they can modify. As model parameters we consider the living costs and the rate at which agents forget infractions by others. We find that individual strategies used in the game vary greatly, from selfish to selfless, and that both of the above parameters determine when individuals form complex and cohesive social networks.
[ 1, 0, 0, 0, 0, 1 ]
Title: On Dziobek Special Central Configurations, Abstract: We study the special central configurations of the curved N-body problem in S^3. We show that there are special central configurations formed by N masses for any N > 2. We then extend the concept of special central configurations to S^n, n>0, and study one interesting class of special central configurations in S^n, the Dziobek special central configurations. We obtain a criterion for them and reduce it to two sets of equations. Then we apply these equations to special central configurations of 3 bodies on S^1, 4 bodies on S^2, and 5 bodies in S^3.
[ 0, 0, 1, 0, 0, 0 ]
Title: Learning from a lot: Empirical Bayes in high-dimensional prediction settings, Abstract: Empirical Bayes is a versatile approach to `learn from a lot' in two ways: first, from a large number of variables and second, from a potentially large amount of prior information, e.g. stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss `formal' empirical Bayes methods which maximize the marginal likelihood, but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes, and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and $p$, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters which model a priori information on variables, termed `co-data'. In particular, we present two novel examples that allow for co-data. First, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types; second, a hybrid empirical Bayes-full Bayes ridge regression approach for estimation of the posterior predictive interval.
[ 0, 0, 0, 1, 0, 0 ]
Title: Runout transition and clustering instability observed in binary-mixture avalanche deposits, Abstract: Binary mixtures of dry grains avalanching down a slope are experimentally studied in order to determine the interaction among coarse and fine grains and their effect on the deposit morphology. The distance travelled by the massive front of the avalanche over the horizontal plane of deposition area is measured as a function of mass content of fine particles in the mixture, grain-size ratio, and flume tilt. A sudden transition of the runout is detected at a critical content of fine particles, with a dependence on the grain-size ratio and flume tilt. This transition is explained as two simultaneous avalanches in different flowing regimes (a viscous-like one and an inertial one) competing against each other and provoking a full segregation and a split-off of the deposit into two well-defined, separated deposits. The formation of the distal deposit, in turn, depends on a critical amount of coarse particles. This allows the condensation of the pure coarse deposit around a small, initial seed cluster, which grows rapidly by braking and capturing subsequent colliding coarse particles. For different grain-size ratios and keeping a constant total mass, the change in the amount of fines needed for the transition to occur is found to be always less than 7%. For avalanches with a total mass of 4 kg we find that, most of the time, the runout of a binary avalanche is larger than the runout of monodisperse avalanches of corresponding constituent particles, due to lubrication on the coarse-dominated side or to drag by inertial particles on the fine-dominated side.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Simple Convex Layers Algorithm, Abstract: Given a set of $n$ points $P$ in the plane, the first layer $L_1$ of $P$ is formed by the points that appear on $P$'s convex hull. In general, a point belongs to layer $L_i$, if it lies on the convex hull of the set $P \setminus \bigcup_{j<i}\{L_j\}$. The \emph{convex layers problem} is to compute the convex layers $L_i$. Existing algorithms for this problem either do not achieve the optimal $\mathcal{O}\left(n\log n\right)$ runtime and linear space, or are overly complex and difficult to apply in practice. We propose a new algorithm that is both optimal and simple. The simplicity is achieved by independently computing four sets of monotone convex chains in $\mathcal{O}\left(n\log n\right)$ time and linear space. These are then merged in $\mathcal{O}\left(n\log n\right)$ time.
[ 1, 0, 0, 0, 0, 0 ]
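As a point of reference for the convex layers abstract above, the sketch below shows the naive layer-peeling construction (repeatedly take the hull of the remaining points). It is only an illustration of the problem definition, not the paper's optimal O(n log n) algorithm; the peeling loop here costs O(n^2 log n) in the worst case, and collinear boundary points are deferred to inner layers in this simplified version.

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; points are (x, y) tuples.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convex_layers(points):
    # Naive peeling: layer i is the hull of what remains after
    # removing layers 1..i-1.
    remaining = set(points)
    layers = []
    while remaining:
        hull = convex_hull(list(remaining))
        layers.append(hull)
        remaining -= set(hull)
    return layers
```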
Title: Fundamental solutions for Schrodinger operators with general inverse square potentials, Abstract: In this paper, we classify the fundamental solutions for a class of Schrodinger operators.
[ 0, 0, 1, 0, 0, 0 ]
Title: An Efficient Load Balancing Method for Tree Algorithms, Abstract: Nowadays, multiprocessing is mainstream, with an exponentially increasing number of processors. Load balancing is, therefore, a critical operation for the efficient execution of parallel algorithms. In this paper we consider the fundamental class of tree-based algorithms that are notoriously irregular, and hard to load-balance with existing static techniques. We propose a hybrid load balancing method using the utility of statistical random sampling in estimating the tree depth and node count distributions to uniformly partition an input tree. To conduct an initial performance study, we implemented the method on an Intel Xeon Phi accelerator system. We considered the tree traversal operation on both regular and irregular unbalanced trees manifested by Fibonacci and unbalanced (biased) randomly generated trees, respectively. The results show scalable performance for up to the 60 physical processors of the accelerator, as well as an extrapolated 128 processors case.
[ 1, 0, 0, 0, 0, 0 ]
Title: NSML: A Machine Learning Platform That Enables You to Focus on Your Models, Abstract: Machine learning libraries such as TensorFlow and PyTorch simplify model implementation. However, researchers are still required to perform a non-trivial amount of manual tasks such as GPU allocation, training status tracking, and comparison of models with different hyperparameter settings. We propose a system to handle these tasks and help researchers focus on models. We present the requirements of the system based on a collection of discussions from an online study group comprising 25k members. These include automatic GPU allocation, learning status visualization, handling model parameter snapshots as well as hyperparameter modification during learning, and comparison of performance metrics between models via a leaderboard. We describe the system architecture that fulfills these requirements and present a proof-of-concept implementation, NAVER Smart Machine Learning (NSML). We test the system and confirm substantial efficiency improvements for model development.
[ 1, 0, 0, 0, 0, 0 ]
Title: High order local absorbing boundary conditions for acoustic waves in terms of farfield expansions, Abstract: We devise a new high order local absorbing boundary condition (ABC) for radiating problems and scattering of time-harmonic acoustic waves from obstacles of arbitrary shape. By introducing an artificial boundary $S$ enclosing the scatterer, the original unbounded domain $\Omega$ is decomposed into a bounded computational domain $\Omega^{-}$ and an exterior unbounded domain $\Omega^{+}$. Then, we define interface conditions at the artificial boundary $S$, from truncated versions of the well-known Wilcox and Karp farfield expansion representations of the exact solution in the exterior region $\Omega^{+}$. As a result, we obtain a new local absorbing boundary condition (ABC) for a bounded problem on $\Omega^{-}$, which effectively accounts for the outgoing behavior of the scattered field. Contrary to the low order absorbing conditions previously defined, the order of the error induced by this ABC can easily match the order of the numerical method in $\Omega^{-}$. We accomplish this by simply adding as many terms as needed to the truncated farfield expansions of Wilcox or Karp. The convergence of these expansions guarantees that the order of approximation of the new ABC can be increased arbitrarily without having to enlarge the radius of the artificial boundary. We include numerical results in two and three dimensions which demonstrate the improved accuracy and simplicity of this new formulation when compared to other absorbing boundary conditions.
[ 0, 1, 1, 0, 0, 0 ]
Title: Bootstrapping Exchangeable Random Graphs, Abstract: We introduce two new bootstraps for exchangeable random graphs. One, the "empirical graphon", is based purely on resampling, while the other, the "histogram stochastic block model", is a model-based "sieve" bootstrap. We show that both of them accurately approximate the sampling distributions of motif densities, i.e., of the normalized counts of the number of times fixed subgraphs appear in the network. These densities characterize the distribution of (infinite) exchangeable networks. Our bootstraps therefore give, for the first time, a valid quantification of uncertainty in inferences about fundamental network statistics, and so of parameters identifiable from them.
[ 0, 0, 0, 1, 0, 0 ]
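The following sketch illustrates one plausible reading of the "empirical graphon" bootstrap described above: resample vertices with replacement from the observed graph, take the induced adjacency matrix, and recompute a motif density on each replicate. This is an illustrative interpretation, not code from the paper; the triangle density is used as one example motif, and the function names and defaults are hypothetical.

```python
import numpy as np

def triangle_density(A):
    # Normalized triangle count: trace(A^3)/6 divided by C(n, 3); assumes n >= 3.
    n = A.shape[0]
    triangles = np.trace(A @ A @ A) / 6.0
    return triangles / (n * (n - 1) * (n - 2) / 6.0)

def empirical_graphon_bootstrap(A, n_boot=200, seed=None):
    # Resample n vertices with replacement and keep the induced adjacency
    # pattern of the observed graph A; repeated vertices inherit A's zero
    # diagonal, so self-pairs contribute no edges.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        B = A[np.ix_(idx, idx)]
        np.fill_diagonal(B, 0)
        stats.append(triangle_density(B))
    return np.array(stats)
```

The spread of the returned replicate statistics then serves as a bootstrap approximation to the sampling distribution of the motif density.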
Title: The Discrete Stochastic Galerkin Method for Hyperbolic Equations with Non-smooth and Random Coefficients, Abstract: We develop a general polynomial chaos (gPC) based stochastic Galerkin (SG) method for hyperbolic equations with random and singular coefficients. Due to the singular nature of the solution, the standard gPC-SG methods may suffer from poor or even no convergence. Taking advantage of the fact that the discrete solution, obtained for example by central-type finite difference or finite volume approximations in space and time, is smoother, we first discretize the equation by a smooth finite difference or finite volume scheme, and then apply the gPC-SG approximation to the discrete system. The jump condition at the interface is treated using the immersed upwind methods introduced in [8, 12]. This yields a method that converges with spectral accuracy for finite mesh size and time step. We use a linear hyperbolic equation with discontinuous and random coefficient, and the Liouville equation with discontinuous and random potential, to illustrate our idea, with both first and second order spatial discretizations. Spectral convergence is established for the first equation, and numerical examples for both equations show the desired accuracy of the method.
[ 0, 0, 1, 0, 0, 0 ]
Title: Seasonal evolution of $\mathrm{C_2N_2}$, $\mathrm{C_3H_4}$, and $\mathrm{C_4H_2}$ abundances in Titan's lower stratosphere, Abstract: We study the seasonal evolution of Titan's lower stratosphere (around 15~mbar) in order to better understand the atmospheric dynamics and chemistry in this part of the atmosphere. We analysed Cassini/CIRS far-IR observations from 2006 to 2016 in order to measure the seasonal variations of three photochemical by-products: $\mathrm{C_4H_2}$, $\mathrm{C_3H_4}$, and $\mathrm{C_2N_2}$. We show that the abundances of these three gases have evolved significantly at northern and southern high latitudes since 2006. We measure a sudden and steep increase of the volume mixing ratios of $\mathrm{C_4H_2}$, $\mathrm{C_3H_4}$, and $\mathrm{C_2N_2}$ at the south pole from 2012 to 2013, whereas the abundances of these gases remained approximately constant at the north pole over the same period. At northern mid-latitudes, $\mathrm{C_2N_2}$ and $\mathrm{C_4H_2}$ abundances decrease after 2012 while $\mathrm{C_3H_4}$ abundances stay constant. The comparison of these volume mixing ratio variations with the predictions of photochemical and dynamical models provides constraints on the seasonal evolution of atmospheric circulation and chemical processes at play.
[ 0, 1, 0, 0, 0, 0 ]
Title: Systems, Actors and Agents: Operation in a multicomponent environment, Abstract: The multi-agent approach has become popular in computer science and technology. However, the conventional models of multi-agent and multicomponent systems implicitly or explicitly assume the existence of absolute time, or even do not include time in the set of defining parameters. At the same time, it is proved theoretically and validated experimentally that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, informational, etc. Thus, the goal of this work is the construction of multi-agent multicomponent system models with concurrency of processes and diversity of actions. To achieve this goal, a mathematical system actor model is elaborated and its properties are studied.
[ 1, 0, 0, 0, 0, 0 ]
Title: The Painlevé property of $\mathbb{C}P^{N-1}$ sigma models, Abstract: We test the $\mathbb{C}P^{N-1}$ sigma models for the Painlevé property. While the construction of finite action solutions ensures their meromorphicity, the general case requires testing. The test is performed for the equations in the homogeneous variables, with their first component normalised to one. No constraints are imposed on the dimensionality of the model or the values of the initial exponents. This makes the test nontrivial, as the number of equations and dependent variables are indefinite. A $\mathbb{C}P^{N-1}$ system proves to have a $(4N-5)$-parameter family of solutions whose movable singularities are only poles, while the order of the investigated system is $4N-4$. The remaining degree of freedom, connected with an extra negative resonance, may correspond to a branching movable essential singularity. An example of such a solution is provided.
[ 0, 1, 0, 0, 0, 0 ]
Title: A comment on Stein's unbiased risk estimate for reduced rank estimators, Abstract: In the framework of matrix valued observables with low rank means, Stein's unbiased risk estimate (SURE) can be useful for risk estimation and for tuning the amount of shrinkage towards low rank matrices. This was demonstrated by Candès et al. (2013) for singular value soft thresholding, which is a Lipschitz continuous estimator. SURE provides an unbiased risk estimate for an estimator whenever the differentiability requirements for Stein's lemma are satisfied. Lipschitz continuity of the estimator is sufficient, but it is emphasized that differentiability Lebesgue almost everywhere isn't. The reduced rank estimator, which gives the best approximation of the observation with a fixed rank, is an example of a discontinuous estimator for which Stein's lemma actually applies. This was observed by Mukherjee et al. (2015), but the proof was incomplete. This brief note gives a sufficient condition for Stein's lemma to hold for estimators with discontinuities, which is then shown to be fulfilled for a class of spectral function estimators including the reduced rank estimator. Singular value hard thresholding does, however, not satisfy the condition, and Stein's lemma does not apply to this estimator.
[ 0, 0, 1, 1, 0, 0 ]
Title: Static and Fluctuating Magnetic Moments in the Ferroelectric Metal LiOsO$_3$, Abstract: LiOsO$_3$ is the first example of a new class of material called a ferroelectric metal. We performed zero-field and longitudinal-field $\mu$SR, along with a combination of electronic structure and dipole field calculations, to determine the magnetic ground state of LiOsO$_3$. We find that the sample contains both static Li nuclear moments and dynamic Os electronic moments. Below $\approx 0.7\,$K, the fluctuations of the Os moments slow down, though remain dynamic down to 0.08$\,$K. We expect this could result in a frozen-out, disordered ground state at even lower temperatures.
[ 0, 1, 0, 0, 0, 0 ]
Title: Ranking with Adaptive Neighbors, Abstract: Retrieving the most similar objects in a large-scale database for a given query is a fundamental building block in many application domains, ranging from web search to visual, cross-media, and document retrieval. State-of-the-art approaches have mainly focused on capturing the underlying geometry of the data manifolds. Graph-based approaches, in particular, define various diffusion processes on weighted data graphs. Despite their success, these approaches rely on fixed-weight graphs, making ranking sensitive to the input affinity matrix. In this study, we propose a new ranking algorithm that simultaneously learns the data affinity matrix and the ranking scores. The proposed optimization formulation assigns adaptive neighbors to each point in the data based on the local connectivity, and the smoothness constraint assigns similar ranking scores to similar data points. We develop a novel and efficient algorithm to solve the optimization problem. Evaluations using synthetic and real datasets suggest that the proposed algorithm can outperform the existing methods.
[ 0, 0, 0, 1, 0, 0 ]
Title: Identities and central polynomials of real graded division algebras, Abstract: Let $A$ be a finite dimensional real algebra with a division grading by a finite abelian group $G$. In this paper we provide finite bases for the $T_G$-ideal of graded identities and for the $T_G$-space of graded central polynomials for $A$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Two-Dimensional Large Gap Topological Insulators with Large Rashba Spin-Orbit Coupling in Group-IV films, Abstract: Rashba spin-orbit coupling in topological insulators has attracted much interest due to its exotic properties closely related to spintronic devices. The coexistence of nontrivial topology and giant Rashba splitting, however, has rarely been observed in two-dimensional films, severely limiting its potential applications at room temperature. Here, we propose a series of inversion-asymmetric group-IV films, ABZ2, whose stability is confirmed by phonon spectrum calculations. The analyses of electronic structures reveal that they are intrinsic 2D TIs with a bulk gap as large as 0.74 eV, except for the GeSiF2, SnSiCl2, GeSiCl2 and GeSiBr2 monolayers, which can transform from normal to topological phases under appropriate tensile strains. Another intriguing feature is the giant Rashba spin splitting, with a magnitude reaching 0.15 eV, the largest value reported in 2D films. These results present a platform to explore 2D TIs for room-temperature device applications.
[ 0, 1, 0, 0, 0, 0 ]
Title: Assumption-Based Approaches to Reasoning with Priorities, Abstract: This paper maps out the relation between different approaches for handling preferences in argumentation with strict rules and defeasible assumptions by offering translations between them. The systems we compare are: non-prioritized defeats, i.e., attacks; preference-based defeats; and preference-based defeats extended with reverse defeat.
[ 1, 0, 0, 0, 0, 0 ]
Title: On the Impact of Transposition Errors in Diffusion-Based Channels, Abstract: In this work, we consider diffusion-based molecular communication with and without drift between two static nano-machines. We employ type-based information encoding, releasing a single molecule per information bit. At the receiver, we consider an asynchronous detection algorithm which exploits the arrival order of the molecules. In such systems, transposition errors fundamentally undermine reliability and capacity. Thus, in this work we study the impact of transpositions on the system performance. Towards this, we present an analytical expression for the exact bit error probability (BEP) caused by transpositions and derive computationally tractable approximations of the BEP for diffusion-based channels with and without drift. Based on these results, we analyze the BEP when background noise is not negligible and derive the optimal bit interval that minimizes the BEP. Simulation results confirm the theoretical results and show the error and goodput performance for different parameters such as block size or noise generation rate.
[ 1, 0, 1, 0, 0, 0 ]
Title: Uniform $L^p$-improving for weighted averages on curves, Abstract: We define variable parameter analogues of the affine arclength measure on curves and prove near-optimal $L^p$-improving estimates for associated multilinear generalized Radon transforms. Some of our results are new even in the convolution case.
[ 0, 0, 1, 0, 0, 0 ]
Title: Finite Sample Differentially Private Confidence Intervals, Abstract: We study the problem of estimating finite sample confidence intervals of the mean of a normal population under the constraint of differential privacy. We consider both the known and unknown variance cases and construct differentially private algorithms to estimate confidence intervals. Crucially, our algorithms guarantee a finite sample coverage, as opposed to an asymptotic coverage. Unlike most previous differentially private algorithms, we do not require the domain of the samples to be bounded. We also prove lower bounds on the expected size of any differentially private confidence set, showing that our parameters are optimal up to polylogarithmic factors.
[ 1, 0, 1, 1, 0, 0 ]
Title: Muon Spin Rotation Analysis of the Internal Magnetic Field of Heavy Fermion System Uranium Beryllium-13, Abstract: Uranium beryllium-13 is a heavy fermion system whose anomalous behavior may be explained by its poorly understood internal magnetic structure. Here, uranium beryllium-13's magnetic distribution is probed via muon spin spectroscopy ($\mu$SR), a process where positive muons localize at magnetically unique sites in the crystal lattice and precess at characteristic Larmor frequencies, providing measurements of the internal field. Muon spin experiments using the transverse-field technique conducted at varying temperatures and external magnetic field strengths are analyzed via statistical methods in ROOT. Two precession frequencies are observed at low temperatures with an amplitude ratio in the Fourier transform of 2:1, enabling muon stopping sites to be traced at the geometric centers of the edges of the crystal lattice. Characteristic strong and weak magnetic sites are deduced, additionally verified by mathematical relationships. Results can readily be applied to other heavy fermion systems, and the recent identification of quantum critical points in a host of heavy fermion compounds shows a promising future for the application of these systems in quantum technology. Note that this paper is an analysis of data, and all experiments mentioned here were conducted by a third party.
[ 0, 1, 0, 0, 0, 0 ]
Title: Unitary Groups as Stabilizers of Orbits, Abstract: We show that a finite unitary group which has orbits spanning the whole space is necessarily the setwise stabilizer of a certain orbit.
[ 0, 0, 1, 0, 0, 0 ]
Title: Algebras of generalized dihedral type, Abstract: We provide a complete classification of all algebras of generalised dihedral type, which are natural generalizations of algebras which occurred in the study of blocks with dihedral defect groups. This gives a description by quivers and relations coming from surface triangulations.
[ 0, 0, 1, 0, 0, 0 ]
Title: Average whenever you meet: Opportunistic protocols for community detection, Abstract: Consider the following asynchronous, opportunistic communication model over a graph $G$: in each round, one edge is activated uniformly and independently at random and (only) its two endpoints can exchange messages and perform local computations. Under this model, we study the following random process: The first time a vertex is an endpoint of an active edge, it chooses a random number, say $\pm 1$ with probability $1/2$; then, in each round, the two endpoints of the currently active edge update their values to their average. We show that, if $G$ exhibits a two-community structure (for example, two expanders connected by a sparse cut), the values held by the nodes will collectively reflect the underlying community structure over a suitable phase of the above process, allowing efficient and effective recovery in important cases. In more detail, we first provide a first-moment analysis showing that, for a large class of almost-regular clustered graphs that includes the stochastic block model, the expected values held by all but a negligible fraction of the nodes eventually reflect the underlying cut signal. We prove this property emerges after a mixing period of length $\mathcal O(n\log n)$. We further provide a second-moment analysis for a more restricted class of regular clustered graphs that includes the regular stochastic block model. For this case, we are able to show that most nodes can efficiently and locally identify their community of reference over a suitable time window. This results in the first opportunistic protocols that approximately recover community structure using only polylogarithmic work per node. Even for the above class of regular graphs, our second moment analysis requires new concentration bounds on the product of certain random matrices that are technically challenging and possibly of independent interest.
[ 1, 0, 0, 0, 0, 0 ]
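The opportunistic averaging process described above is specified concretely enough to simulate. The sketch below is a minimal illustration of that process only (random edge activation, lazy ±1 initialization, pairwise averaging); the community-recovery step, which in the paper relies on reading the values during a suitable phase, is not reproduced, and the function signature is hypothetical.

```python
import random

def opportunistic_averaging(edges, n, rounds, seed=0):
    # edges: list of (u, v) pairs of an undirected graph on nodes 0..n-1.
    # The first time a node is an endpoint of an active edge it draws a
    # random value in {-1, +1}; thereafter, the two endpoints of the
    # currently active edge replace their values by their average.
    rng = random.Random(seed)
    values = [None] * n
    for _ in range(rounds):
        u, v = rng.choice(edges)          # uniform, independent edge activation
        for w in (u, v):
            if values[w] is None:
                values[w] = rng.choice((-1.0, 1.0))
        avg = (values[u] + values[v]) / 2.0
        values[u] = values[v] = avg
    return values
```

On a graph made of two dense clusters joined by a sparse cut, the node values observed during the transient phase tend to separate by community, so thresholding them around the global mean gives a rough cluster assignment, in the spirit of the recovery result stated in the abstract.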
Title: Provably Accurate Double-Sparse Coding, Abstract: Sparse coding is a crucial subroutine in algorithms for various signal processing, deep learning, and other machine learning applications. The central goal is to learn an overcomplete dictionary that can sparsely represent a given input dataset. However, a key challenge is that storage, transmission, and processing of the learned dictionary can be untenably high if the data dimension is high. In this paper, we consider the double-sparsity model introduced by Rubinstein et al. (2010b) where the dictionary itself is the product of a fixed, known basis and a data-adaptive sparse component. First, we introduce a simple algorithm for double-sparse coding that can be amenable to efficient implementation via neural architectures. Second, we theoretically analyze its performance and demonstrate asymptotic sample complexity and running time benefits over existing (provable) approaches for sparse coding. To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees. Finally, we support our analysis via several numerical experiments on simulated data, confirming that our method can indeed be useful in problem sizes encountered in practical applications.
[ 1, 0, 0, 1, 0, 0 ]
Title: Sequential two-fold Pearson chi-squared test and tails of the Bessel process distributions, Abstract: We find asymptotic formulas for the error probabilities of the two-fold Pearson goodness-of-fit test as functions of two critical levels. These results may be reformulated in terms of tails of two-dimensional distributions of the Bessel process. Necessary properties of the Infeld function are obtained.
[ 0, 0, 1, 1, 0, 0 ]
Title: Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction, Abstract: The visual focus of attention (VFOA) has been recognized as a prominent conversational cue. We are interested in estimating and tracking the VFOAs associated with multi-party social interactions. We note that in this type of situation the participants either look at each other or at an object of interest; therefore their eyes are not always visible. Consequently, both gaze and VFOA estimation cannot be based on eye detection and tracking. We propose a method that exploits the correlation between eye gaze and head movements. Both VFOA and gaze are modeled as latent variables in a Bayesian switching state-space model. The proposed formulation leads to a tractable learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. The method is tested and benchmarked using two publicly available datasets that contain typical multi-party human-robot and human-human interactions.
[ 1, 0, 0, 0, 0, 0 ]
Title: Magnetic properties in ultra-thin 3d transition metal alloys II: Experimental verification of quantitative theories of damping and spin-pumping, Abstract: A systematic experimental study of Gilbert damping is performed via ferromagnetic resonance for the disordered crystalline binary 3d transition metal alloys Ni-Co, Ni-Fe and Co-Fe over the full range of alloy compositions. After accounting for inhomogeneous linewidth broadening, the damping shows clear evidence of both interfacial damping enhancement (by spin pumping) and radiative damping. We quantify these two extrinsic contributions and thereby determine the intrinsic damping. The comparison of the intrinsic damping to multiple theoretical calculations yields good qualitative and quantitative agreement in most cases. Furthermore, the values of the damping obtained in this study are in good agreement with a wide range of published experimental and theoretical values. Additionally, we find a compositional dependence of the spin mixing conductance.
[ 0, 1, 0, 0, 0, 0 ]
Title: Detection of Anomalies in Large Scale Accounting Data using Deep Autoencoder Networks, Abstract: Learning to detect fraud in large-scale accounting data is one of the long-standing challenges in financial statement audits or fraud investigations. Nowadays, the majority of applied techniques refer to handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios and fraudsters gradually find ways to circumvent them. To overcome this disadvantage and inspired by the recent success of deep learning, we propose the application of deep autoencoder neural networks to detect anomalous journal entries. We demonstrate that the trained network's reconstruction error obtainable for a journal entry and regularized by the entry's individual attribute probabilities can be interpreted as a highly adaptive anomaly assessment. Experiments on two real-world datasets of journal entries show the effectiveness of the approach, resulting in high f1-scores of 32.93 (dataset A) and 16.95 (dataset B) and fewer false positive alerts compared to state-of-the-art baseline methods. Initial feedback received from chartered accountants and fraud examiners underpinned the quality of the approach in capturing highly relevant accounting anomalies.
[ 1, 0, 0, 0, 0, 0 ]
Title: Alternate Estimation of a Classifier and the Class-Prior from Positive and Unlabeled Data, Abstract: We consider a problem of learning a binary classifier only from positive data and unlabeled data (PU learning) and estimating the class-prior in unlabeled data under the case-control scenario. Most of the recent methods of PU learning require an estimate of the class-prior probability in unlabeled data, and it is estimated in advance with another method. However, such a two-step approach which first estimates the class prior and then trains a classifier may not be the optimal approach since the estimation error of the class-prior is not taken into account when a classifier is trained. In this paper, we propose a novel unified approach to estimating the class-prior and training a classifier alternately. Our proposed method is simple to implement and computationally efficient. Through experiments, we demonstrate the practical usefulness of the proposed method.
[ 0, 0, 0, 1, 0, 0 ]
Title: Rate Optimal Binary Linear Locally Repairable Codes with Small Availability, Abstract: A locally repairable code with availability has the property that every code symbol can be recovered from multiple, disjoint subsets of other symbols of small size. In particular, a code symbol is said to have $(r,t)$-availability if it can be recovered from $t$ disjoint subsets, each of size at most $r$. A code with availability is said to be 'rate-optimal', if its rate is maximum among the class of codes with given locality, availability, and alphabet size. This paper focuses on rate-optimal binary, linear codes with small availability, and makes four contributions. First, it establishes tight upper bounds on the rate of binary linear codes with $(r,2)$ and $(2,3)$ availability. Second, it establishes a uniqueness result for binary rate-optimal codes, showing that for certain classes of binary linear codes with $(r,2)$ and $(2,3)$-availability, any rate optimal code must be a direct sum of shorter rate optimal codes. Third, it presents novel upper bounds on the rates of binary linear codes with $(2,t)$ and $(r,3)$-availability. In particular, the main contribution here is a new method for bounding the number of cosets of the dual of a code with availability, using its covering properties. Finally, it presents a class of locally repairable linear codes associated with convex polyhedra, focusing on the codes associated with the Platonic solids. It demonstrates that these codes are locally repairable with $t = 2$, and that the codes associated with (geometric) dual polyhedra are (coding theoretic) duals of each other.
[ 1, 0, 1, 0, 0, 0 ]
Title: Occupation times for the finite buffer fluid queue with phase-type ON-times, Abstract: In this short communication we study a fluid queue with a finite buffer. The performance measure we are interested in is the occupation time over a finite time period, i.e., the fraction of time the workload process is below some fixed target level. We construct an alternating sequence of sojourn times $D_1,U_1,...$ where the pairs $(D_i,U_i)_{i\in\mathbb{N}}$ are i.i.d. random vectors. We use this sequence to determine the distribution function of the occupation time in terms of its double transform.
[ 0, 0, 1, 0, 0, 0 ]
Title: Affine forward variance models, Abstract: We introduce the class of affine forward variance (AFV) models of which both the conventional Heston model and the rough Heston model are special cases. We show that AFV models can be characterized by the affine form of their cumulant generating function, which can be obtained as solution of a convolution Riccati equation. We further introduce the class of affine forward order flow intensity (AFI) models, which are structurally similar to AFV models, but driven by jump processes, and which include Hawkes-type models. We show that the cumulant generating function of an AFI model satisfies a generalized convolution Riccati equation and that a high-frequency limit of AFI models converges in distribution to the AFV model.
[ 0, 0, 0, 0, 0, 1 ]
Title: Measurement of the muon-induced neutron seasonal modulation with LVD, Abstract: Cosmic ray muons with an average energy of 280 GeV, and neutrons produced by muons, are detected with the Large Volume Detector at LNGS. We present an analysis of the seasonal variation of the neutron flux on the basis of the data obtained during 15 years. The measurement of the seasonal variation of the specific number of neutrons generated by muons allows us to obtain the magnitude of the variation of the average energy of the muon flux at the depth of the LVD location. The source of the seasonal variation of the total neutron flux is a change in the intensity and the average energy of the muon flux.
[ 0, 1, 0, 0, 0, 0 ]
Title: A sub-super solution method for a class of nonlocal problems involving the p(x)-Laplacian operator and applications, Abstract: In the present paper we study the existence of solutions for some nonlocal problems involving the p(x)-Laplacian operator. The approach is based on a new sub-supersolution method.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Las Vegas algorithm to solve the elliptic curve discrete logarithm problem, Abstract: In this paper, we describe a new Las Vegas algorithm to solve the elliptic curve discrete logarithm problem. The algorithm depends on a property of the group of rational points of an elliptic curve and is thus not a generic algorithm. The algorithm that we describe has some similarities with the most powerful index-calculus algorithm for the discrete logarithm problem over a finite field.
[ 1, 0, 1, 0, 0, 0 ]
Title: Spectral sequences via examples, Abstract: These are lecture notes for a short course about spectral sequences that was held at Málaga, October 18--20 (2016), during the "Fifth Young Spanish Topologists Meeting". The approach was to illustrate the basic notions via fully computed examples arising from Algebraic Topology and Group Theory.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Note on Prediction Markets, Abstract: In a prediction market, individuals can sequentially place bets on the outcome of a future event. This leaves a trail of personal probabilities for the event, each being conditional on the current individual's private background knowledge and on the previously announced probabilities of other individuals, which give partial information about their private knowledge. By means of theory and examples, we revisit some results in this area. In particular, we consider the case of two individuals, who start with the same overall probability distribution but different private information, and then take turns in updating their probabilities. We note convergence of the announced probabilities to a limiting value, which may or may not be the same as that based on pooling their private information.
[ 0, 0, 1, 1, 0, 0 ]
Title: A recurrence relation for the odd order moments of the Fabius function, Abstract: A simple recurrence relation for the even order moments of the Fabius function is proven. Also, a very similar formula for the odd order moments in terms of the even order moments is proved. The matrices corresponding to these formulas (and their inverses) are multiplied so as to obtain a matrix that corresponds to a recurrence relation for the odd order moments in terms of themselves. The theorem at the end gives a closed form for the coefficients.
[ 0, 0, 1, 0, 0, 0 ]
Title: Performance of Range Separated Hybrids: Study within BECKE88 family and Semilocal Exchange Hole based Range Separated Hybrid, Abstract: A long-range corrected range separated hybrid functional is developed based on the density matrix expansion (DME) based semilocal exchange hole with Lee-Yang-Parr (LYP) correlation. An extensive study of the proposed range separated hybrid, covering thermodynamic properties as well as properties related to the fractional occupation number, is presented in comparison with different BECKE88-family semilocal, hybrid and range separated hybrids. It is observed that, using a Kohn-Sham kinetic energy dependent exchange hole, several properties related to the fractional occupation number can be improved without hindering thermochemical accuracy. The newly constructed range separated hybrid accurately describes the hydrogen and non-hydrogen reaction barrier heights. The present range separated functional has been constructed using a full semilocal meta-GGA type exchange hole with exact properties of the exchange hole; therefore, it has a strong physical basis.
[ 0, 1, 0, 0, 0, 0 ]
Title: Manifold Mixup: Learning Better Representations by Interpolating Hidden States, Abstract: Deep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points from outside the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has an important effect of encouraging the hidden states for a class to be concentrated in such a way that interpolations within the same class or between two different classes do not intersect with the real data points from other classes. This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space. We prove that the exact condition for this problem of underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice. Additionally, this concentration can be seen as making the features in earlier layers more discriminative. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held out samples.
[ 0, 0, 0, 1, 0, 0 ]
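The core interpolation step behind the Manifold Mixup abstract above can be written in a few lines. The sketch below only shows the mixing of a batch of hidden representations and their one-hot targets; the choice of layer at which to mix, the Beta parameter, and the function name are assumptions for illustration, and the surrounding training loop from the paper is not reproduced.

```python
import numpy as np

def manifold_mixup_batch(hidden, labels_onehot, alpha=2.0, seed=None):
    # hidden:        (batch, dim) array of hidden states from some layer
    # labels_onehot: (batch, classes) array of one-hot targets
    # Draw a mixing coefficient lambda ~ Beta(alpha, alpha), pair each
    # example with a randomly permuted partner, and interpolate both the
    # hidden states and the targets with the same coefficient.
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(hidden.shape[0])
    mixed_hidden = lam * hidden + (1.0 - lam) * hidden[perm]
    mixed_labels = lam * labels_onehot + (1.0 - lam) * labels_onehot[perm]
    return mixed_hidden, mixed_labels, lam
```

In training, the mixed hidden states would be fed through the remaining layers and the loss computed against the mixed targets, which is what encourages lower confidence on interpolated points.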
Title: Small Resolution Proofs for QBF using Dependency Treewidth, Abstract: In spite of the close connection between the evaluation of quantified Boolean formulas (QBF) and propositional satisfiability (SAT), tools and techniques which exploit structural properties of SAT instances are known to fail for QBF. This is especially true for the structural parameter treewidth, which has allowed the design of successful algorithms for SAT but cannot be straightforwardly applied to QBF since it does not take into account the interdependencies between quantified variables. In this work we introduce and develop dependency treewidth, a new structural parameter based on treewidth which allows the efficient solution of QBF instances. Dependency treewidth pushes the frontiers of tractability for QBF by overcoming the limitations of previously introduced variants of treewidth for QBF. We augment our results by developing algorithms for computing the decompositions that are required to use the parameter.
[ 1, 0, 0, 0, 0, 0 ]
Title: Lattice thermal expansion and anisotropic displacements in urea, bromomalonic aldehyde, pentachloropyridine and naphthalene, Abstract: Anisotropic displacement parameters (ADPs) are commonly used in crystallography, chemistry and related fields to describe and quantify the thermal motion of atoms. In very recent years, these ADPs have become predictable by lattice dynamics in combination with first-principles theory. Here, we study four very different molecular crystals, namely urea, bromomalonic aldehyde, pentachloropyridine, and naphthalene, by first-principles theory to assess the quality of ADPs calculated in the quasi-harmonic approximation. In addition, we predict both thermal expansion and thermal motion within the quasi-harmonic approximation and compare the predictions with experimental data. Very reliable ADPs are calculated within the quasi-harmonic approximation for all four cases up to at least 200 K, and they turn out to be in better agreement with experiment than the harmonic ones. In one particular case, ADPs can even reliably be predicted up to room temperature. Our results also hint at the importance of normal-mode anharmonicity in the calculation of ADPs.
[ 0, 1, 0, 0, 0, 0 ]
Title: Modal operators and toric ideals, Abstract: In the present paper we consider modal propositional logic and look for the constraints that are imposed on the propositions of the special type $\Box a$ by the structure of the relevant finite Kripke frame. We translate the usual language of modal propositional logic in terms of notions of commutative algebra, namely polynomial rings, ideals, and bases of ideals. We use extensively the perspective obtained in previous works in Algebraic Statistics. We prove that the constraints on $\Box a$ can be derived through a binomial ideal containing a toric ideal, and we give sufficient conditions under which the toric ideal fully describes the constraints.
[ 0, 0, 1, 0, 0, 0 ]
Title: Metadynamics for Training Neural Network Model Chemistries: a Competitive Assessment, Abstract: Neural network (NN) model chemistries (MCs) promise to facilitate the accurate exploration of chemical space and simulation of large reactive systems. One important path to improving these models is to add layers of physical detail, especially long-range forces. At short range, however, these models are data driven and data limited. Little is systematically known about how data should be sampled, and `test data' chosen randomly from some sampling techniques can provide poor information about generality. If the sampling method is narrow, `test error' can appear encouragingly tiny while the model fails catastrophically elsewhere. In this manuscript we competitively evaluate two common sampling methods, molecular dynamics (MD) and normal-mode sampling (NMS), and one uncommon alternative, Metadynamics (MetaMD), for preparing training geometries. We show that MD is an inefficient sampling method in the sense that additional samples do not improve generality. We also show MetaMD is easily implemented in any NNMC software package with a cost that scales linearly with the number of atoms in a sample molecule. MetaMD is a black-box way to ensure samples always reach out to new regions of chemical space, while remaining relevant to chemistry near $k_bT$. It is one cheap tool to address the issue of generalization.
[ 0, 1, 0, 1, 0, 0 ]
Title: Multi-timescale memory dynamics in a reinforcement learning network with attention-gated memory, Abstract: Learning and memory are intertwined in our brain and their relationship is at the core of several recent neural network models. In particular, the Attention-Gated MEmory Tagging model (AuGMEnT) is a reinforcement learning network with an emphasis on biological plausibility of memory dynamics and learning. We find that the AuGMEnT network does not solve some hierarchical tasks, where higher-level stimuli have to be maintained over a long time, while lower-level stimuli need to be remembered and forgotten over a shorter timescale. To overcome this limitation, we introduce hybrid AuGMEnT, with leaky (short-timescale) and non-leaky (long-timescale) units in memory, which allow the network to exchange lower-level information while maintaining higher-level information, thus solving both hierarchical and distractor tasks.
[ 1, 0, 0, 1, 0, 0 ]
Title: Dynamical structure of entangled polymers simulated under shear flow, Abstract: The non-linear response of entangled polymers to shear flow is complicated. Its current understanding is framed mainly as a rheological description in terms of the complex viscosity. However, the full picture requires an assessment of the dynamical structure of individual polymer chains which give rise to the macroscopic observables. Here we shed new light on this problem, using a computer simulation based on a blob model, extended to describe shear flow in polymer melts and semi-dilute solutions. We examine the diffusion and the intermediate scattering spectra during a steady shear flow. The relaxation dynamics are found to speed up along the flow direction, but slow down along the shear gradient direction. The third axis, vorticity, shows a slowdown at the short scale of a tube, but reaches a net speedup at the large scale of the chain radius of gyration.
[ 0, 1, 0, 0, 0, 0 ]
Title: Current induced magnetization switching in PtCoCr structures with enhanced perpendicular magnetic anisotropy and spin-orbit torques, Abstract: Magnetic trilayers having large perpendicular magnetic anisotropy (PMA) and high spin-orbit torque (SOT) efficiency are the key to fabricating nonvolatile magnetic memory and logic devices. In this work, PMA and SOTs are systematically studied in Pt/Co/Cr stacks as a function of Cr thickness. An enhanced perpendicular anisotropy field of around 10189 Oe is obtained and is related to the interface between the Co and Cr layers. In addition, an effective spin Hall angle up to 0.19 is observed due to the improved antidamping-like torque obtained by employing the dissimilar metals Pt and Cr, which have opposite signs of spin Hall angle, on opposite sides of the Co layer. Finally, we observed a nearly linear dependence between the spin Hall angle and the longitudinal resistivity from their temperature-dependent properties, suggesting that the spin Hall effect may arise from an extrinsic skew scattering mechanism. Our results indicate that the 3d transition metal Cr, with a large negative spin Hall angle, could be used to engineer the interfaces of trilayers to enhance PMA and SOTs.
[ 0, 1, 0, 0, 0, 0 ]
Title: Quantum Black Holes and Atomic Nuclei are Hollow, Abstract: The quantum Schrodinger-Newton equation is solved for a self-gravitating Bose gas at zero temperature. It is derived that the density is non-uniform and a central hollow cavity exists. The radial distribution of the particle momentum is uniform. It is shown that a quantum black hole can be formed only above a certain critical mass. The temperature effect is accounted for via the Schrodinger-Poisson-Boltzmann equation, where low and high temperature solutions are obtained. The theoretical analysis is extended to a strongly interacting gas via the Schrodinger-Yukawa equation, showing that atomic nuclei are also hollow. Hollow self-gravitating Fermi gases are described by the Thomas-Fermi equation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Learning Non-Discriminatory Predictors, Abstract: We consider learning a predictor which is non-discriminatory with respect to a "protected attribute" according to the notion of "equalized odds" proposed by Hardt et al. [2016]. We study the problem of learning such a non-discriminatory predictor from a finite training set, both statistically and computationally. We show that a post-hoc correction approach, as suggested by Hardt et al., can be highly suboptimal, present a nearly-optimal statistical procedure, argue that the associated computational problem is intractable, and suggest a second moment relaxation of the non-discrimination definition for which learning is tractable.
[ 1, 0, 0, 0, 0, 0 ]
Title: Pinned, locked, pushed, and pulled traveling waves in structured environments, Abstract: Traveling fronts describe the transition between two alternative states in a great number of physical and biological systems. Examples include the spread of beneficial mutations, chemical reactions, and the invasions by foreign species. In homogeneous environments, the alternative states are separated by a smooth front moving at a constant velocity. This simple picture can break down in structured environments such as tissues, patchy landscapes, and microfluidic devices. Habitat fragmentation can pin the front at a particular location or lock invasion velocities into specific values. Locked velocities are not sensitive to moderate changes in dispersal or growth and are determined by the spatial and temporal periodicity of the environment. The synchronization with the environment results in discontinuous fronts that propagate as periodic pulses. We characterize the transition from continuous to locked invasions and show that it is controlled by positive density-dependence in dispersal or growth. We also demonstrate that velocity locking is robust to demographic and environmental fluctuations and examine stochastic dynamics and evolution in locked invasions.
[ 0, 0, 0, 0, 1, 0 ]