Dataset schema -- one record per paper: a title, an abstract, and six binary topic labels.
title: string (lengths 7 to 239 characters)
abstract: string (lengths 7 to 2.76k characters)
cs: int64 (0 or 1)
phy: int64 (0 or 1)
math: int64 (0 or 1)
stat: int64 (0 or 1)
quantitative biology: int64 (0 or 1)
quantitative finance: int64 (0 or 1)
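The records below follow this schema: a title, an abstract, and one 0/1 value per label column. A minimal sketch of loading and iterating such a multi-label dataset with pandas, assuming the data is stored as a CSV file (the filename arxiv_topics.csv is hypothetical):

```python
import pandas as pd

LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

# Hypothetical filename; the source does not specify how the data is stored.
df = pd.read_csv("arxiv_topics.csv")

# Each row carries a title, an abstract, and six 0/1 topic indicators.
for _, row in df.head(3).iterrows():
    active = [name for name in LABELS if row[name] == 1]
    print(row["title"])
    print("  labels:", ", ".join(active) if active else "none")
```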
Memory effects, transient growth, and wave breakup in a model of paced atrium
The mechanisms underlying cardiac fibrillation have been investigated for over a century, but we are still finding surprising results that change our view of this phenomenon. The present study focuses on the transition from normal rhythm to atrial fibrillation associated with a gradual increase in the pacing rate. While some of our findings are consistent with existing experimental, numerical, and theoretical studies of this problem, one result appears to contradict the accepted picture. Specifically we show that, in a two-dimensional model of paced homogeneous atrial tissue, transition from discordant alternans to conduction block, wave breakup, reentry, and spiral wave chaos is associated with transient growth of finite amplitude disturbances rather than a conventional instability. It is mathematically very similar to subcritical, or bypass, transition from laminar fluid flow to turbulence, which allows many of the tools developed in the context of fluid turbulence to be used for improving our understanding of cardiac arrhythmias.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
An Analysis of the Value of Information when Exploring Stochastic, Discrete Multi-Armed Bandits
In this paper, we propose an information-theoretic exploration strategy for stochastic, discrete multi-armed bandits that achieves optimal regret. Our strategy is based on the value of information criterion. This criterion measures the trade-off between policy information and obtainable rewards. High amounts of policy information are associated with exploration-dominant searches of the space and yield high rewards. Low amounts of policy information favor the exploitation of existing knowledge. Information, in this criterion, is quantified by a parameter that can be varied during search. We demonstrate that a simulated-annealing-like update of this parameter, with a sufficiently fast cooling schedule, leads to an optimal regret that is logarithmic with respect to the number of episodes.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
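The abstract above describes annealing an information-quantifying parameter during exploration. As a loose illustration only (not the paper's value-of-information criterion), the sketch below runs Boltzmann-style exploration on a Bernoulli bandit with a temperature that is lowered each episode; the arm means, cooling schedule, and episode count are all hypothetical.

```python
import math
import random

def boltzmann_bandit(means, episodes=5000, t0=1.0):
    """Soft-max (Boltzmann) exploration with an annealed temperature.

    Generic illustration of cooling an exploration parameter over
    episodes; this is not the value-of-information update analyzed
    in the paper, and the schedule below is a hypothetical choice.
    """
    n_arms = len(means)
    counts = [0] * n_arms
    values = [0.0] * n_arms          # empirical mean reward per arm
    total_reward = 0.0
    for ep in range(1, episodes + 1):
        temp = t0 / math.log(ep + 1)  # hypothetical cooling schedule
        prefs = [math.exp(v / temp) for v in values]
        z = sum(prefs)
        r = random.random() * z
        arm, acc = n_arms - 1, 0.0
        for i, p in enumerate(prefs):
            acc += p
            if r <= acc:
                arm = i
                break
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return total_reward, counts

if __name__ == "__main__":
    print(boltzmann_bandit([0.2, 0.5, 0.7]))
```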
Probabilistic Generative Adversarial Networks
We introduce the Probabilistic Generative Adversarial Network (PGAN), a new GAN variant based on a new kind of objective function. The central idea is to integrate a probabilistic model (a Gaussian Mixture Model, in our case) into the GAN framework which supports a new kind of loss function (based on likelihood rather than classification loss), and at the same time gives a meaningful measure of the quality of the outputs generated by the network. Experiments with MNIST show that the model learns to generate realistic images, and at the same time computes likelihoods that are correlated with the quality of the generated images. We show that PGAN is better able to cope with instability problems that are usually observed in the GAN training procedure. We investigate this from three aspects: the probability landscape of the discriminator, gradients of the generator, and the perfect discriminator problem.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Model comparison for Gibbs random fields using noisy reversible jump Markov chain Monte Carlo
The reversible jump Markov chain Monte Carlo (RJMCMC) method offers an across-model simulation approach for Bayesian estimation and model comparison, by exploring the sampling space that consists of several models of possibly varying dimensions. A naive implementation of RJMCMC to models like Gibbs random fields suffers from computational difficulties: the posterior distribution for each model is termed doubly-intractable since computation of the likelihood function is rarely available. Consequently, it is simply impossible to simulate a transition of the Markov chain in the presence of likelihood intractability. A variant of RJMCMC is presented, called noisy RJMCMC, where the underlying transition kernel is replaced with an approximation based on unbiased estimators. Based on previous theoretical developments, convergence guarantees for the noisy RJMCMC algorithm are provided. The experiments show that the noisy RJMCMC algorithm can be much more efficient than other exact methods, provided that an estimator with controlled Monte Carlo variance is used, a fact which is in agreement with the theoretical analysis.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Functorial compactification of linear spaces
We define compactifications of vector spaces which are functorial with respect to certain linear maps. These "many-body" compactifications are manifolds with corners, and the linear maps lift to b-maps in the sense of Melrose. We derive a simple criterion under which the lifted maps are in fact b-fibrations, and identify how these restrict to boundary hypersurfaces. This theory is an application of a general result on the iterated blow-up of cleanly intersecting submanifolds which extends related results in the literature.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Almost complex structures on connected sums of complex projective spaces
We show that the m-fold connected sum $m\#\mathbb{C}\mathbb{P}^{2n}$ admits an almost complex structure if and only if m is odd.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Raman Scattering by a Two-Dimensional Fermi Liquid with Spin-Orbit Coupling
We present a microscopic theory of Raman scattering by a two-dimensional Fermi liquid (FL) with Rashba and Dresselhaus types of spin-orbit coupling, and subject to an in-plane magnetic field (B). In the long-wavelength limit, the Raman spectrum probes the collective modes of such a FL: the chiral spin waves. The characteristic features of these modes are a linear-in-q term in the dispersion and the dependence of the mode frequency on the directions of both q and B. All of these features have been observed in recent Raman experiments on CdTe quantum wells.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nearly Optimal Robust Subspace Tracking
In this work, we study the robust subspace tracking (RST) problem and obtain one of the first two provable guarantees for it. The goal of RST is to track sequentially arriving data vectors that lie in a slowly changing low-dimensional subspace, while being robust to corruption by additive sparse outliers. It can also be interpreted as a dynamic (time-varying) extension of robust PCA (RPCA), with the minor difference that RST also requires a short tracking delay. We develop a recursive projected compressive sensing algorithm that we call Nearly Optimal RST via ReProCS (ReProCS-NORST) because its tracking delay is nearly optimal. We prove that NORST solves both the RST and the dynamic RPCA problems under weakened standard RPCA assumptions, two simple extra assumptions (slow subspace change and most outlier magnitudes lower bounded), and a few minor assumptions. Our guarantee shows that NORST enjoys a near optimal tracking delay of $O(r \log n \log(1/\epsilon))$. Its required delay between subspace change times is the same, and its memory complexity is $n$ times this value. Thus both these are also nearly optimal. Here $n$ is the ambient space dimension, $r$ is the subspaces' dimension, and $\epsilon$ is the tracking accuracy. NORST also has the best outlier tolerance compared with all previous RPCA or RST methods, both theoretically and empirically (including for real videos), without requiring any model on how the outlier support is generated. This is possible because of the extra assumptions it uses.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The Authority of "Fair" in Machine Learning
In this paper, we argue for the adoption of a normative definition of fairness within the machine learning community. After characterizing this definition, we review the current literature of Fair ML in light of its implications. We end by suggesting ways to incorporate a broader community and generate further debate around how to decide what is fair in ML.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Social Bow Tie
Understanding tie strength in social networks, and the factors that influence it, has received much attention in a myriad of disciplines for decades. Several models incorporating indicators of tie strength have been proposed and used to quantify relationships in social networks, and a standard set of structural network metrics has been applied, predominantly to online social media sites, to predict tie strength. Here, we introduce the concept of the "social bow tie" framework, a small subgraph of the network that consists of a collection of nodes and ties that surround a tie of interest, forming a topological structure that resembles a bow tie. We also define several intuitive and interpretable metrics that quantify properties of the bow tie. We use random forests and regression models to predict categorical and continuous measures of tie strength from different properties of the bow tie, including nodal attributes. We also investigate what aspects of the bow tie are most predictive of tie strength in two distinct social networks: a collection of 75 rural villages in India and a nationwide call network of European mobile phone users. Our results indicate that several of the bow tie metrics are highly predictive of tie strength, and we find that the more the social circles of two individuals overlap, the stronger their tie, consistent with previous findings. However, we also find that the more tightly-knit their non-overlapping social circles, the weaker the tie. This new finding complements our current understanding of what drives the strength of ties in social networks.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
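As a rough illustration of building a neighbourhood structure around a tie and computing one overlap-style quantity in the spirit of the bow-tie framework (the paper's exact metric definitions are not reproduced here), the following sketch uses networkx on a toy graph:

```python
import networkx as nx

def bow_tie_overlap(g, u, v):
    """Neighbourhood overlap around the tie (u, v).

    Illustrative only: the fraction of the combined social circles of
    u and v that they share. This is one simple ingredient resembling
    the bow-tie idea, not the paper's metric set.
    """
    nu = set(g.neighbors(u)) - {v}
    nv = set(g.neighbors(v)) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Toy graph: u and v share two of their four combined neighbours.
g = nx.Graph()
g.add_edges_from([("u", "v"), ("u", "a"), ("v", "a"),
                  ("u", "b"), ("v", "b"), ("u", "c"), ("v", "d")])
print(bow_tie_overlap(g, "u", "v"))  # 2 shared / 4 combined = 0.5
```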
Response Regimes in Equivalent Mechanical Model of Moderately Nonlinear Liquid Sloshing
The paper considers non-stationary responses in a reduced-order model of a partially liquid-filled tank under external forcing. The model involves one common degree of freedom for the tank and the non-sloshing portion of the liquid, and another for the sloshing portion of the liquid. The coupling between these degrees of freedom is nonlinear, with the lowest-order potential dictated by symmetry considerations. Since the mass of the sloshing liquid in realistic conditions does not exceed 10% of the total mass of the system, the reduced-order model turns out to be formally equivalent to well-studied oscillatory systems with nonlinear energy sinks (NES). Exploiting this analogy, and applying the methodology known from studies of systems with NES, we predict a multitude of possible non-stationary responses in the considered model. These responses conform, at least at the qualitative level, to the responses observed in experimental sloshing settings, multi-modal theoretical models and full-scale numeric simulations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Opinion evolution in time-varying social influence networks with prejudiced agents
Investigation of social influence dynamics requires mathematical models that are "simple" enough to admit rigorous analysis, and yet sufficiently "rich" to capture salient features of social groups. Thus, the mechanism of iterative opinion pooling from (DeGroot, 1974), which can explain the generation of consensus, was elaborated in (Friedkin and Johnsen, 1999) to take into account individuals' ongoing attachments to their initial opinions, or prejudices. The "anchorage" of individuals to their prejudices may disable reaching consensus and cause disagreement in a social influence network. Further elaboration of this model may be achieved by relaxing its restrictive assumption of a time-invariant influence network. During opinion dynamics on an issue, arcs of interpersonal influence may be added or subtracted from the network, and the influence weights assigned by an individual to his/her neighbors may alter. In this paper, we establish new important properties of the (Friedkin and Johnsen, 1999) opinion formation model, and also examine its extension to time-varying social influence networks.
Labels: cs=1, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
k*-Nearest Neighbors: From Global to Local
The weighted k-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning. The question of setting the optimal number of neighbors as well as the optimal weights has received much attention over the years; nevertheless, the problem seems to have remained unsettled. In this paper we offer a simple approach to locally weighted regression/classification, where we make the bias-variance tradeoff explicit. Our formulation enables us to phrase a notion of optimal weights and to find these weights, as well as the optimal number of neighbors, efficiently and adaptively for each data point whose value we wish to estimate. The applicability of our approach is demonstrated on several datasets, showing superior performance over standard locally weighted methods.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
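A minimal sketch of locally weighted nearest-neighbour regression with per-query weights, to illustrate the general setting the abstract above works in; the Gaussian distance-decay weighting here is a generic choice, not the paper's bias-variance-optimal weights, and the toy data and bandwidth are assumptions.

```python
import numpy as np

def locally_weighted_knn(X, y, x_query, k=5, bandwidth=1.0):
    """Predict y at x_query from its k nearest neighbours.

    Weights decay with distance (Gaussian kernel); this is a generic
    locally weighted estimate, not the optimal-weight formulation
    derived in the paper.
    """
    dists = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(dists)[:k]              # k nearest neighbours
    w = np.exp(-(dists[idx] / bandwidth) ** 2)
    return float(np.sum(w * y[idx]) / np.sum(w))

# Toy 1-D regression example.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(locally_weighted_knn(X, y, np.array([3.0]), k=10))  # roughly sin(3) ~ 0.14
```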
Network Slicing for Ultra-Reliable Low Latency Communication in Industry 4.0 Scenarios
An important novelty of 5G is its role in transforming the industrial production into Industry 4.0. Specifically, Ultra-Reliable Low Latency Communications (URLLC) will, in many cases, enable replacement of cables with wireless connections and bring freedom in designing and operating interconnected machines, robots, and devices. However, not all industrial links will be of URLLC type; e.g. some applications will require high data rates. Furthermore, these industrial networks will be highly heterogeneous, featuring various communication technologies. We consider network slicing as a mechanism to handle the diverse set of requirements to the network. We present methods for slicing deterministic and packet-switched industrial communication protocols at an abstraction level that is decoupled from the specific implementation of the underlying technologies. Finally, we show how network calculus can be used to assess the end-to-end properties of the network slices.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Markov Decision Processes with Continuous Side Information
We consider a reinforcement learning (RL) setting in which the agent interacts with a sequence of episodic MDPs. At the start of each episode the agent has access to some side-information or context that determines the dynamics of the MDP for that episode. Our setting is motivated by applications in healthcare where baseline measurements of a patient at the start of a treatment episode form the context that may provide information about how the patient might respond to treatment decisions. We propose algorithms for learning in such Contextual Markov Decision Processes (CMDPs) under an assumption that the unobserved MDP parameters vary smoothly with the observed context. We also give lower and upper PAC bounds under the smoothness assumption. Because our lower bound has an exponential dependence on the dimension, we consider a tractable linear setting where the context is used to create linear combinations of a finite set of MDPs. For the linear setting, we give a PAC learning algorithm based on KWIK learning techniques.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Nonlinear stage of Benjamin-Feir instability in forced/damped deep water waves
We study a three-wave truncation of a recently proposed damped/forced high-order nonlinear Schrödinger equation for deep-water gravity waves under the effect of wind and viscosity. The evolution of the norm (wave-action) and spectral mean of the full model are well captured by the reduced dynamics. Three regimes are found for the wind-viscosity balance: we classify them according to the attractor in the phase-plane of the truncated system and to the shift of the spectral mean. A downshift can coexist with both net forcing and damping, i.e., attraction to period-1 or period-2 solutions. Upshift is associated with stronger winds, i.e., to a net forcing where the attractor is always a period-1 solution. The applicability of our classification to experiments in long wave-tanks is verified.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Computational Thinking in Patch
With the future likely to see even more pervasive computation, computational thinking (problem-solving skills incorporating computing knowledge) is now being recognized as a fundamental skill needed by all students. Computational thinking is conceptualizing as opposed to programming; it promotes a natural human thinking style rather than algorithmic reasoning, complements and combines mathematical and engineering thinking, and emphasizes ideas, not artifacts. In this paper, we outline a new visual language, called Patch, with which students can express their solutions to eScience computational problems using abstract visual tools. Patch is closer to high-level procedural languages such as C++ or Java than to Scratch or Snap!, but is similar to them in ease of use, combining simplicity and expressive power in a single platform.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Skoda's Ideal Generation from Vanishing Theorem for Semipositive Nakano Curvature and Cauchy-Schwarz Inequality for Tensors
Skoda's 1972 result on ideal generation is a crucial ingredient in the analytic approach to the finite generation of the canonical ring and the abundance conjecture. Special analytic techniques developed by Skoda, other than applications of the usual vanishing theorems and L2 estimates for the d-bar equation, are required for its proof. This note (which is part of a lecture given in the 60th birthday conference for Lawrence Ein) gives a simpler, more straightforward proof of Skoda's result, which makes it a natural consequence of the standard techniques in vanishing theorems and solving d-bar equation with L2 estimates. The proof involves the following three ingredients: (i) one particular Cauchy-Schwarz inequality for tensors with a special factor which accounts for the exponent of the denominator in the formulation of the integral condition for Skoda's ideal generation, (ii) the nonnegativity of Nakano curvature of the induced metric of a special co-rank-1 subbundle of a trivial vector bundle twisted by a special scalar weight function, and (iii) the vanishing theorem and solvability of d-bar equation with L2 estimates for vector bundles of nonnegative Nakano curvature on a strictly pseudoconvex domain. Our proof gives readily other similar results on ideal generation.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error. One roadblock to explaining these phenomena in terms of implicit regularization, structural properties of the solution, and/or easiness of the data is that many learning bounds are quantitatively vacuous when applied to networks learned by SGD in this "deep learning" regime. Logically, in order to explain generalization, we need nonvacuous bounds. We return to an idea by Langford and Caruana (2001), who used PAC-Bayes bounds to compute nonvacuous numerical bounds on generalization error for stochastic two-layer two-hidden-unit neural networks via a sensitivity analysis. By optimizing the PAC-Bayes bound directly, we are able to extend their approach and obtain nonvacuous generalization bounds for deep stochastic neural network classifiers with millions of parameters trained on only tens of thousands of examples. We connect our findings to recent and old work on flat minima and MDL-based explanations of generalization.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Method for Computationally Efficient Design of Dielectric Laser Accelerators
Dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and adjoint. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Computing and Using Minimal Polynomials
Given a zero-dimensional ideal I in a polynomial ring P, many computations start by finding univariate polynomials in I. Searching for a univariate polynomial in I is a particular case of considering the minimal polynomial of an element in P/I. It is well known that minimal polynomials may be computed via elimination; therefore this is considered to be a "resolved problem". But being the key to so many computations, it is worth investigating its meaning, its optimization, its applications.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
DiVM: Model Checking with LLVM and Graph Memory
In this paper, we introduce the concept of a virtual machine with graph-organised memory as a versatile backend for both explicit-state and abstraction-driven verification of software. Our virtual machine uses the LLVM IR as its instruction set, enriched with a small set of hypercalls. We show that the provided hypercalls are sufficient to implement a small operating system, which can then be linked with applications to provide a POSIX-compatible verification environment. Finally, we demonstrate the viability of the approach through a comparison with a more traditionally-designed LLVM model checker.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Uhlenbeck's decomposition in Sobolev and Morrey-Sobolev spaces
We present a self-contained proof of Uhlenbeck's decomposition theorem for $\Omega\in L^p(\mathbb{B}^n,so(m)\otimes\Lambda^1\mathbb{R}^n)$ for $p\in (1,n)$ with Sobolev type estimates in the case $p \in[n/2,n)$ and Morrey-Sobolev type estimates in the case $p\in (1,n/2)$. We also prove an analogous theorem in the case when $\Omega\in L^p( \mathbb{B}^n, TCO_{+}(m) \otimes \Lambda^1\mathbb{R}^n)$, which corresponds to Uhlenbeck's theorem with conformal gauge group.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Making Asynchronous Distributed Computations Robust to Noise
We consider the problem of making distributed computations robust to noise, in particular to worst-case (adversarial) corruptions of messages. We give a general distributed interactive coding scheme which simulates any asynchronous distributed protocol while tolerating an optimal corruption of a $\Theta(1/n)$ fraction of all messages while incurring a moderate blowup of $O(n\log^2 n)$ in the communication complexity. Our result is the first fully distributed interactive coding scheme in which the topology of the communication network is not known in advance. Prior work required either a coordinating node to be connected to all other nodes in the network or assumed a synchronous network in which all nodes already know the complete topology of the network.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Justifications in Constraint Handling Rules for Logical Retraction in Dynamic Algorithms
We present a straightforward source-to-source transformation that introduces justifications for user-defined constraints into the CHR programming language. Then a scheme of two rules suffices to allow for logical retraction (deletion, removal) of constraints during computation. Without the need to recompute from scratch, these rules remove not only the constraint but also undo all consequences of the rule applications that involved the constraint. We prove a confluence result concerning the rule scheme and show its correctness. When algorithms are written in CHR, constraints represent both data and operations. CHR is already incremental by nature, i.e. constraints can be added at runtime. Logical retraction adds decrementality. Hence any algorithm written in CHR with justifications will become fully dynamic. Operations can be undone and data can be removed at any point in the computation without compromising the correctness of the result. We present two classical examples of dynamic algorithms, written in our prototype implementation of CHR with justifications that is available online: maintaining the minimum of a changing set of numbers and shortest paths in a graph whose edges change.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bose-Hubbard lattice as a controllable environment for open quantum systems
We investigate the open dynamics of an atomic impurity embedded in a one-dimensional Bose-Hubbard lattice. We derive the reduced evolution equation for the impurity and show that the Bose-Hubbard lattice behaves as a tunable engineered environment allowing us to simulate both Markovian and non-Markovian dynamics in a controlled and experimentally realisable way. We demonstrate that the presence or absence of memory effects is a signature of the nature of the excitations induced by the impurity, being delocalized or localized in the two limiting cases of superfluid and Mott insulator, respectively. Furthermore, our findings show how the excitations supported in the two phases can be characterized as information carriers.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semi-decidable equivalence relations obtained by composition and lattice join of decidable equivalence relations
Composition and lattice join (transitive closure of a union) of equivalence relations are operations taking pairs of decidable equivalence relations to relations that are semi-decidable, but not necessarily decidable. This article addresses the question, is every semi-decidable equivalence relation obtainable in those ways from a pair of decidable equivalence relations? It is shown that every semi-decidable equivalence relation, of which every equivalence class is infinite, is obtainable as both a composition and a lattice join of decidable equivalence relations having infinite equivalence classes. An example is constructed of a semi-decidable, but not decidable, equivalence relation having finite equivalence classes that can be obtained from decidable equivalence relations, both by composition and also by lattice join. Another example is constructed, in which such a relation cannot be obtained from decidable equivalence relations in either of the two ways.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
ClipAudit: A Simple Risk-Limiting Post-Election Audit
We propose a simple risk-limiting audit for elections, ClipAudit. To determine whether candidate A (the reported winner) actually beat candidate B in a plurality election, ClipAudit draws ballots at random, without replacement, until either all cast ballots have been drawn, or until \[ a - b \ge \beta \sqrt{a+b} \] where $a$ is the number of ballots in the sample for the reported winner A, and $b$ is the number of ballots in the sample for opponent B, and where $\beta$ is a constant determined a priori as a function of the number $n$ of ballots cast and the risk-limit $\alpha$. ClipAudit doesn't depend on the unofficial margin (as does Bravo). We show how to extend ClipAudit to contests with multiple winners or losers, or to multiple contests.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
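The ClipAudit stopping rule quoted in the abstract above is concrete enough to sketch directly: draw ballots without replacement and stop when a - b >= beta * sqrt(a + b). In the sketch below the ballot pool and the value of beta are hypothetical; in a real audit beta would be chosen from the number of cast ballots and the risk limit, as the abstract states, and that computation is not reproduced here.

```python
import random

def clipaudit_two_candidate(ballots, beta):
    """Sample ballots without replacement until a - b >= beta * sqrt(a + b).

    `ballots` is a list of 'A'/'B' votes; `beta` is assumed to have been
    precomputed from the number of cast ballots and the risk limit.
    """
    order = list(ballots)
    random.shuffle(order)                      # draw without replacement
    a = b = 0
    for drawn, vote in enumerate(order, start=1):
        if vote == "A":
            a += 1
        elif vote == "B":
            b += 1
        if a + b > 0 and (a - b) >= beta * (a + b) ** 0.5:
            return "audit confirms A", drawn
    return "full hand count reached", len(order)

# Toy election: 55% of ballots for A, with a hypothetical beta.
pool = ["A"] * 5500 + ["B"] * 4500
print(clipaudit_two_candidate(pool, beta=3.0))
```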
LCA(2), Weil index, and product formula
In this paper we study the category LCA(2) of certain non-locally compact abelian topological groups, and extend the notion of Weil index. As applications we deduce some product formulas for curves over local fields and arithmetic surfaces.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Dichotomy for Sampling Barrier-Crossing Events of Random Walks with Regularly Varying Tails
We study how to sample paths of a random walk up to the first time it crosses a fixed barrier, in the setting where the step sizes are iid with negative mean and have a regularly varying right tail. We introduce a desirable property for a change of measure to be suitable for exact simulation. We study whether the change of measure of Blanchet and Glynn (2008) satisfies this property and show that it does so if and only if the tail index $\alpha$ of the right tail lies in the interval $(1, \, 3/2)$.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Crawling migration under chemical signalling: a stochastic particle model
Cell migration is a fundamental process involved in physiological phenomena such as the immune response and morphogenesis, but also in pathological processes, such as the development of tumor metastasis. These functions are effectively ensured because cells are active systems that adapt to their environment. In this work, we consider a migrating cell as an active particle, where its intracellular activity is responsible for motion. Such a system was modeled in previous work, where the protrusion activity of the cell was described by a stochastic Markovian jump process; that model was shown to capture the diversity in observed trajectories. Here, we add a description of the effect of an external attractive chemical signal, which may vary in time, on the protrusion dynamics. We show that the resulting stochastic model is a well-posed non-homogeneous Markovian process, and provide cell trajectories in different settings, illustrating the effects of the signal on long-term trajectories.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Towards a Deeper Understanding of Adversarial Losses
Recent work has proposed various adversarial losses for training generative adversarial networks. Yet, it remains unclear which types of functions are valid adversarial loss functions, and how these loss functions perform against one another. In this paper, we aim to gain a deeper understanding of adversarial losses by decoupling the effects of their component functions and regularization terms. We first derive some necessary and sufficient conditions on the component functions such that the adversarial loss is a divergence-like measure between the data and the model distributions. In order to systematically compare different adversarial losses, we then propose DANTest, a new, simple framework based on discriminative adversarial networks. With this framework, we evaluate an extensive set of adversarial losses by combining different component functions and regularization approaches. This study leads to some new insights into adversarial losses. For reproducibility, all source code is available at this https URL.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Transit Visibility Zones of the Solar System Planets
The detection of thousands of extrasolar planets by the transit method naturally raises the question of whether potential extrasolar observers could detect the transits of the Solar System planets. We present a comprehensive analysis of the regions in the sky from where transit events of the Solar System planets can be detected. We specify how many different Solar System planets can be observed from any given point in the sky, and find the maximum number to be three. We report the probabilities of a randomly positioned external observer to be able to observe single and multiple Solar System planet transits; specifically, we find a probability of 2.518% to be able to observe at least one transiting planet, 0.229% for at least two transiting planets, and 0.027% for three transiting planets. We identify 68 known exoplanets that have a favourable geometric perspective to allow transit detections in the Solar System and we show how the ongoing K2 mission will extend this list. We use occurrence rates of exoplanets to estimate that there are $3.2\pm1.2$ and $6.6^{+1.3}_{-0.8}$ temperate Earth-sized planets orbiting GK and M dwarf stars brighter than $V=13$ and $V=16$ respectively, that are located in the Earth's transit zone.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nearest-neighbour Markov point processes on graphs with Euclidean edges
We define nearest-neighbour point processes on graphs with Euclidean edges and linear networks. They can be seen as the analogues of renewal processes on the real line. We show that the Delaunay neighbourhood relation on a tree satisfies the Baddeley--M{\o}ller consistency conditions and provide a characterisation of Markov functions with respect to this relation. We show that a modified relation defined in terms of the local geometry of the graph satisfies the consistency conditions for all graphs with Euclidean edges.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
A Hierarchical Bayesian Linear Regression Model with Local Features for Stochastic Dynamics Approximation
One of the challenges in model-based control of stochastic dynamical systems is that the state transition dynamics are involved, and it is not easy or efficient to make good-quality predictions of the states. Moreover, there are not many representational models for the majority of autonomous systems, as it is not easy to build a compact model that captures the entire dynamical subtleties and uncertainties. In this work, we present a hierarchical Bayesian linear regression model with local features to learn the dynamics of a micro-robotic system as well as two simpler examples, consisting of a stochastic mass-spring damper and a stochastic double inverted pendulum on a cart. The model is hierarchical since we assume non-stationary priors for the model parameters. These non-stationary priors make the model more flexible by imposing priors on the priors of the model. To solve the maximum likelihood (ML) problem for this hierarchical model, we use the variational expectation maximization (EM) algorithm, and enhance the procedure by introducing hidden target variables. The algorithm yields parsimonious model structures, and consistently provides fast and accurate predictions for all our examples involving large training and test sets. This demonstrates the effectiveness of the method in learning stochastic dynamics, which makes it suitable for future use in a paradigm, such as model-based reinforcement learning, to compute optimal control policies in real time.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Multitask Learning and Benchmarking with Clinical Time Series Data
Health care is one of the most exciting frontiers in data mining and machine learning. Successful adoption of electronic health records (EHRs) created an explosion in digital clinical data available for analysis, but progress in machine learning for healthcare research has been difficult to measure because of the absence of publicly available benchmark data sets. To address this problem, we propose four clinical prediction benchmarks using data derived from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) database. These tasks cover a range of clinical problems including modeling risk of mortality, forecasting length of stay, detecting physiologic decline, and phenotype classification. We propose strong linear and neural baselines for all four tasks and evaluate the effect of deep supervision, multitask training and data-specific architectural modifications on the performance of neural models.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Essentially Finite Vector Bundles on Normal Pseudo-proper Algebraic Stacks
Let $X$ be a normal, connected and projective variety over an algebraically closed field $k$. It is known that a vector bundle $V$ on $X$ is essentially finite if and only if it is trivialized by a proper surjective morphism $f:Y\to X$. In this paper we introduce a different approach to this problem which allows us to extend the results to normal, connected and strongly pseudo-proper algebraic stacks of finite type over an arbitrary field $k$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Fluid flow across a wavy channel brought in contact
A pressure-driven flow in the contact interface between elastic solids with wavy surfaces is studied. We consider a strong coupling between the solid and the fluid problems, which is relevant when the fluid pressure is comparable with the contact pressure. An approximate analytical solution is obtained for this coupled problem. A finite-element monolithically coupled framework is used to solve the problem numerically. A good agreement is obtained between the two solutions within the region of validity of the analytical one. A power-law interface transmissivity decay is observed near percolation. Finally, we show that the external pressure needed to seal the channel is an affine function of the inlet pressure and does not depend on the outlet pressure.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Species tree estimation using ASTRAL: how many genes are enough?
Species tree reconstruction from genomic data is increasingly performed using methods that account for sources of gene tree discordance such as incomplete lineage sorting. One popular method for reconstructing species trees from unrooted gene tree topologies is ASTRAL. In this paper, we derive theoretical sample complexity results for the number of genes required by ASTRAL to guarantee reconstruction of the correct species tree with high probability. We also validate those theoretical bounds in a simulation study. Our results indicate that ASTRAL requires $\mathcal{O}(f^{-2} \log n)$ gene trees to reconstruct the species tree correctly with high probability where n is the number of species and f is the length of the shortest branch in the species tree. Our simulations, which are the first to test ASTRAL explicitly under the anomaly zone, show trends consistent with the theoretical bounds and also provide some practical insights on the conditions where ASTRAL works well.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Two weight Commutators in the Dirichlet and Neumann Laplacian settings
In this paper we establish the characterization of the weighted BMO via two weight commutators in the settings of the Neumann Laplacian $\Delta_{N_+}$ on the upper half space $\mathbb{R}^n_+$ and the reflection Neumann Laplacian $\Delta_N$ on $\mathbb{R}^n$ with respect to the weights associated to $\Delta_{N_+}$ and $\Delta_{N}$ respectively. This in turn yields a weak factorization for the corresponding weighted Hardy spaces, where in particular, the weighted class associated to $\Delta_{N}$ is strictly larger than the Muckenhoupt weighted class and contains non-doubling weights. In our study, we also make contributions to the classical Muckenhoupt--Wheeden weighted Hardy space (BMO space respectively) by showing that it can be characterized via area function (Carleson measure respectively) involving the semigroup generated by the Laplacian on $\mathbb{R}^n$ and that the duality of these weighted Hardy and BMO spaces holds for Muckenhoupt $A^p$ weights with $p\in (1,2]$ while the previously known related results cover only $p\in (1,{n+1\over n}]$. We also point out that this two weight commutator theorem might not be true in the setting of general operators $L$, and in particular we show that it is not true when $L$ is the Dirichlet Laplacian $\Delta_{D_+}$ on $\mathbb{R}^n_+$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Schoenberg Representations and Gramian Matrices of Matérn Functions
We represent Matérn functions in terms of Schoenberg's integrals, which ensure positive definiteness, and prove that the systems of translates of Matérn functions form Riesz sequences in $L^2(\mathbb{R}^n)$ or Sobolev spaces. Our approach is based on a new class of integral transforms that generalize Fourier transforms for radial functions. We also consider inverse multi-quadrics and obtain similar results.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A generalization of the Hasse-Witt matrix of a hypersurface
The Hasse-Witt matrix of a hypersurface in ${\mathbb P}^n$ over a finite field of characteristic $p$ gives essentially complete mod $p$ information about the zeta function of the hypersurface. But if the degree $d$ of the hypersurface is $\leq n$, the zeta function is trivial mod $p$ and the Hasse-Witt matrix is zero-by-zero. We generalize a classical formula for the Hasse-Witt matrix to obtain a matrix that gives a nontrivial congruence for the zeta function for all $d$. We also describe the differential equations satisfied by this matrix and prove that it is generically invertible.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on 'relational' tasks.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
High-precision measurement of the proton's atomic mass
We report on the precise measurement of the atomic mass of a single proton with a purpose-built Penning-trap system. With a precision of 32 parts-per-trillion our result not only improves on the current CODATA literature value by a factor of three, but also disagrees with it at a level of about 3 standard deviations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Prospects of detecting HI using redshifted 21 cm radiation at z ~ 3
Distribution of cold gas in the post-reionization era provides an important link between distribution of galaxies and the process of star formation. Redshifted 21 cm radiation from the Hyperfine transition of neutral Hydrogen allows us to probe the neutral component of cold gas, most of which is to be found in the interstellar medium of galaxies. Existing and upcoming radio telescopes can probe the large scale distribution of neutral Hydrogen via HI intensity mapping. In this paper we use an estimate of the HI power spectrum derived using an ansatz to compute the expected signal from the large scale HI distribution at z ~ 3. We find that the scale dependence of bias at small scales makes a significant difference to the expected signal even at large angular scales. We compare the predicted signal strength with the sensitivity of radio telescopes that can observe such radiation and calculate the observation time required for detecting neutral Hydrogen at these redshifts. We find that OWFA (Ooty Wide Field Array) offers the best possibility to detect neutral Hydrogen at z ~ 3 before the SKA (Square Kilometer Array) becomes operational. We find that the OWFA should be able to make a 3 sigma or a more significant detection in 2000 hours of observations at several angular scales. Calculations done using the Fisher matrix approach indicate that a 5 sigma detection of the binned HI power spectrum via measurement of the amplitude of the HI power spectrum is possible in 1000 hours (Sarkar, Bharadwaj and Ali, 2017).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Unconditional bases of subspaces related to non-self-adjoint perturbations of self-adjoint operators
Assume that $T$ is a self-adjoint operator on a Hilbert space $\mathcal{H}$ and that the spectrum of $T$ is confined in the union $\bigcup_{j\in J}\Delta_j$, $J\subseteq\mathbb{Z}$, of segments $\Delta_j=[\alpha_j, \beta_j]\subset\mathbb{R}$ such that $\alpha_{j+1}>\beta_j$ and $$ \inf_{j} \left(\alpha_{j+1}-\beta_j\right) = d > 0. $$ If $B$ is a bounded (in general non-self-adjoint) perturbation of $T$ with $\|B\|=:b<d/2$ then the spectrum of the perturbed operator $A=T+B$ lies in the union $\bigcup_{j\in J} U_{b}(\Delta_j)$ of the mutually disjoint closed $b$-neighborhoods $U_{b}(\Delta_j)$ of the segments $\Delta_j$ in $\mathbb{C}$. Let $Q_j$ be the Riesz projection onto the invariant subspace of $A$ corresponding to the part of the spectrum of $A$ lying in $U_{b}\left(\Delta_j\right)$, $j\in J$. Our main result is as follows: The subspaces $\mathcal{L}_j=Q_j(\mathcal H)$, $j\in J$, form an unconditional basis in the whole space $\mathcal H$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Centralities of Nodes and Influences of Layers in Large Multiplex Networks
We formulate and propose an algorithm (MultiRank) for the ranking of nodes and layers in large multiplex networks. MultiRank takes into account the full multiplex network structure of the data and exploits the dual nature of the network in terms of nodes and layers. The proposed centrality of the layers (influences) and the centrality of the nodes are determined by a coupled set of equations. The basic idea consists in assigning more centrality to nodes that receive links from highly influential layers and from already central nodes. The layers are more influential if highly central nodes are active in them. The algorithm applies to directed/undirected as well as to weighted/unweighted multiplex networks. We discuss the application of MultiRank to three major examples of multiplex network datasets: the European Air Transportation Multiplex Network, the Pierre Auger Multiplex Collaboration Network and the FAO Multiplex Trade Network.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
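As an illustrative fixed-point iteration of the kind the MultiRank abstract describes -- node centralities boosted by links arriving in influential layers, and layer influences boosted by the activity of central nodes -- the sketch below uses a toy unweighted, undirected multiplex; the update rules and normalizations are simplified stand-ins, not the published MultiRank equations.

```python
import numpy as np

def toy_multirank(layers, iters=100):
    """Coupled node-centrality / layer-influence iteration (illustrative).

    `layers` has shape (L, N, N): one adjacency matrix per layer.
    The specific updates and normalizations below are simplified
    stand-ins for the coupled equations defined in the paper.
    """
    L, N, _ = layers.shape
    x = np.ones(N) / N          # node centralities
    z = np.ones(L) / L          # layer influences
    for _ in range(iters):
        # Nodes gain centrality from central neighbours in influential layers.
        agg = np.tensordot(z, layers, axes=1)       # influence-weighted aggregate
        x = agg @ x + 1e-12
        x /= x.sum()
        # Layers gain influence when central nodes are active in them.
        z = np.array([layers[l] @ x for l in range(L)]).sum(axis=1) + 1e-12
        z /= z.sum()
    return x, z

# Two-layer toy multiplex on 4 nodes.
l0 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
l1 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
print(toy_multirank(np.stack([l0, l1])))
```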
Twofold triple systems with cyclic 2-intersecting Gray codes
Given a combinatorial design $\mathcal{D}$ with block set $\mathcal{B}$, the block-intersection graph (BIG) of $\mathcal{D}$ is the graph that has $\mathcal{B}$ as its vertex set, where two vertices $B_{1} \in \mathcal{B}$ and $B_{2} \in \mathcal{B}$ are adjacent if and only if $|B_{1} \cap B_{2}| > 0$. The $i$-block-intersection graph ($i$-BIG) of $\mathcal{D}$ is the graph that has $\mathcal{B}$ as its vertex set, where two vertices $B_{1} \in \mathcal{B}$ and $B_{2} \in \mathcal{B}$ are adjacent if and only if $|B_{1} \cap B_{2}| = i$. In this paper several constructions are obtained that start with twofold triple systems (TTSs) with Hamiltonian $2$-BIGs and result in larger TTSs that also have Hamiltonian $2$-BIGs. These constructions collectively enable us to determine the complete spectrum of TTSs with Hamiltonian $2$-BIGs (equivalently TTSs with cyclic $2$-intersecting Gray codes) as well as the complete spectrum for TTSs with $2$-BIGs that have Hamilton paths (i.e., for TTSs with $2$-intersecting Gray codes). In order to prove these spectrum results, we sometimes require ingredient TTSs that have large partial parallel classes; we prove lower bounds on the sizes of partial parallel classes in arbitrary TTSs, and then construct larger TTSs with both cyclic $2$-intersecting Gray codes and parallel classes.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Consequences of Unhappiness While Developing Software
The growing literature on affect among software developers mostly reports on the linkage between happiness, software quality, and developer productivity. Understanding the positive side of happiness -- positive emotions and moods -- is an attractive and important endeavor. Scholars in industrial and organizational psychology have suggested that also studying the negative side -- unhappiness -- could lead to cost-effective ways of enhancing working conditions, job performance, and to limiting the occurrence of psychological disorders. Our comprehension of the consequences of (un)happiness among developers is still too shallow, and is mainly expressed in terms of development productivity and software quality. In this paper, we attempt to uncover the experienced consequences of unhappiness among software developers. Using qualitative data analysis of the responses given by 181 questionnaire participants, we identified 49 consequences of unhappiness while doing software development. We found detrimental consequences on developers' mental well-being, the software development process, and the produced artifacts. Our classification scheme, available as open data, will spawn new happiness research opportunities of cause-effect type, and it can act as a guideline for practitioners for identifying damaging effects of unhappiness and for fostering happiness on the job.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Typesafe Abstractions for Tensor Operations
We propose a typesafe abstraction for tensors (i.e. multidimensional arrays) exploiting the type-level programming capabilities of Scala through heterogeneous lists (HList), and showcase typesafe abstractions of common tensor operations and various neural layers such as convolution or recurrent neural networks. This abstraction could lay the foundation for future typesafe deep learning frameworks that run on Scala/JVM.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training
Exploiting sparsity enables hardware systems to run neural networks faster and more energy-efficiently. However, most prior sparsity-centric optimization techniques only accelerate the forward pass of neural networks and usually require an even longer training process with iterative pruning and retraining. We observe that artificially inducing sparsity in the gradients of the gates in an LSTM cell has little impact on the training quality. Further, we can enforce structured sparsity in the gate gradients to make the LSTM backward pass up to 45% faster than the state-of-the-art dense approach and 168% faster than the state-of-the-art sparsifying method on modern GPUs. Though the structured sparsifying method can impact the accuracy of a model, this performance gap can be eliminated by mixing our sparse training method and the standard dense training method. Experimental results show that the mixed method can achieve comparable results in a shorter time span than using purely dense training.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Linear density-based clustering with a discrete density model
Density-based clustering techniques are used in a wide range of data mining applications. One of their most attractive features consists in not requiring prior knowledge of the number of clusters in a dataset or of their shape. In this paper we propose a new algorithm named Linear DBSCAN (Lin-DBSCAN), a simple approach to clustering inspired by the density model introduced with the well-known algorithm DBSCAN. Designed to minimize the computational cost of density-based clustering on geospatial data, Lin-DBSCAN features a linear time complexity that makes it suitable for real-time applications on low-resource devices. Lin-DBSCAN uses a discrete version of the density model of DBSCAN that takes advantage of a grid-based scan-and-merge approach. The name of the algorithm stems from the main features outlined above. The algorithm was tested on well-known data sets. Experimental results demonstrate the efficiency and validity of this approach over DBSCAN in the context of spatial data clustering, enabling the use of a density-based clustering technique on large datasets with low computational cost.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
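A minimal sketch of the grid-based scan-and-merge idea mentioned in the abstract above: bin points into cells of side eps, keep cells whose counts reach a density threshold, and merge adjacent dense cells into clusters. The cell adjacency rule, thresholds and toy data are illustrative choices, not the published Lin-DBSCAN specification.

```python
from collections import defaultdict, deque

def grid_density_clusters(points, eps, min_pts):
    """Grid-based density clustering (illustrative sketch).

    Points are hashed into square cells of side `eps`; cells holding at
    least `min_pts` points are treated as dense, and dense cells that
    touch (8-neighbourhood) are merged into one cluster.
    """
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // eps), int(p[1] // eps))].append(p)
    dense = {c for c, pts in cells.items() if len(pts) >= min_pts}

    clusters, seen = [], set()
    for cell in dense:
        if cell in seen:
            continue
        queue, members = deque([cell]), []
        seen.add(cell)
        while queue:                              # BFS over adjacent dense cells
            cx, cy = queue.popleft()
            members.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in dense and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(members)
    return clusters

# Two well-separated blobs plus an isolated point (left unclustered).
pts = [(0.1, 0.1), (0.2, 0.15), (0.15, 0.3),
       (5.0, 5.1), (5.2, 5.0), (5.1, 5.2), (9.0, 0.0)]
print([len(c) for c in grid_density_clusters(pts, eps=0.5, min_pts=2)])  # [3, 3]
```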
An inexact subsampled proximal Newton-type method for large-scale machine learning
We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an $\epsilon$-suboptimal point in $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS, where $n$ is number of samples, $d$ is feature dimension, and $\kappa$ is the condition number. As long as $n > d$, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers which requires $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa n})\log(\frac{1}{\epsilon}))$ FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for $\ell_1$-regularized logistic regression on real datasets.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Future Energy Consumption Prediction Based on Grey Forecast Model
We use a grey forecast model to predict the future energy consumption of four states in the U.S., and make some improvements to the model.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
AutoPass: An Automatic Password Generator
Text password has long been the dominant user authentication technique and is used by large numbers of Internet services. If they follow recommended practice, users are faced with the almost insuperable problem of generating and managing a large number of site-unique and strong (i.e. non-guessable) passwords. One way of addressing this problem is through the use of a password generator, i.e. a client-side scheme which generates (and regenerates) site-specific strong passwords on demand, with the minimum of user input. This paper provides a detailed specification and analysis of AutoPass, a password generator scheme previously outlined as part of a general analysis of such schemes. AutoPass has been designed to address issues identified in previously proposed password generators, and incorporates novel techniques to address these issues. Unlike almost all previously proposed schemes, AutoPass enables the generation of passwords that meet important real-world requirements, including forced password changes, use of pre-specified passwords, and generation of passwords meeting site-specific requirements.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
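The AutoPass construction itself is not given in the abstract above. As a generic illustration of the client-side password-generator pattern it discusses (deriving a site-specific password from a master secret and a site identifier), one might write something like the sketch below; the choice of KDF, iteration count, salt label and character-set policy are all assumptions, not AutoPass.

```python
import hashlib
import string

def derive_site_password(master_secret, site, length=16,
                         charset=string.ascii_letters + string.digits + "!#%+:=?@"):
    """Derive a deterministic site-specific password from a master secret.

    Generic illustration of a client-side password generator; it is not
    the AutoPass construction, and the KDF, iteration count and policy
    handling are assumptions for the sketch only.
    """
    raw = hashlib.pbkdf2_hmac("sha256",
                              master_secret.encode(),
                              ("pwgen-demo:" + site).encode(),  # site identifier as salt
                              200_000)
    # Map derived bytes onto the allowed character set (modulo bias ignored here).
    return "".join(charset[b % len(charset)] for b in raw[:length])

print(derive_site_password("correct horse battery staple", "example.com"))
```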
A Practically Competitive and Provably Consistent Algorithm for Uplift Modeling
Randomized experiments have been critical tools of decision making for decades. However, subjects can show significant heterogeneity in response to treatments in many important applications. Therefore it is not enough to simply know which treatment is optimal for the entire population. What we need is a model that correctly customizes treatment assignment based on subject characteristics. The problem of constructing such models from randomized experiment data is known as Uplift Modeling in the literature. Many algorithms have been proposed for uplift modeling and some have generated promising results on various data sets. Yet little is known about the theoretical properties of these algorithms. In this paper, we propose a new tree-based ensemble algorithm for uplift modeling. Experiments show that our algorithm can achieve competitive results on both synthetic and industry-provided data. In addition, by properly tuning the "node size" parameter, our algorithm is proved to be consistent under mild regularity conditions. This is the first consistent algorithm for uplift modeling that we are aware of.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The cosmic shoreline: the evidence that escape determines which planets have atmospheres, and what this may mean for Proxima Centauri b
The planets of the Solar System divide neatly between those with atmospheres and those without when arranged by insolation ($I$) and escape velocity ($v_{\mathrm{esc}}$). The dividing line goes as $I \propto v_{\mathrm{esc}}^4$. Exoplanets with reported masses and radii are shown to crowd against the extrapolation of the Solar System trend, making a metaphorical cosmic shoreline that unites all the planets. The $I \propto v_{\mathrm{esc}}^4$ relation may implicate thermal escape. We therefore address the general behavior of hydrodynamic thermal escape models ranging from Pluto to highly-irradiated Extrasolar Giant Planets (EGPs). Energy-limited escape is harder to test because copious XUV radiation is mostly a feature of young stars, and hence requires extrapolating to historic XUV fluences ($I_{\mathrm{xuv}}$) using proxies and power laws. An energy-limited shoreline should scale as $I_{\mathrm{xuv}} \propto v_{\mathrm{esc}}^3\sqrt{\rho}$, which differs distinctly from the apparent $I_{\mathrm{xuv}} \propto v_{\mathrm{esc}}^4$ relation. Energy-limited escape does provide good quantitative agreement to the highly irradiated EGPs. Diffusion-limited escape implies that no planet can lose more than 1% of its mass as H$_2$. Impact erosion, to the extent that impact velocities $v_{\mathrm{imp}}$ can be estimated for exoplanets, fits to a $v_{\mathrm{imp}} \approx 4\,-\,5\, v_{\mathrm{esc}}$ shoreline. The proportionality constant is consistent with what the collision of comet Shoemaker-Levy 9 showed us we should expect of modest impacts in deep atmospheres. With respect to the shoreline, Proxima Centauri b is on the metaphorical beach. Known hazards include its rapid energetic accretion, high impact velocities, its early life on the wrong side of the runaway greenhouse, and Proxima Centauri's XUV radiation. In its favor is a vast phase space of unknown unknowns.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Universal kinetics for engagement of mechanosensing pathways in cell adhesion
When plated onto substrates, cell morphology and even stem cell differentiation are influenced by the stiffness of their environment. Stiffer substrates give strongly spread (eventually polarized) cells with strong focal adhesions, and stress fibers; very soft substrates give a less developed cytoskeleton, and much lower cell spreading. The kinetics of this process of cell spreading is studied extensively, and important universal relationships are established on how the cell area grows with time. Here we study the population dynamics of spreading cells, investigating the characteristic processes involved in cell response to the substrate. We show that unlike the individual cell morphology, this population dynamics does not depend on the substrate stiffness. Instead, a strong activation temperature dependence is observed. Different cell lines on different substrates all have long-time statistics controlled by the thermal activation over a single energy barrier dG=19 kcal/mol, while the early-time kinetics follows a power law $t^5$. This implies that the rate of spreading depends on an internal process of adhesion-mechanosensing complex assembly and activation: the operational complex must have 5 component proteins, and the last process in the sequence (which we believe is the activation of focal adhesion kinase) is controlled by the binding energy dG.
0
0
0
0
1
0
Agent-based computing from multi-agent systems to agent-based Models: a visual survey
Agent-Based Computing is a diverse research domain concerned with the building of intelligent software based on the concept of "agents". In this paper, we use Scientometric analysis to analyze all sub-domains of agent-based computing. Our data consists of 1,064 journal articles indexed in the ISI web of knowledge published during a twenty-year period: 1990-2010. These were retrieved using a topic search with various keywords commonly used in sub-domains of agent-based computing. In our proposed approach, we have employed a combination of two applications for analysis, namely Network Workbench and CiteSpace: Network Workbench allowed for the analysis of complex network aspects of the domain, while detailed visualization-based analysis of the bibliographic data was performed using CiteSpace. Our results include the identification of the largest cluster based on keywords, the timeline of publication of index terms, the core journals and key subject categories. We also identify the core authors and top countries of origin of the manuscripts, along with core research institutes. Finally, our results have interestingly revealed the strong presence of agent-based computing in a number of non-computing related scientific domains including Life Sciences, Ecological Sciences and Social Sciences.
1
1
0
0
0
0
Large Spontaneous Hall Effects in Chiral Topological Magnets
We have found two examples of non-ferromagnetic states in correlated electron systems that, as novel topological phases, exhibit a large anomalous Hall effect. One is the chiral spin liquid compound Pr$_{2}$Ir$_{2}$O$_{7}$, which exhibits a spontaneous Hall effect in a spin liquid state due to spin ice correlation. The other is the chiral antiferromagnets Mn$_{3}$Sn and Mn$_{3}$Ge, which exhibit a large anomalous Hall effect at room temperature. The latter show a sign change of the anomalous Hall effect under a small change in the magnetic field of a few hundred gauss, which should be useful for various applications. We discuss how magnetic Weyl metal states are the origin of the large anomalous Hall effect observed in both the spin liquid and the antiferromagnets, which possess almost no magnetization.
0
1
0
0
0
0
Mott metal-insulator transition in the Doped Hubbard-Holstein model
Motivated by the current interest in the understanding of the Mott insulators away from half filling, observed in many perovskite oxides, we study the Mott metal-insulator transition (MIT) in the doped Hubbard-Holstein model using the Hartree-Fock mean field theory. The Hubbard-Holstein model is the simplest model containing both the Coulomb and the electron-lattice interactions, which are important ingredients in the physics of the perovskite oxides. In contrast to the half-filled Hubbard model, which always results in a single phase (either metallic or insulating), our results show that away from half-filling, a mixed phase of metallic and insulating regions occurs. As the dopant concentration is increased, the metallic part progressively grows in volume, until it exceeds the percolation threshold, leading to percolative conduction. This happens above a critical dopant concentration $\delta_c$, which, depending on the strength of the electron-lattice interaction, can be a significant fraction of unity. This means that the material could be insulating even for a substantial amount of doping, in contrast to the expectation that doped holes would destroy the insulating behavior of the half-filled Hubbard model. Our theory provides a framework for the understanding of the density-driven metal-insulator transition observed in many complex oxides.
0
1
0
0
0
0
ASDA: A Syntactic Analyzer for the Algerian Dialect Aimed at Semantic Analysis
Opinion mining and sentiment analysis in social media is a research issue of great interest to the scientific community. However, before beginning this analysis, we are faced with a set of problems, in particular the richness of the languages and dialects used within these media. To address this problem, we propose in this paper an approach for the construction and implementation of a syntactic analyzer named ASDA. This tool is a parser for the Algerian dialect that labels the terms of a given corpus. We construct a labeling table containing, for each term, its stem and its different prefixes and suffixes, allowing us to determine the different grammatical parts, a sort of POS tagging. This labeling will later serve the semantic processing of the Algerian dialect, such as automatic translation of this dialect or sentiment analysis.
1
0
0
0
0
0
Latent Intention Dialogue Models
Developing a dialogue agent that is capable of making autonomous decisions and communicating by natural language is one of the long-term goals of machine learning research. Traditional approaches either rely on hand-crafting a small state-action set for applying reinforcement learning that is not scalable or constructing deterministic models for learning dialogue sentences that fail to capture natural conversational variability. In this paper, we propose a Latent Intention Dialogue Model (LIDM) that employs a discrete latent variable to learn underlying dialogue intentions in the framework of neural variational inference. In a goal-oriented dialogue scenario, these latent intentions can be interpreted as actions guiding the generation of machine responses, which can be further refined autonomously by reinforcement learning. The experimental evaluation of LIDM shows that the model out-performs published benchmarks for both corpus-based and human evaluation, demonstrating the effectiveness of discrete latent variable models for learning goal-oriented dialogues.
1
0
0
1
0
0
Quasiconvex elastodynamics: weak-strong uniqueness for measure-valued solutions
A weak-strong uniqueness result is proved for measure-valued solutions to the system of conservation laws arising in elastodynamics. The main novelty brought forward by the present work is that the underlying stored-energy function of the material is assumed strongly quasiconvex. The proof employs tools from the calculus of variations to establish general convexity-type bounds on quasiconvex functions and recasts them in order to adapt the relative entropy method to quasiconvex elastodynamics.
0
0
1
0
0
0
Accelerated Dual Learning by Homotopic Initialization
Gradient descent and coordinate descent are well understood in terms of their asymptotic behavior, but less so in a transient regime often used for approximations in machine learning. We investigate how proper initialization can have a profound effect on finding near-optimal solutions quickly. We show that a certain property of a data set, namely the boundedness of the correlations between eigenfeatures and the response variable, can lead to faster initial progress than expected by commonplace analysis. Convex optimization problems can tacitly benefit from that, but this automatism does not apply to their dual formulation. We analyze this phenomenon and devise provably good initialization strategies for dual optimization as well as heuristics for the non-convex case, relevant for deep learning. We find our predictions and methods to be experimentally well-supported.
1
0
0
0
0
0
Inverse Reinforcement Learning from Summary Data
Inverse reinforcement learning (IRL) aims to explain observed strategic behavior by fitting reinforcement learning models to behavioral data. However, traditional IRL methods are only applicable when the observations are in the form of state-action paths. This assumption may not hold in many real-world modeling settings, where only partial or summarized observations are available. In general, we may assume that there is a summarizing function $\sigma$, which acts as a filter between us and the true state-action paths that constitute the demonstration. Some initial approaches to extending IRL to such situations have been presented, but with very specific assumptions about the structure of $\sigma$, such as that only certain state observations are missing. This paper instead focuses on the most general case of the problem, where no assumptions are made about the summarizing function, except that it can be evaluated. We demonstrate that inference is still possible. The paper presents exact and approximate inference algorithms that allow full posterior inference, which is particularly important for assessing parameter uncertainty in this challenging inference situation. Empirical scalability is demonstrated to reasonably sized problems, and practical applicability is demonstrated by estimating the posterior for a cognitive science RL model based on an observed user's task completion time only.
1
0
0
1
0
0
Algorithm for Optimization and Interpolation based on Hyponormality
On the one hand, consider the problem of finding global solutions to a polynomial optimization problem and, on the other hand, consider the problem of interpolating a set of points with a complex exponential function. This paper proposes a single algorithm to address both problems. It draws on the notion of hyponormality in operator theory. Concerning optimization, it seems to be the first algorithm that is capable of extracting global solutions from a polynomial optimization problem where the variables and data are complex numbers. It also applies to real polynomial optimization, a special case of complex polynomial optimization, and thus extends the work of Henrion and Lasserre implemented in GloptiPoly. Concerning interpolation, the algorithm provides an alternative to Prony's method based on the Autonne-Takagi factorization and it avoids solving a Vandermonde system. The algorithm and its proof are based exclusively on linear algebra. They are devoid of notions from algebraic geometry, contrary to existing methods for interpolation. The algorithm is tested on a series of examples, each illustrating a different facet of the approach. One of the examples demonstrates that hyponormality can be enforced numerically to strengthen a convex relaxation and to force its solution to have rank one.
0
0
1
0
0
0
Human experts vs. machines in taxa recognition
The step of expert taxa recognition currently slows down the response time of many bioassessments. Shifting to quicker and cheaper state-of-the-art machine learning approaches is still met with expert scepticism towards the ability and logic of machines. In our study, we investigate both the differences in accuracy and in the identification logic of taxonomic experts and machines. We propose a systematic approach utilizing deep Convolutional Neural Nets with the transfer learning paradigm and extensively evaluate it over a multi-label and multi-pose taxonomic dataset specifically created for this comparison. We also study the prediction accuracy on different ranks of taxonomic hierarchy in detail. Our results revealed that human experts using actual specimens yield the lowest classification error. However, our proposed, much faster, automated approach using deep Convolutional Neural Nets comes very close to human accuracy. Contrary to previous findings in the literature, we find that machines following the typical flat classification approach commonly used in machine learning perform better than machines forced to adopt the hierarchical, local per-parent-node approach used by human taxonomic experts. Finally, we publicly share our unique dataset to serve as a public benchmark dataset in this field.
1
0
0
1
0
0
A micrometer-thick oxide film with high thermoelectric performance at temperatures ranging from 20 to 400 K
Thermoelectric (TE) materials achieve localised conversion between thermal and electric energies, and the conversion efficiency is determined by a figure of merit zT. To date, two-dimensional electron gas (2DEG) related TE materials hold the records for zT near room temperature. A sharp increase in zT up to ~2.0 was observed previously for superlattice materials such as PbSeTe, Bi$_2$Te$_3$/Sb$_2$Te$_3$ and SrNb$_{0.2}$Ti$_{0.8}$O$_3$/SrTiO$_3$, when the thicknesses of these TE materials were spatially confined within the sub-nanometre scale. The two-dimensional confinement of carriers enlarges the density of states near the Fermi energy and triggers electron-phonon coupling. This overcomes the conventional $\sigma$-$S$ trade-off to improve $S$ more independently, and thereby further increases the thermoelectric power factor (PF $= S^2\sigma$). Nevertheless, practical applications of the present 2DEG materials for high-power energy conversion are impeded by the prerequisite of spatial confinement, as the amount of TE material is insufficient. Here, we report TE properties similar to those of 2DEGs, but achieved in SrNb$_{0.2}$Ti$_{0.8}$O$_3$ films with thicknesses on the sub-micrometre scale, by regulating interfacial and lattice polarizations. A high power factor (up to $10^3\ \mu$W cm$^{-1}$ K$^{-2}$) and zT value (up to 1.6) were observed for the film materials near room temperature and below. Even reckoning in the thickness of the substrate, an integrated power factor of both film and substrate approaching $10^2\ \mu$W cm$^{-1}$ K$^{-2}$ was achieved in a 2 $\mu$m-thick SrNb$_{0.2}$Ti$_{0.8}$O$_3$ film grown on a 100 $\mu$m-thick SrTiO$_3$ substrate. The dependence of high TE performance on size confinement is thereby reduced by a factor of ~$10^3$ compared to conventional 2DEG-related TE materials. The as-grown oxide films are less toxic and do not depend on large amounts of heavy elements, potentially paving the way towards applications in localised refrigeration and electric power generation.
0
1
0
0
0
0
MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks
Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts, such as synonyms and hyponyms. Artificial neural networks have been recently explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
1
0
0
1
0
0
Maximizing acquisition functions for Bayesian optimization
Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.
0
0
0
1
0
0
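The Bayesian-optimization abstract above argues that Monte Carlo estimates of acquisition functions are amenable to gradient-based maximization. Below is a minimal sketch of that idea, assuming a hypothetical 1-D Gaussian-process posterior (the `posterior` function is a stand-in, not any library's API, and the multi-start L-BFGS-B loop is an illustrative choice, not the authors' exact procedure):

```python
# Illustrative sketch, not the paper's implementation: Monte Carlo expected
# improvement under a toy 1-D GP posterior, maximized with multi-start L-BFGS-B.
# Fixing the base samples Z (reparameterization) makes the MC estimate a smooth
# deterministic function of x, so standard optimizers can be applied to it.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
Z = rng.standard_normal(256)          # common random numbers, fixed across x

def posterior(x):
    """Hypothetical GP posterior mean and std at x (stand-ins for a fitted model)."""
    mu = np.sin(3.0 * x)
    sigma = 0.2 + 0.3 * np.abs(np.cos(x))
    return mu, sigma

def mc_ei(x, best_so_far=0.8):
    """Monte Carlo estimate of expected improvement at a scalar x."""
    mu, sigma = posterior(float(x))
    samples = mu + sigma * Z
    return np.maximum(samples - best_so_far, 0.0).mean()

def maximize_acquisition(bounds=(-2.0, 2.0), restarts=10):
    best_x, best_val = None, -np.inf
    for x0 in rng.uniform(*bounds, size=restarts):
        res = minimize(lambda x: -mc_ei(x[0]), x0=[x0],
                       bounds=[bounds], method="L-BFGS-B")
        if -res.fun > best_val:
            best_x, best_val = res.x[0], -res.fun
    return best_x, best_val

x_star, ei_star = maximize_acquisition()
print(f"next query point ~ {x_star:.3f}, MC-EI ~ {ei_star:.4f}")
```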
Angular momentum evolution of galaxies over the past 10-Gyr: A MUSE and KMOS dynamical survey of 400 star-forming galaxies from z=0.3-1.7
We present a MUSE and KMOS dynamical study of 405 star-forming galaxies at redshift z=0.28-1.65 (median redshift z=0.84). Our sample is representative of star-forming, main-sequence galaxies, with star-formation rates of SFR=0.1-30Mo/yr and stellar masses M=10^8-10^11Mo. For 49+/-4% of our sample, the dynamics suggest rotational support, 24+/-3% are unresolved systems and 5+/-2% appear to be early-stage major mergers with components on 8-30kpc scales. The remaining 22+/-5% appear to be dynamically complex, irregular (or face-on systems). For galaxies whose dynamics suggest rotational support, we derive inclination corrected rotational velocities and show these systems lie on a similar scaling between stellar mass and specific angular momentum as local spirals with j*=J/M*\propto M^(2/3) but with a redshift evolution that scales as j*\propto M^{2/3}(1+z)^(-1). We identify a correlation between specific angular momentum and disk stability such that galaxies with the highest specific angular momentum, log(j*/M^(2/3))>2.5, are the most stable, with Toomre Q=1.10+/-0.18, compared to Q=0.53+/-0.22 for galaxies with log(j*/M^(2/3))<2.5. At a fixed mass, the HST morphologies of galaxies with the highest specific angular momentum resemble spiral galaxies, whilst those with low specific angular momentum are morphologically complex and dominated by several bright star-forming regions. This suggests that angular momentum plays a major role in defining the stability of gas disks: at z~1, massive galaxies that have disks with low specific angular momentum appear to be globally unstable, clumpy and turbulent systems. In contrast, galaxies with high specific angular momentum have evolved into stable disks with spiral structures.
0
1
0
0
0
0
Iterative Object and Part Transfer for Fine-Grained Recognition
The aim of fine-grained recognition is to identify subordinate categories in images, such as different species of birds. Existing works have confirmed that, in order to capture the subtle differences across the categories, automatic localization of objects and parts is critical. Most approaches for object and part localization relied on the bottom-up pipeline, where thousands of region proposals are generated and then filtered by pre-trained object/part models. This is computationally expensive and not scalable once the number of objects/parts becomes large. In this paper, we propose a nonparametric data-driven method for object and part localization. Given an unlabeled test image, our approach transfers annotations from a few similar images retrieved in the training set. In particular, we propose an iterative transfer strategy that gradually refines the predicted bounding boxes. Based on the located objects and parts, deep convolutional features are extracted for recognition. We evaluate our approach on the widely-used CUB200-2011 dataset and a new and large dataset called Birdsnap. On both datasets, we achieve better results than many state-of-the-art approaches, including a few using oracle (manually annotated) bounding boxes in the test images.
1
0
0
0
0
0
On measures of edge-uncolorability of cubic graphs: A brief survey and some new results
There are many hard conjectures in graph theory, like Tutte's 5-flow conjecture, and the 5-cycle double cover conjecture, which would be true in general if they would be true for cubic graphs. Since most of them are trivially true for 3-edge-colorable cubic graphs, cubic graphs which are not 3-edge-colorable, often called {\em snarks}, play a key role in this context. Here, we survey parameters measuring how far apart a non 3-edge-colorable graph is from being 3-edge-colorable. We study their interrelation and prove some new results. Besides getting new insight into the structure of snarks, we show that such measures give partial results with respect to these important conjectures. The paper closes with a list of open problems and conjectures.
0
0
1
0
0
0
Gas around galaxy haloes - III: hydrogen absorption signatures around galaxies and QSOs in the Sherwood simulation suite
Modern theories of galaxy formation predict that galaxies impact on their gaseous surroundings, playing the fundamental role of regulating the amount of gas converted into stars. While star-forming galaxies are believed to provide feedback through galactic winds, Quasi-Stellar Objects (QSOs) are believed instead to provide feedback through the heat generated by accretion onto a central supermassive black hole. A quantitative difference in the impact of feedback on the gaseous environments of star-forming galaxies and QSOs has not been established through direct observations. Using the Sherwood cosmological simulations, we demonstrate that measurements of neutral hydrogen in the vicinity of star-forming galaxies and QSOs during the era of peak galaxy formation show excess LyA absorption extending up to comoving radii of about 150 kpc for star-forming galaxies and 300 - 700 kpc for QSOs. Simulations including supernovae-driven winds with the wind velocity scaling like the escape velocity of the halo account for the absorption around star-forming galaxies but not QSOs.
0
1
0
0
0
0
Distributed Newton Methods for Deep Neural Networks
Deep learning involves a difficult non-convex optimization problem with a large number of weights between any two adjacent layers of a deep structure. To handle large data sets or complicated networks, distributed training is needed, but the calculation of function, gradient, and Hessian is expensive. In particular, the communication and the synchronization cost may become a bottleneck. In this paper, we focus on situations where the model is distributedly stored, and propose a novel distributed Newton method for training deep neural networks. By variable and feature-wise data partitions, and some careful designs, we are able to explicitly use the Jacobian matrix for matrix-vector products in the Newton method. Some techniques are incorporated to reduce the running time as well as the memory consumption. First, to reduce the communication cost, we propose a diagonalization method such that an approximate Newton direction can be obtained without communication between machines. Second, we consider subsampled Gauss-Newton matrices for reducing the running time as well as the communication cost. Third, to reduce the synchronization cost, we terminate the process of finding an approximate Newton direction even though some nodes have not finished their tasks. Details of some implementation issues in distributed environments are thoroughly investigated. Experiments demonstrate that the proposed method is effective for the distributed training of deep neural networks. Compared with stochastic gradient methods, it is more robust and may give better test accuracy.
0
0
0
1
0
0
Personalized advice for enhancing well-being using automated impulse response analysis --- AIRA
The attention for personalized mental health care is thriving. Research data specific to the individual, such as time series sensor data or data from intensive longitudinal studies, is relevant from a research perspective, as analyses on these data can reveal the heterogeneity among the participants and provide more precise and individualized results than with group-based methods. However, using this data for self-management and to help the individual to improve his or her mental health has proven to be challenging. The present work describes a novel approach to automatically generate personalized advice for the improvement of the well-being of individuals by using time series data from intensive longitudinal studies: Automated Impulse Response Analysis (AIRA). AIRA analyzes vector autoregression models of well-being by generating impulse response functions. These impulse response functions are used in simulations to determine which variables in the model have the largest influence on the other variables and thus on the well-being of the participant. The effects found can be used to support self-management. We demonstrate the practical usefulness of AIRA by performing analysis on longitudinal self-reported data about psychological variables. To evaluate its effectiveness and efficacy, we ran its algorithms on two data sets ($N=4$ and $N=5$), and discuss the results. Furthermore, we compare AIRA's output to the results of a previously published study and show that the results are comparable. By automating Impulse Response Function Analysis, AIRA fulfills the need for accurate individualized models of health outcomes at a low resource cost with the potential for upscaling.
1
0
0
0
0
0
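The AIRA abstract above describes fitting vector autoregression models of well-being and generating impulse response functions from them. The following is a minimal sketch of that core computation on toy data (hypothetical variable names; not the AIRA implementation):

```python
# Illustrative sketch (not the AIRA code): fit a VAR(1) model to a multivariate
# time series by least squares and compute impulse response functions by
# propagating a unit shock through the estimated coefficient matrix.
import numpy as np

def fit_var1(Y):
    """Least-squares estimate of A in y_t = A @ y_{t-1} + e_t (no intercept)."""
    X, Ynext = Y[:-1], Y[1:]
    B, *_ = np.linalg.lstsq(X, Ynext, rcond=None)
    return B.T                          # shape (k, k): row i maps lagged vars to series i

def impulse_responses(A, shock_var, horizon=10):
    """Response of all variables to a unit shock in `shock_var` over `horizon` steps."""
    k = A.shape[0]
    shock = np.zeros(k)
    shock[shock_var] = 1.0
    responses = [shock]
    for _ in range(horizon):
        responses.append(A @ responses[-1])
    return np.array(responses)          # shape (horizon + 1, k)

# Toy data: two coupled well-being variables (e.g. activity influencing mood).
rng = np.random.default_rng(2)
A_true = np.array([[0.6, 0.0], [0.3, 0.5]])
Y = np.zeros((200, 2))
for t in range(1, 200):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.1, size=2)

A_hat = fit_var1(Y)
irf = impulse_responses(A_hat, shock_var=0)
print("cumulative effect of variable 0 on variable 1:", irf[:, 1].sum().round(3))
```

Ranking variables by such cumulative responses is one simple way to decide which variable most influences the others, which is the kind of conclusion the abstract says AIRA turns into personalized advice.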
Being Robust (in High Dimensions) Can Be Practical
Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
1
0
0
1
0
0
Properties of In-Plane Graphene/MoS2 Heterojunctions
The graphene/MoS2 heterojunction formed by joining the two components laterally in a single plane promises to exhibit a low-resistance contact according to the Schottky-Mott rule. Here we provide an atomic-scale description of the structural, electronic, and magnetic properties of this type of junction. We first identify the energetically favorable structures in which the preference of forming C-S or C-Mo bonds at the boundary depends on the chemical conditions. We find that significant charge transfer between graphene and MoS2 is localized at the boundary. We show that the abundant 1D boundary states substantially pin the Fermi level in the lateral contact between graphene and MoS2, in close analogy to the effect of 2D interfacial states in the contacts between 3D materials. Furthermore, we propose specific ways in which these effects can be exploited to achieve spin-polarized currents.
0
1
0
0
0
0
Semi-extraspecial groups with an abelian subgroup of maximal possible order
Let $p$ be a prime. A $p$-group $G$ is defined to be semi-extraspecial if for every maximal subgroup $N$ in $Z(G)$ the quotient $G/N$ is an extraspecial group. In addition, we say that $G$ is ultraspecial if $G$ is semi-extraspecial and $|G:G'| = |G'|^2$. In this paper, we prove that every $p$-group of nilpotence class $2$ is isomorphic to a subgroup of some ultraspecial group. Given a prime $p$ and a positive integer $n$, we provide a framework to construct all the ultraspecial groups of order $p^{3n}$ that contain an abelian subgroup of order $p^{2n}$. In the literature, it has been proved that every ultraspecial group $G$ of order $p^{3n}$ with at least two abelian subgroups of order $p^{2n}$ can be associated to a semifield. We provide a generalization of semifields, and then we show that every semi-extraspecial group $G$ that is the product of two abelian subgroups can be associated with this generalization of a semifield.
0
0
1
0
0
0
Genetic and Memetic Algorithm with Diversity Equilibrium based on Greedy Diversification
The lack of diversity in a genetic algorithm's population may lead to poor performance of the genetic operators since there is no equilibrium between exploration and exploitation. In those cases, genetic algorithms present fast and unsuitable convergence. In this paper we develop a novel hybrid genetic algorithm which attempts to obtain a balance between exploration and exploitation. It confronts the diversity problem using the so-called greedy diversification operator. Furthermore, the proposed algorithm applies a competition between parents and children so as to exploit the high-quality visited solutions. These operators are complemented by a simple selection mechanism designed to preserve and take advantage of the population diversity. Additionally, we extend our proposal to the field of memetic algorithms, obtaining an improved model with outstanding results in practice. The experimental study shows the validity of the approach as well as how important it is to take the concepts of exploration and exploitation into account when designing an evolutionary algorithm.
1
0
0
0
0
0
Determinants of Mobile Money Adoption in Pakistan
In this work, we analyze the problem of adoption of mobile money in Pakistan by using the call detail records of a major telecom company as our input. Our results highlight the fact that different sections of the society have different patterns of adoption of digital financial services, but user-mobility-related features are the most important ones when it comes to adopting and using mobile money services.
0
0
0
1
0
0
Cherlin's conjecture for almost simple groups of Lie rank 1
We prove Cherlin's conjecture, concerning binary primitive permutation groups, for those groups with socle isomorphic to $\mathrm{PSL}_2(q)$, ${^2\mathrm{B}_2}(q)$, ${^2\mathrm{G}_2}(q)$ or $\mathrm{PSU}_3(q)$. Our method uses the notion of a "strongly non-binary action".
0
0
1
0
0
0
Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width $w_{\text{min}}(d)$ so that ReLU nets of width $w_{\text{min}}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3.$
1
0
1
1
0
0
Network Essence: PageRank Completion and Centrality-Conforming Markov Chains
Jiří Matoušek (1963-2015) had many breakthrough contributions in mathematics and algorithm design. His milestone results are not only profound but also elegant. By going beyond the original objects --- such as Euclidean spaces or linear programs --- Jirka found the essence of the challenging mathematical/algorithmic problems as well as beautiful solutions that were natural to him, but were surprising discoveries to the field. In this short exploration article, I will first share with readers my initial encounter with Jirka and discuss one of his fundamental geometric results from the early 1990s. In the age of social and information networks, I will then turn the discussion from geometric structures to network structures, attempting to take a humble step towards the holy grail of network science, that is to understand the network essence that underlies the observed sparse-and-multifaceted network data. I will discuss a simple result which summarizes some basic algebraic properties of personalized PageRank matrices. Unlike the traditional transitive closure of binary relations, the personalized PageRank matrices take "accumulated Markovian closure" of network data. Some of these algebraic properties are known in various contexts. But I hope featuring them together in a broader context will help to illustrate the desirable properties of this Markovian completion of networks, and motivate systematic developments of a network theory for understanding vast and ubiquitous multifaceted network data.
1
0
0
1
0
0
A Regularized Framework for Sparse and Structured Neural Attention
Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.
1
0
0
1
0
0
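The structured-attention abstract above builds on a smoothed max operator whose gradient maps scores to probabilities, with softmax and sparsemax as special cases. The sketch below implements only those two special cases (sparsemax via Euclidean projection onto the simplex); the structured penalties of the paper are not reproduced here:

```python
# Illustrative sketch: softmax and sparsemax attention weights for a score vector.
# Sparsemax is the Euclidean projection onto the probability simplex and can
# assign exactly zero weight to some inputs.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    """Projection of z onto the simplex (Martins & Astudillo, 2016 style)."""
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = z_sorted + (1.0 - cumsum) / k > 0
    k_star = k[support][-1]
    tau = (cumsum[k_star - 1] - 1.0) / k_star
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.5, 0.1, -1.0])
print("softmax  :", softmax(scores).round(3))    # dense weights
print("sparsemax:", sparsemax(scores).round(3))  # some weights exactly zero
```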
Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning
Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity that act as stepping stones for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representation and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.
1
0
0
0
0
0
Sentiment Perception of Readers and Writers in Emoji use
Previous research has traditionally analyzed emoji sentiment from the point of view of the reader of the content, not the author. Here, we analyze emoji sentiment from the point of view of the author and present an emoji sentiment benchmark built from an employee happiness dataset in which emoji happen to be annotated with the daily happiness of the author of the comment. The data spans 3 years and covers 4k employees of 56 companies based in Barcelona. We compare the sentiment of writers to that of readers. Results indicate that there is an 82% agreement in how emoji sentiment is perceived by readers and writers. Finally, we report that when authors use emoji they report higher levels of happiness. Emoji use was not found to be correlated with differences in author moodiness.
1
0
0
0
0
0
On C-class equations
The concept of a C-class of differential equations goes back to E. Cartan with the upshot that generic equations in a C-class can be solved without integration. While Cartan's definition was in terms of differential invariants being first integrals, all results exhibiting C-classes that we are aware of are based on the fact that a canonical Cartan geometry associated to the equations in the class descends to the space of solutions. For sufficiently low orders, these geometries belong to the class of parabolic geometries and the results follow from the general characterization of geometries descending to a twistor space. In this article we answer the question of whether a canonical Cartan geometry descends to the space of solutions in the case of scalar ODEs of order at least four and of systems of ODEs of order at least three. As in the lower order cases, this is characterized by the vanishing of the generalized Wilczynski invariants, which are defined via the linearization at a solution. The canonical Cartan geometries (which are not parabolic geometries) are a slight variation of those available in the literature based on a recent general construction. All the verifications needed to apply this construction for the classes of ODEs we study are carried out in the article, which thus also provides a complete alternative proof for the existence of canonical Cartan connections associated to higher order (systems of) ODEs.
0
0
1
0
0
0
Confidence Intervals for Quantiles from Histograms and Other Grouped Data
Interval estimation of quantiles has been treated by many in the literature. However, to the best of our knowledge there has been no consideration for interval estimation when the data are available in grouped format. Motivated by this, we introduce several methods to obtain confidence intervals for quantiles when only grouped data is available. Our preferred method for interval estimation is to approximate the underlying density using the Generalized Lambda Distribution (GLD) to both estimate the quantiles and variance of the quantile estimators. We compare the GLD method with some other methods that we also introduce which are based on a frequency approximation approach and a linear interpolation approximation of the density. Our methods are strongly supported by simulations showing that excellent coverage can be achieved for a wide number of distributions. These distributions include highly-skewed distributions such as the log-normal, Dagum and Singh-Maddala distributions. We also apply our methods to real data and show that inference can be carried out on published outcomes that have been summarized only by a histogram. Our methods are therefore useful for a broad range of applications. We have also created a web application that can be used to conveniently calculate the estimators.
0
0
0
1
0
0
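The grouped-data abstract above compares its preferred GLD-based method against simpler baselines such as linear interpolation of the grouped CDF. The sketch below implements only that simple interpolation baseline, with an approximate interval obtained by putting a normal interval on the cumulative proportion and mapping its endpoints through the interpolated CDF; this interval construction and all names are illustrative assumptions, not the paper's method:

```python
# Illustrative sketch only: quantile point estimate from a histogram via linear
# interpolation within the containing bin, plus an approximate confidence
# interval obtained by mapping a normal interval for the cumulative proportion p
# back through the grouped CDF. Not the GLD-based method preferred in the paper.
import numpy as np
from scipy.stats import norm

def grouped_quantile(edges, counts, p):
    """Linear-interpolation quantile for grouped data."""
    counts = np.asarray(counts, dtype=float)
    cum = np.cumsum(counts) / counts.sum()      # CDF at the upper edge of each bin
    cdf = np.concatenate([[0.0], cum])          # CDF at all bin edges
    return np.interp(p, cdf, edges)

def grouped_quantile_ci(edges, counts, p, level=0.95):
    """Approximate CI: normal interval for p, mapped through the grouped CDF."""
    n = float(np.sum(counts))
    z = norm.ppf(0.5 + level / 2.0)
    half = z * np.sqrt(p * (1.0 - p) / n)
    lo, hi = max(p - half, 0.0), min(p + half, 1.0)
    return grouped_quantile(edges, counts, lo), grouped_quantile(edges, counts, hi)

# Histogram of a toy right-skewed sample.
rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=0.6, size=5000)
counts, edges = np.histogram(sample, bins=30)

est = grouped_quantile(edges, counts, 0.5)
ci = grouped_quantile_ci(edges, counts, 0.5)
print(f"median estimate {est:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print("true sample median:", np.median(sample).round(3))
```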
Reach and speed of judgment propagation in the laboratory
In recent years, a large body of research has demonstrated that judgments and behaviors can propagate from person to person. Phenomena as diverse as political mobilization, health practices, altruism, and emotional states exhibit similar dynamics of social contagion. The precise mechanisms of judgment propagation are not well understood, however, because it is difficult to control for confounding factors such as homophily or dynamic network structures. We introduce a novel experimental design that renders possible the stringent study of judgment propagation. In this design, experimental chains of individuals can revise their initial judgment in a visual perception task after observing a predecessor's judgment. The positioning of a very good performer at the top of a chain created a performance gap, which triggered waves of judgment propagation down the chain. We evaluated the dynamics of judgment propagation experimentally. Despite strong social influence within pairs of individuals, the reach of judgment propagation across a chain rarely exceeded a social distance of three to four degrees of separation. Furthermore, computer simulations showed that the speed of judgment propagation decayed exponentially with the social distance from the source. We show that information distortion and the overweighting of other people's errors are two individual-level mechanisms hindering judgment propagation at the scale of the chain. Our results contribute to the understanding of social contagion processes, and our experimental method offers numerous new opportunities to study judgment propagation in the laboratory.
1
1
0
0
0
0
Texture Characterization by Using Shape Co-occurrence Patterns
Texture characterization is a key problem in image understanding and pattern recognition. In this paper, we present a flexible shape-based texture representation using shape co-occurrence patterns. More precisely, texture images are first represented by a tree of shapes, each of which is associated with several geometrical and radiometric attributes. Then four typical kinds of shape co-occurrence patterns based on the hierarchical relationship of the shapes in the tree are learned as codewords. Three different coding methods are investigated to learn the codewords, with which any given texture image can be encoded into a descriptive vector. In contrast with existing works, the proposed method not only inherits the strong ability to depict geometrical aspects of textures and the high robustness to variations of imaging conditions from the shape-based method, but also provides a flexible way to consider shape relationships and to compute high-order statistics on the tree. To our knowledge, this is the first time that co-occurrence patterns of explicit shapes have been used as a tool for texture analysis. Experiments on various texture datasets and scene datasets demonstrate the efficiency of the proposed method.
1
0
0
0
0
0
NGC 3105: A Young Cluster in the Outer Galaxy
Images and spectra of the open cluster NGC 3105 have been obtained with GMOS on Gemini South. The (i', g'-i') color-magnitude diagram (CMD) constructed from these data extends from the brightest cluster members to g'~23. This is 4 - 5 mag fainter than previous CMDs at visible wavelengths and samples cluster members with sub-solar masses. Assuming a half-solar metallicity, comparisons with isochrones yield a distance of 6.6+/-0.3 kpc. An age of at least 32 Myr is found based on the photometric properties of the brightest stars, coupled with the apparent absence of pre-main sequence stars in the lower regions of the CMD. The luminosity function of stars between 50 and 70 arcsec from the cluster center is consistent with a Chabrier lognormal mass function. However, at radii smaller than 50 arcsec there is a higher specific frequency of the most massive main sequence stars than at larger radii. Photometry obtained from archival SPITZER images reveals that some of the brightest stars near NGC 3105 have excess infrared emission, presumably from warm dust envelopes. Halpha emission is detected in a few early-type stars in and around the cluster, building upon previous spectroscopic observations that found Be stars near NGC 3105. The equivalent width of the NaD lines in the spectra of early type stars is consistent with the reddening found from comparisons with isochrones. Stars with i'~18.5 that fall near the cluster main sequence have a spectral-type A5V, and a distance modulus that is consistent with that obtained by comparing isochrones with the CMD is found assuming solar neighborhood intrinsic brightnesses for these stars.
0
1
0
0
0
0
Exact solution of a two-species quantum dimer model for pseudogap metals
We present an exact ground state solution of a quantum dimer model introduced in Ref.[1], which features ordinary bosonic spin-singlet dimers as well as fermionic dimers that can be viewed as bound states of spinons and holons in a hole-doped resonating valence bond liquid. Interestingly, this model captures several essential properties of the metallic pseudogap phase in high-$T_c$ cuprate superconductors. We identify a line in parameter space where the exact ground state wave functions can be constructed at an arbitrary density of fermionic dimers. At this exactly solvable line the ground state has a huge degeneracy, which can be interpreted as a flat band of fermionic excitations. Perturbing around the exactly solvable line, this degeneracy is lifted and the ground state is a fractionalized Fermi liquid with a small pocket Fermi surface in the low doping limit.
0
1
0
0
0
0
Strong instability of standing waves for nonlinear Schrödinger equations with a partial confinement
We study the instability of standing wave solutions for nonlinear Schrödinger equations with a one-dimensional harmonic potential in dimension $N\ge 2$. We prove that if the nonlinearity is $L^2$-critical or supercritical in dimension $N-1$, then any ground states are strongly unstable by blowup.
0
0
1
0
0
0
Theory and Applications of Matrix-Weighted Consensus
This paper proposes the matrix-weighted consensus algorithm, which is a generalization of the consensus algorithm in the literature. Given a networked dynamical system where the interconnections between agents are weighted by nonnegative definite matrices instead of nonnegative scalars, consensus and clustering phenomena naturally exist. We examine algebraic and algebraic graph conditions for achieving a consensus, and provide an algorithm for finding all clusters of a given system. Finally, we illustrate two applications of the proposed consensus algorithm in clustered consensus and in bearing-based formation control.
0
0
1
0
0
0
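The matrix-weighted consensus abstract above generalizes scalar edge weights to nonnegative definite matrices, under which both consensus and clustering can arise. Below is a toy Euler simulation of the dynamics $\dot{x}_i = -\sum_{j} A_{ij}(x_i - x_j)$; the graph and weights are assumptions chosen to show the effect of a rank-deficient (semidefinite) weight, not an example from the paper:

```python
# Illustrative sketch (toy example): Euler simulation of matrix-weighted
# consensus xdot_i = -sum_j A_ij (x_i - x_j) on a path graph of 3 agents with
# 2-D states. A positive-definite weight pulls full agreement on an edge; a
# rank-one (semidefinite) weight constrains only one direction, so disagreement
# can persist in the unconstrained direction, which is where clusters come from.
import numpy as np

edges = {(0, 1): np.eye(2),                          # positive-definite weight
         (1, 2): np.array([[1.0, 0.0], [0.0, 0.0]])} # rank-one, semidefinite

def step(x, dt=0.01):
    dx = np.zeros_like(x)
    for (i, j), A in edges.items():
        dx[i] += -A @ (x[i] - x[j])
        dx[j] += -A @ (x[j] - x[i])
    return x + dt * dx

x = np.array([[1.0, 1.0], [0.0, 0.0], [-1.0, -1.0]])
for _ in range(20000):
    x = step(x)
print(np.round(x, 3))   # all agents agree in the first coordinate; agent 2's
                        # second coordinate stays apart from the other two
```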
Multi-armed Bandit Problems with Strategic Arms
We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward $v_a$ and can choose an amount $x_a$ to pass on to the principal (keeping $v_a-x_a$ for itself). All non-pulled arms get reward $0$. Each strategic arm tries to maximize its own utility over the course of $T$ rounds. Our goal is to design an algorithm for the principal incentivizing these arms to pass on as much of their private rewards as possible. When private rewards are stochastically drawn each round ($v_a^t \leftarrow D_a$), we show that: - Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: For all algorithms that guarantee low regret in an adversarial setting, there exist distributions $D_1,\ldots,D_k$ and an approximate Nash equilibrium for the arms where the principal receives reward $o(T)$. - Still, there exists an algorithm for the principal that induces a game among the arms where each arm has a dominant strategy. When each arm plays its dominant strategy, the principal sees expected reward $\mu'T - o(T)$, where $\mu'$ is the second-largest of the means $\mathbb{E}[D_{a}]$. This algorithm maintains its guarantee if the arms are non-strategic ($x_a = v_a$), and also if there is a mix of strategic and non-strategic arms.
1
0
0
1
0
0
Emerging Topics in Assistive Reading Technology: From Presentation to Content Accessibility
With the recent focus in the accessibility field, researchers from academia and industry have been very active in developing innovative techniques and tools for assistive technology. With handheld devices getting ever more powerful and able to recognize the user's voice, with screen magnification for individuals with low vision, and with eye-tracking devices being used in studies with individuals with physical and intellectual disabilities, the field is quickly adapting and producing findings as well as products that help. In this paper, we focus on new technology and tools to help make reading easier--including reformatting document presentation (for people with physical vision impairments) and text simplification to make information itself easier to interpret (for people with intellectual disabilities). A real-world case study is reported based on our experience in making documents more accessible.
1
0
0
0
0
0
Supervised learning with quantum enhanced feature spaces
Machine learning and quantum computing are two technologies each with the potential for altering how computation is performed to address previously untenable problems. Kernel methods for machine learning are ubiquitous for pattern recognition, with support vector machines (SVMs) being the most well-known method for classification problems. However, there are limitations to the successful solution to such problems when the feature space becomes large, and the kernel functions become computationally expensive to estimate. A core element to computational speed-ups afforded by quantum algorithms is the exploitation of an exponentially large quantum state space through controllable entanglement and interference. Here, we propose and experimentally implement two novel methods on a superconducting processor. Both methods represent the feature space of a classification problem by a quantum state, taking advantage of the large dimensionality of quantum Hilbert space to obtain an enhanced solution. One method, the quantum variational classifier builds on [1,2] and operates through using a variational quantum circuit to classify a training set in direct analogy to conventional SVMs. In the second, a quantum kernel estimator, we estimate the kernel function and optimize the classifier directly. The two methods present a new class of tools for exploring the applications of noisy intermediate scale quantum computers [3] to machine learning.
0
0
0
1
0
0
On The Limitation of Some Fully Observable Multiple Session Resilient Shoulder Surfing Defense Mechanisms
Using a password based authentication technique, a system maintains the login credentials (username, password) of its users in a password file. Once the password file is compromised, an adversary obtains both login credentials. With the advancement of technology, even if a password is maintained in hashed format, the adversary can still invert the hashed password to get the original one. To mitigate this threat, most systems nowadays store some system generated fake passwords (also known as honeywords) along with the original password of a user. This type of setup confuses an adversary while selecting the original password. If the adversary chooses any of these honeywords and submits that as a login credential, then the system detects the attack. A large amount of significant work has been done on designing methodologies (identified as $\text{M}^{\text{DS}}_{\text{OA}}$) that can protect a password against observation or shoulder surfing attack. Under this attack scenario, an adversary observes (or records) the login information entered by a user and later uses those credentials to impersonate the genuine user. In this paper, we show that because of their design principle, a large subset of $\text{M}^{\text{DS}}_{\text{OA}}$ (identified as $\text{M}^{\text{FODS}}_{\text{SOA}}$) cannot afford to store honeywords in the password file. Thus these methods, belonging to $\text{M}^{\text{FODS}}_{\text{SOA}}$, are unable to provide any kind of security once the password file gets compromised. Through our contribution in this paper, by still using the concept of honeywords, we propose a few generic principles to mask the original password of $\text{M}^{\text{FODS}}_{\text{SOA}}$ category methods. We also consider a few well-established methods like S3PAS, CHC, PAS and COP belonging to $\text{M}^{\text{FODS}}_{\text{SOA}}$, to show that the proposed idea is implementable in practice.
1
0
0
0
0
0
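The abstract above relies on the honeyword mechanism: decoy passwords stored alongside the real one so that submission of a decoy reveals a compromised password file. Below is a minimal sketch of that basic mechanism only (not the paper's masking scheme for fully observable shoulder-surfing-resistant methods; all names are illustrative):

```python
# Illustrative sketch of the honeyword mechanism the abstract builds on: the
# password file stores the real password hash among decoy hashes, a separate
# "honeychecker" stores only the index of the real one, and submitting any
# decoy triggers an alarm. Not the paper's proposed masking of M^FODS_SOA methods.
import hashlib, secrets

def h(pw: str) -> str:
    # Plain SHA-256 for brevity; a real system would use a slow, salted hash.
    return hashlib.sha256(pw.encode()).hexdigest()

def enroll(real_password, honeywords):
    sweetwords = honeywords + [real_password]
    secrets.SystemRandom().shuffle(sweetwords)
    password_file = [h(w) for w in sweetwords]             # what the server stores
    honeychecker_index = sweetwords.index(real_password)   # kept on a separate host
    return password_file, honeychecker_index

def login(attempt, password_file, honeychecker_index):
    digest = h(attempt)
    if digest not in password_file:
        return "reject"
    if password_file.index(digest) == honeychecker_index:
        return "accept"
    return "ALARM: honeyword submitted, password file likely compromised"

pw_file, idx = enroll("correct-horse", ["blue-tiger", "red-panda", "green-owl"])
print(login("correct-horse", pw_file, idx))
print(login("red-panda", pw_file, idx))
```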