title: string (lengths 7-239)
abstract: string (lengths 7-2.76k)
cs: int64 (0 or 1)
phy: int64 (0 or 1)
math: int64 (0 or 1)
stat: int64 (0 or 1)
quantitative biology: int64 (0 or 1)
quantitative finance: int64 (0 or 1)
The path to high-energy electron-positron colliders: from Wideroe's betatron to Touschek's AdA and to LEP
We describe the road which led to the construction and exploitation of electron-positron colliders, highlighting how the young physics student Bruno Touschek met the Norwegian engineer Rolf Wideroe in Germany during WWII and collaborated in building the 15 MeV betatron, a secret project directed by Wideroe and financed by the Ministry of Aviation of the Reich. This is how Bruno Touschek learnt the science of making particle accelerators and was ready, many years later, to propose and build AdA, the first electron-positron collider, in Frascati, Italy, in 1960. We shall then see how AdA was brought from Frascati to Orsay, in France. Taking advantage of the Orsay Linear Accelerator as injector, the Franco-Italian team was able to prove that collisions had taken place, opening the way to the use of particle colliders as a means to explore high-energy physics.
0
1
0
0
0
0
Lazarsfeld-Mukai Reflexive Sheaves and their Stability
Consider an ample and globally generated line bundle $L$ on a smooth projective variety $X$ of dimension $N\geq 2$ over $\mathbb{C}$. Let $D$ be a smooth divisor in the complete linear system of $L$. We construct reflexive sheaves on $X$ by an elementary transformation of a trivial bundle on $X$ along certain globally generated torsion-free sheaves on $D$. The dual reflexive sheaves are called the Lazarsfeld-Mukai reflexive sheaves. We prove the $\mu_L$-(semi)stability of such reflexive sheaves under certain conditions.
0
0
1
0
0
0
Dimensionality reduction for acoustic vehicle classification with spectral embedding
We propose a method for recognizing moving vehicles, using data from roadside audio sensors. This problem has applications ranging widely, from traffic analysis to surveillance. We extract a frequency signature from the audio signal using a short-time Fourier transform, and treat each time window as an individual data point to be classified. By applying a spectral embedding, we decrease the dimensionality of the data sufficiently for K-nearest neighbors to provide accurate vehicle identification.
1
0
0
1
0
0
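A minimal sketch of the pipeline outlined in the acoustic vehicle classification abstract above, assuming standard SciPy/scikit-learn components and placeholder audio and labels; the authors' data and exact settings are not reproduced.

```python
# Sketch only: STFT window features -> spectral embedding -> k-NN classification.
# The audio signal and per-window vehicle labels below are random placeholders.
import numpy as np
from scipy.signal import stft
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def window_features(audio, fs, nperseg=1024):
    # Each STFT time window becomes one data point (its magnitude spectrum).
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg)
    return np.abs(Z).T                              # (n_windows, n_frequencies)

rng = np.random.default_rng(0)
fs = 16_000
audio = rng.standard_normal(10 * fs)                # placeholder roadside recording
X = window_features(audio, fs)
y = rng.integers(0, 2, size=len(X))                 # placeholder vehicle-class labels

# SpectralEmbedding is transductive: embed all windows first, then split.
X_low = SpectralEmbedding(n_components=5).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```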
Efficient Estimation of Linear Functionals of Principal Components
We study principal component analysis (PCA) for mean zero i.i.d. Gaussian observations $X_1,\dots, X_n$ in a separable Hilbert space $\mathbb{H}$ with unknown covariance operator $\Sigma.$ The complexity of the problem is characterized by its effective rank ${\bf r}(\Sigma):= \frac{{\rm tr}(\Sigma)}{\|\Sigma\|},$ where ${\rm tr}(\Sigma)$ denotes the trace of $\Sigma$ and $\|\Sigma\|$ denotes its operator norm. We develop a method of bias reduction in the problem of estimation of linear functionals of eigenvectors of $\Sigma.$ Under the assumption that ${\bf r}(\Sigma)=o(n),$ we establish the asymptotic normality and asymptotic properties of the risk of the resulting estimators and prove matching minimax lower bounds, showing their semi-parametric optimality.
0
0
1
1
0
0
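As a quick numerical illustration of the quantities defined in the PCA abstract above, the following sketch computes the empirical effective rank ${\rm tr}(\hat\Sigma)/\|\hat\Sigma\|$ and a naive plug-in estimate of a linear functional of the top eigenvector; the paper's bias-reduced estimator is not reproduced here.

```python
# Sketch only: empirical covariance, effective rank tr(S)/||S||, and a naive
# plug-in estimate <u_hat, e_1> for the top eigenvector (defined up to sign).
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 50
cov = np.diag(np.linspace(2.0, 0.1, p))             # assumed true covariance
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

S = X.T @ X / n                                     # mean-zero observations assumed
eigvals, eigvecs = np.linalg.eigh(S)                # ascending eigenvalues
effective_rank = np.trace(S) / eigvals[-1]          # tr(S) / operator norm
u_hat = eigvecs[:, -1]                              # top empirical eigenvector
plug_in = u_hat @ np.eye(p)[0]                      # naive estimate of <u, e_1>
print(effective_rank, plug_in)
```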
A stroll in the jungle of error bounds
The aim of this paper is to give a short overview of error bounds and to provide the first bricks of a unified theory. Inspired by the works of [8, 15, 13, 16, 10], we indeed show the centrality of the Lojasiewicz gradient inequality. To this end, we review some necessary and sufficient conditions for global/local error bounds, in both the convex and nonconvex cases. We also recall some results on quantitative error bounds, which play a major role in the convergence rate analysis and complexity theory of many optimization methods.
0
0
1
0
0
0
Variational obstacle avoidance problem on Riemannian manifolds
We introduce variational obstacle avoidance problems on Riemannian manifolds and derive necessary conditions for the existence of their normal extremals. The problem consists of minimizing an energy functional, depending on the velocity and covariant acceleration, among a set of admissible curves, and also depending on a navigation function used to avoid an obstacle on the workspace, a Riemannian manifold. We study two different scenarios: a general one on a Riemannian manifold, and a sub-Riemannian problem. By introducing a left-invariant metric on a Lie group, we also study the variational obstacle avoidance problem on a Lie group. We apply the results to the obstacle avoidance problem of a planar rigid body and a unicycle.
1
0
1
0
0
0
An Extension of Heron's Formula
This paper introduces an extension of Heron's formula to approximate the area of cyclic n-gons, where the error never exceeds $\frac{\pi}{e}-1$.
0
0
1
0
0
0
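For reference, the classical Heron formula that the abstract above extends; the cyclic n-gon approximation itself is not specified in the abstract and is not reproduced here.

```python
# Background only: classical Heron formula for a triangle's area from side lengths.
import math

def heron(a, b, c):
    s = (a + b + c) / 2          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron(3, 4, 5))            # 6.0 for the 3-4-5 right triangle
```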
Learning Less-Overlapping Representations
In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
1
0
0
1
0
0
LDPC Code Design for Distributed Storage: Balancing Repair Bandwidth, Reliability and Storage Overhead
Distributed storage systems suffer from significant repair traffic generated due to frequent storage node failures. This paper shows that properly designed low-density parity-check (LDPC) codes can substantially reduce the amount of required block downloads for repair thanks to the sparse nature of their factor graph representation. In particular, with a careful construction of the factor graph, both low repair-bandwidth and high reliability can be achieved for a given code rate. First, a formula for the average repair bandwidth of LDPC codes is developed. This formula is then used to establish that the minimum repair bandwidth can be achieved by forcing a regular check node degree in the factor graph. Moreover, it is shown that given a fixed code rate, the variable node degree should also be regular to yield minimum repair bandwidth, under some reasonable minimum variable node degree constraint. It is also shown that for a given repair-bandwidth requirement, LDPC codes can yield substantially higher reliability than currently utilized Reed-Solomon (RS) codes. Our reliability analysis is based on a formulation of the general equation for the mean-time-to-data-loss (MTTDL) associated with LDPC codes. The formulation reveals that the stopping number is closely related to the MTTDL. It is further shown that LDPC codes can be designed such that a small loss of repair-bandwidth optimality may be traded for a large improvement in erasure-correction capability and thus the MTTDL.
1
0
0
0
0
0
Structural, magnetic, and electronic properties of GdTiO3 Mott insulator thin films grown by pulsed laser deposition
We report on the optimization process to synthesize epitaxial thin films of GdTiO3 on SrLaGaO4 substrates by pulsed laser deposition. Optimized films are free of impurity phases and are fully strained. They possess a magnetic Curie temperature TC = 31.8 K with a saturation magnetization of 4.2 $\mu_B$ per formula unit at 10 K. Transport measurements reveal an insulating response, as expected. Optical spectroscopy indicates a band gap of 0.7 eV, comparable to the bulk value. Our work adds ferrimagnetic orthotitanates to the palette of perovskite materials for the design of emergent strongly correlated states at oxide interfaces using a versatile growth technique such as pulsed laser deposition.
0
1
0
0
0
0
GPU-Based High-Performance Imaging for Mingantu Spectral RadioHeliograph
As a dedicated solar radio interferometer, the MingantU SpEctral RadioHeliograph (MUSER) generates massive observational data in the frequency range of 400 MHz -- 15 GHz. High-performance imaging forms a significantly important aspect of MUSER's massive data processing requirements. In this study, we implement a practical high-performance imaging pipeline for MUSER data processing. First, the specifications of the MUSER are introduced and its imaging requirements are analyzed. Referring to the most commonly used radio astronomy software, such as CASA and MIRIAD, we then implement a high-performance imaging pipeline based on Graphics Processing Unit (GPU) technology with respect to the current operational status of the MUSER. A series of critical algorithms and their pseudo-codes, i.e., detection of the solar disk and sky brightness, automatic centering of the solar disk, and estimation of the number of iterations for clean algorithms, are proposed in detail. The preliminary experimental results indicate that the proposed imaging approach significantly increases the processing performance of MUSER and generates high-quality images, which can meet the requirements of MUSER data processing.
0
1
0
0
0
0
Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning
We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training ten times faster. We scale Deep Voice 3 to data set sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on one single-GPU server.
1
0
0
0
0
0
Ubenwa: Cry-based Diagnosis of Birth Asphyxia
Every year, 3 million newborns die within the first month of life. Birth asphyxia and other breathing-related conditions are a leading cause of mortality during the neonatal phase. Current diagnostic methods are too sophisticated in terms of equipment, required expertise, and general logistics. Consequently, early detection of asphyxia in newborns is very difficult in many parts of the world, especially in resource-poor settings. We are developing a machine learning system, dubbed Ubenwa, which enables diagnosis of asphyxia through automated analysis of the infant cry. Deployed via smartphone and wearable technology, Ubenwa will drastically reduce the time, cost and skill required to make accurate and potentially life-saving diagnoses.
1
0
0
1
0
0
Fluid-Structure Interaction with the Entropic Lattice Boltzmann Method
We propose a novel fluid-structure interaction (FSI) scheme using the entropic multi-relaxation time lattice Boltzmann (KBC) model for the fluid domain, in combination with a nonlinear finite element solver for the structural part. We show the validity of the proposed scheme for various challenging set-ups by comparison to literature data. Beyond validation, we extend the KBC model to multiphase flows and couple it with the finite element solver. The robustness and viability of the entropic multi-relaxation time model for complex FSI applications are shown by simulations of droplet impact on elastic superhydrophobic surfaces.
0
1
0
0
0
0
Two provably consistent divide and conquer clustering algorithms for large networks
In this article, we advance divide-and-conquer strategies for solving the community detection problem in networks. We propose two algorithms which perform clustering on a number of small subgraphs and finally patch the results into a single clustering. The main advantage of these algorithms is that they significantly bring down the computational cost of traditional algorithms, including spectral clustering, semi-definite programs, modularity-based methods, likelihood-based methods, etc., without losing accuracy, and even improving accuracy at times. These algorithms are also, by nature, parallelizable. Thus, exploiting the facts that most traditional algorithms are accurate and that the corresponding optimization problems are much simpler on small problems, our divide-and-conquer methods provide an omnibus recipe for scaling traditional algorithms up to large networks. We prove consistency of these algorithms under various subgraph selection procedures and perform extensive simulations and real-data analysis to understand the advantages of the divide-and-conquer approach in various settings.
0
0
1
1
0
0
Feature-based visual odometry prior for real-time semi-dense stereo SLAM
Robust and fast motion estimation and mapping is a key prerequisite for autonomous operation of mobile robots. The goal of performing this task solely on a stereo pair of video cameras is highly demanding and bears conflicting objectives: on one hand, the motion has to be tracked fast and reliably, on the other hand, high-level functions like navigation and obstacle avoidance depend crucially on a complete and accurate environment representation. In this work, we propose a two-layer approach for visual odometry and SLAM with stereo cameras that runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust keypoint-based method. Experiments on public benchmark and proprietary datasets show that our approach is faster than state-of-the-art methods without losing accuracy and yields comparable map building capabilities. Moreover, our approach is shown to handle large inter-frame motion and illumination changes much more robustly than its direct counterparts.
1
0
0
0
0
0
Effective Description of Higher-Order Scalar-Tensor Theories
Most existing theories of dark energy and/or modified gravity, involving a scalar degree of freedom, can be conveniently described within the framework of the Effective Theory of Dark Energy, based on the unitary gauge where the scalar field is uniform. We extend this effective approach by allowing the Lagrangian in unitary gauge to depend on the time derivative of the lapse function. Although this dependence generically signals the presence of an extra scalar degree of freedom, theories that contain only one propagating scalar degree of freedom, in addition to the usual tensor modes, can be constructed by requiring the initial Lagrangian to be degenerate. Starting from a general quadratic action, we derive the dispersion relations for the linear perturbations around Minkowski and a cosmological background. Our analysis directly applies to the recently introduced Degenerate Higher-Order Scalar-Tensor (DHOST) theories. For these theories, we find that one cannot recover a Poisson-like equation in the static linear regime except for the subclass that includes the Horndeski and so-called "beyond Horndeski" theories. We also discuss Lorentz-breaking models inspired by Horava gravity.
0
1
0
0
0
0
Compressive Sensing Approaches for Autonomous Object Detection in Video Sequences
Video analytics requires operating with large amounts of data. Compressive sensing makes it possible to reduce the number of measurements required to represent the video by using prior knowledge of the sparsity of the original signal, but it imposes certain conditions on the design matrix. The Bayesian compressive sensing approach relaxes the limitations of the conventional approach through probabilistic reasoning and allows different prior knowledge about the signal structure to be included. This paper presents two Bayesian compressive sensing methods for autonomous object detection in a video sequence from a static camera. Their performance is compared on real datasets with a non-Bayesian greedy algorithm. It is shown that the Bayesian methods can provide the same accuracy as the greedy algorithm but much faster; or, if computational time is not critical, they can provide more accurate results.
1
0
0
1
0
0
Making Neural QA as Simple as Possible but not Simpler
Recent development of large-scale question answering (QA) datasets triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baseline systems that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that there are two ingredients necessary for building a high-performing neural QA system: first, the awareness of question words while processing the context and second, a composition function that goes beyond simple bag-of-words modeling, such as recurrent neural networks. Our results show that FastQA, a system that meets these two requirements, can achieve very competitive performance compared with existing models. We argue that this surprising finding puts results of previous systems and the complexity of recent QA datasets into perspective.
1
0
0
0
0
0
Finite-dimensional Gaussian approximation with linear inequality constraints
Introducing inequality constraints in Gaussian process (GP) models can lead to more realistic uncertainties in learning a great variety of real-world problems. We consider the finite-dimensional Gaussian approach from Maatouk and Bay (2017) which can satisfy inequality conditions everywhere (either boundedness, monotonicity or convexity). Our contributions are threefold. First, we extend their approach in order to deal with general sets of linear inequalities. Second, we explore several Markov Chain Monte Carlo (MCMC) techniques to approximate the posterior distribution. Third, we investigate theoretical and numerical properties of the constrained likelihood for covariance parameter estimation. According to experiments on both artificial and real data, our full framework together with a Hamiltonian Monte Carlo-based sampler provides efficient results on both data fitting and uncertainty quantification.
1
0
0
1
0
0
A Generalization of Quasi-twisted Codes: Multi-twisted codes
Cyclic codes and their various generalizations, such as quasi-twisted (QT) codes, have a special place in algebraic coding theory. Among other things, many of the best-known or optimal codes have been obtained from these classes. In this work we introduce a new generalization of QT codes that we call multi-twisted (MT) codes and study some of their basic properties. Presenting several methods of constructing codes in this class and obtaining bounds on the minimum distances, we show that there exist codes with good parameters in this class that cannot be obtained as QT or constacyclic codes. This suggests that considering this larger class in computer searches is promising for constructing codes with better parameters than currently best-known linear codes. Working with this new class of codes motivated us to consider a problem about binomials over finite fields and to discover a result that is interesting in its own right.
1
0
1
0
0
0
On the Successive Cancellation Decoding of Polar Codes with Arbitrary Linear Binary Kernels
A method for efficient successive cancellation (SC) decoding of polar codes with high-dimensional linear binary kernels (HDLBK) is presented and analyzed. We devise an $l$-expressions method which can obtain simplified recursive formulas of the SC decoder in likelihood ratio form for arbitrary linear binary kernels, to reduce the complexity of the corresponding SC decoder. By considering the bit-channel transition probabilities $W_{G}^{(\cdot)}(\cdot|0)$ and $W_{G}^{(\cdot)}(\cdot|1)$ separately, a $W$-expressions method is proposed to further reduce the complexity of the HDLBK-based SC decoder. For an $m\times m$ binary kernel, the complexity of the straightforward SC decoder is $O(2^{m}N\log N)$. With $W$-expressions, we reduce the complexity of the straightforward SC decoder to $O(m^{2}N\log N)$ when $m\leq 16$. Simulation results show that $16\times16$ kernel polar codes offer significant advantages in terms of error performance compared with $2\times2$ kernel polar codes under SC and list SC decoders.
1
0
1
0
0
0
OpenML: An R Package to Connect to the Machine Learning Platform OpenML
OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks and experiments as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its usage in combination with the machine learning R package mlr. We show how the OpenML package allows R users to easily search, download and upload data sets and machine learning tasks. Furthermore, we also show how to upload results of experiments, share them with others and download results from other users. Beyond ensuring reproducibility of results, the OpenML platform automates much of the drudge work, speeds up research, facilitates collaboration and increases the users' visibility online.
1
0
0
1
0
0
Discovering Latent Patterns of Urban Cultural Interactions in WeChat for Modern City Planning
Cultural activity is an inherent aspect of urban life and the success of a modern city is largely determined by its capacity to offer generous cultural entertainment to its citizens. To this end, the optimal allocation of cultural establishments and related resources across urban regions becomes of vital importance, as it can reduce financial costs in terms of planning and improve quality of life in the city, more generally. In this paper, we make use of a large longitudinal dataset of user location check-ins from the online social network WeChat to develop a data-driven framework for cultural planning in the city of Beijing. We exploit rich spatio-temporal representations on user activity at cultural venues and use a novel extended version of the traditional latent Dirichlet allocation model that incorporates temporal information to identify latent patterns of urban cultural interactions. Using the characteristic typologies of mobile user cultural activities emitted by the model, we determine the levels of demand for different types of cultural resources across urban areas. We then compare those with the corresponding levels of supply as driven by the presence and spatial reach of cultural venues in local areas to obtain high resolution maps that indicate urban regions with lack of cultural resources, and thus give suggestions for further urban cultural planning and investment optimisation.
1
0
0
1
0
0
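A toy sketch in the spirit of the abstract above, using plain latent Dirichlet allocation from scikit-learn on a hypothetical user-by-venue-category check-in count matrix; the paper's temporally extended LDA model and the WeChat data are not reproduced.

```python
# Sketch only: vanilla LDA on a hypothetical user-by-venue-category count matrix.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(6)
counts = rng.poisson(1.0, size=(500, 20))            # placeholder check-in counts
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(counts)
topic_mix = lda.transform(counts)                    # per-user mixture over latent patterns
print(lda.components_.shape, topic_mix.shape)        # (5, 20) (500, 5)
```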
Direct mapping of the temperature and velocity gradients in discs. Imaging the vertical CO snow line around IM Lupi
Accurate measurements of the physical structure of protoplanetary discs are critical inputs for planet formation models. These constraints are traditionally established via complex modelling of continuum and line observations. Instead, we present an empirical framework to locate the CO isotopologue emitting surfaces from high spectral and spatial resolution ALMA observations. We apply this framework to the disc surrounding IM Lupi, where we report the first direct, i.e. model independent, measurements of the radial and vertical gradients of temperature and velocity in a protoplanetary disc. The measured disc structure is consistent with an irradiated self-similar disc structure, where the temperature increases and the velocity decreases towards the disc surface. We also directly map the vertical CO snow line, which is located at about one gas scale height at radii between 150 and 300 au, with a CO freeze-out temperature of $21\pm2$ K. In the outer disc ($> 300$ au), where the gas surface density transitions from a power law to an exponential taper, the velocity rotation field becomes significantly sub-Keplerian, in agreement with the expected steeper pressure gradient. The sub-Keplerian velocities should result in a very efficient inward migration of large dust grains, explaining the lack of millimetre continuum emission outside of 300 au. The sub-Keplerian motions may also be the signature of the base of an externally irradiated photo-evaporative wind. In the same outer region, the measured CO temperature above the snow line decreases to $\approx$ 15 K because of the reduced gas density, which can result in a lower CO freeze-out temperature, photo-desorption, or deviations from local thermodynamic equilibrium.
0
1
0
0
0
0
Learning Rates for Kernel-Based Expectile Regression
Conditional expectiles are becoming an increasingly important tool in finance as well as in other areas of applications. We analyse a support vector machine type approach for estimating conditional expectiles and establish learning rates that are minimax optimal modulo a logarithmic factor if Gaussian RBF kernels are used and the desired expectile is smooth in a Besov sense. As a special case, our learning rates improve the best known rates for kernel-based least squares regression in this scenario. Key ingredients of our statistical analysis are a general calibration inequality for the asymmetric least squares loss, a corresponding variance bound as well as an improved entropy number bound for Gaussian RBF kernels.
0
0
0
1
0
0
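A small worked sketch of the asymmetric least squares (expectile) loss underlying the abstract above, assuming the standard definition of the $\tau$-expectile; the paper's kernel-based SVM-type estimator is not reproduced.

```python
# Sketch only: asymmetric least squares loss and the tau-expectile of a sample,
# computed as the fixed point of an iteratively reweighted mean.
import numpy as np

def asymmetric_sq_loss(residual, tau):
    w = np.where(residual < 0, 1.0 - tau, tau)
    return w * residual ** 2

def expectile(y, tau, n_iter=100):
    e = y.mean()
    for _ in range(n_iter):
        w = np.where(y < e, 1.0 - tau, tau)
        e = np.sum(w * y) / np.sum(w)    # minimizer of the weighted squared loss
    return e

y = np.random.default_rng(2).standard_normal(1000)
print(expectile(y, 0.5), expectile(y, 0.9))          # tau = 0.5 recovers the mean
```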
Critical role of electronic correlations in determining crystal structure of transition metal compounds
The choice that a solid system "makes" when adopting a crystal structure (stable or metastable) is ultimately governed by the interactions between electrons forming chemical bonds. By analyzing 6 prototypical binary transition-metal compounds we demonstrate here that the orbitally-selective strong $d$-electron correlations influence dramatically the behavior of the energy as a function of the spatial arrangements of the atoms. Remarkably, we find that the main qualitative features of this complex behavior can be traced back to simple electrostatics, i.e., to the fact that the strong $d$-electron correlations influence substantially the charge transfer mechanism, which, in turn, controls the electrostatic interactions. This result advances our understanding of the influence of strong correlations on the crystal structure, opens a new avenue for extending structure prediction methodologies to strongly correlated materials, and paves the way for predicting and studying metastability and polymorphism in these systems.
0
1
0
0
0
0
A Machine Learning Alternative to P-values
This paper presents an alternative approach to p-values in regression settings. This approach, whose origins can be traced to machine learning, is based on the leave-one-out bootstrap for prediction error. In machine learning this is called the out-of-bag (OOB) error. To obtain the OOB error for a model, one draws a bootstrap sample and fits the model to the in-sample data. The out-of-sample prediction error for the model is obtained by calculating the prediction error for the model using the out-of-sample data. Repeating and averaging yields the OOB error, which represents a robust cross-validated estimate of the accuracy of the underlying model. By a simple modification to the bootstrap data involving "noising up" a variable, the OOB method yields a variable importance (VIMP) index, which directly measures how much a specific variable contributes to the prediction precision of a model. VIMP provides a scientifically interpretable measure of the effect size of a variable, which we call the "predictive effect size", that holds whether the researcher's model is correct or not, unlike the p-value, whose calculation is based on the assumed correctness of the model. We also discuss a marginal VIMP index, also easily calculated, which measures the marginal effect of a variable, or what we call "the discovery effect". The OOB procedure can be applied to both parametric and nonparametric regression models and requires only that the researcher can repeatedly fit their model to bootstrap and modified bootstrap data. We illustrate this approach on a survival data set involving patients with systolic heart failure and on a simulated survival data set where the model is incorrectly specified, to illustrate its robustness to model misspecification.
1
0
0
1
0
0
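A minimal sketch of the OOB error and a "noised-up" VIMP-style index described in the abstract above, using a plain linear model and synthetic data as stand-ins; details of the paper's procedure may differ.

```python
# Sketch only: leave-one-out bootstrap (OOB) prediction error, and a VIMP-style
# score from "noising up" (permuting) one variable in the bootstrap training data.
import numpy as np
from sklearn.linear_model import LinearRegression

def oob_error(model, X, y, n_boot=100, noise_var=None, seed=0):
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))        # bootstrap sample
        oob = np.setdiff1d(np.arange(len(X)), idx)   # out-of-bag indices
        if oob.size == 0:
            continue
        Xb = X[idx].copy()
        if noise_var is not None:                    # noise up one variable
            Xb[:, noise_var] = rng.permutation(Xb[:, noise_var])
        fit = model.fit(Xb, y[idx])
        errs.append(np.mean((y[oob] - fit.predict(X[oob])) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
y = 2 * X[:, 0] + rng.standard_normal(200)           # only variable 0 matters
base = oob_error(LinearRegression(), X, y)
vimp = [oob_error(LinearRegression(), X, y, noise_var=j) - base for j in range(3)]
print(base, vimp)                                    # VIMP is largest for variable 0
```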
Image Registration Techniques: A Survey
Image registration is the process of aligning two or more images of the same scene with reference to a particular image. The images are captured from various sensors at different times and at multiple view-points. Thus, to get a better picture of any change in a scene or object over a considerable period of time, image registration is important. Image registration finds application in medical sciences, remote sensing and computer vision. This paper presents a detailed review of several approaches, which are classified accordingly along with their contributions and drawbacks. The main steps of an image registration procedure are also discussed. Different performance measures are presented that determine the registration quality and accuracy. The scope for future research is presented as well.
1
0
0
0
0
0
Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes
This work initiates a general study of learning and generalization without the i.i.d. assumption, starting from first principles. While the standard approach to statistical learning theory is based on assumptions chosen largely for their convenience (e.g., i.i.d. or stationary ergodic), in this work we are interested in developing a theory of learning based only on the most fundamental and natural assumptions implicit in the requirements of the learning problem itself. We specifically study universally consistent function learning, where the objective is to obtain low long-run average loss for any target function, when the data follow a given stochastic process. We are then interested in the question of whether there exist learning rules guaranteed to be universally consistent given only the assumption that universally consistent learning is possible for the given data process. The reasoning that motivates this criterion emanates from a kind of optimist's decision theory, and so we refer to such learning rules as being optimistically universal. We study this question in three natural learning settings: inductive, self-adaptive, and online. Remarkably, as our strongest positive result, we find that optimistically universal learning rules do indeed exist in the self-adaptive learning setting. Establishing this fact requires us to develop new approaches to the design of learning algorithms. Along the way, we also identify concise characterizations of the family of processes under which universally consistent learning is possible in the inductive and self-adaptive settings. We additionally pose a number of enticing open problems, particularly for the online learning setting.
1
0
1
1
0
0
Obstructions for three-coloring and list three-coloring $H$-free graphs
A graph is $H$-free if it has no induced subgraph isomorphic to $H$. We characterize all graphs $H$ for which there are only finitely many minimal non-three-colorable $H$-free graphs. Such a characterization was previously known only in the case when $H$ is connected. This solves a problem posed by Golovach et al. As a second result, we characterize all graphs $H$ for which there are only finitely many $H$-free minimal obstructions for list 3-colorability.
1
0
0
0
0
0
Hybrid quantum-classical modeling of quantum dot devices
The design of electrically driven quantum dot devices for quantum optical applications asks for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semi-classical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: It enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way, that the resulting hybrid system obeys the fundamental axioms of (non-)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with the thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regime.
0
1
0
0
0
0
Efficient exploration with Double Uncertain Value Networks
This paper studies directed exploration for reinforcement learning agents by tracking uncertainty about the value of each available action. We identify two sources of uncertainty that are relevant for exploration. The first originates from limited data (parametric uncertainty), while the second originates from the distribution of the returns (return uncertainty). We identify methods to learn these distributions with deep neural networks, where we estimate parametric uncertainty with Bayesian drop-out, while return uncertainty is propagated through the Bellman equation as a Gaussian distribution. Then, we identify that both can be jointly estimated in one network, which we call the Double Uncertain Value Network. The policy is directly derived from the learned distributions based on Thompson sampling. Experimental results show that both types of uncertainty may vastly improve learning in domains with a strong exploration challenge.
1
0
0
1
0
0
Auxiliary Variables for Multi-Dirichlet Priors
Bayesian models that mix multiple Dirichlet prior parameters, called Multi-Dirichlet priors (MD) in this paper, are gaining popularity. Inferring mixing weights and parameters of mixed prior distributions seems tricky, as sums over Dirichlet parameters complicate the joint distribution of model parameters. This paper presents a novel auxiliary variable scheme which helps to simplify the inference for models involving hierarchical MDs and MDPs. Using this scheme, it is easy to derive fully collapsed inference schemes which allow for efficient inference.
0
0
0
1
0
0
Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter
We present a new matched filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar Point Spread Function (PSF) is first subtracted using a Karhunen-Loève Image Processing (KLIP) algorithm with Angular and Spectral Differential Imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the Signal-to-Noise Ratio (SNR) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal SNR loss. We also developed a complete pipeline for the automated detection of point source candidates, the calculation of Receiver Operating Characteristics (ROC), false-positive-based contrast curves, and completeness contours. We process in a uniform manner more than 330 datasets from the Gemini Planet Imager Exoplanet Survey (GPIES) and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale, accounting for both planet completeness and false positive rate. We show that the new forward model matched filter allows the detection of $50\%$ fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false positive rate.
0
1
0
0
0
0
Numerical non-LTE 3D radiative transfer using a multigrid method
3D non-LTE radiative transfer problems are computationally demanding, and this sets limits on the size of the problems that can be solved. So far, Multilevel Accelerated Lambda Iteration (MALI) has been the method of choice to perform high-resolution computations in multidimensional problems. The disadvantage of MALI is that its computing time scales as $\mathcal{O}(n^2)$, with $n$ the number of grid points. When the grid gets finer, the computational cost increases quadratically. We aim to develop a 3D non-LTE radiative transfer code that is more efficient than MALI. We implement a non-linear multigrid, full approximation storage scheme into the existing Multi3D radiative transfer code. We verify our multigrid implementation by comparing with MALI computations. We show that multigrid can be employed in realistic problems with snapshots from 3D radiative-MHD simulations as input atmospheres. With multigrid, we obtain a factor 3.3-4.5 speedup compared to MALI. With full multigrid the speed-up increases to a factor of 6. The speedup is expected to increase for input atmospheres with more grid points and finer grid spacing. Solving 3D non-LTE radiative transfer problems using non-linear multigrid methods can thus be applied to realistic atmospheres with a substantial speed-up.
0
1
0
0
0
0
Test Case Prioritization Techniques for Model-Based Testing: A Replicated Study
Recently, several Test Case Prioritization (TCP) techniques have been proposed to order test cases for achieving a goal during test execution, particularly revealing faults sooner. In the Model-Based Testing (MBT) context, such techniques are usually based on heuristics related to structural elements of the model and derived test cases. In this sense, the techniques' performance may vary due to a number of factors. While empirical studies comparing the performance of TCP techniques have already been presented in the literature, there is still little knowledge, particularly in the MBT context, about which factors may influence the outcomes suggested by a TCP technique. In a previous family of empirical studies focusing on labeled transition systems, we identified that the model layout, i.e. the number of branches, joins, and loops in the model, alone may have little influence on the performance of the TCP techniques investigated, whereas characteristics of the test cases that actually fail definitely influence their performance. However, we considered only synthetic artifacts in the study, which reduced its ability to properly represent reality. In this paper, we present a replication of one of these studies, now with a larger and more representative selection of techniques, and considering test suites from industrial applications as experimental objects. Our objective is to find out whether the results hold while increasing the validity in comparison to the original study. Results reinforce that there is no best performer among the investigated techniques and that characteristics of the test cases that fail represent an important factor, although adaptive random based techniques are less affected by it.
1
0
0
0
0
0
Magnetic order and interactions in ferrimagnetic Mn3Si2Te6
The magnetism in Mn$_3$Si$_2$Te$_6$ has been investigated using thermodynamic measurements, first principles calculations, neutron diffraction and diffuse neutron scattering on single crystals. These data confirm that Mn$_3$Si$_2$Te$_6$ is a ferrimagnet below a Curie temperature of $T_C$ approximately 78 K. The magnetism is anisotropic, with magnetization and neutron diffraction demonstrating that the moments lie within the basal plane of the trigonal structure. The saturation magnetization of approximately 1.6 $\mu_B$/Mn at 5 K originates from the different multiplicities of the two antiferromagnetically-aligned Mn sites. First principles calculations reveal antiferromagnetic exchange for the three nearest Mn-Mn pairs, which leads to a competition between the ferrimagnetic ground state and three other magnetic configurations. The ferrimagnetic state results from the energy associated with the third-nearest neighbor interaction, and thus long-range interactions are essential for the observed behavior. Diffuse magnetic scattering is observed around the 002 Bragg reflection at 120 K, which indicates the presence of strong spin correlations well above $T_C$. These are promoted by the competing ground states that result in a relative suppression of $T_C$, and may be associated with a small ferromagnetic component that produces anisotropic magnetism below $\approx$330 K.
0
1
0
0
0
0
On the Solution of Linear Programming Problems in the Age of Big Data
The Big Data phenomenon has spawned large-scale linear programming problems. In many cases, these problems are non-stationary. In this paper, we describe a new scalable algorithm called NSLP for solving high-dimensional, non-stationary linear programming problems on modern cluster computing systems. The algorithm consists of two phases: Quest and Targeting. The Quest phase calculates a solution of the system of inequalities defining the constraint system of the linear programming problem under the condition of dynamic changes in input data. To this end, the apparatus of Fejer mappings is used. The Targeting phase forms a special system of points having the shape of an n-dimensional axisymmetric cross. The cross moves in the n-dimensional space in such a way that the solution of the linear programming problem is located all the time in an $\varepsilon$-vicinity of the central point of the cross.
1
0
1
0
0
0
Highly accurate acoustic scattering: Isogeometric Analysis coupled with local high order Farfield Expansion ABC
This work is concerned with a unique combination of high order local absorbing boundary conditions (ABC) with a general curvilinear Finite Element Method (FEM) and its implementation in Isogeometric Analysis (IGA) for time-harmonic acoustic waves. The ABC employed were recently devised by Villamizar, Acosta and Dastrup [J. Comput. Phys. 333 (2017) 331]. They are derived from exact Farfield Expansion representations of the outgoing waves in the exterior of the regions enclosed by the artificial boundary. As a consequence, the error due to the ABC on the artificial boundary can be reduced conveniently such that the dominant error comes from the volume discretization method used in the interior of the computational domain. Reciprocally, the error in the interior can be made as small as the error at the artificial boundary by appropriate implementation of {\it p}- and {\it h}-refinement. We apply this novel method to cylindrical, spherical and arbitrarily shaped scatterers, including a prototype submarine. Our numerical results exhibit spectral-like approximation and high order convergence rates. Additionally, they show that the proposed method can reduce both the pollution and artificial boundary errors to negligible levels, even in very low- and high-frequency regimes, with rather coarse discretization densities in the IGA. As a result, we have developed a highly accurate computational platform to numerically solve time-harmonic acoustic wave scattering in two and three dimensions.
1
0
0
0
0
0
Deep Domain Adaptation Based Video Smoke Detection using Synthetic Smoke Images
In this paper, a deep domain adaptation based method for video smoke detection is proposed to extract a powerful feature representation of smoke. Because the smoke image samples available for deep CNN training are limited in scale and diversity, we systematically produced adequate synthetic smoke images with a wide variation in smoke shape, background and lighting conditions. Considering that the appearance gap (dataset bias) between synthetic and real smoke images significantly degrades the performance of the trained model on a test set composed fully of real images, we build deep architectures based on domain adaptation to confuse the distributions of features extracted from synthetic and real smoke images. This approach expands the domain-invariant feature space for smoke image samples. With the features of synthetic and real smoke images following an approximately common distribution that is separated from that of non-smoke images, the recognition rate of the trained model is improved significantly compared to the model trained directly on the mixed dataset of synthetic and real images. Experimentally, several deep architectures with different design choices are applied to the smoke detector. The resulting framework achieves a satisfactory result on the test set. We believe that our approach is a start toward utilizing deep neural networks enhanced with synthetic smoke images for video smoke detection.
1
0
0
0
0
0
NaCl crystal from salt solution with far below saturated concentration under ambient condition
Under ambient conditions, we directly observed NaCl crystals experimentally in rGO membranes soaked in salt solutions with concentrations below, and even far below, the saturated concentration. Moreover, the NaCl crystals most probably show stoichiometric behavior. We attribute this unexpected crystallization to the cation-{\pi} interactions between the ions and the aromatic rings of the rGO.
0
1
0
0
0
0
Plasmonic properties of refractory titanium nitride
The development of plasmonic and metamaterial devices requires the search for high-performance materials alternative to standard noble metals. Renowned as a stable refractory compound for durable coatings, titanium nitride has recently been proposed as an efficient plasmonic material. Here, by using a first principles approach, we investigate the plasmon dispersion relations of bulk TiN and we predict the effect of pressure on its optoelectronic properties. Our results explain the main features of TiN in the visible range and prove a universal scaling law which relates its mechanical and plasmonic properties as a function of pressure. Finally, we address the formation and stability of surface-plasmon polaritons at different TiN/dielectric interfaces proposed by recent experiments. The unusual combination of plasmonic and refractory features paves the way for the realization of plasmonic devices able to work at conditions not sustainable by usual noble metals.
0
1
0
0
0
0
The localization transition in SU(3) gauge theory
We study the Anderson-like localization transition in the spectrum of the Dirac operator of quenched QCD. Above the deconfining transition we determine the temperature dependence of the mobility edge separating localized and delocalized eigenmodes in the spectrum. We show that the temperature where the mobility edge vanishes and localized modes disappear from the spectrum coincides with the critical temperature of the deconfining transition. We also identify topological-charge-related near-zero modes in the Dirac spectrum and show that they account for only a small fraction of the localized modes, a fraction that falls rapidly as the temperature increases.
0
1
0
0
0
0
Conformation Clustering of Long MD Protein Dynamics with an Adversarial Autoencoder
Recent developments in specialized computer hardware have greatly accelerated atomic level Molecular Dynamics (MD) simulations. A single GPU-attached cluster is capable of producing microsecond-length trajectories in reasonable amounts of time. Multiple protein states and a large number of microstates associated with folding and with the function of the protein can be observed as conformations sampled in the trajectories. Clustering those conformations, however, is needed for identifying protein states, evaluating transition rates and understanding protein behavior. In this paper, we propose a novel data-driven generative conformation clustering method based on the adversarial autoencoder (AAE) and provide the associated software implementation Cong. The method was tested using a 208-microsecond MD simulation of the fast-folding peptide Trp-Cage (20 residues) obtained from the D.E. Shaw Research Group. The proposed clustering algorithm identifies many of the salient features of the folding process by grouping a large number of conformations that share common features not easily identifiable in the trajectory.
0
0
0
0
1
0
Optimal Low-Rank Dynamic Mode Decomposition
Dynamic Mode Decomposition (DMD) has emerged as a powerful tool for analyzing the dynamics of non-linear systems from experimental datasets. Recently, several attempts have extended DMD to the context of low-rank approximations. This extension is of particular interest for reduced-order modeling in various applicative domains, e.g. for climate prediction, to study molecular dynamics or micro-electromechanical devices. This low-rank extension takes the form of a non-convex optimization problem. To the best of our knowledge, only sub-optimal algorithms have been proposed in the literature to compute the solution of this problem. In this paper, we prove that there exists a closed-form optimal solution to this problem and design an effective algorithm to compute it based on Singular Value Decomposition (SVD). A toy-example illustrates the gain in performance of the proposed algorithm compared to state-of-the-art techniques.
0
0
0
1
0
0
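For background on the abstract above, a sketch of classical SVD-based exact DMD with rank truncation; this is the standard sub-optimal projected approach, not the paper's closed-form optimal low-rank solution.

```python
# Background only: classical SVD-based exact DMD with rank truncation.
import numpy as np

def dmd(X, Y, rank):
    # X, Y: snapshot matrices with Y[:, k] the state one step after X[:, k].
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)  # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W             # exact DMD modes
    return eigvals, modes

rng = np.random.default_rng(4)
snapshots = rng.standard_normal((10, 51))            # placeholder data matrix
lam, modes = dmd(snapshots[:, :-1], snapshots[:, 1:], rank=3)
print(lam)                                           # DMD eigenvalues
```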
Improving Community Detection by Mining Social Interactions
Social relationships can be divided into different classes based on the regularity with which they occur and the similarity among them. Thus, rare and somewhat similar relationships are random and cause noise in a social network, thus hiding the actual structure of the network and preventing an accurate analysis of it. In this context, in this paper we propose a process to handle social network data that exploits temporal features to improve the detection of communities by existing algorithms. By removing random interactions, we observe that social networks converge to a topology with more purely social relationships and more modular communities.
1
0
0
0
0
0
The set of quantum correlations is not closed
We construct a linear system non-local game which can be played perfectly using a limit of finite-dimensional quantum strategies, but which cannot be played perfectly on any finite-dimensional Hilbert space, or even with any tensor-product strategy. In particular, this shows that the set of (tensor-product) quantum correlations is not closed. The constructed non-local game provides another counterexample to the "middle" Tsirelson problem, with a shorter proof than our previous paper (though at the loss of the universal embedding theorem). We also show that it is undecidable to determine if a linear system game can be played perfectly with a finite-dimensional strategy, or a limit of finite-dimensional quantum strategies.
0
0
1
0
0
0
Correction to the article: Floer homology and splicing knot complements
This note corrects the mistakes in the splicing formulas of the paper "Floer homology and splicing knot complements". The mistakes are the result of the incorrect assumption that for a knot $K$ inside a homology sphere $Y$, the involution on the knot Floer homology of $K$ which corresponds to moving the basepoints by one full twist around $K$ is trivial. The correction implicitly involves considering the contribution from this (possibly non-trivial) involution in a number of places.
0
0
1
0
0
0
Tree tribes and lower bounds for switching lemmas
We show tight upper and lower bounds for switching lemmas obtained by the action of random $p$-restrictions on boolean functions that can be expressed as decision trees in which every vertex is at a distance of at most $t$ from some leaf, also called $t$-clipped decision trees. More specifically, we show the following: $\bullet$ If a boolean function $f$ can be expressed as a $t$-clipped decision tree, then under the action of a random $p$-restriction $\rho$, the probability that the smallest depth decision tree for $f|_{\rho}$ has depth greater than $d$ is upper bounded by $(4p2^{t})^{d}$. $\bullet$ For every $t$, there exists a function $g_{t}$ that can be expressed as a $t$-clipped decision tree, such that under the action of a random $p$-restriction $\rho$, the probability that the smallest depth decision tree for $g_{t}|_{\rho}$ has depth greater than $d$ is lower bounded by $(c_{0}p2^{t})^{d}$, for $0\leq p\leq c_{p}2^{-t}$ and $0\leq d\leq c_{d}\frac{\log n}{2^{t}\log t}$, where $c_{0},c_{p},c_{d}$ are universal constants.
1
0
0
0
0
0
Gyrokinetic ion and drift kinetic electron model for electromagnetic simulation in the toroidal geometry
The kinetic effects of electrons are important for the long wavelength magnetohydrodynamic (MHD) instabilities and the short wavelength drift-Alfvenic instabilities responsible for turbulence transport in magnetized plasmas, since non-adiabatic electrons can interact with, modify and drive the low frequency instabilities. A novel conservative split-weight scheme is proposed for electromagnetic simulation with drift kinetic electrons in tokamak plasmas; it has the great computational advantage that there is no numerical constraint from the electron skin depth on the perpendicular grid size, without sacrificing any physics. Both the kinetic Alfven wave and the collisionless tearing mode are verified using this model, which has already been implemented in the gyrokinetic toroidal code (GTC). In the future, this model will be used for first-principles simulations of the micro-tearing mode and the neoclassical tearing mode.
0
1
0
0
0
0
Transferring Agent Behaviors from Videos via Motion GANs
A major bottleneck for developing general reinforcement learning agents is determining rewards that will yield desirable behaviors under various circumstances. We introduce a general mechanism for automatically specifying meaningful behaviors from raw pixels. In particular, we train a generative adversarial network to produce short sub-goals represented through motion templates. We demonstrate that this approach generates visually meaningful behaviors in unknown environments with novel agents and describe how these motions can be used to train reinforcement learning agents.
1
0
0
1
0
0
Learning to Segment and Represent Motion Primitives from Driving Data for Motion Planning Applications
Developing an intelligent vehicle which can perform human-like actions requires the ability to learn basic driving skills from a large amount of naturalistic driving data. The algorithms would become more efficient if we could decompose the complex driving tasks into motion primitives which represent the elementary compositions of driving skills. Therefore, the purpose of this paper is to segment unlabeled trajectory data into a library of motion primitives. By applying probabilistic inference based on an iterative Expectation-Maximization algorithm, our method segments the collected trajectories while learning a set of motion primitives represented by dynamic movement primitives. The proposed method utilizes the mutual dependencies between the segmentation and representation of motion primitives and a driving-specific initial segmentation. By utilizing this mutual dependency and the initial condition, this paper presents how we can enhance the performance of both the segmentation and the establishment of the motion primitive library. We also evaluate the applicability of the primitive representation method to imitation learning and motion planning algorithms. The model is trained and validated using the driving data collected from the Beijing Institute of Technology intelligent vehicle platform. The results show that the proposed approach can find the proper segmentation and establish the motion primitive library simultaneously.
1
0
0
0
0
0
NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks
"How much energy is consumed for an inference made by a convolutional neural network (CNN)?" With the increased popularity of CNNs deployed on the wide-spectrum of platforms (from mobile devices to workstations), the answer to this question has drawn significant attention. From lengthening battery life of mobile devices to reducing the energy bill of a datacenter, it is important to understand the energy efficiency of CNNs during serving for making an inference, before actually training the model. In this work, we propose NeuralPower: a layer-wise predictive framework based on sparse polynomial regression, for predicting the serving energy consumption of a CNN deployed on any GPU platform. Given the architecture of a CNN, NeuralPower provides an accurate prediction and breakdown for power and runtime across all layers in the whole network, helping machine learners quickly identify the power, runtime, or energy bottlenecks. We also propose the "energy-precision ratio" (EPR) metric to guide machine learners in selecting an energy-efficient CNN architecture that better trades off the energy consumption and prediction accuracy. The experimental results show that the prediction accuracy of the proposed NeuralPower outperforms the best published model to date, yielding an improvement in accuracy of up to 68.5%. We also assess the accuracy of predictions at the network level, by predicting the runtime, power, and energy of state-of-the-art CNN architectures, achieving an average accuracy of 88.24% in runtime, 88.34% in power, and 97.21% in energy. We comprehensively corroborate the effectiveness of NeuralPower as a powerful framework for machine learners by testing it on different GPU platforms and Deep Learning software tools.
1
0
0
1
0
0
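As a rough illustration of the regression machinery described in the NeuralPower abstract above (not the authors' actual feature set, data, or pipeline), the sketch below fits a sparse polynomial regression to synthetic per-layer measurements with scikit-learn; the layer feature names, the data-generating formula, and all constants are assumptions invented for the example.

```python
# Rough sketch of a layer-wise power/runtime predictor in the spirit of the
# NeuralPower abstract above: sparse (Lasso) polynomial regression on layer
# configuration features. The feature names, the data-generating formula and
# all constants below are invented for illustration, not the paper's setup.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical per-layer features: [batch, in_channels, out_channels, kernel, spatial].
X = rng.uniform(1, 64, size=(200, 5))
# Synthetic "measured" power; in practice this would come from profiling layers on a GPU.
y = 0.02 * X[:, 1] * X[:, 2] * X[:, 3] ** 2 + 0.5 * X[:, 4] + rng.normal(0.0, 1.0, 200)

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      StandardScaler(),
                      Lasso(alpha=0.1, max_iter=50_000))
model.fit(X, y)

# Predict per-layer power for new layer configurations and sum them up
# to get a network-level estimate.
new_layers = rng.uniform(1, 64, size=(10, 5))
per_layer = model.predict(new_layers)
print("predicted per-layer power:", np.round(per_layer, 1))
print("predicted network power:  ", round(per_layer.sum(), 1))
```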
The Minor Fall, the Major Lift: Inferring Emotional Valence of Musical Chords through Lyrics
We investigate the association between musical chords and lyrics by analyzing a large dataset of user-contributed guitar tablatures. Motivated by the idea that the emotional content of chords is reflected in the words used in corresponding lyrics, we analyze associations between lyrics and chord categories. We also examine the usage patterns of chords and lyrics in different musical genres, historical eras, and geographical regions. Our overall results confirm a previously known association between Major chords and positive valence. We also report a wide variation in this association across regions, genres, and eras. Our results suggest the possible existence of different emotional associations for other types of chords.
1
0
0
0
0
0
Effective computation of $\mathrm{SO}(3)$ and $\mathrm{O}(3)$ linear representations symmetry classes
We propose a general algorithm to compute all the symmetry classes of any $\mathrm{SO}(3)$ or $\mathrm{O}(3)$ linear representation. This method relies on the introduction of a binary operator between sets of conjugacy classes of closed subgroups, called the clips. We compute explicit tables for this operation, which allow us to solve the problem definitively.
0
0
1
0
0
0
Subextensions for co-induced modules
Using cohomological methods, we prove a criterion for the embedding of a group extension with abelian kernel into the split extension of a co-induced module. This generalises some earlier similar results. We also prove an assertion about the conjugacy of complements in split extensions of co-induced modules. Both results follow from a relation between homomorphisms of certain cohomology groups.
0
0
1
0
0
0
Specht Polytopes and Specht Matroids
The generators of the classical Specht module satisfy intricate relations. We introduce the Specht matroid, which keeps track of these relations, and the Specht polytope, which also keeps track of convexity relations. We establish basic facts about the Specht polytope, for example, that the symmetric group acts transitively on its vertices and irreducibly on its ambient real vector space. A similar construction builds a matroid and polytope for a tensor product of Specht modules, giving "Kronecker matroids" and "Kronecker polytopes" instead of the usual Kronecker coefficients. We dub this process of upgrading numbers to matroids and polytopes "matroidification," giving two more examples. In the course of describing these objects, we also give an elementary account of the construction of Specht modules different from the standard one. Finally, we provide code to compute with Specht matroids and their Chow rings.
0
0
1
0
0
0
Existence theorems for the Cauchy problem of 2D nonhomogeneous incompressible non-resistive MHD equations with vacuum
In this paper, we investigate the Cauchy problem of the nonhomogeneous incompressible non-resistive MHD equations on $\mathbb{R}^2$ with vacuum as far field density and prove that the 2D Cauchy problem has a unique local strong solution provided that the initial density and magnetic field do not decay too slowly at infinity. Furthermore, if the initial data satisfy some additional regularity and compatibility conditions, the strong solution becomes a classical one.
0
0
1
0
0
0
Mass Conservative and Energy Stable Finite Difference Methods for the Quasi-incompressible Navier-Stokes-Cahn-Hilliard system: Primitive Variable and Projection-Type Schemes
In this paper we describe two fully mass conservative, energy stable, finite difference methods on a staggered grid for the quasi-incompressible Navier-Stokes-Cahn-Hilliard (q-NSCH) system governing a binary incompressible fluid flow with variable density and viscosity. Both methods, namely the primitive method (finite difference method in the primitive variable formulation) and the projection method (finite difference method in a projection-type formulation), are so designed that the mass of the binary fluid is preserved, and the energy of the system equations is always non-increasing in time at the fully discrete level. We also present an efficient, practical nonlinear multigrid method - comprised of a standard FAS method for the Cahn-Hilliard equation, and a method based on the Vanka-type smoothing strategy for the Navier-Stokes equation - for solving these equations. We test the scheme in the context of capillary waves, rising droplets and Rayleigh-Taylor instability. Quantitative comparisons are made with existing analytical solutions or previous numerical results that validate the accuracy of our numerical schemes. Moreover, in all cases, the mass of each individual component and of the binary fluid is conserved up to $10^{-8}$ and the energy decreases in time.
0
0
1
0
0
0
Density-equalizing maps for simply-connected open surfaces
In this paper, we are concerned with the problem of creating flattening maps of simply-connected open surfaces in $\mathbb{R}^3$. Using a natural principle of density diffusion in physics, we propose an effective algorithm for computing density-equalizing flattening maps with any prescribed density distribution. By varying the initial density distribution, a large variety of mappings with different properties can be achieved. For instance, area-preserving parameterizations of simply-connected open surfaces can be easily computed. Experimental results are presented to demonstrate the effectiveness of our proposed method. Applications to data visualization and surface remeshing are explored.
1
0
1
0
0
0
Measuring the radius and mass of Planet Nine
Batygin and Brown (2016) have suggested the existence of a new Solar System planet thought to be responsible for the perturbation of eccentric orbits of small outer bodies. The main challenge is now to detect and characterize this putative body. Here we investigate the principles of the determination of its physical parameters, mainly its mass and radius. For that purpose we concentrate on two methods, stellar occultations and gravitational microlensing effects (amplification, deflection and time delay). We estimate the main characteristics of a possible occultation or gravitational effects: flux variation of a background star, duration and probability of occurrence. We also investigate the additional benefits of direct imaging and of an occultation.
0
1
0
0
0
0
Efficient and principled score estimation with Nyström kernel exponential families
We propose a fast method with statistical guarantees for learning an exponential family density model where the natural parameter is in a reproducing kernel Hilbert space, and may be infinite-dimensional. The model is learned by fitting the derivative of the log density, the score, thus avoiding the need to compute a normalization constant. Our approach improves the computational efficiency of an earlier solution by using a low-rank, Nyström-like solution. The new solution retains the consistency and convergence rates of the full-rank solution (exactly in Fisher distance, and nearly in other distances), with guarantees on the degree of cost and storage reduction. We evaluate the method in experiments on density estimation and in the construction of an adaptive Hamiltonian Monte Carlo sampler. Compared to an existing score learning approach using a denoising autoencoder, our estimator is empirically more data-efficient when estimating the score, runs faster, and has fewer parameters (which can be tuned in a principled and interpretable way), in addition to providing statistical guarantees.
1
0
0
1
0
0
Small presentations of model categories and Vopěnka's principle
We prove existence results for small presentations of model categories generalizing a theorem of D. Dugger from combinatorial model categories to more general model categories. Some of these results are shown under the assumption of Vopěnka's principle. Our main theorem applies in particular to cofibrantly generated model categories where the domains of the generating cofibrations satisfy a slightly stronger smallness condition. As a consequence, assuming Vopěnka's principle, such a cofibrantly generated model category is Quillen equivalent to a combinatorial model category. Moreover, if there are generating sets which consist of presentable objects, then the same conclusion holds without the assumption of Vopěnka's principle. We also correct a mistake from previous work that made similar claims.
0
0
1
0
0
0
Automatic Trimap Generation for Image Matting
Image matting is a longstanding problem in computational photography. Although it has been studied for more than two decades, developing an automatic matting algorithm that does not require any human effort remains a challenge. Most state-of-the-art matting algorithms require human intervention in the form of a trimap or scribbles to generate the alpha matte from the input image. In this paper, we present a simple and efficient approach to automatically generate the trimap from the input image and make the whole matting process free of a human in the loop. We use a learning-based matting method to generate the matte from the automatically generated trimap. Experimental results demonstrate that our method produces good-quality trimaps, which result in accurate matte estimation. We validate our results by replacing the automatically generated trimap with a manually created trimap while using the same image matting algorithm.
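The abstract does not spell out how its trimap is generated, so the snippet below only illustrates what a trimap is, using a textbook baseline that is not the paper's method: starting from a rough binary foreground mask, erosion gives definite foreground, dilation gives the outer limit, and the band in between is marked unknown. The band width and the use of scipy.ndimage are assumptions for the example.

```python
# Generic trimap construction from a rough binary foreground mask: erosion gives
# definite foreground, dilation gives the outer limit, and the band in between is
# marked unknown. This is a textbook baseline shown only to illustrate what a
# trimap is; it is not the generation method proposed in the paper.
import numpy as np
from scipy import ndimage

def make_trimap(mask: np.ndarray, band: int = 10) -> np.ndarray:
    """mask: boolean foreground mask. Returns 0 = background, 128 = unknown, 255 = foreground."""
    definite_fg = ndimage.binary_erosion(mask, iterations=band)
    possible_fg = ndimage.binary_dilation(mask, iterations=band)
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[possible_fg] = 128   # unknown band (and, for now, the interior)
    trimap[definite_fg] = 255   # definite foreground overrides the unknown label
    return trimap

# Toy example: a square object in a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True
print(np.unique(make_trimap(mask, band=5)))   # [  0 128 255]
```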
1
0
0
0
0
0
X-ray spectral properties of seven heavily obscured Seyfert 2 galaxies
We present the combined Chandra and Swift-BAT spectral analysis of seven Seyfert 2 galaxies selected from the Swift-BAT 100-month catalog. We selected nearby (z<=0.03) sources lacking a ROSAT counterpart and never previously observed with Chandra in the 0.3-10 keV energy range, and targeted these objects with 10 ks Chandra ACIS-S observations. The X-ray spectral fitting over the 0.3-150 keV energy range allows us to determine that all the objects are significantly obscured, having NH>=1E23 cm^(-2) at a >99% confidence level. Moreover, one to three sources are candidate Compton thick Active Galactic Nuclei (CT-AGN), i.e., have NH>=1E24 cm^(-2). We also test the recent "spectral curvature" method developed by Koss et al. (2016) to find candidate CT-AGN, finding a good agreement between our results and their predictions. Since the selection criteria we adopted have been effective in detecting highly obscured AGN, further observations of these and other Seyfert 2 galaxies selected from the Swift-BAT 100-month catalog will allow us to create a statistically significant sample of highly obscured AGN, thereby leading to a better understanding of the physics of the obscuration processes.
0
1
0
0
0
0
Towards the LISA Backlink: Experiment design for comparing optical phase reference distribution systems
LISA is a proposed space-based laser interferometer detecting gravitational waves by measuring distances between free-floating test masses housed in three satellites in a triangular constellation with laser links in-between. Each satellite contains two optical benches that are articulated by moving optical subassemblies for compensating the breathing angle in the constellation. The phase reference distribution system, also known as backlink, forms an optical bi-directional path between the intra-satellite benches. In this work we discuss phase reference implementations with a target non-reciprocity of at most $2\pi\,\mathrm{\mu rad/\sqrt{Hz}}$, equivalent to $1\,\mathrm{pm/\sqrt{Hz}}$ for a wavelength of $1064\,\mathrm{nm}$ in the frequency band from $0.1\,\mathrm{mHz}$ to $1\,\mathrm{Hz}$. One phase reference uses a steered free beam connection, the other one a fiber together with additional laser frequencies. The noise characteristics of these implementations will be compared in a single interferometric set-up with a previously successfully tested direct fiber connection. We show the design of this interferometer created by optical simulations including ghost beam analysis, component alignment and noise estimation. First experimental results of a free beam laser link between two optical set-ups that are co-rotating by $\pm 1^\circ$ are presented. This experiment demonstrates sufficient thermal stability during rotation of less than $10^{-4}\,\mathrm{K/\sqrt{Hz}}$ at $1\,\mathrm{mHz}$ and operation of the free beam steering mirror control over more than 1 week.
0
1
0
0
0
0
Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs
Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from 'serialised' MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data.
1
0
0
1
0
0
Embedded eigenvalues of generalized Schrödinger operators
We provide examples of operators $T(D)+V$ with decaying potentials that have embedded eigenvalues. The decay of the potential depends on the curvature of the Fermi surfaces of constant kinetic energy $T$. We make the connection to counterexamples in Fourier restriction theory.
0
0
1
0
0
0
Multichannel Robot Speech Recognition Database: MChRSR
In real human-robot interaction (HRI) scenarios, speech recognition represents a major challenge due to robot noise, background noise and a time-varying acoustic channel. This document describes the procedure used to obtain the Multichannel Robot Speech Recognition Database (MChRSR). It is composed of 12 hours of multichannel evaluation data recorded in a real mobile HRI scenario. This database was recorded with a PR2 robot performing different translational and azimuthal movements. Accordingly, 16 evaluation sets were obtained by re-recording the clean set of the Aurora 4 database under different movement conditions.
1
0
0
0
0
0
The Algorithmic Inflection of Russian and Generation of Grammatically Correct Text
We present a deterministic algorithm for Russian inflection. This algorithm is implemented in a publicly available web-service www.passare.ru which provides functions for inflection of single words, word matching and synthesis of grammatically correct Russian text. The inflectional functions have been tested against the annotated corpus of Russian language OpenCorpora.
1
0
0
0
0
0
Automatic Analysis of EEGs Using Big Data and Hybrid Deep Learning Architectures
Objective: A clinical decision support tool that automatically interprets EEGs can reduce time to diagnosis and enhance real-time applications such as ICU monitoring. Clinicians have indicated that a sensitivity of 95% with a specificity below 5% was the minimum requirement for clinical acceptance. We propose a high-performance classification system based on principles of big data and machine learning. Methods: A hybrid machine learning system that uses hidden Markov models (HMM) for sequential decoding and deep learning networks for postprocessing is proposed. These algorithms were trained and evaluated using the TUH EEG Corpus, which is the world's largest publicly available database of clinical EEG data. Results: Our approach delivers a sensitivity above 90% while maintaining a specificity below 5%. This system detects three events of clinical interest: (1) spike and/or sharp waves, (2) periodic lateralized epileptiform discharges, and (3) generalized periodic epileptiform discharges. It also detects three events used to model background noise: (1) artifacts, (2) eye movement, and (3) background. Conclusions: A hybrid HMM/deep learning system can deliver a low false alarm rate on EEG event detection, making automated analysis a viable option for clinicians. Significance: The TUH EEG Corpus enables application of highly data-consumptive machine learning algorithms to EEG analysis. Performance is approaching clinical acceptance for real-time applications.
1
0
0
1
0
0
Implementing a Concept Network Model
The same concept can mean different things or be instantiated in different forms depending on context, suggesting a degree of flexibility within the conceptual system. We propose that a compositional network model can be used to capture and predict this flexibility. We modeled individual concepts (e.g., BANANA, BOTTLE) as graph-theoretical networks, in which properties (e.g., YELLOW, SWEET) were represented as nodes and their associations as edges. In this framework, networks capture the within-concept statistics that reflect how properties correlate with each other across instances of a concept. We ran a classification analysis using graph eigendecomposition to validate these models, and found that these models can successfully discriminate between object concepts. We then computed formal measures from these concept networks and explored their relationship to conceptual structure. We find that diversity coefficients and core-periphery structure can be interpreted as network-based measures of conceptual flexibility and stability, respectively. These results support the feasibility of a concept network framework and highlight its ability to formally capture important characteristics of the conceptual system.
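A minimal sketch of the modelling idea follows, under the assumption that each concept is stored as a symmetric property co-occurrence matrix: the sorted leading eigenvalues of that matrix give a simple spectral signature that a downstream classifier could use. The number of properties and the two synthetic "concepts" below are invented for illustration and are not the authors' data or exact pipeline.

```python
# Sketch of the concept-network idea, assuming each concept is stored as a
# symmetric property co-occurrence matrix: the sorted leading eigenvalues give a
# simple spectral signature that a downstream classifier could use. The number of
# properties and the two synthetic "concepts" are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def concept_spectrum(cooccurrence: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the k largest-magnitude eigenvalues of a symmetric co-occurrence matrix."""
    eigvals = np.linalg.eigvalsh(cooccurrence)   # symmetric matrix -> real spectrum
    return np.sort(np.abs(eigvals))[::-1][:k]

def random_concept(strength: float, n_properties: int = 8) -> np.ndarray:
    """A synthetic concept: symmetric nonnegative co-occurrence weights, zero diagonal."""
    m = rng.random((n_properties, n_properties)) * strength
    m = (m + m.T) / 2.0
    np.fill_diagonal(m, 0.0)
    return m

banana_like = random_concept(strength=1.0)
bottle_like = random_concept(strength=3.0)
print("spectral signature of concept A:", np.round(concept_spectrum(banana_like), 3))
print("spectral signature of concept B:", np.round(concept_spectrum(bottle_like), 3))
```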
0
0
0
0
1
0
Two simple observations on representations of metaplectic groups
M. Hanzer and I. Matic have proved that the genuine unitary principal series representations of the metaplectic groups are irreducible. In this paper, we give a criterion for the irreducibility of the non-unitary principal series representations of the metaplectic groups, which is a simple consequence of that work.
0
0
1
0
0
0
Spinors in Spacetime Algebra and Euclidean 4-Space
This article explores the geometric algebra of Minkowski spacetime, and its relationship to the geometric algebra of Euclidean 4-space. Both of these geometric algebras are algebraically isomorphic to the 2x2 matrix algebra over Hamilton's famous quaternions, and provide a rich geometric framework for various important topics in mathematics and physics, including stereographic projection and spinors, and both spherical and hyperbolic geometry. In addition, by identifying the time-like Minkowski unit vector with the extra dimension of Euclidean 4-space, David Hestenes' Space-Time Algebra of Minkowski spacetime is unified with William Baylis' Algebra of Physical Space.
0
0
1
0
0
0
Near-sphere lattices with constant nonlocal mean curvature
We are concerned with unbounded sets of $\mathbb{R}^N$ whose boundary has constant nonlocal (or fractional) mean curvature, which we call CNMC sets. This is the equation associated to critical points of the fractional perimeter functional under a volume constraint. We construct CNMC sets which are the countable union of a certain bounded domain and all its translations through a periodic integer lattice of dimension $M\leq N$. Our CNMC sets form a $C^2$ branch emanating from the unit ball alone and where the parameter in the branch is essentially the distance to the closest lattice point. Thus, the new translated near-balls (or near-spheres) appear from infinity. We find their exact asymptotic shape as the parameter tends to infinity.
0
0
1
0
0
0
Stability, convergence, and limit cycles in some human physiological processes
Mathematical models for physiological processes aid qualitative understanding of the impact of various parameters on the underlying process. We analyse two such models for human physiological processes: the Mackey-Glass and the Lasota equations, which model the change in the concentration of blood cells in the human body. We first study the local stability of these models, and derive bounds on various model parameters and the feedback delay for the concentration to equilibrate. We then deduce conditions for non-oscillatory convergence of the solutions, which could ensure that the blood cell concentration does not oscillate. Further, we define the convergence characteristics of the solutions which govern the rate at which the concentration equilibrates when the system is stable. Since physiological parameters can seldom be estimated precisely, we also derive bounds for robust stability, which enable one to ensure that the blood cell concentration equilibrates despite parametric uncertainty. We also highlight that when the necessary and sufficient condition for local stability is violated, the system transits into instability via a Hopf bifurcation, leading to limit cycles in the blood cell concentration. We then outline a framework to characterise the type of the Hopf bifurcation and determine the asymptotic orbital stability of limit cycles. The analysis is complemented with numerical examples, stability charts and bifurcation diagrams. The insights into the dynamical properties of the mathematical models may serve to guide the study of dynamical diseases.
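The abstract does not restate the equations; one widely quoted form of the Mackey-Glass delay differential equation is $\dot{x}(t)=\beta x(t-\tau)/(1+x(t-\tau)^{n})-\gamma x(t)$. The sketch below integrates it with a plain forward-Euler scheme to make the delay-induced oscillations concrete; the parameter values, step size, and constant initial history are illustrative choices, not values taken from the paper.

```python
# Forward-Euler integration of the standard form of the Mackey-Glass delay
# differential equation, dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t).
# Parameter values, step size and the constant initial history are illustrative
# choices, not values taken from the paper.
import numpy as np

beta, gamma, n, tau = 0.2, 0.1, 10.0, 20.0
dt, t_max = 0.05, 400.0

steps = int(t_max / dt)
delay_steps = int(tau / dt)

x = np.empty(steps + 1)
x[: delay_steps + 1] = 0.5   # constant history on [-tau, 0]

for i in range(delay_steps, steps):
    x_delayed = x[i - delay_steps]
    dx = beta * x_delayed / (1.0 + x_delayed ** n) - gamma * x[i]
    x[i + 1] = x[i] + dt * dx

# A wide late-time range suggests sustained oscillations (a limit cycle);
# a narrow one suggests convergence to an equilibrium.
print("late-time min/max of x(t):", round(x[-2000:].min(), 3), round(x[-2000:].max(), 3))
```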
1
0
0
0
0
0
Poisson multi-Bernoulli mixture filter: direct derivation and implementation
We provide a derivation of the Poisson multi-Bernoulli mixture (PMBM) filter for multi-target tracking with the standard point target measurements without using probability generating functionals or functional derivatives. We also establish the connection with the $\delta$-generalised labelled multi-Bernoulli ($\delta$-GLMB) filter, showing that a $\delta$-GLMB density represents a multi-Bernoulli mixture with labelled targets, so it can be seen as a special case of the PMBM. In addition, we propose an implementation for linear/Gaussian dynamic and measurement models and show how to efficiently obtain typical estimators in the literature from the PMBM. The PMBM filter is shown to outperform other filters in the literature in a challenging scenario.
1
0
0
1
0
0
Surface magnetism in a chiral d-wave superconductor with hexagonal symmetry
Surface properties are examined in a chiral d-wave superconductor with hexagonal symmetry, whose one-body Hamiltonian possesses the intrinsic spin-orbit coupling identical to the one characterizing the topological nature of the Kane-Mele honeycomb insulator. In the normal state spin-orbit coupling gives rise to spontaneous surface spin currents, whereas in the superconducting state there exist besides the spin currents also charge surface currents, due to the chiral pairing symmetry. Interestingly, the combination of these two currents results in a surface spin polarization, whose spatial dependence is markedly different on the zigzag and armchair surfaces. We discuss various potential candidate materials, such as SrPtAs, which may exhibit these surface properties.
0
1
0
0
0
0
Penetrating a Social Network: The Follow-back Problem
Modern threats have emerged from the prevalence of social networks. Hostile actors, such as extremist groups or foreign governments, utilize these networks to run propaganda campaigns with different aims. For extremists, these campaigns are designed for recruiting new members or inciting violence. For foreign governments, the aim may be to create instability in rival nations. Proper social network counter-measures are needed to combat these threats. Here we present one important counter-measure: penetrating social networks. This means making target users connect with or follow agents deployed in the social network. Once such connections are established with the targets, the agents can influence them by sharing content which counters the influence campaign. In this work we study how to penetrate a social network, which we call the follow-back problem. The goal here is to find a policy that maximizes the number of targets that follow the agent. We conduct an empirical study to understand what behavioral and network features affect the probability of a target following an agent. We find that the degree of the target and the size of the mutual neighborhood of the agent and target in the network affect this probability. Based on our empirical findings, we then propose a model for targets following an agent. Using this model, we solve the follow-back problem exactly on directed acyclic graphs and derive a closed-form expression for the expected number of follows an agent receives under the optimal policy. We then formulate the follow-back problem on an arbitrary graph as an integer program. To evaluate our integer-program-based policies, we conduct simulations on real social network topologies in Twitter. We find that our policies result in more effective network penetration, with significant increases in the expected number of targets that follow the agent.
1
0
0
1
0
0
Universality of density waves in p-doped La2CuO4 and n-doped Nd2CuO4+y
The contribution of $O^{2-}$ ions to antiferromagnetism in $La_{2-x}Ae_xCuO_4$ ($Ae = Sr, Ba)$ is highly sensitive to doped holes. In contrast, the contribution of $Cu^{2+}$ ions to antiferromagnetism in $Nd_{2-x}Ce_xCuO_{4+y}$ is much less sensitive to doped electrons. The difference causes the precarious and, respectively, robust antiferromagnetic phase of these cuprates. The same sensitivities affect the doping dependence of the incommensurability of density waves, $\delta (x)$. In the hole-doped compounds this gives rise to a doping offset for magnetic and charge density waves, $\delta_{m,c}^p(x) \propto \sqrt{x-x_{0p}^N}$. Here $x_{0p}^N$ is the doping concentration where the Néel temperature vanishes, $T_N(x_{0p}^N) = 0$. No such doping offset occurs for density waves in the electron-doped compound. Instead, excess oxygen (necessary for stability in crystal growth) of concentration $y$ causes a different doping offset in the latter case, $\delta_{m,c}^n(x) \propto \sqrt{x- 2y}$. The square-root formulas result from the assumption of superlattice formation through partitioning of the $CuO_2$ plane by pairs of itinerant charge carriers. Agreement of observed incommensurability $\delta(x)$ with the formulas is very good for the hole-doped compounds and reasonable for the electron-doped compound. The deviation in the latter case may be caused by residual excess oxygen.
0
1
0
0
0
0
The perceived assortativity of social networks: Methodological problems and solutions
Networks describe a range of social, biological and technical phenomena. An important property of a network is its degree correlation or assortativity, describing how nodes in the network associate based on their number of connections. Social networks are typically thought to be distinct from other networks in being assortative (possessing positive degree correlations); well-connected individuals associate with other well-connected individuals, and poorly-connected individuals associate with each other. We review the evidence for this in the literature and find that, while social networks are more assortative than non-social networks, only when they are built using group-based methods do they tend to be positively assortative. Non-social networks tend to be disassortative. We go on to show that connecting individuals due to shared membership of a group, a commonly used method, biases towards assortativity unless a large enough number of censuses of the network are taken. We present a number of solutions to overcoming this bias by drawing on advances in sociological and biological fields. Adoption of these methods across all fields can greatly enhance our understanding of social networks and networks in general.
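Degree assortativity is the Pearson correlation of the degrees at the two ends of an edge, which networkx computes directly. The sketch below contrasts an Erdos-Renyi graph with a toy "shared group membership" graph in which every group induces a clique; the group sizes and parameters are made up, and the construction is only meant to illustrate the group-based bias discussed in the abstract, not to reproduce the paper's analysis.

```python
# Degree assortativity (the degree correlation discussed above) for (a) an
# Erdos-Renyi random graph and (b) a toy "shared group membership" graph in which
# every group induces a clique among its members. The group sizes and parameters
# are made up; the construction only illustrates the group-based bias discussed
# in the abstract and does not reproduce the paper's analysis.
import random
import networkx as nx

random.seed(0)

er = nx.gnp_random_graph(300, 0.03, seed=0)
print("Erdos-Renyi assortativity: ", round(nx.degree_assortativity_coefficient(er), 3))

# Group-based graph: 300 individuals, each joining 2 of 30 groups at random.
groups = {g: set() for g in range(30)}
for individual in range(300):
    for g in random.sample(range(30), 2):
        groups[g].add(individual)

gb = nx.Graph()
gb.add_nodes_from(range(300))
for members in groups.values():
    members = sorted(members)
    for i in range(len(members)):
        for j in range(i + 1, len(members)):
            gb.add_edge(members[i], members[j])

print("Group-based assortativity:", round(nx.degree_assortativity_coefficient(gb), 3))
```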
1
0
0
1
0
0
Generalized Sheet Transition Conditions (GSTCs) for a Metascreen -- A Fishnet Metasurface
We used a multiple-scale homogenization method to derive generalized sheet transition conditions (GSTCs) for electromagnetic fields at the surface of a metascreen---a metasurface with a "fishnet" structure. These surfaces are characterized by periodically-spaced arbitrary-shaped apertures in an otherwise relatively impenetrable surface. The parameters in these GSTCs are interpreted as effective surface susceptibilities and surface porosities, which are related to the geometry of the apertures that constitute the metascreen. Finally, we emphasize the subtle but important difference between the GSTCs required for metascreens and those required for metafilms (a metasurface with a "cermet" structure, i.e., an array of isolated (non-touching) scatterers).
0
1
0
0
0
0
$k^{τ,ε}$-anonymity: Towards Privacy-Preserving Publishing of Spatiotemporal Trajectory Data
Mobile network operators can track subscribers via passive or active monitoring of device locations. The recorded trajectories offer an unprecedented outlook on the activities of large user populations, which enables developing new networking solutions and services, and scaling up studies across research disciplines. Yet, the disclosure of individual trajectories raises significant privacy concerns: thus, these data are often protected by restrictive non-disclosure agreements that limit their availability and impede potential usages. In this paper, we contribute to the development of technical solutions to the problem of privacy-preserving publishing of spatiotemporal trajectories of mobile subscribers. We propose an algorithm that generalizes the data so that they satisfy $k^{\tau,\epsilon}$-anonymity, an original privacy criterion that thwarts attacks on trajectories. Evaluations with real-world datasets demonstrate that our algorithm attains its objective while retaining a substantial level of accuracy in the data. Our work is a step forward in the direction of open, privacy-preserving datasets of spatiotemporal trajectories.
1
0
0
0
0
0
The Taipan Galaxy Survey: Scientific Goals and Observing Strategy
Taipan is a multi-object spectroscopic galaxy survey starting in 2017 that will cover 2pi steradians over the southern sky, and obtain optical spectra for about two million galaxies out to z<0.4. Taipan will use the newly-refurbished 1.2m UK Schmidt Telescope at Siding Spring Observatory with the new TAIPAN instrument, which includes an innovative 'Starbugs' positioning system capable of rapidly and simultaneously deploying up to 150 spectroscopic fibres (and up to 300 with a proposed upgrade) over the 6-deg diameter focal plane, and a purpose-built spectrograph operating from 370 to 870nm with resolving power R>2000. The main scientific goals of Taipan are: (i) to measure the distance scale of the Universe (primarily governed by the local expansion rate, H_0) to 1% precision, and the growth rate of structure to 5%; (ii) to make the most extensive map yet constructed of the mass distribution and motions in the local Universe, using peculiar velocities based on improved Fundamental Plane distances, which will enable sensitive tests of gravitational physics; and (iii) to deliver a legacy sample of low-redshift galaxies as a unique laboratory for studying galaxy evolution as a function of mass and environment. The final survey, which will be completed within 5 years, will consist of a complete magnitude-limited sample (i<17) of about 1.2x10^6 galaxies, supplemented by an extension to higher redshifts and fainter magnitudes (i<18.1) of a luminous red galaxy sample of about 0.8x10^6 galaxies. Observations and data processing will be carried out remotely and in a fully-automated way, using purpose-built automated 'virtual observer' software and an automated data reduction pipeline. The Taipan survey is deliberately designed to maximise its legacy value, by complementing and enhancing current and planned surveys of the southern sky at wavelengths from the optical to the radio.
0
1
0
0
0
0
The Externalities of Exploration and How Data Diversity Helps Exploitation
Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users for information that will lead to better decisions in the future. Recently, concerns have been raised about whether the process of exploration could be viewed as unfair, placing too much burden on certain individuals or groups. Motivated by these concerns, we initiate the study of the externalities of exploration - the undesirable side effects that the presence of one party may impose on another - under the linear contextual bandits model. We introduce the notion of a group externality, measuring the extent to which the presence of one population of users impacts the rewards of another. We show that this impact can in some cases be negative, and that, in a certain sense, no algorithm can avoid it. We then study externalities at the individual level, interpreting the act of exploration as an externality imposed on the current user of a system by future users. This drives us to ask under what conditions inherent diversity in the data makes explicit exploration unnecessary. We build on a recent line of work on the smoothed analysis of the greedy algorithm that always chooses the action that currently looks optimal, improving on prior results to show that a greedy approach almost matches the best possible Bayesian regret rate of any other algorithm on the same problem instance whenever the diversity conditions hold, and that this regret is at most $\tilde{O}(T^{1/3})$. Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm. Together, our results uncover a sharp contrast between the high externalities that exist in the worst case, and the ability to remove all externalities if the data is sufficiently diverse.
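A minimal sketch of the greedy rule analysed in the abstract follows, under a synthetic linear contextual bandit: each arm keeps a ridge-regression estimate of its reward parameter, and the learner always plays the arm that currently looks best, relying on the diversity of the (here Gaussian) contexts instead of explicit exploration. The dimensions, noise level, and regularizer are arbitrary illustrative choices, not the paper's setting.

```python
# Minimal sketch of the greedy linear contextual bandit analysed in the abstract:
# each arm keeps a ridge-regression estimate of its reward parameter and the
# learner always plays the arm that currently looks best, relying on the
# diversity of the (here Gaussian) contexts instead of explicit exploration.
# Dimensions, noise level and the regularizer are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T, lam = 5, 3, 5000, 1.0

theta_true = rng.normal(size=(n_arms, d))                 # unknown arm parameters

A = np.stack([lam * np.eye(d) for _ in range(n_arms)])    # per-arm ridge Gram matrices
b = np.zeros((n_arms, d))                                  # per-arm response vectors

regret = 0.0
for t in range(T):
    x = rng.normal(size=d)                                 # a "diverse" context
    theta_hat = np.stack([np.linalg.solve(A[a], b[a]) for a in range(n_arms)])
    arm = int(np.argmax(theta_hat @ x))                    # greedy choice, no exploration bonus
    reward = theta_true[arm] @ x + rng.normal(scale=0.1)
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
    regret += (theta_true @ x).max() - theta_true[arm] @ x

print("cumulative regret of greedy after", T, "rounds:", round(regret, 2))
```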
0
0
0
1
0
0
Life-span of blowup solutions to semilinear wave equation with space-dependent critical damping
This paper is concerned with the blowup phenomena for the initial value problem of a semilinear wave equation with a critical space-dependent damping term (DW:$V$). The main result of the present paper is to give a solution of the problem and to provide a sharp estimate of the lifespan of such a solution when $\frac{N}{N-1}<p\leq p_S(N+V_0)$, where $p_S(N)$ is the Strauss exponent for (DW:$0$). The main idea of the proof is based on the test function technique for (DW:$0$) originated by Zhou--Han (2014, MR3169791). Moreover, we find a new threshold value $V_0=\frac{(N-1)^2}{N+1}$ for the coefficient of the critical and singular damping $|x|^{-1}$.
0
0
1
0
0
0
Invariant submanifolds of (LCS)n-Manifolds with respect to quarter symmetric metric connection
The object of the present paper is to study invariant submanifolds of (LCS)n-manifolds with respect to a quarter symmetric metric connection. It is shown that the mean curvatures of an invariant submanifold of an (LCS)n-manifold with respect to the quarter symmetric metric connection and the Levi-Civita connection are equal. An example is constructed to illustrate the results of the paper. We also obtain some equivalent conditions for this notion.
0
0
1
0
0
0
Iterated filtering methods for Markov process epidemic models
Dynamic epidemic models have proven valuable for public health decision makers as they provide useful insights into the understanding and prevention of infectious diseases. However, inference for these types of models can be difficult because the disease spread is typically only partially observed, e.g. in the form of reported incidences in given time periods. This chapter discusses how to perform likelihood-based inference for partially observed Markov epidemic models when it is relatively easy to generate samples from the Markov transmission model while the likelihood function is intractable. The first part of the chapter reviews the theoretical background of inference for partially observed Markov processes (POMP) via iterated filtering. In the second part of the chapter the performance of the method and associated practical difficulties are illustrated on two examples. In the first example a simulated outbreak data set consisting of the number of newly reported cases aggregated by week is fitted to a POMP where the underlying disease transmission model is assumed to be a simple Markovian SIR model. The second example illustrates possible model extensions such as seasonal forcing and over-dispersion in both the transmission and the observation model, which can be used, e.g., when analysing routinely collected rotavirus surveillance data. Both examples are implemented using the R-package pomp (King et al., 2016) and the code is made available online.
0
0
1
1
0
0
An Algebraic-Combinatorial Proof Technique for the GM-MDS Conjecture
This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given $m\times n$ binary matrix $M$, the GM-MDS conjecture, due to Dau et al., states that if $M$ satisfies the so-called MDS condition, then for any field $\mathbb{F}$ of size $q\geq n+m-1$, there exists an $[n,m]_q$ MDS code whose generator matrix $G$, with entries in $\mathbb{F}$, fits $M$ (i.e., $M$ is the support matrix of $G$). Despite all the attempts by the coding theory community, this conjecture still remains open in general. It was shown, independently by Yan et al. and Dau et al., that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if $M$ satisfies the MDS condition, then the determinant of a transformation matrix $T$, such that $TV$ fits $M$, is not identically zero, where $V$ is a Vandermonde matrix with distinct parameters. In this work, we generalize the TM-MDS conjecture, and present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. The strength of our proof technique lies primarily in reducing the inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ($m$) of $M$ is upper bounded by $5$. For this class of special cases of $M$ where the only additional constraint is on $m$, only cases with $m\leq 4$ were previously proven theoretically, and the previously used proof techniques are not applicable to cases with $m > 4$.
1
0
0
0
0
0
The Word Problem of $\mathbb{Z}^n$ Is a Multiple Context-Free Language
The \emph{word problem} of a group $G = \langle \Sigma \rangle$ can be defined as the set of formal words in $\Sigma^*$ that represent the identity in $G$. When viewed as formal languages, this gives a strong connection between classes of groups and classes of formal languages. For example, Anisimov showed that a group is finite if and only if its word problem is a regular language, and Muller and Schupp showed that a group is virtually-free if and only if its word problem is a context-free language. Above this, not much was known, until Salvati recently showed that the word problem of $\mathbb{Z}^2$ is a multiple context-free language, giving the first such example. We generalize Salvati's result to show that the word problem of $\mathbb{Z}^n$ is a multiple context-free language for any $n$.
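For concreteness, the language in question is easy to decide directly: a word over the generators of $\mathbb{Z}^n$ and their inverses represents the identity exactly when every generator's exponent sum is zero. The small checker below uses a letter encoding made up for the example; the abstract's contribution concerns which grammar class can generate exactly this language, not how to decide membership.

```python
# The word problem of Z^n as a set of strings: a word over the generators
# x_1, ..., x_n and their inverses represents the identity exactly when every
# generator's exponent sum is zero. The letter encoding below ('x1' for a
# generator, 'X1' for its inverse) is made up for the example; the abstract's
# point is about which grammar class can generate exactly this language.
from collections import Counter

def in_word_problem(word: list[str], n: int) -> bool:
    counts = Counter()
    for letter in word:
        index = int(letter[1:])
        assert 1 <= index <= n, "unknown generator"
        counts[index] += -1 if letter[0].isupper() else 1
    return all(counts[i] == 0 for i in range(1, n + 1))

# In Z^2 the generators commute, so x1 x2 X1 X2 is the identity, while x1 x2 X1 is not.
print(in_word_problem(["x1", "x2", "X1", "X2"], n=2))   # True
print(in_word_problem(["x1", "x2", "X1"], n=2))         # False
```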
1
0
1
0
0
0
Rigorous proof of the Boltzmann-Gibbs distribution of money on connected graphs
Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution is universal in the sense that it holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
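The mean-field version of the model is easy to simulate, which makes the claimed exponential (Boltzmann-Gibbs) limit easy to check by eye. The sketch below uses the common convention that nothing happens when the selected giver has no money; the population size, average wealth, and number of steps are arbitrary illustrative choices, and the graph-based variant studied in the paper would simply restrict the chosen pair to be neighbors.

```python
# Monte Carlo simulation of the simplest (mean-field) version of the model: at
# each step a uniformly chosen agent gives one dollar to another uniformly chosen
# agent. The convention used here, that nothing happens when the chosen giver is
# broke, and the population size, average wealth and number of steps are all
# illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_agents, avg_money, n_steps = 2000, 10, 2_000_000

money = np.full(n_agents, avg_money)

for _ in range(n_steps):
    giver, receiver = rng.integers(n_agents, size=2)
    if giver != receiver and money[giver] > 0:
        money[giver] -= 1
        money[receiver] += 1

# Compare the empirical tail P(money >= k) with the exponential prediction exp(-k / average).
for k in (0, 5, 10, 20, 40):
    empirical = (money >= k).mean()
    predicted = np.exp(-k / avg_money)
    print(f"P(money >= {k:2d}): empirical {empirical:.3f}, exponential {predicted:.3f}")
```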
0
0
1
0
0
0
Stochastic population dynamics in spatially extended predator-prey systems
Spatially extended population dynamics models that incorporate intrinsic noise serve as case studies for the role of fluctuations and correlations in biological systems. Including spatial structure and stochastic noise in predator-prey competition invalidates the deterministic Lotka-Volterra picture of neutral population cycles. Stochastic models yield long-lived erratic population oscillations stemming from a resonant amplification mechanism. In spatially extended predator-prey systems, one observes noise-stabilized activity and persistent correlations. Fluctuation-induced renormalizations of the oscillation parameters can be analyzed perturbatively. The critical dynamics and the non-equilibrium relaxation kinetics at the predator extinction threshold are characterized by the directed percolation universality class. Spatial or environmental variability results in more localized patches which enhances both species densities. Affixing variable rates to individual particles and allowing for trait inheritance subject to mutations induces fast evolutionary dynamics for the rate distributions. Stochastic spatial variants of cyclic competition with rock-paper-scissors interactions illustrate connections between population dynamics and evolutionary game theory, and demonstrate how space can help maintain diversity. In two dimensions, three-species cyclic competition models of the May-Leonard type are characterized by the emergence of spiral patterns whose properties are elucidated by a mapping onto a complex Ginzburg-Landau equation. Extensions to general food networks can be classified on the mean-field level, which provides both a fundamental understanding of ensuing cooperativity and emergence of alliances. Novel space-time patterns emerge as a result of the formation of competing alliances, such as coarsening domains that each incorporate rock-paper-scissors competition games.
0
1
0
0
0
0
Learning Policies for Markov Decision Processes from Data
We consider the problem of learning a policy for a Markov decision process consistent with data captured on the state-actions pairs followed by the policy. We assume that the policy belongs to a class of parameterized policies which are defined using features associated with the state-action pairs. The features are known a priori, however, only an unknown subset of them could be relevant. The policy parameters that correspond to an observed target policy are recovered using $\ell_1$-regularized logistic regression that best fits the observed state-action samples. We establish bounds on the difference between the average reward of the estimated and the original policy (regret) in terms of the generalization error and the ergodic coefficient of the underlying Markov chain. To that end, we combine sample complexity theory and sensitivity analysis of the stationary distribution of Markov chains. Our analysis suggests that to achieve regret within order $O(\sqrt{\epsilon})$, it suffices to use training sample size on the order of $\Omega(\log n \cdot poly(1/\epsilon))$, where $n$ is the number of the features. We demonstrate the effectiveness of our method on a synthetic robot navigation example.
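The estimation step described above amounts, for a softmax policy class, to $\ell_1$-regularized logistic regression of observed actions on state-action features. The sketch below shows this for a synthetic binary-action problem with scikit-learn; the feature construction, the sparsity pattern of the true parameter, and the regularization constant are assumptions made for the example, not the paper's experimental setup.

```python
# Sketch of the estimation step described above: recover the parameters of a
# softmax (here binary-action logistic) policy from observed state-action pairs
# via l1-regularized logistic regression. The features, the sparse true parameter
# vector and the regularization constant are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 50

theta_true = np.zeros(n_features)
theta_true[:5] = rng.normal(size=5)                   # only a few features are relevant

states = rng.normal(size=(n_samples, n_features))     # observed state-action features
prob_action_one = 1.0 / (1.0 + np.exp(-states @ theta_true))
actions = rng.binomial(1, prob_action_one)            # actions sampled from the target policy

estimator = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
estimator.fit(states, actions)

theta_hat = estimator.coef_.ravel()
print("recovered support:", np.flatnonzero(np.abs(theta_hat) > 1e-6))
print("true support:     ", np.flatnonzero(theta_true))
```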
1
0
1
1
0
0
Expecting the Unexpected: Training Detectors for Unusual Pedestrians with Adversarial Imposters
As autonomous vehicles become an everyday reality, high-accuracy pedestrian detection is of paramount practical importance. Pedestrian detection is a highly researched topic with mature methods, but most datasets focus on common scenes of people engaged in typical walking poses on sidewalks. But performance is most crucial for dangerous scenarios, such as children playing in the street or people using bicycles/skateboards in unexpected ways. Such "in-the-tail" data is notoriously hard to observe, making both training and testing difficult. To analyze this problem, we have collected a novel annotated dataset of dangerous scenarios called the Precarious Pedestrian dataset. Even given a dedicated collection effort, it is relatively small by contemporary standards (around 1000 images). To allow for large-scale data-driven learning, we explore the use of synthetic data generated by a game engine. A significant challenge is selecting the right "priors" or parameters for synthesis: we would like realistic data with poses and object configurations that mimic true Precarious Pedestrians. Inspired by Generative Adversarial Networks (GANs), we generate a massive amount of synthetic data and train a discriminative classifier to select a realistic subset, which we deem the Adversarial Imposters. We demonstrate that this simple pipeline allows one to synthesize realistic training data by making use of rendering/animation engines within a GAN framework. Interestingly, we also demonstrate that such data can be used to rank algorithms, suggesting that Adversarial Imposters can also be used for "in-the-tail" validation at test-time, a notoriously difficult challenge for real-world deployment.
1
0
0
0
0
0
The sequence of open and closed prefixes of a Sturmian word
A finite word is closed if it contains a factor that occurs both as a prefix and as a suffix but does not have internal occurrences, otherwise it is open. We are interested in the {\it oc-sequence} of a word, which is the binary sequence whose $n$-th element is $0$ if the prefix of length $n$ of the word is open, or $1$ if it is closed. We exhibit results showing that this sequence is deeply related to the combinatorial and periodic structure of a word. In the case of Sturmian words, we show that these are uniquely determined (up to renaming letters) by their oc-sequence. Moreover, we prove that the class of finite Sturmian words is a maximal element with this property in the class of binary factorial languages. We then discuss several aspects of Sturmian words that can be expressed through this sequence. Finally, we provide a linear-time algorithm that computes the oc-sequence of a finite word, and a linear-time algorithm that reconstructs a finite Sturmian word from its oc-sequence.
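A direct, naive implementation of these definitions makes the oc-sequence concrete: a word is closed when some proper border occurs only as a prefix and as a suffix, and the oc-sequence records this for every prefix. The version below is roughly quadratic per prefix and purely illustrative (the paper gives a linear-time algorithm); the convention that words of length at most one are closed is assumed.

```python
# Direct, naive implementation of the definitions: a word is closed when some
# proper border (a prefix that is also a suffix) has no internal occurrence, and
# the oc-sequence records this for every prefix. Words of length at most one are
# taken to be closed by convention. This version is roughly quadratic per prefix
# and purely illustrative; the paper gives a linear-time algorithm.
def is_closed(w: str) -> bool:
    if len(w) <= 1:
        return True
    for k in range(len(w) - 1, 0, -1):          # candidate border lengths, longest first
        border = w[:k]
        if not w.endswith(border):
            continue
        # An internal occurrence is one that is neither the prefix nor the suffix.
        internal = any(w[i:i + k] == border for i in range(1, len(w) - k))
        if not internal:
            return True
    return False

def oc_sequence(w: str) -> str:
    """n-th symbol is '1' if the prefix of length n is closed and '0' if it is open."""
    return "".join("1" if is_closed(w[:n]) else "0" for n in range(1, len(w) + 1))

# Example on a prefix of the Fibonacci word, a classical Sturmian word.
print(oc_sequence("abaababaabaab"))
```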
1
0
1
0
0
0
Capital Regulation under Price Impacts and Dynamic Financial Contagion
We construct a continuous time model for price-mediated contagion precipitated by a common exogenous stress to the trading book of all firms in the financial system. In this setting, firms are constrained so as to satisfy a risk-weight based capital ratio requirement. We use this model to find analytical bounds on the risk-weights for an asset as a function of the market liquidity. Under these appropriate risk-weights, we find existence and uniqueness for the joint system of firm behavior and the asset price. We further consider an analytical bound on the firm liquidations, which allows us to construct exact formulas for stress testing the financial system with deterministic or random stresses. Numerical case studies are provided to demonstrate various implications of this model and analytical bounds.
0
0
0
0
0
1
On seaweed subalgebras and meander graphs in type D
In 2000, Dergachev and Kirillov introduced subalgebras of "seaweed type" in $\mathfrak{gl}_n$ and computed their index using certain graphs, which we call type-${\sf A}$ meander graphs. Then the subalgebras of seaweed type, or just "seaweeds", have been defined by Panyushev (2001) for arbitrary reductive Lie algebras. Recently, a meander graph approach to computing the index in types ${\sf B}$ and ${\sf C}$ has been developed by the authors. In this article, we consider the most difficult and interesting case of type ${\sf D}$. Some new phenomena occurring here are related to the fact that the Dynkin diagram has a branching node.
0
0
1
0
0
0
Singular Riemannian flows and characteristic numbers
Let $M$ be an even-dimensional, oriented closed manifold. We show that the restriction of a singular Riemannian flow on $M$ to a small tubular neighborhood of each connected component of its singular stratum is foliated-diffeomorphic to an isometric flow on the same neighborhood. We then prove a formula that computes characteristic numbers of $M$ as the sum of residues associated to the infinitesimal foliation at the components of the singular stratum of the flow.
0
0
1
0
0
0
Transformation Models in High-Dimensions
Transformation models are a very important tool for applied statisticians and econometricians. In many applications, the dependent variable is transformed so that homogeneity or normal distribution of the error holds. In this paper, we analyze transformation models in a high-dimensional setting, where the set of potential covariates is large. We propose an estimator for the transformation parameter and we show that it is asymptotically normally distributed using an orthogonalized moment condition where the nuisance functions depend on the target parameter. In a simulation study, we show that the proposed estimator works well in small samples. A common practice in labor economics is to transform wages with the log function. In this study, we test whether this transformation holds in CPS data from the United States.
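The paper's orthogonalized, high-dimensional estimator does not fit in a few lines, but the basic object, a transformation parameter estimated from data, can be illustrated in a low-dimensional warm-up: fitting a Box-Cox parameter by maximum likelihood with scipy, where lambda = 0 recovers the log transform mentioned for wages. This is a different, simpler technique than the paper's, shown only for intuition; the synthetic log-normal "wages" are an assumption for the example.

```python
# Low-dimensional warm-up for estimating a transformation parameter from data:
# fit a Box-Cox parameter by maximum likelihood (lambda = 0 corresponds to the
# log transform commonly applied to wages). The synthetic log-normal "wages" are
# an assumption; the paper's estimator additionally handles a high-dimensional
# covariate set via an orthogonalized moment condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

wages = np.exp(rng.normal(loc=2.5, scale=0.6, size=5000))   # log-normal, so log should fit
transformed, lam_hat = stats.boxcox(wages)
print("estimated Box-Cox lambda:", round(lam_hat, 3))        # close to 0 -> log transform
```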
0
0
1
1
0
0