Dataset schema (one record per paper: a title, an abstract, and six binary subject labels):
  title: string (length 7 to 239)
  abstract: string (length 7 to 2.76k)
  cs: int64 (0 or 1)
  phy: int64 (0 or 1)
  math: int64 (0 or 1)
  stat: int64 (0 or 1)
  quantitative biology: int64 (0 or 1)
  quantitative finance: int64 (0 or 1)
Semi-Supervised Approaches to Efficient Evaluation of Model Prediction Performance
In many modern machine learning applications, the outcome is expensive or time-consuming to collect while the predictor information is easy to obtain. Semi-supervised learning (SSL) aims at utilizing large amounts of `unlabeled' data along with small amounts of `labeled' data to improve the efficiency of a classical supervised approach. Though numerous SSL classification and prediction procedures have been proposed in recent years, no methods currently exist to evaluate the prediction performance of a working regression model. In the context of developing phenotyping algorithms derived from electronic medical records (EMR), we present an efficient two-step estimation procedure for evaluating a binary classifier based on various prediction performance measures in the semi-supervised (SS) setting. In step I, the labeled data are used to obtain a non-parametrically calibrated estimate of the conditional risk function. In step II, SS estimates of the prediction accuracy parameters are constructed based on the estimated conditional risk function and the unlabeled data. We demonstrate that under mild regularity conditions the proposed estimators are consistent and asymptotically normal. Importantly, the asymptotic variance of the SS estimators is always smaller than that of the supervised counterparts under correct model specification. We also correct for potential overfitting bias in the SS estimators in finite samples using cross-validation, and develop a perturbation resampling procedure to approximate their distributions. Our proposals are evaluated through extensive simulation studies and illustrated with two real EMR studies aiming to develop phenotyping algorithms for rheumatoid arthritis and multiple sclerosis.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
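The two-step procedure in the abstract above translates directly into code. A minimal sketch (my own illustration, not the paper's estimator: a plain logistic fit stands in for the non-parametrically calibrated risk estimate, and all names and sizes are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: X is cheap to obtain, y is expensive, so only n of N rows are labeled.
N, n, d = 10_000, 300, 5
X = rng.normal(size=(N, d))
p_true = 1 / (1 + np.exp(-(X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3]))))
y = rng.binomial(1, p_true)
labeled = rng.choice(N, size=n, replace=False)

# Step I: estimate the conditional risk P(Y=1 | X) on the small labeled set.
model = LogisticRegression().fit(X[labeled], y[labeled])
phat = model.predict_proba(X)[:, 1]          # fitted risk on ALL N observations

# Step II: semi-supervised accuracy estimates, replacing the unobserved labels
# by the fitted risk and averaging over the whole (mostly unlabeled) pool.
c = 0.5                                      # classification threshold
yhat = (phat >= c).astype(float)
tpr_ss = np.sum(phat * yhat) / np.sum(phat)              # sensitivity
fpr_ss = np.sum((1 - phat) * yhat) / np.sum(1 - phat)    # 1 - specificity

# Supervised counterpart uses only the n labeled points.
yl, pl = y[labeled], (phat[labeled] >= c).astype(float)
tpr_sup = np.sum(yl * pl) / np.sum(yl)
print(f"SS: TPR {tpr_ss:.3f}, FPR {fpr_ss:.3f}; supervised TPR {tpr_sup:.3f}")
```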
Linearization of the box-ball system: an elementary approach
Kuniba, Okado, Takagi and Yamada have found that the time-evolution of the Takahashi-Satsuma box-ball system can be linearized by considering rigged configurations associated with states of the box-ball system. We introduce a simple way to understand the rigged configuration of $\mathfrak{sl}_2$-type, and give an elementary proof of the linearization property. Our approach can be applied to a box-ball system with finite carrier, which is related to a discrete modified KdV equation, and also to the combinatorial $R$-matrix of $A_1^{(1)}$-type. We also discuss combinatorial statistics and related fermionic formulas associated with the states of the box-ball systems. A fermionic-type formula we obtain for the finite carrier case seems to be new.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Controlling Sources of Inaccuracy in Stochastic Kriging
Scientists and engineers commonly use simulation models to study real systems for which actual experimentation is costly, difficult, or impossible. Many simulations are stochastic in the sense that repeated runs with the same input configuration will result in different outputs. For expensive or time-consuming simulations, stochastic kriging \citep{ankenman} is commonly used to generate predictions for simulation model outputs subject to uncertainty due to both function approximation and stochastic variation. Here, we develop and justify a few guidelines for experimental design, which ensure accuracy of stochastic kriging emulators. We decompose error in stochastic kriging predictions into nominal, numeric, parameter estimation and parameter estimation numeric components and provide means to control each in terms of properties of the underlying experimental design. The design properties implied for each source of error are weakly conflicting and broad principles are proposed. In brief, space-filling properties "small fill distance" and "large separation distance" should balance with replication at distinct input configurations, with number of replications depending on the relative magnitudes of stochastic and process variability. Non-stationarity implies higher input density in more active regions, while regression functions imply a balance with traditional design properties. A few examples are presented to illustrate the results.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
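The two space-filling properties named in this abstract, small fill distance and large separation distance, are straightforward to compute for a candidate design. A small sketch, assuming a design matrix of input configurations and a reference grid covering the input region:

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist

def fill_distance(design, region_points):
    """Largest distance from any point of the region to its nearest design point
    (smaller is better for controlling nominal error)."""
    return cdist(region_points, design).min(axis=1).max()

def separation_distance(design):
    """Smallest pairwise distance between distinct design points
    (larger is better for the numeric conditioning of the kriging system)."""
    return pdist(design).min()

rng = np.random.default_rng(1)
design = rng.uniform(size=(20, 2))            # 20 input configurations in [0,1]^2
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), -1).reshape(-1, 2)
print(fill_distance(design, grid), separation_distance(design))
```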
Implications of Decentralized Q-learning Resource Allocation in Wireless Networks
Reinforcement Learning is gaining attention from the wireless networking community due to its potential to learn good-performing configurations only from the observed results. In this work we propose a stateless variation of Q-learning, which we apply to exploit spatial reuse in a wireless network. In particular, we allow networks to modify both their transmission power and the channel used solely based on the experienced throughput. We concentrate on a completely decentralized scenario in which no information about neighbouring nodes is available to the learners. Our results show that although the algorithm is able to find the best-performing actions to enhance aggregate throughput, there is high variability in the throughput experienced by the individual networks. We identify the cause of this variability as the adversarial nature of our setup, in which the most played actions provide intermittent good/poor performance depending on the neighbouring decisions. We also evaluate the effect of the intrinsic learning parameters of the algorithm on this variability.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
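A stateless Q-learning update of the kind this abstract describes keeps one Q-value per (power, channel) action and updates it from the observed throughput alone, with no state and no bootstrapping term. A toy sketch (the throughput function is a placeholder, not the paper's simulator):

```python
import random

actions = [(p, ch) for p in (0, 1) for ch in (0, 1)]   # (tx power level, channel)
Q = {a: 0.0 for a in actions}
alpha, eps = 0.1, 0.1

def observed_throughput(action):
    # Placeholder reward: in the paper this is the throughput experienced after
    # playing `action`, which depends on the unobserved choices of neighbours.
    p, ch = action
    return random.random() * (1 + p) * (1.5 if ch == 0 else 1.0)

for t in range(10_000):
    # epsilon-greedy over actions; stateless, so there is no next-state term
    a = random.choice(actions) if random.random() < eps else max(Q, key=Q.get)
    r = observed_throughput(a)
    Q[a] += alpha * (r - Q[a])

print(max(Q, key=Q.get), Q)
```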
Exponential Ergodicity of the Bouncy Particle Sampler
Non-reversible Markov chain Monte Carlo schemes based on piecewise deterministic Markov processes have been recently introduced in applied probability, automatic control, physics and statistics. Although these algorithms demonstrate experimentally good performance and are accordingly increasingly used in a wide range of applications, geometric ergodicity results for such schemes have only been established so far under very restrictive assumptions. We give here verifiable conditions on the target distribution under which the Bouncy Particle Sampler algorithm introduced in \cite{P_dW_12} is geometrically ergodic. This holds whenever the target satisfies a curvature condition and has tails decaying at least as fast as an exponential and at most as fast as a Gaussian distribution. This allows us to provide a central limit theorem for the associated ergodic averages. When the target has tails thinner than a Gaussian distribution, we propose an original modification of this scheme that is geometrically ergodic. For thick-tailed target distributions, such as $t$-distributions, we extend the idea pioneered in \cite{J_G_12} in a random walk Metropolis context. We apply a change of variable to obtain a transformed target satisfying the tail conditions for geometric ergodicity. By sampling the transformed target using the Bouncy Particle Sampler and mapping back the Markov process to the original parameterization, we obtain a geometrically ergodic algorithm.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Analysis and X-ray tomography
These are lecture notes for the course "MATS4300 Analysis and X-ray tomography" given at the University of Jyväskylä in Fall 2017. The course is a broad overview of various tools in analysis that can be used to study X-ray tomography. The focus is on tools and ideas, not so much on technical details and minimal assumptions. Only very basic functional analysis is assumed as background. Exercise problems are included.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Spatially Transformed Adversarial Examples
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the $\mathcal{L}_p$ distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of $\mathcal{L}_p$ distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Arrow Categories of Monoidal Model Categories
We prove that the arrow category of a monoidal model category, equipped with the pushout product monoidal structure and the projective model structure, is a monoidal model category. This answers a question posed by Mark Hovey, and has the important consequence that it allows for the consideration of a monoidal product in cubical homotopy theory. As illustrations we include numerous examples of non-cofibrantly generated monoidal model categories, including chain complexes, small categories, topological spaces, and pro-categories.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Differential-operator representations of Weyl group and singular vectors
Given a suitable ordering of the positive root system associated with a semisimple Lie algebra, there exists a natural correspondence between Verma modules and related polynomial algebras. With this, the Lie algebra action on a Verma module can be interpreted as a differential operator action on polynomials, and thus on the corresponding truncated formal power series. We prove that the space of truncated formal power series is a differential-operator representation of the Weyl group $W$. We also introduce a system of partial differential equations to investigate singular vectors in the Verma module. It is shown that the solution space of the system in the space of truncated formal power series is the span of $\{w(1)\ |\ w\in W\}$. Those $w(1)$ that are polynomials correspond to singular vectors in the Verma module. This elementary approach by partial differential equations also gives a new proof of the well-known BGG-Verma Theorem.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Faithful Semitoric Systems
This paper consists of two parts. The first provides a review of the basic properties of integrable and almost-toric systems, with a particular emphasis on the integral affine structure associated to an integrable system. The second part introduces faithful semitoric systems, a generalization of semitoric systems (introduced by Vu Ngoc and classified by Pelayo and Vu Ngoc) that provides the language to develop surgeries on almost-toric systems in dimension 4. We prove that faithful semitoric systems are natural building blocks of almost-toric systems. Moreover, we show that they enjoy many of the properties that their (proper) semitoric counterparts do.
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
HoloScope: Topology-and-Spike Aware Fraud Detection
As online fraudsters invest more resources, including purchasing large pools of fake user accounts and dedicated IPs, fraudulent attacks become less obvious and their detection becomes increasingly challenging. Existing approaches such as average degree maximization suffer from the bias of including more nodes than necessary, resulting in lower accuracy and increased need for manual verification. Hence, we propose HoloScope, which uses information from graph topology and temporal spikes to more accurately detect groups of fraudulent users. In terms of graph topology, we introduce "contrast suspiciousness," a dynamic weighting approach, which allows us to more accurately detect fraudulent blocks, particularly low-density blocks. In terms of temporal spikes, HoloScope takes into account the sudden bursts and drops of fraudsters' attacking patterns. In addition, we provide theoretical bounds for how much this increases the time cost needed for fraudsters to conduct adversarial attacks. Additionally, from the perspective of ratings, HoloScope incorporates the deviation of rating scores in order to catch fraudsters more accurately. Moreover, HoloScope has a concise framework and sub-quadratic time complexity, making the algorithm reproducible and scalable. Extensive experiments showed that HoloScope achieved significant accuracy improvements on synthetic and real data, compared with state-of-the-art fraud detection methods.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Approximation Guarantees for Greedy Low Rank Optimization
We provide new approximation guarantees for greedy low rank matrix estimation under standard assumptions of restricted strong convexity and smoothness. Our novel analysis also uncovers previously unknown connections between low rank estimation and combinatorial optimization, so much so that our bounds are reminiscent of corresponding approximation bounds in submodular maximization. We also provide statistical recovery guarantees. Finally, we present an empirical comparison of greedy estimation with established baselines on two important real-world problems.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
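As a rough illustration of greedy low rank estimation in the least-squares case (not the authors' algorithm or guarantees), one can add one rank-1 component per stage, each fit to the current residual:

```python
import numpy as np

def greedy_low_rank(Y, r):
    """Greedily build a rank-r approximation of Y, one rank-1 term per stage."""
    X = np.zeros_like(Y)
    for _ in range(r):
        R = Y - X                               # current residual
        u, s, vt = np.linalg.svd(R, full_matrices=False)
        X += s[0] * np.outer(u[:, 0], vt[0])    # best rank-1 fit to the residual
    return X

rng = np.random.default_rng(2)
U = rng.normal(size=(30, 3)); V = rng.normal(size=(3, 40))
Y = U @ V + 0.01 * rng.normal(size=(30, 40))    # noisy rank-3 matrix
X = greedy_low_rank(Y, r=3)
print(np.linalg.norm(Y - X) / np.linalg.norm(Y))
```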
Topology Estimation in Bulk Power Grids: Guarantees on Exact Recovery
The topology of a power grid affects its dynamic operation and settlement in the electricity market. Real-time topology identification can enable faster control action following an emergency scenario like failure of a line. This article discusses a graphical model framework for topology estimation in bulk power grids (both loopy transmission and radial distribution) using measurements of voltage collected from the grid nodes. The graphical model for the probability distribution of nodal voltages in linear power flow models is shown to include additional edges along with the operational edges in the true grid. Our proposed estimation algorithms first learn the graphical model and subsequently extract the operational edges using either thresholding or a neighborhood counting scheme. For grid topologies containing no three-node cycles (two buses do not share a common neighbor), we prove that an exact extraction of the operational topology is theoretically guaranteed. This includes a majority of distribution grids that have radial topologies. For grids that include cycles of length three, we provide sufficient conditions that ensure existence of algorithms for exact reconstruction. In particular, for grids with constant impedance per unit length and uniform injection covariances, this observation leads to conditions on geographical placement of the buses. The performance of algorithms is demonstrated in test case simulations.
labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
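A toy version of the learn-then-extract pipeline this abstract describes, using the thresholding variant: estimate the inverse covariance of nodal voltages generated by a linearized flow model, then keep the large off-diagonal entries as operational edges. The chain network, threshold, and injection model below are illustrative choices of mine:

```python
import numpy as np

def estimate_edges(voltages, tau):
    """voltages: (samples, nodes). Threshold the inverse covariance to get edges."""
    Theta = np.linalg.inv(np.cov(voltages, rowvar=False))
    n = Theta.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(Theta[i, j]) > tau}

# Toy radial grid: chain 0-1-2-3. Voltages follow a linearized flow model L v = p.
L = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
rng = np.random.default_rng(3)
p = rng.normal(size=(5000, 4))                 # i.i.d. nodal injections
v = p @ np.linalg.inv(L).T                     # v = L^{-1} p, sample by sample

# The graphical model (inverse covariance ~ L @ L) also contains weaker two-hop
# entries, the "additional edges" of the abstract; thresholding removes them.
print(estimate_edges(v, tau=2.0))              # expect {(0, 1), (1, 2), (2, 3)}
```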
Wasserstein Introspective Neural Networks
We present Wasserstein introspective neural networks (WINN) that are both a generator and a discriminator within a single model. WINN provides a significant improvement over the recent introspective neural networks (INN) method by enhancing INN's generative modeling capability. WINN has three interesting properties: (1) A mathematical connection between the formulation of the INN algorithm and that of Wasserstein generative adversarial networks (WGAN) is made. (2) The explicit adoption of the Wasserstein distance into INN results in a large enhancement to INN, achieving compelling results even with a single classifier --- e.g., providing nearly a 20 times reduction in model size over INN for unsupervised generative modeling. (3) When applied to supervised classification, WINN also gives rise to improved robustness against adversarial examples in terms of the error reduction. In the experiments, we report encouraging results on unsupervised learning problems including texture, face, and object modeling, as well as a supervised classification task against adversarial attacks.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Convexity of level lines of Martin functions and applications
Let $\Omega$ be an unbounded domain in $\mathbb{R}\times\mathbb{R}^{d}.$ A positive harmonic function $u$ on $\Omega$ that vanishes on the boundary of $\Omega$ is called a Martin function. In this note, we show that, when $\Omega$ is convex, the superlevel sets of a Martin function are also convex. As a consequence we obtain that if in addition $\Omega$ is symmetric, then the maximum of any Martin function along a slice $\Omega\cap (\{t\}\times\mathbb{R}^d)$ is attained at $(t,0).$
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
New skein invariants of links
We introduce new skein invariants of links based on a procedure where we first apply the skein relation only to crossings of distinct components, so as to produce collections of unlinked knots. We then evaluate the resulting knots using a given invariant. A skein invariant can be computed on each link solely by the use of skein relations and a set of initial conditions. The new procedure, remarkably, leads to generalizations of the known skein invariants. We make skein invariants of classical links, $H[R]$, $K[Q]$ and $D[T]$, based on the invariants of knots, $R$, $Q$ and $T$, denoting the regular isotopy version of the Homflypt polynomial, the Kauffman polynomial and the Dubrovnik polynomial. We provide skein theoretic proofs of the well-definedness of these invariants. These invariants are also reformulated into summations of the generating invariants ($R$, $Q$, $T$) on sublinks of a given link $L$, obtained by partitioning $L$ into collections of sublinks.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
HSTREAM: A directive-based language extension for heterogeneous stream computing
Big data streaming applications require utilization of heterogeneous parallel computing systems, which may comprise multiple multi-core CPUs and many-core accelerating devices such as NVIDIA GPUs and Intel Xeon Phis. Programming such systems requires advanced knowledge of several hardware architectures and device-specific programming models, including OpenMP and CUDA. In this paper, we present HSTREAM, a compiler directive-based language extension to support programming stream computing applications for heterogeneous parallel computing systems. The HSTREAM source-to-source compiler aims to increase programming productivity by enabling programmers to annotate parallel regions for heterogeneous execution and generate target-specific code. The HSTREAM runtime automatically distributes the workload across CPUs and accelerating devices. We demonstrate the usefulness of the HSTREAM language extension with various applications from the STREAM benchmark. Experimental evaluation results show that HSTREAM can keep the same programming simplicity as OpenMP, and the generated code can deliver performance beyond what CPUs-only and GPUs-only executions can deliver.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Faddeev-Jackiw approach of the noncommutative spacetime Podolsky electromagnetic theory
The interest in higher-derivative field theories stems mainly from their influence on the renormalization properties of physical models and their capacity to remove ultraviolet divergences. The noncommutative Podolsky theory is a constrained system that cannot be directly quantized in the canonical way. In this work we use the Faddeev-Jackiw method to obtain the Dirac brackets of the NC Podolsky theory.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spin Transport and Accumulation in 2D Weyl Fermion System
In this work, we study the spin Hall effect and Rashba-Edelstein effect of a 2D Weyl fermion system in the clean limit using the Kubo formalism. Spin transport is solely due to the spin-torque current in this strongly spin-orbit coupled (SOC) system, and chiral spin-flip scattering off non-SOC scalar impurities, with potential strength $V$ and size $a$, gives rise to a skew-scattering mechanism for the spin Hall effect. The key result is that the resultant spin-Hall angle has a fixed sign, with $\theta^{SH} \sim O \left(\tfrac{V^2}{v_F^2/a^2} (k_F a)^4 \right)$ being a strongly-dependent function of $k_F a$, with $k_F$ and $v_F$ being the Fermi wave-vector and Fermi velocity respectively. This, therefore, allows for the possibility of tuning the SHE by adjusting the Fermi energy or impurity size.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Model-Based Control Using Koopman Operators
This paper explores the application of Koopman operator theory to the control of robotic systems. The operator is introduced as a method to generate data-driven models that have utility for model-based control methods. We then motivate the use of the Koopman operator towards augmenting model-based control. Specifically, we illustrate how the operator can be used to obtain a linearizable data-driven model for an unknown dynamical process that is useful for model-based control synthesis. Simulated results show that with increasing complexity in the choice of the basis functions, a closed-loop controller is able to invert and stabilize cart- and VTOL-pendulum systems. Furthermore, the specification of the basis functions is shown to be important when generating a Koopman operator for specific robotic systems. Experimental results with the Sphero SPRK robot explore the utility of the Koopman operator in a reduced state representation setting, where increased complexity in the basis functions improves open- and closed-loop controller performance in various terrains, including sand.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
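A common way to obtain the kind of linear data-driven model mentioned above is extended dynamic mode decomposition: lift states through basis functions and fit a least-squares Koopman matrix on snapshot pairs. A sketch with an illustrative pendulum and a hand-picked basis (my own example, not the paper's setup):

```python
import numpy as np

def lift(x):
    """Basis functions psi(x); richer bases capture more of the nonlinearity,
    echoing the abstract's point about basis-function choice."""
    th, om = x
    return np.array([1.0, th, om, np.sin(th), np.cos(th), th * om])

def step(x, dt=0.01):
    th, om = x                                  # illustrative pendulum dynamics
    return np.array([th + dt * om, om - dt * 9.81 * np.sin(th)])

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(2000, 2))          # sampled states
Psi = np.array([lift(x) for x in X])            # lifted snapshots
PsiP = np.array([lift(step(x)) for x in X])     # lifted successors

# Least-squares Koopman matrix K with psi(x_{k+1}) ~ K psi(x_k);
# the lifted dynamics are LINEAR, which is what model-based control exploits.
K = np.linalg.lstsq(Psi, PsiP, rcond=None)[0].T

x0 = np.array([0.5, 0.0])
print(step(x0), (K @ lift(x0))[1:3])            # true next state vs. lifted prediction
```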
Turbulence Hierarchy in a Random Fibre Laser
Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here, we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold, and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov's theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimal Rates for Learning with Nyström Stochastic Gradient Methods
In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches. Generalization error bounds for the studied algorithm are provided. Particularly, optimal learning rates are derived considering different possible choices of the step-size, the mini-batch size, the number of iterations/passes, and the subsampling level. In comparison with state-of-the-art algorithms such as the classic stochastic gradient methods and kernel ridge regression with Nyström, the studied algorithm has advantages on the computational complexity, while achieving the same optimal learning rates. Moreover, our results indicate that using mini-batches can reduce the total computational cost while achieving the same optimal statistical results.
labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
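A sketch of the algorithm family studied above, assuming a Gaussian kernel: Nyström subsampling on m landmark points yields a finite-dimensional feature map, and mini-batch SGD with multiple passes runs on those features. The step size, batch size, and subsampling level (the quantities the rates are stated in) take illustrative values here:

```python
import numpy as np

def gaussian_kernel(A, B, s=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

rng = np.random.default_rng(5)
n, m, batch, eta = 2000, 50, 32, 0.5           # m is the Nystrom subsampling level
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)

# Nystrom feature map phi(x) = K_mm^{-1/2} k_m(x) built from m landmark points.
idx = rng.choice(n, size=m, replace=False)
Kmm = gaussian_kernel(X[idx], X[idx]) + 1e-8 * np.eye(m)
evals, V = np.linalg.eigh(Kmm)
Phi = gaussian_kernel(X, X[idx]) @ (V @ np.diag(evals ** -0.5) @ V.T)

# Mini-batch SGD with multiple passes over the data.
w = np.zeros(m)
for t in range(2000):
    b = rng.choice(n, size=batch, replace=False)
    w -= eta * Phi[b].T @ (Phi[b] @ w - y[b]) / batch

print(np.mean((Phi @ w - y) ** 2))             # training MSE of the fitted model
```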
Run Procrustes, Run! On the convergence of accelerated Procrustes Flow
In this work, we present theoretical results on the convergence of non-convex accelerated gradient descent in matrix factorization models. The technique is applied to matrix sensing problems with squared loss, for the estimation of a rank $r$ optimal solution $X^\star \in \mathbb{R}^{n \times n}$. We show that acceleration leads to a linear convergence rate, even under non-convex settings where the variable $X$ is represented as $U U^\top$ for $U \in \mathbb{R}^{n \times r}$. Our result has the same dependence on the condition number of the objective --and the optimal solution-- as that of the recent results on non-accelerated algorithms. However, acceleration is observed in practice, both in synthetic examples and in two real applications: neuronal multi-unit activities recovery from single electrode recordings, and quantum state tomography on quantum computing simulators.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
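A minimal sketch of accelerated gradient descent on the factored variable (with X = U U^T) for matrix sensing with squared loss, using spectral initialization; the step size and momentum are illustrative choices of mine rather than the paper's tuned scheme:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, m = 20, 2, 400
Ustar = rng.normal(size=(n, r))
Xstar = Ustar @ Ustar.T
A = rng.normal(size=(m, n, n)) / np.sqrt(m)    # Gaussian sensing matrices
b = np.einsum('mij,ij->m', A, Xstar)           # measurements b_k = <A_k, X*>

def grad(U):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - b
    G = np.einsum('m,mij->ij', resid, A)
    return (G + G.T) @ U                       # gradient of 0.5*||A(UU^T) - b||^2

# Spectral initialization: top-r eigenpairs of the backprojection sum_k b_k A_k.
M = np.einsum('m,mij->ij', b, A)
w, Q = np.linalg.eigh((M + M.T) / 2)
U = Q[:, -r:] * np.sqrt(np.clip(w[-r:], 0, None))

V = U.copy()                                   # extrapolation (momentum) iterate
eta, beta = 2e-3, 0.9                          # illustrative step size and momentum
for t in range(1000):
    Unew = V - eta * grad(V)                   # gradient step at extrapolated point
    V = Unew + beta * (Unew - U)               # Nesterov-style extrapolation
    U = Unew

print(np.linalg.norm(U @ U.T - Xstar) / np.linalg.norm(Xstar))
```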
A note on the bijectivity of antipode of a Hopf algebra and its applications
We give sufficient homological and ring-theoretic conditions for a Hopf algebra to have a bijective antipode, with applications to the homological behavior of noetherian Hopf algebras.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Perfect Edge Domination: Hard and Solvable Cases
Let $G$ be an undirected graph. An edge of $G$ dominates itself and all edges adjacent to it. A subset $E'$ of edges of $G$ is an edge dominating set of $G$, if every edge of the graph is dominated by some edge of $E'$. We say that $E'$ is a perfect edge dominating set of $G$, if every edge not in $E'$ is dominated by exactly one edge of $E'$. The perfect edge dominating problem is to determine a least cardinality perfect edge dominating set of $G$. For this problem, we describe two NP-completeness proofs, for the classes of claw-free graphs of degree at most 3, and for bounded degree graphs, of maximum degree at most $d \geq 3$ and large girth. In contrast, we prove that the problem admits an $O(n)$ time solution, for cubic claw-free graphs. In addition, we prove a complexity dichotomy theorem for the perfect edge domination problem, based on the results described in the paper. Finally, we describe a linear time algorithm for finding a minimum weight perfect edge dominating set of a $P_5$-free graph. The algorithm is robust, in the sense that, given an arbitrary graph $G$, either it computes a minimum weight perfect edge dominating set of $G$, or it exhibits an induced subgraph of $G$, isomorphic to a $P_5$.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
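Because the abstract fixes the definitions precisely, a brute-force reference implementation is easy to write and handy for sanity-checking heuristics on small graphs (exponential time, illustration only):

```python
from itertools import combinations

def dominated_by(e, f):
    """Edge f dominates edge e iff e = f or they share an endpoint."""
    return e == f or bool(set(e) & set(f))

def is_perfect_eds(edges, Eprime):
    Eprime = set(Eprime)
    for e in edges:
        if e in Eprime:
            continue
        # every edge NOT in E' must be dominated by EXACTLY one edge of E'
        if sum(dominated_by(e, f) for f in Eprime) != 1:
            return False
    return True

def min_perfect_eds(edges):
    for k in range(len(edges) + 1):            # smallest cardinality first
        for cand in combinations(edges, k):
            if is_perfect_eds(edges, cand):
                return set(cand)
    return None

p5 = [(0, 1), (1, 2), (2, 3), (3, 4)]          # P_5: path on 5 vertices
print(min_perfect_eds(p5))                     # {(0, 1), (3, 4)}
```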
On the presentation of Hecke-Hopf algebras for non-simply-laced type
Hecke-Hopf algebras were defined by A. Berenstein and D. Kazhdan. We give an explicit presentation of a Hecke-Hopf algebra when the parameter $m_{ij},$ associated to any two distinct vertices $i$ and $j$ in the presentation of a Coxeter group, equals $4,$ $5$ or $6$. As an application, we give a proof of a conjecture of Berenstein and Kazhdan when the Coxeter group is crystallographic and non-simply-laced. As another application, we show that another conjecture of Berenstein and Kazhdan holds when $m_{ij},$ associated to any two distinct vertices $i$ and $j,$ equals $4$, and that the conjecture fails when some $m_{ij}$ equals $6$, by giving a counterexample.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
ILP-based Alleviation of Dense Meander Segments with Prioritized Shifting and Progressive Fixing in PCB Routing
Length-matching is an important technique to balance delays of bus signals in high-performance PCB routing. Existing routers, however, may generate very dense meander segments. Signals propagating along these meander segments exhibit a speedup effect due to crosstalk between the segments of the same wire, thus leading to mismatch of arrival times even under the same physical wire length. In this paper, we present a post-processing method to enlarge the width and the distance of meander segments and hence distribute them more evenly on the board so that crosstalk can be reduced. In the proposed framework, we model the sharing of available routing areas after removing dense meander segments from the initial routing, as well as the generation of relaxed meander segments and their groups for wire length compensation. This model is transformed into an ILP problem and solved for a balanced distribution of wire patterns. In addition, we adjust the locations of long wire segments according to wire priorities to swap free spaces toward critical wires that need much length compensation. To reduce the problem space of the ILP model, we also introduce a progressive fixing technique so that wire patterns are grown gradually from the edge of the routing toward the center area. Experimental results show that the proposed method can expand meander segments significantly even under very tight area constraints, so that the speedup effect can be alleviated effectively in high-performance PCB designs.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Membrane Trafficking in the Yeast Saccharomyces cerevisiae Model
The yeast Saccharomyces cerevisiae is one of the best characterized eukaryotic models. The secretory pathway was the first trafficking pathway clearly understood, mainly thanks to the work done in the laboratory of Randy Schekman in the 1980s. His laboratory isolated yeast sec mutants unable to secrete an extracellular enzyme, and these SEC genes were identified as encoding key effectors of the secretory machinery. For this work, the 2013 Nobel Prize in Physiology and Medicine was awarded to Randy Schekman; the prize is shared with James Rothman and Thomas Südhof. Here, we present the different trafficking pathways of yeast S. cerevisiae. At the Golgi apparatus, newly synthesized proteins are sorted between those transported to the plasma membrane (PM), or the external medium, via the exocytosis or secretory pathway (SEC), and those targeted to the vacuole either through endosomes (vacuolar protein sorting or VPS pathway) or directly (alkaline phosphatase or ALP pathway). Plasma membrane proteins can be internalized by endocytosis (END) and transported to endosomes where they are sorted between those targeted for vacuolar degradation and those redirected to the Golgi (recycling or RCY pathway). Studies in yeast S. cerevisiae allowed the identification of most of the known effectors, protein complexes, and trafficking pathways in eukaryotic cells, and most of them are conserved among eukaryotes.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Synthesizing Programs for Images using Reinforced Adversarial Learning
Advances in deep generative networks have led to impressive results in recent years. Nevertheless, such models can often waste their capacity on the minutiae of datasets, presumably due to weak inductive biases in their decoders. This is where graphics engines may come in handy since they abstract away low-level details and represent images as high-level programs. Current methods that combine deep learning and renderers are limited by hand-crafted likelihood or distance functions, a need for large amounts of supervision, or difficulties in scaling their inference algorithms to richer datasets. To mitigate these issues, we present SPIRAL, an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images. The goal of this agent is to fool a discriminator network that distinguishes between real and rendered data, trained with a distributed reinforcement learning setup without any supervision. A surprising finding is that using the discriminator's output as a reward signal is the key to allow the agent to make meaningful progress at matching the desired output rendering. To the best of our knowledge, this is the first demonstration of an end-to-end, unsupervised and adversarial inverse graphics agent on challenging real world (MNIST, Omniglot, CelebA) and synthetic 3D datasets.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Correspondence Between Random Neural Networks and Statistical Field Theory
A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad-hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.
labels: cs=1, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
The nature of the progenitor of the M31 North-western stream: globular clusters as milestones of its orbit
We examine the nature, possible orbits and physical properties of the progenitor of the North-western stellar stream (NWS) in the halo of the Andromeda galaxy (M31). The progenitor is assumed to be an accreting dwarf galaxy with globular clusters (GCs). It is, in general, difficult to determine the progenitor's orbit precisely because of the many parameters involved. Recently, Veljanoski et al. (2014) reported five GCs whose positions and radial velocities suggest an association with the stream. We use these data to constrain the orbital motions of the progenitor using test-particle simulations. Our simulations split the orbit solutions into two branches according to whether the stream ends up in the foreground or in the background of M31. Upcoming observations that will determine the distance to the NWS will be able to reject one of the two branches. In either case, the solutions require that the pericentric radius of any possible orbit be over 2 kpc. We estimate the efficiency of the tidal disruption and confirm its consistency with the assumption that the progenitor is a dwarf galaxy. The progenitor requires a mass $\gtrsim 2\times10^6 M_{\odot}$ and a half-light radius $\gtrsim 30$ pc. In addition, $N$-body simulations successfully reproduce the basic observed features of the NWS and the GCs' line-of-sight velocities.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On codimension two flats in Fermat-type arrangements
In the present note we study certain arrangements of codimension $2$ flats in projective spaces, we call them "Fermat arrangements". We describe algebraic properties of their defining ideals. In particular, we show that they provide counterexamples to an expected containment relation between ordinary and symbolic powers of homogeneous ideals.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Invariant Causal Prediction for Sequential Data
We investigate the problem of inferring the causal predictors of a response $Y$ from a set of $d$ explanatory variables $(X^1,\dots,X^d)$. Classical ordinary least squares regression includes all predictors that reduce the variance of $Y$. Using only the causal predictors instead leads to models that have the advantage of remaining invariant under interventions, loosely speaking they lead to invariance across different "environments" or "heterogeneity patterns". More precisely, the conditional distribution of $Y$ given its causal predictors remains invariant for all observations. Recent work exploits such a stability to infer causal relations from data with different but known environments. We show that even without having knowledge of the environments or heterogeneity pattern, inferring causal relations is possible for time-ordered (or any other type of sequentially ordered) data. In particular, this allows detecting instantaneous causal relations in multivariate linear time series which is usually not the case for Granger causality. Besides novel methodology, we provide statistical confidence bounds and asymptotic detection results for inferring causal predictors, and present an application to monetary policy in macroeconomics.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
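A toy version of the invariance idea above: for each candidate predictor set S, regress Y on X_S, split the time-ordered sample into blocks, and keep S only if the residuals look identically distributed across blocks. The two-block t-test below is a crude stand-in for the paper's actual tests, and all names are illustrative:

```python
import numpy as np
from scipy import stats

def invariant_sets(X, y, alpha=0.05):
    n, d = X.shape
    blocks = np.array_split(np.arange(n), 2)           # time-ordered halves
    accepted = []
    for mask in range(1, 2 ** d):                      # all nonempty subsets S
        S = [j for j in range(d) if mask >> j & 1]
        beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        r = y - X[:, S] @ beta
        # crude invariance check: residual means equal across the two blocks
        if stats.ttest_ind(r[blocks[0]], r[blocks[1]]).pvalue > alpha:
            accepted.append(S)
    return accepted

rng = np.random.default_rng(7)
n = 1000
shift = np.where(np.arange(n) < n // 2, 0.0, 2.0)      # hidden "environment" change
x1 = rng.normal(size=n) + shift                        # cause, affected by the shift
y = 2 * x1 + rng.normal(size=n)
x2 = y + rng.normal(size=n)                            # effect of y, not a cause
print(invariant_sets(np.column_stack([x1, x2]), y))    # sets containing only x2 fail
```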
Smoothing with Couplings of Conditional Particle Filters
In state space models, smoothing refers to the task of estimating a latent stochastic process given noisy measurements related to the process. We propose an unbiased estimator of smoothing expectations. The lack-of-bias property has methodological benefits: independent estimators can be generated in parallel, and confidence intervals can be constructed from the central limit theorem to quantify the approximation error. To design unbiased estimators, we combine a generic debiasing technique for Markov chains with a Markov chain Monte Carlo algorithm for smoothing. The resulting procedure is widely applicable and we show in numerical experiments that the removal of the bias comes at a manageable increase in variance. We establish the validity of the proposed estimators under mild assumptions. Numerical experiments are provided on toy models, including a setting of highly-informative observations, and a realistic Lotka-Volterra model with an intractable transition density.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Formation of High Pressure Gradients at the Free Surface of a Liquid Dielectric in a Tangential Electric Field
The nonlinear dynamics of the free surface of an ideal incompressible non-conducting fluid with a high dielectric constant subjected to a strong horizontal electric field is simulated using the method of conformal transformations. It is demonstrated that the interaction of counter-propagating waves leads to the formation of regions with a steep wave front at the fluid surface; the angles of the boundary inclination tend to $\pi/2$, and the curvature of the surface increases dramatically. A significant concentration of the energy of the system occurs at these points. From the physical point of view, the appearance of these singularities corresponds to the formation of regions at the fluid surface where the pressure exerted by the electric field undergoes a discontinuity and the dynamical pressure increases by almost an order of magnitude.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Subsampling for Ridge Regression via Regularized Volume Sampling
Given $n$ vectors $\mathbf{x}_i\in \mathbb{R}^d$, we want to fit a linear regression model for noisy labels $y_i\in\mathbb{R}$. The ridge estimator is a classical solution to this problem. However, when labels are expensive, we are forced to select only a small subset of vectors $\mathbf{x}_i$ for which we obtain the labels $y_i$. We propose a new procedure for selecting the subset of vectors, such that the ridge estimator obtained from that subset offers strong statistical guarantees in terms of the mean squared prediction error over the entire dataset of $n$ labeled vectors. The number of labels needed is proportional to the statistical dimension of the problem which is often much smaller than $d$. Our method is an extension of a joint subsampling procedure called volume sampling. A second major contribution is that we speed up volume sampling so that it is essentially as efficient as leverage scores, which is the main i.i.d. subsampling procedure for this task. Finally, we show theoretically and experimentally that volume sampling has a clear advantage over any i.i.d. sampling when labels are expensive.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
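The setting above is easy to state in code: labels are purchased for only k of the n rows, a ridge estimator is fit on that subset, and prediction error is measured over all n rows. Plain uniform subsampling stands in below for regularized volume sampling, which is the paper's actual contribution:

```python
import numpy as np

def ridge(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(8)
n, d, k, lam = 5000, 10, 60, 1.0
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

# Buy labels only for k rows (uniform i.i.d. here; volume sampling instead picks
# subsets with probability proportional to a determinant of the subsampled
# covariance, which is what yields the paper's guarantees).
S = rng.choice(n, size=k, replace=False)
w_hat = ridge(X[S], y[S], lam)

mspe = np.mean((X @ w_hat - X @ w_true) ** 2)   # prediction error over ALL n rows
print(mspe)
```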
Ab initio calculations of the concentration dependent band gap reduction in dilute nitrides
While being of persistent interest for the integration of lattice-matched laser devices with silicon circuits, the electronic structure of dilute nitride III/V-semiconductors has presented a challenge to ab initio computational approaches. The root of this lies in the strong distortion N atoms exert on most host materials. Here, we resolve these issues by combining density functional theory calculations based on the meta-GGA functional presented by Tran and Blaha (TB09) with a supercell approach for the dilute nitride Ga(NAs). Exploring the requirements posed to supercells, we show that the distortion field of a single N atom must be allowed to decay far enough that it does not overlap with its periodic images. This also prevents spurious electronic interactions between translationally symmetric atoms, allowing us to compute band gaps in very good agreement with experimentally derived reference values. These results open up the field of dilute nitride compound semiconductors to predictive ab initio calculations.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Outliers and related problems
We define outliers as a set of observations which contradicts the proposed mathematical (statistical) model, and we discuss the frequently observed types of outliers. Further, we explore what changes in the model have to be made in order to avoid the occurrence of outliers. We observe that some variants of outliers lead to classical results in probability, such as the law of large numbers and the concept of heavy-tailed distributions. Key words: outlier; the law of large numbers; heavy tailed distributions; model rejection.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
On Quadratic Convergence of DC Proximal Newton Algorithm for Nonconvex Sparse Learning in High Dimensions
We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions. Our proposed algorithm integrates the proximal Newton algorithm with multi-stage convex relaxation based on the difference of convex (DC) programming, and enjoys both strong computational and statistical guarantees. Specifically, by leveraging a sophisticated characterization of sparse modeling structures/assumptions (i.e., local restricted strong convexity and Hessian smoothness), we prove that within each stage of convex relaxation, our proposed algorithm achieves (local) quadratic convergence, and eventually obtains a sparse approximate local optimum with optimal statistical properties after only a few convex relaxations. Numerical experiments are provided to support our theory.
labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning
In reinforcement learning, agents learn by performing actions and observing their outcomes. Sometimes, it is desirable for a human operator to \textit{interrupt} an agent in order to prevent dangerous situations from happening. Yet, as part of their learning process, agents may link these interruptions, that impact their reward, to specific states and deliberately avoid them. The situation is particularly challenging in a multi-agent context because agents might not only learn from their own past interruptions, but also from those of other agents. Orseau and Armstrong defined \emph{safe interruptibility} for one learner, but their work does not naturally extend to multi-agent systems. This paper introduces \textit{dynamic safe interruptibility}, an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: \textit{joint action learners} and \textit{independent learners}. We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. We show however that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Heuristic Optimization for Automated Distribution System Planning in Network Integration Studies
Network integration studies try to assess the impact of future developments, such as the increase of Renewable Energy Sources or the introduction of Smart Grid Technologies, on large-scale network areas. Goals can be to support strategic alignment in the regulatory framework or to adapt the network planning principles of Distribution System Operators. This study outlines an approach for the automated distribution system planning that can calculate network reconfiguration, reinforcement and extension plans in a fully automated fashion. This allows the estimation of the expected cost in massive probabilistic simulations of large numbers of real networks and constitutes a core component of a framework for large-scale network integration studies. Exemplary case study results are presented that were performed in cooperation with different major distribution system operators. The case studies cover the estimation of expected network reinforcement costs, technical and economical assessment of smart grid technologies and structural network optimisation.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Sizes and Depletions of the Dust and Gas Cavities in the Transitional Disk J160421.7-213028
We report ALMA Cycle 2 observations of 230 GHz (1.3 mm) dust continuum emission, and $^{12}$CO, $^{13}$CO, and C$^{18}$O J = 2-1 line emission, from the Upper Scorpius transitional disk [PZ99] J160421.7-213028, with an angular resolution of ~0".25 (35 AU). Armed with these data and existing H-band scattered light observations, we measure the size and depth of the disk's central cavity, and the sharpness of its outer edge, in three components: sub-$\mu$m-sized "small" dust traced by scattered light, millimeter-sized "big" dust traced by the millimeter continuum, and gas traced by line emission. Both dust populations feature a cavity of radius $\sim$70 AU that is depleted by factors of at least 1000 relative to the dust density just outside. The millimeter continuum data are well explained by a cavity with a sharp edge. Scattered light observations can be fitted with a cavity in small dust that has either a sharp edge at 60 AU, or an edge that transitions smoothly over an annular width of 10 AU near 60 AU. In gas, the data are consistent with a cavity that is smaller, about 15 AU in radius, and whose surface density at 15 AU is $10^{3\pm1}$ times smaller than the surface density at 70 AU; the gas density grades smoothly between these two radii. The CO isotopologue observations rule out a sharp drop in gas surface density at 30 AU or a double-drop model as found by previous modeling. Future observations are needed to assess the nature of these gas and dust cavities, e.g., whether they are opened by multiple as-yet-unseen planets or photoevaporation.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Demystifying AlphaGo Zero as AlphaGo GAN
The astonishing success of AlphaGo Zero\cite{Silver_AlphaGo} has invoked a worldwide discussion of the future of our human society with a mixed mood of hope, anxiousness, excitement and fear. We try to demystify AlphaGo Zero by a qualitative analysis indicating that AlphaGo Zero can be understood as a specially structured GAN system, which is expected to possess an inherent good convergence property. We thus deduce that the success of AlphaGo Zero may not be a sign of a new generation of AI.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Effects of pressure and magnetic field on the re-entrant superconductor Eu(Fe$_{0.93}$Rh$_{0.07}$)$_2$As$_2$
Electron-doped Eu(Fe$_{0.93}$Rh$_{0.07}$)$_2$As$_2$ has been systematically studied by high-pressure investigations of the magnetic and electrical transport properties, in order to unravel the complex interplay of superconductivity and magnetism. The compound reveals an exceedingly broad re-entrant transition to the superconducting state between $T_{\rm{c,on}} = 19.8$ K and $T_{\rm{c,0}} = 5.2$ K due to a canted A-type antiferromagnetic ordering of the Eu$^{2+}$ moments at $T_{\rm{N}} = 16.6$ K and a re-entrant spin glass transition at $T_{\rm{SG}} = 14.1$ K. At ambient pressure, evidence for the coexistence of superconductivity and ferromagnetism was observed, as well as a magnetic-field-induced enhancement of the zero-resistance temperature $T_{\rm{c,0}}$ up to $7.2$ K with small magnetic fields applied parallel to the \textit{ab}-plane of the crystal. We attribute the field-induced enhancement of superconductivity to the suppression of the ferromagnetic component of the Eu$^{2+}$ moments along the \textit{c}-axis, which leads to a reduction of the orbital pair breaking effect. Application of hydrostatic pressure suppresses the superconducting state around $14$ kbar along with a linear temperature dependence of the resistivity, implying that a non-Fermi liquid region is located at the boundary of the superconducting phase. At intermediate pressure, an additional feature in the resistivity curves is identified, which can be suppressed by external magnetic fields and competes with the superconducting phase. We suggest that the effect of negative pressure by the chemical Rh substitution in Eu(Fe$_{0.93}$Rh$_{0.07}$)$_2$As$_2$ is partially reversed, leading to a re-activation of the spin density wave.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Commissioning and Operation
Chapter 16 in High-Luminosity Large Hadron Collider (HL-LHC): Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Only in the standard representation the Dirac theory is a quantum theory of a single fermion
It is shown that the relativistic quantum mechanics of a single fermion can be developed only on the basis of the standard representation of the Dirac bispinor. As in the nonrelativistic quantum mechanics, the arbitrariness in defining the bispinor, as a four-component wave function, is restricted by its multiplication by an arbitrary phase factor. We reveal the role of the large and small components of the bispinor, establish their link in the nonrelativistic limit with the Pauli spinor, as well as explain the role of states with negative energies. The Klein tunneling is treated here as a physical phenomenon analogous to the propagation of the electromagnetic wave in a medium with negative dielectric permittivity and permeability. For the case of localized stationary states we define the effective one-particle operators which act in the space of the large component but contain the contributions of both components. The effective operator of energy is presented in a compact analytical form.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stable absorbing boundary conditions for molecular dynamics in general domains
A new type of absorbing boundary condition for molecular dynamics simulations is presented. The exact boundary conditions for crystalline solids with harmonic approximation are expressed as a dynamic Dirichlet-to-Neumann (DtN) map. It connects the displacement of the atoms at the boundary to the traction on these atoms. The DtN map is valid for a domain with general geometry. To avoid evaluating the time convolution of the dynamic DtN map, we approximate the associated kernel function by rational functions in the Laplace domain. The parameters in the approximations are determined by interpolation. The explicit forms of the zeroth-, first-, and second-order approximations are presented. The stability of the molecular dynamics model, supplemented with these absorbing boundary conditions, is established. Two numerical simulations are performed to demonstrate the effectiveness of the methods.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Algebraic operads up to homotopy
This paper deals with the homotopy theory of differential graded operads. We endow the Koszul dual category of curved conilpotent cooperads, where the notion of quasi-isomorphism barely makes sense, with a model category structure Quillen equivalent to that of operads. This allows us to describe the homotopy properties of differential graded operads in a simpler and richer way, using obstruction methods.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Nonlinear Kalman Filtering with Divergence Minimization
We consider the nonlinear Kalman filtering problem using Kullback-Leibler (KL) and $\alpha$-divergence measures as optimization criteria. Unlike linear Kalman filters, nonlinear Kalman filters do not have closed form Gaussian posteriors because of a lack of conjugacy due to the nonlinearity in the likelihood. In this paper we propose novel algorithms to optimize the forward and reverse forms of the KL divergence, as well as the $\alpha$-divergence which contains these two as limiting cases. Unlike previous approaches, our algorithms do not make approximations to the divergences being optimized, but use Monte Carlo integration techniques to derive unbiased algorithms for direct optimization. We assess performance on radar and sensor tracking, and options pricing problems, showing general improvement over the UKF and EKF, as well as competitive performance with particle filtering.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
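One of the two divergence directions above has a classical interpretation: the Gaussian q minimizing the forward KL divergence KL(p||q) matches the moments of the true posterior, and those moments can be estimated by self-normalized importance sampling, in the spirit of the Monte Carlo approach the abstract describes. A sketch of a single measurement update under an assumed scalar nonlinear observation model (an illustration of the idea, not the authors' algorithms; function names are mine):

```python
import numpy as np

def mc_gaussian_update(mu, P, y, h, R, n_samples=100_000, rng=None):
    """One nonlinear Kalman measurement update by Monte Carlo moment matching:
    the Gaussian q minimizing KL(p||q) has the posterior's mean and covariance."""
    rng = rng or np.random.default_rng(0)
    x = rng.multivariate_normal(mu, P, size=n_samples)   # samples from the prior
    resid = y - h(x)                                     # nonlinear observation model
    logw = -0.5 * resid ** 2 / R                         # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()                                         # self-normalized weights
    mu_post = w @ x
    xc = x - mu_post
    P_post = (w[:, None] * xc).T @ xc                    # weighted covariance
    return mu_post, P_post

# Scalar-observation example: x in R^2 observed through h(x) = ||x||.
h = lambda x: np.linalg.norm(x, axis=1)
mu, P = mc_gaussian_update(np.zeros(2), np.eye(2), y=1.0, h=h, R=0.1 ** 2)
print(mu, P)
```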
On Chern number inequality in dimension 3
We prove that if $X \dashrightarrow X^+$ is a threefold terminal flip, then $c_1(X)\cdot c_2(X)\leq c_1(X^+)\cdot c_2(X^+)$, where $c_1(X)$ and $c_2(X)$ denote the Chern classes. This gives an affirmative answer to a question of Xie \cite{Xie2}. We obtain a similar but weaker result in the case of divisorial contractions to curves.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Enhancing SDO/HMI images using deep learning
The Helioseismic and Magnetic Imager (HMI) provides continuum images and magnetograms with a cadence better than one per minute. It has been continuously observing the Sun 24 hours a day for the past 7 years. The obvious trade-off between full-disk observations and spatial resolution means HMI is not sufficient for analyzing the smallest-scale events in the solar atmosphere. Our aim is to develop a new method to enhance HMI data, simultaneously deconvolving and super-resolving images and magnetograms. The resulting images mimic observations with a diffraction-limited telescope twice the diameter of HMI. Our method, which we call Enhance, is based on two deep fully convolutional neural networks that take patches of HMI observations as input and output deconvolved and super-resolved data. The neural networks are trained on synthetic data obtained from simulations of the emergence of solar active regions. We have obtained deconvolved and super-resolved HMI images. To solve this ill-defined problem with infinitely many solutions, we use a neural network approach to add prior information from the simulations. We test Enhance against Hinode data that have been degraded to a 28 cm diameter telescope, showing very good consistency. The code is open source.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Suppression of the superconductivity in ultrathin amorphous Mo$_{78}$Ge$_{22}$ thin films observed by STM
In contact with a superconductor, a normal metal modifies its properties due to Andreev reflection. In the current work, the local density of states (LDOS) of superconductor - normal metal Mo$_{78}$Ge$_{22}$ - Au bilayers is studied by means of STM applied from the Au side. Three bilayers were prepared on a silicate glass substrate, consisting of 100, 10 and 5 nm MoGe thin films, each covered by a 5 nm Au layer. The tunneling spectra were measured at temperatures from 0.5 K to 7 K. The two-dimensional cross-correlation between topography and normalized zero-bias conductance (ZBC) indicates a proximity effect between the 100 and 10 nm MoGe thin films and the Au layer, where a superconducting gap slightly smaller than that of bulk MoGe is observed. The effect of the thinnest 5 nm MoGe layer on Au leads to a much smaller gap; moreover, the LDOS reveals almost completely suppressed coherence peaks. This is attributed to a strong pair-breaking effect of spin-flip processes at the interface between the MoGe films and the substrate.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Functional data analysis in the Banach space of continuous functions
Functional data analysis is typically conducted within the $L^2$-Hilbert space framework. There is by now a fully developed statistical toolbox allowing for the principled application of the functional data machinery to real-world problems, often based on dimension reduction techniques such as functional principal component analysis. At the same time, there have recently been a number of publications that sidestep dimension reduction steps and focus on a fully functional $L^2$-methodology. This paper goes one step further and develops data analysis methodology for functional time series in the space of all continuous functions. The work is motivated by the fact that objects with rather different shapes may still have a small $L^2$-distance and are therefore identified as similar when using an $L^2$-metric. However, in applications it is often desirable to use metrics reflecting the visualization of the curves in the statistical analysis. The methodological contributions are focused on developing two-sample and change-point tests as well as confidence bands, as these procedures appear to be conducive to the proposed setting. Particular interest is put on relevant differences; that is, on not trying to test for exact equality, but rather for pre-specified deviations under the null hypothesis. The procedures are justified through large-sample theory. To ensure practicability, non-standard bootstrap procedures are developed and investigated addressing particular features that arise in the problem of testing relevant hypotheses. The finite sample properties are explored through a simulation study and an application to annual temperature profiles.
0
0
1
1
0
0
Bayesian Recurrent Neural Networks
In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, while also reducing the number of parameters by 80\%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improves our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.
1
0
0
1
0
0
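A minimal sketch of the variational weight sampling behind a Bayesian RNN of the kind the abstract above describes ("Bayes by Backprop" style): each weight matrix carries a Gaussian posterior, sampled afresh per forward pass, with a KL penalty added to the loss. The sizes and the standard-normal prior are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))

    def forward(self, x):
        sigma = F.softplus(self.rho)                   # ensures sigma > 0
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterized sample
        # KL(q || N(0,1)), to be added to the training loss
        self.kl = (0.5 * (sigma**2 + self.mu**2 - 1) - torch.log(sigma)).sum()
        return F.linear(x, w)

# One vanilla-RNN step with Bayesian input-to-hidden weights:
cell = BayesLinear(10 + 32, 32)
x_t, h = torch.randn(4, 10), torch.zeros(4, 32)
h = torch.tanh(cell(torch.cat([x_t, h], dim=1)))  # next hidden state
loss_kl = cell.kl  # in practice, scaled by 1/num_batches
```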
Cross-Correlation Redshift Calibration Without Spectroscopic Calibration Samples in DES Science Verification Data
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of $\Delta z \sim \pm 0.01$. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
0
1
0
0
0
0
Completely bounded bimodule maps and spectral synthesis
We initiate the study of the completely bounded multipliers of the Haagerup tensor product $A(G)\otimes_{\rm h} A(G)$ of two copies of the Fourier algebra $A(G)$ of a locally compact group $G$. If $E$ is a closed subset of $G$ we let $E^{\sharp} = \{(s,t) : st\in E\}$ and show that if $E^{\sharp}$ is a set of spectral synthesis for $A(G)\otimes_{\rm h} A(G)$ then $E$ is a set of local spectral synthesis for $A(G)$. Conversely, we prove that if $E$ is a set of spectral synthesis for $A(G)$ and $G$ is a Moore group then $E^{\sharp}$ is a set of spectral synthesis for $A(G)\otimes_{\rm h} A(G)$. Using the natural identification of the space of all completely bounded weak* continuous $VN(G)'$-bimodule maps with the dual of $A(G)\otimes_{\rm h} A(G)$, we show that, in the case $G$ is weakly amenable, such a map leaves the multiplication algebra of $L^{\infty}(G)$ invariant if and only if its support is contained in the antidiagonal of $G$.
0
0
1
0
0
0
Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation
Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose Distill-and-Compare, a model distillation and comparison approach to audit such models. To gain insight into black-box models, we treat them as teachers, training transparent student models to mimic the risk scores assigned by black-box models. We compare the student model trained with distillation to a second un-distilled transparent model trained on ground-truth outcomes, and use differences between the two models to gain insight into the black-box model. Our approach can be applied in a realistic setting, without probing the black-box model API. We demonstrate the approach on four public data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending Club. We also propose a statistical test to determine if a data set is missing key features used to train the black-box model. Our test finds that the ProPublica data is likely missing key feature(s) used in COMPAS.
1
0
0
1
0
0
An influence-based fast preceding questionnaire model for elderly assessments
To improve the efficiency of elderly assessments, an influence-based fast preceding questionnaire model (FPQM) is proposed. Compared with traditional assessments, the FPQM optimizes questionnaires by reordering their attributes. The values of low-ranking attributes can be predicted by the values of the high-ranking attributes. Therefore, the number of attributes can be reduced without redesigning the questionnaires. A new function for calculating the influence of the attributes is proposed based on probability theory. Reordering and reducing algorithms are given based on the attributes' influences. The model is verified through a practical application. The practice in an elderly-care company shows that the FPQM can reduce the number of attributes by 90.56% with a prediction accuracy of 98.39%. Compared with other methods, such as the Expert Knowledge, Rough Set and C4.5 methods, the FPQM achieves the best performance. In addition, the FPQM can also be applied to other questionnaires.
1
0
0
0
0
0
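A toy illustration of the reordering idea in the abstract above: rank attributes by an influence score estimated from historical answers, ask only the high-ranking ones, and predict the rest. The correlation-based influence used here is a stand-in assumption; the paper derives its own probability-based influence function.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 6)).astype(float)  # historical binary answers
X[:, 3] = X[:, 0]        # attribute 3 is fully determined by attribute 0
X[:, 4] = 1 - X[:, 1]    # attribute 4 is fully determined by attribute 1

corr = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(corr, 0.0)
influence = corr.sum(axis=1)      # how strongly each attribute predicts the others
order = np.argsort(-influence)    # ask high-influence attributes first
asked, predicted = order[:4], order[4:]
print("ask:", asked, "predict:", predicted)  # attributes 3 and 4 need not be asked
```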
A GPU Accelerated Discontinuous Galerkin Incompressible Flow Solver
We present a GPU-accelerated version of a high-order discontinuous Galerkin discretization of the unsteady incompressible Navier-Stokes equations. The equations are discretized in time using a semi-implicit scheme with explicit treatment of the nonlinear term and implicit treatment of the split Stokes operators. The pressure system is solved with a conjugate gradient method together with a fully GPU-accelerated multigrid preconditioner which is designed to minimize memory requirements and to increase overall performance. A semi-Lagrangian subcycling advection algorithm is used to shift the computational load per timestep away from the pressure Poisson solve by allowing larger timestep sizes in exchange for an increased number of advection steps. Numerical results confirm we achieve the design order accuracy in time and space. We optimize the performance of the most time-consuming kernels by tuning the fine-grain parallelism, optimizing memory utilization, and maximizing bandwidth. To assess overall performance we present an empirically calibrated roofline performance model for a target GPU to explain the achieved efficiency. We demonstrate that, in most cases, the kernels used in the solver are close to their empirically predicted roofline performance.
1
0
0
0
0
0
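The pressure system mentioned above is solved with a (preconditioned) conjugate gradient method; the sketch below shows unpreconditioned CG on a toy 1D Poisson matrix. On the GPU the matrix-vector product would be a matrix-free stencil kernel rather than a dense matrix, and a multigrid cycle would replace the identity preconditioner implied here.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson matrix as a toy SPD test problem
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual ~1e-11
```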
The Auger Engineering Radio Array and multi-hybrid cosmic ray detection (TAUP 2015)
The Auger Engineering Radio Array (AERA) aims at the detection of air showers induced by high-energy cosmic rays. As an extension of the Pierre Auger Observatory, it measures complementary information to the particle detectors, fluorescence telescopes and to the muon scintillators of the Auger Muons and Infill for the Ground Array (AMIGA). AERA is sensitive to all fundamental parameters of an extensive air shower such as the arrival direction, energy and depth of shower maximum. Since the radio emission is induced purely by the electromagnetic component of the shower, AERA, combined with a muon-counting detector such as AMIGA, is well suited to measuring the electrons and muons of the shower separately. In addition to the depth of the shower maximum, the ratio of the electron and muon numbers serves as a measure of the primary particle mass.
0
1
0
0
0
0
Historic Emergence of Diversity in Painting: Heterogeneity in Chromatic Distance in Images and Characterization of Massive Painting Data Set
Painting is an art form that has long functioned as a major channel for the creative expression and communication of humans, its evolution taking place under an interplay with the science, technology, and social environments of the times. Therefore, understanding the process based on comprehensive data could shed light on how humans acted and manifested creatively under changing conditions. Yet, there exist few systematic frameworks that characterize the process for painting, which would require robust statistical methods for defining painting characteristics and identifying humans' creative developments, and data of high quality and sufficient quantity. Here we propose that the color contrast of a painting image, signifying the heterogeneity in inter-pixel chromatic distance, can be a useful representation of its style, integrating both the color and geometry. From the color contrasts of paintings from a large-scale, comprehensive archive of 179,853 high-quality images spanning several centuries we characterize the temporal evolutionary patterns of paintings, and present a deep study of an extraordinary expansion in creative diversity and individuality that came to define the modern era.
1
1
0
0
0
0
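A toy version of the colour-contrast statistic described above: the heterogeneity of chromatic distances between neighbouring pixels. Plain RGB Euclidean distance and the standard deviation as the heterogeneity measure are simplifying assumptions, not the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                 # stand-in for a painting image (RGB)

# chromatic distances between horizontally and vertically adjacent pixels
dx = np.linalg.norm(img[:, 1:] - img[:, :-1], axis=-1)
dy = np.linalg.norm(img[1:, :] - img[:-1, :], axis=-1)
d = np.concatenate([dx.ravel(), dy.ravel()])

print(d.mean(), d.std())  # average contrast and its heterogeneity
```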
Cage Size and Jump Precursors in Glass-Forming Liquids: Experiment and Simulations
Glassy dynamics is intermittent, as particles suddenly jump out of the cage formed by their neighbours, and heterogeneous, as these jumps are not uniformly distributed across the system. Relating these features of the dynamics to the diverse local environments explored by the particles is essential to rationalize the relaxation process. Here we investigate this issue by characterizing the local environment of a particle with the amplitude of its short-time vibrational motion, as determined by segmenting the particle trajectories into cages and jumps. Both simulations of supercooled liquids and experiments on colloidal suspensions show that particles in large cages are likely to jump after a small time-lag, and that, on average, the cage enlarges shortly before the particle jumps. At large time-lags, the cage has an essentially constant size, which is smaller for longer-lasting cages. Finally, we clarify how this coupling between cage size and duration controls the average behaviour and opens the way to a better understanding of the relaxation process in glass-forming liquids.
0
1
0
0
0
0
A Comprehensive Survey on Bengali Phoneme Recognition
We review hidden Markov model based phoneme recognition methods for the Bengali language, as well as automatic Bengali phoneme recognition using multilayer neural networks, and discuss the advantages of multilayer over single-layer networks. We also cover the construction and enhancement of a Bangla phonetic feature table for Bengali speech recognition, and conclude with a comparison of the reviewed methods.
1
0
0
0
0
0
Factorization of arithmetic automorphic periods
In this paper, we prove that the arithmetic automorphic periods for $GL_{n}$ over a CM field factorize through the infinite places. This generalizes a conjecture of Shimura in 1983, and is predicted by the Langlands correspondence between automorphic representations and motives.
0
0
1
0
0
0
Multielectronic processes in particle and antiparticle collisions with rare gases
In this chapter we analyze multiple ionization by the impact of |Z|=1 projectiles: electrons, positrons, protons and antiprotons. Differences and similarities among the cross sections for these four projectiles allow us to gain insight into the physics involved. Mass and charge effects, energy thresholds, and the relative importance of collisional and post-collisional processes are discussed. For this purpose, we performed a detailed theoretical-experimental comparison for single up to quintuple ionization of Ne, Ar, Kr and Xe by particles and antiparticles. We include an extensive compilation of the available data for the sixteen collisional systems, and the theoretical cross sections by means of the continuum distorted wave eikonal initial state approximation. We underline here that post-collisional ionization is decisive to describe multiple ionization by light projectiles, covering almost the whole energy range, from threshold to high energies. The normalization of positron and antiproton measurements to electron impact ones, the lack of data in certain cases, and the future prospects are presented and discussed.
0
1
0
0
0
0
A Floating Cylinder on An Unbounded Bath
In this paper, we reconsider a circular cylinder horizontally floating on an unbounded reservoir in a gravitational field directed downwards, which was studied by Bhatnagar and Finn in 2006. We follow their approach but with some modifications. We establish the relation between the total energy relative to the undisturbed state and the total force. There is a monotone relation between the height of the centre and the wetting angle. We study the number of equilibria, the floating configurations and their stability for all parameter values. We find that the system admits at most two equilibrium points for arbitrary contact angle: the one with the smaller wetting angle is stable and the one with the larger wetting angle is unstable. The original model has the limitation that the fluid interfaces may intersect. We show that the stable equilibrium point never lies in the intersection region, while the unstable equilibrium point may lie in the intersection region.
0
1
1
0
0
0
Switching between Limit Cycles in a Model of Running Using Exponentially Stabilizing Discrete Control Lyapunov Function
This paper considers the problem of switching between two periodic motions, also known as limit cycles, to create agile running motions. For each limit cycle, we use a control Lyapunov function to estimate the region of attraction at the apex of the flight phase. We switch controllers at the apex, only if the current state of the robot is within the region of attraction of the subsequent limit cycle. If the regions of attraction of two limit cycles do not intersect, then we construct additional intermediate limit cycles until sequential regions of attraction overlap sufficiently. Additionally, we impose an exponential convergence condition on the control Lyapunov function that allows us to rapidly transition between limit cycles. Using the approach we demonstrate switching between 5 limit cycles in about 5 steps with the speed changing from 2 m/s to 5 m/s.
1
0
0
0
0
0
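The switching rule in the abstract above can be sketched in a few lines: at the flight-phase apex, switch only if the state lies in the estimated region of attraction of the next limit cycle, modelled here as a sublevel set of a quadratic control Lyapunov function. The matrix P, the level c, and the states below are illustrative assumptions.

```python
import numpy as np

P_next = np.array([[2.0, 0.3], [0.3, 1.0]])   # CLF matrix of the target cycle
c_next = 1.0                                   # estimated region-of-attraction level

def safe_to_switch(x_apex, x_star):
    """Switch only if V(x) = e' P e <= c for the next cycle's CLF."""
    e = x_apex - x_star                        # deviation from the target fixed point
    return e @ P_next @ e <= c_next

x_apex = np.array([1.9, 0.1])                  # apex state (e.g., speed, height)
x_star = np.array([2.0, 0.0])                  # fixed point of the next limit cycle
print(safe_to_switch(x_apex, x_star))          # True -> switch controllers now
```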
Locally Private Bayesian Inference for Count Models
As more aspects of social interaction are digitally recorded, there is a growing need to develop privacy-preserving data analysis methods. Social scientists will be more likely to adopt these methods if doing so entails minimal change to their current methodology. Toward that end, we present a general and modular method for privatizing Bayesian inference for Poisson factorization, a broad class of models that contains some of the most widely used models in the social sciences. Our method satisfies local differential privacy, which ensures that no single centralized server need ever store the non-privatized data. To formulate our local-privacy guarantees, we introduce and focus on limited-precision local privacy---the local privacy analog of limited-precision differential privacy (Flood et al., 2013). We present two case studies, one involving social networks and one involving text corpora, that test our method's ability to form the posterior distribution over latent variables under different levels of noise, and demonstrate our method's utility over a naïve approach, wherein inference proceeds as usual, treating the privatized data as if it were not privatized.
1
0
0
1
0
0
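A sketch of the local-privacy step underlying the method above: each user perturbs their own count data with two-sided geometric (discrete Laplace) noise before release, so the server never sees raw counts. The epsilon value and the data are illustrative, and the limited-precision truncation discussed in the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_sided_geometric(shape, epsilon, sensitivity=1.0):
    # the difference of two iid geometric variables follows the discrete
    # Laplace distribution used for integer-valued DP noise
    p = 1.0 - np.exp(-epsilon / sensitivity)
    return rng.geometric(p, shape) - rng.geometric(p, shape)

counts = rng.poisson(3.0, size=(5, 4))            # one user's local count matrix
private = counts + two_sided_geometric(counts.shape, epsilon=1.0)
print(private)                                     # what the server actually receives
```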
Carrier Diffusion in Thin-Film CH3NH3PbI3 Perovskite Measured using Four-Wave Mixing
We report the application of femtosecond four-wave mixing (FWM) to the study of carrier transport in solution-processed CH3NH3PbI3. The diffusion coefficient was extracted through direct detection of the lateral diffusion of carriers utilizing the transient grating technique, coupled with simultaneous measurement of decay kinetics exploiting the versatility of the boxcar excitation beam geometry. The observation of exponential decay of the transient grating versus interpulse delay indicates diffusive transport with negligible trapping within the first nanosecond following excitation. The in-plane transport geometry in our experiments enabled the diffusion length to be compared directly with the grain size, indicating that carriers move across multiple grain boundaries prior to recombination. Our experiments illustrate the broad utility of FWM spectroscopy for rapid characterization of macroscopic film transport properties.
0
1
0
0
0
0
On effective Birkhoff's ergodic theorem for computable actions of amenable groups
We introduce computable actions of computable groups and prove the following effective versions of Birkhoff's ergodic theorem. Let $\Gamma$ be a computable amenable group; then there always exists a canonically computable tempered two-sided F{\o}lner sequence $(F_n)_{n \geq 1}$ in $\Gamma$. For a computable, measure-preserving, ergodic action of $\Gamma$ on a Cantor space $\{0,1\}^{\mathbb N}$ endowed with a computable probability measure $\mu$, it is shown that for every bounded lower semicomputable function $f$ on $\{0,1\}^{\mathbb N}$ and for every Martin-Löf random $\omega \in \{0,1\}^{\mathbb N}$ the equality \[ \lim\limits_{n \to \infty} \frac{1}{|F_n|} \sum\limits_{g \in F_n} f(g \cdot \omega) = \int\limits f d \mu \] holds, where the averages are taken with respect to a canonically computable tempered two-sided F{\o}lner sequence $(F_n)_{n \geq 1}$. We also prove the same identity for all lower semicomputable $f$'s in the special case when $\Gamma$ is a computable group of polynomial growth and $F_n:=\mathrm{B}(n)$ is the F{\o}lner sequence of balls around the neutral element of $\Gamma$.
0
0
1
0
0
0
Gender Differences in Participation and Reward on Stack Overflow
Programming is a valuable skill in the labor market, making the underrepresentation of women in computing an increasingly important issue. Online question and answer platforms serve a dual purpose in this field: they form a body of knowledge useful as a reference and learning tool, and they provide opportunities for individuals to demonstrate credible, verifiable expertise. Factors such as male-oriented site design or overrepresentation of men among the site's elite may therefore compound the problem of women's underrepresentation in IT. In this paper we audit the differences in behavior and outcomes between men and women on Stack Overflow, the most popular of these Q&A sites. We observe significant differences in how men and women participate in the platform and how successful they are. For example, the average woman has roughly half of the reputation points, the primary measure of success on the site, of the average man. Using an Oaxaca-Blinder decomposition, an econometric technique commonly applied to analyze differences in wages between groups, we find that most of the gap in success between men and women can be explained by differences in their activity on the site and differences in how these activities are rewarded. Specifically, 1) men give more answers than women and 2) are rewarded more for their answers on average, even when controlling for possible confounders such as tenure or buy-in to the site. Women ask more questions and gain more reward per question. We conclude with a hypothetical redesign of the site's scoring system based on these behavioral differences, cutting the reputation gap in half.
1
0
0
0
0
0
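A minimal Oaxaca-Blinder decomposition of the kind applied above, on synthetic data: the raw outcome gap between two groups splits exactly into a part explained by different activity levels (endowments) and a part due to different returns to the same activity (coefficients). The variable names and data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y):                       # OLS with an intercept
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# synthetic "answers given" -> "reputation" data for two groups
Xa = rng.normal(10, 2, (500, 1)); ya = 5 + 3.0 * Xa[:, 0] + rng.normal(0, 1, 500)
Xb = rng.normal(8, 2, (500, 1));  yb = 5 + 2.5 * Xb[:, 0] + rng.normal(0, 1, 500)

ba, bb = fit(Xa, ya), fit(Xb, yb)
ma = np.concatenate([[1.0], Xa.mean(axis=0)])
mb = np.concatenate([[1.0], Xb.mean(axis=0)])

gap = ya.mean() - yb.mean()
explained = (ma - mb) @ ba           # gap due to different activity levels
unexplained = mb @ (ba - bb)         # gap due to different returns to activity
print(gap, explained + unexplained)  # the two terms decompose the raw gap exactly
```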
Invariant surface area functionals and singular Yamabe problem in 3-dimensional CR geometry
We express two CR invariant surface area elements in terms of quantities in pseudohermitian geometry. We deduce the Euler-Lagrange equations of the associated energy functionals. Many solutions are given and discussed. In relation to the singular CR Yamabe problem, we show that one of the energy functionals appears as the coefficient (up to a constant multiple) of the log term in the associated volume renormalization.
0
0
1
0
0
0
Dynamic dipole polarizabilities of heteronuclear alkali dimers: optical response, trapping and control of ultracold molecules
In this article we address the general approach for calculating dynamical dipole polarizabilities of small quantum systems, based on a sum-over-states formula involving in principle the entire energy spectrum of the system. We complement this method by a few-parameter model involving a limited number of effective transitions, allowing for a compact and accurate representation of both the isotropic and anisotropic components of the polarizability. We apply the method to the series of ten heteronuclear molecules composed of two of ($^7$Li,$^{23}$Na,$^{39}$K,$^{87}$Rb,$^{133}$Cs) alkali-metal atoms. We rely on both up-to-date spectroscopically-determined potential energy curves for the lowest electronic states, and on our systematic studies of these systems performed during the last decade for higher excited states and for permanent and transition dipole moments. Such a compilation is timely for the continuously growing body of research on ultracold polar molecules. Indeed, the knowledge of the dynamic dipole polarizabilities is crucial to model the optical response of molecules when trapped in optical lattices, and to determine optimal lattice frequencies ensuring optimal transfer to the absolute ground state of initially weakly-bound molecules. When they exist, we determine the so-called "magic frequencies" where the ac-Stark shift, and thus the trap depth experienced by the molecules, is the same for both weakly-bound and ground-state molecules.
0
1
0
0
0
0
Minor-free graphs have light spanners
We show that every $H$-minor-free graph has a light $(1+\epsilon)$-spanner, resolving an open problem of Grigni and Sissokho and proving a conjecture of Grigni and Hung. Our lightness bound is \[O\left(\frac{\sigma_H}{\epsilon^3}\log \frac{1}{\epsilon}\right)\] where $\sigma_H = |V(H)|\sqrt{\log |V(H)|}$ is the sparsity coefficient of $H$-minor-free graphs. That is, it has a practical dependency on the size of the minor $H$. Our result also implies that the polynomial time approximation scheme (PTAS) for the Travelling Salesperson Problem (TSP) in $H$-minor-free graphs by Demaine, Hajiaghayi and Kawarabayashi is an efficient PTAS whose running time is $2^{O_H\left(\frac{1}{\epsilon^4}\log \frac{1}{\epsilon}\right)}n^{O(1)}$ where $O_H$ ignores dependencies on the size of $H$. Our techniques significantly deviate from existing lines of research on spanners for $H$-minor-free graphs, but build upon the work of Chechik and Wulff-Nilsen for spanners of general graphs.
1
0
0
0
0
0
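For concreteness, the classical greedy (1+eps)-spanner at the heart of this line of work can be stated in a few lines: scan edges by increasing weight and keep an edge only when the current spanner lacks a sufficiently short path. This is the textbook construction, not the paper's minor-free analysis, and the toy graph below is an arbitrary assumption.

```python
import networkx as nx

def greedy_spanner(G, eps):
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    # scan edges in nondecreasing weight order
    for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.dijkstra_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > (1 + eps) * w:     # no sufficiently short path yet -> keep the edge
            H.add_edge(u, v, weight=w)
    return H

G = nx.complete_graph(20)
for u, v in G.edges:
    G[u][v]["weight"] = abs(u - v)            # a toy line metric
H = greedy_spanner(G, eps=0.5)
print(G.number_of_edges(), "->", H.number_of_edges())
```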
Adversarial Generation of Natural Language
Generative Adversarial Networks (GANs) have gathered a lot of attention from the computer vision community, yielding impressive results for image generation. Advances in the adversarial generation of natural language from noise however are not commensurate with the progress made in generating images, and still lag far behind likelihood based methods. In this paper, we take a step towards generating natural language with a GAN objective alone. We introduce a simple baseline that addresses the discrete output space problem without relying on gradient estimators and show that it is able to achieve state-of-the-art results on a Chinese poem generation dataset. We present quantitative results on generating sentences from context-free and probabilistic context-free grammars, and qualitative language modeling results. A conditional version is also described that can generate sequences conditioned on sentence characteristics.
1
0
0
1
0
0
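A compact sketch of the continuous baseline described above: the generator emits a softmax distribution over the vocabulary at each position and the discriminator consumes those probability vectors directly, so no discrete sampling (and hence no gradient estimator) is needed. All sizes and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

vocab, seq_len, z_dim = 100, 8, 32
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                  nn.Linear(128, seq_len * vocab))
D = nn.Sequential(nn.Linear(seq_len * vocab, 128), nn.ReLU(),
                  nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

# one-hot "real" sentences; the generator outputs softmax vectors instead
real = torch.zeros(16, seq_len, vocab).scatter_(
    2, torch.randint(vocab, (16, seq_len, 1)), 1.0)
z = torch.randn(16, z_dim)
fake = torch.softmax(G(z).view(16, seq_len, vocab), dim=-1)

# discriminator step: real one-hots vs. generated probability vectors
loss_d = bce(D(real.view(16, -1)), torch.ones(16, 1)) + \
         bce(D(fake.detach().view(16, -1)), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# generator step: try to fool the discriminator
loss_g = bce(D(fake.view(16, -1)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```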
Likely Transiting Exocomets Detected by Kepler
We present the first good evidence for exocomet transits of a host star in continuum light in data from the Kepler mission. The Kepler star in question, KIC 3542116, is of spectral type F2V and is quite bright at K_p = 10. The transits have a distinct asymmetric shape with a steeper ingress and slower egress that can be ascribed to objects with a trailing dust tail passing over the stellar disk. There are three deeper transits with depths of ~0.1% that last for about a day, and three that are several times more shallow and of shorter duration. The transits were found via an exhaustive visual search of the entire Kepler photometric data set, which we describe in some detail. We review the methods we use to validate the Kepler data showing the comet transits, and rule out instrumental artefacts as sources of the signals. We fit the transits with a simple dust-tail model, and find that a transverse comet speed of ~35-50 km/s and a minimum amount of dust present in the tail of ~10^16 g are required to explain the larger transits. For a dust replenishment time of ~10 days, and a comet lifetime of only ~300 days, this implies a total cometary mass of > 3 x 10^17 g, or about the mass of Halley's comet. We also discuss the number of comets and orbital geometry that would be necessary to explain the six transits detected over the four years of Kepler prime-field observations. Finally, we also report the discovery of a single comet-shaped transit in KIC 11084727 with very similar transit and host-star properties.
0
1
0
0
0
0
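A toy version of the asymmetric transit signature described above: a steep linear ingress followed by a slow exponential egress, as a trailing dust tail would produce. All parameter values are illustrative, not the paper's fitted values.

```python
import numpy as np

def comet_transit(t, t0=0.0, depth=1e-3, ingress=0.05, tail=0.5):
    """Normalized flux with a sharp ingress and an exponential dust-tail egress."""
    f = np.ones_like(t)
    rising = (t >= t0) & (t < t0 + ingress)
    f[rising] -= depth * (t[rising] - t0) / ingress               # steep ingress
    after = t >= t0 + ingress
    f[after] -= depth * np.exp(-(t[after] - t0 - ingress) / tail)  # slow egress
    return f

t = np.linspace(-1, 3, 400)   # time in days
flux = comet_transit(t)
print(flux.min())             # ~0.999, i.e. a ~0.1% dip as in the deeper events
```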
Origins of bond and spin order in rare-earth nickelate bulk and heterostructures
We analyze the charge- and spin response functions of rare-earth nickelates RNiO3 and their heterostructures using random-phase approximation in a two-band Hubbard model. The inter-orbital charge fluctuation is found to be the driving mechanism for the rock-salt type bond order in bulk RNiO3, and good agreement of the ordering temperature with experimental values is achieved for all RNiO3 using realistic crystal structures and interaction parameters. We further show that magnetic ordering in bulk is not driven by the spin fluctuation and should be instead explained as ordering of localized moments. This picture changes for low-dimensional heterostructures, where the charge fluctuation is suppressed and overtaken by the enhanced spin instability, which results in a spin-density-wave ground state observed in recent experiments. Predictions for spectroscopy allow for further experimental testing of our claims.
0
1
0
0
0
0
Canonical Truth
We introduce and study a notion of canonical set theoretical truth, which means truth in a `canonical model', i.e. a transitive class model that is uniquely characterized by some $\in$-formula. We show that this notion of truth is `informative', i.e. there are statements that hold in all canonical models but do not follow from ZFC, such as Reitz' ground model axiom or the nonexistence of measurable cardinals. We also show that ZF+$V=L[\mathbb{R}]$+AD has no canonical models. On the other hand, we show that there are canonical models for `every real has sharp'. Moreover, we consider `theory-canonical' statements that only fix a transitive class model of ZFC up to elementary equivalence and show that it is consistent relative to large cardinals that there are theory-canonical models with measurable cardinals and that theory-canonicity is still informative in the sense explained above.
0
0
1
0
0
0
AACT: Application-Aware Cooperative Time Allocation for Internet of Things
As the number of Internet of Things (IoT) devices keeps increasing, data is required to be communicated and processed by these devices at unprecedented rates. Cooperation among wireless devices by exploiting Device-to-Device (D2D) connections is promising, where aggregated resources in a cooperative setup can be utilized by all devices, which would increase the total utility of the setup. In this paper, we focus on the resource allocation problem for cooperating IoT devices with multiple heterogeneous applications. In particular, we develop Application-Aware Cooperative Time allocation (AACT) framework, which optimizes the time that each application utilizes the aggregated system resources by taking into account heterogeneous device constraints and application requirements. AACT is grounded on the concept of Rolling Horizon Control (RHC) where decisions are made by iteratively solving a convex optimization problem over a moving control window of estimated system parameters. The simulation results demonstrate significant performance gains.
1
0
0
0
0
0
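One rolling-horizon step of the time-allocation idea above can be sketched as a small convex program (using cvxpy): maximize a concave sum-utility of the applications' time shares subject to the shared budget and per-application minimums. The utilities, weights, and bounds are assumptions; the paper's model also includes device-level constraints.

```python
import cvxpy as cp
import numpy as np

w = np.array([1.0, 2.0, 0.5])          # application priorities
t_min = np.array([0.05, 0.10, 0.05])   # minimum time shares required

t = cp.Variable(3)
objective = cp.Maximize(w @ cp.log(t)) # proportional-fair (concave) utility
constraints = [cp.sum(t) <= 1, t >= t_min]
cp.Problem(objective, constraints).solve()
print(t.value)                          # optimal time shares for this control window
```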
COPA: Constrained PARAFAC2 for Sparse & Large Datasets
PARAFAC2 has demonstrated success in modeling irregular tensors, where the tensor dimensions vary across one of the modes. An example scenario is modeling treatments across a set of patients with the varying number of medical encounters over time. Despite recent improvements on unconstrained PARAFAC2, its model factors are usually dense and sensitive to noise which limits their interpretability. As a result, the following open challenges remain: a) various modeling constraints, such as temporal smoothness, sparsity and non-negativity, are needed to be imposed for interpretable temporal modeling and b) a scalable approach is required to support those constraints efficiently for large datasets. To tackle these challenges, we propose a {\it CO}nstrained {\it PA}RAFAC2 (COPA) method, which carefully incorporates optimization constraints such as temporal smoothness, sparsity, and non-negativity in the resulting factors. To efficiently support all those constraints, COPA adopts a hybrid optimization framework using alternating optimization and alternating direction method of multiplier (AO-ADMM). As evaluated on large electronic health record (EHR) datasets with hundreds of thousands of patients, COPA achieves significant speedups (up to 36 times faster) over prior PARAFAC2 approaches that only attempt to handle a subset of the constraints that COPA enables. Overall, our method outperforms all the baselines attempting to handle a subset of the constraints in terms of speed, while achieving the same level of accuracy. Through a case study on temporal phenotyping of medically complex children, we demonstrate how the constraints imposed by COPA reveal concise phenotypes and meaningful temporal profiles of patients. The clinical interpretation of both the phenotypes and the temporal profiles was confirmed by a medical expert.
0
0
0
1
0
0
Effect of Composition Gradient on Magnetothermal Instability Modified by Shear and Rotation
We model the intracluster medium as a weakly collisional plasma that is a binary mixture of the hydrogen and the helium ions, along with free electrons. When, owing to the helium sedimentation, the gradient of the mean molecular weight (or equivalently, composition or helium ions' concentration) of the plasma is not negligible, it can have appreciable influence on the stability criteria of the thermal convective instabilities, e.g., the heat-flux-buoyancy instability and the magnetothermal instability (MTI). These instabilities are consequences of the anisotropic heat conduction occurring preferentially along the magnetic field lines. In this paper, without ignoring the magnetic tension, we first present the mathematical criterion for the onset of composition gradient modified MTI. Subsequently, we relax the commonly adopted equilibrium state in which the plasma is at rest, and assume that the plasma is in a sheared state which may be due to differential rotation. We discuss how the concentration gradient affects the coupling between the Kelvin--Helmholtz instability and the MTI in rendering the plasma unstable or stable. We derive exact stability criterion by working with the sharp boundary case in which the physical variables---temperature, mean molecular weight, density, and magnetic field---change discontinuously from one constant value to another on crossing the boundary. Finally, we perform the linear stability analysis for the case of the differentially rotating plasma that is thermally and compositionally stratified as well. By assuming axisymmetric perturbations, we find the corresponding dispersion relation and the explicit mathematical expression determining the onset of the modified MTI.
0
1
0
0
0
0
A Verified Algorithm Enumerating Event Structures
An event structure is a mathematical abstraction modeling concepts such as causality, conflict and concurrency between events. While many other mathematical structures, including groups, topological spaces, and rings, abound with algorithms and formulas to generate, enumerate and count particular sets of their members, no algorithms or formulas are known to generate or count all the possible event structures over a finite set of events. We present an algorithm to generate such a family, along with a functional implementation verified using Isabelle/HOL. As byproducts, we obtain a verified enumeration of all possible preorders and partial orders. While the integer sequences counting preorders and partial orders are already listed on OEIS (On-line Encyclopedia of Integer Sequences), the one counting event structures is not. We therefore used our algorithm to submit a formally verified addition, which has been successfully reviewed and is now part of the OEIS.
1
0
0
0
0
0
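The verified enumeration itself lives in Isabelle/HOL; as an unverified cross-check of the smallest cases, the brute-force Python below counts preorders (reflexive, transitive relations) on an n-element set, reproducing the first terms 1, 1, 4, 29. It is only feasible for very small n.

```python
from itertools import product

def is_transitive(R):
    return all((i, k) in R
               for i, j in R for j2, k in R if j == j2)

def count_preorders(n):
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    count = 0
    for bits in product([0, 1], repeat=len(pairs)):
        R = {(i, i) for i in range(n)}                 # reflexivity is forced
        R |= {p for p, b in zip(pairs, bits) if b}
        if is_transitive(R):
            count += 1
    return count

print([count_preorders(n) for n in range(4)])  # [1, 1, 4, 29]
```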
Missing Data as Part of the Social Behavior in Real-World Financial Complex Systems
Many real-world networks are known to exhibit facts that counter our knowledge prescribed by the theories on network creation and communication patterns. A common prerequisite in network analysis is that information on nodes and links will be complete because network topologies are extremely sensitive to missing information of this kind. Therefore, many real-world networks that fail to meet this criterion under random sampling may be discarded. In this paper we offer a framework for interpreting the missing observations in network data under the hypothesis that these observations are not missing at random. We demonstrate the methodology with a case study of a financial trade network, where the awareness of agents to the data collection procedure by a self-interested observer may result in strategic revealing or withholding of information. The non-random missingness has been overlooked despite the possibility of this being an important feature of the processes by which the network is generated. The analysis demonstrates that strategic information withholding may be a valid general phenomenon in complex systems. The evidence is sufficient to support the existence of an influential observer and to offer a compelling dynamic mechanism for the creation of the network.
0
0
0
1
0
0
Quantum Monte Carlo simulation of a two-dimensional Majorana lattice model
We study interacting Majorana fermions in two dimensions as a low-energy effective model of a vortex lattice in two-dimensional time-reversal-invariant topological superconductors. For that purpose, we implement ab-initio quantum Monte Carlo simulation to the Majorana fermion system in which the path-integral measure is given by a semi-positive Pfaffian. We discuss spontaneous breaking of time-reversal symmetry at finite temperature.
0
1
0
0
0
0
Geometric Rescaling Algorithms for Submodular Function Minimization
We present a new class of polynomial-time algorithms for submodular function minimization (SFM), as well as a unified framework to obtain strongly polynomial SFM algorithms. Our new algorithms are based on simple iterative methods for the minimum-norm problem, such as the conditional gradient and the Fujishige-Wolfe algorithms. We exhibit two techniques to turn simple iterative methods into polynomial-time algorithms. Firstly, we use the geometric rescaling technique, which has recently gained attention in linear programming. We adapt this technique to SFM and obtain a weakly polynomial bound $O((n^4\cdot EO + n^5)\log (n L))$. Secondly, we exhibit a general combinatorial black-box approach to turn any strongly polynomial $\varepsilon L$-approximate SFM oracle into a strongly polynomial exact SFM algorithm. This framework can be applied to a wide range of combinatorial and continuous algorithms, including pseudo-polynomial ones. In particular, we can obtain strongly polynomial algorithms by a repeated application of the conditional gradient or of the Fujishige-Wolfe algorithm. Combined with the geometric rescaling technique, the black-box approach provides a $O((n^5\cdot EO + n^6)\log^2 n)$ algorithm. Finally, we show that one of the techniques we develop in the paper can also be combined with the cutting-plane method of Lee, Sidford, and Wong, yielding a simplified variant of their $O(n^3 \log^2 n \cdot EO + n^4\log^{O(1)} n)$ algorithm.
1
0
1
0
0
0
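To make the iterative methods concrete, here is a plain conditional-gradient (Frank-Wolfe) iteration for the minimum-norm point in the base polytope, where the linear oracle is Edmonds' greedy algorithm; negative coordinates of the (approximate) min-norm point mark a minimizing set. This is the textbook building block, not the paper's rescaled or strongly polynomial variants, and the test function below is an arbitrary submodular example (a graph cut plus a modular term).

```python
import numpy as np

def greedy_vertex(f, x, n):
    """argmin over q in B(f) of <q, x>, via Edmonds' greedy algorithm."""
    order = np.argsort(x)                  # ascending x
    q, S, prev = np.zeros(n), [], 0.0
    for i in order:
        S.append(i)
        val = f(S)
        q[i] = val - prev                  # marginal gain
        prev = val
    return q

def min_norm_point(f, n, iters=5000):
    x = greedy_vertex(f, np.zeros(n), n)
    for t in range(iters):
        q = greedy_vertex(f, x, n)         # Frank-Wolfe direction
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * q
    return x

# submodular test: cut function of a path graph minus a modular bonus
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
bonus = np.array([-3.0, 3.0, 3.0, -2.0])
def f(S):
    S = set(S)
    cut = sum(w for u, v, w in edges if (u in S) != (v in S))
    return cut - bonus[list(S)].sum()

x = min_norm_point(f, 4)
print(np.flatnonzero(x < 0))  # elements of the minimizing set, here [1 2]
```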
Statistical PT-symmetric lasing in an optical fiber network
PT-symmetry in optics is a condition whereby the real and imaginary parts of the refractive index across a photonic structure are deliberately balanced. This balance can lead to a host of novel optical phenomena, such as unidirectional invisibility, loss-induced lasing, single-mode lasing from multimode resonators, and non-reciprocal effects in conjunction with nonlinearities. Because PT-symmetry has been thought of as fragile, experimental realizations to date have been usually restricted to on-chip micro-devices. Here, we demonstrate that certain features of PT-symmetry are sufficiently robust to survive the statistical fluctuations associated with a macroscopic optical cavity. We construct optical-fiber-based coupled-cavities in excess of a kilometer in length (the free spectral range is less than 0.8 fm) with balanced gain and loss in two sub-cavities and examine the lasing dynamics. In such a macroscopic system, fluctuations can lead to a cavity-detuning exceeding the free spectral range. Nevertheless, by varying the gain-loss contrast, we observe that both the lasing threshold and the growth of the laser power follow the predicted behavior of a stable PT-symmetric structure. Furthermore, a statistical symmetry-breaking point is observed upon varying the cavity loss. These findings indicate that PT-symmetry is a more robust optical phenomenon than previously expected, and points to potential applications in optical fiber networks and fiber lasers.
0
1
0
0
0
0
Transforming Single Domain Magnetic CoFe2O4 Nanoparticles from Hydrophobic to Hydrophilic By Novel Mechanochemical Ligand Exchange
Single phase, uniform size (~9 nm) cobalt ferrite (CFO) nanoparticles have been synthesized by hydrothermal synthesis using oleic acid as a surfactant. The as-synthesized oleic acid coated CFO (OA-CFO) nanoparticles were well dispersible in nonpolar solvents but not in water. The OA-CFO nanoparticles have been successfully transformed to highly water-dispersible citric acid coated CFO (CA-CFO) nanoparticles using a novel single-step ligand exchange process by mechanochemical milling, in which short-chain citric acid molecules replace the original long-chain oleic acid molecules on the CFO nanoparticles. The contact angle measurement shows that OA-CFO nanoparticles are hydrophobic whereas CA-CFO nanoparticles are superhydrophilic in nature. The potential of the as-synthesized OA-CFO and mechanochemically transformed CA-CFO nanoparticles for the demulsification of highly stabilized water-in-oil and oil-in-water emulsions has been demonstrated.
0
1
0
0
0
0
Probing the dusty stellar populations of the Local Volume Galaxies with JWST/MIRI
The Mid-Infrared Instrument (MIRI) for the {\em James Webb Space Telescope} (JWST) will revolutionize our understanding of infrared stellar populations in the Local Volume. Using the rich {\em Spitzer}-IRS spectroscopic data-set and spectral classifications from the Surveying the Agents of Galaxy Evolution (SAGE)-Spectroscopic survey of over a thousand objects in the Magellanic Clouds, the Grid of Red supergiant and Asymptotic giant branch star ModelS ({\sc grams}), and the grid of YSO models by Robitaille et al. (2006), we calculate the expected flux densities and colours in the MIRI broadband filters for prominent infrared stellar populations. We use these fluxes to explore the {\em JWST}/MIRI colours and magnitudes for composite stellar population studies of Local Volume galaxies. MIRI colour classification schemes are presented; these diagrams provide a powerful means of identifying young stellar objects, evolved stars and extragalactic background galaxies in Local Volume galaxies with a high degree of confidence. Finally, we examine which filter combinations are best for selecting populations of sources based on their JWST colours.
0
1
0
0
0
0
PythonRobotics: a Python code collection of robotics algorithms
This paper describes an Open Source Software (OSS) project: PythonRobotics. This is a collection of robotics algorithms implemented in the Python programming language. The focus of the project is on autonomous navigation, and the goal is for beginners in robotics to understand the basic ideas behind each algorithm. In this project, the algorithms which are practical and widely used in both academia and industry are selected. Each sample code is written in Python3 and only depends on some standard modules for readability and ease of use. It includes intuitive animations to understand the behavior of the simulation.
1
0
0
0
0
0
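In the spirit of the collection described above (small, dependency-light, readable scripts), here is a minimal Dijkstra grid planner; note this sketch is written for illustration and is not taken from the repository itself.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; returns path cost or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # 6.0 moves around the wall
```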
A Liouville theorem for the Euler equations in the plane
This paper is concerned with qualitative properties of bounded steady flows of an ideal incompressible fluid with no stagnation point in the two-dimensional plane R^2. We show that any such flow is a shear flow, that is, it is parallel to some constant vector. The proof of this Liouville-type result is firstly based on the study of the geometric properties of the level curves of the stream function and secondly on the derivation of some estimates on the at most logarithmic growth of the argument of the flow. These estimates lead to the conclusion that the streamlines of the flow are all parallel lines.
0
0
1
0
0
0
First On-Site True Gamma-Ray Imaging-Spectroscopy of Contamination near Fukushima Plant
We have developed an Electron Tracking Compton Camera (ETCC), which provides a well-defined Point Spread Function (PSF) by reconstructing a direction of each gamma as a point and realizes simultaneous measurement of brightness and spectrum of MeV gamma-rays for the first time. Here, we present the results of our on-site pilot gamma-imaging-spectroscopy with ETCC at three contaminated locations in the vicinity of the Fukushima Daiichi Nuclear Power Plants in Japan in 2014. The obtained distribution of brightness (or emissivity) with remote-sensing observations is unambiguously converted into the dose distribution. We confirm that the dose distribution is consistent with the one taken by conventional mapping measurements with a dosimeter physically placed at each grid point. Furthermore, its imaging spectroscopy, boosted by Compton-edge-free spectra, reveals complex radioactive features in a quantitative manner around each individual target point in the background-dominated environment. Notably, we successfully identify a "micro hot spot" of residual caesium contamination even in an already decontaminated area. These results show that the ETCC performs exactly as the geometrical optics predicts, demonstrates its versatility in the field radiation measurement, and reveals potentials for application in many fields, including the nuclear industry, medical field, and astronomy.
0
1
0
0
0
0
Weak Versus Strong Disorder Superfluid-Bose Glass Transition in One Dimension
Using large-scale simulations based on matrix product state and quantum Monte Carlo techniques, we study the superfluid to Bose-glass transition for one-dimensional attractive hard-core bosons at zero temperature, across the full regime from weak to strong disorder. As a function of interaction and disorder strength, we identify a Berezinskii-Kosterlitz-Thouless critical line with two different regimes. At small attraction, where the critical disorder is weak compared to the bandwidth, the critical Luttinger parameter $K_c$ takes its universal Giamarchi-Schulz value $K_{c}=3/2$. Conversely, a non-universal $K_c>3/2$ emerges for stronger attraction where weak-link physics is relevant. In this strong disorder regime, the transition is characterized by self-similar power-law distributed weak links with a continuously varying characteristic exponent $\alpha$.
0
1
0
0
0
0
Quantum Structures in Human Decision-making: Towards Quantum Expected Utility
{\it Ellsberg thought experiments} and empirical confirmation of Ellsberg preferences pose serious challenges to {\it subjective expected utility theory} (SEUT). We have recently elaborated a quantum-theoretic framework for human decisions under uncertainty which satisfactorily copes with the Ellsberg paradox and other puzzles of SEUT. We apply here the quantum-theoretic framework to the {\it Ellsberg two-urn example}, showing that the paradox can be explained by assuming a state change of the conceptual entity that is the object of the decision ({\it decision-making}, or {\it DM}, {\it entity}) and representing subjective probabilities by quantum probabilities. We also model the empirical data we collected in a DM test on human participants within the theoretic framework above. The obtained results are relevant, as they provide a line to model real life, e.g., financial and medical, decisions that show the same empirical patterns as the two-urn experiment.
0
0
0
0
1
1
Fine-grained Event Learning of Human-Object Interaction with LSTM-CRF
Event learning is one of the most important problems in AI. However, notwithstanding significant research efforts, it is still a very complex task, especially when the events involve the interaction of humans or agents with other objects, as it requires modeling human kinematics and object movements. This study proposes a methodology for learning complex human-object interaction (HOI) events, involving the recording, annotation and classification of event interactions. For annotation, we allow multiple interpretations of a motion capture by slicing over its temporal span; for classification, we use Long-Short Term Memory (LSTM) sequential models with a Conditional Random Field (CRF) layer to constrain the outputs. Using a setup involving captures of human-object interaction as three-dimensional inputs, we argue that this approach could be used for event types involving complex spatio-temporal dynamics.
1
0
0
0
0
0
Book Review Interferometry and Synthesis in Radio Astronomy - 3rd Ed
Review of the third edition of "Interferometry and Synthesis in Radio Astronomy" by Thompson, Moran and Swenson
0
1
0
0
0
0
A Kernel Theory of Modern Data Augmentation
Data augmentation, a technique in which a training set is expanded with class-preserving transformations, is ubiquitous in modern machine learning pipelines. In this paper, we seek to establish a theoretical framework for understanding modern data augmentation techniques. We start by showing that for kernel classifiers, data augmentation can be approximated by first-order feature averaging and second-order variance regularization components. We connect this general approximation framework to prior work in invariant kernels, tangent propagation, and robust optimization. Next, we explicitly tackle the compositional aspect of modern data augmentation techniques, proposing a novel model of data augmentation as a Markov process. Under this model, we show that performing $k$-nearest neighbors with data augmentation is asymptotically equivalent to a kernel classifier. Finally, we illustrate ways in which our theoretical framework can be leveraged to accelerate machine learning workflows in practice, including reducing the amount of computation needed to train on augmented data, and predicting the utility of a transformation prior to training.
0
0
0
1
0
0
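The first-order feature-averaging view above has a direct kernel analogue: average the kernel over the transformation group, which gives exactly the kernel of the averaged feature map. The RBF kernel and the flip group below are toy assumptions, not the paper's experimental setup.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def augmented_kernel(X, Y, transforms):
    # K_aug(x, y) = E_{t, t'}[ k(t(x), t'(y)) ], the Gram matrix of the
    # feature map averaged over the transformation group
    Ks = [rbf(t(X), t2(Y)) for t in transforms for t2 in transforms]
    return sum(Ks) / len(Ks)

flip = lambda X: X[:, ::-1]    # horizontal flip of each "signal"
ident = lambda X: X
X = np.random.default_rng(0).normal(size=(5, 4))   # 5 signals of length 4
K = augmented_kernel(X, X, [ident, flip])
print(np.allclose(K, K.T), K.shape)                 # symmetric 5x5 Gram matrix
```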
Carrier driven coupling in ferromagnetic oxide heterostructures
Transition metal oxides are well known for their complex magnetic and electrical properties. When brought together in heterostructure geometries, they show particular promise for spintronics and colossal magnetoresistance applications. In this letter, we propose a new mechanism for the coupling between layers of itinerant ferromagnetic materials in heterostructures. The coupling is mediated by charge carriers that strive to maximally delocalize through the heterostructure to gain kinetic energy. In doing so, they force a ferromagnetic or antiferromagnetic coupling between the constituent layers. To illustrate this, we focus on heterostructures composed of SrRuO$_3$ and La$_{1-x}$A$_{x}$MnO$_3$ (A=Ca/Sr). Our mechanism is consistent with antiferromagnetic alignment that is known to occur in multilayers of SrRuO$_3$-La$_{1-x}$A$_{x}$MnO$_3$. To support our assertion, we present a minimal Kondo-lattice model which reproduces the known magnetization properties of such multilayers. In addition, we discuss a quantum well model for heterostructures and argue that the spin-dependent density of states determines the nature of the coupling. As a smoking gun signature, we propose that bilayers with the same constituents will oscillate between ferromagnetic and antiferromagnetic coupling upon tuning the relative thicknesses of the layers.
0
1
0
0
0
0
Data Dropout in Arbitrary Basis for Deep Network Regularization
An important problem in training deep networks with high capacity is to ensure that the trained network works well when presented with new inputs outside the training dataset. Dropout is an effective regularization technique to boost the network generalization in which a random subset of the elements of the given data and the extracted features are set to zero during the training process. In this paper, a new randomized regularization technique in which we withhold a random part of the data without necessarily turning off the neurons/data-elements is proposed. In the proposed method, of which the conventional dropout is shown to be a special case, random data dropout is performed in an arbitrary basis, hence the designation Generalized Dropout. We also present a framework whereby the proposed technique can be applied efficiently to convolutional neural networks. The presented numerical experiments demonstrate that the proposed technique yields notable performance gain. Generalized Dropout provides new insight into the idea of dropout, shows that we can achieve different performance gains by using different bases matrices, and opens up a new research question as of how to choose optimal bases matrices that achieve maximal performance gain.
1
0
0
1
0
0
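A minimal sketch of the proposed idea, under assumed shapes: transform the activations into an arbitrary orthonormal basis B, zero out a random subset of coefficients, and transform back; with B equal to the identity this reduces to conventional dropout. The random orthonormal basis below is an illustrative choice, not necessarily an optimal one.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_dropout(x, B, p=0.5):
    coeff = B.T @ x                        # represent x in the basis B
    mask = rng.random(coeff.shape) > p     # drop coefficients, not neurons
    return B @ (coeff * mask) / (1 - p)    # back to the standard basis, rescaled

n = 8
# orthonormal basis from the QR decomposition of a random matrix
B, _ = np.linalg.qr(rng.normal(size=(n, n)))
x = rng.normal(size=n)
print(generalized_dropout(x, B))           # B = np.eye(n) recovers plain dropout
```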
Efficient algorithms to discover alterations with complementary functional association in cancer
Recent large cancer studies have measured somatic alterations in an unprecedented number of tumours. These large datasets allow the identification of cancer-related sets of genetic alterations by identifying relevant combinatorial patterns. Among such patterns, mutual exclusivity has been employed by several recent methods that have shown its effectiveness in characterizing gene sets associated with cancer. Mutual exclusivity arises because of the complementarity, at the functional level, of alterations in genes which are part of a group (e.g., a pathway) performing a given function. The availability of quantitative target profiles, from genetic perturbations or from clinical phenotypes, provides additional information that can be leveraged to improve the identification of cancer-related gene sets by discovering groups with complementary functional associations with such targets. In this work we study the problem of finding groups of mutually exclusive alterations associated with a quantitative (functional) target. We propose a combinatorial formulation for the problem, and prove that the associated computational problem is computationally hard. We design two algorithms to solve the problem and implement them in our tool UNCOVER. We provide analytic evidence of the effectiveness of UNCOVER in finding high-quality solutions and show experimentally that UNCOVER finds sets of alterations significantly associated with functional targets in a variety of scenarios. In addition, our algorithms are much faster than the state-of-the-art, allowing the analysis of large datasets of thousands of target profiles from cancer cell lines. We show that on one such dataset from project Achilles our methods identify several significant gene sets with complementary functional associations with targets.
0
0
0
0
1
0
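An illustrative greedy heuristic for the combinatorial problem posed above: pick k genes whose alterations cover high-target samples mutually exclusively. The exactly-once-covered score and the greedy strategy are simplifying assumptions, not UNCOVER's actual objective or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 50)) < 0.15          # alteration matrix: genes x samples
target = rng.random(50)                 # functional target value per sample

def score(genes):
    cover = A[genes].sum(axis=0)
    return target[cover == 1].sum()     # reward mutually exclusive coverage only

chosen = []
for _ in range(3):                      # pick k = 3 genes greedily
    best = max((g for g in range(8) if g not in chosen),
               key=lambda g: score(chosen + [g]))
    chosen.append(best)
print(chosen, score(chosen))
```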
Laplace operators on holomorphic Lie algebroids
The paper introduces Laplace-type operators for functions defined on the tangent space of a Finsler Lie algebroid, using a volume form on the prolongation of the algebroid. It also presents the construction of a horizontal Laplace operator for forms defined on the prolongation of the algebroid. All of the Laplace operators considered in the paper are also locally expressed using the Chern-Finsler connection of the algebroid.
0
0
1
0
0
0