text: string (lengths 57 to 2.88k)
labels: sequence of length 6
Title: Intense cross-tail field-aligned currents in the plasma sheet at lunar distances, Abstract: Field-aligned currents in the Earth's magnetotail are traditionally associated with transient plasma flows and strong plasma pressure gradients on the near-Earth side. In this paper we demonstrate a new field-aligned current system present in the tail at lunar orbit. Using magnetotail current sheet observations by two ARTEMIS probes at $\sim60 R_E$, we statistically analyze the current sheet structure and the current density distribution closest to the neutral sheet. For about half of our 130 current sheet crossings, the equatorial magnetic field component across the tail (along the main, cross-tail current) contributes significantly to the vertical pressure balance. This magnetic field component peaks at the equator, near the cross-tail current maximum. For those cases, a significant part of the tail current, with an intensity in the range 1-10 nA/m$^2$, flows along the magnetic field lines (it is both field-aligned and cross-tail). We suggest that this current system develops in order to compensate for the particle thermal pressure, which on its own is insufficient to fend off the lobe magnetic pressure.
[ 0, 1, 0, 0, 0, 0 ]
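A hedged worked form of the vertical pressure balance invoked in the entry above (standard one-dimensional current sheet balance with conventional notation, not taken from the paper itself):

$$\frac{B_{\rm lobe}^2}{2\mu_0} \;\approx\; P_{\rm th}(z=0) \;+\; \frac{B_y^2(z=0)}{2\mu_0},$$

so when the particle thermal pressure $P_{\rm th}$ at the neutral sheet is too small to balance the lobe field $B_{\rm lobe}$, a finite equatorial cross-tail component $B_y$ (carried by the field-aligned, cross-tail current the abstract describes) can supply the remainder; $\mu_0$ is the vacuum permeability.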
Title: A Result of Uniqueness of Solutions of the Shigesada-Kawasaki-Teramoto Equations, Abstract: We derive the uniqueness of weak solutions to the Shigesada-Kawasaki-Teramoto (SKT) systems using the adjoint problem argument. Combining with [PT17], we then derive the well-posedness for the SKT systems in space dimension $d\le 4$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Mind the Gap: A Well Log Data Analysis, Abstract: The main task in oil and gas exploration is to gain an understanding of the distribution and nature of rocks and fluids in the subsurface. Well logs are records of petro-physical data acquired along a borehole, providing direct information about what is in the subsurface. The data collected by logging wells can have significant economic consequences, due to the costs inherent to drilling wells, and the potential return of oil deposits. In this paper, we describe preliminary work aimed at building a general framework for well log prediction. First, we perform a descriptive and exploratory analysis of the gaps in the neutron porosity logs of more than a thousand wells in the North Sea. Then, we generate artificial gaps in the neutron logs that reflect the statistics collected earlier. Finally, we compare Artificial Neural Networks, Random Forests, and three Linear Regression algorithms in predicting the missing log values within these gaps on a well-by-well basis.
[ 1, 0, 0, 1, 0, 0 ]
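A minimal sketch in the spirit of the entry above, assuming a pandas DataFrame of well logs with hypothetical column names (NPHI for neutron porosity; GR, RHOB, DT as predictor logs); it masks artificial gaps and compares the three model families named in the abstract. This is an illustration, not the authors' pipeline.

```python
# Minimal sketch (not the paper's pipeline): predict artificially masked
# neutron-porosity values from other logs, comparing the model families
# named in the abstract. Column names (NPHI, GR, RHOB, DT) are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def evaluate_gap_fill(df, gap_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    feats, target = ["GR", "RHOB", "DT"], "NPHI"
    df = df.dropna(subset=feats + [target]).reset_index(drop=True)
    mask = rng.random(len(df)) < gap_frac            # artificial gap
    train, test = df[~mask], df[mask]
    models = {
        "linear": LinearRegression(),
        "forest": RandomForestRegressor(n_estimators=200, random_state=seed),
        "mlp": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                            random_state=seed),
    }
    scores = {}
    for name, model in models.items():
        model.fit(train[feats], train[target])
        scores[name] = mean_absolute_error(test[target],
                                           model.predict(test[feats]))
    return scores  # MAE per model on the masked interval
```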
Title: All-optical switching and unidirectional plasmon launching with electron-hole plasma driven silicon nanoantennas, Abstract: High-index dielectric nanoparticles have become a powerful platform for modern light science, enabling various fascinating applications, especially in nonlinear nanophotonics, for which they enable special types of optical nonlinearity, such as electron-hole plasma photoexcitation, that are not inherent to plasmonic nanostructures. Here, we propose a novel geometry for highly tunable all-dielectric nanoantennas, consisting of a chain of silicon nanoparticles excited by an electric dipole source, which allows tuning their radiation properties via electron-hole plasma photoexcitation. We show that the slowly guided modes determining the Van Hove singularity of the nanoantenna are very sensitive to the nanoparticle permittivity, opening up the ability to utilize this effect for efficient all-optical modulation. We show that pumping several boundary nanoparticles with relatively low intensities may cause dramatic variations in the nanoantenna radiation power patterns and Purcell factor. We also demonstrate that ultrafast pumping of the designed nanoantenna allows unidirectional launching of surface plasmon-polaritons, with interesting implications for modern nonlinear nanophotonics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Group chasing tactics: how to catch a faster prey?, Abstract: We propose a bio-inspired, agent-based approach to describe the natural phenomenon of group chasing in both two and three dimensions. Using a set of local interaction rules we created a continuous-space and discrete-time model with time delay, external noise and limited acceleration. We implemented a unique collective chasing strategy, optimized its parameters and studied its properties when chasing a much faster, erratic escaper. We show that collective chasing strategies can significantly enhance the chasers' success rate. Our realistic approach handles group chasing within closed, soft boundaries, in contrast to most models published in the literature, which use periodic boundaries, and it reproduces several properties of pursuits observed in nature, such as the emergent encircling or the escaper's zigzag motion.
[ 0, 1, 0, 1, 0, 0 ]
Title: Probing for sparse and fast variable selection with model-based boosting, Abstract: We present a new variable selection method based on model-based gradient boosting and randomly permuted variables. Model-based boosting is a tool to fit a statistical model while performing variable selection at the same time. A drawback of the fitting lies in the need for multiple model fits on slightly altered data (e.g. cross-validation or bootstrap) to find the optimal number of boosting iterations and prevent overfitting. In our proposed approach, we augment the data set with randomly permuted versions of the true variables, so-called shadow variables, and stop the step-wise fitting as soon as such a variable would be added to the model. This allows variable selection in a single fit of the model without requiring further parameter tuning. We show that our probing approach can compete with state-of-the-art selection methods like stability selection in a high-dimensional classification benchmark and apply it to gene expression data for the estimation of riboflavin production of Bacillus subtilis.
[ 0, 0, 0, 1, 0, 0 ]
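A minimal sketch of the probing idea from the entry above, using plain componentwise L2-boosting as a stand-in for the authors' model-based boosting implementation: each column gets a randomly permuted "shadow" copy, and fitting stops the first time a shadow would be selected. Function and variable names are illustrative assumptions.

```python
# Minimal sketch of the probing idea: augment the design matrix with permuted
# "shadow" copies and stop componentwise L2-boosting as soon as a shadow
# variable would be selected. A simplified stand-in, not the authors' code.
import numpy as np

def probing_boost(X, y, nu=0.1, max_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    shadows = np.apply_along_axis(rng.permutation, 0, X)  # permute each column
    Z = np.hstack([X, shadows])
    Z = (Z - Z.mean(0)) / Z.std(0)        # standardize base learners
    resid = y - y.mean()
    coef = np.zeros(2 * p)
    for _ in range(max_iter):
        betas = Z.T @ resid / n           # univariate fits of the residual
        j = int(np.argmax(np.abs(betas)))
        if j >= p:                        # best base learner is a shadow: stop
            break
        coef[j] += nu * betas[j]
        resid -= nu * betas[j] * Z[:, j]
    selected = np.flatnonzero(coef[:p])
    return selected, coef[:p]
```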
Title: A surface-hopping method for semiclassical calculations of cross sections for radiative association with electronic transitions, Abstract: A semiclassical method based on surface-hopping techniques is developed to model the dynamics of radiative association with electronic transitions in arbitrary polyatomic systems. It can be proven that our method is an extension of the established semiclassical formula used in the characterization of diatomic molecule formation. Our model is tested for diatomic molecules. It gives the same cross sections as the former semiclassical formula, but contrary to the former method it allows us to follow the fate of the trajectories after the emission of a photon. This means that we can characterize the rovibrational states of the stabilized molecules: using semiclassical quantization we can obtain quantum state resolved cross sections or emission spectra for the radiative association process. The calculated semiclassical state resolved spectra show good agreement with the result of quantum mechanical perturbation theory. Furthermore, our surface-hopping model is not only applicable to the description of radiative association, but it can also be used for the semiclassical characterization of any molecular process where spontaneous emission occurs.
[ 0, 1, 0, 0, 0, 0 ]
Title: Persistent Spread Measurement for Big Network Data Based on Register Intersection, Abstract: Persistent spread measurement counts the number of distinct elements that persist in each network flow for predefined time periods. It has many practical applications, including detecting long-term stealthy network activities in the background of normal-user activities, such as stealthy DDoS attacks, stealthy network scans, or faked network trends, which cannot be detected by traditional flow cardinality measurement. With big network data, one challenge is to measure the persistent spreads of a massive number of flows without incurring too much memory overhead, as such measurement may be performed at line speed by network processors with fast but small on-chip memory. We propose a highly compact Virtual Intersection HyperLogLog (VI-HLL) architecture for this purpose. It achieves far better memory efficiency than the best prior work of V-Bitmap, and in the meantime drastically extends the measurement range. Theoretical analysis and extensive experiments demonstrate that VI-HLL provides good measurement accuracy even in very tight memory space of less than 1 bit per flow.
[ 1, 0, 0, 0, 0, 0 ]
Title: High-dimensional Linear Regression for Dependent Observations with Application to Nowcasting, Abstract: In the last few years, an extensive literature has focused on the $\ell_1$ penalized least squares (Lasso) estimators of high dimensional linear regression when the number of covariates $p$ is considerably larger than the sample size $n$. However, limited attention has been paid to the properties of the estimators when the errors or/and the covariates are serially dependent. In this study, we investigate the theoretical properties of the Lasso estimators for linear regression with random design under serially dependent and/or non-sub-Gaussian errors and covariates. In contrast to the traditional case in which the errors are i.i.d. and have finite exponential moments, we show that $p$ can at most be a power of $n$ if the errors have only polynomial moments. In addition, the rate of convergence becomes slower due to the serial dependencies in the errors and the covariates. We also consider sign consistency for model selection via Lasso when there are serial correlations in the errors or the covariates or both. Adopting the framework of functional dependence measures, we provide a detailed description of how the rates of convergence and the selection consistencies of the estimators depend on the dependence measures and moment conditions of the errors and the covariates. Simulation results show that Lasso regression can be substantially more powerful than the mixed frequency data sampling regression (MIDAS) in the presence of irrelevant variables. We apply the results obtained for the Lasso method to nowcasting with mixed frequency data, in which serially correlated errors and a large number of covariates are common. In real examples, the Lasso procedure outperforms the MIDAS in both forecasting and nowcasting.
[ 0, 0, 1, 1, 0, 0 ]
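For reference, the $\ell_1$ penalized least squares (Lasso) estimator studied in the entry above, in its standard form (the paper's contribution enters through the assumptions on the errors and covariates, not through the estimator itself):

$$\hat{\beta}_{\rm Lasso} \;=\; \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\,\|y - X\beta\|_2^2 \;+\; \lambda\,\|\beta\|_1 .$$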
Title: Dynamic control of the optical emission from GaN/InGaN nanowire quantum dots by surface acoustic waves, Abstract: The optical emission of InGaN quantum dots embedded in GaN nanowires is dynamically controlled by a surface acoustic wave (SAW). The emission energy of both the exciton and biexciton lines is modulated over a 1.5 meV range at ~330 MHz. A small but systematic difference in the exciton and biexciton spectral modulation reveals a linear change of the biexciton binding energy with the SAW amplitude. The present results are relevant for the dynamic control of individual single photon emitters based on nitride semiconductors.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Generative Networks For Sequence Prediction, Abstract: This thesis investigates unsupervised time series representation learning for sequence prediction problems, i.e. generating nice-looking input samples given a previous history, for high dimensional input sequences by decoupling the static input representation from the recurrent sequence representation. We introduce three models based on Generative Stochastic Networks (GSN) for unsupervised sequence learning and prediction. Experimental results for these three models are presented on pixels of sequential handwritten digit (MNIST) data, videos of low-resolution bouncing balls, and motion capture data. The main contribution of this thesis is to provide evidence that GSNs are a viable framework to learn useful representations of complex sequential input data, and to suggest a new framework for deep generative models to learn complex sequences by decoupling static input representations from dynamic time dependency representations.
[ 0, 0, 0, 1, 0, 0 ]
Title: Composite Behavioral Modeling for Identity Theft Detection in Online Social Networks, Abstract: In this work, we aim at building a bridge from poor behavioral data to an effective, quick-response, and robust behavior model for online identity theft detection. We concentrate on this issue in online social networks (OSNs) where users usually have composite behavioral records, consisting of multi-dimensional low-quality data, e.g., offline check-ins and online user generated content (UGC). As an insightful result, we find that there is a complementary effect among different dimensions of records for modeling users' behavioral patterns. To deeply exploit such a complementary effect, we propose a joint model to capture both online and offline features of a user's composite behavior. We evaluate the proposed joint model by comparing with some typical models on two real-world datasets: Foursquare and Yelp. In the widely-used setting of theft simulation (simulating thefts via behavioral replacement), the experimental results show that our model outperforms the existing ones, with the AUC values $0.956$ in Foursquare and $0.947$ in Yelp, respectively. Particularly, the recall (True Positive Rate) can reach up to $65.3\%$ in Foursquare and $72.2\%$ in Yelp with the corresponding disturbance rate (False Positive Rate) below $1\%$. It is worth mentioning that these performances can be achieved by examining only one composite behavior (visiting a place and posting a tip online simultaneously) per authentication, which guarantees the low response latency of our method. This study would give the cybersecurity community new insights into whether and how a real-time online identity authentication can be improved via modeling users' composite behavioral patterns.
[ 1, 0, 0, 0, 0, 0 ]
Title: Asymptotic properties of the set of systoles of arithmetic Riemann surfaces, Abstract: The purpose of this article is to try to understand the mysterious coincidence between the asymptotic behavior of the volumes of the Moduli Space of closed hyperbolic surfaces of genus $g$ with respect to the Weil-Petersson metric and the asymptotic behavior of the number of arithmetic closed hyperbolic surfaces of genus $g$. If the set of arithmetic surfaces is well distributed, then its image under any interesting function should be well distributed too. We investigate the distribution of the systole function. We give several results indicating that the systoles of arithmetic surfaces cannot be concentrated, and consequently the same holds for the set of arithmetic surfaces. The proofs are based on different techniques: combinatorics (obtaining regular graphs with any girth from results of B. Bollobas and constructions with cages and Ramanujan graphs), group theory (constructing finite index subgroups of surface groups from finite index subgroups of free groups using results of G. Baumslag) and geometric group theory (linking the geometry of graphs with the geometry of coverings of a surface).
[ 0, 0, 1, 0, 0, 0 ]
Title: Multiband Superconductivity in the time reversal symmetry broken superconductor Re6Zr, Abstract: We report point contact Andreev reflection (PCAR) measurements on a high-quality single crystal of the non-centrosymmetric superconductor Re6Zr. We observe that the PCAR spectra can be fitted by taking two isotropic superconducting gaps with $\Delta_1 \sim 0.79$ meV and $\Delta_2 \sim 0.22$ meV, respectively, suggesting that there are at least two bands which contribute to superconductivity. Combined with the observation of time reversal symmetry breaking at the superconducting transition from muon spin relaxation measurements (Phys. Rev. Lett. 112, 107002 (2014)), our results imply an unconventional superconducting order in this compound: a multiband singlet state that breaks time reversal symmetry or a triplet state dominated by interband pairing.
[ 0, 1, 0, 0, 0, 0 ]
Title: Comment on "Laser cooling of $^{173}$Yb for isotope separation and precision hyperfine spectroscopy", Abstract: We present measurements of the hyperfine splitting in the Yb-173 $6s6p~^1P_1^{\rm o} (F^{\prime}=3/2,7/2)$ states that disagree significantly with those measured previously by Das and Natarajan [Phys. Rev. A 76, 062505 (2007)]. We point out inconsistencies in their measurements and suggest that their error is due to optical pumping and improper determination of the atomic line center. Our measurements are made using an optical frequency comb. We use an optical pumping scheme to improve the signal-to-background ratio for the $F^{\prime}=3/2$ component.
[ 0, 1, 0, 0, 0, 0 ]
Title: Training GANs with Optimism, Abstract: We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Descent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror descent (OMD) can enjoy faster regret rates in the context of zero-sum games. WGAN training is exactly such a context: solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror descent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.
[ 1, 0, 0, 1, 0, 0 ]
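A minimal sketch of the optimistic update on a toy bilinear zero-sum game $f(x, y) = x^{\top} A y$, contrasting last-iterate behavior with plain simultaneous gradient descent/ascent. This is the textbook optimistic gradient rule, not the paper's WGAN training code; step size and dimensions are arbitrary choices.

```python
# Toy bilinear game f(x, y) = x^T A y: simultaneous GDA spirals away from the
# saddle point (0, 0), while the optimistic update contracts toward it.
# Textbook update rule, not the paper's WGAN code.
import numpy as np

def play(optimistic, steps=2000, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((3, 3))
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    gx_prev, gy_prev = np.zeros(3), np.zeros(3)
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x          # grad_x f and grad_y f
        if optimistic:                   # x_{t+1} = x_t - 2*eta*g_t + eta*g_{t-1}
            x = x - 2 * eta * gx + eta * gx_prev
            y = y + 2 * eta * gy - eta * gy_prev
        else:                            # plain simultaneous GDA
            x = x - eta * gx
            y = y + eta * gy
        gx_prev, gy_prev = gx, gy
    return np.linalg.norm(x) + np.linalg.norm(y)

print("GDA distance from equilibrium:", play(False))
print("OMD distance from equilibrium:", play(True))
```

In the unconstrained bilinear game the only saddle point is $(0, 0)$; the plain GDA iterate drifts outward while the optimistic iterate converges toward it, which is the qualitative contrast the abstract describes.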
Title: A recipe for topological observables of density matrices, Abstract: Meaningful topological invariants for mixed quantum states are challenging to identify as there is no unique way to define them, and most choices do not directly relate to physical observables. Here, we propose a simple pragmatic approach to construct topological invariants of mixed states while preserving a connection to physical observables, by continuously deforming known topological invariants for pure (ground) states. Our approach relies on expectation values of many-body operators, with no reference to single-particle (e.g., Bloch) wavefunctions. To illustrate it, we examine extensions to mixed states of $U(1)$ geometric (Berry) phases and their corresponding topological invariant (winding or Chern number). We discuss measurement schemes, and provide a detailed construction of invariants for thermal or more general mixed states of quantum systems with (at least) $U(1)$ charge-conservation symmetry, such as quantum Hall insulators.
[ 0, 1, 0, 0, 0, 0 ]
Title: On a class of infinitely differentiable functions in ${\mathbb R}^n$ admitting holomorphic extension in ${\mathbb C}^n$, Abstract: A space $G(M, \varPhi)$ of infinitely differentiable functions in ${\mathbb R}^n$, constructed with the help of a family $\varPhi=\{\varphi_m\}_{m=1}^{\infty}$ of real-valued functions $\varphi_m \in~C({\mathbb R}^n)$ and a logarithmically convex sequence $M$ of positive numbers, is considered in this article. In view of the conditions on $M$, each function of $G(M, \varPhi)$ can be extended to an entire function in ${\mathbb C}^n$. The conditions imposed on $M$ and $\varPhi$ allow us to describe the space of such extensions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Effects of Degree Correlations in Interdependent Security: Good or Bad?, Abstract: We study the influence of degree correlations or network mixing in interdependent security. We model the interdependence in security among agents using a dependence graph and employ a population game model to capture the interaction among many agents when they are strategic and have various security measures they can choose to defend themselves. The overall network security is measured by what we call the average risk exposure (ARE) from neighbors, which is proportional to the total (expected) number of attacks in the network. We first show that there exists a unique pure-strategy Nash equilibrium of a population game. Then, we prove that as the agents with larger degrees in the dependence graph see higher risks than those with smaller degrees, the overall network security deteriorates in that the ARE experienced by agents increases and there are more attacks in the network. Finally, using this finding, we demonstrate that the effects of network mixing on ARE depend on the (cost) effectiveness of security measures available to agents; if the security measures are not effective, increasing assortativity of dependence graph results in higher ARE. On the other hand, if the security measures are effective at fending off the damages and losses from attacks, increasing assortativity reduces the ARE experienced by agents.
[ 1, 1, 0, 0, 0, 0 ]
Title: Forming short-period Wolf-Rayet X-ray binaries and double black holes through stable mass transfer, Abstract: We show that black-hole High-Mass X-ray Binaries (HMXBs) with O- or B-type donor stars and relatively short orbital periods, of order one week to several months, may survive spiral-in to then form Wolf-Rayet (WR) X-ray binaries with orbital periods of order a day to a few days, while in systems where the compact star is a neutron star, HMXBs with these orbital periods never survive spiral-in. We therefore predict that WR X-ray binaries can only harbor black holes. The reason why black-hole HMXBs with these orbital periods may survive spiral-in is the combination of a radiative envelope of the donor star and a high mass of the compact star. In this case, when the donor begins to overflow its Roche lobe, the systems are able to spiral in slowly with stable Roche-lobe overflow, as is shown by the system SS433. In this case the transferred mass is ejected from the vicinity of the compact star (the so-called "isotropic re-emission" mass loss mode, or "SS433-like mass loss"), leading to gradual spiral-in. If the mass ratio of donor and black hole is $>3.5$, these systems will go into CE evolution and are less likely to survive. If they survive, they produce WR X-ray binaries with orbital periods of a few hours to one day. Several of the well-known WR+O binaries in our Galaxy and the Magellanic Clouds, with orbital periods in the range between a week and several months, are expected to evolve into close WR-Black-Hole binaries, which may later produce close double black holes. The galactic formation rate of double black holes resulting from such systems is still uncertain, as it depends on several poorly known factors in this evolutionary picture. It might possibly be as high as $\sim 10^{-5}$ per year.
[ 0, 1, 0, 0, 0, 0 ]
Title: Learning to Draw Samples with Amortized Stein Variational Gradient Descent, Abstract: We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient direction (Liu & Wang, 2016) that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable in terms of the parameters we want to adapt. We demonstrate our method with a number of applications, including a variational autoencoder (VAE) with expressive encoders to model complex latent space structures, and hyper-parameter learning of MCMC samplers that allows Bayesian inference to adaptively improve itself when seeing more data.
[ 0, 0, 0, 1, 0, 0 ]
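A minimal particle-based sketch of the Stein variational gradient direction that the entry above amortizes into a network; this is the non-amortized SVGD update of Liu & Wang (2016), with an RBF kernel and a one-dimensional standard-normal target as illustrative assumptions.

```python
# Minimal (non-amortized) SVGD sketch: particles follow the Stein variational
# direction phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ].
# RBF kernel and a standard-normal target are illustrative choices.
import numpy as np

def grad_log_p(x):                       # target N(0, 1): d/dx log p(x) = -x
    return -x

def svgd(n_particles=100, steps=500, eps=0.1, h=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, n_particles)
    for _ in range(steps):
        diff = x[:, None] - x[None, :]               # x_j - x_i, shape (n, n)
        k = np.exp(-diff**2 / (2 * h**2))            # RBF kernel k(x_j, x_i)
        grad_k = -diff / h**2 * k                    # d k(x_j, x_i) / d x_j
        phi = (k * grad_log_p(x)[:, None] + grad_k).mean(axis=0)
        x = x + eps * phi                            # deterministic update
    return x                                         # approx. samples from N(0, 1)
```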
Title: Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition, Abstract: This note proposes a simple and general framework for dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition into the DMD and mode selection algorithms. By performing the preconditioning step, the DMD and the mode selection can be carried out with low memory consumption and small computational complexity, and can be applied to large datasets. In addition, a simple mode selection algorithm based on a greedy method is proposed. The proposed framework is applied to the analysis of a three-dimensional flow around a circular cylinder.
[ 0, 1, 0, 0, 0, 0 ]
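For context, a minimal sketch of standard (exact) DMD computed through a truncated SVD; the preconditioning by incremental proper orthogonal decomposition and the greedy mode selection contributed by the entry above are not reproduced here.

```python
# Minimal exact-DMD sketch via truncated SVD of the snapshot matrix.
# The incremental-POD preconditioning and greedy mode selection described
# in the abstract are not reproduced here.
import numpy as np

def dmd(snapshots, rank):
    """snapshots: (n_states, n_times) array of sequential flow snapshots."""
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    A_tilde = U.conj().T @ X2 @ V / s        # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)      # DMD eigenvalues
    modes = X2 @ V / s @ W                   # exact DMD modes
    return eigvals, modes
```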
Title: Variants of RMSProp and Adagrad with Logarithmic Regret Bounds, Abstract: Adaptive gradient methods have recently become very popular, in particular as they have been shown to be useful in the training of deep neural networks. In this paper we analyze RMSProp, originally proposed for the training of deep neural networks, in the context of online convex optimization and show $\sqrt{T}$-type regret bounds. Moreover, we propose two variants, SC-Adagrad and SC-RMSProp, for which we show logarithmic regret bounds for strongly convex functions. Finally, we demonstrate in the experiments that these new variants outperform other adaptive gradient techniques or stochastic gradient descent in the optimization of strongly convex functions as well as in training of deep neural networks.
[ 1, 0, 0, 1, 0, 0 ]
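For reference, a minimal sketch of the plain RMSProp update analyzed in the entry above; the SC-Adagrad/SC-RMSProp variants modify how the damping and effective step size evolve to obtain logarithmic regret for strongly convex losses, and are not reproduced here.

```python
# Minimal RMSProp sketch (the baseline update analyzed in the entry above).
# The SC-variants change how the damping/step size evolves; not shown here.
import numpy as np

def rmsprop(grad_fn, w0, lr=1e-2, beta=0.9, eps=1e-8, steps=1000):
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)                  # running mean of squared gradients
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v + (1 - beta) * g**2
        w = w - lr * g / (np.sqrt(v) + eps)
    return w

# Example: minimize the strongly convex f(w) = ||w - 1||^2.
print(rmsprop(lambda w: 2 * (w - 1.0), np.zeros(3)))
```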
Title: Dykstra's Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions, Abstract: We study connections between Dykstra's algorithm for projecting onto an intersection of convex sets, the augmented Lagrangian method of multipliers or ADMM, and block coordinate descent. We prove that coordinate descent for a regularized regression problem, in which the (separable) penalty functions are seminorms, is exactly equivalent to Dykstra's algorithm applied to the dual problem. ADMM on the dual problem is also seen to be equivalent, in the special case of two sets, with one being a linear subspace. These connections, aside from being interesting in their own right, suggest new ways of analyzing and extending coordinate descent. For example, from existing convergence theory on Dykstra's algorithm over polyhedra, we discern that coordinate descent for the lasso problem converges at an (asymptotically) linear rate. We also develop two parallel versions of coordinate descent, based on the Dykstra and ADMM connections.
[ 0, 0, 1, 1, 0, 0 ]
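A minimal sketch of Dykstra's algorithm for two closed convex sets, each supplied as a Euclidean projection operator; the equivalences with coordinate descent and ADMM established in the entry above are not shown, only the primitive being discussed.

```python
# Minimal Dykstra sketch: project a point onto the intersection of two
# closed convex sets, each given by its Euclidean projection operator.
import numpy as np

def dykstra(z, proj_c, proj_d, iters=200):
    x = np.array(z, dtype=float)
    p = np.zeros_like(x)        # correction term for set C
    q = np.zeros_like(x)        # correction term for set D
    for _ in range(iters):
        y = proj_c(x + p)
        p = x + p - y
        x = proj_d(y + q)
        q = y + q - x
    return x

# Example: intersection of the unit ball and the nonnegative orthant.
proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))
proj_orthant = lambda v: np.maximum(v, 0.0)
print(dykstra(np.array([2.0, -3.0]), proj_ball, proj_orthant))  # ~ [1, 0]
```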
Title: A Topological Perspective on Interacting Algebraic Theories, Abstract: Techniques from higher categories and higher-dimensional rewriting are becoming increasingly important for understanding the finer, computational properties of higher algebraic theories that arise, among other fields, in quantum computation. These theories often have the property of containing simpler sub-theories, whose interaction is regulated in a limited number of ways, which reveals a topological substrate when pictured by string diagrams. By exploring the double nature of computads as presentations of higher algebraic theories and as combinatorial descriptions of "directed spaces", we develop a basic language of directed topology for the compositional study of algebraic theories. We present constructions of computads, all with clear analogues in standard topology, that capture in great generality such notions as homomorphisms and actions, and the interactions of monoids and comonoids that lead to the theory of Frobenius algebras and of bialgebras. After a number of examples, we describe how a fragment of the ZX calculus can be reconstructed in this framework.
[ 1, 0, 1, 0, 0, 0 ]
Title: Dynamic nested sampling: an improved algorithm for parameter estimation and evidence calculation, Abstract: We introduce dynamic nested sampling: a generalisation of the nested sampling algorithm in which the number of "live points" varies to allocate samples more efficiently. In empirical tests the new method significantly improves calculation accuracy compared to standard nested sampling with the same number of samples; this increase in accuracy is equivalent to speeding up the computation by factors of up to ~72 for parameter estimation and ~7 for evidence calculations. We also show that the accuracy of both parameter estimation and evidence calculations can be improved simultaneously. In addition, unlike in standard nested sampling, more accurate results can be obtained by continuing the calculation for longer. Popular standard nested sampling implementations can be easily adapted to perform dynamic nested sampling, and several dynamic nested sampling software packages are now publicly available.
[ 0, 1, 0, 1, 0, 0 ]
Title: Bounds on poloidal kinetic energy in plane layer convection, Abstract: A numerical method is presented which conveniently computes upper bounds on heat transport and poloidal energy in plane layer convection for infinite and finite Prandtl numbers. The bounds obtained for the heat transport coincide with earlier results. These bounds imply upper bounds for the poloidal energy which follow directly from the definitions of dissipation and energy. The same constraints used for computing upper bounds on the heat transport lead to improved bounds for the poloidal energy.
[ 0, 1, 0, 0, 0, 0 ]
Title: Switching Isotropic and Directional Exploration with Parameter Space Noise in Deep Reinforcement Learning, Abstract: This paper proposes an exploration method for deep reinforcement learning based on parameter space noise. Recent studies have experimentally shown that parameter space noise results in better exploration than the commonly used action space noise. Previous methods devised a way to update the diagonal covariance matrix of a noise distribution but did not consider the direction of the noise vector or its correlations. In addition, fast updates of the noise distribution are required to facilitate policy learning. We propose a method that deforms the noise distribution according to the accumulated returns and the noises that have led to the returns. Moreover, this method switches between isotropic exploration and directional exploration in parameter space with regard to obtained rewards. We validate our exploration strategy in the OpenAI Gym continuous environments and modified environments with sparse rewards. The proposed method achieves results that are competitive with a previous method on baseline tasks. Moreover, our approach exhibits better performance in sparse reward environments by exploration with the switching strategy.
[ 0, 0, 0, 1, 0, 0 ]
Title: The solution to the initial value problem for the ultradiscrete Somos-4 and 5 equations, Abstract: We propose a method to solve the initial value problem for the ultradiscrete Somos-4 and Somos-5 equations by expressing terms in the equations as convex polygons and regarding max-plus algebras as those on polygons.
[ 0, 1, 0, 0, 0, 0 ]
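For concreteness, the Somos-4 recurrence and its standard ultradiscretization under the substitution $a_n = e^{x_n/\varepsilon}$, $\varepsilon \to 0^+$ (general background consistent with the entry above, not a statement of the paper's solution method):

$$a_{n+4}\,a_n = a_{n+3}\,a_{n+1} + a_{n+2}^2 \quad\longrightarrow\quad x_{n+4} + x_n = \max\bigl(x_{n+3} + x_{n+1},\; 2x_{n+2}\bigr).$$

In the max-plus limit, multiplication becomes addition and addition becomes max, which is why the initial value problem can be phrased in terms of convex polygons, as the abstract indicates.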
Title: Two variants of the Froidure-Pin Algorithm for finite semigroups, Abstract: In this paper, we present two algorithms based on the Froidure-Pin Algorithm for computing the structure of a finite semigroup from a generating set. As was the case with the original algorithm of Froidure and Pin, the algorithms presented here produce the left and right Cayley graphs, a confluent terminating rewriting system, and a reduced word of the rewriting system for every element of the semigroup. If $U$ is any semigroup, and $A$ is a subset of $U$, then we denote by $\langle A\rangle$ the least subsemigroup of $U$ containing $A$. If $B$ is any other subset of $U$, then, roughly speaking, the first algorithm we present describes how to use any information about $\langle A\rangle$, that has been found using the Froidure-Pin Algorithm, to compute the semigroup $\langle A\cup B\rangle$. More precisely, we describe the data structure for a finite semigroup $S$ given by Froidure and Pin, and how to obtain such a data structure for $\langle A\cup B\rangle$ from that for $\langle A\rangle$. The second algorithm is a lock-free concurrent version of the Froidure-Pin Algorithm.
[ 1, 0, 1, 0, 0, 0 ]
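A minimal naive enumeration of $\langle A\rangle$ for transformation semigroups by closure under right multiplication, recording right Cayley graph edges. This is a baseline illustration only; the Froidure-Pin data structure discussed in the entry above additionally maintains reduced words and a rewriting system, and the paper's two variants (extension by new generators, lock-free concurrency) are not sketched.

```python
# Naive BFS enumeration of the semigroup generated by transformations of
# {0, ..., n-1}, recording right Cayley graph edges. Illustration only;
# the Froidure-Pin data structure (reduced words, rewriting system) is richer.
from collections import deque

def compose(f, g):
    """(f * g)(x) = g(f(x)): apply f first, then g (one common convention)."""
    return tuple(g[f[x]] for x in range(len(f)))

def enumerate_semigroup(generators):
    generators = [tuple(g) for g in generators]
    elements = {g: i for i, g in enumerate(dict.fromkeys(generators))}
    queue = deque(elements)
    right_cayley = {}              # (element index, generator index) -> index
    while queue:
        s = queue.popleft()
        for j, a in enumerate(generators):
            t = compose(s, a)
            if t not in elements:
                elements[t] = len(elements)
                queue.append(t)
            right_cayley[(elements[s], j)] = elements[t]
    return elements, right_cayley

# Example: the full transformation monoid T_3 from three standard generators.
elems, cayley = enumerate_semigroup([(1, 0, 2), (1, 2, 0), (0, 0, 2)])
print(len(elems))   # 27 = |T_3|
```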
Title: Holon Wigner Crystal in a Lightly Doped Kagome Quantum Spin Liquid, Abstract: We address the problem of a lightly doped spin-liquid through a large-scale density-matrix renormalization group (DMRG) study of the $t$-$J$ model on a Kagome lattice with a small but non-zero concentration, $\delta$, of doped holes. It is now widely accepted that the undoped ($\delta=0$) spin 1/2 Heisenberg antiferromagnet has a spin-liquid groundstate. Theoretical arguments have been presented that light doping of such a spin-liquid could give rise to a high temperature superconductor or an exotic topological Fermi liquid metal (FL$^\ast$). Instead, we infer that the doped holes form an insulating charge-density wave state with one doped-hole per unit cell - i.e. a Wigner crystal (WC). Spin correlations remain short-ranged, as in the spin-liquid parent state, from which we infer that the state is a crystal of spinless holons (WC$^\ast$), rather than of holes. Our results may be relevant to Kagome lattice Herbertsmithite $\rm ZnCu_3(OH)_6Cl_2$ upon doping.
[ 0, 1, 0, 0, 0, 0 ]
Title: Accelerated Block Coordinate Proximal Gradients with Applications in High Dimensional Statistics, Abstract: Nonconvex optimization problems arise in different research fields and attract considerable attention in signal processing, statistics and machine learning. In this work, we explore the accelerated proximal gradient method and some of its variants which have recently been shown to converge in the nonconvex setting. We show that a novel variant proposed here, which exploits adaptive momentum and block coordinate update with specific update rules, further improves the performance on a broad class of nonconvex problems. In applications to sparse linear regression with regularizations like Lasso, grouped Lasso, capped $\ell_1$ and SCAD, the proposed scheme enjoys provable local linear convergence, with experimental justification.
[ 1, 0, 0, 1, 0, 0 ]
Title: Static structure of chameleon dark matter as an explanation of dwarf spheroidal galactic cores, Abstract: We propose a novel mechanism which explains the cored dark matter density profile in recently observed dark-matter-rich dwarf spheroidal galaxies. In our scenario, the dark matter particle mass decreases gradually as a function of distance towards the center of a dwarf galaxy due to its interaction with a chameleon scalar. At closer distances towards the galactic center the strength of the attractive scalar fifth force becomes much stronger than gravity and is balanced by the Fermi pressure of the dark matter cloud, and thus an equilibrium static configuration of the dark matter halo is obtained. As in the case of a soliton star or fermion Q-star, the stability of the dark matter halo is obtained as the scalar achieves a static profile and reaches an asymptotic value away from the galactic center. For a simple scalar-dark matter interaction and a quadratic scalar self-interaction potential, we show that dark matter behaves exactly like cold dark matter (CDM) beyond a few $\rm{kpc}$ from the galactic center, but at closer distances it becomes lighter and the Fermi pressure cannot be ignored anymore. Using the Thomas-Fermi approximation, we numerically solve the radial static profile of the scalar field, fermion mass and dark matter energy density as a function of distance. We find that for a fifth force mediated by an ultra light scalar, it is possible to obtain a flattened dark matter density profile towards the galactic center. In our scenario, the fifth force can be neglected at distances $ r \geq 1\, \rm{kpc}$ from the galactic center and dark matter can simply be treated as heavy non-relativistic particles beyond this distance, thus reproducing the success of CDM at large scales.
[ 0, 1, 0, 0, 0, 0 ]
Title: Unbalancing Sets and an Almost Quadratic Lower Bound for Syntactically Multilinear Arithmetic Circuits, Abstract: We prove a lower bound of $\Omega(n^2/\log^2 n)$ on the size of any syntactically multilinear arithmetic circuit computing some explicit multilinear polynomial $f(x_1, \ldots, x_n)$. Our approach expands and improves upon a result of Raz, Shpilka and Yehudayoff ([RSY08]), who proved a lower bound of $\Omega(n^{4/3}/\log^2 n)$ for the same polynomial. Our improvement follows from an asymptotically optimal lower bound for a generalized version of Galvin's problem in extremal set theory.
[ 1, 0, 0, 0, 0, 0 ]
Title: Topological and noninertial effects on the interband light absorption, Abstract: In this work, we investigate the combined influence of the nontrivial topology introduced by a disclination and noninertial effects due to rotation on the energy levels and the wave functions of a noninteracting electron gas confined to a two-dimensional pseudoharmonic quantum dot, under the influence of an external uniform magnetic field. The exact solutions for the energy eigenvalues and wave functions are computed as functions of the applied magnetic field strength, the disclination topological charge, the magnetic quantum number and the rotation speed of the sample. We investigate the modifications of the interband light absorption coefficient and the absorption threshold frequency. We observe novel features in the system, including a range of magnetic field without corresponding absorption phenomena, which is due to a tripartite term of the Hamiltonian involving the magnetic field, the topological charge of the defect and the rotation frequency.
[ 0, 1, 0, 0, 0, 0 ]
Title: Hyperbolic inverse mean curvature flow, Abstract: In this paper, we prove the short-time existence of hyperbolic inverse (mean) curvature flow (with or without the specified forcing term) under the assumption that the initial compact smooth hypersurface of $\mathbb{R}^{n+1}$ ($n\geqslant2$) is mean convex and star-shaped. Several interesting examples and some hyperbolic evolution equations for geometric quantities of the evolving hypersurfaces have been shown. Besides, under different assumptions for the initial velocity, we can get the expansion and the convergence results of a hyperbolic inverse mean curvature flow in the plane $\mathbb{R}^2$, whose evolving curves move normally.
[ 0, 0, 1, 0, 0, 0 ]
Title: A new charge reconstruction algorithm for the DAMPE silicon microstrip detector, Abstract: The DArk Matter Particle Explorer (DAMPE) is one of the four satellites within the Strategic Pioneer Research Program in Space Science of the Chinese Academy of Science (CAS). The Silicon-Tungsten Tracker (STK), which is composed of 768 singled-sided silicon microstrip detectors, is one of the four subdetectors in DAMPE, providing track reconstruction and charge identification for relativistic charged particles. The charge response of DAMPE silicon microstrip detectors is complicated, depending on the incident angle and impact position. A new charge reconstruction algorithm for the DAMPE silicon microstrip detector is introduced in this paper. This algorithm can correct the complicated charge response, and was proved applicable by the ion test beam.
[ 0, 1, 0, 0, 0, 0 ]
Title: The Structure Transfer Machine Theory and Applications, Abstract: Representation learning is a fundamental but challenging problem, especially when the distribution of data is unknown. We propose a new representation learning method, termed Structure Transfer Machine (STM), which enables the feature learning process to converge at the representation expectation in a probabilistic way. We theoretically show that such an expected value of the representation (mean) is achievable if the manifold structure can be transferred from the data space to the feature space. The resulting structure regularization term, named manifold loss, is incorporated into the loss function of the typical deep learning pipeline. The STM architecture is constructed to enforce the learned deep representation to satisfy the intrinsic manifold structure from the data, which results in robust features that suit various application scenarios, such as digit recognition, image classification and object tracking. Compared to state-of-the-art CNN architectures, we achieve better results on several commonly used benchmarks\footnote{The source code is available. this https URL }.
[ 0, 0, 0, 1, 0, 0 ]
Title: Large dimensional analysis of general margin based classification methods, Abstract: Margin-based classifiers have been popular in both machine learning and statistics for classification problems. Since a large number of classifiers are available, one natural question is which type of classifier should be used given a particular classification task. We aim to answer this question by investigating the asymptotic performance of a family of large-margin classifiers in situations where the data dimension $p$ and the sample size $n$ are both large. This family covers a broad range of classifiers including the support vector machine, distance weighted discrimination, penalized logistic regression, and the large-margin unified machine as special cases. The asymptotic results are described by a set of nonlinear equations and we observe a close match of them with Monte Carlo simulation on finite data samples. Our analytical studies shed new light on how to select the best classifier among various classification methods as well as on how to choose the optimal tuning parameters for a given method.
[ 1, 0, 0, 1, 0, 0 ]
Title: Covariance structure associated with an equality between two general ridge estimators, Abstract: In a general linear model, this paper derives a necessary and sufficient condition under which two general ridge estimators coincide with each other. The condition is given as a structure of the dispersion matrix of the error term. Since the class of estimators considered here contains linear unbiased estimators such as the ordinary least squares estimator and the best linear unbiased estimator, our result can be viewed as a generalization of the well-known theorems on the equality between these two estimators, which have been fully studied in the literature. Two related problems are also considered: equality between two residual sums of squares, and classification of dispersion matrices by a perturbation approach.
[ 0, 0, 1, 1, 0, 0 ]
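For reference, in the linear model $y = X\beta + \varepsilon$ the general ridge estimator with a nonnegative definite matrix $K$ is commonly written as (standard definition; notational details may differ from the paper)

$$\hat{\beta}(K) \;=\; (X^{\top}X + K)^{-1} X^{\top} y, \qquad K \succeq 0,$$

with $K = 0$ recovering ordinary least squares and $K = \lambda I$ the usual ridge estimator; the entry above asks when two such estimators coincide and answers it through the structure of the dispersion matrix of $\varepsilon$.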
Title: Three natural subgroups of the Brauer-Picard group of a Hopf algebra with applications, Abstract: In this article we construct three explicit natural subgroups of the Brauer-Picard group of the category of representations of a finite-dimensional Hopf algebra. In examples, the Brauer-Picard group decomposes into an ordered product of these subgroups, somewhat similar to a Bruhat decomposition. Our construction returns, for any Hopf algebra, three types of braided autoequivalences and correspondingly three families of invertible bimodule categories. This gives examples of so-called (2-)Morita equivalences and defects in topological field theories. We take a closer look at the case of quantum groups and Nichols algebras and give interesting applications. Finally, we briefly discuss the three families of group-theoretic extensions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Towards a Deep Reinforcement Learning Approach for Tower Line Wars, Abstract: There have been numerous breakthroughs with reinforcement learning in the recent years, perhaps most notably Deep Reinforcement Learning successfully playing and winning relatively advanced computer games. There is undoubtedly an anticipation that Deep Reinforcement Learning will play a major role when the first AI masters the complicated game play needed to beat a professional Real-Time Strategy game player. For this to be possible, there needs to be a game environment that targets and fosters AI research, and specifically Deep Reinforcement Learning. Some game environments already exist; however, these are either overly simplistic, such as Atari 2600, or complex, such as Starcraft II from Blizzard Entertainment. We propose a game environment in between Atari 2600 and Starcraft II, particularly targeting Deep Reinforcement Learning algorithm research. The environment is a variant of Tower Line Wars from Warcraft III, Blizzard Entertainment. Further, as a proof of concept that the environment can harbor Deep Reinforcement Learning algorithms, we propose and apply a Deep Q-Reinforcement architecture. The architecture simplifies the state space so that it is applicable to Q-learning, and in turn improves performance compared to current state-of-the-art methods. Our experiments show that the proposed architecture can learn to play the environment well, and score 33% better than standard Deep Q-learning, which in turn demonstrates the usefulness of the game environment.
[ 1, 0, 0, 0, 0, 0 ]
Title: UV Detector based on InAlN/GaN-on-Si HEMT Stack with Photo-to-Dark Current Ratio > 10$^7$, Abstract: We demonstrate an InAlN/GaN-on-Si HEMT based UV detector with a photo-to-dark current ratio > 10$^7$. A Ti/Al/Ni/Au metal stack was evaporated and rapid thermal annealed for Ohmic contacts to the 2D electron gas (2DEG) at the InAlN/GaN interface, while the channel + barrier was recess etched to a depth of 20 nm to pinch off the 2DEG between the Source-Drain pads. A spectral responsivity (SR) of 34 A/W at 367 nm was measured at 5 V, in conjunction with a very high photo-to-dark current ratio of > 10$^7$. The photo-to-dark current ratio at a fixed bias was found to decrease with increasing recess length of the PD. The fabricated devices were found to exhibit a UV-to-visible rejection ratio of > 10$^3$ with a low dark current < 32 pA at 5 V. Transient measurements showed rise and fall times in the range of 3-4 ms. The gain mechanism was investigated and carrier lifetimes were estimated, which matched well with those reported elsewhere.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Reinforcement Learning for Inquiry Dialog Policies with Logical Formula Embeddings, Abstract: This paper is the first attempt to learn the policy of an inquiry dialog system (IDS) by using deep reinforcement learning (DRL). Most IDS frameworks represent dialog states and dialog acts with logical formulae. In order to make learning inquiry dialog policies more effective, we introduce a logical formula embedding framework based on a recursive neural network. The results of experiments to evaluate the effect of 1) the DRL and 2) the logical formula embedding framework show that the combination of the two are as effective or even better than existing rule-based methods for inquiry dialog policies.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Lattice Model of Charge-Pattern-Dependent Polyampholyte Phase Separation, Abstract: In view of recent intense experimental and theoretical interests in the biophysics of liquid-liquid phase separation (LLPS) of intrinsically disordered proteins (IDPs), heteropolymer models with chain molecules configured as self-avoiding walks on the simple cubic lattice are constructed to study how phase behaviors depend on the sequence of monomers along the chains. To address pertinent general principles, we focus primarily on two fully charged 50-monomer sequences with significantly different charge patterns. Each monomer in our models occupies a single lattice site and all monomers interact via a screened pairwise Coulomb potential. Phase diagrams are obtained by extensive Monte Carlo sampling performed at multiple temperatures on ensembles of 300 chains in boxes of sizes ranging from $52\times 52\times 52$ to $246\times 246\times 246$ to simulate a large number of different systems with the overall polymer volume fraction $\phi$ in each system varying from $0.001$ to $0.1$. Phase separation in the model systems is characterized by the emergence of a large cluster connected by inter-monomer nearest-neighbor lattice contacts and by large fluctuations in local polymer density. The simulated critical temperatures, $T_{\rm cr}$, of phase separation for the two sequences differ significantly, whereby the sequence with a more "blocky" charge pattern exhibits a substantially higher propensity to phase separate. The trend is consistent with our sequence-specific random-phase-approximation (RPA) polymer theory, but the variation of the simulated $T_{\rm cr}$ with a previously proposed "sequence charge decoration" pattern parameter is milder than that predicted by RPA. Ramifications of our findings for the development of analytical theory and simulation protocols of IDP LLPS are discussed.
[ 0, 0, 0, 0, 1, 0 ]
Title: "Noiseless" thermal noise measurement of atomic force microscopy cantilevers, Abstract: When measuring quadratic values representative of random fluctuations, such as the thermal noise of Atomic Force Microscopy (AFM) cantilevers, the background measurement noise cannot be averaged to zero. We present a signal processing method that allows to get rid of this limitation using the ubiquitous optical beam deflection sensor of standard AFMs. We demonstrate a two orders of magnitude enhancement of the signal to noise ratio in our experiment, allowing the calibration of stiff cantilevers or easy identification of higher order modes from thermal noise measurements.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Forward Model at Purkinje Cell Synapses Facilitates Cerebellar Anticipatory Control, Abstract: How does our motor system solve the problem of anticipatory control in spite of a wide spectrum of response dynamics from different musculo-skeletal systems, transport delays as well as response latencies throughout the central nervous system? To a great extent, our highly-skilled motor responses are a result of a reactive feedback system, originating in the brain-stem and spinal cord, combined with a feed-forward anticipatory system, that is adaptively fine-tuned by sensory experience and originates in the cerebellum. Based on that interaction, we design the counterfactual predictive control (CFPC) architecture, an anticipatory adaptive motor control scheme in which a feed-forward module, based on the cerebellum, steers an error feedback controller with counterfactual error signals. Those are signals that trigger reactions as actual errors would, but that do not code for any current or forthcoming errors. In order to determine the optimal learning strategy, we derive a novel learning rule for the feed-forward module that involves an eligibility trace and operates at the synaptic level. In particular, our eligibility trace provides a mechanism beyond coincidence detection in that it convolves a history of prior synaptic inputs with error signals. In the context of cerebellar physiology, this solution implies that Purkinje cell synapses should generate eligibility traces using a forward model of the system being controlled. From an engineering perspective, CFPC provides a general-purpose anticipatory control architecture equipped with a learning rule that exploits the full dynamics of the closed-loop system.
[ 1, 0, 1, 0, 0, 0 ]
Title: Episode-Based Active Learning with Bayesian Neural Networks, Abstract: We investigate different strategies for active learning with Bayesian deep neural networks. We focus our analysis on scenarios where new, unlabeled data is obtained episodically, such as commonly encountered in mobile robotics applications. An evaluation of different strategies for acquisition, updating, and final training on the CIFAR-10 dataset shows that incremental network updates with final training on the accumulated acquisition set are essential for best performance, while limiting the amount of required human labeling labor.
[ 1, 0, 0, 1, 0, 0 ]
Title: Freeness and The Partial Transposes of Wishart Random Matrices, Abstract: We show that the partial transposes of complex Wishart random matrices are asymptotically free. We also investigate regimes where the number of blocks is fixed but the size of the blocks increases. This gives an example where the partial transpose produces freeness at the operator level. Finally, we investigate the case of real Wishart matrices.
[ 0, 0, 1, 0, 0, 0 ]
Title: Fixed points of polarity type operators, Abstract: A well-known result says that the Euclidean unit ball is the unique fixed point of the polarity operator. This result implies that if, in $\mathbb{R}^n$, the unit ball of some norm is equal to the unit ball of the dual norm, then the norm must be Euclidean. Motivated by these results and by relatively recent results in convex analysis and convex geometry regarding various properties of order reversing operators, we consider, in a real Hilbert space setting, a more general fixed point equation in which the polarity operator is composed with a continuous invertible linear operator. We show that if the linear operator is positive definite, then the considered equation is uniquely solvable by an ellipsoid. Otherwise, the equation can have several (possibly infinitely many) solutions or no solution at all. Our analysis yields a few by-products of possible independent interest, among them results related to coercive bilinear forms (essentially a quantitative convex analytic converse to the celebrated Lax-Milgram theorem from partial differential equations) and a characterization of real Hilbertian spaces.
[ 0, 0, 1, 0, 0, 0 ]
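For reference, the polarity operator on a real Hilbert space $\mathcal{H}$ referred to in the entry above is the standard one (the precise form of the composed fixed-point equation studied in the paper is not restated here):

$$K^{\circ} \;=\; \{\, y \in \mathcal{H} \;:\; \langle x, y\rangle \le 1 \ \ \text{for all } x \in K \,\},$$

and the classical result recalled in the abstract is that $K = K^{\circ}$ forces $K$ to be the Euclidean unit ball.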
Title: Implications of a wavelength dependent PSF for weak lensing measurements, Abstract: The convolution of galaxy images by the point-spread function (PSF) is the dominant source of bias for weak gravitational lensing studies, and an accurate estimate of the PSF is required to obtain unbiased shape measurements. The PSF estimate for a galaxy depends on its spectral energy distribution (SED), because the instrumental PSF is generally a function of the wavelength. In this paper we explore various approaches to determine the resulting `effective' PSF using broad-band data. Considering the Euclid mission as a reference, we find that standard SED template fitting methods result in biases that depend on source redshift, although this may be remedied if the algorithms can be optimised for this purpose. Using a machine-learning algorithm we show that, at least in principle, the required accuracy can be achieved with the current survey parameters. It is also possible to account for the correlations between photometric redshift and PSF estimates that arise from the use of the same photometry. We explore the impact of errors in photometric calibration, errors in the assumed wavelength dependence of the PSF model and limitations of the adopted template libraries. Our results indicate that the required accuracy for Euclid can be achieved using the data that are planned to determine photometric redshifts.
[ 0, 1, 0, 0, 0, 0 ]
Title: Controlling Sources of Inaccuracy in Stochastic Kriging, Abstract: Scientists and engineers commonly use simulation models to study real systems for which actual experimentation is costly, difficult, or impossible. Many simulations are stochastic in the sense that repeated runs with the same input configuration will result in different outputs. For expensive or time-consuming simulations, stochastic kriging \citep{ankenman} is commonly used to generate predictions for simulation model outputs subject to uncertainty due to both function approximation and stochastic variation. Here, we develop and justify a few guidelines for experimental design, which ensure accuracy of stochastic kriging emulators. We decompose error in stochastic kriging predictions into nominal, numeric, parameter estimation and parameter estimation numeric components and provide means to control each in terms of properties of the underlying experimental design. The design properties implied for each source of error are weakly conflicting and broad principles are proposed. In brief, space-filling properties "small fill distance" and "large separation distance" should balance with replication at distinct input configurations, with number of replications depending on the relative magnitudes of stochastic and process variability. Non-stationarity implies higher input density in more active regions, while regression functions imply a balance with traditional design properties. A few examples are presented to illustrate the results.
[ 0, 0, 1, 1, 0, 0 ]
Title: Implications of Decentralized Q-learning Resource Allocation in Wireless Networks, Abstract: Reinforcement Learning is gaining attention from the wireless networking community due to its potential to learn good-performing configurations only from the observed results. In this work we propose a stateless variation of Q-learning, which we apply to exploit spatial reuse in a wireless network. In particular, we allow networks to modify both their transmission power and the channel used solely based on the experienced throughput. We concentrate on a completely decentralized scenario in which no information about neighbouring nodes is available to the learners. Our results show that although the algorithm is able to find the best-performing actions to enhance aggregate throughput, there is high variability in the throughput experienced by the individual networks. We identify the cause of this variability as the adversarial setting of our setup, in which the most played actions provide intermittent good/poor performance depending on the neighbouring decisions. We also evaluate the effect of the intrinsic learning parameters of the algorithm on this variability.
[ 1, 0, 0, 0, 0, 0 ]
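The stateless Q-learning scheme summarized in the abstract above can be illustrated with a short sketch. This is not the authors' implementation: the action set, the reward model (`observed_throughput`) and the learning parameters below are hypothetical placeholders for the transmission-power/channel choices and measured throughput of a real decentralized wireless node.

```python
import random

# A minimal sketch of stateless Q-learning for decentralized channel/power
# selection. `observed_throughput` is a hypothetical stand-in for the
# throughput a real wireless node would measure after playing an action.

ACTIONS = [(power, channel) for power in (5, 15, 30) for channel in (1, 6, 11)]
ALPHA, EPSILON = 0.1, 0.1          # stateless variant: no bootstrapping term

q_values = {a: 0.0 for a in ACTIONS}

def observed_throughput(action):
    """Illustrative reward: noisy throughput depending on power and channel."""
    power, channel = action
    return random.gauss(mu=power / (1 + abs(channel - 6)), sigma=1.0)

def select_action():
    if random.random() < EPSILON:                  # explore
        return random.choice(ACTIONS)
    return max(q_values, key=q_values.get)         # exploit

for step in range(10_000):
    action = select_action()
    reward = observed_throughput(action)
    # Stateless update: an exponentially weighted average of the rewards
    # observed for this action, with no next-state value term.
    q_values[action] += ALPHA * (reward - q_values[action])

best = max(q_values, key=q_values.get)
print(f"preferred (power, channel): {best}, estimated throughput {q_values[best]:.2f}")
```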
Title: Exponential Ergodicity of the Bouncy Particle Sampler, Abstract: Non-reversible Markov chain Monte Carlo schemes based on piecewise deterministic Markov processes have been recently introduced in applied probability, automatic control, physics and statistics. Although these algorithms demonstrate experimentally good performance and are accordingly increasingly used in a wide range of applications, geometric ergodicity results for such schemes have only been established so far under very restrictive assumptions. We give here verifiable conditions on the target distribution under which the Bouncy Particle Sampler algorithm introduced in \cite{P_dW_12} is geometrically ergodic. This holds whenever the target satisfies a curvature condition and has tails decaying at least as fast as an exponential and at most as fast as a Gaussian distribution. This allows us to provide a central limit theorem for the associated ergodic averages. When the target has tails thinner than a Gaussian distribution, we propose an original modification of this scheme that is geometrically ergodic. For thick-tailed target distributions, such as $t$-distributions, we extend the idea pioneered in \cite{J_G_12} in a random walk Metropolis context. We apply a change of variable to obtain a transformed target satisfying the tail conditions for geometric ergodicity. By sampling the transformed target using the Bouncy Particle Sampler and mapping back the Markov process to the original parameterization, we obtain a geometrically ergodic algorithm.
[ 0, 0, 0, 1, 0, 0 ]
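For readers unfamiliar with the Bouncy Particle Sampler discussed above, the following minimal sketch shows its mechanics (straight-line motion, bounces against the gradient, velocity refreshments) on a standard Gaussian target. It only illustrates the basic algorithm, not the paper's ergodicity analysis or its proposed modifications; the refreshment rate, trajectory length and recording grid are arbitrary choices.

```python
import numpy as np

# Bare-bones Bouncy Particle Sampler targeting a standard Gaussian in d
# dimensions, with unit-norm velocities and Poisson refreshments.

rng = np.random.default_rng(4)
d, refresh_rate, T = 3, 1.0, 5000.0

def grad_U(x):                 # U(x) = ||x||^2 / 2 for a standard Gaussian target
    return x

def bounce_time(x, v):
    """First arrival of a Poisson process with rate max(0, <v, x + t v>)."""
    a = v @ x                  # rate(t) = max(0, a + t) since ||v|| = 1
    e = rng.exponential()
    if a >= 0:
        return np.sqrt(a * a + 2 * e) - a
    return -a + np.sqrt(2 * e)

def random_unit(dim):
    u = rng.normal(size=dim)
    return u / np.linalg.norm(u)

x, v, t = np.zeros(d), random_unit(d), 0.0
samples, next_grid, grid_dt = [], 0.0, 0.5

while t < T:
    tau_bounce = bounce_time(x, v)
    tau_refresh = rng.exponential(1.0 / refresh_rate)
    tau = min(tau_bounce, tau_refresh)
    # record the deterministic trajectory on a fixed time grid
    while next_grid < t + tau:
        samples.append(x + (next_grid - t) * v)
        next_grid += grid_dt
    x, t = x + tau * v, t + tau
    if tau_bounce <= tau_refresh:
        g = grad_U(x)
        v = v - 2 * (v @ g) / (g @ g) * g     # reflect against the gradient
    else:
        v = random_unit(d)                    # refreshment keeps the sampler ergodic

samples = np.array(samples)
print("sample mean ~ 0:", np.round(samples.mean(axis=0), 3))
print("sample var  ~ 1:", np.round(samples.var(axis=0), 3))
```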
Title: Analysis and X-ray tomography, Abstract: These are lecture notes for the course "MATS4300 Analysis and X-ray tomography" given at the University of Jyväskylä in Fall 2017. The course is a broad overview of various tools in analysis that can be used to study X-ray tomography. The focus is on tools and ideas, not so much on technical details and minimal assumptions. Only very basic functional analysis is assumed as background. Exercise problems are included.
[ 0, 0, 1, 0, 0, 0 ]
Title: Differential-operator representations of Weyl group and singular vectors, Abstract: Given a suitable ordering of the positive root system associated with a semisimple Lie algebra, there exists a natural correspondence between Verma modules and related polynomial algebras. With this, the Lie algebra action on a Verma module can be interpreted as a differential operator action on polynomials, and thus on the corresponding truncated formal power series. We prove that the space of truncated formal power series is a differential-operator representation of the Weyl group $W$. We also introduce a system of partial differential equations to investigate singular vectors in the Verma module. It is shown that the solution space of the system in the space of truncated formal power series is the span of $\{w(1)\ |\ w\in W\}$. Those $w(1)$ that are polynomials correspond to singular vectors in the Verma module. This elementary approach by partial differential equations also gives a new proof of the well-known BGG-Verma Theorem.
[ 0, 0, 1, 0, 0, 0 ]
Title: On Approximation Guarantees for Greedy Low Rank Optimization, Abstract: We provide new approximation guarantees for greedy low rank matrix estimation under standard assumptions of restricted strong convexity and smoothness. Our novel analysis also uncovers previously unknown connections between the low rank estimation and combinatorial optimization, so much so that our bounds are reminiscent of corresponding approximation bounds in submodular maximization. Additionally, we also provide statistical recovery guarantees. Finally, we present empirical comparison of greedy estimation with established baselines on two important real-world problems.
[ 1, 0, 0, 1, 0, 0 ]
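As an illustration of the kind of greedy low-rank estimation analysed above, the sketch below runs a simple rank-one pursuit on a synthetic matrix-completion-style problem: each iteration adds the leading singular pair of the negative gradient with an exact line search for the squared loss. This is a generic illustrative variant, not the authors' exact algorithm or assumptions.

```python
import numpy as np

# Greedy rank-1 pursuit for low-rank matrix estimation under squared loss on
# a set of observed entries. Problem sizes and sampling rate are arbitrary.

rng = np.random.default_rng(0)
n, r_true = 50, 3
M = rng.normal(size=(n, r_true)) @ rng.normal(size=(r_true, n))   # ground truth
mask = rng.random((n, n)) < 0.4                                    # observed entries

def gradient(X):
    return (X - M) * mask            # d/dX of 0.5 * sum_observed (X - M)^2

X = np.zeros((n, n))
for it in range(10):
    G = gradient(X)
    # top singular pair of -G gives the best rank-1 descent direction
    U, s, Vt = np.linalg.svd(-G)
    D = np.outer(U[:, 0], Vt[0, :])
    # exact line search for the quadratic loss along direction D
    step = -np.sum(G * D) / (np.sum((D * mask) ** 2) + 1e-12)
    X = X + step * D
    err = np.linalg.norm((X - M) * mask) / np.linalg.norm(M * mask)
    print(f"iter {it + 1}: relative observed error {err:.3f}")
```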
Title: Wasserstein Introspective Neural Networks, Abstract: We present Wasserstein introspective neural networks (WINN) that are both a generator and a discriminator within a single model. WINN provides a significant improvement over the recent introspective neural networks (INN) method by enhancing INN's generative modeling capability. WINN has three interesting properties: (1) A mathematical connection between the formulation of the INN algorithm and that of Wasserstein generative adversarial networks (WGAN) is made. (2) The explicit adoption of the Wasserstein distance into INN results in a large enhancement to INN, achieving compelling results even with a single classifier --- e.g., providing nearly a 20 times reduction in model size over INN for unsupervised generative modeling. (3) When applied to supervised classification, WINN also gives rise to improved robustness against adversarial examples in terms of the error reduction. In the experiments, we report encouraging results on unsupervised learning problems including texture, face, and object modeling, as well as a supervised classification task against adversarial attacks.
[ 1, 0, 0, 0, 0, 0 ]
Title: New skein invariants of links, Abstract: We introduce new skein invariants of links based on a procedure where we first apply the skein relation only to crossings of distinct components, so as to produce collections of unlinked knots. We then evaluate the resulting knots using a given invariant. A skein invariant can be computed on each link solely by the use of skein relations and a set of initial conditions. The new procedure, remarkably, leads to generalizations of the known skein invariants. We make skein invariants of classical links, $H[R]$, $K[Q]$ and $D[T]$, based on the invariants of knots, $R$, $Q$ and $T$, denoting the regular isotopy version of the Homflypt polynomial, the Kauffman polynomial and the Dubrovnik polynomial. We provide skein theoretic proofs of the well-definedness of these invariants. These invariants are also reformulated into summations of the generating invariants ($R$, $Q$, $T$) on sublinks of a given link $L$, obtained by partitioning $L$ into collections of sublinks.
[ 0, 0, 1, 0, 0, 0 ]
Title: Faddeev-Jackiw approach of the noncommutative spacetime Podolsky electromagnetic theory, Abstract: The interest in higher-derivative field theories has its origin mainly in their influence on the renormalization properties of physical models and their ability to remove ultraviolet divergences. The noncommutative Podolsky theory is a constrained system that cannot be quantized directly in the canonical way. In this work we have used the Faddeev-Jackiw method in order to obtain the Dirac brackets of the NC Podolsky theory.
[ 0, 1, 0, 0, 0, 0 ]
Title: Model-Based Control Using Koopman Operators, Abstract: This paper explores the application of Koopman operator theory to the control of robotic systems. The operator is introduced as a method to generate data-driven models that have utility for model-based control methods. We then motivate the use of the Koopman operator towards augmenting model-based control. Specifically, we illustrate how the operator can be used to obtain a linearizable data-driven model for an unknown dynamical process that is useful for model-based control synthesis. Simulated results show that with increasing complexity in the choice of the basis functions, a closed-loop controller is able to invert and stabilize cart- and VTOL-pendulum systems. Furthermore, the specification of the basis functions is shown to be of importance when generating a Koopman operator for specific robotic systems. Experimental results with the Sphero SPRK robot explore the utility of the Koopman operator in a reduced state representation setting where increased complexity in the basis functions improves open- and closed-loop controller performance in various terrains, including sand.
[ 1, 0, 0, 0, 0, 0 ]
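A common way to build the kind of data-driven Koopman model described above is extended dynamic mode decomposition: lift state snapshots through a dictionary of basis functions and fit a linear operator by least squares. The sketch below does exactly that for a hypothetical damped pendulum with a hand-picked dictionary; both the dynamics and the basis are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Extended-DMD-style construction of a data-driven Koopman model:
# lift snapshots, fit a linear operator K by least squares, predict.

rng = np.random.default_rng(1)

def dynamics(x, dt=0.05):
    """Hypothetical nonlinear system: a damped pendulum, Euler-integrated."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-np.sin(theta) - 0.1 * omega)])

def lift(x):
    """Basis functions (observables); richer bases give better linear models."""
    theta, omega = x
    return np.array([1.0, theta, omega, np.sin(theta), np.cos(theta),
                     theta * omega, omega ** 2])

# Collect snapshot pairs (x_k, x_{k+1}) from random initial conditions.
pairs = []
for _ in range(200):
    x = rng.uniform(-2, 2, size=2)
    for _ in range(20):
        x_next = dynamics(x)
        pairs.append((x, x_next))
        x = x_next

Phi_x = np.array([lift(x) for x, _ in pairs])        # lifted states
Phi_y = np.array([lift(y) for _, y in pairs])        # lifted successors
K = np.linalg.lstsq(Phi_x, Phi_y, rcond=None)[0].T   # Koopman matrix: phi(y) ~ K phi(x)

# Multi-step prediction in lifted space; the (theta, omega) components of the
# lifted vector are read back as the predicted state.
x = np.array([1.0, 0.0])
z = lift(x)
for _ in range(50):
    z = K @ z
x_true = x.copy()
for _ in range(50):
    x_true = dynamics(x_true)
print("Koopman prediction:", z[1:3], " true state:", x_true)
```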
Title: Turbulence Hierarchy in a Random Fibre Laser, Abstract: Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here, we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold, and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov's theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform.
[ 0, 1, 0, 0, 0, 0 ]
Title: Optimal Rates for Learning with Nyström Stochastic Gradient Methods, Abstract: In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches. Generalization error bounds for the studied algorithm are provided. Particularly, optimal learning rates are derived considering different possible choices of the step-size, the mini-batch size, the number of iterations/passes, and the subsampling level. In comparison with state-of-the-art algorithms such as the classic stochastic gradient methods and kernel ridge regression with Nyström, the studied algorithm has advantages on the computational complexity, while achieving the same optimal learning rates. Moreover, our results indicate that using mini-batches can reduce the total computational cost while achieving the same optimal statistical results.
[ 1, 0, 1, 1, 0, 0 ]
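The algorithm studied above combines stochastic gradients, mini-batches and Nyström subsampling. The sketch below shows the general recipe on synthetic one-dimensional data: build an m-dimensional Nyström feature map from random landmarks, then run mini-batch SGD on the squared loss. The step size, subsampling level and batch size are arbitrary illustrative choices rather than the optimal schedules derived in the paper.

```python
import numpy as np

# Mini-batch SGD on Nyström features for kernel regression (illustrative only).

rng = np.random.default_rng(2)

def rbf(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic 1-D regression data.
n = 2000
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)

# Nyström subsampling: m landmark points define an m-dimensional feature map.
m = 50
landmarks = X[rng.choice(n, size=m, replace=False)]
K_mm = rbf(landmarks, landmarks)
# Feature map phi(x) = K_mm^{-1/2} k_m(x), built via an eigendecomposition.
w_eig, V = np.linalg.eigh(K_mm + 1e-8 * np.eye(m))
whiten = V @ np.diag(1.0 / np.sqrt(np.maximum(w_eig, 1e-10))) @ V.T

def features(A):
    return rbf(A, landmarks) @ whiten

# Mini-batch SGD on the squared loss over the Nyström features.
theta = np.zeros(m)
step, batch = 0.2, 32
for _ in range(3000):
    idx = rng.choice(n, size=batch, replace=False)
    Phi = features(X[idx])
    grad = Phi.T @ (Phi @ theta - y[idx]) / batch
    theta -= step * grad

pred = features(X) @ theta
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```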
Title: Run Procrustes, Run! On the convergence of accelerated Procrustes Flow, Abstract: In this work, we present theoretical results on the convergence of non-convex accelerated gradient descent in matrix factorization models. The technique is applied to matrix sensing problems with squared loss, for the estimation of a rank $r$ optimal solution $X^\star \in \mathbb{R}^{n \times n}$. We show that the acceleration leads to linear convergence rate, even under non-convex settings where the variable $X$ is represented as $U U^\top$ for $U \in \mathbb{R}^{n \times r}$. Our result has the same dependence on the condition number of the objective --and the optimal solution-- as that of the recent results on non-accelerated algorithms. However, acceleration is observed in practice, both in synthetic examples and in two real applications: neuronal multi-unit activities recovery from single electrode recordings, and quantum state tomography on quantum computing simulators.
[ 0, 0, 0, 1, 0, 0 ]
Title: On the presentation of Hecke-Hopf algebras for non-simply-laced type, Abstract: Hecke-Hopf algebras were defined by A. Berenstein and D. Kazhdan. We give an explicit presentation of a Hecke-Hopf algebra when the parameter $m_{ij},$ associated to any two distinct vertices $i$ and $j$ in the presentation of a Coxeter group, equals $4,$ $5$ or $6$. As an application, we give a proof of a conjecture of Berenstein and Kazhdan when the Coxeter group is crystallographic and non-simply-laced. As another application, we show that another conjecture of Berenstein and Kazhdan holds when $m_{ij},$ associated to any two distinct vertices $i$ and $j,$ equals $4$ and that the conjecture does not hold when some $m_{ij}$ equals $6$ by giving a counterexample to it.
[ 0, 0, 1, 0, 0, 0 ]
Title: ILP-based Alleviation of Dense Meander Segments with Prioritized Shifting and Progressive Fixing in PCB Routing, Abstract: Length-matching is an important technique to balance delays of bus signals in high-performance PCB routing. Existing routers, however, may generate very dense meander segments. Signals propagating along these meander segments exhibit a speedup effect due to crosstalk between the segments of the same wire, thus leading to mismatch of arrival times even under the same physical wire length. In this paper, we present a post-processing method to enlarge the width and the distance of meander segments and hence distribute them more evenly on the board so that crosstalk can be reduced. In the proposed framework, we model the sharing of available routing areas after removing dense meander segments from the initial routing, as well as the generation of relaxed meander segments and their groups for wire length compensation. This model is transformed into an ILP problem and solved for a balanced distribution of wire patterns. In addition, we adjust the locations of long wire segments according to wire priorities to swap free spaces toward critical wires that need much length compensation. To reduce the problem space of the ILP model, we also introduce a progressive fixing technique so that wire patterns are grown gradually from the edge of the routing toward the center area. Experimental results show that the proposed method can expand meander segments significantly even under very tight area constraints, so that the speedup effect can be alleviated effectively in high-performance PCB designs.
[ 1, 0, 0, 0, 0, 0 ]
Title: Smoothing with Couplings of Conditional Particle Filters, Abstract: In state space models, smoothing refers to the task of estimating a latent stochastic process given noisy measurements related to the process. We propose an unbiased estimator of smoothing expectations. The lack-of-bias property has methodological benefits: independent estimators can be generated in parallel, and confidence intervals can be constructed from the central limit theorem to quantify the approximation error. To design unbiased estimators, we combine a generic debiasing technique for Markov chains with a Markov chain Monte Carlo algorithm for smoothing. The resulting procedure is widely applicable and we show in numerical experiments that the removal of the bias comes at a manageable increase in variance. We establish the validity of the proposed estimators under mild assumptions. Numerical experiments are provided on toy models, including a setting of highly-informative observations, and a realistic Lotka-Volterra model with an intractable transition density.
[ 0, 0, 0, 1, 0, 0 ]
Title: Formation of High Pressure Gradients at the Free Surface of a Liquid Dielectric in a Tangential Electric Field, Abstract: Nonlinear dynamics of the free surface of an ideal incompressible non-conducting fluid with high dielectric constant subjected to a strong horizontal electric field is simulated on the basis of the method of conformal transformations. It is demonstrated that the interaction of counter-propagating waves leads to the formation of regions with a steep wave front at the fluid surface; the angles of the boundary inclination tend to $\pi/2$, and the curvature of the surface increases dramatically. A significant concentration of the energy of the system occurs at these points. From the physical point of view, the appearance of these singularities corresponds to the formation of regions at the fluid surface where the pressure exerted by the electric field undergoes a discontinuity and the dynamical pressure increases by almost an order of magnitude.
[ 0, 1, 0, 0, 0, 0 ]
Title: Outliers and related problems, Abstract: We define outliers as a set of observations which contradicts the proposed mathematical (statistical) model and we discuss the frequently observed types of outliers. Further, we explore what changes in the model have to be made in order to avoid the occurrence of outliers. We observe that some variants of outliers lead to classical results in probability, such as the law of large numbers and the concept of heavy-tailed distributions. Key words: outlier; the law of large numbers; heavy tailed distributions; model rejection.
[ 0, 0, 1, 1, 0, 0 ]
Title: On Quadratic Convergence of DC Proximal Newton Algorithm for Nonconvex Sparse Learning in High Dimensions, Abstract: We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions. Our proposed algorithm integrates the proximal Newton algorithm with multi-stage convex relaxation based on the difference of convex (DC) programming, and enjoys both strong computational and statistical guarantees. Specifically, by leveraging a sophisticated characterization of sparse modeling structures/assumptions (i.e., local restricted strong convexity and Hessian smoothness), we prove that within each stage of convex relaxation, our proposed algorithm achieves (local) quadratic convergence, and eventually obtains a sparse approximate local optimum with optimal statistical properties after only a few convex relaxations. Numerical experiments are provided to support our theory.
[ 1, 0, 1, 1, 0, 0 ]
Title: Heuristic Optimization for Automated Distribution System Planning in Network Integration Studies, Abstract: Network integration studies try to assess the impact of future developments, such as the increase of Renewable Energy Sources or the introduction of Smart Grid Technologies, on large-scale network areas. Goals can be to support strategic alignment in the regulatory framework or to adapt the network planning principles of Distribution System Operators. This study outlines an approach for automated distribution system planning that can calculate network reconfiguration, reinforcement and extension plans in a fully automated fashion. This allows the estimation of the expected cost in massive probabilistic simulations of large numbers of real networks and constitutes a core component of a framework for large-scale network integration studies. Exemplary case study results are presented that were performed in cooperation with different major distribution system operators. The case studies cover the estimation of expected network reinforcement costs, technical and economic assessment of smart grid technologies and structural network optimisation.
[ 1, 0, 0, 0, 0, 0 ]
Title: The Sizes and Depletions of the Dust and Gas Cavities in the Transitional Disk J160421.7-213028, Abstract: We report ALMA Cycle 2 observations of 230 GHz (1.3 mm) dust continuum emission, and $^{12}$CO, $^{13}$CO, and C$^{18}$O J = 2-1 line emission, from the Upper Scorpius transitional disk [PZ99] J160421.7-213028, with an angular resolution of ~0".25 (35 AU). Armed with these data and existing H-band scattered light observations, we measure the size and depth of the disk's central cavity, and the sharpness of its outer edge, in three components: sub-$\mu$m-sized "small" dust traced by scattered light, millimeter-sized "big" dust traced by the millimeter continuum, and gas traced by line emission. Both dust populations feature a cavity of radius $\sim$70 AU that is depleted by factors of at least 1000 relative to the dust density just outside. The millimeter continuum data are well explained by a cavity with a sharp edge. Scattered light observations can be fitted with a cavity in small dust that has either a sharp edge at 60 AU, or an edge that transitions smoothly over an annular width of 10 AU near 60 AU. In gas, the data are consistent with a cavity that is smaller, about 15 AU in radius, and whose surface density at 15 AU is $10^{3\pm1}$ times smaller than the surface density at 70 AU; the gas density grades smoothly between these two radii. The CO isotopologue observations rule out a sharp drop in gas surface density at 30 AU or a double-drop model as found by previous modeling. Future observations are needed to assess the nature of these gas and dust cavities, e.g., whether they are opened by multiple as-yet-unseen planets or photoevaporation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Demystifying AlphaGo Zero as AlphaGo GAN, Abstract: The astonishing success of AlphaGo Zero\cite{Silver_AlphaGo} has invoked a worldwide discussion of the future of our human society with a mixed mood of hope, anxiousness, excitement and fear. We try to demystify AlphaGo Zero through a qualitative analysis to indicate that AlphaGo Zero can be understood as a specially structured GAN system which is expected to possess an inherently good convergence property. We thus deduce that the success of AlphaGo Zero may not be a sign of a new generation of AI.
[ 1, 0, 0, 1, 0, 0 ]
Title: Commissioning and Operation, Abstract: Chapter 16 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
[ 0, 1, 0, 0, 0, 0 ]
Title: Stable absorbing boundary conditions for molecular dynamics in general domains, Abstract: A new type of absorbing boundary condition for molecular dynamics simulations is presented. The exact boundary conditions for crystalline solids with harmonic approximation are expressed as a dynamic Dirichlet-to-Neumann (DtN) map. It connects the displacement of the atoms at the boundary to the traction on these atoms. The DtN map is valid for a domain with general geometry. To avoid evaluating the time convolution of the dynamic DtN map, we approximate the associated kernel function by rational functions in the Laplace domain. The parameters in the approximations are determined by interpolation. The explicit forms of the zeroth, first, and second order approximations are presented. The stability of the molecular dynamics model, supplemented with these absorbing boundary conditions, is established. Two numerical simulations are performed to demonstrate the effectiveness of the methods.
[ 0, 1, 0, 0, 0, 0 ]
Title: Algebraic operads up to homotopy, Abstract: This paper deals with the homotopy theory of differential graded operads. We endow the Koszul dual category of curved conilpotent cooperads, where the notion of quasi-isomorphism barely makes sense, with a model category structure Quillen equivalent to that of operads. This allows us to describe the homotopy properties of differential graded operads in a simpler and richer way, using obstruction methods.
[ 0, 0, 1, 0, 0, 0 ]
Title: Functional data analysis in the Banach space of continuous functions, Abstract: Functional data analysis is typically conducted within the $L^2$-Hilbert space framework. There is by now a fully developed statistical toolbox allowing for the principled application of the functional data machinery to real-world problems, often based on dimension reduction techniques such as functional principal component analysis. At the same time, there have recently been a number of publications that sidestep dimension reduction steps and focus on a fully functional $L^2$-methodology. This paper goes one step further and develops data analysis methodology for functional time series in the space of all continuous functions. The work is motivated by the fact that objects with rather different shapes may still have a small $L^2$-distance and are therefore identified as similar when using an $L^2$-metric. However, in applications it is often desirable to use metrics reflecting the visualization of the curves in the statistical analysis. The methodological contributions are focused on developing two-sample and change-point tests as well as confidence bands, as these procedures appear to be conducive to the proposed setting. Particular interest is put on relevant differences; that is, on not trying to test for exact equality, but rather for pre-specified deviations under the null hypothesis. The procedures are justified through large-sample theory. To ensure practicability, non-standard bootstrap procedures are developed and investigated addressing particular features that arise in the problem of testing relevant hypotheses. The finite sample properties are explored through a simulation study and an application to annual temperature profiles.
[ 0, 0, 1, 1, 0, 0 ]
Title: Cross-Correlation Redshift Calibration Without Spectroscopic Calibration Samples in DES Science Verification Data, Abstract: Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of $\Delta z \sim \pm 0.01$. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
[ 0, 1, 0, 0, 0, 0 ]
Title: Completely bounded bimodule maps and spectral synthesis, Abstract: We initiate the study of the completely bounded multipliers of the Haagerup tensor product $A(G)\otimes_{\rm h} A(G)$ of two copies of the Fourier algebra $A(G)$ of a locally compact group $G$. If $E$ is a closed subset of $G$ we let $E^{\sharp} = \{(s,t) : st\in E\}$ and show that if $E^{\sharp}$ is a set of spectral synthesis for $A(G)\otimes_{\rm h} A(G)$ then $E$ is a set of local spectral synthesis for $A(G)$. Conversely, we prove that if $E$ is a set of spectral synthesis for $A(G)$ and $G$ is a Moore group then $E^{\sharp}$ is a set of spectral synthesis for $A(G)\otimes_{\rm h} A(G)$. Using the natural identification of the space of all completely bounded weak* continuous $VN(G)'$-bimodule maps with the dual of $A(G)\otimes_{\rm h} A(G)$, we show that, in the case $G$ is weakly amenable, such a map leaves the multiplication algebra of $L^{\infty}(G)$ invariant if and only if its support is contained in the antidiagonal of $G$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, Abstract: Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose Distill-and-Compare, a model distillation and comparison approach to audit such models. To gain insight into black-box models, we treat them as teachers, training transparent student models to mimic the risk scores assigned by black-box models. We compare the student model trained with distillation to a second un-distilled transparent model trained on ground-truth outcomes, and use differences between the two models to gain insight into the black-box model. Our approach can be applied in a realistic setting, without probing the black-box model API. We demonstrate the approach on four public data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending Club. We also propose a statistical test to determine if a data set is missing key features used to train the black-box model. Our test finds that the ProPublica data is likely missing key feature(s) used in COMPAS.
[ 1, 0, 0, 1, 0, 0 ]
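The audit recipe described above can be mocked up in a few lines: query a black-box scorer, distill a transparent student from its scores, fit a second transparent model to the ground-truth outcomes, and compare the two. Everything below (the synthetic data, the random-forest "black box", the depth-4 trees as transparent models) is a stand-in chosen for illustration, not the models or data sets used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, DecisionTreeClassifier

# Distill-and-compare style audit on synthetic data (illustrative only).

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 5))
# Ground-truth outcome depends on features 0-2 only.
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2])))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": an opaque model whose risk scores we can query but not inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
risk_scores = black_box.predict_proba(X_train)[:, 1]

# Transparent student distilled from the black-box scores (regression on scores).
student = DecisionTreeRegressor(max_depth=4).fit(X_train, risk_scores)

# Transparent model trained directly on the ground-truth outcomes.
outcome_model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

# Compare the two transparent models: large disagreements point to places where
# the black box deviates from what the outcomes alone would suggest.
gap = student.predict(X_test) - outcome_model.predict_proba(X_test)[:, 1]
print("mean |score gap| between distilled and outcome models:", np.abs(gap).mean())
print("feature importances (distilled student):", np.round(student.feature_importances_, 3))
print("feature importances (outcome model):   ", np.round(outcome_model.feature_importances_, 3))
```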
Title: A GPU Accelerated Discontinuous Galerkin Incompressible Flow Solver, Abstract: We present a GPU-accelerated version of a high-order discontinuous Galerkin discretization of the unsteady incompressible Navier-Stokes equations. The equations are discretized in time using a semi-implicit scheme with explicit treatment of the nonlinear term and implicit treatment of the split Stokes operators. The pressure system is solved with a conjugate gradient method together with a fully GPU-accelerated multigrid preconditioner which is designed to minimize memory requirements and to increase overall performance. A semi-Lagrangian subcycling advection algorithm is used to shift the computational load per timestep away from the pressure Poisson solve by allowing larger timestep sizes in exchange for an increased number of advection steps. Numerical results confirm that we achieve the design order accuracy in time and space. We optimize the performance of the most time-consuming kernels by tuning the fine-grain parallelism, memory utilization, and maximizing bandwidth. To assess overall performance we present an empirically calibrated roofline performance model for a target GPU to explain the achieved efficiency. We demonstrate that, in most cases, the kernels used in the solver are close to their empirically predicted roofline performance.
[ 1, 0, 0, 0, 0, 0 ]
Title: Historic Emergence of Diversity in Painting: Heterogeneity in Chromatic Distance in Images and Characterization of Massive Painting Data Set, Abstract: Painting is an art form that has long functioned as a major channel for the creative expression and communication of humans, its evolution taking place under an interplay with the science, technology, and social environments of the times. Therefore, understanding the process based on comprehensive data could shed light on how humans acted and manifested creatively under changing conditions. Yet, there exist few systematic frameworks that characterize the process for painting, which would require robust statistical methods for defining painting characteristics and identifying human's creative developments, and data of high quality and sufficient quantity. Here we propose that the color contrast of a painting image signifying the heterogeneity in inter-pixel chromatic distance can be a useful representation of its style, integrating both the color and geometry. From the color contrasts of paintings from a large-scale, comprehensive archive of 179,853 high-quality images spanning several centuries we characterize the temporal evolutionary patterns of paintings, and present a deep study of an extraordinary expansion in creative diversity and individuality that came to define the modern era.
[ 1, 1, 0, 0, 0, 0 ]
Title: Cage Size and Jump Precursors in Glass-Forming Liquids: Experiment and Simulations, Abstract: Glassy dynamics is intermittent, as particles suddenly jump out of the cage formed by their neighbours, and heterogeneous, as these jumps are not uniformly distributed across the system. Relating these features of the dynamics to the diverse local environments explored by the particles is essential to rationalize the relaxation process. Here we investigate this issue characterizing the local environment of a particle with the amplitude of its short time vibrational motion, as determined by segmenting in cages and jumps the particle trajectories. Both simulations of supercooled liquids and experiments on colloidal suspensions show that particles in large cages are likely to jump after a small time-lag, and that, on average, the cage enlarges shortly before the particle jumps. At large time-lags, the cage has essentially a constant value, which is smaller for longer-lasting cages. Finally, we clarify how this coupling between cage size and duration controls the average behaviour and opens the way to a better understanding of the relaxation process in glass--forming liquids.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Floating Cylinder on An Unbounded Bath, Abstract: In this paper, we reconsider a circular cylinder horizontally floating on an unbounded reservoir in a gravitational field directed downwards, which was studied by Bhatnargar and Finn in 2006. We follow their approach but with some modifications. We establish the relation between the total energy relative to the undisturbed state and the total force. There is a monotone relation between the height of the centre and the wetting angle. We study the number of equilibria, the floating configurations and their stability for all parameter values. We find that the system admits at most two equilibrium points for arbitrary contact angle, the one with smaller wetting angle is stable and the one with larger wetting angle is unstable. The initial model has a limitation that the fluid interfaces may intersect. We show that the stable equilibrium point never lies in the intersection region, while the unstable equilibrium point may lie in the intersection region.
[ 0, 1, 1, 0, 0, 0 ]
Title: Switching between Limit Cycles in a Model of Running Using Exponentially Stabilizing Discrete Control Lyapunov Function, Abstract: This paper considers the problem of switching between two periodic motions, also known as limit cycles, to create agile running motions. For each limit cycle, we use a control Lyapunov function to estimate the region of attraction at the apex of the flight phase. We switch controllers at the apex, only if the current state of the robot is within the region of attraction of the subsequent limit cycle. If the intersection between two limit cycles is the null set, then we construct additional limit cycles till we are able to achieve sufficient overlap of the region of attraction between sequential limit cycles. Additionally, we impose an exponential convergence condition on the control Lyapunov function that allows us to rapidly transition between limit cycles. Using the approach we demonstrate switching between 5 limit cycles in about 5 steps with the speed changing from 2 m/s to 5 m/s.
[ 1, 0, 0, 0, 0, 0 ]
Title: Carrier Diffusion in Thin-Film CH3NH3PbI3 Perovskite Measured using Four-Wave Mixing, Abstract: We report the application of femtosecond four-wave mixing (FWM) to the study of carrier transport in solution-processed CH3NH3PbI3. The diffusion coefficient was extracted through direct detection of the lateral diffusion of carriers utilizing the transient grating technique, coupled with simultaneous measurement of decay kinetics exploiting the versatility of the boxcar excitation beam geometry. The observation of exponential decay of the transient grating versus interpulse delay indicates diffusive transport with negligible trapping within the first nanosecond following excitation. The in-plane transport geometry in our experiments enabled the diffusion length to be compared directly with the grain size, indicating that carriers move across multiple grain boundaries prior to recombination. Our experiments illustrate the broad utility of FWM spectroscopy for rapid characterization of macroscopic film transport properties.
[ 0, 1, 0, 0, 0, 0 ]
Title: On effective Birkhoff's ergodic theorem for computable actions of amenable groups, Abstract: We introduce computable actions of computable groups and prove the following versions of effective Birkhoff's ergodic theorem. Let $\Gamma$ be a computable amenable group, then there always exists a canonically computable tempered two-sided F{\o}lner sequence $(F_n)_{n \geq 1}$ in $\Gamma$. For a computable, measure-preserving, ergodic action of $\Gamma$ on a Cantor space $\{0,1\}^{\mathbb N}$ endowed with a computable probability measure $\mu$, it is shown that for every bounded lower semicomputable function $f$ on $\{0,1\}^{\mathbb N}$ and for every Martin-Löf random $\omega \in \{0,1\}^{\mathbb N}$ the equality \[ \lim\limits_{n \to \infty} \frac{1}{|F_n|} \sum\limits_{g \in F_n} f(g \cdot \omega) = \int\limits f d \mu \] holds, where the averages are taken with respect to a canonically computable tempered two-sided F{\o}lner sequence $(F_n)_{n \geq 1}$. We also prove the same identity for all lower semicomputable $f$'s in the special case when $\Gamma$ is a computable group of polynomial growth and $F_n:=\mathrm{B}(n)$ is the F{\o}lner sequence of balls around the neutral element of $\Gamma$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Minor-free graphs have light spanners, Abstract: We show that every $H$-minor-free graph has a light $(1+\epsilon)$-spanner, resolving an open problem of Grigni and Sissokho and proving a conjecture of Grigni and Hung. Our lightness bound is \[O\left(\frac{\sigma_H}{\epsilon^3}\log \frac{1}{\epsilon}\right)\] where $\sigma_H = |V(H)|\sqrt{\log |V(H)|}$ is the sparsity coefficient of $H$-minor-free graphs. That is, it has a practical dependency on the size of the minor $H$. Our result also implies that the polynomial time approximation scheme (PTAS) for the Travelling Salesperson Problem (TSP) in $H$-minor-free graphs by Demaine, Hajiaghayi and Kawarabayashi is an efficient PTAS whose running time is $2^{O_H\left(\frac{1}{\epsilon^4}\log \frac{1}{\epsilon}\right)}n^{O(1)}$ where $O_H$ ignores dependencies on the size of $H$. Our techniques significantly deviate from existing lines of research on spanners for $H$-minor-free graphs, but build upon the work of Chechik and Wulff-Nilsen for spanners of general graphs.
[ 1, 0, 0, 0, 0, 0 ]
Title: Adversarial Generation of Natural Language, Abstract: Generative Adversarial Networks (GANs) have gathered a lot of attention from the computer vision community, yielding impressive results for image generation. Advances in the adversarial generation of natural language from noise however are not commensurate with the progress made in generating images, and still lag far behind likelihood based methods. In this paper, we take a step towards generating natural language with a GAN objective alone. We introduce a simple baseline that addresses the discrete output space problem without relying on gradient estimators and show that it is able to achieve state-of-the-art results on a Chinese poem generation dataset. We present quantitative results on generating sentences from context-free and probabilistic context-free grammars, and qualitative language modeling results. A conditional version is also described that can generate sequences conditioned on sentence characteristics.
[ 1, 0, 0, 1, 0, 0 ]
Title: Likely Transiting Exocomets Detected by Kepler, Abstract: We present the first good evidence for exocomet transits of a host star in continuum light in data from the Kepler mission. The Kepler star in question, KIC 3542116, is of spectral type F2V and is quite bright at K_p = 10. The transits have a distinct asymmetric shape with a steeper ingress and slower egress that can be ascribed to objects with a trailing dust tail passing over the stellar disk. There are three deeper transits with depths of ~0.1% that last for about a day, and three that are several times more shallow and of shorter duration. The transits were found via an exhaustive visual search of the entire Kepler photometric data set, which we describe in some detail. We review the methods we use to validate the Kepler data showing the comet transits, and rule out instrumental artefacts as sources of the signals. We fit the transits with a simple dust-tail model, and find that a transverse comet speed of ~35-50 km/s and a minimum amount of dust present in the tail of ~10^16 g are required to explain the larger transits. For a dust replenishment time of ~10 days, and a comet lifetime of only ~300 days, this implies a total cometary mass of > 3 x 10^17 g, or about the mass of Halley's comet. We also discuss the number of comets and orbital geometry that would be necessary to explain the six transits detected over the four years of Kepler prime-field observations. Finally, we also report the discovery of a single comet-shaped transit in KIC 11084727 with very similar transit and host-star properties.
[ 0, 1, 0, 0, 0, 0 ]
Title: Canonical Truth, Abstract: We introduce and study a notion of canonical set theoretical truth, which means truth in a `canonical model', i.e. a transitive class model that is uniquely characterized by some $\in$-formula. We show that this notion of truth is `informative', i.e. there are statements that hold in all canonical models but do not follow from ZFC, such as Reitz' ground model axiom or the nonexistence of measurable cardinals. We also show that ZF+$V=L[\mathbb{R}]$+AD has no canonical models. On the other hand, we show that there are canonical models for `every real has sharp'. Moreover, we consider `theory-canonical' statements that only fix a transitive class model of ZFC up to elementary equivalence and show that it is consistent relative to large cardinals that there are theory-canonical models with measurable cardinals and that theory-canonicity is still informative in the sense explained above.
[ 0, 0, 1, 0, 0, 0 ]
Title: Effect of Composition Gradient on Magnetothermal Instability Modified by Shear and Rotation, Abstract: We model the intracluster medium as a weakly collisional plasma that is a binary mixture of the hydrogen and the helium ions, along with free electrons. When, owing to the helium sedimentation, the gradient of the mean molecular weight (or equivalently, composition or helium ions' concentration) of the plasma is not negligible, it can have appreciable influence on the stability criteria of the thermal convective instabilities, e.g., the heat-flux-buoyancy instability and the magnetothermal instability (MTI). These instabilities are consequences of the anisotropic heat conduction occurring preferentially along the magnetic field lines. In this paper, without ignoring the magnetic tension, we first present the mathematical criterion for the onset of composition gradient modified MTI. Subsequently, we relax the commonly adopted equilibrium state in which the plasma is at rest, and assume that the plasma is in a sheared state which may be due to differential rotation. We discuss how the concentration gradient affects the coupling between the Kelvin--Helmholtz instability and the MTI in rendering the plasma unstable or stable. We derive exact stability criterion by working with the sharp boundary case in which the physical variables---temperature, mean molecular weight, density, and magnetic field---change discontinuously from one constant value to another on crossing the boundary. Finally, we perform the linear stability analysis for the case of the differentially rotating plasma that is thermally and compositionally stratified as well. By assuming axisymmetric perturbations, we find the corresponding dispersion relation and the explicit mathematical expression determining the onset of the modified MTI.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Verified Algorithm Enumerating Event Structures, Abstract: An event structure is a mathematical abstraction modeling concepts such as causality, conflict and concurrency between events. While many other mathematical structures, including groups, topological spaces, and rings, abound with algorithms and formulas to generate, enumerate and count particular sets of their members, no algorithms or formulas are known to generate or count all the possible event structures over a finite set of events. We present an algorithm to generate such a family, along with a functional implementation verified using Isabelle/HOL. As byproducts, we obtain a verified enumeration of all possible preorders and partial orders. While the integer sequences counting preorders and partial orders are already listed on OEIS (On-line Encyclopedia of Integer Sequences), the one counting event structures is not. We therefore used our algorithm to submit a formally verified addition, which has been successfully reviewed and is now part of the OEIS.
[ 1, 0, 0, 0, 0, 0 ]
Title: Missing Data as Part of the Social Behavior in Real-World Financial Complex Systems, Abstract: Many real-world networks are known to exhibit properties that run counter to what theories of network creation and communication patterns prescribe. A common prerequisite in network analysis is that information on nodes and links be complete, because network topologies are extremely sensitive to missing information of this kind. Therefore, many real-world networks that fail to meet this criterion under random sampling may be discarded. In this paper we offer a framework for interpreting the missing observations in network data under the hypothesis that these observations are not missing at random. We demonstrate the methodology with a case study of a financial trade network, where agents' awareness of the data collection procedure carried out by a self-interested observer may result in strategic revealing or withholding of information. The non-random missingness has been overlooked despite the possibility of this being an important feature of the processes by which the network is generated. The analysis demonstrates that strategic information withholding may be a valid general phenomenon in complex systems. The evidence is sufficient to support the existence of an influential observer and to offer a compelling dynamic mechanism for the creation of the network.
[ 0, 0, 0, 1, 0, 0 ]
Title: Geometric Rescaling Algorithms for Submodular Function Minimization, Abstract: We present a new class of polynomial-time algorithms for submodular function minimization (SFM), as well as a unified framework to obtain strongly polynomial SFM algorithms. Our new algorithms are based on simple iterative methods for the minimum-norm problem, such as the conditional gradient and the Fujishige-Wolfe algorithms. We exhibit two techniques to turn simple iterative methods into polynomial-time algorithms. Firstly, we use the geometric rescaling technique, which has recently gained attention in linear programming. We adapt this technique to SFM and obtain a weakly polynomial bound $O((n^4\cdot EO + n^5)\log (n L))$. Secondly, we exhibit a general combinatorial black-box approach to turn any strongly polynomial $\varepsilon L$-approximate SFM oracle into a strongly polynomial exact SFM algorithm. This framework can be applied to a wide range of combinatorial and continuous algorithms, including pseudo-polynomial ones. In particular, we can obtain strongly polynomial algorithms by a repeated application of the conditional gradient or of the Fujishige-Wolfe algorithm. Combined with the geometric rescaling technique, the black-box approach provides a $O((n^5\cdot EO + n^6)\log^2 n)$ algorithm. Finally, we show that one of the techniques we develop in the paper can also be combined with the cutting-plane method of Lee, Sidford, and Wong, yielding a simplified variant of their $O(n^3 \log^2 n \cdot EO + n^4\log^{O(1)} n)$ algorithm.
[ 1, 0, 1, 0, 0, 0 ]
Title: Statistical PT-symmetric lasing in an optical fiber network, Abstract: PT-symmetry in optics is a condition whereby the real and imaginary parts of the refractive index across a photonic structure are deliberately balanced. This balance can lead to a host of novel optical phenomena, such as unidirectional invisibility, loss-induced lasing, single-mode lasing from multimode resonators, and non-reciprocal effects in conjunction with nonlinearities. Because PT-symmetry has been thought of as fragile, experimental realizations to date have been usually restricted to on-chip micro-devices. Here, we demonstrate that certain features of PT-symmetry are sufficiently robust to survive the statistical fluctuations associated with a macroscopic optical cavity. We construct optical-fiber-based coupled-cavities in excess of a kilometer in length (the free spectral range is less than 0.8 fm) with balanced gain and loss in two sub-cavities and examine the lasing dynamics. In such a macroscopic system, fluctuations can lead to a cavity-detuning exceeding the free spectral range. Nevertheless, by varying the gain-loss contrast, we observe that both the lasing threshold and the growth of the laser power follow the predicted behavior of a stable PT-symmetric structure. Furthermore, a statistical symmetry-breaking point is observed upon varying the cavity loss. These findings indicate that PT-symmetry is a more robust optical phenomenon than previously expected, and points to potential applications in optical fiber networks and fiber lasers.
[ 0, 1, 0, 0, 0, 0 ]
Title: Probing the dusty stellar populations of the Local Volume Galaxies with JWST/MIRI, Abstract: The Mid-Infrared Instrument (MIRI) for the {\em James Webb Space Telescope} (JWST) will revolutionize our understanding of infrared stellar populations in the Local Volume. Using the rich {\em Spitzer}-IRS spectroscopic data-set and spectral classifications from the Surveying the Agents of Galaxy Evolution (SAGE)-Spectroscopic survey of over a thousand objects in the Magellanic Clouds, the Grid of Red supergiant and Asymptotic giant branch star ModelS ({\sc grams}), and the grid of YSO models by Robitaille et al. (2006), we calculate the expected flux-densities and colors in the MIRI broadband filters for prominent infrared stellar populations. We use these fluxes to explore the {\em JWST}/MIRI colours and magnitudes for composite stellar population studies of Local Volume galaxies. MIRI colour classification schemes are presented; these diagrams provide a powerful means of identifying young stellar objects, evolved stars and extragalactic background galaxies in Local Volume galaxies with a high degree of confidence. Finally, we examine which filter combinations are best for selecting populations of sources based on their JWST colours.
[ 0, 1, 0, 0, 0, 0 ]
Title: Weak Versus Strong Disorder Superfluid-Bose Glass Transition in One Dimension, Abstract: Using large-scale simulations based on matrix product state and quantum Monte Carlo techniques, we study the superfluid to Bose glass-transition for one-dimensional attractive hard-core bosons at zero temperature, across the full regime from weak to strong disorder. As a function of interaction and disorder strength, we identify a Berezinskii-Kosterlitz-Thouless critical line with two different regimes. At small attraction where critical disorder is weak compared to the bandwidth, the critical Luttinger parameter $K_c$ takes its universal Giamarchi-Schulz value $K_{c}=3/2$. Conversely, a non-universal $K_c>3/2$ emerges for stronger attraction where weak-link physics is relevant. In this strong disorder regime, the transition is characterized by self-similar power-law distributed weak links with a continuously varying characteristic exponent $\alpha$.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fine-grained Event Learning of Human-Object Interaction with LSTM-CRF, Abstract: Event learning is one of the most important problems in AI. However, notwithstanding significant research efforts, it is still a very complex task, especially when the events involve the interaction of humans or agents with other objects, as it requires modeling human kinematics and object movements. This study proposes a methodology for learning complex human-object interaction (HOI) events, involving the recording, annotation and classification of event interactions. For annotation, we allow multiple interpretations of a motion capture by slicing over its temporal span; for classification, we use Long-Short Term Memory (LSTM) sequential models with a Conditional Random Field (CRF) for constraints on outputs. Using a setup involving captures of human-object interaction as three-dimensional inputs, we argue that this approach could be used for event types involving complex spatio-temporal dynamics.
[ 1, 0, 0, 0, 0, 0 ]
Title: Carrier driven coupling in ferromagnetic oxide heterostructures, Abstract: Transition metal oxides are well known for their complex magnetic and electrical properties. When brought together in heterostructure geometries, they show particular promise for spintronics and colossal magnetoresistance applications. In this letter, we propose a new mechanism for the coupling between layers of itinerant ferromagnetic materials in heterostructures. The coupling is mediated by charge carriers that strive to maximally delocalize through the heterostructure to gain kinetic energy. In doing so, they force a ferromagnetic or antiferromagnetic coupling between the constituent layers. To illustrate this, we focus on heterostructures composed of SrRuO$_3$ and La$_{1-x}$A$_{x}$MnO$_3$ (A=Ca/Sr). Our mechanism is consistent with antiferromagnetic alignment that is known to occur in multilayers of SrRuO$_3$-La$_{1-x}$A$_{x}$MnO$_3$. To support our assertion, we present a minimal Kondo-lattice model which reproduces the known magnetization properties of such multilayers. In addition, we discuss a quantum well model for heterostructures and argue that the spin-dependent density of states determines the nature of the coupling. As a smoking gun signature, we propose that bilayers with the same constituents will oscillate between ferromagnetic and antiferromagnetic coupling upon tuning the relative thicknesses of the layers.
[ 0, 1, 0, 0, 0, 0 ]