Dataset schema:
- title: string (length 7 to 239)
- abstract: string (length 7 to 2.76k)
- cs: int64 (0 to 1)
- phy: int64 (0 to 1)
- math: int64 (0 to 1)
- stat: int64 (0 to 1)
- quantitative biology: int64 (0 to 1)
- quantitative finance: int64 (0 to 1)
LPCNet: Improving Neural Speech Synthesis Through Linear Prediction
Neural speech synthesis models have recently demonstrated the ability to synthesize high quality speech for text-to-speech and compression applications. These new models often require powerful GPUs to achieve real-time operation, so being able to reduce their complexity would open the way for many new applications. We propose LPCNet, a WaveRNN variant that combines linear prediction with recurrent neural networks to significantly improve the efficiency of speech synthesis. We demonstrate that LPCNet can achieve significantly higher quality than WaveRNN for the same network size and that high quality LPCNet speech synthesis is achievable with a complexity under 3 GFLOPS. This makes it easier to deploy neural synthesis applications on lower-power devices, such as embedded systems and mobile phones.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
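The key idea in the LPCNet abstract, linear prediction, can be sketched in a few lines: each sample is predicted as a fixed linear combination of past samples (coefficients obtained from the Levinson-Durbin recursion on the signal's autocorrelation), leaving only a small residual for the neural network to model. A minimal pure-Python sketch, not the paper's implementation; the AR(2) test signal standing in for speech is invented:

```python
import random

def autocorr(x, order):
    """Autocorrelations r[0..order] of a signal."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]

def levinson_durbin(r):
    """LPC coefficients a[0..p] (with a[0] = 1) from autocorrelations r[0..p]."""
    p = len(r) - 1
    a, err = [1.0] + [0.0] * p, r[0]
    for i in range(1, p + 1):
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / err
        a = [a[j] + k * a[i - j] if 1 <= j < i else a[j] for j in range(p + 1)]
        a[i] = k
        err *= 1.0 - k * k
    return a

# synthetic AR(2) signal standing in for speech
rng = random.Random(0)
x = [0.0, 0.0]
for _ in range(2000):
    x.append(0.9 * x[-1] - 0.5 * x[-2] + rng.gauss(0.0, 1.0))
x = x[2:]

order = 2
a = levinson_durbin(autocorr(x, order))
# prediction residual: e[n] = x[n] + a[1]*x[n-1] + ... + a[p]*x[n-p]
residual = [x[n] + sum(a[j] * x[n - j] for j in range(1, order + 1))
            for n in range(order, len(x))]
sig_energy = sum(v * v for v in x[order:])
res_energy = sum(v * v for v in residual)
```

The residual carries much less energy than the raw signal, which is exactly the work the linear predictor removes from the neural model.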
Coherent anti-Stokes Raman Scattering Lidar Using Slow Light: A Theoretical Study
We theoretically investigate a scheme in which backward coherent anti-Stokes Raman scattering (CARS) is significantly enhanced by using slow light. Specifically, we reduce the group velocity of the Stokes excitation pulse by introducing a coupling laser that causes electromagnetically induced transparency (EIT). When the Stokes pulse has a spatial length shorter than the CARS wavelength, the backward CARS emission is significantly enhanced. We also investigate the possibility of applying this scheme as a CARS lidar with O2 or N2 as the EIT medium. We find that if the lidar system is equipped with a nanosecond laser of large pulse energy (>1 J) and a telescope with a large aperture (~10 m), a CARS lidar could be much more sensitive than a spontaneous Raman lidar.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tests based on characterizations, and their efficiencies: a survey
A survey of goodness-of-fit and symmetry tests based on the characterization properties of distributions is presented. This approach has become popular in recent years. In most cases the test statistics are functionals of $U$-empirical processes. The limiting distributions and large deviations of the new statistics under the null hypothesis are described. Their local Bahadur efficiency for various parametric alternatives is calculated and compared with one another as well as with diverse previously known tests. We also describe new directions of possible research in this domain.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Hyperprior on symmetric Dirichlet distribution
In this article we show how to place a vague hyperprior on the symmetric Dirichlet distribution and how to update its parameter by adaptive rejection sampling (ARS). Finally, we analyze this hyperprior in an over-fitted mixture model through synthetic experiments.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
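The parameter update mentioned above relies on rejection-style sampling from a log-concave conditional. Full adaptive rejection sampling is a lot of machinery for an illustration, so the sketch below uses plain rejection sampling against a hypothetical Gamma(2, 1) stand-in for the conditional of the concentration parameter; none of the specifics come from the paper:

```python
import math
import random

def target_pdf(x):
    # Gamma(2, 1) density: x * exp(-x), a log-concave stand-in for the
    # conditional distribution of the Dirichlet concentration parameter
    return x * math.exp(-x) if x > 0 else 0.0

def rejection_sample(n, seed=0):
    """Plain rejection sampling with an Exp(0.5) proposal envelope."""
    rng = random.Random(seed)
    m = 4.0 / math.e + 1e-6       # sup of target/proposal ratio, attained at x = 2
    out = []
    while len(out) < n:
        x = rng.expovariate(0.5)  # proposal draw
        accept_prob = target_pdf(x) / (m * 0.5 * math.exp(-0.5 * x))
        if rng.random() < accept_prob:
            out.append(x)
    return out

samples = rejection_sample(20000)
mean = sum(samples) / len(samples)
```

For Gamma(2, 1) the true mean is 2, so the sample mean should land close to it; ARS would build and refine the envelope automatically instead of requiring the hand-derived bound `m`.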
On the interpretability and computational reliability of frequency-domain Granger causality
This is a comment on the paper 'A study of problems encountered in Granger causality analysis from a neuroscience perspective'. We agree that interpretation issues of Granger causality in neuroscience exist (partially due to the historical unfortunate use of the name 'causality', as nicely described in previous literature). On the other hand, we think that the paper uses a formulation of Granger causality which is outdated (albeit still used), and in doing so it dismisses the measure based on a suboptimal use of it. Furthermore, since data from simulated systems are used, the pitfalls found with that formulation are intended to be general, and not limited to neuroscience. It would be a pity if this paper, even if written in good faith, became a wildcard against all possible applications of Granger causality, regardless of the hard work of colleagues aiming to seriously address the methodological and interpretation pitfalls. In order to provide a balanced view, we replicated their simulations using the updated state-space implementation, proposed some years ago, in which the pitfalls are mitigated or directly solved.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Artificial topological models based on a one-dimensional spin-dependent optical lattice
Topological matter is a popular topic in both condensed matter and cold atom research. In the past decades, a variety of models have been identified with fascinating topological features. Some, but not all, of the models can be found in materials. As a fully controllable system, cold atoms trapped in optical lattices provide an ideal platform to simulate and realize these topological models. Here we present a proposal for synthesizing topological models in cold atoms based on a one-dimensional (1D) spin-dependent optical lattice potential. In our system, features such as staggered tunneling, staggered Zeeman field, nearest-neighbor interaction, beyond-near-neighbor tunneling, etc. can be readily realized. They underlie the emergence of various topological phases. Our proposal can be realized with current technology and hence has potential applications in quantum simulation of topological matter.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Ramsey expansions of metrically homogeneous graphs
We discuss the Ramsey property, the existence of a stationary independence relation and the coherent extension property for partial isometries (coherent EPPA) for all classes of metrically homogeneous graphs from Cherlin's catalogue, which is conjectured to include all such structures. We show that, with the exception of tree-like graphs, all metric spaces in the catalogue have precompact Ramsey expansions (or lifts) with the expansion property. With two exceptions we can also characterise the existence of a stationary independence relation and the coherent EPPA. Our results can be seen as a new contribution to Nešetřil's classification programme of Ramsey classes and as empirical evidence of the recent convergence in techniques employed to establish the Ramsey property, the expansion (or lift or ordering) property, EPPA and the existence of a stationary independence relation. At the heart of our proof is a canonical way of completing edge-labelled graphs to metric spaces in Cherlin's classes. The existence of such a "completion algorithm" then allows us to apply several strong results in the areas that imply EPPA and respectively the Ramsey property. The main results have numerous corollaries on the automorphism groups of the Fraïssé limits of the classes, such as amenability, unique ergodicity, existence of universal minimal flows, ample generics, small index property, 21-Bergman property and Serre's property (FA).
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Rapidly star-forming galaxies adjacent to quasars at redshifts exceeding 6
The existence of massive ($10^{11}$ solar masses) elliptical galaxies by redshift z~4 (when the Universe was 1.5 billion years old) necessitates the presence of galaxies with star-formation rates exceeding 100 solar masses per year at z>6 (corresponding to an age of the Universe of less than 1 billion years). Surveys have discovered hundreds of galaxies at these early cosmic epochs, but their star-formation rates are more than an order of magnitude lower. The only known galaxies with very high star-formation rates at z>6 are, with only one exception, the host galaxies of quasars, but these galaxies also host accreting supermassive (more than $10^9$ solar masses) black holes, which probably affect the properties of the galaxies. Here we report observations of an emission line of singly ionized carbon ([CII] at a wavelength of 158 micrometres) in four galaxies at z>6 that are companions of quasars, with velocity offsets of less than 600 kilometers per second and linear offsets of less than 600 kiloparsecs. The discovery of these four galaxies was serendipitous; they are close to their companion quasars and appear bright in the far-infrared. On the basis of the [CII] measurements, we estimate star-formation rates in the companions of more than 100 solar masses per year. These sources are similar to the host galaxies of the quasars in [CII] brightness, linewidth and implied dynamical masses, but do not show evidence for accreting supermassive black holes. Similar systems have previously been found at lower redshift. We find such close companions in four out of twenty-five z>6 quasars surveyed, a fraction that needs to be accounted for in simulations. If they are representative of the bright end of the [CII] luminosity function, then they can account for the population of massive elliptical galaxies at z~4 in terms of cosmic space density.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Regular irreducible characters of a hyperspecial compact group
A parametrization of irreducible unitary representations associated with the regular adjoint orbits of a hyperspecial compact subgroup of a reductive group over a non-dyadic non-archimedean local field is presented. The parametrization is given by means of (a subset of) the character group of certain finite abelian groups arising from the reductive group. Our method is based upon Clifford's theory and Weil representations over finite fields. It works under an assumption of the triviality of certain Schur multipliers defined for an algebraic group over a finite field. The triviality assumption is well supported in the case of general linear groups and is highly probable in general.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces
In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Field and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors on different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
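The Gaussian-process side of the framework above can be illustrated with ordinary GP regression: the surface estimate at a query point is a kernel-weighted combination of observed contacts, and the posterior variance flags unexplored regions of interest. A self-contained 1-D sketch with an RBF kernel; the contact points are invented and the paper's visual-tactile fusion is not reproduced:

```python
import math

def rbf(a, b, ell=0.5):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-(a - b) ** 2 / (2.0 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at query point xq."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    kq = [rbf(xq, xi) for xi in xs]
    alpha = solve(K, ys)
    mean = sum(kq[i] * alpha[i] for i in range(n))
    beta = solve(K, kq)
    var = rbf(xq, xq) - sum(kq[i] * beta[i] for i in range(n))
    return mean, var

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]   # hypothetical surface contacts
m_at_obs, v_at_obs = gp_predict(xs, ys, 1.0)
m_far, v_far = gp_predict(xs, ys, 10.0)
```

Near an observation the variance collapses toward the noise level; far from all observations it returns to the prior variance, which is the signal an active-exploration loop uses to pick the next touch.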
Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis
Prosociality is fundamental to human social life, and, accordingly, much research has attempted to explain human prosocial behavior. Capraro and Rand (Judgment and Decision Making, 13, 99-111, 2018) recently provided experimental evidence that prosociality in anonymous, one-shot interactions (such as Prisoner's Dilemma and Dictator Game experiments) is not driven by outcome-based social preferences - as classically assumed - but by a generalized morality preference for "doing the right thing". Here we argue that the key experiments reported in Capraro and Rand (2018) comprise prominent methodological confounds and open questions that bear on influential psychological theory. Specifically, their design confounds: (i) preferences for efficiency with self-interest; and (ii) preferences for action with preferences for morality. Furthermore, their design fails to dissociate the preference to do "good" from the preference to avoid doing "bad". We thus designed and conducted a preregistered, refined and extended test of the morality preference hypothesis (N=801). Consistent with this hypothesis, our findings indicate that prosociality in the anonymous, one-shot Dictator Game is driven by preferences for doing the morally right thing. Inconsistent with influential psychological theory, however, our results suggest the preference to do "good" was as potent as the preference to avoid doing "bad" in this case.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Measuring Territorial Control in Civil Wars Using Hidden Markov Models: A Data Informatics-Based Approach
Territorial control is a key aspect shaping the dynamics of civil war. Despite its importance, we lack data on territorial control that are fine-grained enough to account for subnational spatio-temporal variation and that cover a large set of conflicts. To resolve this issue, we propose a theoretical model of the relationship between territorial control and tactical choice in civil war and outline how Hidden Markov Models (HMMs) are suitable to capture theoretical intuitions and estimate levels of territorial control. We discuss challenges of using HMMs in this application and mitigation strategies for future work.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
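The modeling intuition above can be made concrete with the forward algorithm, which evaluates the likelihood of an observed tactic sequence under a hypothesized hidden process of territorial control. The two-state numbers below are invented for illustration; they are not estimates from the paper:

```python
import itertools

def forward(obs, pi, A, B):
    """Forward algorithm: P(obs sequence) under an HMM.
    pi[s]: initial state probs, A[i][j]: transition, B[s][o]: emission."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)

def brute_force(obs, pi, A, B):
    """Same likelihood by summing over every hidden state path (check only)."""
    total = 0.0
    for path in itertools.product(range(len(pi)), repeat=len(obs)):
        pr = pi[path[0]] * B[path[0]][obs[0]]
        for t in range(1, len(obs)):
            pr *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
        total += pr
    return total

# hypothetical two-state model: state 0 = government control, 1 = rebel control
pi = [0.6, 0.4]
A = [[0.9, 0.1], [0.2, 0.8]]   # control changes hands rarely
B = [[0.8, 0.2], [0.3, 0.7]]   # tactic 1 (e.g. guerrilla attack) likelier under rebel control
obs = [0, 1, 1]
p = forward(obs, pi, A, B)
```

The forward recursion computes in O(T·S²) what the brute-force path sum computes in O(S^T), which is why HMMs scale to long conflict time series.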
Virtual quandle for links in lens spaces
We construct a virtual quandle for links in lens spaces $L(p,q)$, with $q=1$. This invariant has two valuable advantages over an ordinary fundamental quandle for links in lens spaces: the virtual quandle is an essential invariant and the presentation of the virtual quandle can be easily written from the band diagram of a link.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Adaptation to Easy Data in Prediction with Limited Advice
We derive an online learning algorithm with improved regret guarantees for `easy' loss sequences. We consider two types of `easiness': (a) stochastic loss sequences and (b) adversarial loss sequences with small effective range of the losses. While a number of algorithms have been proposed for exploiting small effective range in the full information setting, Gerchinovitz and Lattimore [2016] have shown the impossibility of regret scaling with the effective range of the losses in the bandit setting. We show that just one additional observation per round is sufficient to circumvent the impossibility result. The proposed Second Order Difference Adjustments (SODA) algorithm requires no prior knowledge of the effective range of the losses, $\varepsilon$, and achieves an $O(\varepsilon \sqrt{KT \ln K}) + \tilde{O}(\varepsilon K \sqrt[4]{T})$ expected regret guarantee, where $T$ is the time horizon and $K$ is the number of actions. The scaling with the effective loss range is achieved under significantly weaker assumptions than those made by Cesa-Bianchi and Shamir [2018] in an earlier attempt to circumvent the impossibility result. We also provide a regret lower bound of $\Omega(\varepsilon\sqrt{T K})$, which almost matches the upper bound. In addition, we show that in the stochastic setting SODA achieves an $O\left(\sum_{a:\Delta_a>0} \frac{K\varepsilon^2}{\Delta_a}\right)$ pseudo-regret bound that holds simultaneously with the adversarial regret guarantee. In other words, SODA is safe against an unrestricted oblivious adversary and provides improved regret guarantees for at least two different types of `easiness' simultaneously.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Why Bohr was (Mostly) Right
After a discussion of the Frauchiger-Renner argument that no 'single-world' interpretation of quantum mechanics can be self-consistent, I propose a 'Bohrian' alternative to many-worlds or QBism as the rational option.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Generalizing Distance Covariance to Measure and Test Multivariate Mutual Dependence
We propose three measures of mutual dependence between multiple random vectors. All the measures are zero if and only if the random vectors are mutually independent. The first measure generalizes distance covariance from pairwise dependence to mutual dependence, while the other two measures are sums of squared distance covariance. All the measures share similar properties and asymptotic distributions to distance covariance, and capture non-linear and non-monotone mutual dependence between the random vectors. Inspired by complete and incomplete V-statistics, we define the empirical measures and simplified empirical measures as a trade-off between the complexity and power when testing mutual independence. Implementation of the tests is demonstrated by both simulation results and real data examples.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
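The first measure above builds on (squared) distance covariance, whose empirical version is a simple double-centering computation on pairwise distance matrices. A small sketch of the standard V-statistic estimator (1-D inputs for brevity; this is the classical pairwise quantity, not the paper's generalized mutual-dependence measures):

```python
def dcov2(x, y):
    """Squared empirical distance covariance for two 1-D samples."""
    n = len(x)

    def centered(v):
        # pairwise distances, double-centered by row, column and grand means
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]

    A, B = centered(x), centered(y)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / (n * n)

x = [0.0, 1.0, 2.0, 3.0, 4.0]
linear = dcov2(x, [2.0 * v for v in x])   # strong (here linear) dependence
const = dcov2(x, [1.0] * len(x))          # y carries no information about x
```

A constant sample produces an all-zero centered matrix, so the statistic vanishes exactly, while any dependence (linear or not) shows up as a positive value; this "zero iff independent" behavior at the population level is what the paper generalizes to more than two vectors.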
Simultaneous Inference for High Dimensional Mean Vectors
Let $X_1, \ldots, X_n\in\mathbb{R}^p$ be i.i.d. random vectors. We aim to perform simultaneous inference for the mean vector $\mathbb{E} (X_i)$ with finite polynomial moments and an ultra high dimension. Our approach is based on the truncated sample mean vector. A Gaussian approximation result is derived for the latter under the very mild finite polynomial ($(2+\theta)$-th) moment condition and the dimension $p$ can be allowed to grow exponentially with the sample size $n$. Based on this result, we propose an innovative resampling method to construct simultaneous confidence intervals for mean vectors.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
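The construction above can be sketched as: truncate each coordinate, estimate the mean vector, and calibrate one simultaneous width from the distribution of the maximum coordinate-wise deviation. The resampling details below are a generic nonparametric bootstrap stand-in, not necessarily the paper's proposed scheme, and the data are synthetic:

```python
import random

def truncated_mean(col, tau):
    """Sample mean after truncating each observation to [-tau, tau]."""
    return sum(max(-tau, min(tau, v)) for v in col) / len(col)

def simultaneous_cis(data, tau, n_boot=500, level=0.95, seed=0):
    """Bootstrap the max deviation of truncated coordinate means to get
    simultaneous confidence intervals for the mean vector."""
    rng = random.Random(seed)
    n, p = len(data), len(data[0])
    centers = [truncated_mean([row[j] for row in data], tau) for j in range(p)]
    maxdev = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        boot = [truncated_mean([data[i][j] for i in idx], tau) for j in range(p)]
        maxdev.append(max(abs(boot[j] - centers[j]) for j in range(p)))
    maxdev.sort()
    q = maxdev[int(level * n_boot) - 1]   # bootstrap quantile of the max statistic
    return [(c - q, c + q) for c in centers]

rng = random.Random(1)
data = [[rng.gauss(0.0, 1.0) for _ in range(5)] for _ in range(200)]
cis = simultaneous_cis(data, tau=3.0)
```

Calibrating on the maximum deviation is what makes the coverage simultaneous across coordinates, rather than per-coordinate; the paper's contribution is making this valid when p grows exponentially in n under only a $(2+\theta)$-th moment condition.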
Joint Routing, Scheduling and Power Control Providing Hard Deadline in Wireless Multihop Networks
We consider optimal/efficient power allocation policies in a single/multihop wireless network in the presence of hard end-to-end deadline delay constraints on the transmitted packets. Such constraints can be useful for real-time voice and video. Power is consumed only in transmitting the data. We consider the case when the power used in transmission is a convex function of the data transmitted. We develop a computationally efficient online algorithm, which minimizes the average power for the single hop. We model this problem as a dynamic program (DP) and obtain the optimal solution. Next, we generalize it to the multiuser, multihop scenario in which there are multiple real-time streams with different hard deadline constraints.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
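The single-hop dynamic program can be sketched with a toy discrete version: choose how many bits to send in each slot before the hard deadline so that total transmission power, a convex function of the per-slot load, is minimized. The quadratic power function is a hypothetical example; convexity makes spreading the load evenly optimal:

```python
from functools import lru_cache

def min_power(bits, slots, power=lambda b: b * b):
    """DP: minimum total power to deliver `bits` within `slots` slots,
    where sending b bits in one slot costs the convex `power(b)`."""
    @lru_cache(maxsize=None)
    def best(b, t):
        if t == 1:
            return power(b)          # last slot before the deadline: send the rest
        return min(power(k) + best(b - k, t - 1) for k in range(b + 1))
    return best(bits, slots)

cost = min_power(12, 4)              # even split: 4 slots of 3 bits each
```

With `power(b) = b*b`, sending 12 bits over 4 slots costs 4 * 3² = 36, beating any uneven schedule (e.g. 4+4+4+0 costs 48); the DP recovers this without knowing the even-split rule.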
Activation of Microwave Fields in a Spin-Torque Nano-Oscillator by Neuronal Action Potentials
Action potentials are the basic unit of information in the nervous system and their reliable detection and decoding holds the key to understanding how the brain generates complex thought and behavior. Transducing these signals into microwave field oscillations can enable wireless sensors that report on brain activity through magnetic induction. In the present work we demonstrate that action potentials from crayfish lateral giant neuron can trigger microwave oscillations in spin-torque nano-oscillators. These nanoscale devices take as input small currents and convert them to microwave current oscillations that can wirelessly broadcast neuronal activity, opening up the possibility for compact neuro-sensors. We show that action potentials activate microwave oscillations in spin-torque nano-oscillators with an amplitude that follows the action potential signal, demonstrating that the device has both the sensitivity and temporal resolution to respond to action potentials from a single neuron. The activation of magnetic oscillations by action potentials, together with the small footprint and the high frequency tunability, makes these devices promising candidates for high resolution sensing of bioelectric signals from neural tissues. These device attributes may be useful for design of high-throughput bi-directional brain-machine interfaces.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Responses in Large-Scale Structure
We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and $n$ soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber $k$. Further, the responses up to $n$-th order completely describe the $(n+2)$-point function in the squeezed limit, i.e. with two hard and $n$ soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance ${\rm Cov}^{\rm NG}_{\ell = 0}(k_1,k_2)$, in the limit where one of the modes, say $k_2$, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for $k_2 \lesssim 0.06\ h/{\rm Mpc}$, and for any $k_1 \gtrsim 2 k_2$. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Graph isomorphisms in quasi-polynomial time
Let us be given two graphs $\Gamma_1$, $\Gamma_2$ of $n$ vertices. Are they isomorphic? If they are, the set of isomorphisms from $\Gamma_1$ to $\Gamma_2$ can be identified with a coset $H\cdot\pi$ inside the symmetric group on $n$ elements. How do we find $\pi$ and a set of generators of $H$? The challenge of giving an always efficient algorithm answering these questions remained open for a long time. Babai has recently shown how to solve these problems -- and others linked to them -- in quasi-polynomial time, i.e. in time $\exp\left(O(\log n)^{O(1)}\right)$. His strategy is based in part on the algorithm by Luks (1980/82), who solved the case of graphs of bounded degree.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
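The coset structure mentioned above is easy to see by brute force on small graphs: the set of isomorphisms from $\Gamma_1$ to $\Gamma_2$, when nonempty, equals Aut($\Gamma_1$) composed with any single isomorphism, so it has exactly |Aut($\Gamma_1$)| elements. A tiny exponential-time sketch (nothing like Babai's quasi-polynomial algorithm, which avoids this enumeration entirely):

```python
from itertools import permutations

def isomorphisms(edges1, edges2, n):
    """All bijections on {0..n-1} mapping edges1 onto edges2 (brute force)."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    out = []
    for perm in permutations(range(n)):
        if {frozenset((perm[a], perm[b])) for a, b in e1} == e2:
            out.append(perm)
    return out

g1 = [(0, 1), (1, 2), (2, 3)]   # path 0-1-2-3
g2 = [(1, 3), (3, 0), (0, 2)]   # the same path, relabelled: 1-3-0-2
isos = isomorphisms(g1, g2, 4)  # the coset H . pi
auts = isomorphisms(g1, g1, 4)  # the automorphism group H
```

For the 4-vertex path, H has two elements (identity and reversal), and the coset of isomorphisms to the relabelled copy has the same size, as the coset description predicts.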
Hardware-Aware Machine Learning: Modeling and Optimization
Recent breakthroughs in Deep Learning (DL) applications have made DL models a key component in almost every modern computing system. The increased popularity of DL applications deployed on a wide-spectrum of platforms have resulted in a plethora of design challenges related to the constraints introduced by the hardware itself. What is the latency or energy cost for an inference made by a Deep Neural Network (DNN)? Is it possible to predict this latency or energy consumption before a model is trained? If yes, how can machine learners take advantage of these models to design the hardware-optimal DNN for deployment? From lengthening battery life of mobile devices to reducing the runtime requirements of DL models executing in the cloud, the answers to these questions have drawn significant attention. One cannot optimize what isn't properly modeled. Therefore, it is important to understand the hardware efficiency of DL models during serving for making an inference, before even training the model. This key observation has motivated the use of predictive models to capture the hardware performance or energy efficiency of DL applications. Furthermore, DL practitioners are challenged with the task of designing the DNN model, i.e., of tuning the hyper-parameters of the DNN architecture, while optimizing for both accuracy of the DL model and its hardware efficiency. Therefore, state-of-the-art methodologies have proposed hardware-aware hyper-parameter optimization techniques. In this paper, we provide a comprehensive assessment of state-of-the-art work and selected results on the hardware-aware modeling and optimization for DL applications. We also highlight several open questions that are poised to give rise to novel hardware-aware designs in the next few years, as DL applications continue to significantly impact associated hardware systems and platforms.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On the unit distance problem
The Erd\H os unit distance conjecture in the plane says that the number of pairs of points from a point set of size $n$ separated by a fixed (Euclidean) distance is $\leq C_{\epsilon} n^{1+\epsilon}$ for any $\epsilon>0$. The best known bound is $Cn^{\frac{4}{3}}$. We show that if the set under consideration is well-distributed and the fixed distance is much smaller than the diameter of the set, then the exponent $\frac{4}{3}$ is significantly improved. Corresponding results are also established in higher dimensions. The results are obtained by solving the corresponding continuous problem and using a continuous-to-discrete conversion mechanism. The degree of sharpness of the results is tested using the known results on the distribution of lattice points in dilates of convex domains. We also introduce the following variant of the Erd\H os unit distance problem: how many pairs of points from a set of size $n$ are separated by an integer distance? We obtain some results in this direction and formulate a conjecture.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
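For intuition on the counting problem, integer grids already give linearly many unit distances: in an m x m grid (n = m² points) the axis-aligned neighbours alone contribute 2m(m-1), roughly 2n, unit-distance pairs, while the conjecture allows up to $n^{1+\epsilon}$. A brute-force count (illustration only, unrelated to the paper's methods):

```python
def unit_distance_pairs(points):
    """Count unordered pairs of points at Euclidean distance exactly 1
    (via integer squared distances, so the comparison is exact)."""
    pts = list(points)
    cnt = 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy == 1:
                cnt += 1
    return cnt

m = 10                                    # 10 x 10 grid, n = 100 points
grid = [(a, b) for a in range(m) for b in range(m)]
pairs = unit_distance_pairs(grid)         # horizontal + vertical neighbours
```

For m = 10 the count is 2 * 10 * 9 = 180: 90 horizontal neighbour pairs and 90 vertical ones, and no other integer pair realizes squared distance 1.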
Ordered states in the Kitaev-Heisenberg model: From 1D chains to 2D honeycomb
We study the ground state of the 1D Kitaev-Heisenberg (KH) model using the density-matrix renormalization group and Lanczos exact diagonalization methods. We obtain a rich ground-state phase diagram as a function of the ratio between Heisenberg ($J=\cos\phi$) and Kitaev ($K=\sin\phi$) interactions. Depending on the ratio, the system exhibits four long-range ordered states: ferromagnetic-$z$, ferromagnetic-$xy$, staggered-$xy$, Néel-$z$, and two liquid states: Tomonaga-Luttinger liquid and spiral-$xy$. The two Kitaev points $\phi=\frac{\pi}{2}$ and $\phi=\frac{3\pi}{2}$ are singular. The $\phi$-dependent phase diagram is similar to that for the 2D honeycomb-lattice KH model. Remarkably, all the ordered states of the honeycomb-lattice KH model can be interpreted in terms of the coupled KH chains. We also discuss the magnetic structure of the K-intercalated RuCl$_3$, a potential Kitaev material, in the framework of the 1D KH model. Furthermore, we demonstrate that the low-lying excitations of the 1D KH Hamiltonian can be explained within the combination of the known six-vertex model and spin-wave theory.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quadratic forms and Sobolev spaces of fractional order
We study quadratic functionals on $L^2(\mathbb{R}^d)$ that generate seminorms in the fractional Sobolev space $H^s(\mathbb{R}^d)$ for $0 < s < 1$. The functionals under consideration appear in the study of Markov jump processes and, independently, in recent research on the Boltzmann equation. The functional measures differentiability of a function $f$ in a similar way as the seminorm of $H^s(\mathbb{R}^d)$. The major difference is that differences $f(y) - f(x)$ are taken into account only if $y$ lies in some double cone with apex at $x$ or vice versa. The configuration of double cones is allowed to be inhomogeneous without any assumption on the spatial regularity. We prove that the resulting seminorm is comparable to the standard one of $H^s(\mathbb{R}^d)$. The proof follows from a similar result on discrete quadratic forms in $\mathbb{Z}^d$, which is our second main result. We establish a general scheme for discrete approximations of nonlocal quadratic forms. Applications to Markov jump processes are discussed.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
General Latent Feature Modeling for Data Exploration Tasks
This paper introduces a general Bayesian nonparametric latent feature model suitable for automatic exploratory analysis of heterogeneous datasets, where the attributes describing each object can be discrete, continuous or mixed variables. The proposed model presents several important properties. First, it accounts for heterogeneous data while it can be inferred in linear time with respect to the number of objects and attributes. Second, its Bayesian nonparametric nature allows us to automatically infer the model complexity from the data, i.e., the number of features necessary to capture the latent structure in the data. Third, the latent features in the model are binary-valued variables, easing the interpretability of the obtained latent features in data exploration tasks.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
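The Bayesian nonparametric ingredient that lets the number of binary features grow with the data is typically an Indian Buffet Process prior. A minimal draw from a generic IBP (not necessarily the paper's exact construction; alpha and sizes are invented):

```python
import math
import random

def poisson(lam, rng):
    """One Poisson(lam) draw by inversion sampling."""
    u = rng.random()
    p = math.exp(-lam)
    k, cum = 0, p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    return k

def sample_ibp(n_objects, alpha, seed=0):
    """Draw a binary object-by-feature matrix Z from the IBP prior."""
    rng = random.Random(seed)
    counts = []          # counts[k] = number of earlier objects with feature k
    Z = []
    for i in range(n_objects):
        # reuse each existing feature with probability counts[k] / (i + 1)
        row = [1 if rng.random() < c / (i + 1.0) else 0 for c in counts]
        for k, z in enumerate(row):
            counts[k] += z
        # introduce Poisson(alpha / (i + 1)) brand-new features
        k_new = poisson(alpha / (i + 1.0), rng)
        row += [1] * k_new
        counts += [1] * k_new
        for prev in Z:           # pad earlier rows with zeros for new features
            prev += [0] * k_new
        Z.append(row)
    return Z

Z = sample_ibp(10, alpha=2.0)
```

The "rich get richer" reuse rule concentrates mass on a few popular features while the Poisson term keeps the feature count unbounded a priori, which is exactly what allows the model complexity to be inferred from data.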
The ALMA View of the OMC1 Explosion in Orion
Most massive stars form in dense clusters where gravitational interactions with other stars may be common. The two nearest forming massive stars, the BN object and Source I, located behind the Orion Nebula, were ejected with velocities of $\sim$29 and $\sim$13 km s$^{-1}$ about 500 years ago by such interactions. This event generated an explosion in the gas. New ALMA observations show, in unprecedented detail, a roughly spherically symmetric distribution of over a hundred $^{12}$CO J=2$-$1 streamers with velocities extending from V$_{LSR}$ =$-$150 to +145 km s$^{-1}$. The streamer radial velocities increase (or decrease) linearly with projected distance from the explosion center, forming a `Hubble flow' confined to within 50 arcseconds of the explosion center. They point toward the high proper-motion, shock-excited H$_2$ and [Fe II] `fingertips' and lower-velocity CO in the H$_2$ wakes comprising Orion's `fingers'. In some directions, the H$_2$ `fingers' extend more than a factor of two farther from the ejection center than the CO streamers. Such deviations from spherical symmetry may be caused by ejecta running into dense gas or by the dynamics of the N-body interaction that ejected the stars and produced the explosion. This $\sim$10$^{48}$ erg event may have been powered by the release of gravitational potential energy associated with the formation of a compact binary or a protostellar merger. Orion may be the prototype for a new class of stellar explosion responsible for luminous infrared transients in nearby galaxies.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Neurology-as-a-Service for the Developing World
Electroencephalography (EEG) is an extensively-used and well-studied technique in the field of medical diagnostics and treatment for brain disorders, including epilepsy, migraines, and tumors. The analysis and interpretation of EEGs require physicians to have specialized training, which is not common even among most doctors in the developed world, let alone the developing world where physician shortages plague society. This problem can be addressed by teleEEG that uses remote EEG analysis by experts or by local computer processing of EEGs. However, both of these options are prohibitively expensive and the second option requires abundant computing resources and infrastructure, which is another concern in developing countries where there are resource constraints on capital and computing infrastructure. In this work, we present a cloud-based deep neural network approach to provide decision support for non-specialist physicians in EEG analysis and interpretation. Named `neurology-as-a-service,' the approach requires almost no manual intervention in feature engineering and in the selection of an optimal architecture and hyperparameters of the neural network. In this study, we deploy a pipeline that includes moving EEG data to the cloud and getting optimal models for various classification tasks. Our initial prototype has been tested only in developed world environments to-date, but our intention is to test it in developing world environments in future work. We demonstrate the performance of our proposed approach using the BCI2000 EEG MMI dataset, on which our service attains 63.4% accuracy for the task of classifying real vs. imaginary activity performed by the subject, which is significantly higher than what is obtained with a shallow approach such as support vector machines.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Estimation of Low-Rank Matrices via Approximate Message Passing
Consider the problem of estimating a low-rank symmetric matrix when its entries are perturbed by Gaussian noise, a setting that is known as the `spiked model' or `deformed Wigner matrix'. If the empirical distribution of the entries of the spikes is known, optimal estimators that exploit this knowledge can substantially outperform spectral approaches. Recent work characterizes the accuracy of Bayes-optimal estimators in the high-dimensional limit. In this paper we present a practical algorithm that can achieve Bayes-optimal accuracy above the spectral threshold. A bold conjecture from statistical physics posits that no polynomial-time algorithm achieves optimal error below the same threshold (unless the best estimator is trivial). Our approach uses Approximate Message Passing (AMP) in conjunction with a spectral initialization. AMP has proven successful in a variety of statistical problems, and is amenable to exact asymptotic analysis via state evolution. Unfortunately, state evolution is uninformative when the algorithm is initialized near an unstable fixed point, as often happens in matrix estimation. We develop a new analysis of AMP that allows for spectral initializations, and builds on a decoupling between the outlier eigenvectors and the bulk in the spiked random matrix model. Our main theorem is general and applies beyond matrix estimation. However, we use it to derive detailed predictions for the problem of estimating a rank-one matrix in noise. Special cases of this problem are closely related (via universality arguments) to the network community detection problem for two asymmetric communities. For general rank-one models, we show that AMP can be used to construct asymptotically valid confidence intervals. As a further illustration, we consider the example of a block-constant low-rank matrix with symmetric blocks, which we refer to as the `Gaussian Block Model'.
0
0
1
1
0
0
Simultaneous diagonalisation of the covariance and complementary covariance matrices in quaternion widely linear signal processing
Recent developments in quaternion-valued widely linear processing have established that the exploitation of complete second-order statistics requires consideration of both the standard covariance and the three complementary covariance matrices. Although such matrices have a tremendous amount of structure and their decomposition is a powerful tool in a variety of applications, the non-commutative nature of the quaternion product has been prohibitive to the development of quaternion uncorrelating transforms. To this end, we introduce novel techniques for a simultaneous decomposition of the covariance and complementary covariance matrices in the quaternion domain, whereby the quaternion version of the Takagi factorisation is explored to diagonalise symmetric quaternion-valued matrices. This gives new insights into the quaternion uncorrelating transform (QUT) and forms a basis for the proposed quaternion approximate uncorrelating transform (QAUT) which simultaneously diagonalises all four covariance matrices associated with improper quaternion signals. The effectiveness of the proposed uncorrelating transforms is validated by simulations on both synthetic and real-world quaternion-valued signals.
1
0
0
0
0
0
Arithmetic Siegel-Weil formula on $X_{0}(N)$
In this paper, we prove an arithmetic Siegel-Weil formula and the modularity of an arithmetic theta function on the modular curve $X_0(N)$ when $N$ is square-free. In the process, we also construct a generalized Delta function for $\Gamma_0(N)$ and prove an explicit Kronecker limit formula for Eisenstein series on $X_0(N)$.
0
0
1
0
0
0
Perturbation, Non-Gaussianity and Reheating in a GB-$α$-Attractor Model
Motivated by $\alpha$-attractor models, in this paper we consider Gauss-Bonnet inflation with an E-model type of potential. We take the Gauss-Bonnet coupling function to be the same as the E-model potential. In the small-$\alpha$ limit we obtain an attractor at $r=0$, as expected, and in the large-$\alpha$ limit we recover the Gauss-Bonnet model with potential and coupling function of the form $\phi^{2n}$. We study perturbations and non-Gaussianity in this setup and find constraints on the model's parameters by comparison with the Planck datasets. We also study the reheating epoch after inflation in this setup. For this purpose, we compute the number of e-folds and the temperature during the reheating epoch. These quantities depend on the model's parameters and on the effective equation of state of the dominant energy density in the reheating era. We find observational constraints on these parameters.
0
1
0
0
0
0
Experimental determination of the frequency and field dependence of Specific Loss Power in Magnetic Fluid Hyperthermia
Magnetic nanoparticles are promising systems for biomedical applications and in particular for Magnetic Fluid Hyperthermia, a therapy that utilizes the heat released by such systems to damage tumor cells. We present an experimental study of the physical properties that influence the capability of heat release, i.e. the Specific Loss Power (SLP), of three biocompatible ferrofluid samples having a magnetic core of maghemite with different core diameters d = 10.2, 14.6 and 19.7 nm. The SLP was measured as a function of the frequency f and intensity of the applied alternating magnetic field H, and it turned out to depend on the core diameter, as expected. The results allowed us to show experimentally that the physical mechanism responsible for the heating is size-dependent and to establish, at constant applied frequency, the phenomenological functional relationship SLP = cH^x, with 2<x<3 for all samples. The x-value depends on the sample size and on the field frequency/intensity, here chosen in the typical range of operating magnetic hyperthermia devices. For the smallest sample, the effective relaxation time Teff = 19.5 ns obtained from the SLP data is in agreement with the value estimated from magnetization data, thus confirming the validity of the Linear Response Theory model for this system at properly chosen field intensity and frequency.
0
1
0
0
0
0
Training wide residual networks for deployment using a single bit for each weight
For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed. Here, we report large improvements in error rates on multiple datasets, for deep convolutional neural networks deployed with 1-bit-per-weight. Using wide residual networks as our main baseline, our approach simplifies existing methods that binarize weights by applying the sign function in training; we apply scaling factors for each layer with constant unlearned values equal to the layer-specific standard deviations used for initialization. For CIFAR-10, CIFAR-100 and ImageNet, and models with 1-bit-per-weight requiring less than 10 MB of parameter memory, we achieve error rates of 3.9%, 18.5% and 26.0% / 8.5% (Top-1 / Top-5) respectively. We also considered MNIST, SVHN and ImageNet32, achieving 1-bit-per-weight test results of 0.27%, 1.9%, and 41.3% / 19.1% respectively. For CIFAR, our error rates halve previously reported values, and are within about 1% of our error-rates for the same network with full-precision weights. For networks that overfit, we also show significant improvements in error rate by not learning batch normalization scale and offset parameters. This applies to both full precision and 1-bit-per-weight networks. Using a warm-restart learning-rate schedule, we found that training for 1-bit-per-weight is just as fast as full-precision networks, with better accuracy than standard schedules, and achieved about 98%-99% of peak performance in just 62 training epochs for CIFAR-10/100. For full training code and trained models in MATLAB, Keras and PyTorch see this https URL .
0
0
0
1
0
0
IP Based Traffic Recovery: An Optimal Approach using SDN Application for Data Center Network
With the passage of time and the growth of Information Technology, network management has proved its significance and has become one of the most important and challenging tasks in today's era of information flow. Communication networks implement a high level of sophistication in managing and routing data through secure channels, to make data loss almost impossible. That is why many proposed solutions are currently implemented in a wide range of network-based applications such as social networks and finance applications. The objective of this research paper is to propose a very reliable method of data flow: choosing the best path for traffic using an SDN application. This is an IP-based method in which our SDN application provides the best possible path by filtering requests on the basis of their IP origin. We distinguish among sources and route the data flow along the path with the lowest traffic, thus minimizing the chances of data loss. A request to access our test application is generated from a host, and the request from each host is distinguished by our SDN application, which obtains the number of active users for all available servers and redirects the request to the server with the minimum traffic load. It also destroys the sessions of inactive users, thereby maintaining the most responsive channel for data flow.
1
0
0
0
0
0
Performance of Optimal Data Shaping Codes
Data shaping is a coding technique that has been proposed to increase the lifetime of flash memory devices. Several data shaping codes have been described in recent work, including endurance codes and direct shaping codes for structured data. In this paper, we study information-theoretic properties of a general class of data shaping codes and prove a separation theorem stating that optimal data shaping can be achieved by the concatenation of optimal lossless compression with optimal endurance coding. We also determine the expansion factor that minimizes the total wear cost. Finally, we analyze the performance of direct shaping codes and establish a condition for their optimality.
1
0
0
0
0
0
Finding influential nodes for integration in brain networks using optimal percolation theory
Global integration of information in the brain results from complex interactions of segregated brain networks. Identifying the most influential neuronal populations that efficiently bind these networks is a fundamental problem of systems neuroscience. Here we apply optimal percolation theory and pharmacogenetic interventions in-vivo to predict and subsequently target nodes that are essential for global integration of a memory network in rodents. The theory predicts that integration in the memory network is mediated by a set of low-degree nodes located in the nucleus accumbens. This result is confirmed with pharmacogenetic inactivation of the nucleus accumbens, which eliminates the formation of the memory network, while inactivations of other brain areas leave the network intact. Thus, optimal percolation theory predicts essential nodes in brain networks. This could be used to identify targets of interventions to modulate brain function.
0
0
0
0
1
0
Automatic Music Highlight Extraction using Convolutional Recurrent Attention Networks
Music highlights are valuable content for music services. Most existing methods have focused on low-level signal features. We propose a method for extracting highlights using high-level features from convolutional recurrent attention networks (CRAN). CRAN utilizes convolution and recurrent layers for sequential learning with an attention mechanism. The attention allows CRAN to capture significant snippets for distinguishing between genres, thus being used as a high-level feature. CRAN was evaluated on over 32,000 popular tracks in Korea for two months. Experimental results show our method outperforms three baseline methods through quantitative and qualitative evaluations. Also, we analyze the effects of attention and sequence information on performance.
1
0
0
1
0
0
On Oracle-Efficient PAC RL with Rich Observations
We study the computational tractability of PAC reinforcement learning with rich observations. We present new provably sample-efficient algorithms for environments with deterministic hidden state dynamics and stochastic rich observations. These methods operate in an oracle model of computation -- accessing policy and value function classes exclusively through standard optimization primitives -- and therefore represent computationally efficient alternatives to prior algorithms that require enumeration. With stochastic hidden state dynamics, we prove that the only known sample-efficient algorithm, OLIVE, cannot be implemented in the oracle model. We also present several examples that illustrate fundamental challenges of tractable PAC reinforcement learning in such general settings.
0
0
0
1
0
0
Noise induced transition in Josephson junction with second harmonic
We show a noise-induced transition in a Josephson junction with a fundamental as well as a second harmonic. A periodically modulated multiplicative colored noise can stabilize an unstable configuration in such a system. The stabilization of the unstable configuration is captured in the effective potential of the system, obtained by integrating out the high-frequency components of the noise. This is a classical approach to understanding the stability of an unstable configuration due to the presence of such stochasticity in the system, and our numerical analysis confirms the prediction of the analytical calculation.
0
1
0
0
0
0
Distributed Estimation of Principal Eigenspaces
Principal component analysis (PCA) is fundamental to statistical machine learning. It extracts latent principal factors that contribute to the most variation of the data. When data are stored across multiple machines, however, communication cost can prohibit the computation of PCA in a central location, and distributed algorithms for PCA are thus needed. This paper proposes and studies a distributed PCA algorithm: each node machine computes the top $K$ eigenvectors and transmits them to the central server; the central server then aggregates the information from all the node machines and conducts a PCA based on the aggregated information. We investigate the bias and variance for the resulting distributed estimator of the top $K$ eigenvectors. In particular, we show that for distributions with symmetric innovation, the empirical top eigenspaces are unbiased and hence the distributed PCA is "unbiased". We derive the rate of convergence for distributed PCA estimators, which depends explicitly on the effective rank of the covariance, the eigen-gap, and the number of machines. We show that when the number of machines is not unreasonably large, the distributed PCA performs as well as the whole-sample PCA, even without full access to the whole data. The theoretical results are verified by an extensive simulation study. We also extend our analysis to the heterogeneous case where the population covariance matrices are different across local machines but share similar top eigen-structures.
0
0
1
1
0
0
Invariant algebraic surfaces of the FitzHugh-Nagumo system
In this paper, we characterize all the irreducible Darboux polynomials and polynomial first integrals of the FitzHugh-Nagumo (F-N) system. The method of weight-homogeneous polynomials and characteristic curves is widely used to give a complete classification of the Darboux polynomials of a system. However, this method does not work for the F-N system. Here, by considering the Darboux polynomials of an assistant system associated to the F-N system, we classify the invariant algebraic surfaces of the F-N system. Our results show that there is no invariant algebraic surface of the F-N system in the biologically relevant parameter region.
0
0
1
0
0
0
Local and global boundary rigidity and the geodesic X-ray transform in the normal gauge
In this paper we analyze the local and global boundary rigidity problem for general Riemannian manifolds with boundary $(M,g)$ whose boundary is strictly convex. We show that the boundary distance function, i.e., $d_g|_{\partial M\times\partial M}$, known over suitable open sets of $\partial M$ determines $g$ in suitable corresponding open subsets of $M$, up to the natural diffeomorphism invariance of the problem. We also show that if there is a function on $M$ with suitable convexity properties relative to $g$ then $d_g|_{\partial M\times\partial M}$ determines $g$ globally in the sense that if $d_g|_{\partial M\times\partial M}=d_{\tilde g}|_{\partial M\times \partial M}$ then there is a diffeomorphism $\psi$ fixing $\partial M$ (pointwise) such that $g=\psi^*\tilde g$. This global assumption is satisfied, for instance, for the distance function from a given point if the manifold has no focal points (from that point). We also consider the lens rigidity problem. The lens relation measures the point of exit from $M$ and the direction of exit of geodesics issued from the boundary and the length of the geodesic. The lens rigidity problem is whether we can determine the metric up to isometry from the lens relation. We solve the lens rigidity problem under the same global assumption mentioned above. This shows, for instance, that manifolds with a strictly convex boundary and non-positive sectional curvature are lens rigid. The key tool is the analysis of the geodesic X-ray transform on 2-tensors, corresponding to a metric $g$, in the normal gauge, such as normal coordinates relative to a hypersurface, where one also needs to allow microlocal weights. This is handled by refining and extending our earlier results in the solenoidal gauge.
0
0
1
0
0
0
Adversarial Source Identification Game with Corrupted Training
We study a variant of the source identification game with training data in which part of the training data is corrupted by an attacker. In the addressed scenario, the defender aims at deciding whether a test sequence has been drawn according to a discrete memoryless source $X \sim P_X$, whose statistics are known to him through the observation of a training sequence generated by $X$. In order to undermine the correct decision under the alternative hypothesis that the test sequence has not been drawn from $X$, the attacker can modify a sequence produced by a source $Y \sim P_Y$ up to a certain distortion, and corrupt the training sequence either by adding some fake samples or by replacing some samples with fake ones. We derive the unique rationalizable equilibrium of the two versions of the game in the asymptotic regime and by assuming that the defender bases its decision by relying only on the first order statistics of the test and the training sequences. By mimicking Stein's lemma, we derive the best achievable performance for the defender when the first type error probability is required to tend to zero exponentially fast with an arbitrarily small, yet positive, error exponent. We then use such a result to analyze the ultimate distinguishability of any two sources as a function of the allowed distortion and the fraction of corrupted samples injected into the training sequence.
1
0
0
1
0
0
Toric actions and convexity in cosymplectic geometry
We prove a convexity theorem for Hamiltonian torus actions on compact cosymplectic manifolds. We show that compact toric cosymplectic manifolds are mapping tori of equivariant symplectomorphisms of toric symplectic manifolds.
0
0
1
0
0
0
Localization and Stationary Phase Approximation on Supermanifolds
Given an odd vector field $Q$ on a supermanifold $M$ and a $Q$-invariant density $\mu$ on $M$, under certain compactness conditions on $Q$, the value of the integral $\int_{M}\mu$ is determined by the value of $\mu$ on any neighborhood of the vanishing locus $N$ of $Q$. We present a formula for the integral in the case where $N$ is a subsupermanifold which is appropriately non-degenerate with respect to $Q$. In the process, we discuss the linear algebra necessary to express our result in a coordinate independent way. We also extend stationary phase approximation and the Morse-Bott Lemma to supermanifolds.
0
0
1
0
0
0
An Analytical Design Optimization Method for Electric Propulsion Systems of Multicopter UAVs with Desired Hovering Endurance
Multicopters are becoming increasingly important in both civil and military fields. Currently, most multicopter propulsion systems are designed by experience and trial-and-error experiments, which are costly and ineffective. This paper proposes a simple and practical method to help designers find the optimal propulsion system according to the given design requirements. First, the modeling methods for the four basic components of the propulsion system, including propellers, motors, electronic speed controllers, and batteries, are studied respectively. Second, the whole optimization design problem is simplified and decoupled into several sub-problems. By solving these sub-problems, the optimal parameters of each component can be obtained respectively. Finally, based on the obtained optimal component parameters, the optimal product for each component can be quickly located and determined from the corresponding database. Experiments and statistical analyses demonstrate the effectiveness of the proposed method.
1
0
0
0
0
0
A Reassessment of Absolute Energies of the X-ray L Lines of Lanthanide Metals
We introduce a new technique for determining x-ray fluorescence line energies and widths, and we present measurements made with this technique of 22 x-ray L lines from lanthanide-series elements. The technique uses arrays of transition-edge sensors, microcalorimeters with high energy-resolving power that simultaneously observe both calibrated x-ray standards and the x-ray emission lines under study. The uncertainty in absolute line energies is generally less than 0.4 eV in the energy range of 4.5 keV to 7.5 keV. Of the seventeen line energies of neodymium, samarium, and holmium, thirteen are found to be consistent with the available x-ray reference data measured after 1990; only two of the four lines for which reference data predate 1980, however, are consistent with our results. Five lines of terbium are measured with uncertainties that improve on those of existing data by factors of two or more. These results eliminate a significant discrepancy between measured and calculated x-ray line energies for the terbium Ll line (5.551 keV). The line widths are also measured, with uncertainties of 0.6 eV or less on the full-width at half-maximum in most cases. These measurements were made with an array of approximately one hundred superconducting x- ray microcalorimeters, each sensitive to an energy band from 1 keV to 8 keV. No energy-dispersive spectrometer has previously been used for absolute-energy estimation at this level of accuracy. Future spectrometers, with superior linearity and energy resolution, will allow us to improve on these results and expand the measurements to more elements and a wider range of line energies.
0
1
0
0
0
0
Limitations of Source-Filter Coupling In Phonation
The coupling of the vocal folds (source) and the vocal tract (filter) is one of the most critical factors in source-filter articulation theory. The traditional linear source-filter theory has been challenged by current research, which clearly shows the impact of acoustic loading on the dynamic behavior of vocal fold vibration as well as on the variations in the shape of the glottal flow pulses. This paper outlines the underlying mechanism of source-filter interactions and demonstrates the design and working principles of coupling for the various existing vocal fold and vocal tract biomechanical models. For our study, we have considered self-oscillating lumped-element models of the acoustic source and computational models of the vocal tract as articulators. To understand the limitations of source-filter interactions associated with each of those models, we compare them with respect to their mechanical design, acoustic and physiological characteristics, and aerodynamic simulation.
1
0
0
0
0
0
Correlation between clustering and degree in affiliation networks
We are interested in the probability that two randomly selected neighbors of a random vertex of degree (at least) $k$ are adjacent. We evaluate this probability for a power law random intersection graph, where each vertex is prescribed a collection of attributes and two vertices are adjacent whenever they share a common attribute. We show that the probability obeys the scaling $k^{-\delta}$ as $k\to+\infty$. Our results are mathematically rigorous. The parameter $0\le \delta\le 1$ is determined by the tail indices of power law random weights defining the links between vertices and attributes.
1
0
0
0
0
0
Automatic Disambiguation of French Discourse Connectives
Discourse connectives (e.g. however, because) are terms that can explicitly convey a discourse relation within a text. While discourse connectives have been shown to be an effective clue to automatically identify discourse relations, they are not always used to convey such relations, thus they should first be disambiguated between discourse usage and non-discourse usage. In this paper, we investigate the applicability of features proposed for the disambiguation of English discourse connectives for French. Our results with the French Discourse Treebank (FDTB) show that syntactic and lexical features developed for English texts are as effective for French and allow the disambiguation of French discourse connectives with an accuracy of 94.2%.
1
0
0
0
0
0
Predicting Financial Crime: Augmenting the Predictive Policing Arsenal
Financial crime is a rampant but hidden threat. In spite of this, predictive policing systems disproportionately target "street crime" rather than white collar crime. This paper presents the White Collar Crime Early Warning System (WCCEWS), a white collar crime predictive model that uses random forest classifiers to identify high risk zones for incidents of financial crime.
1
0
0
0
0
0
Polar codes with a stepped boundary
We consider explicit polar constructions of blocklength $n\rightarrow\infty$ for the two extreme cases of code rates $R\rightarrow1$ and $R\rightarrow0.$ For code rates $R\rightarrow1,$ we design codes with complexity order of $n\log n$ in code construction, encoding, and decoding. These codes achieve the vanishing output bit error rates on the binary symmetric channels with any transition error probability $p\rightarrow 0$ and perform this task with a substantially smaller redundancy $(1-R)n$ than do other known high-rate codes, such as BCH codes or Reed-Muller (RM). We then extend our design to the low-rate codes that achieve the vanishing output error rates with the same complexity order of $n\log n$ and an asymptotically optimal code rate $R\rightarrow0$ for the case of $p\rightarrow1/2.$
1
0
0
0
0
0
Resonant Scattering Characteristics of Homogeneous Dielectric Sphere
In the present article the classical problem of electromagnetic scattering by a single homogeneous sphere is revisited. The main focus is the study of the scattering behavior as a function of the material contrast and the size parameters for all electric and magnetic resonances of a dielectric sphere. Specifically, the Padé approximants are introduced and utilized as an alternative system expansion of the Mie coefficients. Low-order Padé approximants can give compact and physically insightful expressions for the scattering system and the enabled dynamic mechanisms. Higher-order approximants are used for predicting accurately the resonant pole spectrum. These results are summarized into general pole formulae, covering up to fifth-order magnetic and fourth-order electric resonances of a small dielectric sphere. Additionally, the connection between the radiative damping process and the resonant linewidth is investigated. The results obtained reveal the fundamental connection of the radiative damping mechanism with the maximum width occurring for each resonance. Finally, the suggested system ansatz is used for studying the resonant absorption maximum through a circuit-inspired perspective.
0
1
0
0
0
0
An efficient global optimization algorithm for maximizing the sum of two generalized Rayleigh quotients
Maximizing the sum of two generalized Rayleigh quotients (SRQ) can be reformulated as a one-dimensional optimization problem, where the function value evaluations are reduced to solving semi-definite programming (SDP) subproblems. In this paper, we first use the dual SDP subproblem to construct an explicit overestimation and then propose a branch-and-bound algorithm to globally solve (SRQ). Numerical results demonstrate that it is even more efficient than the recent SDP-based heuristic algorithm.
0
0
1
0
0
0
Location Dependent Dirichlet Processes
Dirichlet processes (DP) are widely applied in Bayesian nonparametric modeling. However, in their basic form they do not directly integrate dependency information among data arising from space and time. In this paper, we propose location dependent Dirichlet processes (LDDP) which incorporate nonparametric Gaussian processes in the DP modeling framework to model such dependencies. We develop the LDDP in the context of mixture modeling, and develop a mean field variational inference algorithm for this mixture model. The effectiveness of the proposed modeling framework is shown on an image segmentation task.
1
0
0
1
0
0
Poisson brackets symmetry from the pentagon-wheel cocycle in the graph complex
Kontsevich designed a scheme to generate infinitesimal symmetries $\dot{\mathcal{P}} = \mathcal{Q}(\mathcal{P})$ of Poisson brackets $\mathcal{P}$ on all affine manifolds $M^r$; every such deformation is encoded by oriented graphs on $n+2$ vertices and $2n$ edges. In particular, these symmetries can be obtained by orienting sums of non-oriented graphs $\gamma$ on $n$ vertices and $2n-2$ edges. The bi-vector flow $\dot{\mathcal{P}} = \text{Or}(\gamma)(\mathcal{P})$ preserves the space of Poisson structures if $\gamma$ is a cocycle with respect to the vertex-expanding differential in the graph complex. A class of such cocycles $\boldsymbol{\gamma}_{2\ell+1}$ is known to exist: marked by $\ell \in \mathbb{N}$, each of them contains a $(2\ell+1)$-gon wheel with a nonzero coefficient. At $\ell=1$ the tetrahedron $\boldsymbol{\gamma}_3$ itself is a cocycle; at $\ell=2$ the Kontsevich--Willwacher pentagon-wheel cocycle $\boldsymbol{\gamma}_5$ consists of two graphs. We reconstruct the symmetry $\mathcal{Q}_5(\mathcal{P}) = \text{Or}(\boldsymbol{\gamma}_5)(\mathcal{P})$ and verify that $\mathcal{Q}_5$ is a Poisson cocycle indeed: $[\![\mathcal{P},\mathcal{Q}_5(\mathcal{P})]\!]\doteq 0$ via $[\![\mathcal{P},\mathcal{P}]\!]=0$.
0
0
1
0
0
0
Shadows of characteristic cycles, Verma modules, and positivity of Chern-Schwartz-MacPherson classes of Schubert cells
Chern-Schwartz-MacPherson (CSM) classes generalize to singular and/or noncompact varieties the classical total homology Chern class of the tangent bundle of a smooth compact complex manifold. The theory of CSM classes has been extended to the equivariant setting by Ohmoto. We prove that for an arbitrary complex projective manifold $X$, the homogenized, torus equivariant CSM class of a constructible function $\varphi$ is the restriction of the characteristic cycle of $\varphi$ via the zero section of the cotangent bundle of $X$. This extends to the equivariant setting results of Ginzburg and Sabbah. We specialize $X$ to be a (generalized) flag manifold $G/B$. In this case CSM classes are determined by a Demazure-Lusztig (DL) operator. We prove a `Hecke orthogonality' of CSM classes, determined by the DL operator and its Poincaré adjoint. We further use the theory of holonomic $\mathcal{D}_X$-modules to show that the characteristic cycle of a Verma module, restricted to the zero section, gives the CSM class of the corresponding Schubert cell. Since the Verma characteristic cycles naturally identify with Maulik and Okounkov's stable envelopes, we establish an equivalence between CSM classes and stable envelopes; this reproves results of Rimányi and Varchenko. As an application, we obtain a Segre type formula for CSM classes. In the non-equivariant case this formula is manifestly positive, showing that the expansion in the Schubert basis of the CSM class of a Schubert cell is effective. This proves a previous conjecture by Aluffi and Mihalcea, and it extends previous positivity results by J. Huh in the Grassmann manifold case. Finally, we generalize all of this to partial flag manifolds $G/P$.
0
0
1
0
0
0
An Iterative Scheme for Leverage-based Approximate Aggregation
The current data explosion poses great challenges to approximate aggregation in terms of efficiency and accuracy. To address this problem, we propose a novel approach to calculating aggregation answers with high accuracy using only a small portion of the data. We introduce leverages to reflect individual differences among the samples from a statistical perspective. Two kinds of estimators, the leverage-based estimator and the sketch estimator (a "rough picture" of the aggregation answer), are in a constraint relation and are iteratively improved according to the actual conditions until their difference is below a threshold. Due to the iteration mechanism and the leverages, our approach achieves high accuracy. Moreover, some features, such as not requiring the sampled data to be recorded and being easy to extend to various execution modes (e.g., the online mode), make our approach well suited to big data. Experiments show that our approach performs extraordinarily well, and when compared with uniform sampling, it can achieve high-quality answers with only 1/3 of the sample size.
1
0
0
0
0
0
A Game of Martingales
We consider a two player dynamic game played over $T \leq \infty$ periods. In each period each player chooses any probability distribution with support on $[0,1]$ with a given mean, where the mean is the realized value of the draw from the previous period. The player with the highest realization in the period achieves a payoff of $1$, and the other player, $0$; and each player seeks to maximize the discounted sum of his per-period payoffs over the whole time horizon. We solve for the unique subgame perfect equilibrium of this game, and establish properties of the equilibrium strategies and payoffs in the limit. The solution and comparative statics thereof provide insight about intertemporal choice with status concerns. In particular we find that patient players take fewer risks.
0
0
0
0
0
1
Effects of tunnelling and asymmetry for system-bath models of electron transfer
We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.
0
1
0
0
0
0
Self-Modifying Morphology Experiments with DyRET: Dynamic Robot for Embodied Testing
If robots are to become ubiquitous, they will need to be able to adapt to complex and dynamic environments. Robots that can adapt their bodies while deployed might be flexible and robust enough to meet this challenge. Previous work on dynamic robot morphology has focused on simulation, combining simple modules, or switching between locomotion modes. Here, we present an alternative approach: a self-reconfigurable morphology that allows a single four-legged robot to actively adapt the length of its legs to different environments. We report the design of our robot, as well as the results of a study that verifies the performance impact of self-reconfiguration. This study compares three different control and morphology pairs under different levels of servo supply voltage in the lab. We also performed preliminary tests in different uncontrolled outdoor environments to see if changes to the external environment support our findings in the lab. Our results show better performance with an adaptable body, lending evidence to the value of self-reconfiguration for quadruped robots.
1
0
0
0
0
0
Dropout is a special case of the stochastic delta rule: faster and more accurate deep learning
Multi-layer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting. One approach to this overfitting and related problems (local minima, collinearity, feature discovery etc.) is called dropout (Srivastava, et al 2014, Baldi et al 2016). This method removes hidden units with a Bernoulli random variable with probability $p$ over updates. In this paper we show that dropout is a special case of a more general model published originally in 1990 called the stochastic delta rule (SDR, Hanson, 1990). SDR parameterizes each weight in the network as a random variable with mean $\mu_{w_{ij}}$ and standard deviation $\sigma_{w_{ij}}$. These random variables are sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus implementing weight noise injections that reflect a local history of prediction error and efficient model averaging. SDR therefore implements a local gradient-dependent simulated annealing per weight, converging to a Bayes-optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet show that SDR outperforms standard dropout in error by over 50% and in loss by over 50%. Furthermore, the SDR implementation converges on a solution much faster, reaching a training error of 5 in just 15 epochs with DenseNet-40 compared to standard DenseNet-40's 94 epochs.
0
0
0
1
0
0
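The weight-sampling mechanism described in the stochastic delta rule abstract above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the layer sizes, the fixed $\sigma$, and the seed are illustrative assumptions, and dropout is emulated directly with a Bernoulli mask rather than derived as a limiting choice of the noise distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each weight is a Gaussian random variable: a mean mu and std sigma per weight.
n_in, n_out = 4, 3                              # illustrative layer sizes
mu = rng.normal(0.0, 0.1, size=(n_in, n_out))
sigma = np.full((n_in, n_out), 0.05)

def sample_weights(mu, sigma, rng):
    """Draw one concrete weight matrix for a forward pass (SDR-style):
    a fresh sample per activation yields an implicit ensemble of networks."""
    return mu + sigma * rng.normal(size=mu.shape)

def dropout_weights(mu, p, rng):
    """Dropout as the Bernoulli special case: each weight is removed
    (zeroed) with probability p on this update."""
    mask = rng.random(mu.shape) >= p            # drop with probability p
    return mu * mask

w_sdr = sample_weights(mu, sigma, rng)
w_drop = dropout_weights(mu, p=0.5, rng=rng)
```

In SDR the pair $(\mu_{w_{ij}}, \sigma_{w_{ij}})$ would additionally be updated from the prediction error after each pass; the sketch only shows the sampling step that both methods share.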
Kernel Regression with Sparse Metric Learning
Kernel regression is a popular non-parametric fitting technique. It aims at learning a function which estimates the targets for test inputs as precisely as possible. Generally, the function value for a test input is estimated by a weighted average of the surrounding training examples. The weights are typically computed by a distance-based kernel function and thus strongly depend on the distances between examples. In this paper, we first review the latest developments in sparse metric learning and kernel regression. Then a novel kernel regression method involving sparse metric learning, called kernel regression with sparse metric learning (KR$\_$SML), is proposed. The sparse kernel regression model is established by enforcing a mixed $(2,1)$-norm regularization over the metric matrix. It learns a Mahalanobis distance metric by a gradient descent procedure, which can simultaneously conduct dimensionality reduction and lead to good prediction results. Our work is the first to combine kernel regression with sparse metric learning. To verify the effectiveness of the proposed method, it is evaluated on 19 data sets for regression. Furthermore, the new method is also applied to the practical problem of forecasting short-term traffic flows. Finally, we compare the proposed method with three other related kernel regression methods on all test data sets under two criteria. Experimental results show that the proposed method is much more competitive.
1
0
0
1
0
0
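The basic setup the kernel regression abstract above describes, a weighted average of training targets with weights from a Mahalanobis distance, can be sketched as follows. This is a hedged illustration only: the metric matrix `M` is fixed by hand rather than learned by the paper's $(2,1)$-norm-regularized gradient descent, and the bandwidth and toy data are assumptions.

```python
import numpy as np

def mahalanobis_sq(x, X, M):
    """Squared Mahalanobis distances (x - x_i)^T M (x - x_i) for all rows x_i of X."""
    diff = X - x
    return np.einsum('ij,jk,ik->i', diff, M, diff)

def kernel_regress(x, X, y, M, bandwidth=1.0):
    """Nadaraya-Watson estimate: Gaussian-kernel-weighted average of targets."""
    d2 = mahalanobis_sq(x, X, M)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return np.sum(w * y) / np.sum(w)

# Toy data: the target equals the first coordinate; the second coordinate is noise.
X = np.array([[0.0, 5.0], [1.0, -3.0], [2.0, 9.0]])
y = np.array([0.0, 1.0, 2.0])
# A sparse (rank-deficient) metric, as sparse metric learning might produce:
# the uninformative second feature is zeroed out of the distance entirely.
M = np.diag([1.0, 0.0])
pred = kernel_regress(np.array([1.0, 100.0]), X, y, M)   # -> 1.0 despite the wild 2nd feature
```

The design point the sketch makes concrete: a learned sparse metric performs dimensionality reduction implicitly, since zeroed directions simply stop contributing to the kernel weights.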
Learning MSO-definable hypotheses on strings
We study the classification problems over string data for hypotheses specified by formulas of monadic second-order logic MSO. The goal is to design learning algorithms that run in time polynomial in the size of the training set, independently of or at least sublinear in the size of the whole data set. We prove negative as well as positive results. If the data set is an unprocessed string to which our algorithms have local access, then learning in sublinear time is impossible even for hypotheses definable in a small fragment of first-order logic. If we allow for a linear time pre-processing of the string data to build an index data structure, then learning of MSO-definable hypotheses is possible in time polynomial in the size of the training set, independently of the size of the whole data set.
1
0
0
0
0
0
From semimetal to chiral Fulde-Ferrell superfluids
The recent realization of two-dimensional (2D) synthetic spin-orbit (SO) coupling opens a broad avenue to study novel topological states for ultracold atoms. Here, we propose a new scheme to realize an exotic chiral Fulde-Ferrell superfluid for ultracold fermions, together with a generic theory showing that the topology of superfluid pairing phases can be determined from the normal states. The main findings are twofold. First, a semimetal is driven by a new type of 2D SO coupling whose realization is even simpler than the recent experiment, and can be tuned into massive Dirac fermion phases with or without inversion symmetry. Without inversion symmetry the superfluid phase with nonzero pairing momentum is favored under an attractive interaction. Furthermore, we show a fundamental theorem that the topology of a 2D chiral superfluid can be uniquely determined from the unpaired normal states, with which the topological chiral Fulde-Ferrell superfluid with a broad topological region is predicted for the present system. This generic theorem is also useful for condensed matter physics and material science in the search for new topological superconductors.
0
1
0
0
0
0
TumorNet: Lung Nodule Characterization Using Multi-View Convolutional Neural Network with Gaussian Process
Characterization of lung nodules as benign or malignant is one of the most important tasks in lung cancer diagnosis, staging and treatment planning. Given the large variation in the appearance of nodules, there is a need for a fast and robust computer-aided system. In this work, we propose an end-to-end trainable multi-view deep Convolutional Neural Network (CNN) for nodule characterization. First, we use median intensity projection to obtain a 2D patch corresponding to each dimension. The three images are then concatenated to form a tensor, where the images serve as different channels of the input image. In order to increase the number of training samples, we perform data augmentation by scaling, rotating and adding noise to the input image. The trained network is used to extract features from the input image, followed by Gaussian Process (GP) regression to obtain the malignancy score. We also empirically establish the significance of different high-level nodule attributes, such as calcification and sphericity, for malignancy determination. These attributes are found to be complementary to the deep multi-view CNN features, and a significant improvement over other methods is obtained.
1
0
0
1
0
0
An apparatus architecture for femtosecond transmission electron microscopy
The motion of electrons in or near solids, liquids and gases can be tracked by forcing their ejection with attosecond x-ray pulses, derived from femtosecond lasers. The momentum of these emitted electrons carries the imprint of the electronic state. Aberration-corrected transmission electron microscopes have observed individual atoms, and have sufficient energy sensitivity to quantify atom bonding and electronic configurations. Recent developments in ultrafast electron microscopy and diffraction indicate that spatial and temporal information can be collected simultaneously. In the present work, we push the capability of femtosecond transmission electron microscopy (fs-TEM) towards that of the state of the art in ultrafast lasers and electron microscopes. This is anticipated to facilitate unprecedented elucidation of physical, chemical and biological structural dynamics on electronic time and length scales. The fs-TEM we study numerically employs a nanotip source, electrostatic acceleration to 70 keV, magnetic lens beam transport and focusing, a condenser-objective around the sample and a terahertz temporal compressor, including space charge effects during propagation. With electron emission equivalent to a 20 fs laser pulse, we find that a spatial resolution below 10 nm and a temporal resolution below 10 fs will be feasible for pulses comprised of on average 20 electrons. The influence of a transverse electric field at the sample is modelled, indicating that a field of 1 V/$\mu$m can be resolved.
0
1
0
0
0
0
HourGlass: Predictable Time-based Cache Coherence Protocol for Dual-Critical Multi-Core Systems
We present a hardware mechanism called HourGlass to predictably share data in a multi-core system where cores are explicitly designated as critical or non-critical. HourGlass is a time-based cache coherence protocol for dual-critical multi-core systems that ensures worst-case latency (WCL) bounds for memory requests originating from critical cores. Although HourGlass does not provide either WCL or bandwidth guarantees for memory requests from non-critical cores, it promotes the use of timers to improve its bandwidth utilization while still maintaining WCL bounds for critical cores. This encourages a trade-off between the WCL bounds for critical cores, and the improved memory bandwidth for non-critical cores via timer configurations. We evaluate HourGlass using gem5, and with multithreaded benchmark suites including SPLASH-2, and synthetic workloads. Our results show that the WCL for critical cores with HourGlass is always within the analytical WCL bounds, and provides a tighter WCL bound on critical cores compared to the state-of-the-art real-time cache coherence protocol. Further, we show that HourGlass enables a trade-off between provable WCL bounds for critical cores, and improved bandwidth utilization for non-critical cores. The average-case performance of HourGlass is comparable to the state-of-the-art real-time cache coherence protocol, and suffers a slowdown of 1.43x and 1.46x compared to the conventional MSI and MESI protocols.
1
0
0
0
0
0
Frictional Effects on RNA Folding: Speed Limit and Kramers Turnover
We investigated frictional effects on the folding rates of a human telomerase hairpin (hTR HP) and H-type pseudoknot from the Beet Western Yellow Virus (BWYV PK) using simulations of the Three Interaction Site (TIS) model for RNA. The heat capacity from TIS model simulations, calculated using temperature replica exchange simulations, reproduces nearly quantitatively the available experimental data for the hTR HP. The corresponding results for BWYV PK serve as predictions. We calculated the folding rates ($k_\mathrm{F}$) from more than 100 folding trajectories for each value of the solvent viscosity ($\eta$) at a fixed salt concentration of 200 mM. By using the theoretical estimate ($\propto\sqrt{N}$, where $N$ is the number of nucleotides) for the folding free energy barrier, $k_\mathrm{F}$ data for both the RNAs are quantitatively fit using one-dimensional Kramers' theory with two parameters specifying the curvatures in the unfolded basin and the barrier top. In the high-friction regime ($\eta\gtrsim10^{-5}\,\mathrm{Pa\cdot s}$), for both HP and PK, $k_\mathrm{F}$s decrease as $1/\eta$ whereas in the low friction regime, $k_\mathrm{F}$ values increase as $\eta$ increases, leading to a maximum folding rate at a moderate viscosity ($\sim10^{-6}\,\mathrm{Pa\cdot s}$), which is the Kramers turnover. From the fits, we find that the speed limit to RNA folding at water viscosity is between 1 and 4 $\mathrm{\mu s}$, which is in accord with our previous theoretical prediction as well as results from several single molecule experiments. Both the RNA constructs fold by parallel pathways. Surprisingly, we find that the flux through the pathways could be altered by changing solvent viscosity, a prediction that is more easily testable in RNA than in proteins.
0
0
0
0
1
0
Learning to Rank based on Analogical Reasoning
Object ranking or "learning to rank" is an important problem in the realm of preference learning. On the basis of training data in the form of a set of rankings of objects represented as feature vectors, the goal is to learn a ranking function that predicts a linear order of any new set of objects. In this paper, we propose a new approach to object ranking based on principles of analogical reasoning. More specifically, our inference pattern is formalized in terms of so-called analogical proportions and can be summarized as follows: Given objects $A,B,C,D$, if object $A$ is known to be preferred to $B$, and $C$ relates to $D$ as $A$ relates to $B$, then $C$ is (supposedly) preferred to $D$. Our method applies this pattern as a main building block and combines it with ideas and techniques from instance-based learning and rank aggregation. Based on first experimental results for data sets from various domains (sports, education, tourism, etc.), we conclude that our approach is highly competitive. It appears to be specifically interesting in situations in which the objects are coming from different subdomains, and which hence require a kind of knowledge transfer.
1
0
0
1
0
0
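The inference pattern in the analogical ranking abstract above ("if $A \succ B$, and $C$ relates to $D$ as $A$ relates to $B$, then $C \succ D$") can be made concrete with a toy sketch. The arithmetic-proportion reading of "relates as" ($a - b \approx c - d$ on feature vectors), the strength function, and the 0.5 cutoff are illustrative assumptions, not the paper's exact formalization of analogical proportions.

```python
import numpy as np

def analogy_strength(a, b, c, d):
    """Degree to which a : b :: c : d holds under arithmetic proportion,
    i.e. how close a - b is to c - d (1.0 = perfect proportion)."""
    return 1.0 / (1.0 + np.linalg.norm((a - b) - (c - d)))

def infer_preference(a, b, c, d, cutoff=0.5):
    """Given a known preference a > b, transfer it to the pair (c, d)
    when the analogical proportion a : b :: c : d holds strongly enough."""
    s = analogy_strength(a, b, c, d)
    return ('c > d', s) if s > cutoff else ('undecided', s)

# A is preferred to B, and C differs from D exactly as A differs from B.
a, b = np.array([2.0, 1.0]), np.array([1.0, 1.0])
c, d = np.array([5.0, 3.0]), np.array([4.0, 3.0])
verdict, score = infer_preference(a, b, c, d)   # perfect proportion, score 1.0
```

A full object ranker would aggregate many such pairwise transfers (the rank-aggregation step the abstract mentions); the sketch isolates the single analogical building block.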
Modeling Spatial Overdispersion with the Generalized Waring Process
Modeling spatial overdispersion requires point process models with finite-dimensional distributions that are overdispersed relative to the Poisson. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. Though processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to simultaneously satisfy the three required properties for fitting. Indeed, it has been conjectured by Diggle and Milne that no negative binomial model can satisfy all three properties. In light of this, we change perspective and construct a new process based on a different overdispersed count model, the Generalized Waring Distribution. While comparable to negative binomial processes in tractability and flexibility, the Generalized Waring process is shown to possess all required properties, and additionally to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
0
0
1
1
0
0
Adversarial examples for generative models
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
0
0
0
1
0
0
Sparse covariance matrix estimation in high-dimensional deconvolution
We study the estimation of the covariance matrix $\Sigma$ of a $p$-dimensional normal random vector based on $n$ independent observations corrupted by additive noise. Only a general nonparametric assumption is imposed on the distribution of the noise, without any sparsity constraint on its covariance matrix. In this high-dimensional semiparametric deconvolution problem, we propose spectral thresholding estimators that are adaptive to the sparsity of $\Sigma$. We establish an oracle inequality for these estimators under model misspecification and derive non-asymptotic minimax convergence rates that are shown to be logarithmic in $n/\log p$. We also discuss the estimation of low-rank matrices based on indirect observations as well as the generalization to elliptical distributions. The finite sample performance of the thresholding estimators is illustrated in a numerical example.
0
0
1
0
0
0
Dynamic controllers for column synchronization of rotation matrices: a QR-factorization approach
In the multi-agent systems setting, this paper addresses continuous-time distributed synchronization of columns of rotation matrices. More precisely, k specific columns shall be synchronized and only the corresponding k columns of the relative rotations between the agents are assumed to be available for the control design. When one specific column is considered, the problem is equivalent to synchronization on the (d-1)-dimensional unit sphere and when all the columns are considered, the problem is equivalent to synchronization on SO(d). We design dynamic control laws for these synchronization problems. The control laws are based on the introduction of auxiliary variables in combination with a QR-factorization approach. The benefit of this QR-factorization approach is that we can decouple the dynamics for the k columns from the remaining d-k ones. Under the control scheme, the closed loop system achieves almost global convergence to synchronization for quasi-strong interaction graph topologies.
1
0
1
0
0
0
The ELEGANT NMR Spectrometer
We present compact and portable in-situ NMR spectrometers that can be dipped into the liquid to be measured and are easily maintained, with affordable coil constructions and electronics, together with an apparatus to recover depleted magnets. These spectrometers provide a new real-time processing method for NMR spectrum acquisition that remains stable despite magnetic field fluctuations.
0
1
0
0
0
0
What Would a Graph Look Like in This Layout? A Machine Learning Approach to Large Graph Visualization
Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
1
0
0
1
0
0
A sure independence screening procedure for ultra-high dimensional partially linear additive models
We introduce a two-step procedure, in the context of ultra-high dimensional additive models, which aims to reduce the size of the covariate vector and distinguish linear and nonlinear effects among nonzero components. Our proposed screening procedure, in the first step, is constructed based on the concept of the cumulative distribution function and conditional expectation of the response in the framework of marginal correlation. B-splines and empirical distribution functions are used to estimate these two measures. The sure screening property of this procedure is also established. In the second step, a double penalization based procedure is applied to identify nonzero and linear components simultaneously. The performance of the designed method is examined on several test functions to show its capabilities against competitor methods when the error distribution is varied. Simulation studies imply that the proposed screening procedure can be applied to ultra-high dimensional data and detects the influential covariates well. It also demonstrates superiority in comparison with existing methods. The method is further applied to identify the most influential genes for overexpression of a G protein-coupled receptor in mice.
0
0
1
1
0
0
Unbiased Multi-index Monte Carlo
We introduce a new class of Monte Carlo based approximations of expectations of random variables such that their laws are only available via certain discretizations. Sampling from the discretized versions of these laws can typically introduce a bias. In this paper, we show how to remove that bias, by introducing a new version of multi-index Monte Carlo (MIMC) that has the added advantage of reducing the computational effort, relative to i.i.d. sampling from the most precise discretization, for a given level of error. We cover extensions of results regarding variance and optimality criteria for the new approach. We apply the methodology to the problem of computing an unbiased mollified version of the solution of a partial differential equation with random coefficients. A second application concerns the Bayesian inference (the smoothing problem) of an infinite dimensional signal modelled by the solution of a stochastic partial differential equation that is observed on a discrete space grid and at discrete times. Both applications are complemented by numerical simulations.
0
0
0
1
0
0
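The debiasing idea behind the unbiased Monte Carlo abstract above can be illustrated with a single-index, Rhee-Glynn-style single-term estimator: randomize over discretization levels so that the weighted telescoping increments have exactly the limiting expectation. This is a toy sketch under stated assumptions: the `biased_estimate` stand-in (a scalar converging to 1.0), the geometric level distribution, and the use of independent rather than coupled draws per level (which preserves unbiasedness but inflates variance relative to a proper coupled MIMC construction) are all illustrative, not the paper's multi-index scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def biased_estimate(level, rng):
    """Stand-in for a discretized estimator with bias h = 2^-level:
    converges to 1.0 as the level grows. In MIMC this would be a
    PDE/SDE discretization, not a closed-form toy."""
    h = 2.0 ** (-level)
    return 1.0 - h + 0.1 * h * rng.normal()

def single_term_unbiased(rng, p=0.5):
    """Sample a random level N with P(N = n) = p (1-p)^n and return the
    importance-weighted telescoping increment; its expectation telescopes
    to the bias-free limit lim_n E[Y_n]."""
    n = rng.geometric(p) - 1                     # support {0, 1, 2, ...}
    prob_n = p * (1 - p) ** n
    lower = biased_estimate(n - 1, rng) if n > 0 else 0.0
    return (biased_estimate(n, rng) - lower) / prob_n

est = np.mean([single_term_unbiased(rng) for _ in range(20000)])
```

Averaging many such draws estimates the exact (undiscretized) expectation without any residual discretization bias, at the cost of a randomized amount of work per sample.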
Hilbert Transformation and $r\mathrm{Spin}(n)+\mathbb{R}^n$ Group
In this paper we study symmetry properties of the Hilbert transformation of several real variables in the Clifford algebra setting. In order to describe the symmetry properties we introduce the group $r\mathrm{Spin}(n)+\mathbb{R}^n, r>0,$ which is essentially an extension of the $ax+b$ group. The study concludes that the Hilbert transformation has certain characteristic symmetry properties in terms of $r\mathrm{Spin}(n)+\mathbb{R}^n.$ In the present paper, for $n=2$ and $3$ we obtain, explicitly, the induced spinor representations of the $r\mathrm{Spin}(n)+\mathbb{R}^n$ group. Then we decompose the natural representation of $r\mathrm{Spin}(n)+\mathbb{R}^n$ into the direct sum of some two irreducible spinor representations, by which we characterize the Hilbert transformation in $\mathbb{R}^3$ and $\mathbb{R}^2.$ Precisely, we show that a nontrivial skew operator is the Hilbert transformation if and only if it is invariant under the action of the $r\mathrm{Spin}(n)+\mathbb{R}^n, n=2,3,$ group.
0
0
1
0
0
0
Asymptotic limit and decay estimates for a class of dissipative linear hyperbolic systems in several dimensions
In this paper, we study the large-time behavior of solutions to a class of partially dissipative linear hyperbolic systems with applications in velocity-jump processes in several dimensions. Given integers $n,d\ge 1$, let $\mathbf A:=(A^1,\dots,A^d)\in (\mathbb R^{n\times n})^d$ be a matrix-vector, where $A^j\in\mathbb R^{n\times n}$, and let $B\in \mathbb R^{n\times n}$ be a matrix, not necessarily symmetric, with a single eigenvalue zero. We consider the Cauchy problem for linear $n\times n$ systems having the form \begin{equation*} \partial_{t}u+\mathbf A\cdot \nabla_{\mathbf x} u+Bu=0,\qquad (\mathbf x,t)\in \mathbb R^d\times \mathbb R_+. \end{equation*} Under appropriate assumptions, we show that the solution $u$ is decomposed into $u=u^{(1)}+u^{(2)}$, where $u^{(1)}$ has the asymptotic profile which is the solution, denoted by $U$, of a parabolic equation and $u^{(1)}-U$ decays at the rate $t^{-\frac d2(\frac 1q-\frac 1p)-\frac 12}$ as $t\to +\infty$ in any $L^p$-norm, and $u^{(2)}$ decays exponentially in $L^2$-norm, provided $u(\cdot,0)\in L^q(\mathbb R^d)\cap L^2(\mathbb R^d)$ for $1\le q\le p\le \infty$. Moreover, $u^{(1)}-U$ decays at the optimal rate $t^{-\frac d2(\frac 1q-\frac 1p)-1}$ as $t\to +\infty$ if the system satisfies a symmetry property. The main proofs are based on asymptotic expansions of the solution $u$ in the frequency space and on Fourier analysis.
0
0
1
0
0
0
(G, μ)-displays and Rapoport-Zink spaces
Let $(G, \mu)$ be a pair consisting of a reductive group $G$ over the $p$-adic integers and a minuscule cocharacter $\mu$ of $G$ defined over an unramified extension. We introduce and study "$(G, \mu)$-displays", which generalize Zink's Witt vector displays. We use these to define certain Rapoport-Zink formal schemes purely group-theoretically, i.e. without $p$-divisible groups.
0
0
1
0
0
0
Selecting Representative Examples for Program Synthesis
Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, mapping the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, program synthesis is commonly formulated as a constraint satisfaction problem, where input-output examples are encoded as constraints and solved with a constraint solver. A key challenge of this formulation is scalability: while constraint solvers work well with a few well-chosen examples, a large set of examples can incur significant overhead in both time and memory. We describe a method to discover a subset of examples that is both small and representative: the subset is constructed iteratively, using a neural network to predict the probability of unchosen examples conditioned on the chosen examples in the subset, and greedily adding the least probable example. We empirically evaluate the representativeness of the subsets constructed by our method, and demonstrate such subsets can significantly improve synthesis time and stability.
1
0
0
0
0
0
Semistable rank 2 sheaves with singularities of mixed dimension on $\mathbb{P}^3$
We describe new irreducible components of the Gieseker-Maruyama moduli scheme $\mathcal{M}(3)$ of semistable rank 2 coherent sheaves with Chern classes $c_1=0,\ c_2=3,\ c_3=0$ on $\mathbb{P}^3$, general points of which correspond to sheaves whose singular loci contain components of dimensions both 0 and 1. These sheaves are produced by elementary transformations of stable reflexive rank 2 sheaves with $c_1=0,\ c_2=2,\ c_3=2$ or 4 along a disjoint union of a projective line and a collection of points in $\mathbb{P}^3$. The constructed families of sheaves provide first examples of irreducible components of the Gieseker-Maruyama moduli scheme such that their general sheaves have singularities of mixed dimension.
0
0
1
0
0
0
From jamming to collective cell migration through a boundary induced transition
Cell monolayers provide an interesting example of active matter, exhibiting a phase transition from a flowing to jammed state as they age. Here we report experiments and numerical simulations illustrating how a jammed cellular layer rapidly reverts to a flowing state after a wound. Quantitative comparison between experiments and simulations shows that cells change their self-propulsion and alignment strength so that the system crosses a phase transition line, which we characterize by finite-size scaling in an active particle model. This wound-induced unjamming transition is found to occur generically in epithelial, endothelial and cancer cells.
0
0
0
0
1
0
An Approximate Bayesian Long Short-Term Memory Algorithm for Outlier Detection
Long Short-Term Memory networks trained with gradient descent and back-propagation have achieved great success in various applications. However, point estimation of the network weights is prone to over-fitting and lacks the important uncertainty information associated with the estimation. Exact Bayesian neural network methods, meanwhile, are intractable and inapplicable to real-world applications. In this study, we propose an approximate estimation of the weights uncertainty using an Ensemble Kalman Filter, which is easily scalable to a large number of weights. Furthermore, we optimize the covariance of the noise distribution in the ensemble update step using maximum likelihood estimation. To assess the proposed algorithm, we apply it to outlier detection in five real-world events retrieved from the Twitter platform.
1
0
0
1
0
0
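The core mechanism the Bayesian LSTM abstract above relies on, an ensemble Kalman filter update applied to a vector of network weights, can be sketched generically in NumPy. This is an illustrative sketch, not the paper's algorithm: the linear observation operator, ensemble size, noise level, and perturbed-observation variant are all assumptions, and a real application would replace `W @ H.T` with the network's forward pass.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(W, H, y_obs, obs_noise_std, rng):
    """One EnKF analysis step on an ensemble of parameter vectors.
    W: ensemble, shape (n_ens, n_params); H: linear observation operator,
    shape (n_obs, n_params); y_obs: observed data, shape (n_obs,)."""
    n_ens = W.shape[0]
    Y = W @ H.T                                    # predicted observations
    Wc, Yc = W - W.mean(0), Y - Y.mean(0)          # centered anomalies
    P_wy = Wc.T @ Yc / (n_ens - 1)                 # weight/obs cross-covariance
    P_yy = Yc.T @ Yc / (n_ens - 1) + obs_noise_std ** 2 * np.eye(H.shape[0])
    K = P_wy @ np.linalg.inv(P_yy)                 # Kalman gain
    # Perturbed observations keep the posterior ensemble spread consistent.
    y_pert = y_obs + obs_noise_std * rng.normal(size=(n_ens, H.shape[0]))
    return W + (y_pert - Y) @ K.T

n_ens, n_params = 50, 3
W = rng.normal(0.0, 1.0, size=(n_ens, n_params))   # prior weight ensemble
H = np.eye(1, n_params)                            # observe only the first weight
W_new = enkf_update(W, H, y_obs=np.array([2.0]), obs_noise_std=0.1, rng=rng)
```

The attraction for weight estimation is visible in the shapes: the update only ever forms covariances against the low-dimensional observation space, so it scales to large `n_params` without an explicit `n_params x n_params` covariance matrix.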
Estimates of covering type and the number of vertices of minimal triangulations
The covering type of a space $X$ is defined as the minimal cardinality of a good cover of a space that is homotopy equivalent to $X$. We derive estimates for the covering type of $X$ in terms of other invariants of $X$, namely the ranks of the homology groups, the multiplicative structure of the cohomology ring and the Lusternik-Schnirelmann category of $X$. By relating the covering type to the number of vertices of minimal triangulations of complexes and combinatorial manifolds, we obtain, within a unified framework, several estimates which are either new or extensions of results that have been previously obtained by ad hoc combinatorial arguments. Moreover, our methods give results that are valid for entire homotopy classes of spaces.
0
0
1
0
0
0
Principal Floquet subspaces and exponential separations of type II with applications to random delay differential equations
This paper deals with the study of principal Lyapunov exponents, principal Floquet subspaces, and exponential separation for positive random linear dynamical systems in ordered Banach spaces. The main contribution lies in the introduction of a new type of exponential separation, called of type II, important for its application to nonautonomous random differential equations with delay. Under weakened assumptions, the existence of an exponential separation of type II in an abstract general setting is shown, and an illustration of its application to dynamical systems generated by scalar linear random delay differential equations with finite delay is given.
0
0
1
0
0
0
DNA insertion mutations can be predicted by a periodic probability function
It is generally difficult to predict the positions of mutations in genomic DNA at the nucleotide level. Retroviral DNA insertion is one mode of mutation, resulting in host infections that are difficult to treat. This mutation process involves the integration of retroviral DNA into the host-infected cellular genomic DNA following the interaction between host DNA and a pre-integration complex consisting of retroviral DNA and integrase. Here, we report that retroviral insertion sites around a hotspot within the Zfp521 and N-myc genes can be predicted by a periodic function that is deduced using the diffraction lattice model. In conclusion, the mutagenesis process is described by a biophysical model for DNA-DNA interactions.
0
1
0
0
0
0
Machine Learning Molecular Dynamics for the Simulation of Infrared Spectra
Machine learning has emerged as an invaluable tool in many research areas. In the present work, we harness this power to predict highly accurate molecular infrared spectra with unprecedented computational efficiency. To account for vibrational anharmonic and dynamical effects -- typically neglected by conventional quantum chemistry approaches -- we base our machine learning strategy on ab initio molecular dynamics simulations. While these simulations are usually extremely time consuming even for small molecules, we overcome these limitations by leveraging the power of a variety of machine learning techniques, not only accelerating simulations by several orders of magnitude, but also greatly extending the size of systems that can be treated. To this end, we develop a molecular dipole moment model based on environment-dependent neural network charges and combine it with the neural network potentials of Behler and Parrinello. Contrary to the prevalent big data philosophy, we are able to obtain very accurate machine learning models for the prediction of infrared spectra based on only a few hundred electronic structure reference points. This is made possible through the introduction of a fully automated sampling scheme and the use of molecular forces during neural network potential training. We demonstrate the power of our machine learning approach by applying it to model the infrared spectra of a methanol molecule, n-alkanes containing up to 200 atoms and the protonated alanine tripeptide, which at the same time represents the first application of machine learning techniques to simulate the dynamics of a peptide. In all these case studies we find excellent agreement between the infrared spectra predicted via machine learning models and the respective theoretical and experimental spectra.
0
1
0
1
0
0
Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries
In this article, we develop methods for estimating a low rank tensor from noisy observations on a subset of its entries to achieve both statistical and computational efficiencies. There has been a lot of recent interest in this problem of noisy tensor completion. Much of the attention has been focused on the fundamental computational challenges often associated with problems involving higher order tensors, yet very little is known about their statistical performance. To fill in this void, in this article, we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th order low rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm, which suggest significant room for improvement over the existing approaches. Furthermore, we propose a polynomial-time computable estimating procedure based upon power iteration and a second-order spectral initialization that achieves the optimal rates of convergence. Our method is fairly easy to implement and numerical experiments are presented to further demonstrate the practical merits of our estimator.
0
0
1
1
0
0
The cohomology of the full directed graph complex
In his seminal paper "Formality conjecture", M. Kontsevich introduced a graph complex $GC_{1ve}$ closely connected with the problem of constructing a formality quasi-isomorphism for Hochschild cochains. In this paper, we express the cohomology of the full directed graph complex explicitly in terms of the cohomology of $GC_{1ve}$. Applications of our results include a recent work by the first author which completely characterizes homotopy classes of formality quasi-isomorphisms for Hochschild cochains in the stable setting.
0
0
1
0
0
0
Model equations and structures formation for the media with memory
We propose new types of models of the appearance of small- and large-scale structures in media with memory, including a hyperbolic modification of the Navier-Stokes equations and a class of dynamical low-dimensional models with memory effects. On the basis of computer modeling, the formation of the small-scale structures and collapses and the appearance of new chaotic solutions are demonstrated. Possibilities of the application of some proposed models to the description of the burst-type processes and collapses on the Sun are discussed.
0
1
0
0
0
0
On the Support Recovery of Jointly Sparse Gaussian Sources using Sparse Bayesian Learning
In this work, we provide non-asymptotic, probabilistic guarantees for successful sparse support recovery by the multiple sparse Bayesian learning (M-SBL) algorithm in the multiple measurement vector (MMV) framework. For joint sparse Gaussian sources, we show that M-SBL perfectly recovers their common nonzero support with arbitrarily high probability using only finitely many MMVs. In fact, the support error probability decays exponentially fast with the number of MMVs, with the decay rate depending on the restricted isometry property of the self Khatri-Rao product of the measurement matrix. Our analysis theoretically confirms that M-SBL is capable of recovering supports of size as high as $\mathcal{O}(m^2)$, where $m$ is the number of measurements per sparse vector. In contrast, popular MMV algorithms in compressed sensing such as simultaneous orthogonal matching pursuit and row-LASSO can recover only $\mathcal{O}(m)$ sized supports. In the special case of noiseless measurements, we show that a single MMV suffices for perfect recovery of the $k$-sparse support in M-SBL, provided any $k + 1$ columns of the measurement matrix are linearly independent. Unlike existing support recovery guarantees for M-SBL, our sufficient conditions are non-asymptotic in nature, and do not require the orthogonality of the nonzero rows of the joint sparse signals.
1
0
0
0
0
0
A Critical-like Collective State Leads to Long-range Cell Communication in Dictyostelium discoideum Aggregation
The transition from single-cell to multicellular behavior is important in early development but rarely studied. The starvation-induced aggregation of the social amoeba Dictyostelium discoideum into a multicellular slug is known to result from single-cell chemotaxis towards emitted pulses of cyclic adenosine monophosphate (cAMP). However, how exactly do transient short-range chemical gradients lead to coherent collective movement at a macroscopic scale? Here, we use a multiscale model verified by quantitative microscopy to describe wide-ranging behaviors from chemotaxis and excitability of individual cells to aggregation of thousands of cells. To better understand the mechanism of long-range cell-cell communication and hence aggregation, we analyze cell-cell correlations, showing evidence for self-organization at the onset of aggregation (as opposed to following a leader cell). Surprisingly, cell collectives, despite their finite size, show features of criticality known from phase transitions in physical systems. Application of external cAMP perturbations in our simulations near the sensitive critical point allows steering cells into early aggregation and towards certain locations but not once an aggregation center has been chosen.
0
0
0
0
1
0
Twistor theory at fifty: from contour integrals to twistor strings
We review aspects of twistor theory, its aims and achievements spanning the last five decades. In the twistor approach, space--time is secondary with events being derived objects that correspond to compact holomorphic curves in a complex three--fold -- the twistor space. After giving an elementary construction of this space we demonstrate how solutions to linear and nonlinear equations of mathematical physics -- anti-self-duality (ASD) equations on Yang--Mills fields or conformal curvature -- can be encoded into twistor cohomology. These twistor correspondences yield explicit examples of Yang--Mills and gravitational instantons, which we review. They also underlie the twistor approach to integrability: the solitonic systems arise as symmetry reductions of ASD Yang--Mills equations, and Einstein--Weyl dispersionless systems are reductions of ASD conformal equations. We then review the holomorphic string theories in twistor and ambitwistor spaces, and explain how these theories give rise to remarkable new formulae for the computation of quantum scattering amplitudes. Finally we discuss the Newtonian limit of twistor theory, and its possible role in Penrose's proposal for a role of gravity in quantum collapse of a wave function.
0
1
1
0
0
0
Strong convergence rates of probabilistic integrators for ordinary differential equations
Probabilistic integration of a continuous dynamical system is a way of systematically introducing model error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al.\ (\textit{Stat.\ Comput.}, 2016), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially-bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.
0
0
1
1
0
0
The G-centre and gradable derived equivalences
We propose a generalisation for the notion of the centre of an algebra in the setup of algebras graded by an arbitrary abelian group G. Our generalisation, which we call the G-centre, is designed to control the endomorphism category of the grading shift functors. We show that the G-centre is preserved by gradable derived equivalences given by tilting modules. We also discuss links with existing notions in superalgebra theory and apply our results to derived equivalences of superalgebras.
0
0
1
0
0
0
RFCDE: Random Forests for Conditional Density Estimation
Random forests are a common non-parametric regression technique which performs well for mixed-type data and irrelevant covariates, while being robust to monotonic variable transformations. Existing random forest implementations target regression or classification. We introduce the RFCDE package for fitting random forest models optimized for nonparametric conditional density estimation, including joint densities for multiple responses. This enables analysis of conditional probability distributions, which is useful for propagating uncertainty and of joint distributions that describe relationships between multiple responses and covariates. RFCDE is released under the MIT open-source license and can be accessed at this https URL . Both R and Python versions, which call a common C++ library, are available.
0
0
0
1
0
0
The symplectic approach of gauged linear $σ$-model
Witten's Gauged Linear $\sigma$-Model (GLSM) unifies the Gromov-Witten theory and the Landau-Ginzburg theory, and provides a global perspective on mirror symmetry. In this article, we summarize a mathematically rigorous construction of the GLSM in the geometric phase using methods from symplectic geometry.
0
0
1
0
0
0