Dataset columns: text (string, lengths 138 to 2.38k characters; paper title and abstract), labels (sequence of length 6; binary subject-label vector), Predictions (sequence of 1 to 3 predicted subject names).
Title: Integrable 7-point discrete equations and evolution lattice equations of order 2, Abstract: We consider differential-difference equations that determine the continuous symmetries of discrete equations on the triangular lattice. It is shown that a certain combination of continuous flows can be represented as a scalar evolution lattice equation of order 2. The general scheme is illustrated by a number of examples, including an analog of the elliptic Yamilov lattice equation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Complete Analysis of a Random Forest Model, Abstract: Random forests have become an important tool for improving accuracy in regression problems since their popularization by [Breiman, 2001] and others. In this paper, we revisit a random forest model originally proposed by [Breiman, 2004] and later studied by [Biau, 2012], where a feature is selected at random and the split occurs at the midpoint of the box containing the chosen feature. If the Lipschitz regression function is sparse and only depends on a small, unknown subset of $S$ out of $d$ features, we show that, given access to $n$ observations, this random forest model outputs a predictor that has a mean-squared prediction error $O((n(\sqrt{\log n})^{S-1})^{-\frac{1}{S\log2+1}})$. This positively answers an outstanding question of [Biau, 2012] about whether the rate of convergence therein could be improved. The second part of this article shows that the aforementioned prediction error cannot generally be improved, which we accomplish by characterizing the variance and by showing that the bias is tight for any linear model with nonzero parameter vector. As a striking consequence of our analysis, we show the variance of this forest is similar in form to the best-case variance lower bound of [Lin and Jeon, 2006] among all random forest models with nonadaptive splitting schemes (i.e., where the split protocol is independent of the training data).
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
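The split rule this abstract describes (choose a feature uniformly at random and cut the current cell at its midpoint along that feature) is simple enough to prototype directly. Below is a minimal Python sketch of such a forest on a toy sparse regression problem; the function names, tree depth and data are illustrative choices, not taken from the paper.

import numpy as np

def build_tree(X, y, box_lo, box_hi, depth, rng):
    """Recursively split: choose a random feature, cut at the midpoint of the box."""
    if depth == 0 or len(y) <= 1:
        return {"leaf": True, "value": y.mean() if len(y) else 0.0}
    j = rng.integers(X.shape[1])            # feature chosen uniformly at random
    cut = 0.5 * (box_lo[j] + box_hi[j])     # midpoint of the current cell
    left = X[:, j] <= cut
    lo_l, hi_l = box_lo.copy(), box_hi.copy(); hi_l[j] = cut
    lo_r, hi_r = box_lo.copy(), box_hi.copy(); lo_r[j] = cut
    return {"leaf": False, "feat": j, "cut": cut,
            "l": build_tree(X[left], y[left], lo_l, hi_l, depth - 1, rng),
            "r": build_tree(X[~left], y[~left], lo_r, hi_r, depth - 1, rng)}

def predict_tree(node, x):
    while not node["leaf"]:
        node = node["l"] if x[node["feat"]] <= node["cut"] else node["r"]
    return node["value"]

def forest_predict(X, y, X_test, n_trees=100, depth=6, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.zeros(X.shape[1]), np.ones(X.shape[1])   # features assumed scaled to [0,1]^d
    trees = [build_tree(X, y, lo, hi, depth, rng) for _ in range(n_trees)]
    return np.array([np.mean([predict_tree(t, x) for t in trees]) for x in X_test])

# toy sparse regression: only the first of five features matters
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 5)); y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=2000)
X_test = rng.uniform(size=(200, 5))
pred = forest_predict(X, y, X_test)
print("MSE:", np.mean((pred - np.sin(3 * X_test[:, 0])) ** 2))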
Title: The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime, Abstract: We propose a novel technique for analyzing adaptive sampling called the {\em Simulator}. Our approach differs from the existing methods by considering not how much information could be gathered by any fixed sampling strategy, but how difficult it is to distinguish a good sampling strategy from a bad one given the limited amount of data collected up to any given time. This change of perspective allows us to match the strength of both Fano and change-of-measure techniques, without succumbing to the limitations of either method. For concreteness, we apply our techniques to a structured multi-armed bandit problem in the fixed-confidence pure exploration setting, where we show that the constraints on the means imply a substantial gap between the moderate-confidence sample complexity and the asymptotic sample complexity as $\delta \to 0$ found in the literature. We also prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors. Moreover, our lower bounds zero in on the number of times each \emph{individual} arm needs to be pulled, uncovering new phenomena which are drowned out in the aggregate sample complexity. Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first {\em practical} algorithm of its kind for the latter problem which removes extraneous log factors, and outperforms the state-of-the-art in experiments.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Levi-Kahler reduction of CR structures, products of spheres, and toric geometry, Abstract: We study CR geometry in arbitrary codimension, and introduce a process, which we call the Levi-Kahler quotient, for constructing Kahler metrics from CR structures with a transverse torus action. Most of the paper is devoted to the study of Levi-Kahler quotients of toric CR manifolds, and in particular, products of odd dimensional spheres. We obtain explicit descriptions and characterizations of such quotients, and find Levi-Kahler quotients of products of 3-spheres which are extremal in a weighted sense introduced by G. Maschler and the first author.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Confidence intervals for the area under the receiver operating characteristic curve in the presence of ignorable missing data, Abstract: Receiver operating characteristic (ROC) curves are widely used as a measure of accuracy of diagnostic tests and can be summarized using the area under the ROC curve (AUC). Often, it is useful to construct a confidence interval for the AUC; however, since there are a number of different proposed methods to measure the variance of the AUC, there are many different resulting methods for constructing these intervals. In this manuscript, we compare different methods of constructing Wald-type confidence intervals in the presence of missing data where the missingness mechanism is ignorable. We find that constructing confidence intervals using multiple imputation (MI) based on logistic regression (LR) gives the most robust coverage probability and that the choice of CI method is less important. However, when the missingness rate is less severe (e.g. less than 70%), we recommend using Newcombe's Wald method for constructing confidence intervals along with multiple imputation using predictive mean matching (PMM).
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Overcoming the Sign Problem at Finite Temperature: Quantum Tensor Network for the Orbital $e_g$ Model on an Infinite Square Lattice, Abstract: The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied for the first time to a model suffering from the notorious quantum Monte Carlo sign problem --- the orbital $e_g$ model with spatially highly anisotropic orbital interactions. Coarse-graining of the tensor network along the inverse temperature $\beta$ yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension $D$ --- limiting the amount of entanglement --- is a natural refinement parameter. Increasing $D$ we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of a finite order parameter below the critical temperature $T_c$, provide a numerically exact estimate of~$T_c$, and give the critical exponents within $1\%$ of the 2D Ising universality class.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Nonasymptotic estimation and support recovery for high dimensional sparse covariance matrices, Abstract: We propose a general framework for nonasymptotic covariance matrix estimation making use of concentration inequality-based confidence sets. We specify this framework for the estimation of large sparse covariance matrices through incorporation of past thresholding estimators with key emphasis on support recovery. This technique goes beyond past results for thresholding estimators by allowing for a wide range of distributional assumptions beyond merely sub-Gaussian tails. This methodology can furthermore be adapted to a wide range of other estimators and settings. The usage of nonasymptotic dimension-free confidence sets yields good theoretical performance. Through extensive simulations, it is demonstrated to have superior performance when compared with other such methods. In the context of support recovery, we are able to specify a false positive rate and optimize to maximize the true recoveries.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Most Complex Deterministic Union-Free Regular Languages, Abstract: A regular language $L$ is union-free if it can be represented by a regular expression without the union operation. A union-free language is deterministic if it can be accepted by a deterministic one-cycle-free-path finite automaton; this is an automaton which has one final state and exactly one cycle-free path from any state to the final state. Jirásková and Masopust proved that the state complexities of the basic operations reversal, star, product, and boolean operations in deterministic union-free languages are exactly the same as those in the class of all regular languages. To prove that the bounds are met they used five types of automata, involving eight types of transformations of the set of states of the automata. We show that for each $n\ge 3$ there exists one ternary witness of state complexity $n$ that meets the bound for reversal and product. Moreover, the restrictions of this witness to binary alphabets meet the bounds for star and boolean operations. We also show that the tight upper bounds on the state complexity of binary operations that take arguments over different alphabets are the same as those for arbitrary regular languages. Furthermore, we prove that the maximal syntactic semigroup of a union-free language has $n^n$ elements, as in the case of regular languages, and that the maximal state complexities of atoms of union-free languages are the same as those for regular languages. Finally, we prove that there exists a most complex union-free language that meets the bounds for all these complexity measures. Altogether this proves that the complexity measures above cannot distinguish union-free languages from regular languages.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Piecewise Deterministic Markov Processes and their invariant measure, Abstract: Piecewise Deterministic Markov Processes (PDMPs) are studied in a general framework. First, different constructions are proven to be equivalent. Second, we introduce a coupling between two PDMPs following the same differential flow which implies quantitative bounds on the total variation between the marginal distributions of the two processes. Finally, two results are established regarding the invariant measures of PDMPs. A practical condition to show that a probability measure is invariant for the associated PDMP semi-group is presented. Then, a bound in $V$-norm between the invariant probability measures of two PDMPs following the same differential flow is established. This last result is then applied to study the asymptotic bias of some non-exact PDMP MCMC methods.
[ 0, 0, 0, 1, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Multipole resonances and directional scattering by hyperbolic-media antennas, Abstract: We propose to use optical antennas made out of the natural hyperbolic material hexagonal boron nitride (hBN), and we demonstrate that this medium is a promising alternative to plasmonic and all-dielectric materials for realizing efficient subwavelength scatterers and metasurfaces based on them. We theoretically show that particles made of a hyperbolic medium possess different resonances enabled by the support of high-k waves and their reflection from the particle boundaries. Among those resonances, there are electric quadrupole excitations, which cause a magnetic resonance of the particle similar to what occurs in high-refractive-index particles. Excitation of the particle resonances is accompanied by a drop in the reflection from the nanoparticle array to a near-zero value, which can be ascribed to a resonant Kerker effect. If the particles are arranged in a spaced array with period d, narrow lattice resonances are possible at wavelengths d, d/2, d/3, etc. This provides an additional degree of control and the possibility of exciting resonances at a wavelength defined by the array spacing. For the hBN particle with hyperbolic dispersion, we show that the full range of the resonances, including the magnetic resonance and a decrease of reflection, is possible.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Controlling Physical Attributes in GAN-Accelerated Simulation of Electromagnetic Calorimeters, Abstract: High-precision modeling of subatomic particle interactions is critical for many fields within the physical sciences, such as nuclear physics and high energy particle physics. Most simulation pipelines in the sciences are computationally intensive -- in a variety of scientific fields, Generative Adversarial Networks have been suggested as a solution to speed up the forward component of simulation, with promising results. An important component of any simulation system for the sciences is the ability to condition on any number of physically meaningful latent characteristics that can affect the forward generation procedure. We introduce an auxiliary task to the training of a Generative Adversarial Network on particle showers in a multi-layer electromagnetic calorimeter, which allows our model to learn an attribute-aware conditioning mechanism.
[ 1, 0, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: Max-value Entropy Search for Efficient Bayesian Optimization, Abstract: Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the $\arg\max$ of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.
[ 1, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
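A sketch of the max-value acquisition described above, assuming Gaussian-process posterior means and standard deviations at the candidate points and a set of sampled maximum values (e.g. obtained by Gumbel sampling). The closed-form expression for the entropy reduction follows the form I recall from the MES paper and should be treated as an assumption rather than a quotation.

import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star_samples):
    """Max-value Entropy Search acquisition for candidate points.

    mu, sigma      : arrays of GP posterior means / standard deviations at the candidates
    y_star_samples : sampled values of the global maximum of the unknown function
    """
    mu = np.asarray(mu)
    sigma = np.maximum(np.asarray(sigma), 1e-9)
    vals = []
    for y_star in y_star_samples:
        gamma = (y_star - mu) / sigma
        cdf = np.clip(norm.cdf(gamma), 1e-12, 1.0)
        vals.append(gamma * norm.pdf(gamma) / (2.0 * cdf) - np.log(cdf))
    return np.mean(vals, axis=0)   # average the entropy reduction over the sampled maxima

# usage on dummy posterior values at three candidate points
mu = np.array([0.1, 0.4, 0.2]); sigma = np.array([0.5, 0.1, 0.3])
print(mes_acquisition(mu, sigma, y_star_samples=[0.9, 1.1, 1.0]))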
Title: Energy Dissipation in Monolayer MoS$_2$ Electronics, Abstract: The advancement of nanoscale electronics has been limited by energy dissipation challenges for over a decade. Such limitations could be particularly severe for two-dimensional (2D) semiconductors integrated with flexible substrates or multi-layered processors, both being critical thermal bottlenecks. To shed light on fundamental aspects of this problem, here we report the first direct measurement of spatially resolved temperature in functioning 2D monolayer MoS$_2$ transistors. Using Raman thermometry we simultaneously obtain temperature maps of the device channel and its substrate. This differential measurement reveals that the thermal boundary conductance (TBC) of the MoS$_2$ interface (14 $\pm$ 4 MW m$^{-2}$ K$^{-1}$) is an order of magnitude larger than previously thought, yet near the low end of known solid-solid interfaces. Our study also reveals unexpected insight into non-uniformities of the MoS$_2$ transistors (small bilayer regions), which do not cause significant self-heating, suggesting that such semiconductors are less sensitive to inhomogeneity than expected. These results provide key insights into energy dissipation of 2D semiconductors and pave the way for the future design of energy-efficient 2D electronics.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Interactive Reinforcement Learning for Object Grounding via Self-Talking, Abstract: Humans are able to identify a referred visual object in a complex scene via a few rounds of natural language communication. Successful communication requires both parties to engage and learn to adapt to each other. In this paper, we introduce an interactive training method to improve the natural language conversation system for a visual grounding task. During interactive training, both agents are reinforced by the guidance from a common reward function. The parametrized reward function also cooperatively updates itself via interactions, and contributes to accomplishing the task. We evaluate the method on the GuessWhat?! visual grounding task and significantly improve the task success rate. However, we observe a language drift problem during training and propose to use reward engineering to improve the interpretability of the generated conversations. Our results also indicate that evaluating goal-ended visual conversation tasks requires semantically relevant metrics beyond the task success rate.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Adaptive Bayesian Sampling with Monte Carlo EM, Abstract: We present a novel technique for learning the mass matrices in samplers obtained from discretized dynamics that preserve some energy function. Existing adaptive samplers use Riemannian preconditioning techniques, where the mass matrices are functions of the parameters being sampled. This leads to significant complexities in the energy reformulations and resultant dynamics, often leading to implicit systems of equations and requiring inversion of high-dimensional matrices in the leapfrog steps. Our approach provides a simpler alternative, by using existing dynamics in the sampling step of a Monte Carlo EM framework, and learning the mass matrices in the M step with a novel online technique. We also propose a way to adaptively set the number of samples gathered in the E step, using sampling error estimates from the leapfrog dynamics. Along with a novel stochastic sampler based on Nosé-Poincaré dynamics, we use this framework with standard Hamiltonian Monte Carlo (HMC) as well as newer stochastic algorithms such as SGHMC and SGNHT, and show strong performance on synthetic and real high-dimensional sampling scenarios; we achieve sampling accuracies comparable to Riemannian samplers while being significantly faster.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Computer Science" ]
Title: Resource Allocation for Wireless Networks: A Distributed Optimization Approach, Abstract: We consider the multi-cell joint power control and scheduling problem in cellular wireless networks as a weighted sum-rate maximization problem. This formulation is very general and applies to a wide range of applications and QoS requirements. The problem is inherently hard due to the objective's non-convexity and the knapsack-like constraints. Moreover, practical systems require distributed operation. We applied an existing algorithm proposed by Scutari et al. in the distributed optimization literature to our problem. The algorithm performs local optimization followed by a consensus update repeatedly. However, it is not fully applicable to our problem, as it requires all decision variables to be maintained at every base station (BS), which is impractical for large-scale networks; also, it relies on the Lipschitz continuity of the objective function's gradient, which does not hold here. We exploited the nature of our objective function and proposed a localized version of the algorithm. Furthermore, we relaxed the requirement of Lipschitz continuity with a proximal approximation. Convergence to locally optimal solutions was proved under some conditions. Future work includes proving the above results from a stochastic approximation perspective, and investigating non-linear consensus schemes to speed up the convergence.
[ 0, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Response of QD to structured beams via convolution integrals, Abstract: We propose a new expression for the response of a quadrant detector using convolution integrals. This expression is easier to evaluate by hand, exploiting the properties of the convolution. It is also practical to use computationally, since many software packages can evaluate convolutions directly. We use the new expression to obtain an analytical form of the response of a quadrant detector to a Gaussian beam and to Hermite-Gaussian beams in general. We compare this analytic expression for the response to the Gaussian beam with the approximations from previous studies and with a response obtained through simulations. From the response, we also obtain an analytical form for the sensitivity of the quadrant detector to a Gaussian beam. Lastly, we demonstrate the computational ease of using our new expression by calculating the sensitivity of the quadrant detector to a Bessel beam.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
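The response described in this abstract can be checked numerically: for a beam displaced to (x0, y0), the x-response is the normalized difference between the power on the right and left half-planes of the detector, which is exactly a cross-correlation of the intensity profile with the sign of the detector coordinate. A minimal sketch for a Gaussian beam follows; the grid, waist and offsets are arbitrary placeholder values.

import numpy as np

# detector-plane grid
x = np.linspace(-5, 5, 501)
X, Y = np.meshgrid(x, x, indexing="xy")
w0 = 1.0                                    # beam waist (arbitrary units)

def qd_x_response(x0, y0):
    """x-response of a quadrant detector to a Gaussian beam centred at (x0, y0)."""
    I = np.exp(-2.0 * ((X - x0) ** 2 + (Y - y0) ** 2) / w0 ** 2)
    # (right - left) / total; equivalent to correlating I with sign(x)
    return np.sum(I * np.sign(X)) / np.sum(I)

for dx in np.linspace(-2, 2, 9):
    print(f"x0 = {dx:+.2f}  ->  response = {qd_x_response(dx, 0.0):+.4f}")
# for this normalization the response is erf(sqrt(2)*x0/w0),
# so near the centre it is linear with slope sqrt(8/pi)/w0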
Title: A Survey on Cloud Video Multicasting Over Mobile Networks, Abstract: Since multimedia streaming has become a very popular research topic in recent years, this paper surveys the state-of-the-art techniques introduced for multimedia multicasting over mobile networks. In this paper, we give an overview of multimedia multicasting mechanisms with respect to cloud mobile communications, and we put some proposed solutions in perspective. We focus on the algorithms designed specifically for video-on-demand applications. Our study of video-on-demand applications eventually covers a wide range of applications, such as cloud gaming, without exceeding the limited scope of this survey.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: The Brauer trees of unipotent blocks, Abstract: In this paper we complete the determination of the Brauer trees of unipotent blocks (with cyclic defect groups) of finite groups of Lie type. These trees were conjectured by the first author. As a consequence, the Brauer trees of principal $\ell$-blocks of finite groups are known for $\ell>71$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: The MoEDAL experiment at the LHC: status and results, Abstract: The MoEDAL experiment at the LHC is optimised to detect highly ionising particles such as magnetic monopoles, dyons and (multiply) electrically charged stable massive particles predicted in a number of theoretical scenarios. MoEDAL, deployed in the LHCb cavern, combines passive nuclear track detectors with magnetic monopole trapping volumes (MMTs), while spallation-product backgrounds are being monitored with an array of MediPix pixel detectors. An introduction to the detector concept and its physics reach, complementary to that of the large general-purpose LHC experiments ATLAS and CMS, will be given. Emphasis is given to the recent MoEDAL results at 13 TeV, where the null results from a search for magnetic monopoles in MMTs exposed in 2015 LHC collisions set the world's best limits on particles with magnetic charges larger than 1.5 Dirac charges. The potential to search for heavy, long-lived supersymmetric electrically-charged particles is also discussed.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Peptide-Spectra Matching from Weak Supervision, Abstract: As in many other scientific domains, we face a fundamental problem when using machine learning to identify proteins from mass spectrometry data: large ground truth datasets mapping inputs to correct outputs are extremely difficult to obtain. Instead, we have access to imperfect hand-coded models crafted by domain experts. In this paper, we apply deep neural networks to an important step of the protein identification problem, the pairing of mass spectra with short sequences of amino acids called peptides. We train our model to differentiate between top scoring results from a state-of-the-art classical system and hard-negative second and third place results. Our resulting model is much better at identifying peptides with spectra than the model used to generate its training data. In particular, we achieve a 43% improvement over standard matching methods and a 10% improvement over a combination of the matching method and an industry standard cross-spectra reranking tool. Importantly, in a more difficult experimental regime that reflects current challenges facing biologists, our advantage over the previous state-of-the-art grows to 15% even after reranking. We believe this approach will generalize to other challenging scientific problems.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Dirichlet Bayesian Network Scores and the Maximum Relative Entropy Principle, Abstract: A classic approach for learning Bayesian networks from data is to identify a maximum a posteriori (MAP) network structure. In the case of discrete Bayesian networks, MAP networks are selected by maximising one of several possible Bayesian Dirichlet (BD) scores; the most famous is the Bayesian Dirichlet equivalent uniform (BDeu) score from Heckerman et al (1995). The key properties of BDeu arise from its uniform prior over the parameters of each local distribution in the network, which makes structure learning computationally efficient; it does not require the elicitation of prior knowledge from experts; and it satisfies score equivalence. In this paper we will review the derivation and the properties of BD scores, and of BDeu in particular, and we will link them to the corresponding entropy estimates to study them from an information theoretic perspective. To this end, we will work in the context of the foundational work of Giffin and Caticha (2007), who showed that Bayesian inference can be framed as a particular case of the maximum relative entropy principle. We will use this connection to show that BDeu should not be used for structure learning from sparse data, since it violates the maximum relative entropy principle; and that it is also problematic from a more classic Bayesian model selection perspective, because it produces Bayes factors that are sensitive to the value of its only hyperparameter. Using a large simulation study, we found in our previous work (Scutari, 2016) that the Bayesian Dirichlet sparse (BDs) score seems to provide better accuracy in structure learning; in this paper we further show that BDs does not suffer from the issues above, and we recommend using it for sparse data instead of BDeu. Finally, we will show that these issues are in fact different aspects of the same problem and a consequence of the distributional assumptions of the prior.
[ 0, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
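For concreteness, the BD family of scores mentioned above comes from the Dirichlet-multinomial marginal likelihood, and the BDeu variant fixes the Dirichlet prior to ess/(q*r) per cell of the node's count table. The sketch below computes the local log BDeu score of a single node from its parent-configuration counts; the function name and toy counts are illustrative, and the formula is the standard one from Heckerman et al. as I recall it, so treat it as an assumption.

import numpy as np
from scipy.special import gammaln

def log_bdeu_local(counts, ess=1.0):
    """Log BDeu score of one node.

    counts : array of shape (q, r) with N_ijk; rows = parent configurations,
             columns = states of the node.
    ess    : equivalent (imaginary) sample size, the only BDeu hyperparameter.
    """
    counts = np.asarray(counts, dtype=float)
    q, r = counts.shape
    a_jk = ess / (q * r)                 # Dirichlet prior mass on each cell
    a_j = ess / q                        # prior mass per parent configuration
    n_j = counts.sum(axis=1)
    score = np.sum(gammaln(a_j) - gammaln(a_j + n_j))
    score += np.sum(gammaln(a_jk + counts) - gammaln(a_jk))
    return score

# toy example: binary node with a binary parent, hence a 2x2 count table
counts = np.array([[12, 3], [4, 9]])
for ess in (0.1, 1.0, 10.0):
    print(f"ess = {ess:5.1f}   log BDeu = {log_bdeu_local(counts, ess):.3f}")
# the abstract's point: Bayes factors built from these scores shift with ess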
Title: Thermal lattice Boltzmann method for multiphase flows, Abstract: A new method to simulate heat transport in the multiphase lattice Boltzmann (LB) method is proposed. The energy transport equation needs to be solved when phase boundaries are present. Internal energy is represented by an additional set of distribution functions, which evolve according to an LB-like equation simulating the transport of a passive scalar. Parasitic heat diffusion near boundaries with a large density gradient is suppressed by using the interparticle "pseudoforces", which prevent the spreading of energy. The compression work and heat diffusion are calculated by finite differences. The latent heat of a phase transition is released or absorbed in the inner side of a thin transition layer between liquid and vapor. This allows one to avoid interface tracking. Several tests were carried out concerning all aspects of the processes. It was shown that the Galilean invariance and the scaling of the thermal conduction process hold, as well as the correct dependence of the sound speed on the heat capacity ratio. The proposed method has low scheme diffusion of the internal energy, and it can be applied to modeling a wide range of multiphase flows with heat and mass transfer.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Grand Fujii-Fujii-Nakamoto operator inequality dealing with operator order and operator chaotic order, Abstract: In this paper, we shall prove that a grand Fujii-Fujii-Nakamoto operator inequality implies operator order and operator chaotic order under different conditions.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Size scaling of failure strength with fat-tailed disorder in a fiber bundle model, Abstract: We investigate the size scaling of the macroscopic fracture strength of heterogeneous materials when microscopic disorder is controlled by fat-tailed distributions. We consider a fiber bundle model where the strength of single fibers is described by a power law distribution over a finite range. Tuning the amount of disorder by varying the power law exponent and the upper cutoff of fibers' strength, in the limit of equal load sharing an astonishing size effect is revealed: For small system sizes the bundle strength increases with the number of fibers and the usual decreasing size effect of heterogeneous materials is only restored beyond a characteristic size. We show analytically that the extreme order statistics of fibers' strength is responsible for this peculiar behavior. Analyzing the results of computer simulations we deduce a scaling form which describes the dependence of the macroscopic strength of fiber bundles on the parameters of microscopic disorder over the entire range of system sizes.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
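In the equal-load-sharing limit this abstract refers to, the bundle strength has a closed form in terms of the order statistics of the fiber thresholds, which makes the model easy to simulate. The sketch below samples power-law strengths on a finite range by inverse-CDF sampling and evaluates max_k sigma_(k)*(N-k+1)/N; the exponent, cutoff and sample counts are arbitrary and are not claimed to reproduce the paper's size-effect curves.

import numpy as np

def bundle_strength(n_fibers, exponent=1.5, x_min=1.0, x_max=50.0, rng=None):
    """Strength of an equal-load-sharing fiber bundle with power-law fiber strengths.

    Fiber strengths follow p(x) ~ x^(-exponent) on [x_min, x_max], sampled by
    inverting the CDF; the bundle strength per fiber is
    max_k sigma_(k) * (N - k + 1) / N over the ascending order statistics.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n_fibers)
    a = 1.0 - exponent                                   # valid for exponent != 1
    x = (x_min ** a + u * (x_max ** a - x_min ** a)) ** (1.0 / a)
    x.sort()
    k = np.arange(1, n_fibers + 1)
    return np.max(x * (n_fibers - k + 1) / n_fibers)

rng = np.random.default_rng(0)
for n in (16, 64, 256, 1024, 4096):
    s = np.mean([bundle_strength(n, rng=rng) for _ in range(200)])
    print(f"N = {n:5d}   mean bundle strength = {s:.3f}")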
Title: Time-dependent focusing Mean-Field Games: the sub-critical case, Abstract: We consider time-dependent viscous Mean-Field Games systems in the case of local, decreasing and unbounded coupling. These systems arise in mean-field game theory, and describe Nash equilibria of games with a large number of agents aiming at aggregation. We prove the existence of weak solutions that are minimisers of an associated non-convex functional, by rephrasing the problem in a convex framework. Under additional assumptions involving the growth at infinity of the coupling, the Hamiltonian, and the space dimension, we show that such minimisers are indeed classical solutions by a blow-up argument and additional Sobolev regularity for the Fokker-Planck equation. We exhibit an example of non-uniqueness of solutions. Finally, by means of a contraction principle, we observe that classical solutions exist just by local regularity of the coupling if the time horizon is short.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Finitely forcible graph limits are universal, Abstract: The theory of graph limits represents large graphs by analytic objects called graphons. Graph limits determined by finitely many graph densities, which are represented by finitely forcible graphons, arise in various scenarios, particularly within extremal combinatorics. Lovasz and Szegedy conjectured that all such graphons possess a simple structure, e.g., the space of their typical vertices is always finite dimensional; this was disproved by several ad hoc constructions of complex finitely forcible graphons. We prove that any graphon is a subgraphon of a finitely forcible graphon. This dismisses any hope for a result showing that finitely forcible graphons possess a simple structure, and is surprising when contrasted with the fact that finitely forcible graphons form a meager set in the space of all graphons. In addition, since any finitely forcible graphon represents the unique minimizer of some linear combination of densities of subgraphs, our result also shows that such minimization problems, which conceptually are among the simplest kind within extremal graph theory, may in fact have unique optimal solutions with arbitrarily complex structure.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Sex-biased dispersal: a review of the theory, Abstract: Dispersal is ubiquitous throughout the tree of life: factors selecting for dispersal include kin competition, inbreeding avoidance and spatiotemporal variation in resources or habitat suitability. These factors differ in whether they promote male and female dispersal equally strongly, and often selection on dispersal of one sex depends on how much the other disperses. For example, for inbreeding avoidance it can be sufficient that one sex disperses away from the natal site. Attempts to understand sex-specific dispersal evolution have created a rich body of theoretical literature, which we review here. We highlight an interesting gap between empirical and theoretical literature. The former associates different patterns of sex-biased dispersal with mating systems, such as female-biased dispersal in monogamous birds and male-biased dispersal in polygynous mammals. The predominant explanation is traceable back to Greenwood's (1980) ideas of how successful philopatric or dispersing individuals are at gaining mates or resources required to attract them. Theory, however, has developed surprisingly independently of these ideas: predominant ideas in theoretical work track how immigration and emigration change relatedness patterns and alleviate competition for limiting resources, typically considered sexually distinct, with breeding sites and fertilisable females limiting reproductive success for females and males, respectively. We show that the link between mating system and sex-biased dispersal is far from resolved: there are studies showing that mating systems matter, but the oft-stated association between polygyny and male-biased dispersal is not a straightforward theoretical expectation... (full abstract in the PDF)
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology" ]
Title: Nonequilibrium quantum dynamics of partial symmetry breaking for ultracold bosons in an optical lattice ring trap, Abstract: A vortex in a Bose-Einstein condensate on a ring undergoes quantum dynamics in response to a quantum quench in terms of partial symmetry breaking from a uniform lattice to a biperiodic one. Neither the current, a macroscopic measure, nor fidelity, a microscopic measure, exhibit critical behavior. Instead, the symmetry memory succeeds in identifying the point at which the system begins to forget its initial symmetry state. We further identify a symmetry gap in the low lying excited states which trends with the symmetry memory.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Gradient Descent using Duality Structures, Abstract: Gradient descent is commonly used to solve optimization problems arising in machine learning, such as training neural networks. Although it seems to be effective for many different neural network training problems, it is unclear if the effectiveness of gradient descent can be explained using existing performance guarantees for the algorithm. We argue that existing analyses of gradient descent rely on assumptions that are too strong to be applicable in the case of multi-layer neural networks. To address this, we propose an algorithm, duality structure gradient descent (DSGD), that is amenable to a non-asymptotic performance analysis, under mild assumptions on the training set and network architecture. The algorithm can be viewed as a form of layer-wise coordinate descent, where at each iteration the algorithm chooses one layer of the network to update. The decision of what layer to update is done in a greedy fashion, based on a rigorous lower bound of the function decrease for each possible choice of layer. In the analysis, we bound the time required to reach approximate stationary points, in both the deterministic and stochastic settings. The convergence is measured in terms of a Finsler geometry that is derived from the network architecture and designed to confirm a Lipschitz-like property on the gradient of the training objective function. Numerical experiments in both the full batch and mini-batch settings suggest that the algorithm is a promising step towards methods for training neural networks that are both rigorous and efficient.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks, Abstract: Large-scale deep neural networks are both memory intensive and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware accelerations of deep neural networks have been extensively investigated in both industry and academia. Specific forms of binary neural networks (BNNs) and stochastic computing based neural networks (SCNNs) are particularly appealing to hardware implementations since they can be implemented almost entirely with binary operations. Despite the obvious advantages in hardware implementation, these approximate computing techniques are questioned by researchers in terms of accuracy and universal applicability. Also it is important to understand the relative pros and cons of SCNNs and BNNs in theory and in actual hardware implementations. In order to address these concerns, in this paper we prove that the "ideal" SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to the stochastic behavior). The proof is conducted by first proving the property for SCNNs from the strong law of large numbers, and then using SCNNs as a "bridge" to prove for BNNs. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity. In other words, they have the same asymptotic energy consumption with the growing of network size. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable for hardware.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
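The claim above that SCNNs can be implemented almost entirely with binary operations rests on primitives like the one sketched below: the product of two values in [0, 1] is approximated by the bitwise AND of independent Bernoulli bitstreams, and the approximation sharpens with stream length by the same law of large numbers invoked in the abstract's proof. Stream lengths and operands are arbitrary.

import numpy as np

def sc_multiply(a, b, n_bits, rng):
    """Approximate a*b (a, b in [0, 1]) with stochastic computing:
    encode each value as a Bernoulli bitstream and AND the two streams."""
    sa = rng.random(n_bits) < a          # unipolar stochastic encoding of a
    sb = rng.random(n_bits) < b
    return np.mean(sa & sb)              # mean of the AND stream estimates a*b

rng = np.random.default_rng(0)
a, b = 0.7, 0.4
for n_bits in (16, 256, 4096, 65536):
    est = sc_multiply(a, b, n_bits, rng)
    print(f"{n_bits:6d} bits: estimate = {est:.4f}  (exact = {a * b:.4f})")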
Title: Riemannian stochastic quasi-Newton algorithm with variance reduction and its convergence analysis, Abstract: Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large, but finite number of loss functions. The present paper proposes a Riemannian stochastic quasi-Newton algorithm with variance reduction (R-SQN-VR). The key challenges of averaging, adding, and subtracting multiple gradients are addressed with notions of retraction and vector transport. We present convergence analyses of R-SQN-VR on both non-convex and retraction-convex functions under retraction and vector transport operators. The proposed algorithm is evaluated on the Karcher mean computation on the symmetric positive-definite manifold and the low-rank matrix completion on the Grassmann manifold. In all cases, the proposed algorithm outperforms the state-of-the-art Riemannian batch and stochastic gradient algorithms.
[ 1, 0, 1, 1, 0, 0 ]
[ "Mathematics", "Computer Science", "Statistics" ]
Title: Strong instability of ground states to a fourth order Schrödinger equation, Abstract: In this note we prove the instability by blow-up of the ground state solutions for a class of fourth order Schr\" odinger equations. This extends the first rigorous results on blowing-up solutions for the biharmonic NLS due to Boulenger and Lenzmann \cite{BoLe} and confirm numerical conjectures from \cite{BaFi, BaFiMa1, BaFiMa, FiIlPa}.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Singular Degenerations of Lie Supergroups of Type $D(2,1;a)$, Abstract: The complex Lie superalgebras $\mathfrak{g}$ of type $D(2,1;a)$ - also denoted by $\mathfrak{osp}(4,2;a) $ - are usually considered for "non-singular" values of the parameter $a$, for which they are simple. In this paper we introduce five suitable integral forms of $\mathfrak{g}$, that are well-defined at singular values too, giving rise to "singular specializations" that are no longer simple: this extends the family of simple objects of type $D(2,1;a)$ in five different ways. The resulting five families coincide for general values of $a$, but are different at "singular" ones: here they provide non-simple Lie superalgebras, whose structure we describe explicitly. We also perform the parallel construction for complex Lie supergroups and describe their singular specializations (or "degenerations") at singular values of $a$. Although one may work with a single complex parameter $a$, in order to stress the overall $\mathfrak{S}_3$-symmetry of the whole situation, we shall work (following Kaplansky) with a two-dimensional parameter $\boldsymbol{\sigma} = (\sigma_1,\sigma_2,\sigma_3)$ ranging in the complex affine plane $\sigma_1 + \sigma_2 + \sigma_3 = 0$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Logarithmic singularities and quantum oscillations in magnetically doped topological insulators, Abstract: We report magnetotransport measurements on magnetically doped (Bi,Sb)$_2$Te$_3$ films grown by molecular beam epitaxy. In Hall-bar devices, a logarithmic dependence on temperature and bias voltage is observed in both the longitudinal and the anomalous Hall resistance. The interplay of disorder and electron-electron interactions is found to explain the observed logarithmic singularities quantitatively and is a dominant scattering mechanism in these samples. Submicron-scale devices exhibit intriguing quantum oscillations at high magnetic fields with a dependence on bias voltage. The observed quantum oscillations can be attributed to bulk and surface transport.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Dixmier traces and residues on weak operator ideals, Abstract: We develop the theory of modulated operators in general principal ideals of compact operators. For Laplacian modulated operators we establish Connes' trace formula in its local Euclidean model and a global version thereof. It expresses Dixmier traces in terms of the vector-valued Wodzicki residue. We demonstrate the applicability of our main results in the context of log-classical pseudo-differential operators, studied by Lesch, and a class of operators naturally appearing in noncommutative geometry.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Investigating early-type galaxy evolution with a multiwavelength approach. II. The UV structure of 11 galaxies with Swift-UVOT, Abstract: GALEX detected a significant fraction of early-type galaxies showing Far-UV bright structures. These features suggest the occurrence of recent star formation episodes. We aim at understanding their evolutionary path[s] and the mechanisms at the origin of their UV-bright structures. We investigate with a multi-lambda approach 11 early-types selected because of their nearly passive stage of evolution in the nuclear region. The paper, second of a series, focuses on the comparison between UV features detected by Swift-UVOT, tracing recent star formation, and the galaxy optical structure mapping older stellar populations. We performed their UV surface photometry and used BVRI photometry from other sources. Our integrated magnitudes have been analyzed and compared with corresponding values in the literature. We characterize the overall galaxy structure by best-fitting the UV and optical luminosity profiles with a single Sersic law. NGC 1366, NGC 1426, NGC 3818, NGC 3962 and NGC 7192 show featureless luminosity profiles. Excluding NGC 1366, which has a clear edge-on disk (n~1-2), and NGC 3818, the remaining three have Sersic indices n~3-4 in the optical and a lower index in the UV. Bright ring/arm-like structures are revealed by the UV images and luminosity profiles of NGC 1415, NGC 1533, NGC 1543, NGC 2685, NGC 2974 and IC 2006. The ring/arm-like structures are different from galaxy to galaxy. Sersic indices of the UV profiles for those galaxies are in the range n=1.5-3, both in S0s and in Es. In our sample optical Sersic indices are usually larger than the UV ones. (M2-V) color profiles are bluer in the ring/arm-like structures than in the galaxy body. The lower values of the Sersic indices in the UV bands with respect to the optical ones, suggesting the presence of a disk, point out that the role of dissipation cannot be neglected in the recent evolutionary phases of these early-type galaxies.
[ 0, 1, 0, 0, 0, 0 ]
[ "Astrophysics" ]
Title: On wrapping the Kalman filter and estimating with the SO(2) group, Abstract: This paper analyzes directional tracking in 2D with the extended Kalman filter on Lie groups (LG-EKF). The study stems from the problem of tracking objects moving in 2D Euclidean space, with the observer measuring direction only, thus rendering the measurement space and object position on the circle---a non-Euclidean geometry. The problem is further complicated if we need to include higher-order dynamics in the state space, like angular velocity, which is a Euclidean variable. The LG-EKF offers a solution to this issue by modeling the state space as a Lie group or combination thereof, e.g., SO(2) or its combinations with Rn. In the present paper, we first derive the LG-EKF on SO(2) and subsequently show that this derivation, based on the mathematically grounded framework of filtering on Lie groups, yields the same result as heuristically wrapping the angular variable within the EKF framework. This result applies only to the SO(2) and SO(2)xRn LG-EKFs and is not intended to be extended to other Lie groups or combinations thereof. In the end, we showcase the SO(2)xR2 LG-EKF, as an example of a constant angular acceleration model, on the problem of speaker tracking with a microphone array, for which real-world experiments are conducted and accuracy is evaluated with ground truth data obtained by a motion capture system.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
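The "heuristic wrapping" that this abstract shows to agree with the SO(2) LG-EKF amounts to a standard constant-angular-velocity Kalman filter whose heading state and measurement innovation are wrapped to (-pi, pi]. Below is a minimal sketch of that wrapped filter only (not of the Lie-group derivation); the motion model, noise levels and simulated data are placeholders.

import numpy as np

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

def wrapped_kf(z_seq, dt=0.1, q=0.05, r=0.1):
    """Constant-angular-velocity Kalman filter with wrapped heading.

    State x = [heading, angular rate]; measurements z are noisy directions.
    """
    x = np.array([z_seq[0], 0.0])
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])                       # linear motion model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]]) # process noise
    H = np.array([[1.0, 0.0]])
    R = np.array([[r]])
    est = []
    for z in z_seq[1:]:
        x = F @ x                          # predict
        x[0] = wrap(x[0])
        P = F @ P @ F.T + Q
        nu = wrap(z - x[0])                # wrapped innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ np.array([nu])).ravel()
        x[0] = wrap(x[0])
        P = (np.eye(2) - K @ H) @ P
        est.append(x.copy())
    return np.array(est)

# simulated target spinning at 0.8 rad/s, measured with +/- 0.1 rad noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.1)
truth = wrap(0.8 * t)
z = wrap(truth + 0.1 * rng.normal(size=t.size))
est = wrapped_kf(z)
print("final heading estimate:", est[-1, 0], " truth:", truth[-1])
print("final rate estimate   :", est[-1, 1], " truth: 0.8")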
Title: Topic Identification for Speech without ASR, Abstract: Modern topic identification (topic ID) systems for speech use automatic speech recognition (ASR) to produce speech transcripts, and perform supervised classification on such ASR outputs. However, under resource-limited conditions, the manually transcribed speech required to develop standard ASR systems can be severely limited or unavailable. In this paper, we investigate alternative unsupervised solutions to obtaining tokenizations of speech in terms of a vocabulary of automatically discovered word-like or phoneme-like units, without depending on the supervised training of ASR systems. Moreover, using automatic phoneme-like tokenizations, we demonstrate that a convolutional neural network based framework for learning spoken document representations provides competitive performance compared to a standard bag-of-words representation, as evidenced by comprehensive topic ID evaluations on both single-label and multi-label classification tasks.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Brownian motion: from kinetics to hydrodynamics, Abstract: Brownian motion has served as a pilot of studies in diffusion and other transport phenomena for over a century. The foundation of Brownian motion, laid by Einstein, has generally been accepted to be far from being complete since the late 1960s, because it fails to take important hydrodynamic effects into account. The hydrodynamic effects yield a time dependence of the diffusion coefficient, and this extends the ordinary hydrodynamics. However, the time profile of the diffusion coefficient across the kinetic and hydrodynamic regions is still absent, which prohibits a complete description of Brownian motion in the entire course of time. Here we close this gap. We manage to separate the diffusion process into two parts: a kinetic process governed by the kinetics based on molecular chaos approximation and a hydrodynamics process described by linear hydrodynamics. We find the analytical solution of vortex backflow of hydrodynamic modes triggered by a tagged particle. Coupling it to the kinetic process we obtain explicit expressions of the velocity autocorrelation function and the time profile of diffusion coefficient. This leads to an accurate account of both kinetic and hydrodynamic effects. Our theory is applicable for fluid and Brownian particles, even of irregular-shaped objects, in very general environments ranging from dilute gases to dense liquids. The analytical results are in excellent agreement with numerical experiments.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile, Abstract: Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality - a property which we call coherence. We first show that ordinary, "vanilla" MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an "extra-gradient" step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. (2018) for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for establishing convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, as well as the CelebA and CIFAR-10 datasets).
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
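The bilinear failure and recovery described above are easy to reproduce on min_x max_y x*y: plain simultaneous gradient descent-ascent spirals away from the saddle point, while an extra-gradient step (the correction closely related to the optimistic update studied in the paper) contracts toward it. A minimal sketch with an arbitrary step size:

import numpy as np

def gda_step(x, y, eta):
    """Plain simultaneous gradient descent-ascent on f(x, y) = x * y."""
    return x - eta * y, y + eta * x

def extragradient_step(x, y, eta):
    """Extra-gradient: take a look-ahead step, then update with its gradient."""
    xh, yh = x - eta * y, y + eta * x         # look-ahead (leader) step
    return x - eta * yh, y + eta * xh         # corrected update

x1 = y1 = x2 = y2 = 1.0
eta = 0.1
for _ in range(2000):
    x1, y1 = gda_step(x1, y1, eta)
    x2, y2 = extragradient_step(x2, y2, eta)

print("GDA distance to saddle point  :", np.hypot(x1, y1))   # grows like (1 + eta^2)^(k/2)
print("extra-gradient distance       :", np.hypot(x2, y2))   # contracts toward 0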
Title: Design of an Autonomous Precision Pollination Robot, Abstract: Precision robotic pollination systems can not only fill the gap of declining natural pollinators, but can also surpass them in efficiency and uniformity, helping to feed the fast-growing human population on Earth. This paper presents the design and ongoing development of an autonomous robot named "BrambleBee", which aims at pollinating bramble plants in a greenhouse environment. Partially inspired by the ecology and behavior of bees, BrambleBee employs state-of-the-art localization and mapping, visual perception, path planning, motion control, and manipulation techniques to create an efficient and robust autonomous pollination system.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: A sufficiently complicated noded Schottky group of rank three, Abstract: The theoretical existence of non-classical Schottky groups is due to Marden. Explicit examples of such groups are only known in rank two, the first one by Yamamoto in 1991 and later by Williams in 2009. In 2006, Maskit and the author provided a theoretical method to obtain examples of non-classical Schottky groups in any rank. The method assumes the knowledge of some algebraic limits of Schottky groups, called sufficiently complicated noded Schottky groups, whose existence was stated. In this paper we provide an explicit construction of a sufficiently complicated noded Schottky group of rank three, and we explain how to construct explicit non-classical Schottky groups of rank three.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Emulation of the space radiation environment for materials testing and radiobiological experiments, Abstract: Radiobiology studies on the effects of galactic cosmic ray radiation utilize mono-energetic single-ion particle beams, where the projected doses for exploration missions are given using highly-acute exposures. This methodology does not replicate the multi-ion species and energies found in the space radiation environment, nor does it reflect the low dose rate found in interplanetary space. In radiation biology studies, as well as in the assessment of health risk to astronaut crews, the differences in the biological effectiveness of different ions is primarily attributed to differences in the linear energy transfer of the radiation spectrum. Here we show that the linear energy transfer spectrum of the intravehicular environment of, e.g., spaceflight vehicles can be accurately generated experimentally by perturbing the intrinsic properties of hydrogen-rich crystalline materials in order to instigate specific nuclear spallation and fragmentation processes when placed in an accelerated mono-energetic heavy ion beam. Modifications to the internal geometry and chemical composition of the materials allow for the shaping of the emerging field to specific spectra that closely resemble the intravehicular field. Our approach can also be utilized to emulate the external galactic cosmic ray field, the planetary surface spectrum (e.g., Mars), and the local radiation environment of orbiting satellites. This provides the first instance of a true ground-based analog for characterizing the effects of space radiation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: Convex Relaxations for Pose Graph Optimization with Outliers, Abstract: Pose Graph Optimization involves the estimation of a set of poses from pairwise measurements and provides a formalization for many problems arising in mobile robotics and geometric computer vision. In this paper, we consider the case in which a subset of the measurements fed to pose graph optimization is spurious. Our first contribution is to develop robust estimators that can cope with heavy-tailed measurement noise, hence increasing robustness to the presence of outliers. Since the resulting estimators require solving nonconvex optimization problems, we further develop convex relaxations that approximately solve those problems via semidefinite programming. We then provide conditions under which the proposed relaxations are exact. Contrarily to existing approaches, our convex relaxations do not rely on the availability of an initial guess for the unknown poses, hence they are more suitable for setups in which such a guess is not available (e.g., multi-robot localization, recovery after localization failure). We tested the proposed techniques in extensive simulations, and we show that some of the proposed relaxations are indeed tight (i.e., they solve the original nonconvex problem exactly) and ensure accurate estimation in the face of a large number of outliers.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Linking Generative Adversarial Learning and Binary Classification, Abstract: In this note, we point out a basic link between generative adversarial (GA) training and binary classification -- any powerful discriminator essentially computes an (f-)divergence between real and generated samples. The result, repeatedly re-derived in decision theory, has implications for GA Networks (GANs), providing an alternative perspective on training f-GANs by designing the discriminator loss function.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
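The link stated in this note can be demonstrated numerically: train a binary classifier between samples from p and q, plug its probabilities into the GAN value function, and use the fact that the optimal value equals 2*JS(p, q) - 2*log 2 to read off a divergence estimate. The sketch below does this with scikit-learn logistic regression on two Gaussians; it is an illustrative plug-in estimator, not the note's exact construction.

import numpy as np
from sklearn.linear_model import LogisticRegression

def js_from_classifier(p_samples, q_samples):
    """Estimate the Jensen-Shannon divergence between two sample sets
    from a logistic-regression 'discriminator' via the GAN value function."""
    X = np.vstack([p_samples, q_samples])
    y = np.concatenate([np.ones(len(p_samples)), np.zeros(len(q_samples))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    d_p = np.clip(clf.predict_proba(p_samples)[:, 1], 1e-12, 1 - 1e-12)
    d_q = np.clip(clf.predict_proba(q_samples)[:, 1], 1e-12, 1 - 1e-12)
    value = np.mean(np.log(d_p)) + np.mean(np.log(1.0 - d_q))   # GAN value function
    return 0.5 * (value + 2.0 * np.log(2.0))                    # = JS at the optimum

rng = np.random.default_rng(0)
p = rng.normal(loc=0.0, size=(5000, 1))
for shift in (0.0, 1.0, 3.0):
    q = rng.normal(loc=shift, size=(5000, 1))
    print(f"mean shift {shift:.1f}:  JS estimate = {js_from_classifier(p, q):.3f}")
# the estimate rises from ~0 toward log 2 ~ 0.693 as the distributions separate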
Title: Information sensitivity functions to assess parameter information gain and identifiability of dynamical systems, Abstract: A new class of functions, called the `Information sensitivity functions' (ISFs), which quantify the information gain about the parameters through the measurements/observables of a dynamical system are presented. These functions can be easily computed through classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While marginal information gain is quantified by decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters these information gains are also presented as marginal posterior variances, and, to assess the effect of correlations, as conditional variances when other parameters are given. The easy to interpret ISFs can be used to a) identify time-intervals or regions in dynamical system behaviour where information about the parameters is concentrated; b) assess the effect of measurement noise on the information gain for the parameters; c) assess whether sufficient information in an experimental protocol (input, measurements, and their frequency) is available to identify the parameters; d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and e) assess identifiability problems for particular sets of parameters.
[ 0, 0, 0, 1, 0, 0 ]
[ "Mathematics", "Statistics", "Physics" ]
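In the linear-Gaussian setting that classical sensitivity analysis assumes, the marginal information gains described above reduce to ratios of prior to posterior variances computed from the stacked sensitivity matrix. The sketch below is an assumption-laden toy version of that computation (Gaussian prior, additive Gaussian noise), not the paper's exact ISF definition.

import numpy as np

def marginal_information_gain(S, noise_var, prior_cov):
    """Approximate information gain per parameter from a sensitivity matrix.

    S         : (n_obs, n_params) sensitivities d(observable)/d(parameter)
    noise_var : variance of the additive Gaussian measurement noise
    prior_cov : (n_params, n_params) Gaussian prior covariance
    Returns the decrease in differential entropy of each marginal,
    0.5 * log(prior variance / posterior variance).
    """
    fisher = S.T @ S / noise_var
    post_cov = np.linalg.inv(fisher + np.linalg.inv(prior_cov))
    return 0.5 * np.log(np.diag(prior_cov) / np.diag(post_cov))

# toy example: y(t) = a * exp(-b * t) observed at a few times, sensitivities at (a, b) = (1, 0.5)
t = np.linspace(0.0, 4.0, 9)
a, b = 1.0, 0.5
S = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])   # [dy/da, dy/db]
gain = marginal_information_gain(S, noise_var=0.01, prior_cov=np.eye(2))
print("information gain (nats) for a, b:", gain)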
Title: Combinatorial cost: a coarse setting, Abstract: The main inspiration for this paper is a paper by Elek where he introduces combinatorial cost for graph sequences. We show that having cost equal to 1 and hyperfiniteness are coarse invariants. We also show `cost-1' for box spaces behaves multiplicatively when taking subgroups. We show that graph sequences coming from Farber sequences of a group have property A if and only if the group is amenable. The same is true for hyperfiniteness. This generalises a theorem by Elek. Furthermore we optimise this result when Farber sequences are replaced by sofic approximations. In doing so we introduce a new concept: property almost-A.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Surface thermophysical properties investigation of the potentially hazardous asteroid (99942) Apophis, Abstract: In this work, we investigate the surface thermophysical properties (thermal emissivity, thermal inertia, roughness fraction and geometric albedo) of asteroid (99942) Apophis, using the currently available thermal infrared observations of CanariCam on the Gran Telescopio CANARIAS and far-infrared data by PACS of Herschel, on the basis of the Advanced thermophysical model. We show that the thermal emissivity of Apophis should be wavelength dependent from $8.70~\mu m$ to $160~\mu m$, and that the maximum emissivity may arise around $20~\mu m$, similar to that of Vesta. Moreover, we further derive the thermal inertia, roughness fraction, geometric albedo and effective diameter of Apophis within a possible 1$\sigma$ scale of $\Gamma=100^{+100}_{-52}\rm~Jm^{-2}s^{-0.5}K^{-1}$, $f_{\rm r}=0.78\sim1.0$, $p_{\rm v}=0.286^{+0.030}_{-0.026}$, $D_{\rm eff}=378^{+19}_{-25}\rm~m$, and a 3$\sigma$ scale of $\Gamma=100^{+240}_{-100}\rm~Jm^{-2}s^{-0.5}K^{-1}$, $f_{\rm r}=0.2\sim1.0$, $p_{\rm v}=0.286^{+0.039}_{-0.029}$, $D_{\rm eff}=378^{+27}_{-29}\rm~m$. The derived low thermal inertia but high roughness fraction may imply that Apophis could have regolith on its surface, and that less regolith migration has occurred in comparison with asteroid Itokawa. Our results show that small-size asteroids could also have fine regolith on their surface, and further suggest that Apophis may have been delivered from the Main Belt by the Yarkovsky effect.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Whole planet coupling between climate, mantle, and core: Implications for the evolution of rocky planets, Abstract: Earth's climate, mantle, and core interact over geologic timescales. Climate influences whether plate tectonics can take place on a planet, with cool climates being favorable for plate tectonics because they enhance stresses in the lithosphere, suppress plate boundary annealing, and promote hydration and weakening of the lithosphere. Plate tectonics plays a vital role in the long-term carbon cycle, which helps to maintain a temperate climate. Plate tectonics provides long-term cooling of the core, which is vital for generating a magnetic field, and the magnetic field is capable of shielding atmospheric volatiles from the solar wind. Coupling between climate, mantle, and core can potentially explain the divergent evolution of Earth and Venus. As Venus lies too close to the sun for liquid water to exist, there is no long-term carbon cycle and thus an extremely hot climate. Therefore plate tectonics cannot operate and a long-lived core dynamo cannot be sustained due to insufficient core cooling. On planets within the habitable zone where liquid water is possible, a wide range of evolutionary scenarios can take place depending on initial atmospheric composition, bulk volatile content, or the timing of when plate tectonics initiates, among other factors. Many of these evolutionary trajectories would render the planet uninhabitable. However, there is still significant uncertainty over the nature of the coupling between climate, mantle, and core. Future work is needed to constrain potential evolutionary scenarios and the likelihood of an Earth-like evolution.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation, Abstract: Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Modelling Luminous-Blue-Variable Isolation, Abstract: Observations show that luminous blue variables (LBVs) are far more dispersed than massive O-type stars, and Smith & Tombleson suggested that these large separations are inconsistent with a single-star evolution model of LBVs. Instead, they suggested that the large distances are most consistent with binary evolution scenarios. To test these suggestions, we modelled young stellar clusters and their passive dissolution, and we find that, indeed, the standard single-star evolution model is mostly inconsistent with the observed LBV environments. If LBVs are single stars, then the lifetimes inferred from their luminosity and mass are far too short to be consistent with their extreme isolation. This implies that there is either an inconsistency in the luminosity-to-mass mapping or the mass-to-age mapping. In this paper, we explore binary solutions that modify the mass-to-age mapping and are consistent with the isolation of LBVs. For the binary scenarios, our crude models suggest that LBVs are rejuvenated stars. They are either the result of mergers or they are mass gainers and received a kick when the primary star exploded. In the merger scenario, if the primary is about 19 solar masses, then the binary has enough time to wander far afield, merge and form a rejuvenated star. In the mass-gainer and kick scenario, we find that LBV isolation is consistent with a wide range of kick velocities, anywhere from 0 to ~ 105 km/s. In either scenario, binarity seems to play a major role in the isolation of LBVs.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Service adoption spreading in online social networks, Abstract: The collective behaviour of people adopting an innovation, product or online service is commonly interpreted as a spreading phenomenon throughout the fabric of society. This process is arguably driven by social influence, social learning and by external effects like media. Observations of such processes date back to the seminal studies by Rogers and Bass, and their mathematical modelling has taken two directions: One paradigm, called simple contagion, identifies adoption spreading with an epidemic process. The other one, named complex contagion, is concerned with behavioural thresholds and successfully explains the emergence of large cascades of adoption resulting in a rapid spreading often seen in empirical data. The observation of real world adoption processes has become easier lately due to the availability of large digital social network and behavioural datasets. This has allowed simultaneous study of network structures and dynamics of online service adoption, shedding light on the mechanisms and external effects that influence the temporal evolution of behavioural or innovation adoption. These advancements have induced the development of more realistic models of social spreading phenomena, which in turn have provided remarkably good predictions of various empirical adoption processes. In this chapter we review recent data-driven studies addressing real-world service adoption processes. Our studies provide the first detailed empirical evidence of a heterogeneous threshold distribution in adoption. We also describe the modelling of such phenomena with formal methods and data-driven simulations. Our objective is to understand the effects of identified social mechanisms on service adoption spreading, and to provide potential new directions and open questions for future research.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Compound-Specific Chlorine Isotope Analysis of Organochlorines Using Gas Chromatography-Double Focus Magnetic-Sector High Resolution Mass Spectrometry, Abstract: Compound-specific chlorine isotope analysis (CSIA-Cl) is a practicable and high-performance approach for quantification of transformation processes and pollution source apportionment of chlorinated organic compounds. This study developed a CSIA-Cl method for perchlorethylene (PCE) and trichloroethylene (TCE) using gas chromatography-double focus magnetic-sector high resolution mass spectrometry (GC-DFS-HRMS) with a bracketing injection mode. The highest precision achieved for PCE was 0.021% (standard deviation of isotope ratios), and that for TCE was 0.025%. When one standard was used as the external isotopic standard for another of the same analyte, the lowest standard deviations of relative isotope-ratio variations ({\delta}37Cl') between the two corresponding standards were 0.064% and 0.080% for PCE and TCE, respectively. As a result, the critical {\delta}37Cl' values for differentiating two isotope ratios are 0.26% and 0.32% for PCE and TCE, respectively, which are comparable with those in some reported studies using GC-quadrupole MS (GC-qMS). The lower limit of detection for CSIA-Cl of PCE was 0.1 ug/mL (0.1 ng on column), and that for TCE was determined to be 1.0 ug/mL (1.0 ng on column). Two isotope ratio calculation schemes, i.e., a scheme using complete molecular-ion isotopologues and another one using a pair of neighboring isotopologues, were evaluated in terms of precision and accuracy. The complete-isotopologue scheme showed evidently higher precision and was deduced to better reflect trueness in comparison with the isotopologue-pair scheme. The CSIA-Cl method developed in this study will be conducive to future studies concerning transformation processes and source apportionment of PCE and TCE, and light the way for method development of CSIA-Cl for more organochlorines.
[ 0, 1, 0, 0, 0, 0 ]
[ "Chemistry" ]
Title: Precise Recovery of Latent Vectors from Generative Adversarial Networks, Abstract: Generative adversarial networks (GANs) transform latent vectors into visually plausible images. It is generally thought that the original GAN formulation gives no out-of-the-box method to reverse the mapping, projecting images back into latent space. We introduce a simple, gradient-based technique called stochastic clipping. In experiments, for images generated by the GAN, we precisely recover their latent vector pre-images 100% of the time. Additional experiments demonstrate that this method is robust to noise. Finally, we show that even for unseen images, our method appears to recover unique encodings.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: The dependence of protostar formation on the geometry and strength of the initial magnetic field, Abstract: We report results from twelve simulations of the collapse of a molecular cloud core to form one or more protostars, comprising three field strengths (mass-to-flux ratios, {\mu}, of 5, 10, and 20) and four field geometries (with values of the angle between the field and rotation axes, {\theta}, of 0°, 20°, 45°, and 90°), using a smoothed particle magnetohydrodynamics method. We find that the values of both parameters have a strong effect on the resultant protostellar system and outflows. This ranges from the formation of binary systems when {\mu} = 20 to strikingly differing outflow structures for differing values of {\theta}, in particular highly suppressed outflows when {\theta} = 90°. Misaligned magnetic fields can also produce warped pseudo-discs where the outer regions align perpendicular to the magnetic field but the innermost region re-orientates to be perpendicular to the rotation axis. We follow the collapse to sizes comparable to those of first cores and find that none of the outflow speeds exceed 8 km s$^{-1}$. These results may place constraints on both observed protostellar outflows, and also on which molecular cloud cores may eventually form either single stars and binaries: a sufficiently weak magnetic field may allow for disc fragmentation, whilst conversely the greater angular momentum transport of a strong field may inhibit disc fragmentation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Submolecular-resolution non-invasive imaging of interfacial water with atomic force microscopy, Abstract: Scanning probe microscopy (SPM) has been extensively applied to probe interfacial water in many interdisciplinary fields but the disturbance of the probes on the hydrogen-bonding structure of water has remained an intractable problem. Here we report submolecular-resolution imaging of the water clusters on a NaCl(001) surface within the nearly non-invasive region by qPlus-based noncontact atomic force microscopy. Comparison with theoretical simulations reveals that the key lies in probing the weak high-order electrostatic force between the quadrupole-like CO-terminated tip and the polar water molecules at large tip-water distances. This interaction allows the imaging and structural determination of the weakly bonded water clusters and even of their metastable states without inducing any disturbance. This work may open up new possibilities for studying the intrinsic structure and electrostatics of ice or water on bulk insulating surfaces, ion hydration and biological water with atomic precision.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: A Novel Stochastic Stratified Average Gradient Method: Convergence Rate and Its Complexity, Abstract: SGD (Stochastic Gradient Descent) is a popular algorithm for large scale optimization problems due to its low iterative cost. However, SGD cannot achieve the linear convergence rate of FGD (Full Gradient Descent) because of the inherent gradient variance. To attack the problem, mini-batch SGD was proposed to get a trade-off in terms of convergence rate and iteration cost. In this paper, a general CVI (Convergence-Variance Inequality) equation is presented to state formally the interaction of convergence rate and gradient variance. Then a novel algorithm named SSAG (Stochastic Stratified Average Gradient) is introduced to reduce gradient variance based on two techniques, stratified sampling and averaging over iterations, which is a key idea in SAG (Stochastic Average Gradient). Furthermore, SSAG can achieve a linear convergence rate of $\mathcal {O}((1-\frac{\mu}{8CL})^k)$ at smaller storage and iterative costs, where $C\geq 2$ is the category number of training data. This convergence rate depends mainly on the variance between classes, but not on the variance within the classes. In the case of $C\ll N$ ($N$ is the training data size), SSAG's convergence rate is much better than SAG's convergence rate of $\mathcal {O}((1-\frac{\mu}{8NL})^k)$. Our experimental results show that SSAG outperforms SAG and many other algorithms.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Experimentation with MANETs of Smartphones, Abstract: Mobile AdHoc NETworks (MANETs) have been identified as a key emerging technology for scenarios in which IEEE 802.11 or cellular communications are either infeasible, inefficient, or cost-ineffective. Smartphones are the most adequate network nodes in many of these scenarios, but it is not straightforward to build a network with them. We extensively survey existing possibilities to build applications on top of ad-hoc smartphone networks for experimentation purposes, and introduce a taxonomy to classify them. We present AdHocDroid, an Android package that creates an IP-level MANET of (rooted) Android smartphones, and make it publicly available to the community. AdHocDroid supports standard TCP/IP applications, providing a real smartphone IEEE 802.11 MANET and the capability to easily change the routing protocol. We tested our framework on several smartphones and a laptop. We validate the MANET by running off-the-shelf applications and report on an experimental performance evaluation, including network metrics and battery discharge rate.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Interaction between cluster synchronization and epidemic spread in community networks, Abstract: In the real world, there is a significant relation between human behaviors and epidemic spread. In particular, the reactions of individuals in different communities to epidemics may differ, which leads to cluster synchronization of human behaviors. Therefore, a mathematical model that embeds community structures, behavioral evolution and epidemic transmission is constructed to study the interaction between cluster synchronization and epidemic spread. The epidemic threshold of the model is obtained by using the Gersgorin lemma and dynamical system theory. By applying the Lyapunov stability method, the stability analysis of cluster synchronization and spreading dynamics is presented. Then, some numerical simulations are performed to illustrate and complement our theoretical results. As far as we know, this work is the first one to address the interplay between cluster synchronization and epidemic transmission in community networks, so it may deepen the understanding of the impact of cluster behaviors on infectious disease dynamics.
[ 0, 1, 0, 0, 0, 0 ]
[ "Mathematics", "Quantitative Biology" ]
Title: Big Data Fusion to Estimate Fuel Consumption: A Case Study of Riyadh, Abstract: Falling oil revenues and rapid urbanization are putting a strain on the budgets of oil producing nations which often subsidize domestic fuel consumption. A direct way to decrease the impact of subsidies is to reduce fuel consumption by reducing congestion and car trips. While fuel consumption models have started to incorporate data sources from ubiquitous sensing devices, the opportunity is to develop comprehensive models at urban scale leveraging sources such as Global Positioning System (GPS) data and Call Detail Records. We combine these big data sets in a novel method to model fuel consumption within a city and estimate how it may change due to different scenarios. To do so we calibrate a fuel consumption model for use on any car fleet fuel economy distribution and apply it in Riyadh, Saudi Arabia. The model proposed, based on speed profiles, is then used to test the effects on fuel consumption of reducing flow, both randomly and by targeting the most fuel inefficient trips in the city. The estimates considerably improve baseline methods based on average speeds, showing the benefits of the information added by the GPS data fusion. The presented method can be adapted to also measure emissions. The results constitute a clear application of data analysis tools to help decision makers compare policies aimed at achieving economic and environmental goals.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics", "Quantitative Finance" ]
Title: Rigid realizations of modular forms in Calabi--Yau threefolds, Abstract: We construct examples of modular rigid Calabi--Yau threefolds, which give a realization of some new weight 4 cusp forms.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: The agreement distance of rooted phylogenetic networks, Abstract: The minimal number of rooted subtree prune and regraft (rSPR) operations needed to transform one phylogenetic tree into another one induces a metric on phylogenetic trees - the rSPR-distance. The rSPR-distance between two phylogenetic trees $T$ and $T'$ can be characterised by a maximum agreement forest; a forest with a minimal number of components that covers both $T$ and $T'$. The rSPR operation has recently been generalised to phylogenetic networks with, among others, the subnetwork prune and regraft (SNPR) operation. Here, we introduce maximum agreement graphs as explicit representations of the differences between two phylogenetic networks, thus generalising maximum agreement forests. We show that maximum agreement graphs induce a metric on phylogenetic networks - the agreement distance. While this metric does not characterise the distances induced by SNPR and other generalisations of rSPR, we prove that it still bounds these distances with constant factors.
[ 0, 0, 0, 0, 1, 0 ]
[ "Mathematics", "Quantitative Biology" ]
Title: Optimization of the Waiting Time for H-R Coordination, Abstract: An analytical model of Human-Robot (H-R) coordination is presented for a Human-Robot system executing a collaborative task in which a high level of synchronization among the agents is desired. The influencing parameters and decision variables that affect the waiting time of the collaborating agents were analyzed. The performance of the model was evaluated based on the costs of the waiting times of each of the agents at the pre-defined spatial point of handover. The model was tested for two cases of dynamic H-R coordination scenarios. Results indicate that this analytical model can be used as a tool for designing an H-R system that optimizes the agent waiting time thereby increasing the joint-efficiency of the system and making coordination fluent and natural.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Extrasolar Planets and Their Host Stars, Abstract: In order to understand the exoplanet, you need to understand its parent star. Astrophysical parameters of extrasolar planets are directly and indirectly dependent on the properties of their respective host stars. These host stars are very frequently the only visible component in the systems. This book describes our work in the field of characterization of exoplanet host stars using interferometry to determine angular diameters, trigonometric parallax to determine physical radii, and SED fitting to determine effective temperatures and luminosities. The interferometry data are based on our decade-long survey using the CHARA Array. We describe our methods and give an update on the status of the field, including a table with the astrophysical properties of all stars with high-precision interferometric diameters out to 150 pc (status Nov 2016). In addition, we elaborate in more detail on a number of particularly significant or important exoplanet systems, particularly with respect to (1) insights gained from transiting exoplanets, (2) the determination of system habitable zones, and (3) the discrepancy between directly determined and model-based stellar radii. Finally, we discuss current and future work including the calibration of semi-empirical methods based on interferometric data.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Determining rough first order perturbations of the polyharmonic operator, Abstract: We show that the knowledge of the Dirichlet-to-Neumann map for rough $A$ and $q$ in $(-\Delta)^m +A\cdot D +q$ for $m \geq 2$ for a bounded domain in $\mathbb{R}^n$, $n \geq 3$ determines $A$ and $q$ uniquely. The unique identifiability is proved using properties of products of functions in Sobolev spaces and by constructing complex geometrical optics solutions with sufficient decay of remainder terms.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Data Science: A Three Ring Circus or a Big Tent?, Abstract: This is part of a collection of discussion pieces on David Donoho's paper 50 Years of Data Science, appearing in Volume 26, Issue 4 of the Journal of Computational and Graphical Statistics (2017).
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Computer Science" ]
Title: Collisions in shape memory alloys, Abstract: We present here a model for instantaneous collisions in a solid made of shape memory alloys (SMA) by means of a predictive theory which is based on the introduction not only of macroscopic velocities and temperature, but also of microscopic velocities responsible for the austenite-martensite phase changes. Assuming time discontinuities for velocities, volume fractions and temperature, and applying the principles of thermodynamics for non-smooth evolutions together with constitutive laws typical of SMA, we end up with a system of nonlinearly coupled elliptic equations for which we prove an existence and uniqueness result in the 2D and 3D cases. Finally, we also present numerical results for a 2D SMA solid subject to an external percussion by a hammer stroke.
[ 0, 0, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Big Data Meets HPC Log Analytics: Scalable Approach to Understanding Systems at Extreme Scale, Abstract: Today's high-performance computing (HPC) systems are heavily instrumented, generating logs containing information about abnormal events, such as critical conditions, faults, errors and failures, system resource utilization, and about the resource usage of user applications. These logs, once fully analyzed and correlated, can produce detailed information about the system health, root causes of failures, and analyze an application's interactions with the system, providing valuable insights to domain scientists and system administrators. However, processing HPC logs requires a deep understanding of hardware and software components at multiple layers of the system stack. Moreover, most log data is unstructured and voluminous, making it more difficult for system users and administrators to manually inspect the data. With rapid increases in the scale and complexity of HPC systems, log data processing is becoming a big data challenge. This paper introduces a HPC log data analytics framework that is based on a distributed NoSQL database technology, which provides scalability and high availability, and the Apache Spark framework for rapid in-memory processing of the log data. The analytics framework enables the extraction of a range of information about the system so that system administrators and end users alike can obtain necessary insights for their specific needs. We describe our experience with using this framework to glean insights from the log data about system behavior from the Titan supercomputer at the Oak Ridge National Laboratory.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Multiplex model of mental lexicon reveals explosive learning in humans, Abstract: Word similarities affect language acquisition and use in a multi-relational way barely accounted for in the literature. We propose a multiplex network representation of this mental lexicon of word similarities as a natural framework for investigating large-scale cognitive patterns. Our representation accounts for semantic, taxonomic, and phonological interactions and it identifies a cluster of words which are used with greater frequency, are identified, memorised, and learned more easily, and have more meanings than expected at random. This cluster emerges around age 7 through an explosive transition not reproduced by null models. We relate this explosive emergence to polysemy -- redundancy in word meanings. Results indicate that the word cluster acts as a core for the lexicon, increasing both lexical navigability and robustness to linguistic degradation. Our findings provide quantitative confirmation of existing conjectures about core structure in the mental lexicon and the importance of integrating multi-relational word-word interactions in psycholinguistic frameworks.
[ 1, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Statistics" ]
Title: Various sharp estimates for semi-discrete Riesz transforms of the second order, Abstract: We give several sharp estimates for a class of combinations of second order Riesz transforms on Lie groups ${G}={G}_{x} \times {G}_{y}$ that are multiply connected, composed of a discrete abelian component ${G}_{x}$ and a connected component ${G}_{y}$ endowed with a biinvariant measure. These estimates include new sharp $L^p$ estimates via Choi type constants, depending upon the multipliers of the operator. They also include weak-type, logarithmic and exponential estimates. We give an optimal $L^q \to L^p$ estimate as well. It was shown recently by Arcozzi, Domelevo and Petermichl that such second order Riesz transforms applied to a function may be written as conditional expectation of a simple transformation of a stochastic integral associated with the function. The proofs of our theorems combine this stochastic integral representation with a number of deep estimates for pairs of martingales under strong differential subordination by Choi, Banuelos and Osekowski. When two continuous directions are available, sharpness is shown via the laminates technique. We show that sharpness is preserved in the discrete case using Lax-Richtmyer theorem.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Boundary Layer Problems in the Viscosity-Diffusion Vanishing Limits for the Incompressible MHD Systems, Abstract: In this paper, we study boundary layer problems for the incompressible MHD systems in the presence of physical boundaries with the standard Dirichlet boundary conditions with small generic viscosity and diffusion coefficients. We identify a non-trivial class of initial data for which we can establish the uniform stability of the Prandtl's type boundary layers and prove rigorously that the solutions to the viscous and diffusive incompressible MHD systems converge strongly to the superposition of the solution to the ideal MHD systems with a Prandtl's type boundary layer corrector. One of the main difficulties is to deal with the effect of the difference between viscosity and diffusion coefficients and to control the singular boundary layers resulting from the Dirichlet boundary conditions for both the velocity and the magnetic fields. One key derivation is that, for the class of initial data we identify here, there exist cancelations between the boundary layers of the velocity field and those of the magnetic fields, so that one can use an elaborate energy method to take advantage of this special structure. In addition, in the case of fixed positive viscosity, we also establish the stability of the diffusive boundary layer for the magnetic field and convergence of solutions in the limit of zero magnetic diffusion for general initial data.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Why Adaptively Collected Data Have Negative Bias and How to Correct for It, Abstract: From scientific experiments to online A/B testing, the previously observed data often affects how future experiments are performed, which in turn affects which data will be collected. Such adaptivity introduces complex correlations between the data and the collection procedure. In this paper, we prove that when the data collection procedure satisfies natural conditions, then sample means of the data have systematic \emph{negative} biases. As an example, consider an adaptive clinical trial where additional data points are more likely to be tested for treatments that show initial promise. Our surprising result implies that the average observed treatment effects would underestimate the true effects of each treatment. We quantitatively analyze the magnitude and behavior of this negative bias in a variety of settings. We also propose a novel debiasing algorithm based on selective inference techniques. In experiments, our method can effectively reduce bias and estimation error.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Computer Science" ]
Title: Machine learning of neuroimaging to diagnose cognitive impairment and dementia: a systematic review and comparative analysis, Abstract: INTRODUCTION: Advanced machine learning methods might help to identify dementia risk from neuroimaging, but their accuracy to date is unclear. METHODS: We systematically reviewed the literature, 2006 to late 2016, for machine learning studies differentiating healthy ageing through to dementia of various types, assessing study quality, and comparing accuracy at different disease boundaries. RESULTS: Of 111 relevant studies, most assessed Alzheimer's disease (AD) vs healthy controls, used ADNI data, support vector machines and only T1-weighted sequences. Accuracy was highest for differentiating AD from healthy controls, and poor for differentiating healthy controls vs MCI vs AD, or MCI converters vs non-converters. Accuracy increased using combined data types, but not by data source, sample size or machine learning method. DISCUSSION: Machine learning does not differentiate clinically-relevant disease categories yet. More diverse datasets, combinations of different types of data, and close clinical integration of machine learning would help to advance the field.
[ 0, 0, 0, 0, 1, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Real embedding and equivariant eta forms, Abstract: In 1993, Bismut and Zhang established a mod Z embedding formula for Atiyah-Patodi-Singer reduced eta invariants. In this paper, we explain the hidden mod Z term as a spectral flow and extend this embedding formula to the equivariant family case. In this case, the spectral flow is generalized to the equivariant Chern character of some equivariant Dai-Zhang higher spectral flow.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Hierarchical Block Sparse Neural Networks, Abstract: Sparse deep neural networks (DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to the irregularity in the computation of sparse DNNs, their efficiencies are much lower than those of dense DNNs on regular parallel hardware such as TPUs. This inefficiency leads to poor/no performance benefits for sparse DNNs. The performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of sparse neural networks called HBsNN (Hierarchical Block sparse Neural Networks). For a given sparsity, HBsNN models achieve better runtime performance than unstructured sparse models and better accuracy than highly structured sparse models.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: Are crossing dependencies really scarce?, Abstract: The syntactic structure of a sentence can be modelled as a tree, where vertices correspond to words and edges indicate syntactic dependencies. It has been claimed recurrently that the number of edge crossings in real sentences is small. However, a baseline or null hypothesis has been lacking. Here we quantify the amount of crossings of real sentences and compare it to the predictions of a series of baselines. We conclude that crossings are really scarce in real sentences. Their scarcity is unexpected given the hubiness of the trees. Indeed, real sentences are close to linear trees, where the potential number of crossings is maximized.
[ 1, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Statistics" ]
Title: A Proof of the Herschel-Maxwell Theorem Using the Strong Law of Large Numbers, Abstract: In this article, we use the strong law of large numbers to give a proof of the Herschel-Maxwell theorem, which characterizes the normal distribution as the distribution of the components of a spherically symmetric random vector, provided they are independent. We present shorter proofs under additional moment assumptions, and include a remark, which leads to another strikingly short proof of Maxwell's characterization using the central limit theorem.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Motion and Cooperative Transportation Planning for Multi-Agent Systems under Temporal Logic Formulas, Abstract: This paper presents a hybrid control framework for the motion planning of a multi-agent system including N robotic agents and M objects, under high level goals expressed as Linear Temporal Logic (LTL) formulas. In particular, we design control protocols that allow the transition of the agents as well as the cooperative transportation of the objects by the agents, among predefined regions of interest in the workspace. This allows to abstract the coupled behavior of the agents and the objects as a finite transition system and to design a high-level multi-agent plan that satisfies the agents' and the objects' specifications, given as temporal logic formulas. Simulation results verify the proposed framework.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Realistic finite temperature simulations of magnetic systems using quantum statistics, Abstract: We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe$_3$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: Effective Tensor Sketching via Sparsification, Abstract: In this paper, we investigate effective sketching schemes via sparsification for high dimensional multilinear arrays or tensors. More specifically, we propose a novel tensor sparsification algorithm that retains a subset of the entries of a tensor in a judicious way, and prove that it can attain a given level of approximation accuracy in terms of tensor spectral norm with a much smaller sample complexity when compared with existing approaches. In particular, we show that for a $k$th order $d\times\cdots\times d$ cubic tensor of {\it stable rank} $r_s$, the sample size requirement for achieving a relative error $\varepsilon$ is, up to a logarithmic factor, of the order $r_s^{1/2} d^{k/2} /\varepsilon$ when $\varepsilon$ is relatively large, and of the order $r_s d /\varepsilon^2$, which is essentially optimal, when $\varepsilon$ is sufficiently small. It is especially noteworthy that the sample size requirement for achieving a high accuracy is of an order independent of $k$. To further demonstrate the utility of our techniques, we also study how higher order singular value decomposition (HOSVD) of large tensors can be efficiently approximated via sparsification.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Synchronization of spin torque oscillators through spin Hall magnetoresistance, Abstract: Spin torque oscillators placed onto a nonmagnetic heavy metal show synchronized auto-oscillations due to the coupling originating from spin Hall magnetoresistance effect. Here, we study a system having two spin torque oscillators under the effect of the spin Hall torque, and show that switching the external current direction enables us to control the phase difference of the synchronization between in-phase and antiphase.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Long quasi-polycyclic $t-$CIS codes, Abstract: We study complementary information set codes of length $tn$ and dimension $n$ of order $t$ (called $t$-CIS codes for short). Quasi-cyclic and quasi-twisted $t$-CIS codes are enumerated by using their concatenated structure. Asymptotic existence results are derived for one-generator codes of co-index $n$, by Artin's conjecture for the quasi-cyclic case and a special case for the quasi-twisted case. This shows that there are infinite families of long QC and QT $t$-CIS codes with relative distance satisfying a modified Varshamov-Gilbert bound for rate $1/t$ codes. Similar results are derived for the new and more general class of quasi-polycyclic codes introduced recently by Berger and Amrani.
[ 1, 0, 0, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Fairness in Criminal Justice Risk Assessments: The State of the Art, Abstract: Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this paper, we seek to clarify the tradeoffs between different kinds of fairness and between fairness and accuracy. Methods: We draw on the existing literatures in criminology, computer science and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments. We also provide an empirical illustration using data from arraignments. Results: We show that there are at least six kinds of fairness, some of which are incompatible with one another and with accuracy. Conclusions: Except in trivial cases, it is impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness. In practice, a major complication is different base rates across different legally protected groups. There is a need to consider challenging tradeoffs.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Superheating in coated niobium, Abstract: Using muon spin rotation it is shown that the field of first flux penetration H_entry in Nb is enhanced by about 30% if coated with an overlayer of Nb_3Sn or MgB_2. This is consistent with an increase from the lower critical magnetic field H_c1 up to the superheating field H_sh of the Nb substrate. In the experiments presented here coatings of Nb_3Sn and MgB_2 with a thickness between 50 and 2000nm have been tested. H_entry does not depend on material or thickness. This suggests that the energy barrier at the boundary between the two materials prevents flux entry up to H_sh of the substrate. A mechanism consistent with these findings is that the proximity effect recovers the stability of the energy barrier for flux penetration, which is suppressed by defects for uncoated samples. Additionally, a low temperature baked Nb sample has been tested. Here a 6% increase of H_entry was found, also pushing H_entry beyond H_c1.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Contraction Analysis of Nonlinear DAE Systems, Abstract: This paper studies the contraction properties of nonlinear differential-algebraic equation (DAE) systems. Specifically we develop scalable techniques for constructing the attraction regions associated with a particular stable equilibrium, by establishing the relation between the contraction rates of the original systems and the corresponding virtual extended systems. We show that for a contracting DAE system, the reduced system always contracts faster than the extended ones; furthermore, there always exists an extension with contraction rate arbitrarily close to that of the original system. The proposed construction technique is illustrated with a power system example in the context of transient stability assessment.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: On the Reconstruction Risk of Convolutional Sparse Dictionary Learning, Abstract: Sparse dictionary learning (SDL) has become a popular method for adaptively identifying parsimonious representations of a dataset, a fundamental problem in machine learning and signal processing. While most work on SDL assumes a training dataset of independent and identically distributed samples, a variant known as convolutional sparse dictionary learning (CSDL) relaxes this assumption, allowing more general sequential data sources, such as time series or other dependent data. Although recent work has explored the statistical properties of classical SDL, the statistical properties of CSDL remain unstudied. This paper begins to study this by identifying the minimax convergence rate of CSDL in terms of reconstruction risk, by both upper bounding the risk of an established CSDL estimator and proving a matching information-theoretic lower bound. Our results indicate that consistency in reconstruction risk is possible precisely in the `ultra-sparse' setting, in which the sparsity (i.e., the number of feature occurrences) is in $o(N)$ in terms of the length N of the training sequence. Notably, our results make very weak assumptions, allowing arbitrary dictionaries and dependent measurement noise. Finally, we verify our theoretical results with numerical experiments on synthetic data.
[ 1, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Mathematics" ]
Title: Identities involving Bernoulli and Euler polynomials, Abstract: We present various identities involving the classical Bernoulli and Euler polynomials. Among others, we prove that $$ \sum_{k=0}^{[n/4]}(-1)^k {n\choose 4k}\frac{B_{n-4k}(z) }{2^{6k}} =\frac{1}{2^{n+1}}\sum_{k=0}^{n} (-1)^k \frac{1+i^k}{(1+i)^k} {n\choose k}{B_{n-k}(2z)} $$ and $$ \sum_{k=1}^{n} 2^{2k-1} {2n\choose 2k-1} B_{2k-1}(z) = \sum_{k=1}^n k \, 2^{2k} {2n\choose 2k} E_{2k-1}(z). $$ Applications of our results lead to formulas for Bernoulli and Euler numbers, like, for instance, $$ n E_{n-1} =\sum_{k=1}^{[n/2]} \frac{2^{2k}-1}{k} (2^{2k}-2^n){n\choose 2k-1} B_{2k}B_{n-2k}. $$
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Morpheo: Traceable Machine Learning on Hidden data, Abstract: Morpheo is a transparent and secure machine learning platform collecting and analysing large datasets. It aims at building state-of-the-art prediction models in various fields where data are sensitive. Indeed, it offers strong privacy of data and algorithms, by preventing anyone apart from the owner and the chosen algorithms from reading the data. Computations in Morpheo are orchestrated by a blockchain infrastructure, thus offering total traceability of operations. Morpheo aims at building an attractive economic ecosystem around data prediction by channelling crypto-money from prediction requests to providers of useful data and algorithms. Morpheo is designed to handle multiple data sources in a transfer learning approach in order to mutualize knowledge acquired from large datasets for applications with smaller but similar datasets.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Quantitative Finance" ]
Title: New nanostructures of carbon: Quasifullerenes Cn-q (n=20,42,48,60), Abstract: Based on the third allotropic form of carbon (fullerenes), structures described as non-classical fullerenes have been predicted through theoretical study. We have studied novel allotropic carbon structures with a closed-cage configuration, predicted here for the first time, using DFT at the B3LYP level. Such carbon Cn-q structures (where n=20, 42, 48 and 60) combine sp1 and sp2 hybridization states in the formation of bonds. A comparative analysis of quasi-fullerenes with respect to their isomers of greater stability was also performed. Chemical stability was evaluated with aromaticity criteria applied to the different rings that build the systems. The results show new isomerism of carbon nanostructures with interesting chemical properties such as hardness, chemical potential and HOMO-LUMO gaps. We also studied thermal stability with the Lagrangian molecular dynamics method of Atom-Centered Density Matrix Propagation (ADMP).
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Quasi-steady state reduction for the Michaelis-Menten reaction-diffusion system, Abstract: The Michaelis-Menten mechanism is probably the best known model for an enzyme-catalyzed reaction. For spatially homogeneous concentrations, QSS reductions are well known, but this is not the case when chemical species are allowed to diffuse. We will discuss QSS reductions for both the irreversible and reversible Michaelis-Menten reaction in the latter case, given small initial enzyme concentration and slow diffusion. Our work is based on a heuristic method to obtain an ordinary differential equation which admits reduction by Tikhonov-Fenichel theory. We will not give convergence proofs but we provide numerical results that support the accuracy of the reductions.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Quantitative Biology" ]
Title: All the people around me: face discovery in egocentric photo-streams, Abstract: Given an unconstrained stream of images captured by a wearable photo-camera (2fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in these data. The problem is challenging since images are acquired under real-world conditions; hence the visible appearance of the people in the images undergoes intensive variations. Our proposed pipeline consists of first arranging the photo-stream into events, later, localizing the appearance of multiple people in them, and finally, grouping various appearances of the same person across different events. Experimental results performed on a dataset acquired by wearing a photo-camera during one month demonstrate the effectiveness of the proposed approach for the considered purpose.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Physical insight into the thermodynamic uncertainty relation using Brownian motion in tilted periodic potentials, Abstract: Using Brownian motion in periodic potentials $V(x)$ tilted by a force $f$, we provide physical insight into the thermodynamic uncertainty relation, a recently conjectured principle for statistical errors and irreversible heat dissipation in nonequilibrium steady states. According to the relation, nonequilibrium output generated from dissipative processes necessarily incurs an energetic cost or heat dissipation $q$, and in order to limit the output fluctuation within a relative uncertainty $\epsilon$, at least $2k_BT/\epsilon^2$ of heat must be dissipated. Our model shows that this bound is attained not only at near-equilibrium ($f\ll V'(x)$) but also at far-from-equilibrium $(f\gg V'(x))$, more generally when the dissipated heat is normally distributed. Furthermore, the energetic cost is maximized near the critical force when the barrier separating the potential wells is about to vanish and the fluctuation of Brownian particle is maximized. These findings indicate that the deviation of heat distribution from Gaussianity gives rise to the inequality of the uncertainty relation, further clarifying the meaning of the uncertainty relation. Our derivation of the uncertainty relation also recognizes a new bound of nonequilibrium fluctuations that the variance of dissipated heat ($\sigma_q^2$) increases with its mean ($\mu_q$) and cannot be smaller than $2k_BT\mu_q$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski Functionals, Abstract: Despite the wealth of $Planck$ results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependencies. Aiming at detecting the NGs of the CMB temperature anisotropy $\delta T$, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function $P(\delta T)$, related to ${\mathrm v}_{0}$, the first Minkowski Functional (MF), and the two other MFs, ${\mathrm v}_{1}$ and ${\mathrm v}_{2}$. From their analytical Gaussian predictions we build the discrepancy functions $\Delta_{k}$ ($k=P,0,1,2$) which are applied to an ensemble of $10^{5}$ CMB realization maps of the $\Lambda$CDM model and to the $Planck$ CMB maps. In our analysis we use general Hermite expansions of the $\Delta_{k}$ up to the $12^{th}$ order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the $2^{nd}$ order expansions of Matsubara to arbitrary order in the standard deviation $\sigma_0$ for $P(\delta T)$ and ${\mathrm v}_0$, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the $\Lambda$CDM map sample and the $Planck$ data. We confirm the weak level of non-Gaussianity ($1$-$2$)$\sigma$ of the foreground corrected masked $Planck$ $2015$ maps.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Statistics" ]
Title: Pretending Fair Decisions via Stealthily Biased Sampling, Abstract: Fairness by decision-makers is believed to be auditable by third parties. In this study, we show that this is not always true. We consider the following scenario. Imagine a decision-maker who discloses a subset of his dataset with decisions to make his decisions auditable. If he is corrupt, and he deliberately selects a subset that looks fair even though the overall decision is unfair, can we identify this decision-maker's fraud? We answer this question negatively. We first propose a sampling method that produces a subset whose distribution is biased from the original (to pretend to be fair); however, its differentiation from uniform sampling is difficult. We call such a sampling method stealthily biased sampling, which is formulated as a Wasserstein distance minimization problem and is solved through a minimum-cost flow computation. We prove that stealthily biased sampling minimizes an upper bound of the indistinguishability. We conducted experiments to see that the stealthily biased sampling is, in fact, difficult to detect.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Polaritons in Living Systems: Modifying Energy Landscapes in Photosynthetic Organisms Using a Photonic Structure, Abstract: Photosynthetic organisms rely on a series of self-assembled nanostructures with tuned electronic energy levels in order to transport energy from where it is collected by photon absorption, to reaction centers where the energy is used to drive chemical reactions. In the photosynthetic bacterium Chlorobaculum tepidum (Cba. tepidum), a member of the green sulphur bacteria (GSB) family, light is absorbed by large antenna complexes called chlorosomes. The exciton generated is transferred to a protein baseplate attached to the chlorosome, before traveling through the Fenna-Matthews-Olson (FMO) complex to the reaction center. The energy levels of these systems are generally defined by their chemical structure. Here we show that by placing bacteria within a photonic microcavity, we can access the strong exciton-photon coupling regime between a confined cavity mode and exciton states of the chlorosome, whereby a coherent exchange of energy between the bacteria and cavity mode results in the formation of polariton states. The polaritons have an energy distinct from that of the exciton and photon, and can be tuned in situ via the microcavity length. This results in real-time, non-invasive control over the relative energy levels within the bacteria. This demonstrates the ability to strongly influence living biological systems with photonic structures such as microcavities. We believe that by creating polariton states, which are in this case a superposition of a photon and excitons within living bacteria, we can modify energy transfer pathways and therefore study the importance of energy level alignment on the efficiency of photosynthetic systems.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: A Matrix Expander Chernoff Bound, Abstract: We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to Wigderson and Xiao. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality which improves in some ways the inequality of Sutter, Berta, and Tomamichel, and may be of independent interest, as well as an adaptation of an argument for the scalar case due to Healy. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time.
[ 1, 0, 0, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Learning Neural Representations of Human Cognition across Many fMRI Studies, Abstract: Cognitive neuroscience is enjoying a rapid increase in extensive public brain-imaging datasets. It opens the door to large-scale statistical models. Finding a unified perspective for all available data calls for scalable and automated solutions to an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations/cognitive processes/psychological tasks to brain networks? We cast this challenge in a machine-learning approach to predict conditions from statistical brain maps across different studies. For this, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry cognitive information and can be robustly associated with psychological stimuli. Our multi-dataset classification model achieves the best prediction performance on several large reference datasets compared to models without cognitive-aware low-dimensional representations; it brings a substantial performance boost to the analysis of small datasets, and can be introspected to identify universal template cognitive concepts.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Quantitative Biology" ]
Title: Investigating the Application of Common-Sense Knowledge-Base for Identifying Term Obfuscation in Adversarial Communication, Abstract: Word obfuscation or substitution means replacing one word with another word in a sentence to conceal the textual content or communication. Word obfuscation is used in adversarial communication by terrorists or criminals for conveying their messages without getting red-flagged by security and intelligence agencies intercepting or scanning messages (such as emails and telephone conversations). ConceptNet is a freely available semantic network represented as a directed graph consisting of nodes as concepts and edges as assertions of common sense about these concepts. We present a solution approach exploiting the vast amount of semantic knowledge in ConceptNet for addressing the technically challenging problem of word substitution in adversarial communication. We frame the given problem as a textual reasoning and context inference task and utilize ConceptNet's natural-language-processing tool-kit for determining word substitution. We use ConceptNet to compute the conceptual similarity between any two given terms and define a Mean Average Conceptual Similarity (MACS) metric to identify out-of-context terms. The test-bed to evaluate our proposed approach consists of the Enron email dataset (having over 600000 emails generated by 158 employees of Enron Corporation) and the Brown corpus (totaling about a million words drawn from a wide variety of sources). We implement word substitution techniques used in previous research to generate a test dataset. We conduct a series of experiments consisting of word substitution methods used in the past to evaluate our approach. Experimental results reveal that the proposed approach is effective.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Phase Space Sketching for Crystal Image Analysis based on Synchrosqueezed Transforms, Abstract: Recent developments of imaging techniques enable researchers to visualize materials at the atomic resolution to better understand the microscopic structures of materials. This paper aims at automatic and quantitative characterization of potentially complicated microscopic crystal images, providing feedback to tweak theories and improve synthesis in materials science. As such, an efficient phase-space sketching method is proposed to encode microscopic crystal images in a translation, rotation, illumination, and scale invariant representation, which is also stable with respect to small deformations. Based on the phase-space sketching, we generalize our previous analysis framework for crystal images with simple structures to those with complicated geometry.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]