Columns: text (string, lengths 57 to 2.88k), labels (sequence, length 6)
Title: Kinetically constrained lattice gases: tagged particle diffusion, Abstract: Kinetically constrained lattice gases (KCLG) are interacting particle systems on the integer lattice $\mathbb Z^d$ with hard core exclusion and Kawasaki type dynamics. Their peculiarity is that jumps are allowed only if the configuration satisfies a constraint which asks for enough empty sites in a certain local neighborhood. KCLG have been introduced and extensively studied in the physics literature as models of glassy dynamics. We focus on the most studied class of KCLG, the Kob-Andersen (KA) models. We analyze the behavior of a tracer (i.e. a tagged particle) at equilibrium. We prove that for all dimensions $d\geq 2$ and for any equilibrium particle density, under diffusive rescaling the motion of the tracer converges to a $d$-dimensional Brownian motion with non-degenerate diffusion matrix. We therefore disprove the occurrence of a diffusive/non-diffusive transition which had been conjectured in the physics literature. Our technique is flexible and can be extended to analyze the tracer behavior for other choices of constraints.
[ 0, 1, 1, 0, 0, 0 ]
Title: An Atomistic Fingerprint Algorithm for Learning Ab Initio Molecular Force Fields, Abstract: Molecular fingerprints, i.e. feature vectors describing atomistic neighborhood configurations, are an important abstraction and a key ingredient for data-driven modeling of potential energy surfaces and interatomic forces. In this paper, we present the Density-Encoded Canonically Aligned Fingerprint (DECAF) algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior to PCA-based methods, especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the `distance' between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which only require discrete sampling at a small number of grid points. We also experiment on the choice of weight functions for constructing the density fields, and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems.
[ 1, 1, 0, 0, 0, 0 ]
Title: Generalized 4 $\times$ 4 Matrix Formalism for Light Propagation in Anisotropic Stratified Media: Study of Surface Phonon Polaritons in Polar Dielectric Heterostructures, Abstract: We present a generalized 4 $\times$ 4 matrix formalism for the description of light propagation in birefringent stratified media. In contrast to previous work, our algorithm is capable of treating arbitrarily anisotropic or isotropic, absorbing or non-absorbing materials and is free of discontinuous solutions. We calculate the reflection and transmission coefficients and derive equations for the electric field distribution for any number of layers. The algorithm is easily comprehensible and can be straightforwardly implemented in a computer program. To demonstrate the capabilities of the approach, we calculate the reflectivities, electric field distributions, and dispersion curves for surface phonon polaritons excited in the Otto geometry for selected model systems, where we observe several distinct phenomena ranging from critical coupling to mode splitting, and surface phonon polaritons in hyperbolic media.
[ 0, 1, 0, 0, 0, 0 ]
Title: Localization of hidden Chua attractors by the describing function method, Abstract: In this paper the Chua circuit with five linear elements and saturation non-linearity is studied. Numerical localization of a self-excited attractor in the Chua circuit model can be done by computing a trajectory with initial data in a vicinity of an unstable equilibrium. For a hidden attractor, the basin of attraction does not overlap with a small vicinity of the equilibria, so it is difficult to find the corresponding initial data for localization. This survey is devoted to the application of the describing function method to the localization of hidden periodic and chaotic attractors in the Chua model. We use a rigorous justification of the describing function method, based on the method of small parameter, to obtain the initial data for the visualization of the hidden attractors. A new configuration of hidden Chua attractors is presented.
[ 0, 1, 0, 0, 0, 0 ]
Title: A New Unbiased and Efficient Class of LSH-Based Samplers and Estimators for Partition Function Computation in Log-Linear Models, Abstract: Log-linear models are arguably the most successful class of graphical models for large-scale applications because of their simplicity and tractability. Learning and inference with these models require calculating the partition function, which is a major bottleneck and intractable for large state spaces. Importance Sampling (IS) and MCMC-based approaches are attractive alternatives. However, the condition of having a "good" proposal distribution is often not satisfied in practice. In this paper, we add a new dimension to efficient estimation via sampling. We propose a new sampling scheme and an unbiased estimator that estimates the partition function accurately in sub-linear time. Our samples are generated in near-constant time using locality sensitive hashing (LSH), and so are correlated and unnormalized. We demonstrate the effectiveness of our proposed approach by comparing the accuracy and speed of estimating the partition function against other state-of-the-art estimation techniques including IS and the efficient variant of Gumbel-Max sampling. With our efficient sampling scheme, we accurately train real-world language models using only 1-2% of computations.
[ 1, 0, 0, 1, 0, 0 ]
Title: Effects of interaction strength, doping, and frustration on the antiferromagnetic phase of the two-dimensional Hubbard model, Abstract: Recent quantum-gas microscopy of ultracold atoms and scanning tunneling microscopy of the cuprates reveal new detailed information about doped Mott antiferromagnets, which can be compared with calculations. Using cellular dynamical mean-field theory, we map out the antiferromagnetic (AF) phase of the two-dimensional Hubbard model as a function of interaction strength $U$, hole doping $\delta$ and temperature $T$. The Néel phase boundary is non-monotonic as a function of $U$ and $\delta$. Frustration induced by second-neighbor hopping reduces Néel order more effectively at small $U$. The doped AF is stabilized at large $U$ by kinetic energy and at small $U$ by potential energy. The transition between the AF insulator and the doped metallic AF is continuous. At large $U$, we find in-gap states similar to those observed in scanning tunneling microscopy. We predict that, contrary to the Hubbard bands, these states are only slightly spin polarized.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Coupled Lattice Boltzmann Method and Discrete Element Method for Discrete Particle Simulations of Particulate Flows, Abstract: Discrete particle simulations are widely used to study large-scale particulate flows in complex geometries where particle-particle and particle-fluid interactions require an adequate representation but the computational cost has to be kept low. In this work, we present a novel coupling approach for such simulations. A lattice Boltzmann formulation of the generalized Navier-Stokes equations is used to describe the fluid motion. This promises efficient simulations suitable for high performance computing and, since volume displacement effects by the solid phase are considered, our approach is also applicable to non-dilute particulate systems. The discrete element method is combined with an explicit evaluation of interparticle lubrication forces to simulate the motion of individual submerged particles. Drag, pressure and added mass forces determine the momentum transfer by fluid-particle interactions. A stable coupling algorithm is presented and discussed in detail. We demonstrate the validity of our approach for dilute as well as dense systems by predicting the settling velocity of spheres over a broad range of solid volume fractions in good agreement with semi-empirical correlations. Additionally, the accuracy of particle-wall interactions in a viscous fluid is thoroughly tested and established. Our approach can thus be readily used for various particulate systems and can be extended straightforwardly to, e.g., non-spherical particles.
[ 1, 1, 0, 0, 0, 0 ]
Title: The MERger-event Gamma-Ray (MERGR) Telescope, Abstract: We describe the MERger-event Gamma-Ray (MERGR) Telescope intended for deployment by ~2021. MERGR will cover from 20 keV to 2 MeV with a wide field of view (6 sr) using nineteen gamma-ray detectors arranged on a section of a sphere. The telescope will work as a standalone system or as part of a network of sensors, to increase by ~50% the current sky coverage to detect short Gamma-Ray Burst (SGRB) counterparts to neutron-star binary mergers within the ~200 Mpc range of gravitational wave detectors in the early 2020s. In-flight software will provide real-time burst detections with mean localization uncertainties of 6 deg for a photon fluence of 5 ph cm^-2 (the mean fluence of Fermi-GBM SGRBs) and <3 deg for the brightest ~5% of SGRBs to enable rapid multi-wavelength follow-up to identify a host galaxy and its redshift. To minimize cost and time to first light, MERGR is directly derived from demonstrators designed and built at NRL for the DoD Space Test Program (STP). We argue that the deployment of a network that provides all-sky coverage for SGRB detection is of immediate urgency to the multi-messenger astrophysics community.
[ 0, 1, 0, 0, 0, 0 ]
Title: Graham-Witten's conformal invariant for closed four dimensional submanifolds, Abstract: It was proved by Graham and Witten in 1999 that conformal invariants of submanifolds can be obtained via volume renormalization of minimal surfaces in conformally compact Einstein manifolds. The conformal invariant of a submanifold $\Sigma$ is contained in the volume expansion of the minimal surface which is asymptotic to $\Sigma$ when the minimal surface approaches the conformal infinity. In this paper we give the explicit expression of Graham-Witten's conformal invariant for closed four dimensional submanifolds and find critical points of the conformal invariant in the case of Euclidean ambient spaces.
[ 0, 0, 1, 0, 0, 0 ]
Title: Computationally Inferred Genealogical Networks Uncover Long-Term Trends in Assortative Mating, Abstract: Genealogical networks, also known as family trees or population pedigrees, are commonly studied by genealogists wanting to know about their ancestry, but they also provide a valuable resource for disciplines such as digital demography, genetics, and computational social science. These networks are typically constructed by hand through a very time-consuming process, which requires comparing large numbers of historical records manually. We develop computational methods for automatically inferring large-scale genealogical networks. A comparison with human-constructed networks attests to the accuracy of the proposed methods. To demonstrate the applicability of the inferred large-scale genealogical networks, we present a longitudinal analysis on the mating patterns observed in a network. This analysis shows a consistent tendency of people choosing a spouse with a similar socioeconomic status, a phenomenon known as assortative mating. Interestingly, we do not observe this tendency to consistently decrease (nor increase) over our study period of 150 years.
[ 1, 0, 0, 0, 1, 0 ]
Title: Nonlinear stability for the Maxwell--Born--Infeld system on a Schwarzschild background, Abstract: In this paper we prove small data global existence for solutions to the Maxwell--Born--Infeld (MBI) system on a fixed Schwarzschild background. This system has appeared in the context of string theory and can be seen as a nonlinear model problem for the stability of the background metric itself, due to its tensorial and quasilinear nature. The MBI system models nonlinear electromagnetism and does not display birefringence. The key element in our proof lies in the observation that there exists a first-order differential transformation which brings solutions of the spin $\pm 1$ Teukolsky equations, satisfied by the extreme components of the field, into solutions of a "good" equation (the Fackerell--Ipser equation). This strategy was established in [F. Pasqualotto, The spin $\pm 1$ Teukolsky equations and the Maxwell system on Schwarzschild, Preprint 2016, arXiv:1612.07244] for the linear Maxwell field on Schwarzschild. We show that analogous Fackerell--Ipser equations hold for the MBI system on a fixed Schwarzschild background, which are however nonlinearly coupled. To essentially decouple these right-hand sides, we set up a bootstrap argument. We use the $r^p$ method of Dafermos and Rodnianski in [M. Dafermos and I. Rodnianski, A new physical-space approach to decay for the wave equation with applications to black hole spacetimes, in XVIth International Congress on Mathematical Physics, Pavel Exner ed., Prague 2009 pp. 421-433, 2009, arXiv:0910.4957] in order to deduce decay of some null components, and we infer decay for the remaining quantities by integrating the MBI system as transport equations.
[ 0, 0, 1, 0, 0, 0 ]
Title: Numerical simulation of BOD5 dynamics in Igapó I lake, Londrina, Paraná, Brazil: Experimental measurement and mathematical modeling, Abstract: The concentration of biochemical oxygen demand, BOD5, was studied in order to evaluate the water quality of the Igapó I Lake, in Londrina, Paraná State, Brazil. The simulation was conducted by means of the discretization in curvilinear coordinates of the geometry of Igapó I Lake, together with finite difference and finite element methods. The evaluation of the proposed numerical model for water quality was performed by comparing the experimental values of BOD5 with the numerical results. The evaluation of the model showed quantitative results compatible with the actual behavior of Igapó I Lake in relation to the simulated parameter. The qualitative analysis of the numerical simulations provided a better understanding of the dynamics of the BOD5 concentration at Igapó I Lake, showing that such concentrations in the central regions of the lake have values above those allowed by Brazilian law. The results can help to guide choices by public officials, such as: (i) improving the identification mechanisms of pollutant emitters on Igapó I Lake, (ii) contributing to the optimal treatment and recovery of the polluted environment, and (iii) providing a better quality of life for the regulars of the lake as well as for the residents living on the lakeside.
[ 0, 0, 0, 0, 1, 0 ]
Title: Sensitivity Analysis for matched pair analysis of binary data: From worst case to average case analysis, Abstract: In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than the worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects.
[ 0, 0, 0, 1, 0, 0 ]
Title: Closed Sets and Operators thereon: Representations, Computability and Complexity, Abstract: The TTE approach to Computable Analysis is the study of so-called representations (encodings for continuous objects such as reals, functions, and sets) with respect to the notions of computability they induce. A rich variety of such representations has been devised over the past decades, particularly regarding closed subsets of Euclidean space plus subclasses thereof (like compact subsets). In addition, they have been compared and classified with respect to both non-uniform computability of single sets and uniform computability of operators on sets. In this paper we refine these investigations from the point of view of computational complexity. Benefiting from the concept of second-order representations and complexity recently devised by Kawamura & Cook (2012), we determine parameterized complexity bounds for operators such as union, intersection, projection, and more generally function image and inversion. By indicating natural parameters in addition to the output precision, we get a uniform view on results by Ko (1991-2013), Braverman (2004/05) and Zhao & Müller (2008), relating these problems to the P/UP/NP question in discrete complexity theory.
[ 1, 0, 1, 0, 0, 0 ]
Title: Propensity score estimation using classification and regression trees in the presence of missing covariate data, Abstract: Data mining and machine learning techniques such as classification and regression trees (CART) represent a promising alternative to conventional logistic regression for propensity score estimation. Whereas incomplete data preclude the fitting of a logistic regression on all subjects, CART is appealing in part because some implementations allow for incomplete records to be incorporated in the tree fitting and provide propensity score estimates for all subjects. Based on theoretical considerations, we argue that the automatic handling of missing data by CART may however not be appropriate. Using a series of simulation experiments, we examined the performance of different approaches to handling missing covariate data: (i) applying the CART algorithm directly to the (partially) incomplete data, (ii) complete case analysis, and (iii) multiple imputation. Performance was assessed in terms of bias in estimating exposure-outcome effects among the exposed, standard error, mean squared error and coverage. Applying the CART algorithm directly to incomplete data resulted in bias, even in scenarios where data were missing completely at random. Overall, multiple imputation followed by CART resulted in the best performance. Our study showed that automatic handling of missing data in CART can cause serious bias and does not outperform multiple imputation as a means to account for missing data.
[ 0, 0, 0, 1, 0, 0 ]
Title: Deciding some Maltsev conditions in finite idempotent algebras, Abstract: In this paper we investigate the computational complexity of deciding if a given finite algebraic structure satisfies a fixed (strong) Maltsev condition $\Sigma$. Our goal in this paper is to show that $\Sigma$-testing can be accomplished in polynomial time when the algebras tested are idempotent and the Maltsev condition $\Sigma$ can be described using paths. Examples of such path conditions are having a Maltsev term, having a majority operation, and having a chain of Jónsson (or Gumm) terms of fixed length.
[ 1, 0, 1, 0, 0, 0 ]
Title: MF traces and the Cuntz semigroup, Abstract: A trace $\tau$ on a separable C*-algebra $A$ is called matricial field (MF) if there is a trace-preserving morphism from $A$ to $Q_\omega$, where $Q_\omega$ denotes the norm ultrapower of the universal UHF-algebra $Q$. In general, the trace $\tau$ induces a state on the Cuntz semigroup $Cu(A)$. We show there is always a state-preserving morphism from $Cu(A)$ to $Cu(Q_\omega)$. As an application, if $A$ is an AI-algebra and $F$ is a free group acting on $A$, then every trace on the reduced crossed product $A \rtimes F$ is MF. This further implies the same result when $A$ is an AH-algebra with the ideal property such that $K_1(A)$ is a torsion group. We also use this to characterize when $A \rtimes F$ is MF (i.e. admits an isometric morphism into $Q_\omega$) for many simple, nuclear C*-algebras $A$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Matrix product states for topological phases with parafermions, Abstract: In the Fock representation, we propose a framework to construct the generalized matrix product states (MPS) for topological phases with $\mathbb{Z}_{p}$ parafermions. Unlike the $\mathbb{Z}_{2}$ Majorana fermions, the $\mathbb{Z}_{p}$ parafermions form intrinsically interacting systems. Here we explicitly construct two topologically distinct classes of irreducible $\mathbb{Z}_{3}$ parafermionic MPS wave functions, characterized by one or two parafermionic zero modes at each end of an open chain. Their corresponding parent Hamiltonians are found as the fixed point models of the single $\mathbb{Z}_{3}$ parafermion chain and two-coupled parafermion chains with $\mathbb{Z}_{3}\times \mathbb{Z}_{3}$ symmetry. Our results thus pave the road to investigate all possible topological phases with $\mathbb{Z}_{p}$ parafermions within the matrix product representation in one dimension.
[ 0, 1, 0, 0, 0, 0 ]
Title: F-TRIDYN: A Binary Collision Approximation Code for Simulating Ion Interactions with Rough Surfaces, Abstract: Fractal TRIDYN (F-TRIDYN) is a modified version of the widely used Monte Carlo, Binary Collision Approximation code TRIDYN that includes an explicit model of surface roughness and additional output modes for coupling to plasma edge and material codes. Surface roughness plays an important role in ion irradiation processes such as sputtering; roughness can significantly increase the angle of maximum sputtering and change the maximum observed sputtering yield by a factor of 2 or more. The full effect of surface roughness on sputtering and other ion irradiation phenomena is not yet completely understood. Many rough surfaces can be consistently and realistically modeled by fractals, using the fractal dimension and fractal length scale as the sole input parameters. F-TRIDYN includes a robust fractal surface algorithm that is more computationally efficient than those in previous fractal codes and which reproduces available experimental sputtering data from rough surfaces. Fractals provide a compelling path toward a complete and concise understanding of the effect that surface geometry plays on the behavior of plasma-facing materials. F-TRIDYN is a flexible code for simulating ion-solid interactions and coupling to plasma and material codes for multiscale modeling.
[ 0, 1, 0, 0, 0, 0 ]
Title: Measurement of Radon Concentration in Super-Kamiokande's Buffer Gas, Abstract: To precisely measure radon concentrations in purified air supplied to the Super-Kamiokande detector as a buffer gas, we have developed a highly sensitive radon detector with an intrinsic background as low as 0.33$\pm$0.07 mBq/m$^{3}$. In this article, we discuss the construction and calibration of this detector as well as results of its application to the measurement and monitoring of the buffer gas layer above Super-Kamiokande. In March 2013, the chilled activated charcoal system used to remove radon in the input buffer gas was upgraded. After this improvement, the radon concentration of the supply gas was dramatically reduced, down to 0.08 $\pm$ 0.07 mBq/m$^{3}$. Additionally, the Rn concentration of the in-situ buffer gas has been measured to be 28.8$\pm$1.7 mBq/m$^{3}$ using the new radon detector. Based on these measurements we have determined that the dominant source of Rn in the buffer gas arises from contamination from the Super-Kamiokande tank itself.
[ 0, 1, 0, 0, 0, 0 ]
Title: On The Robustness of a Neural Network, Abstract: With the development of neural-network-based machine learning and its usage in mission critical applications, voices are rising against the \textit{black box} aspect of neural networks as it becomes crucial to understand their limits and capabilities. With the rise of neuromorphic hardware, it is even more critical to understand how a neural network, as a distributed system, tolerates the failures of its computing nodes, neurons, and its communication channels, synapses. Experimentally assessing the robustness of neural networks involves the quixotic venture of testing all the possible failures on all the possible inputs, which runs into a combinatorial explosion for the former, and the impossibility of gathering all the possible inputs for the latter. In this paper, we prove an upper bound on the expected error of the output when a subset of neurons crashes. This bound involves dependencies on the network parameters that can be seen as being too pessimistic in the average case. It involves a polynomial dependency on the Lipschitz coefficient of the neurons' activation function, and an exponential dependency on the depth of the layer where a failure occurs. We back up our theoretical results with experiments illustrating the extent to which our prediction matches the dependencies between the network parameters and robustness. Our results show that the robustness of neural networks to the average crash can be estimated without the need to either test the network on all failure configurations or access the training set used to train the network, both of which are practically impossible requirements.
[ 1, 0, 0, 1, 0, 0 ]
Title: The Cut Elimination and the Nonlengthening Property for the Sequent Calculus with Equality, Abstract: We show how Leibniz's indiscernibility principle and Gentzen's original work lead to extensions of the sequent calculus to first-order logic with equality, and investigate the cut elimination property. Furthermore, we discuss and improve the nonlengthening property of Lifshitz and Orevkov.
[ 1, 0, 1, 0, 0, 0 ]
Title: On the Whittaker Plancherel Theorem for Real Reductive Groups, Abstract: The main purpose of this article is to fix several aspects of the proof of the Whittaker Plancherel Theorem in Real Reductive Groups II that are affected by recently observed errors or gaps. In the process of completing the proof of the theorem, the paper also gives an exposition of its structure and adds some clarifying new results. It also outlines the steps in the proof of the Harish-Chandra Plancherel theorem as they are needed in our proof of the Whittaker version.
[ 0, 0, 1, 0, 0, 0 ]
Title: Poisson distribution for gaps between sums of two squares and level spacings for toral point scatterers, Abstract: We investigate the level spacing distribution for the quantum spectrum of the square billiard. Extending work of Connors--Keating, and Smilansky, we formulate an analog of the Hardy--Littlewood prime $k$-tuple conjecture for sums of two squares, and show that it implies that the spectral gaps, after removing degeneracies and rescaling, are Poisson distributed. Consequently, by work of Rudnick and Ueberschär, the level spacings of arithmetic toral point scatterers, in the weak coupling limit, are also Poisson distributed. We also give numerical evidence for the conjecture and its implications.
[ 0, 1, 1, 0, 0, 0 ]
Title: Maria Krawczyk: friend and physicist, Abstract: With this note, we remember our friend Maria Krawczyk, who passed away this year, on May 24th. We briefly outline some of her physics interests and main accomplishments, and her great human and moral qualities.
[ 0, 1, 0, 0, 0, 0 ]
Title: Neural Networks Regularization Through Class-wise Invariant Representation Learning, Abstract: Training deep neural networks is known to require a large number of training samples. However, in many applications only few training samples are available. In this work, we tackle the issue of training neural networks for the classification task when few training samples are available. We attempt to solve this issue by proposing a new regularization term that constrains the hidden layers of a network to learn class-wise invariant representations. In our regularization framework, learning invariant representations is generalized to the class membership, where samples with the same class should have the same representation. Numerical experiments over MNIST and its variants showed that our proposal helps improve the generalization of neural networks, particularly when trained with few samples. We provide the source code of our framework at this https URL .
[ 1, 0, 0, 1, 0, 0 ]
Title: Incorporating Global Visual Features into Attention-Based Neural Machine Translation, Abstract: We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder. We utilise global image features extracted using a pre-trained convolutional neural network and incorporate them (i) as words in the source sentence, (ii) to initialise the encoder hidden state, and (iii) as additional data to initialise the decoder hidden state. In our experiments, we evaluate how these different strategies to incorporate global image features compare and which ones perform best. We also study the impact that adding synthetic multi-modal, multilingual data brings and find that the additional data have a positive impact on multi-modal models. We report new state-of-the-art results and our best models also significantly improve on a comparable phrase-based Statistical MT (PBSMT) model trained on the Multi30k data set according to all metrics evaluated. To the best of our knowledge, it is the first time a purely neural model significantly improves over a PBSMT model on all metrics evaluated on this data set.
[ 1, 0, 0, 0, 0, 0 ]
Title: Complete Subgraphs of the Coprime Hypergraph of Integers III: Construction, Abstract: The coprime hypergraph of integers on $n$ vertices $CHI_k(n)$ is defined via vertex set $\{1,2,\dots,n\}$ and hyperedge set $\{\{v_1,v_2,\dots,v_{k+1}\}\subseteq\{1,2,\dots,n\}:\gcd(v_1,v_2,\dots,v_{k+1})=1\}$. In this article we present ideas on how to construct maximal subgraphs in $CHI_k(n)$. This continues the author's earlier work, which dealt with bounds on the size and structural properties of these subgraphs. We succeed in the cases $k\in\{1,2,3\}$ and give promising ideas for $k\geq 4$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Involvement of Surfactant Protein D in Ebola Virus Infection Enhancement via Glycoprotein Interaction, Abstract: Since the largest 2014-2016 Ebola virus disease outbreak in West Africa, understanding of Ebola virus infection has improved, notably the involvement of innate immune mediators. Amongst them, collectins are important players in the antiviral innate immune defense. A screening of Ebola glycoprotein (GP)-collectin interactions revealed the specific interaction of human surfactant protein D (hSP-D), a lectin expressed in lung and liver, two compartments where Ebola was found in vivo. Further analyses have demonstrated an involvement of hSP-D in the enhancement of virus infection in several in vitro models. Similar effects were observed for porcine SP-D (pSP-D). In addition, both hSP-D and pSP-D interacted with Reston virus (RESTV) GP and enhanced pseudoviral infection in pulmonary cells. Thus, our study reveals a novel partner of Ebola GP that may participate in enhancing viral spread.
[ 0, 0, 0, 0, 1, 0 ]
Title: Deep Approximately Orthogonal Nonnegative Matrix Factorization for Clustering, Abstract: Nonnegative Matrix Factorization (NMF) is a widely used technique for data representation. Inspired by the expressive power of deep learning, several NMF variants equipped with deep architectures have been proposed. However, these methods mostly use only nonnegativity while ignoring task-specific features of the data. In this paper, we propose a novel deep approximately orthogonal nonnegative matrix factorization method where both nonnegativity and orthogonality are imposed with the aim of performing hierarchical clustering by using different levels of abstraction of the data. Experiments on two face image datasets showed that the proposed method achieved better clustering performance than other deep matrix factorization methods and state-of-the-art single layer NMF variants.
[ 1, 0, 0, 0, 0, 0 ]
Title: Separation-Free Super-Resolution from Compressed Measurements is Possible: an Orthonormal Atomic Norm Minimization Approach, Abstract: We consider the problem of recovering the superposition of $R$ distinct complex exponential functions from compressed non-uniform time-domain samples. Total Variation (TV) minimization or atomic norm minimization was proposed in the literature to recover the $R$ frequencies or the missing data. However, it is known that in order for TV minimization and atomic norm minimization to recover the missing data or the frequencies, the underlying $R$ frequencies are required to be well-separated, even when the measurements are noiseless. This paper shows that the Hankel matrix recovery approach can super-resolve the $R$ complex exponentials and their frequencies from compressed non-uniform measurements, regardless of how close their frequencies are to each other. We propose a new concept of orthonormal atomic norm minimization (OANM), and demonstrate that the success of Hankel matrix recovery in separation-free super-resolution comes from the fact that the nuclear norm of a Hankel matrix is an orthonormal atomic norm. More specifically, we show that, in traditional atomic norm minimization, the underlying parameter values $\textbf{must}$ be well separated to achieve successful signal recovery, if the atoms are changing continuously with respect to the continuously-valued parameter. In contrast, the OANM can be successful even when the original atoms are arbitrarily close. As a byproduct of this research, we provide a matrix-theoretic inequality for the nuclear norm, and prove it using the theory of compressed sensing.
[ 1, 0, 0, 0, 0, 0 ]
Title: A note on MCMC for nested multilevel regression models via belief propagation, Abstract: In the quest for scalable Bayesian computational algorithms we need to exploit the full potential of existing methodologies. In this note we point out that message passing algorithms, which are very well developed for inference in graphical models, appear to be largely unexplored for scalable inference in Bayesian multilevel regression models. We show that nested multilevel regression models with Gaussian errors lend themselves very naturally to the combined use of belief propagation and MCMC. Specifically, the posterior distribution of the regression parameters conditionally on covariance hyperparameters is a high-dimensional Gaussian that can be sampled exactly (as well as marginalized) using belief propagation at a cost that scales linearly in the number of parameters and data. We derive an algorithm that works efficiently even for conditionally singular Gaussian distributions, e.g., when there are linear constraints between the parameters at different levels. We show that allowing for such non-invertible Gaussians is critical for belief propagation to be applicable to a large class of nested multilevel models. From a different perspective, the methodology proposed can be seen as a generalization of forward-backward algorithms for sampling to multilevel regressions with tree-structured graphical models, as opposed to the single-branch trees used in classical Kalman filter contexts.
[ 0, 0, 0, 1, 0, 0 ]
Title: InGaN Metal-IN Solar Cell: optimized efficiency and fabrication tolerance, Abstract: Choosing the Indium Gallium Nitride (InGaN) ternary alloy for thin film solar cells might yield high benefits concerning efficiency and reliability, because its bandgap can be tuned through the Indium composition and radiation has little destructive effect on it. It may also reveal challenges because good quality p-doped InGaN layers are difficult to elaborate. In this letter, a new design for an InGaN thin film solar cell is optimized, where the p-layer of a PIN structure is replaced by a Schottky contact, leading to a Metal-IN (MIN) structure. With a simulated efficiency of 19.8%, the MIN structure performs better than the previously studied Schottky structure, while increasing its fabrication tolerance and thus functional reliability. Owing to its good tolerance to radiation [1], its high light absorption [2, 3] and its Indium-composition-tuned bandgap [4, 5], the Indium Gallium Nitride (InGaN) ternary alloy is a good candidate for high-efficiency, high-reliability solar cells able to operate in harsh environments. Unfortunately, InGaN p-doping is still a challenge, owing to InGaN residual n-doping [6], the lack of dedicated acceptors [7] and the complex fabrication process itself [8, 9]. To these drawbacks can be added the uneasy fabrication of ohmic contacts [4] and the difficulty of growing the high-quality, high-Indium-content thin films [10] which would be needed to cover the whole solar spectrum. These drawbacks still prevent InGaN solar cells from being competitive with other well established III-V and silicon technologies [11]. In this letter, a new Metal-IN (MIN) InGaN solar cell structure is proposed where the InGaN p-doped layer is removed and replaced by a Schottky contact, lifting one of the above-mentioned drawbacks.
A set of realistic physical models based on actual measurements is used to simulate and optimize its behavior and performance using mathematically rigorous multi-criteria optimization methods, aiming to show that both efficiency and fabrication tolerances are better than those of the previously described simple InGaN Schottky solar cell [12].
[ 0, 1, 0, 0, 0, 0 ]
Title: Proceedings of the IJCAI 2017 Workshop on Learning in the Presence of Class Imbalance and Concept Drift (LPCICD'17), Abstract: With the wide application of machine learning algorithms to the real world, class imbalance and concept drift have become crucial learning issues. Class imbalance happens when the data categories are not equally represented, i.e., at least one category is a minority compared to the other categories. It can cause learning bias towards the majority class and poor generalization. Concept drift is a change in the underlying distribution of the problem, and is a significant issue especially when learning from data streams. It requires learners to be adaptive to dynamic changes. Class imbalance and concept drift can significantly hinder predictive performance, and the problem becomes particularly challenging when they occur simultaneously. This challenge arises from the fact that one problem can affect the treatment of the other. For example, drift detection algorithms based on the traditional classification error may be sensitive to the degree of imbalance and become less effective; and class imbalance techniques need to be adaptive to changing imbalance rates, otherwise the class receiving the preferential treatment may not be the correct minority class at the current moment. Therefore, the mutual effect of class imbalance and concept drift should be considered during algorithm design. The aim of this workshop is to bring together researchers from the areas of class imbalance learning and concept drift in order to encourage discussions and new collaborations on solving the combined issue of class imbalance and concept drift. It provides a forum for international researchers and practitioners to share and discuss their original work on addressing new challenges and research issues in class imbalance learning, concept drift, and the combined issues of class imbalance and concept drift. The proceedings include 8 papers on these topics.
[ 1, 0, 0, 0, 0, 0 ]
Title: Nonlinear dynamics on branched structures and networks, Abstract: Nonlinear dynamics on graphs has rapidly become a topical issue with many physical applications, ranging from nonlinear optics to Bose-Einstein condensation. Whenever in a physical experiment a ramified structure is involved, it can prove useful to approximate such a structure by a metric graph, or network. For the Schrödinger equation it turns out that the sixth power in the nonlinear term of the energy is critical in the sense that below that power the constrained energy is lower bounded irrespective of the value of the mass (subcritical case). On the other hand, if the nonlinearity power equals six, then the lower boundedness depends on the value of the mass: below a critical mass, the constrained energy is lower bounded, beyond it, it is not. For powers larger than six the constrained energy functional is never lower bounded, so that it is meaningless to speak about ground states (supercritical case). These results are the same as in the case of the nonlinear Schrödinger equation on the real line. In fact, as regards the existence of ground states, the results for systems on graphs differ, in general, from the ones for systems on the line even in the subcritical case: in the latter case, whenever the constrained energy is lower bounded there always exist ground states (the solitons, whose shape is explicitly known), whereas for graphs the existence of a ground state is not guaranteed. For the critical case, our results show a phenomenology much richer than the analogous one on the line.
[ 0, 0, 1, 0, 0, 0 ]
Title: Rank modulation codes for DNA storage, Abstract: Synthesis of DNA molecules offers unprecedented advances in storage technology. Yet, the microscopic world in which these molecules reside induces error patterns that are fundamentally different from their digital counterparts. Hence, to maintain reliability in reading and writing, new coding schemes must be developed. In a reading technique called shotgun sequencing, a long DNA string is read in a sliding window fashion, and a profile vector is produced. It was recently suggested by Kiah et al. that such a vector can represent the permutation which is induced by its entries, and hence a rank-modulation scheme arises. Although this interpretation suggests high error tolerance, it is unclear which permutations are feasible, and how to produce a DNA string whose profile vector induces a given permutation. In this paper, by observing some necessary conditions, an upper bound for the number of feasible permutations is given. Further, a technique for deciding the feasibility of a permutation is devised. By using insights from this technique, an algorithm for producing a considerable number of feasible permutations is given, which applies to any alphabet size and any window length.
[ 1, 0, 0, 0, 0, 0 ]
Title: Effect of ion motion on relativistic electron beam driven wakefield in a cold plasma, Abstract: Excitation of relativistic electron beam driven wakefield in a cold plasma is studied using 1-D fluid simulation techniques where the effect of ion motion is included. We have excited the wakefield using an ultra-relativistic, homogeneous, rigid electron beam with different beam densities and mass ratios (ratio of electron to ion mass). We have shown that the numerically excited wakefield is in good agreement with the analytical results of Rosenzweig et al. [Physical Review A. 40, 9, (1989)] for several plasma periods. It is shown here that the excited wake wave is equivalent to the corresponding "Khachatryan mode" [Physical Review E. 58, 6, (1998)]. After several plasma periods, it is found that the excited wake wave gradually modifies and finally breaks, exhibiting sharp spikes in density and a sawtooth-like structure in the electric field profile. It is shown here that the excited wake wave breaks well below Khachatryan's wave breaking limit.
[ 0, 1, 0, 0, 0, 0 ]
Title: Memory-efficient Kernel PCA via Partial Matrix Sampling and Nonconvex Optimization: a Model-free Analysis of Local Minima, Abstract: Kernel PCA is a widely used nonlinear dimension reduction technique in machine learning, but storing the kernel matrix is notoriously challenging when the sample size is large. Inspired by Yi et al. [2016], where the idea of partial matrix sampling followed by nonconvex optimization is proposed for matrix completion and robust PCA, we apply a similar approach to memory-efficient Kernel PCA. In theory, with no assumptions on the kernel matrix in terms of eigenvalues or eigenvectors, we establish a model-free theory for the low-rank approximation based on any local minimum of the proposed objective function. As interesting byproducts, when the underlying positive semidefinite matrix is assumed to be low-rank and highly structured, corollaries of our main theorem improve the state-of-the-art results of Ge et al. [2016, 2017] for nonconvex matrix completion with no spurious local minima. Numerical experiments also show that our approach is competitive in terms of approximation accuracy compared to the well-known Nyström algorithm for Kernel PCA.
[ 1, 0, 0, 1, 0, 0 ]
Title: Scalable Graph Learning for Anti-Money Laundering: A First Look, Abstract: Organized crime inflicts human suffering on a genocidal scale: the Mexican drug cartels have murdered 150,000 people since 2006, and upwards of 700,000 people per year are "exported" in a human trafficking industry enslaving an estimated 40 million people. These nefarious industries rely on sophisticated money laundering schemes to operate. Despite tremendous resources dedicated to anti-money laundering (AML), only a tiny fraction of illicit activity is prevented. The research community can help. In this brief paper, we map the structural and behavioral dynamics driving the technical challenge. We review AML methods, current and emergent. We provide a first look at scalable graph convolutional neural networks for forensic analysis of financial data, which is massive, dense, and dynamic. We report preliminary experimental results using a large synthetic graph (1M nodes, 9M edges) generated by a data simulator we created called AMLSim. We consider opportunities for high performance efficiency, in terms of computation and memory, and we share results from a simple graph compression experiment. Our results support our working hypothesis that graph deep learning for AML bears great promise in the fight against criminal financial activity.
[ 1, 0, 0, 0, 0, 0 ]
Title: Complete event-by-event $α$/$γ(β)$ separation in a full-size TeO$_2$ CUORE bolometer by Neganov-Luke-magnified light detection, Abstract: In the present work, we describe the results obtained with a large ($\approx 133$ cm$^3$) TeO$_2$ bolometer, with a view to a search for neutrinoless double-beta decay ($0\nu\beta\beta$) of $^{130}$Te. We demonstrate an efficient $\alpha$ particle discrimination (99.9\%) with a high acceptance of the $0\nu\beta\beta$ signal (about 96\%), expected at $\approx 2.5$ MeV. This unprecedented result was possible thanks to the superior performance (10 eV rms baseline noise) of a Neganov-Luke-assisted germanium bolometer used to detect a tiny (70 eV) light signal from the TeO$_2$ detector, dominated by $\gamma$($\beta$)-induced Cherenkov radiation but exhibiting also a clear scintillation component. The obtained results represent a major breakthrough towards the TeO$_2$-based version of CUORE Upgrade with Particle IDentification (CUPID), a ton-scale cryogenic $0\nu\beta\beta$ experiment proposed as a follow-up to the CUORE project with particle identification. The CUORE experiment recently began a search for neutrinoless double-beta decay of $^{130}$Te with an array of 988 125-cm$^3$ TeO$_2$ bolometers. The lack of $\alpha$ discrimination in CUORE makes $\alpha$ decays at the detector surface the dominant background component, at the level of $\approx 0.01$ counts/(keV kg y) in the region of interest. We show here, for the first time with a CUORE-size bolometer and using the same technology as CUORE for the readout of both heat and light signals, that surface $\alpha$ background can be fully rejected.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Novel Bayesian Multiple Testing Approach to Deregulated miRNA Discovery Harnessing Positional Clustering, Abstract: MicroRNAs (miRNAs) are small non-coding RNAs that function as regulators of gene expression. In recent years, there has been a tremendous and growing interest among researchers to investigate the role of miRNAs in normal cellular as well as in disease processes. Thus, to investigate the role of miRNAs in oral cancer, we analyse the expression levels of miRNAs to identify miRNAs with statistically significant differential expression in cancer tissues. In this article, we propose a novel Bayesian hierarchical model of miRNA expression data. Compelling evidence has demonstrated that the transcription process of miRNAs in the human genome is a latent process instrumental for the observed expression levels. We take into account positional clustering of the miRNAs in the analysis and model the latent transcription phenomenon nonparametrically by an appropriate Gaussian process. For the testing purpose we employ a novel Bayesian multiple testing method where we mainly focus on utilizing the dependence structure between the hypotheses for better results, while also ensuring optimality in many respects. Indeed, our non-marginal method yielded results in accordance with the underlying scientific knowledge which are found to be missed by the very popular Benjamini-Hochberg method.
[ 0, 0, 0, 1, 0, 0 ]
Title: PCN: Point Completion Network, Abstract: Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset.
[ 1, 0, 0, 0, 0, 0 ]
Title: Deep Relaxation: partial differential equations for optimizing deep neural networks, Abstract: In this paper we establish a connection between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs). Relaxation techniques arising in statistical physics which have already been used successfully in this context are reinterpreted as solutions of a viscous Hamilton-Jacobi PDE. Using a stochastic control interpretation, we prove that the modified algorithm performs better in expectation than stochastic gradient descent. Well-known PDE regularity results allow us to analyze the geometry of the relaxed energy landscape, confirming empirical evidence. The PDE is derived from a stochastic homogenization problem, which arises in the implementation of the algorithm. The algorithms scale well in practice and can effectively tackle the high dimensionality of modern neural networks.
[ 1, 0, 1, 0, 0, 0 ]
Title: Forbidden Substrings In Circular K-Successions, Abstract: In this note we define circular k-successions in permutations in one-line notation and count permutations that avoid substrings j(j+k) and j(j+k) (mod n). We also count circular permutations that avoid such substrings, and show that for substrings j(j+k) (mod n), the number of permutations depends on whether n is prime, and more generally, on whether n and k are relatively prime.
[ 0, 0, 1, 0, 0, 0 ]
Title: Asymptotic genealogies of interacting particle systems with an application to sequential Monte Carlo, Abstract: We study weighted particle systems in which new generations are resampled from current particles with probabilities proportional to their weights. This covers a broad class of sequential Monte Carlo (SMC) methods, widely-used in applied statistics and cognate disciplines. We consider the genealogical tree embedded into such particle systems, and identify conditions, as well as an appropriate time-scaling, under which they converge to the Kingman n-coalescent in the infinite system size limit in the sense of finite-dimensional distributions. Thus, the tractable n-coalescent can be used to predict the shape and size of SMC genealogies, as we illustrate by characterising the limiting mean and variance of the tree height. SMC genealogies are known to be connected to algorithm performance, so that our results are likely to have applications in the design of new methods as well. Our conditions for convergence are strong, but we show by simulation that they do not appear to be necessary.
[ 0, 0, 0, 1, 1, 0 ]
Title: Symmetry Enforced Stability of Interacting Weyl and Dirac Semimetals, Abstract: The nodal and effectively relativistic dispersion featuring in a range of novel materials including two-dimensional graphene and three-dimensional Dirac and Weyl semimetals has attracted enormous interest during the past decade. Here, by studying the structure and symmetry of the diagrammatic expansion, we show that these nodal touching points are in fact perturbatively stable to all orders with respect to generic two-body interactions. For effective low-energy theories relevant for single and multilayer graphene, type-I and type-II Weyl and Dirac semimetals as well as Weyl points with higher topological charge, this stability is shown to be a direct consequence of a spatial symmetry that anti-commutes with the effective Hamiltonian while leaving the interaction invariant. A more refined argument is applied to the honeycomb lattice model of graphene showing that its Dirac points are also perturbatively stable to all orders. We also give examples of nodal Hamiltonians that acquire a gap from interactions as a consequence of symmetries different from those of Weyl and Dirac materials.
[ 0, 1, 0, 0, 0, 0 ]
Title: Luck is Hard to Beat: The Difficulty of Sports Prediction, Abstract: Predicting the outcome of sports events is a hard task. We quantify this difficulty with a coefficient that measures the distance between the observed final results of sports leagues and idealized perfectly balanced competitions in terms of skill. This indicates the relative presence of luck and skill. We collected and analyzed all games from 198 sports leagues comprising 1503 seasons from 84 countries of 4 different sports: basketball, soccer, volleyball and handball. We measured the competitiveness by countries and sports. We also identify in each season which teams, if removed from its league, result in a completely random tournament. Surprisingly, not many of them are needed. As another contribution of this paper, we propose a probabilistic graphical model to learn about the teams' skills and to decompose the relative weights of luck and skill in each game. We break down the skill component into factors associated with the teams' characteristics. The model also allows us to estimate the probability that an underdog team wins in the NBA league as 0.36, with home advantage adding 0.09 to this probability. As shown in the first part of the paper, luck is substantially present even in the most competitive championships, which partially explains why sophisticated and complex feature-based models hardly beat simple models in the task of forecasting sports' outcomes.
[ 1, 0, 0, 1, 0, 0 ]
Title: Multiscale sequence modeling with a learned dictionary, Abstract: We propose a generalization of neural network sequence models. Instead of predicting one symbol at a time, our multi-scale model makes predictions over multiple, potentially overlapping multi-symbol tokens. A variation of the byte-pair encoding (BPE) compression algorithm is used to learn the dictionary of tokens that the model is trained with. When applied to language modelling, our model has the flexibility of character-level models while maintaining many of the performance benefits of word-level models. Our experiments show that this model performs better than a regular LSTM on language modeling tasks, especially for smaller models.
[ 1, 0, 0, 1, 0, 0 ]
Title: Infinite Matrix Product States vs Infinite Projected Entangled-Pair States on the Cylinder: a comparative study, Abstract: In spite of their intrinsic one-dimensional nature, matrix product states have been systematically used to obtain remarkably accurate results for two-dimensional systems. Motivated by basic entropic arguments favoring projected entangled-pair states as the method of choice, we assess the relative performance of infinite matrix product states and infinite projected entangled-pair states on cylindrical geometries. By considering the Heisenberg and half-filled Hubbard models on the square lattice as our benchmark cases, we evaluate their variational energies as a function of both bond dimension as well as cylinder width. In both examples we find crossovers at moderate cylinder widths, i.e. for the largest bond dimensions considered we find an improvement on the variational energies for the Heisenberg model by using projected entangled-pair states at a width of about 11 sites, whereas for the half-filled Hubbard model this crossover occurs at about 7 sites.
[ 0, 1, 0, 0, 0, 0 ]
Title: Dialectometric analysis of language variation in Twitter, Abstract: In the last few years, microblogging platforms such as Twitter have given rise to a deluge of textual data that can be used for the analysis of informal communication between millions of individuals. In this work, we propose an information-theoretic approach to geographic language variation using a corpus based on Twitter. We test our models with tens of concepts and their associated keywords detected in Spanish tweets geolocated in Spain. We employ dialectometric measures (cosine similarity and Jensen-Shannon divergence) to quantify the linguistic distance on the lexical level between cells created in a uniform grid over the map. This can be done for a single concept or in the general case taking into account an average of the considered variants. The latter permits an analysis of the dialects that naturally emerge from the data. Interestingly, our results reveal the existence of two dialect macrovarieties. The first group includes a region-specific speech spoken in small towns and rural areas whereas the second cluster encompasses cities that tend to use a more uniform variety. Since the results obtained with the two different metrics qualitatively agree, our work suggests that social media corpora can be efficiently used for dialectometric analyses.
[ 1, 1, 0, 0, 0, 0 ]
Title: Single-Pass, Adaptive Natural Language Filtering: Measuring Value in User Generated Comments on Large-Scale, Social Media News Forums, Abstract: There are large amounts of insight and social discovery potential in mining crowd-sourced comments left on popular news forums like Reddit.com, Tumblr.com, Facebook.com and Hacker News. Unfortunately, due to the overwhelming amount of participation and its varying quality of commentary, extracting value out of such data isn't always straightforward or timely. By designing efficient, single-pass and adaptive natural language filters to quickly prune spam, noise, copy-cats, marketing diversions, and out-of-context posts, we can remove over a third of entries and return the comments with a higher probability of relatedness to the original article in question. The approach presented here uses an adaptive, two-step filtering process. It first leverages the original article posted in the thread as a starting corpus to parse comments by matching intersecting words and term-ratio balance per sentence, then grows the corpus by adding new words harvested from high-matching comments to increase filtering accuracy over time.
[ 1, 0, 0, 0, 0, 0 ]
Title: How Do Elements Really Factor in $\mathbb{Z}[\sqrt{-5}]$?, Abstract: Most undergraduate level abstract algebra texts use $\mathbb{Z}[\sqrt{-5}]$ as an example of an integral domain which is not a unique factorization domain (or UFD) by exhibiting two distinct irreducible factorizations of a nonzero element. But such a brief example, which requires merely an understanding of basic norms, only scratches the surface of how elements actually factor in this ring of algebraic integers. We offer here an interactive framework which shows that while $\mathbb{Z}[\sqrt{-5}]$ is not a UFD, it does satisfy a slightly weaker factorization condition, known as half-factoriality. The arguments involved revolve around the Fundamental Theorem of Ideal Theory.
[ 0, 0, 1, 0, 0, 0 ]
Title: Improvements on lower bounds for the blow-up time under local nonlinear Neumann conditions, Abstract: This paper studies the heat equation $u_t=\Delta u$ in a bounded domain $\Omega\subset\mathbb{R}^{n}(n\geq 2)$ with positive initial data and a local nonlinear Neumann boundary condition: the normal derivative $\partial u/\partial n=u^{q}$ on partial boundary $\Gamma_1\subseteq \partial\Omega$ for some $q>1$, while $\partial u/\partial n=0$ on the other part. We investigate the lower bound of the blow-up time $T^{*}$ of $u$ in several aspects. First, $T^{*}$ is proved to be at least of order $(q-1)^{-1}$ as $q\rightarrow 1^{+}$. Since the existing upper bound is of order $(q-1)^{-1}$, this result is sharp. Secondly, if $\Omega$ is convex and $|\Gamma_{1}|$ denotes the surface area of $\Gamma_{1}$, then $T^{*}$ is shown to be at least of order $|\Gamma_{1}|^{-\frac{1}{n-1}}$ for $n\geq 3$ and $|\Gamma_{1}|^{-1}\big/\ln\big(|\Gamma_{1}|^{-1}\big)$ for $n=2$ as $|\Gamma_{1}|\rightarrow 0$, while the previous result is $|\Gamma_{1}|^{-\alpha}$ for any $\alpha<\frac{1}{n-1}$. Finally, we generalize the results for convex domains to the domains with only local convexity near $\Gamma_{1}$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation, Abstract: This paper addresses the challenge of humanoid robot teleoperation in a natural indoor environment via a Brain-Computer Interface (BCI). We leverage deep Convolutional Neural Network (CNN) based image and signal understanding to facilitate both real-time object detection and dry-Electroencephalography (EEG) based human cortical brain bio-signal decoding. We employ recent advances in dry-EEG technology to stream and collect the cortical waveforms from subjects while the subjects fixate on variable Steady-State Visual Evoked Potential (SSVEP) stimuli generated directly from the environment the robot is navigating. To these ends, we propose the use of novel variable BCI stimuli by utilising the real-time video streamed via the on-board robot camera as visual input for SSVEP, where the CNN detected natural scene objects are altered and flickered with differing frequencies (10Hz, 12Hz and 15Hz). These stimuli are not akin to traditional stimuli - as both the dimensions of the flicker regions and their on-screen position change depending on the scene objects detected in the scene. On-screen object selection via dry-EEG enabled SSVEP in this way facilitates the on-line decoding of human cortical brain signals via a secondary CNN approach into teleoperation robot commands (approach object, move in a specific direction: right, left or back). This SSVEP decoding model is trained via a priori offline experimental data in which very similar visual input is present for all subjects. The resulting offline classification demonstrates extremely high performance, with mean accuracies of 96% and 90% for the real-time robot navigation experiment across multiple test subjects.
[ 1, 0, 0, 0, 0, 0 ]
Title: Tunable Ampere phase plate for low dose imaging of biomolecular complexes, Abstract: A novel device that can be used as a tunable support-free phase plate for transmission electron microscopy of weakly scattering specimens is described. The device relies on the generation of a controlled phase shift by the magnetic field of a segment of current-carrying wire that is oriented parallel or antiparallel to the electron beam. The validity of the concept is established using both experimental electron holographic measurements and a theoretical model based on Ampere's law. Computer simulations are used to illustrate the resulting contrast enhancement for studies of biological cells and macromolecules.
[ 0, 1, 0, 0, 0, 0 ]
Title: An efficient model-free setting for longitudinal and lateral vehicle control. Validation through the interconnected pro-SiVIC/RTMaps prototyping platform, Abstract: In this paper, the problem of tracking desired longitudinal and lateral motions for a vehicle is addressed. Let us point out that a "good" modeling is often quite difficult or even impossible to obtain, due for example to parametric uncertainties in the vehicle mass and inertia, or in the interaction forces between the wheels and the road pavement. To overcome this type of difficulty, we consider a model-free control approach leading to "intelligent" controllers. The longitudinal and lateral motions, on one hand, and the driving/braking torques and the steering wheel angle, on the other hand, are respectively the output and the input variables. An important part of this work is dedicated to presenting simulation results with actual data. The actual data, used in Matlab as reference trajectories, were previously recorded with an instrumented Peugeot 406 experimental car. The simulation results show the efficiency of our approach. Comparisons with a nonlinear flatness-based control on the one hand, and with a classical PID control on the other, confirm this analysis. Other virtual data have been generated through the interconnected platform SiVIC/RTMaps, which is a virtual simulation platform for prototyping and validation of advanced driving assistance systems.
[ 1, 0, 1, 0, 0, 0 ]
Title: Genetic fitting techniques for precision ultracold spectroscopy, Abstract: We present the development of a genetic algorithm for fitting potential energy curves of diatomic molecules to experimental data. Our approach does not involve any functional form for fitting, which makes it a general fitting procedure. In particular, it takes in a guess potential, perhaps from an $ab \ initio$ calculation, along with experimental measurements of vibrational binding energies, rotational constants, and their experimental uncertainties. The fitting procedure is able to modify the guess potential until it converges to better than 1% uncertainty, as measured by $\bar{\chi}^2$. We present the details of this technique along with a comparison of potentials calculated by our genetic algorithm and by state-of-the-art fitting techniques based on the inverted perturbation approach for the $X \ ^1\Sigma^+$ and $C \ ^1\Sigma^+$ potentials of lithium-rubidium.
[ 0, 1, 0, 0, 0, 0 ]
Title: Robust And Scalable Learning Of Complex Dataset Topologies Via Elpigraph, Abstract: Large datasets represented by multidimensional data point clouds often possess non-trivial distributions with branching trajectories and excluded regions, with the recent single-cell transcriptomic studies of the developing embryo being notable examples. Reducing the complexity and producing compact and interpretable representations of such data remains a challenging task. Most of the existing computational methods are based on exploring the local data point neighbourhood relations, a step that can perform poorly in the case of multidimensional and noisy data. Here we present ElPiGraph, a scalable and robust method for the approximation of datasets with complex structures that does not require computing the complete data distance matrix or the data point neighbourhood graph. This method is able to withstand high levels of noise and is capable of approximating complex topologies via principal graph ensembles that can be combined into a consensus principal graph. ElPiGraph deals efficiently with large and complex datasets in various fields, from biology, where it can be used to infer gene dynamics from single-cell RNA-Seq, to astronomy, where it can be used to explore complex structures in the distribution of galaxies.
[ 0, 0, 0, 1, 1, 0 ]
Title: Epidemic dynamics in open quantum spin systems, Abstract: We explore the non-equilibrium evolution and stationary states of an open many-body system which displays epidemic spreading dynamics in a classical and a quantum regime. Our study is motivated by recent experiments conducted in strongly interacting gases of highly excited Rydberg atoms where the facilitated excitation of Rydberg states competes with radiative decay. These systems approximately implement open quantum versions of models for population dynamics or disease spreading where species can be in a healthy, infected or immune state. We show that in a two-dimensional lattice, depending on the dominance of either classical or quantum effects, the system may display a different kind of non-equilibrium phase transition. We moreover discuss the observability of our findings in laser driven Rydberg gases with particular focus on the role of long-range interactions.
[ 0, 1, 0, 0, 0, 0 ]
Title: In-Hand Object Stabilization by Independent Finger Control, Abstract: Grip control during robotic in-hand manipulation is usually modeled as part of a monolithic task, relying on complex controllers specialized for specific situations. Such approaches do not generalize well and are difficult to apply to novel manipulation tasks. Here, we propose a modular object stabilization method based on a proposition that explains how humans achieve grasp stability. In this bio-mimetic approach, independent tactile grip stabilization controllers ensure that slip does not occur locally at the engaged robot fingers. Such local slip is predicted from the tactile signals of each fingertip sensor, i.e., the BioTac and BioTac SP by SynTouch. We show that stable grasps emerge without any form of central communication when such independent controllers are engaged in the control of multi-digit robotic hands. These grasps are resistant to external perturbations while being capable of stabilizing a large variety of objects.
[ 1, 0, 0, 0, 0, 0 ]
Title: The X-ray and Mid-Infrared luminosities in Luminous Type 1 Quasars, Abstract: Several recent studies have reported different intrinsic correlations between the AGN mid-IR luminosity ($L_{MIR}$) and the rest-frame 2-10 keV luminosity ($L_{X}$) for luminous quasars. To understand the origin of the difference in the observed $L_{X}-L_{MIR}$ relations, we study a sample of 3,247 spectroscopically confirmed type 1 AGNs collected from Boötes, XMM-COSMOS, XMM-XXL-North, and the SDSS quasars in the Swift/XRT footprint spanning over four orders of magnitude in luminosity. We carefully examine how different observational constraints impact the observed $L_{X}-L_{MIR}$ relations, including the inclusion of X-ray non-detected objects, possible X-ray absorption in type 1 AGNs, X-ray flux limits, and star formation contamination. We find that the primary factor driving the different $L_{X}-L_{MIR}$ relations reported in the literature is the X-ray flux limits for different studies. When taking these effects into account, we find that the X-ray luminosity and mid-IR luminosity (measured at rest-frame $6\mu m$, or $L_{6\mu m}$) of our sample of type 1 AGNs follow a bilinear relation in the log-log plane: $\log L_X =(0.84\pm0.03)\times\log L_{6\mu m}/10^{45}{\rm erg\;s^{-1}} + (44.60\pm0.01)$ for $L_{6\mu m} < 10^{44.79}{\rm erg\;s^{-1}} $, and $\log L_X = (0.40\pm0.03)\times\log L_{6\mu m}/10^{45}{\rm erg\;s^{-1}} +(44.51\pm0.01)$ for $L_{6\mu m} \geq 10^{44.79}{\rm erg\;s^{-1}} $. This suggests that the luminous type 1 quasars have a shallower $L_{X}-L_{MIR}$ correlation than the approximately linear relations found in local Seyfert galaxies. This result is consistent with previous studies reporting a luminosity-dependent $L_{X}-L_{MIR}$ relation, and implies that assuming a linear $L_{X}-L_{MIR}$ relation to infer the neutral gas column density for X-ray absorption might overestimate the column densities in luminous quasars.
[ 0, 1, 0, 0, 0, 0 ]
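The bilinear $L_X$-$L_{6\mu m}$ relation quoted in the abstract above can be sketched directly; this is a purely illustrative Python check of the two published branches (the function name and layout are our own, luminosities in erg/s):

```python
# Bilinear log-log relation between rest-frame 2-10 keV luminosity L_X and
# the rest-frame 6-micron luminosity L_6um, as quoted in the abstract above.
# All luminosities in erg/s; the break sits at log10 L_6um = 44.79.

LOG_BREAK = 44.79

def log_lx(log_l6um):
    """Return log10 L_X given log10 L_6um (both in erg/s)."""
    x = log_l6um - 45.0  # log10 of L_6um / 1e45 erg/s
    if log_l6um < LOG_BREAK:
        return 0.84 * x + 44.60  # lower-luminosity branch
    return 0.40 * x + 44.51      # luminous-quasar branch

# The two branches nearly meet at the break, as a bilinear fit should:
low = 0.84 * (LOG_BREAK - 45.0) + 44.60
high = 0.40 * (LOG_BREAK - 45.0) + 44.51
print(round(low, 3), round(high, 3))  # 44.424 44.426
```

The shallower slope (0.40 vs. 0.84) above the break is what makes the relation sub-linear for luminous quasars.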
Title: Approximation of general facets by regular facets with respect to anisotropic total variation energies and its application to the crystalline mean curvature flow, Abstract: We show that every bounded subset of a Euclidean space can be approximated by a set that admits a certain vector field, the so-called Cahn-Hoffman vector field, that is subordinate to a given anisotropic metric and has a square-integrable divergence. More generally, we introduce a concept of facets as a kind of directed set, and show that they can be approximated in a similar manner. We use this approximation to construct test functions necessary to prove the comparison principle for viscosity solutions of the level set formulation of the crystalline mean curvature flow that were recently introduced by the authors. As a consequence, we obtain the well-posedness of the viscosity solutions in an arbitrary dimension, which extends the validity of the result in the previous paper.
[ 0, 0, 1, 0, 0, 0 ]
Title: Calibrated Fairness in Bandits, Abstract: We study fairness within the stochastic, \emph{multi-armed bandit} (MAB) decision making framework. We adapt the fairness framework of "treating similar individuals similarly" to this setting. Here, an `individual' corresponds to an arm and two arms are `similar' if they have a similar quality distribution. First, we adopt a {\em smoothness constraint} that if two arms have a similar quality distribution then the probability of selecting each arm should be similar. In addition, we define the {\em fairness regret}, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm is equal to the probability with which the arm has the best quality realization. We show that a variation on Thompson sampling satisfies smooth fairness for total variation distance, and give an $\tilde{O}((kT)^{2/3})$ bound on fairness regret. This complements prior work, which protects an on-average better arm from being less favored. We also explain how to extend our algorithm to the dueling bandit setting.
[ 1, 0, 0, 0, 0, 0 ]
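The Thompson sampling baseline that the abstract above builds on can be sketched for Bernoulli bandits. Note this is the standard algorithm with Beta(1, 1) priors, not the paper's smooth-fair variant; all names are illustrative:

```python
import random

def thompson_sampling(arms, T, seed=0):
    """Standard Bernoulli Thompson sampling with Beta(1, 1) priors.

    `arms` holds the true success probabilities (unknown to the learner).
    Returns the number of times each arm was pulled over T rounds.
    """
    rng = random.Random(seed)
    k = len(arms)
    alpha = [1] * k  # posterior successes + 1
    beta = [1] * k   # posterior failures + 1
    pulls = [0] * k
    for _ in range(T):
        # Draw one plausible mean per arm from its Beta posterior,
        # then play the arm whose draw is largest.
        draws = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        i = max(range(k), key=lambda j: draws[j])
        pulls[i] += 1
        if rng.random() < arms[i]:
            alpha[i] += 1
        else:
            beta[i] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.8], T=2000)
```

Because arm selection is randomized according to the posterior, two arms with similar quality distributions are played with similar probability, which is the intuition the paper's smoothness constraint formalizes.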
Title: HTC Vive MeVisLab integration via OpenVR for medical applications, Abstract: Virtual Reality, an immersive technology that replicates an environment via computer-simulated reality, gets a lot of attention in the entertainment industry. However, VR also has great potential in other areas, like the medical domain. Examples are intervention planning, training and simulation. This is especially of use in medical operations, where an aesthetic outcome is important, like for facial surgeries. Alas, importing medical data into Virtual Reality devices is not necessarily trivial, in particular, when a direct connection to a proprietary application is desired. Moreover, most researchers do not build their medical applications from scratch, but rather leverage platforms like MeVisLab, MITK, OsiriX or 3D Slicer. These platforms have in common that they use libraries like ITK and VTK, and provide a convenient graphical interface. However, ITK and VTK do not support Virtual Reality directly. In this study, the usage of a Virtual Reality device for medical data under the MeVisLab platform is presented. The OpenVR library is integrated into the MeVisLab platform, allowing a direct and uncomplicated usage of the head-mounted display HTC Vive inside the MeVisLab platform. Medical data coming from other MeVisLab modules can directly be connected per drag-and-drop to the Virtual Reality module, rendering the data inside the HTC Vive for immersive virtual reality inspection.
[ 1, 0, 0, 0, 0, 0 ]
Title: Explanation of a Polynomial Identity, Abstract: In this note, we provide a conceptual explanation of a well-known polynomial identity used in algebraic number theory.
[ 0, 0, 1, 0, 0, 0 ]
Title: Quantum Stress Tensor Fluctuations and Primordial Gravity Waves, Abstract: We examine the effect of the stress tensor of a quantum matter field, such as the electromagnetic field, on the spectrum of primordial gravity waves expected in inflationary cosmology. We find that the net effect is a small reduction in the power spectrum, especially at higher frequencies, but which has a different form from that described by the usual spectral index. Thus this effect has a characteristic signature, and is in principle observable. The net effect is a sum of two contributions, one of which is due to quantum fluctuations of the matter field stress tensor. The other is a quantum correction to the graviton field due to coupling to the expectation value of this stress tensor. Both contributions are sensitive to initial conditions in the very early universe, so this effect has the potential to act as a probe of these initial conditions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Hyperbolic Pascal simplex, Abstract: In this article we introduce a new geometric object called the hyperbolic Pascal simplex. This new object is based on the regular hypercube mosaic in the 4-dimensional hyperbolic space. The definition of the hyperbolic Pascal simplex, whose hyperfaces are hyperbolic Pascal pyramids and whose faces are hyperbolic Pascal triangles, is a natural generalization of the definitions of the hyperbolic Pascal triangle and pyramid. We describe the growth of the hyperbolic Pascal simplex, considering both the number and the values of its elements. Further figures illustrate the step from one level to the next.
[ 0, 0, 1, 0, 0, 0 ]
Title: Small-loss bounds for online learning with partial information, Abstract: We consider the problem of adversarial (non-stochastic) online learning with partial information feedback, where at each round, a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems where the learner observes as feedback only losses of a subset of the actions that includes the selected action. When losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so called "small-loss" $o(\alpha L^{\star})$ regret bounds with high probability, where $\alpha$ is the independence number of the graph, and $L^{\star}$ is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e. utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), as well as learning with slowly changing (shifting) comparators. In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of $\tilde{O}(\sqrt{dL^{\star}})$ for actual regret, where $d$ is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal $\tilde{O}(\sqrt{\kappa L^{\star}})$ regret guarantee for fixed feedback graphs with clique-partition number at most $\kappa$.
[ 1, 0, 0, 0, 0, 0 ]
Title: Fine-Grained Parameterized Complexity Analysis of Graph Coloring Problems, Abstract: The $q$-Coloring problem asks whether the vertices of a graph can be properly colored with $q$ colors. Lokshtanov et al. [SODA 2011] showed that $q$-Coloring on graphs with a feedback vertex set of size $k$ cannot be solved in time $\mathcal{O}^*((q-\varepsilon)^k)$, for any $\varepsilon > 0$, unless the Strong Exponential-Time Hypothesis (SETH) fails. In this paper we perform a fine-grained analysis of the complexity of $q$-Coloring with respect to a hierarchy of parameters. We show that even when parameterized by the vertex cover number, $q$ must appear in the base of the exponent: Unless ETH fails, there is no universal constant $\theta$ such that $q$-Coloring parameterized by vertex cover can be solved in time $\mathcal{O}^*(\theta^k)$ for all fixed $q$. We apply a method due to Jansen and Kratsch [Inform. & Comput. 2013] to prove that there are $\mathcal{O}^*((q - \varepsilon)^k)$ time algorithms where $k$ is the vertex deletion distance to several graph classes $\mathcal{F}$ for which $q$-Coloring is known to be solvable in polynomial time. We generalize earlier ad-hoc results by showing that if $\mathcal{F}$ is a class of graphs whose $(q+1)$-colorable members have bounded treedepth, then there exists some $\varepsilon > 0$ such that $q$-Coloring can be solved in time $\mathcal{O}^*((q-\varepsilon)^k)$ when parameterized by the size of a given modulator to $\mathcal{F}$. In contrast, we prove that if $\mathcal{F}$ is the class of paths - some of the simplest graphs of unbounded treedepth - then no such algorithm can exist unless SETH fails.
[ 1, 0, 0, 0, 0, 0 ]
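To make the decision problem in the abstract above concrete, here is a brute-force $q$-Coloring check in Python. It is only an exponential-time baseline for small graphs, not one of the parameterized algorithms the paper analyzes:

```python
def q_colorable(n, edges, q):
    """Decide q-colorability of a graph on vertices 0..n-1 by backtracking.

    Tries colors 0..q-1 vertex by vertex, backtracking on any conflict
    with an already-colored neighbor.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n

    def assign(v):
        if v == n:
            return True
        for c in range(q):
            if all(color[w] != c for w in adj[v]):
                color[v] = c
                if assign(v + 1):
                    return True
        color[v] = -1
        return False

    return assign(0)

# A triangle is 3-colorable but not 2-colorable:
triangle = [(0, 1), (1, 2), (0, 2)]
print(q_colorable(3, triangle, 2), q_colorable(3, triangle, 3))  # False True
```

The lower bounds discussed in the abstract say, roughly, that no algorithm can beat bases like $(q-\varepsilon)^k$ in the relevant parameter $k$ unless SETH fails; the sketch above has no such guarantee and simply enumerates.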
Title: Planar Object Tracking in the Wild: A Benchmark, Abstract: Planar object tracking is an actively studied problem in vision-based robotic applications. While several benchmarks have been constructed for evaluating state-of-the-art algorithms, there is a lack of video sequences captured in the wild rather than in a constrained laboratory environment. In this paper, we present a carefully designed planar object tracking benchmark containing 210 videos of 30 planar objects sampled in the natural environment. In particular, for each object, we shoot seven videos involving various challenging factors, namely scale change, rotation, perspective distortion, motion blur, occlusion, out-of-view, and unconstrained. The ground truth is carefully annotated semi-manually to ensure quality. Moreover, eleven state-of-the-art algorithms are evaluated on the benchmark using two evaluation metrics, with detailed analysis provided for the evaluation results. We expect the proposed benchmark to benefit future studies on planar object tracking.
[ 1, 0, 0, 0, 0, 0 ]
Title: Experimental demonstration of an atomtronic battery, Abstract: Operation of an atomtronic battery is demonstrated where a finite-temperature Bose-Einstein condensate stored in one half of a double-well potential is coupled to an initially empty load well that is impedance matched by a resonant terminator beam. The atom number and temperature of the condensate are monitored during the discharge cycle, and are used to calculate fundamental properties of the battery. The discharge behavior is analyzed according to a Thévenin equivalent circuit that contains a finite internal resistance to account for dissipation in the battery. Battery performance at multiple discharge rates is characterized by the peak power output, and the current and energy capacities of the system.
[ 0, 1, 0, 0, 0, 0 ]
Title: Sensor Selection and Random Field Reconstruction for Robust and Cost-effective Heterogeneous Weather Sensor Networks for the Developing World, Abstract: We address the two fundamental problems of spatial field reconstruction and sensor selection in heterogeneous sensor networks: (i) how to efficiently perform spatial field reconstruction based on measurements obtained simultaneously from networks with both high and low quality sensors; and (ii) how to perform query based sensor set selection with predictive MSE performance guarantee. For the first problem, we developed a low complexity algorithm based on the spatial best linear unbiased estimator (S-BLUE). Next, building on the S-BLUE, we address the second problem, and develop an efficient algorithm for query based sensor set selection with performance guarantee. Our algorithm is based on the Cross Entropy method which solves the combinatorial optimization problem in an efficient manner.
[ 0, 0, 0, 1, 0, 0 ]
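As a rough illustration of a best linear unbiased field estimate with heterogeneous sensor quality, here is a generic simple-kriging sketch in pure Python. The squared-exponential covariance, the function names, and the reduction of the paper's S-BLUE to this form are all assumptions for illustration, not the authors' algorithm:

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential covariance between locations a and b."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2.0 * ell ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def blue_predict(locs, obs, noise_vars, query, ell=1.0):
    """Linear unbiased (simple-kriging) field estimate at `query`.

    Heterogeneous sensor quality enters through per-sensor noise
    variances added to the diagonal of the covariance matrix: a high
    noise_vars[i] down-weights a low-quality sensor's reading.
    """
    n = len(locs)
    K = [[rbf(locs[i], locs[j], ell) + (noise_vars[i] if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = solve(K, obs)
    return sum(w[i] * rbf(locs[i], query, ell) for i in range(n))
```

With zero noise the estimate interpolates the sensor readings exactly; a noisy sensor at the same location shrinks the prediction toward the prior mean of zero.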
Title: A characterization of ordinary abelian varieties by the Frobenius push-forward of the structure sheaf II, Abstract: In this paper, we prove that a smooth projective variety $X$ of characteristic $p>0$ is an ordinary abelian variety if and only if $K_X$ is pseudo-effective and $F^e_*\mathcal O_X$ splits into a direct sum of line bundles for an integer $e$ with $p^e>2$.
[ 0, 0, 1, 0, 0, 0 ]
Title: On Minimax Optimality of Sparse Bayes Predictive Density Estimates, Abstract: We study predictive density estimation under Kullback-Leibler loss in $\ell_0$-sparse Gaussian sequence models. We propose proper Bayes predictive density estimates and establish asymptotic minimaxity in sparse models. A surprise is the existence of a phase transition in the future-to-past variance ratio $r$. For $r < r_0 = (\surd 5 - 1)/4$, the natural discrete prior ceases to be asymptotically optimal. Instead, for subcritical $r$, a `bi-grid' prior with a central region of reduced grid spacing recovers asymptotic minimaxity. This phenomenon seems to have no analog in the otherwise parallel theory of point estimation of a multivariate normal mean under quadratic loss. For spike-and-slab priors to have any prospect of minimaxity, we show that the sparse parameter space needs also to be magnitude constrained. Within a substantial range of magnitudes, spike-and-slab priors can attain asymptotic minimaxity.
[ 0, 0, 1, 1, 0, 0 ]
Title: Two forms of minimality in ASPIC+, Abstract: Many systems of structured argumentation explicitly require that the facts and rules that make up the argument for a conclusion be the minimal set required to derive the conclusion. ASPIC+ does not place such a requirement on arguments, instead requiring that every rule and fact that are part of an argument be used in its construction. Thus ASPIC+ arguments are minimal in the sense that removing any element of the argument would lead to a structure that is not an argument. In this brief note we discuss these two types of minimality and show how the first kind of minimality can, if desired, be recovered in ASPIC+.
[ 1, 0, 0, 0, 0, 0 ]
Title: Perspectives on constraining a cosmological constant-type parameter with pulsar timing in the Galactic Center, Abstract: Independent tests aiming to constrain the value of the cosmological constant $\Lambda$ are usually difficult because of its extreme smallness $\left(\Lambda \simeq 1\times 10^{-52}~\textrm{m}^{-2},~\textrm{or}~2.89\times 10^{-122}~\textrm{in Planck units}\right)$. Bounds on it from Solar System orbital motions determined with spacecraft tracking are currently at the $\simeq 10^{-43}-10^{-44}~\textrm{m}^{-2}~\left(5-1\times 10^{-113}~\textrm{in Planck units}\right)$ level, but they may turn out to be somewhat optimistic since $\Lambda$ has not yet been explicitly modeled in the planetary data reductions. Accurate $\left(\sigma_{\tau_\textrm{p}}\simeq 1-10~\mu\textrm{s}\right)$ timing of expected pulsars orbiting the Black Hole at the Galactic Center, preferably along highly eccentric and wide orbits, might, at least in principle, improve the planetary constraints by several orders of magnitude. By looking at the average time shift per orbit $\overline{\Delta\delta\tau}^\Lambda_\textrm{p}$, a S2-like orbital configuration with $e=0.8839,~P_\textrm{b}=16~\textrm{yr}$ would allow to obtain preliminarily an upper bound of the order of $\left|\Lambda\right|\lesssim 9\times 10^{-47}~\textrm{m}^{-2}~\left(\lesssim 2\times 10^{-116}~\textrm{in Planck units}\right)$ if only $\sigma_{\tau_\textrm{p}}$ were to be considered. Our results can be easily extended to modified models of gravity using $\Lambda-$type parameters.
[ 0, 1, 0, 0, 0, 0 ]
Title: Flat bundles over some compact complex manifolds, Abstract: We construct examples of flat fiber bundles over the Hopf surface such that the total spaces have no pseudoconvex neighborhood basis, admit a complete Kähler metric, or are hyperconvex but have no nonconstant holomorphic functions. For any compact Riemann surface of positive genus, we construct a flat $\mathbb P^1$ bundle over it and a Stein domain with real analytic boundary in it whose closure does not have a pseudoconvex neighborhood basis. For a compact complex manifold with positive first Betti number, we construct a flat disc bundle over it such that the total space is hyperconvex but admits no nonconstant holomorphic functions.
[ 0, 0, 1, 0, 0, 0 ]
Title: EXONEST: The Bayesian Exoplanetary Explorer, Abstract: The fields of astronomy and astrophysics are currently engaged in an unprecedented era of discovery as recent missions have revealed thousands of exoplanets orbiting other stars. While the Kepler Space Telescope mission has enabled most of these exoplanets to be detected by identifying transiting events, exoplanets often exhibit additional photometric effects that can be used to improve the characterization of exoplanets. The EXONEST Exoplanetary Explorer is a Bayesian exoplanet inference engine based on nested sampling and originally designed to analyze archived Kepler Space Telescope and CoRoT (Convection Rotation et Transits planétaires) exoplanet mission data. We discuss the EXONEST software package and describe how it accommodates plug-and-play models of exoplanet-associated photometric effects for the purpose of exoplanet detection, characterization and scientific hypothesis testing. The current suite of models allows for both circular and eccentric orbits in conjunction with photometric effects, such as the primary transit and secondary eclipse, reflected light, thermal emissions, ellipsoidal variations, Doppler beaming and superrotation. We discuss our new efforts to expand the capabilities of the software to include more subtle photometric effects involving reflected and refracted light. We discuss the EXONEST inference engine design and introduce our plans to port the current MATLAB-based EXONEST software package over to the next generation Exoplanetary Explorer, which will be a Python-based open source project with the capability to employ third-party plug-and-play models of exoplanet-related photometric effects.
[ 0, 1, 0, 1, 0, 0 ]
Title: Intelligence of agents produces a structural phase transition in collective behaviour, Abstract: Living organisms process information to interact and adapt to their changing environment with the goal of finding food, mates or averting hazards. The structure of their niche has profound repercussions by both selecting their internal architecture and also inducing adaptive responses to environmental cues and stimuli. Adaptive, collective behaviour underpinned by specialized optimization strategies is ubiquitously found in the natural world. This exceptional success originates from the processes of fitness and selection. Here we prove that a universal physical mechanism of a nonequilibrium transition underlies the collective organization of information-processing organisms. As cognitive agents build and update an internal, cognitive representation of the causal structure of their environment, complex patterns emerge in the system, where the onset of pattern formation relates to the spatial overlap of cognitive maps. Studying the exchange of information among the agents reveals a continuous, order-disorder transition. As a result of the spontaneous breaking of translational symmetry, a Goldstone mode emerges, which points at a collective mechanism of information transfer among cognitive organisms. Taken together, the characteristics of this phase transition consolidate different results in cognitive and biological sciences in a universal manner. These findings are generally applicable to the design of artificial intelligent swarm systems that do not rely on centralized control schemes.
[ 0, 1, 0, 0, 0, 0 ]
Title: Quantifying Filter Bubbles: Analyzing Surprise in Elections, Abstract: This work analyses surprising elections, and attempts to quantify the notion of surprise in elections. A voter is surprised if their estimate of the winner (assumed to be based on a combination of the preferences of their social connections and popular media predictions) is different from the true winner. A voter's social connections are assumed to consist of contacts on social media and geographically proximate people. We propose a simple mathematical model for combining the global information (traditional media) as well as the local information (local neighbourhood) of a voter in the case of a two-candidate election. We show that an unbiased, influential media can nullify the effect of filter bubbles and result in a less surprised populace. Surprisingly, an influential media source biased towards the winners of the election also results in a less surprised populace. Our model shows that elections will be unsurprising for all of the voters with a high probability under certain assumptions on the social connection model in the presence of an influential, unbiased traditional media source. Our experiments with the UK-EU referendum (popularly known as Brexit) dataset support our theoretical predictions. Since surprising elections can lead to significant economic movements, it is a worthwhile endeavour to figure out the causes of surprising elections.
[ 1, 0, 0, 0, 0, 0 ]
Title: Energy transfer, pressure tensor and heating of kinetic plasma, Abstract: Kinetic plasma turbulence cascade spans multiple scales ranging from macroscopic fluid flow to sub-electron scales. Mechanisms that dissipate large scale energy, terminate the inertial range cascade and convert kinetic energy into heat are hotly debated. Here we revisit these puzzles using fully kinetic simulation. By performing scale-dependent spatial filtering on the Vlasov equation, we extract information at prescribed scales and introduce several energy transfer functions. This approach allows highly inhomogeneous energy cascade to be quantified as it proceeds down to kinetic scales. The pressure work, $-\left( \boldsymbol{P} \cdot \nabla \right) \cdot \boldsymbol{u}$, can trigger a channel of the energy conversion between fluid flow and random motions, which is a collision-free generalization of the viscous dissipation in collisional fluid. Both the energy transfer and the pressure work are strongly correlated with velocity gradients.
[ 0, 1, 0, 0, 0, 0 ]
Title: The relation between migration and FDI in the OECD from a complex network perspective, Abstract: We explore the relationship between human migration and OECD's foreign direct investment (FDI) using a gravity equation enriched with variables that account for complex-network effects. Based on a panel data analysis, we find a strong positive correlation between the migration network and the FDI network, which can be mostly explained by countries' economic/demographic sizes and geographical distance. We highlight the existence of a stronger positive FDI relationship in pairs of countries that are more central in the migration network. Both intensive and extensive forms of centrality are FDI enhancing. Illuminating this result, we show that bilateral FDI between any two countries is further affected positively by the complex web of "third party" corridors/migration stocks of the international migration network (IMN). Our findings are consistent whether we consider bilateral FDI and bilateral migration figures, or we focus on the outward FDI and the respective inward migration of the OECD countries.
[ 0, 1, 0, 0, 0, 0 ]
Title: Thermopower and thermal conductivity in the Weyl semimetal NbP, Abstract: The Weyl semimetal NbP exhibits an extremely large magnetoresistance (MR) and an ultra-high mobility. The large MR originates from a combination of the nearly perfect compensation between electron- and hole-type charge carriers and the high mobility, which is relevant to the topological band structure. In this work we report on temperature- and field-dependent thermopower and thermal conductivity experiments on NbP. Additionally, we carried out complementary heat capacity, magnetization, and electrical resistivity measurements. We found a giant adiabatic magnetothermopower with a maximum of 800 $\mu$V/K at 50 K in a field of 9 T. Such large effects have been observed rarely in bulk materials. We suggest that the origin of this effect might be related to the high charge-carrier mobility. We further observe pronounced quantum oscillations in both thermal conductivity and thermopower. The obtained frequencies compare well with our heat capacity and magnetization data.
[ 0, 1, 0, 0, 0, 0 ]
Title: Maximum entropy and population heterogeneity in continuous cell cultures, Abstract: Continuous cultures of mammalian cells are complex systems displaying hallmark phenomena of nonlinear dynamics, such as multi-stability, hysteresis, as well as sharp transitions between different metabolic states. In this context mathematical models may suggest control strategies to steer the system towards desired states. Although even clonal populations are known to exhibit cell-to-cell variability, most of the currently studied models assume that the population is homogeneous. To overcome this limitation, we use the maximum entropy principle to model the phenotypic distribution of cells in a chemostat as a function of the dilution rate. We consider the coupling between cell metabolism and extracellular variables describing the state of the bioreactor and take into account the impact of toxic byproduct accumulation on cell viability. We present a formal solution for the stationary state of the chemostat and show how to apply it in two examples. First, a simplified model of cell metabolism where the exact solution is tractable, and then a genome-scale metabolic network of the Chinese hamster ovary (CHO) cell line. Along the way we discuss several consequences of heterogeneity, such as: qualitative changes in the dynamical landscape of the system, increasing concentrations of byproducts that vanish in the homogeneous case, and larger population sizes.
[ 0, 0, 0, 0, 1, 0 ]
Title: Spin wave propagation and spin polarized electron transport in single crystal iron films, Abstract: The technique of propagating spin wave spectroscopy is applied to a 20 nm thick Fe/MgO (001) film. The magnetic parameters extracted from the position of the resonance peaks are very close to those tabulated for bulk iron. From the propagating waveforms, a group velocity of 4 km/s and an attenuation length of about 6 micrometers are extracted for 1.6 micrometers-wavelength spin-wave at 18 GHz. From the measured current-induced spin-wave Doppler shift, we also extract a surprisingly high degree of spin-polarization of the current of 83%. This set of results makes single-crystalline iron a promising candidate for building devices utilizing high frequency spin-waves and spin-polarized currents.
[ 0, 1, 0, 0, 0, 0 ]
Title: Expert-Driven Genetic Algorithms for Simulating Evaluation Functions, Abstract: In this paper we demonstrate how genetic algorithms can be used to reverse engineer an evaluation function's parameters for computer chess. Our results show that using an appropriate expert (or mentor), we can evolve a program that is on par with top tournament-playing chess programs, outperforming a two-time World Computer Chess Champion. This performance gain is achieved by evolving a program that mimics the behavior of a superior expert. The resulting evaluation function of the evolved program consists of a much smaller number of parameters than the expert's. The extended experimental results provided in this paper include a report of our successful participation in the 2008 World Computer Chess Championship. In principle, our expert-driven approach could be used in a wide range of problems for which appropriate experts are available.
[ 1, 0, 0, 1, 0, 0 ]
Title: A gentle introduction to the minimal Naming Game, Abstract: Social conventions govern countless behaviors all of us engage in every day, from how we greet each other to the languages we speak. But how can shared conventions emerge spontaneously in the absence of a central coordinating authority? The Naming Game model shows that networks of locally interacting individuals can spontaneously self-organize to produce global coordination. Here, we provide a gentle introduction to the main features of the model, from the dynamics observed in homogeneously mixing populations to the role played by more complex social networks, and to how slight modifications of the basic interaction rules give origin to a richer phenomenology in which more conventions can co-exist indefinitely.
[ 1, 1, 0, 0, 0, 0 ]
Title: Modelling hidden structure of signals in group data analysis with modified (Lr, 1) and block-term decompositions, Abstract: This work elaborates on the idea of using block-term decomposition for group data analysis and raises the possibility of modelling group activity with (Lr, 1) and Tucker blocks. A new generalization of block tensor decomposition is considered in application to group data analysis. The suggested approach was evaluated on a multilabel classification task for a set of images. This contribution also reports results of an investigation of clustering with the proposed tensor models in comparison with known matrix models, namely common orthogonal basis extraction and group independent component analysis.
[ 1, 0, 0, 1, 0, 0 ]
Title: ChaLearn Looking at People: A Review of Events and Resources, Abstract: This paper reviews the history of ChaLearn Looking at People (LAP) events. We started in 2011 (with the release of the first Kinect device) to run challenges related to human action/activity and gesture recognition. Since then we have regularly organized events in a series of competitions covering all aspects of visual analysis of humans. So far we have organized more than 10 international challenges and events in this field. This paper reviews the associated events, and introduces the ChaLearn LAP platform where public resources (including code, data and preprints of papers) related to the organized events are available. We also provide a discussion on perspectives of ChaLearn LAP activities.
[ 1, 0, 0, 0, 0, 0 ]
Title: Reinterpreting the Origin of Bifurcation and Chaos by Urbanization Dynamics, Abstract: Chaos associated with bifurcation gave rise to a new science, but the origin and essence of chaos are not yet clear. Based on the well-known logistic map, chaos used to be regarded as intrinsic randomness of determinate dynamical systems. However, urbanization dynamics suggests a new explanation. Using mathematical derivation, numerical computation, and empirical analysis, we can explore the chaotic dynamics of urbanization. The key is the formula of urbanization level. The urbanization curve can be described with the logistic function, which can be transformed into a 1-dimensional map and thus produce bifurcation and chaos. On the other hand, the logistic model of the urbanization curve can be derived from the rural-urban population interaction model, and the rural-urban interaction model can be discretized to a 2-dimensional map. An interesting finding is that the 2-dimensional rural-urban coupling map can create the same bifurcation and chaos patterns as those from the 1-dimensional logistic map. This suggests that urban bifurcation and chaos come from the spatial interaction between rural and urban populations rather than pure intrinsic randomness of determinate models. This discovery provides a new way of looking at the origin and essence of bifurcation and chaos. By analogy with the urbanization models, the classical predator-prey interaction model can be developed to interpret the complex dynamics of the logistic map in the physical and social sciences.
[ 0, 1, 0, 0, 0, 0 ]
Title: Biochemical Coupling Through Emergent Conservation Laws, Abstract: Bazhin has analyzed ATP coupling in terms of quasiequilibrium states where fast reactions have reached an approximate steady state while slow reactions have not yet reached equilibrium. After an expository introduction to the relevant aspects of reaction network theory, we review his work and explain the role of emergent conserved quantities in coupling. These are quantities, left unchanged by fast reactions, whose conservation forces exergonic processes such as ATP hydrolysis to drive desired endergonic processes.
[ 0, 0, 0, 0, 1, 0 ]
Title: Artificial Intelligence as an Enabler for Cognitive Self-Organizing Future Networks, Abstract: The explosive increase in number of smart devices hosting sophisticated applications is rapidly affecting the landscape of information communication technology industry. Mobile subscriptions, expected to reach 8.9 billion by 2022, would drastically increase the demand of extra capacity with aggregate throughput anticipated to be enhanced by a factor of 1000. In an already crowded radio spectrum, it becomes increasingly difficult to meet ever growing application demands of wireless bandwidth. It has been shown that the allocated spectrum is seldom utilized by the primary users and hence contains spectrum holes that may be exploited by the unlicensed users for their communication. As we enter the Internet Of Things (IoT) era in which appliances of common use will become smart digital devices with rigid performance requirements (such as low latency, energy efficiency, etc.), current networks face the vexing problem of how to create sufficient capacity for such applications. The fifth generation of cellular networks (5G) envisioned to address these challenges are thus required to incorporate cognition and intelligence to resolve the aforementioned issues.
[ 1, 0, 0, 0, 0, 0 ]
Title: Localized Thermal States, Abstract: It is believed that thermalization in closed systems of interacting particles can occur only when the eigenstates are fully delocalized and chaotic in the preferential (unperturbed) basis of the total Hamiltonian. Here we demonstrate that, at variance with this common belief, the typical situation in systems with two-body inter-particle interaction is much more complicated and allows one to treat as thermal even eigenstates that are not fully delocalized. Using a semi-analytical approach we establish the conditions for the emergence of such thermal states in a model of randomly interacting bosons. Our numerical data show an excellent correspondence with the predicted properties of {\it localized thermal eigenstates}.
[ 0, 1, 0, 0, 0, 0 ]
Title: Interface Phonon Modes in the [AlN/GaN]20 and [Al0.35Ga0.65N/Al0.55Ga0.45N]20 2D Multi Quantum Well Structures, Abstract: Interface phonon (IF) modes of c-plane oriented [AlN/GaN]20 and [Al0.35Ga0.65N/Al0.55Ga0.45N]20 multi quantum well (MQW) structures grown via plasma assisted molecular beam epitaxy are reported. The effect of variation in the dielectric constant of the barrier layers on the IF optical phonon modes of the well layers periodically arranged in the MQWs is investigated.
[ 0, 1, 0, 0, 0, 0 ]
Title: Predictability of escape for a stochastic saddle-node bifurcation: when rare events are typical, Abstract: Transitions between multiple stable states of nonlinear systems are ubiquitous in physics, chemistry, and beyond. Two types of behaviors are usually seen as mutually exclusive: unpredictable noise-induced transitions and predictable bifurcations of the underlying vector field. Here, we report a new situation, corresponding to a fluctuating system approaching a bifurcation, where both effects collaborate. We show that the problem can be reduced to a single control parameter governing the competition between deterministic and stochastic effects. Two asymptotic regimes are identified: when the control parameter is small (e.g. small noise), deviations from the deterministic case are well described by the Freidlin-Wentzell theory. In particular, escapes over the potential barrier are very rare events. When the parameter is large (e.g. large noise), such events become typical. Unlike pure noise-induced transitions, the distribution of the escape time is peaked around a value which is asymptotically predicted by an adiabatic approximation. We show that the two regimes are characterized by qualitatively different reacting trajectories, with algebraic and exponential divergence, respectively.
[ 0, 1, 0, 0, 0, 0 ]
Title: Sieving rational points on varieties, Abstract: A sieve for rational points on suitable varieties is developed, together with applications to counting rational points in thin sets, the number of varieties in a family which are everywhere locally soluble, and to the notion of friable rational points with respect to divisors. In the special case of quadrics, sharper estimates are obtained by developing a version of the Selberg sieve for rational points.
[ 0, 0, 1, 0, 0, 0 ]
Title: Sampling-based probabilistic inference emerges from learning in neural circuits with a cost on reliability, Abstract: Neural responses in the cortex change over time both systematically, due to ongoing plasticity and learning, and seemingly randomly, due to various sources of noise and variability. Most previous work considered each of these processes, learning and variability, in isolation -- here we study neural networks exhibiting both and show that their interaction leads to the emergence of powerful computational properties. We trained neural networks on classical unsupervised learning tasks, in which the objective was to represent their inputs in an efficient, easily decodable form, with an additional cost for neural reliability which we derived from basic biophysical considerations. This cost on reliability introduced a tradeoff between energetically cheap but inaccurate representations and energetically costly but accurate ones. Despite the learning tasks being non-probabilistic, the networks solved this tradeoff by developing a probabilistic representation: neural variability represented samples from statistically appropriate posterior distributions that would result from performing probabilistic inference over their inputs. We provide an analytical understanding of this result by revealing a connection between the cost of reliability, and the objective for a state-of-the-art Bayesian inference strategy: variational autoencoders. We show that the same cost leads to the emergence of increasingly accurate probabilistic representations as networks become more complex, from single-layer feed-forward, through multi-layer feed-forward, to recurrent architectures. Our results provide insights into why neural responses in sensory areas show signatures of sampling-based probabilistic representations, and may inform future deep learning algorithms and their implementation in stochastic low-precision computing systems.
[ 0, 0, 0, 0, 1, 0 ]
Title: Microscopic Description of Electric and Magnetic Toroidal Multipoles in Hybrid Orbitals, Abstract: We present a general formalism of multipole descriptions under the space-time inversion group. We elucidate that two types of atomic toroidal multipoles, i.e., electric and magnetic, are fundamental pieces to express electronic order parameters in addition to ordinary electric and magnetic multipoles. By deriving quantum-mechanical operators for both toroidal multipoles, we show that electric (magnetic) toroidal multipole higher than dipole (monopole) can become a primary order parameter in a hybridized-orbital system. We also demonstrate emergent cross-correlated couplings between electric, magnetic, and elastic degrees of freedom, such as magneto-electric and magneto(electro)-elastic couplings, under toroidal multipole orders.
[ 0, 1, 0, 0, 0, 0 ]
Title: Algorithms to Approximate Column-Sparse Packing Problems, Abstract: Column-sparse packing problems arise in several contexts in both deterministic and stochastic discrete optimization. We present two unifying ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain improved approximation algorithms for some well-known families of such problems. As three main examples, we attain the integrality gap, up to lower-order terms, for known LP relaxations for k-column sparse packing integer programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set packing (Bansal et al., Algorithmica, 2012), and go "half the remaining distance" to optimal for a major integrality-gap conjecture of Furedi, Kahn and Seymour on hypergraph matching (Combinatorica, 1993).
[ 1, 0, 0, 0, 0, 0 ]
Title: Teaching the Doppler Effect in Astrophysics, Abstract: The Doppler effect is a shift in the frequency of waves emitted from an object moving relative to the observer. By observing and analysing the Doppler shift in electromagnetic waves from astronomical objects, astronomers gain greater insight into the structure and operation of our universe. In this paper, a simple technique is described for teaching the basics of the Doppler effect to undergraduate astrophysics students using acoustic waves. An advantage of the technique is that it produces a visual representation of the acoustic Doppler shift. The equipment comprises a 40 kHz acoustic transmitter and a microphone. The sound is bounced off a computer fan and the signal collected by a DrDAQ ADC and processed by a spectrum analyser. Widening of the spectrum is observed as the fan power supply potential is increased from 4 to 12 V.
[ 0, 1, 0, 0, 0, 0 ]