Columns: text (string, lengths 57 to 2.88k), labels (sequence of length 6)
Title: Merging real and virtual worlds: An analysis of the state of the art and practical evaluation of Microsoft Hololens, Abstract: Achieving a symbiotic blending of reality and virtuality is a dream that has long occupied the minds of many people. Advances in various domains constantly bring us closer to making that dream come true. Augmented reality and virtual reality are in fact trending terms and are expected to progress further in the years to come. This master's thesis aims to explore these areas and starts by defining necessary terms such as augmented reality (AR) and virtual reality (VR). Usual taxonomies used to classify and compare the corresponding experiences are then discussed. To enable such applications, many technical challenges need to be tackled, such as accurate motion tracking with 6 degrees of freedom (positional and rotational), which is necessary for compelling experiences and to prevent user sickness. Additionally, augmented reality experiences typically rely on image processing to position the superimposed content. To do so, "paper" markers or features extracted from the environment are often employed. Both sets of techniques are explored and common solutions and algorithms are presented. After investigating those technical aspects, I carry out an objective comparison of the existing state of the art and state of the practice in those domains, and I discuss present and potential applications in these areas. As a practical validation, I present the results of an application that I have developed using Microsoft HoloLens, one of the more advanced affordable augmented reality technologies available today. Based on the experience and lessons learned during this development, I discuss the limitations of current technologies and present some avenues for future research.
[ 1, 0, 0, 0, 0, 0 ]
Title: Injectivity of the connecting homomorphisms, Abstract: Let $A$ be the inductive limit of a sequence $$A_1\, \xrightarrow{\phi_{1,2}} \,A_2\,\xrightarrow{\phi_{2,3}} \,A_3\rightarrow\cdots$$ with $A_n=\oplus_{i=1}^{n_i}A_{[n,i]}$, where all the $A_{[n,i]}$ are Elliott-Thomsen algebras and the $\phi_{n,n+1}$ are homomorphisms. In this paper, we prove that $A$ can be written as another inductive limit $$B_1\,\xrightarrow{\psi_{1,2}} \,B_2\,\xrightarrow{\psi_{2,3}} \,B_3\rightarrow\cdots$$ with $B_n=\oplus_{i=1}^{n_i}B_{[n,i]}$, where all the $B_{[n,i]}$ are Elliott-Thomsen building blocks, with the extra condition that all the connecting homomorphisms $\psi_{n,n+1}$ are injective.
[ 0, 0, 1, 0, 0, 0 ]
Title: Selective Inference for Change Point Detection in Multi-dimensional Sequences, Abstract: We study the problem of detecting change points (CPs) that are characterized by a subset of dimensions in a multi-dimensional sequence. A method for detecting those CPs can be formulated as a two-stage procedure: one stage for selecting relevant dimensions, and another for selecting CPs. It has been difficult to properly control the false detection probability of these CP detection methods because the selection bias in each stage must be properly corrected. Our main contribution in this paper is to formulate a CP detection problem as a selective inference problem, and show that exact (non-asymptotic) inference is possible for a class of CP detection methods. We demonstrate the performance of the proposed selective inference framework through numerical simulations and its application to our motivating medical data analysis problem.
[ 0, 0, 0, 1, 0, 0 ]
Title: Anyonic self-induced disorder in a stabilizer code: quasi-many body localization in a translational invariant model, Abstract: We enquire into quasi-many-body localization in topologically ordered states of matter, revolving around the case of the Kitaev toric code on a ladder geometry, where different types of anyonic defects carry different masses induced by environmental errors. Our study verifies that a random arrangement of anyons generates a complex energy landscape solely through braiding statistics, which suffices to suppress the diffusion of defects in such a multi-component anyonic liquid. This non-ergodic dynamics suggests a promising scenario for the investigation of quasi-many-body localization. Computing standard diagnostics evidences that, in such a disorder-free many-body system, a typical initial inhomogeneity of anyons gives birth to glassy dynamics with an exponentially diverging time scale of the full relaxation. A by-product of this dynamical effect is manifested in the slow growth of the entanglement entropy, with characteristic time scales bearing resemblance to those of the inhomogeneity relaxation. This setting provides a new platform which paves the way toward impeding logical errors by self-localization of anyons in a generic, high-energy state, originating in their exotic statistics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Earthquake Early Warning and Beyond: Systems Challenges in Smartphone-based Seismic Network, Abstract: Earthquake Early Warning (EEW) systems can effectively reduce fatalities, injuries, and damage caused by earthquakes. Current EEW systems are mostly based on traditional seismic and geodetic networks, and exist only in a few countries due to the high cost of installing and maintaining such systems. The MyShake system takes a different approach and turns people's smartphones into portable seismic sensors to detect earthquake-like motions. However, to issue EEW messages with high accuracy and low latency in the real world, we need to address a number of challenges related to mobile computing. In this paper, we first summarize our experience building and deploying the MyShake system, then focus on two key challenges for smartphone-based EEW (sensing heterogeneity and user/system dynamics) and present some preliminary explorations of them. We also discuss other challenges and new research directions associated with smartphone-based seismic networks.
[ 1, 0, 0, 0, 0, 0 ]
Title: Gate-error analysis in simulations of quantum computers with transmon qubits, Abstract: In the model of gate-based quantum computation, the qubits are controlled by a sequence of quantum gates. In superconducting qubit systems, these gates can be implemented by voltage pulses. The success of implementing a particular gate can be expressed by various metrics such as the average gate fidelity, the diamond distance, and the unitarity. We analyze these metrics of gate pulses for a system of two superconducting transmon qubits coupled by a resonator, a system inspired by the architecture of the IBM Quantum Experience. The metrics are obtained by numerical solution of the time-dependent Schrödinger equation of the transmon system. We find that the metrics reflect systematic errors that are most pronounced for echoed cross-resonance gates, but that none of the studied metrics can reliably predict the performance of a gate when used repeatedly in a quantum algorithm.
[ 0, 1, 0, 0, 0, 0 ]
Title: Common fixed point theorems under an implicit contractive condition on metric spaces endowed with an arbitrary binary relation and an application, Abstract: The aim of this paper is to establish some metrical coincidence and common fixed point theorems with an arbitrary binary relation under an implicit contractive condition which is general enough to cover a multitude of well-known contraction conditions at once, besides yielding several new ones. We also provide an example to demonstrate the generality of our results over several well-known corresponding results in the existing literature. Finally, we utilize our results to prove an existence theorem ensuring the solution of an integral equation.
[ 0, 0, 1, 0, 0, 0 ]
Title: Operationalizing Conflict and Cooperation between Automated Software Agents in Wikipedia: A Replication and Expansion of 'Even Good Bots Fight', Abstract: This paper replicates, extends, and refutes conclusions made in a study published in PLoS ONE ("Even Good Bots Fight"), which claimed to identify substantial levels of conflict between automated software agents (or bots) in Wikipedia using purely quantitative methods. By applying an integrative mixed-methods approach drawing on trace ethnography, we place these alleged cases of bot-bot conflict into context and arrive at a better understanding of these interactions. We found that overwhelmingly, the interactions previously characterized as problematic instances of conflict are typically better characterized as routine, productive, even collaborative work. These results challenge past work and show the importance of qualitative/quantitative collaboration. In our paper, we present quantitative metrics and qualitative heuristics for operationalizing bot-bot conflict. We give thick descriptions of kinds of events that present as bot-bot reverts, helping distinguish conflict from non-conflict. We computationally classify these kinds of events through patterns in edit summaries. By interpreting found/trace data in the socio-technical contexts in which people give that data meaning, we gain more from quantitative measurements, drawing deeper understandings about the governance of algorithmic systems in Wikipedia. We have also released our data collection, processing, and analysis pipeline, to facilitate computational reproducibility of our findings and to help other researchers interested in conducting similar mixed-method scholarship in other platforms and contexts.
[ 1, 0, 0, 0, 0, 0 ]
Title: Tuning the piezoelectric and mechanical properties of the AlN system via alloying with YN and BN, Abstract: Recent advances in microelectromechanical systems often require multifunctional materials, which are designed so as to optimize more than one property. Using density functional theory calculations for alloyed nitride systems, we illustrate how co-alloying a piezoelectric material (AlN) with different nitrides helps tune both its piezoelectric and mechanical properties simultaneously. Wurtzite AlN-YN alloys display increased piezoelectric response with YN concentration, accompanied by mechanical softening along the crystallographic c direction. Both effects increase the electromechanical coupling coefficients relevant for transducers and actuators. Resonator applications, however, require superior stiffness, thus leading to the need to decouple the increased piezoelectric response from a softened lattice. We show that co-alloying of AlN with YN and BN results in improved elastic properties while retaining most of the piezoelectric enhancements from YN alloying. This finding may lead to new avenues for tuning the design properties of piezoelectrics through composition-property maps. Keywords: piezoelectricity, electromechanical coupling, density functional theory, co-alloying
[ 0, 1, 0, 0, 0, 0 ]
Title: Simple Surveys: Response Retrieval Inspired by Recommendation Systems, Abstract: In the last decade, the use of simple rating and comparison surveys has proliferated on social and digital media platforms to fuel recommendations. These simple surveys and their extrapolation with machine learning algorithms shed light on user preferences over large and growing pools of items, such as movies, songs and ads. Social scientists have a long history of measuring perceptions, preferences and opinions, often over smaller, discrete item sets with exhaustive rating or ranking surveys. This paper introduces simple surveys for social science application. We ran experiments to compare the predictive accuracy of both individual and aggregate comparative assessments using four types of simple surveys: pairwise comparisons and ratings on 2, 5 and continuous point scales in three distinct contexts: perceived Safety of Google Streetview Images, Likeability of Artwork, and Hilarity of Animal GIFs. Across contexts, we find that continuous scale ratings best predict individual assessments but consume the most time and cognitive effort. Binary choice surveys are quick and perform best to predict aggregate assessments, useful for collective decision tasks, but poorly predict personalized preferences, for which they are currently used by Netflix to recommend movies. Pairwise comparisons, by contrast, perform well to predict personal assessments, but poorly predict aggregate assessments despite being widely used to crowdsource ideas and collective preferences. We demonstrate how findings from these surveys can be visualized in a low-dimensional space that reveals distinct respondent interpretations of questions asked in each context. We conclude by reflecting on differences between sparse, incomplete simple surveys and their traditional survey counterparts in terms of efficiency, information elicited and settings in which knowing less about more may be critical for social science.
[ 1, 0, 0, 1, 0, 0 ]
Title: A Reduction for the Distinct Distances Problem in ${\mathbb R}^d$, Abstract: We introduce a reduction from the distinct distances problem in ${\mathbb R}^d$ to an incidence problem with $(d-1)$-flats in ${\mathbb R}^{2d-1}$. Deriving the conjectured bound for this incidence problem (the bound predicted by the polynomial partitioning technique) would lead to a tight bound for the distinct distances problem in ${\mathbb R}^d$. The reduction provides a large amount of information about the $(d-1)$-flats, and a framework for deriving more restrictions that these satisfy. Our reduction is based on introducing a Lie group that is a double cover of the special Euclidean group. This group can be seen as a variant of the Spin group, and a large part of our analysis involves studying its properties.
[ 0, 0, 1, 0, 0, 0 ]
Title: Local electronic properties of the graphene-protected giant Rashba-split BiAg$_2$ surface, Abstract: We report the preparation of the interface between graphene and the strong Rashba-split BiAg$_2$ surface alloy and the investigation of its structure as well as its electronic properties by means of scanning tunneling microscopy/spectroscopy and density functional theory calculations. Upon evaluation of the quasiparticle interference patterns, the unperturbed linear dispersion of the $\pi$ band of $n$-doped graphene is observed. Our results also reveal the intact nature of the giant Rashba-split surface states of the BiAg$_2$ alloy, which demonstrate only a moderate downward energy shift in the presence of graphene. This effect is explained in the framework of density functional theory by an inward relaxation of the Bi atoms at the interface and the subsequent delocalisation of the wave function of the surface states. Our findings demonstrate a realistic pathway to prepare a graphene-protected giant Rashba-split BiAg$_2$ surface for possible spintronic applications.
[ 0, 1, 0, 0, 0, 0 ]
Title: Lattice Boltzmann simulation of viscous fingering of immiscible displacement in a channel using an improved wetting scheme, Abstract: An improved wetting boundary implementation strategy based on the lattice Boltzmann color-gradient model is proposed in this paper. In this strategy, an extra interface force condition is formulated based on the diffuse interface assumption and is employed in the contact line region. The scheme has been validated on three benchmark problems: static droplet wetting on a flat surface and on a curved surface, and dynamic capillary filling. Good performance is shown in all three cases. Building on this strict validation of our scheme, the viscous fingering phenomenon of immiscible fluid displacement in a two-dimensional channel is restudied in this paper. A high viscosity ratio, a wide range of contact angles, an accurate moving contact line, and mutual independence between surface tension and viscosity are the obvious advantages of our model. We find a linear relationship between the contact angle and the displacement velocity or the variation of the finger length. When the viscosity ratio is smaller than 20, the displacement velocity increases with increasing viscosity ratio and decreasing capillary number, and when the viscosity ratio is larger than 20, the displacement velocity tends to a specific constant. A similar conclusion is obtained for the variation of the finger length.
[ 0, 1, 0, 0, 0, 0 ]
Title: Implementation of Control Strategies for Sterile Insect Techniques, Abstract: In this paper, we propose a sex-structured entomological model that serves as a basis for design of control strategies relying on releases of sterile male mosquitoes (Aedes spp.) and aiming at elimination of the wild vector population in some target locality. We consider different types of releases (constant and periodic impulsive), providing necessary conditions to reach elimination. However, the main part of the paper is focused on the study of the periodic impulsive control in different situations. When the size of the wild mosquito population cannot be assessed in real time, we propose the so-called open-loop control strategy that relies on periodic impulsive releases of sterile males with constant release size. Under this control mode, global convergence towards the mosquito-free equilibrium is proved on the grounds of a sufficient condition that relates the size and frequency of releases. If periodic assessments (either synchronized with releases or more sparse) of the wild population size are available in real time, we propose the so-called closed-loop control strategy, which is adjustable in accordance with reliable estimations of the wild population sizes. Under this control mode, global convergence to the mosquito-free equilibrium is proved on the grounds of another sufficient condition that relates not only the size and frequency of periodic releases but also the frequency of sparse measurements taken on wild populations. Finally, we propose a mixed control strategy that combines open-loop and closed-loop strategies. This control mode renders the best result, in terms of overall time needed to reach elimination and the number of releases to be effectively carried out during the whole release campaign, while requiring a reasonable amount of released sterile insects.
[ 0, 0, 0, 0, 1, 0 ]
Title: Convergence and submeasures in Boolean algebras, Abstract: A Boolean algebra carries a strictly positive exhaustive submeasure if and only if it has a sequential topology that is uniformly Frechet.
[ 0, 0, 1, 0, 0, 0 ]
Title: Genetic algorithm-based control of birefringent filtering for self-tuning, self-pulsing fiber lasers, Abstract: Polarization-based filtering in fiber lasers is well-known to enable spectral tunability and a wide range of dynamical operating states. This effect is rarely exploited in practical systems, however, because optimization of cavity parameters is non-trivial and evolves due to environmental sensitivity. Here, we report a genetic algorithm-based approach, utilizing electronic control of the cavity transfer function, to autonomously achieve broad wavelength tuning and the generation of Q-switched pulses with variable repetition rate and duration. The practicalities and limitations of simultaneous spectral and temporal self-tuning from a simple fiber laser are discussed, paving the way to on-demand laser properties through algorithmic control and machine learning schemes.
[ 0, 1, 0, 0, 0, 0 ]
Title: Public discourse and news consumption on online social media: A quantitative, cross-platform analysis of the Italian Referendum, Abstract: The rising attention to the spreading of fake news and unsubstantiated rumors on online social media and the pivotal role played by confirmation bias have led researchers to investigate different aspects of the phenomenon. Experimental evidence has shown that confirmatory information gets accepted even if it contains deliberately false claims, while dissenting information is mainly ignored or might even increase group polarization. It seems reasonable that, to address the misinformation problem properly, we have to understand the main determinants behind content consumption and the emergence of narratives on online social media. In this paper we address such a challenge by focusing on the discussion around the Italian Constitutional Referendum and conducting a quantitative, cross-platform analysis of both Facebook public pages and Twitter accounts. We observe the spontaneous emergence of well-separated communities on both platforms. Such segregation is completely spontaneous, since no categorization of contents was performed a priori. By exploring the dynamics behind the discussion, we find that users tend to restrict their attention to a specific set of Facebook pages/Twitter accounts. Finally, taking advantage of automatic topic extraction and sentiment analysis techniques, we are able to identify the most controversial topics inside and across both platforms. We measure the distance between how a certain topic is presented in the posts/tweets and the related emotional response of users. Our results provide interesting insights for the understanding of the evolution of the core narratives behind different echo chambers and for the early detection of massive viral phenomena around false claims.
[ 1, 1, 0, 0, 0, 0 ]
Title: AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms, Abstract: Approximate probabilistic inference algorithms are central to many fields. Examples include sequential Monte Carlo inference in robotics, variational inference in machine learning, and Markov chain Monte Carlo inference in statistics. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific data set. This paper introduces the auxiliary inference divergence estimator (AIDE), an algorithm for measuring the accuracy of approximate inference algorithms. AIDE is based on the observation that inference algorithms can be treated as probabilistic models and the random variables used within the inference algorithm can be viewed as auxiliary variables. This view leads to a new estimator for the symmetric KL divergence between the approximating distributions of two inference algorithms. The paper illustrates application of AIDE to algorithms for inference in regression, hidden Markov, and Dirichlet process mixture models. The experiments show that AIDE captures the qualitative behavior of a broad class of inference algorithms and can detect failure modes of inference algorithms that are missed by standard heuristics.
[ 1, 0, 0, 1, 0, 0 ]
Title: Kidnapping Model: An Extension of Selten's Game, Abstract: Selten's game is a kidnapping model where the probability of capturing the kidnapper is independent of whether the hostage has been released or executed. Most often, in view of the elevated sensitivities involved, authorities put greater effort and resources into capturing the kidnapper if the hostage has been executed, in contrast to the case when a ransom is paid to secure the hostage's release. In this paper, we study the asymmetric game when the probability of capturing the kidnapper depends on whether the hostage has been executed or not and find a new uniquely determined perfect equilibrium point in Selten's game.
[ 1, 0, 0, 0, 0, 0 ]
Title: Semiparametric panel data models using neural networks, Abstract: This paper presents an estimator for semiparametric models that uses a feed-forward neural network to fit the nonparametric component. Unlike many methodologies from the machine learning literature, this approach is suitable for longitudinal/panel data. It provides unbiased estimation of the parametric component of the model, with associated confidence intervals that have near-nominal coverage rates. Simulations demonstrate (1) efficiency, (2) that parametric estimates are unbiased, and (3) coverage properties of estimated intervals. An application section demonstrates the method by predicting county-level corn yield using daily weather data from the period 1981-2015, along with parametric time trends representing technological change. The method is shown to out-perform linear methods such as OLS and ridge/lasso, as well as random forest. The procedures described in this paper are implemented in the R package panelNNET.
[ 0, 0, 0, 1, 0, 0 ]
Title: Split and Rephrase, Abstract: We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning preserving sequence of shorter sentences. Like sentence simplification, splitting-and-rephrasing has the potential of benefiting both natural language processing and societal applications. Because shorter sentences are generally better processed by NLP systems, it could be used as a preprocessing step which facilitates and improves the performance of parsers, semantic role labellers and machine translation systems. It should also be of use for people with reading disabilities because it allows the conversion of longer sentences into shorter ones. This paper makes two contributions towards this new task. First, we create and make available a benchmark consisting of 1,066,115 tuples mapping a single complex sentence to a sequence of sentences expressing the same meaning. Second, we propose five models (vanilla sequence-to-sequence to semantically-motivated models) to understand the difficulty of the proposed task.
[ 1, 0, 0, 0, 0, 0 ]
Title: Evaporation and scattering of momentum- and velocity-dependent dark matter in the Sun, Abstract: Dark matter with momentum- or velocity-dependent interactions with nuclei has shown significant promise for explaining the so-called Solar Abundance Problem, a longstanding discrepancy between solar spectroscopy and helioseismology. The best-fit models are all rather light, typically with masses in the range of 3-5 GeV. This is exactly the mass range where dark matter evaporation from the Sun can be important, but to date no detailed calculation of the evaporation of such models has been performed. Here we carry out this calculation, for the first time including arbitrary velocity- and momentum-dependent interactions, thermal effects, and a completely general treatment valid from the optically thin limit all the way through to the optically thick regime. We find that depending on the dark matter mass, interaction strength and type, the mass below which evaporation is relevant can vary from 1 to 4 GeV. This has the effect of weakening some of the better-fitting solutions to the Solar Abundance Problem, but also improving a number of others. As a by-product, we also provide an improved derivation of the capture rate that takes into account thermal and optical depth effects, allowing the standard result to be smoothly matched to the well-known saturation limit.
[ 0, 1, 0, 0, 0, 0 ]
Title: ORSIm Detector: A Novel Object Detection Framework in Optical Remote Sensing Imagery Using Spatial-Frequency Channel Features, Abstract: With the rapid development of spaceborne imaging techniques, object detection in optical remote sensing imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly object scaling and rotation. To this end, we propose a novel object detection framework, called the optical remote sensing imagery detector (ORSIm detector), integrating diverse channel feature extraction, feature learning, fast image pyramid matching, and a boosting strategy. The ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) obtained by jointly considering rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channel, gradient magnitude). Subsequently, we refine the SFCF using a learning-based strategy in order to obtain high-level or semantically meaningful features. In the test phase, we achieve a fast and coarsely-scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experiments conducted on two different airborne datasets demonstrate the superiority and effectiveness of the proposed framework in comparison with previous state-of-the-art methods.
[ 1, 0, 0, 0, 0, 0 ]
Title: Synergies between Exoplanet Surveys and Variable Star Research, Abstract: With the discovery of the first transiting extrasolar planetary system back in 1999, a great number of projects started to hunt for other similar systems. Because the incidence rate of such systems was unknown and the length of the shallow transit events is only a few percent of the orbital period, the goal was to continuously monitor as many stars as possible for at least a period of a few months. Small-aperture, large field of view automated telescope systems have been installed, with a parallel development of new data reduction and analysis methods, leading to better than 1% per data point precision for thousands of stars. With the successful launch of the photometric satellites CoRot and Kepler, the precision increased further by one to two orders of magnitude. Millions of stars have been analyzed and searched for transits. In the history of variable star astronomy this is the biggest undertaking so far, resulting in photometric time series inventories immensely valuable for the whole field. In this review we briefly discuss the methods of data analysis that were inspired by the main science driver of these surveys and highlight some of the most interesting variable star results that impact the field of variable star astronomy.
[ 0, 1, 0, 0, 0, 0 ]
Title: Improved Set-based Symbolic Algorithms for Parity Games, Abstract: Graph games with {\omega}-regular winning conditions provide a mathematical framework to analyze a wide range of problems in the analysis of reactive systems and programs (such as the synthesis of reactive systems, program repair, and the verification of branching time properties). Parity conditions are canonical forms to specify {\omega}-regular winning conditions. Graph games with parity conditions are equivalent to {\mu}-calculus model checking, and thus a very important algorithmic problem. Symbolic algorithms are of great significance because they provide scalable algorithms for the analysis of large finite-state systems, as well as algorithms for the analysis of infinite-state systems with finite quotient. A set-based symbolic algorithm uses the basic set operations and the one-step predecessor operators. We consider graph games with $n$ vertices and parity conditions with $c$ priorities. While many explicit algorithms exist for graph games with parity conditions, for set-based symbolic algorithms there are only two algorithms (notice that we use space to refer to the number of sets stored by a symbolic algorithm): (a) the basic algorithm that requires $O(n^c)$ symbolic operations and linear space; and (b) an improved algorithm that requires $O(n^{c/2+1})$ symbolic operations but also $O(n^{c/2+1})$ space (i.e., exponential space). In this work we present two set-based symbolic algorithms for parity games: (a) our first algorithm requires $O(n^{c/2+1})$ symbolic operations and only requires linear space; and (b) developing on our first algorithm, we present an algorithm that requires $O(n^{c/3+1})$ symbolic operations and only linear space. We also present the first linear space set-based symbolic algorithm for parity games that requires at most a sub-exponential number of symbolic operations.
[ 1, 0, 0, 0, 0, 0 ]
Title: Existence of Evolutionarily Stable Strategies Remains Hard to Decide for a Wide Range of Payoff Values, Abstract: The concept of an evolutionarily stable strategy (ESS), introduced by Maynard Smith and Price, is a refinement of Nash equilibrium in 2-player symmetric games intended to explain counter-intuitive natural phenomena; the existence of an ESS is not guaranteed in every game. The problem of deciding whether a game possesses an ESS has been shown to be $\Sigma_{2}^{P}$-complete by Conitzer using the preceding important work by Etessami and Lochbihler. The latter, among other results, proved that deciding the existence of an ESS is both NP-hard and coNP-hard. In this paper we introduce a "reduction robustness" notion and we show that deciding the existence of an ESS remains coNP-hard for a wide range of games even if we arbitrarily perturb within some intervals the payoff values of the game under consideration. In contrast, an ESS exists almost surely for large games with random and independent payoffs chosen from the same distribution.
[ 1, 0, 0, 0, 0, 0 ]
Title: Compound Poisson approximation to estimate the Lévy density, Abstract: We construct an estimator of the Lévy density of a pure jump Lévy process, possibly of infinite variation, from the discrete observation of one trajectory at high frequency. The novelty of our procedure is that we directly estimate the Lévy density relying on a pathwise strategy, whereas existing procedures rely on spectral techniques. By taking advantage of a compound Poisson approximation of the Lévy density, we circumvent the use of spectral techniques and in particular of the Lévy-Khintchine formula. A linear wavelet estimator is built and its performance is studied in terms of $L_p$ loss functions, $p\geq 1$, over Besov balls. The resulting rates are minimax-optimal for a large class of Lévy processes. We discuss the robustness of the procedure to the presence of a Brownian part and to the estimation set getting close to the critical value 0.
[ 0, 0, 1, 1, 0, 0 ]
Title: On the non-vanishing of certain Dirichlet series, Abstract: Given $k\in\mathbb N$, we study the vanishing of the Dirichlet series $$D_k(s,f):=\sum_{n\geq1} d_k(n)f(n)n^{-s}$$ at the point $s=1$, where $f$ is a periodic function modulo a prime $p$. We show that if $(k,p-1)=1$ or $(k,p-1)=2$ and $p\equiv 3\mod 4$, then there are no odd rational-valued functions $f\not\equiv 0$ such that $D_k(1,f)=0$, whereas in all other cases there are examples of odd functions $f$ such that $D_k(1,f)=0$. As a consequence, we obtain, for example, that the values $L(1,\chi)^2$, where $\chi$ ranges over odd characters mod $p$, are linearly independent over $\mathbb Q$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Points2Pix: 3D Point-Cloud to Image Translation using conditional Generative Adversarial Networks, Abstract: We present the first approach for 3D point-cloud to image translation based on conditional Generative Adversarial Networks (cGAN). The model handles multi-modal information sources from different domains, i.e. raw point-sets and images. The generator is capable of processing three conditions, where the point-cloud is encoded as a raw point-set and a camera projection. An image background patch is used as a constraint to bias environmental texturing. A global approximation function within the generator is directly applied on the point-cloud (Point-Net). Hence, the representation learning model incorporates global 3D characteristics directly at the latent feature space. Conditions are used to bias the background and the viewpoint of the generated image. This opens up new ways of augmenting or texturing 3D data with the aim of generating fully individual images. We successfully evaluated our method on the Kitti and SunRGBD datasets with an outstanding object detection inception score.
[ 1, 0, 0, 0, 0, 0 ]
Title: Parametrization and Generation of Geological Models with Generative Adversarial Networks, Abstract: One of the main challenges in the parametrization of geological models is the ability to capture complex geological structures often observed in subsurface fields. In recent years, Generative Adversarial Networks (GAN) were proposed as an efficient method for the generation and parametrization of complex data, showing state-of-the-art performances in challenging computer vision tasks such as reproducing natural images (handwritten digits, human faces, etc.). In this work, we study the application of Wasserstein GAN for the parametrization of geological models. The effectiveness of the method is assessed for uncertainty propagation tasks using several test cases involving different permeability patterns and subsurface flow problems. Results show that GANs are able to generate samples that preserve the multipoint statistical features of the geological models both visually and quantitatively. The generated samples reproduce both the geological structures and the flow properties of the reference data.
[ 0, 1, 0, 1, 0, 0 ]
Title: Finite element procedures for computing normals and mean curvature on triangulated surfaces and their use for mesh refinement, Abstract: In this paper we consider finite element approaches to computing the mean curvature vector and normal at the vertices of piecewise linear triangulated surfaces. In particular, we adopt a stabilization technique which allows for first order $L^2$-convergence of the mean curvature vector and apply this stabilization technique also to the computation of continuous, recovered, normals using $L^2$-projections of the piecewise constant face normals. Finally, we use our projected normals to define an adaptive mesh refinement approach to geometry resolution where we also employ spline techniques to reconstruct the surface before refinement. We compare our results to previously proposed approaches.
[ 0, 0, 1, 0, 0, 0 ]
Title: Maximum a posteriori estimation through simulated annealing for binary asteroid orbit determination, Abstract: This paper considers a new method for the binary asteroid orbit determination problem. The method is based on the Bayesian approach with a global optimisation algorithm. The orbital parameters to be determined are modelled through an a posteriori distribution made of a priori and likelihood terms. The first term constrains the parameter space and allows the introduction of available knowledge about the orbit. The second term is based on the given observations and allows us to use and compare different observational error models. Once the a posteriori model is built, the estimator of the orbital parameters is computed using a global optimisation procedure: the simulated annealing algorithm. The maximum a posteriori (MAP) techniques are verified using simulated and real data. The obtained results validate the proposed method. The new approach guarantees independence from the initial parameter estimation and theoretical convergence towards the globally optimal solution. It is particularly useful in situations where a good initial orbit estimate is difficult to obtain, where observations are not well sampled, and where the statistical behaviour of the observational errors cannot be assumed to be Gaussian.
[ 0, 1, 0, 1, 0, 0 ]
Title: Extrapolating Expected Accuracies for Large Multi-Class Problems, Abstract: The difficulty of multi-class classification generally increases with the number of classes. Using data from a subset of the classes, can we predict how well a classifier will scale with an increased number of classes? Under the assumptions that the classes are sampled identically and independently from a population, and that the classifier is based on independently learned scoring functions, we show that the expected accuracy when the classifier is trained on k classes is the (k-1)st moment of a certain distribution that can be estimated from data. We present an unbiased estimation method based on the theory, and demonstrate its application on a facial recognition example.
[ 1, 0, 0, 1, 0, 0 ]
Title: RSI-CB: A Large Scale Remote Sensing Image Classification Benchmark via Crowdsource Data, Abstract: Remote sensing image classification is a fundamental task in remote sensing image processing. The remote sensing field still lacks a large-scale benchmark comparable to ImageNet or Place2. We propose a remote sensing image classification benchmark (RSI-CB) based on crowdsourced data which is massive, scalable, and diverse. Using crowdsourced data, we can efficiently annotate ground objects in remote sensing images by points of interest, vector data from OSM, or other crowdsourced data. Based on this method, we construct a worldwide large-scale benchmark for remote sensing image classification. The benchmark contains two sub-datasets with image sizes of 256 * 256 and 128 * 128, respectively, since different convolutional neural networks require different image sizes. The former sub-dataset contains 6 categories with 35 subclasses and a total of more than 24,000 images; the latter contains 6 categories with 45 subclasses and a total of more than 36,000 images. The six categories are agricultural land, construction land and facilities, transportation and facilities, water and water conservancy facilities, woodland, and other land, and each category has several subclasses. This classification system is defined according to the national standard of land use classification in China, and is inspired by the hierarchy mechanism of ImageNet. Finally, we have conducted a large number of experiments to compare RSI-CB with the SAT-4 and UC-Merced datasets on handcrafted features, such as SIFT, and classical CNN models, such as AlexNet, VGG, GoogleNet, and ResNet. We also show that CNN models trained on RSI-CB perform well when transferred to other datasets, i.e. UC-Merced, and have good generalization ability. The experiments show that RSI-CB is more suitable as a benchmark for the remote sensing image classification task than other datasets in the big data era, and can potentially be used in practical applications.
[ 1, 0, 0, 0, 0, 0 ]
Title: Quantum groups, Yang-Baxter maps and quasi-determinants, Abstract: For any quasi-triangular Hopf algebra, there exists the universal R-matrix, which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra $U_{q}(gl(n))$. Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.
[ 0, 1, 1, 0, 0, 0 ]
Title: A Brownian Motion Model and Extreme Belief Machine for Modeling Sensor Data Measurements, Abstract: As the title suggests, we will describe (and justify through the presentation of some of the relevant mathematics) prediction methodologies for sensor measurements. This exposition will mainly be concerned with the mathematics related to modeling the sensor measurements.
[ 1, 0, 0, 0, 0, 0 ]
Title: On the nature of the magnetic phase transition in a Weyl semimetal, Abstract: We investigate the nature of the magnetic phase transition induced by the short-ranged electron-electron interactions in a Weyl semimetal by using the perturbative renormalization-group method. We find that the critical point associated with the quantum phase transition is characterized by a Gaussian fixed point perturbed by a dangerously irrelevant operator. Although the low-energy and long-distance physics is governed by a free theory, the velocities of the fermionic quasiparticles and the magnetic excitations suffer from nontrivial renormalization effects. In particular, their ratio approaches one, which indicates an emergent Lorentz symmetry at low energies. We further investigate the stability of the fixed point in the presence of weak disorder. We show that while the fixed point is generally stable against weak disorder, among those disorders that are consistent with the emergent chiral symmetry of the clean system, a moderately strong random chemical potential and/or random vector potential may induce a quantum phase transition towards a disorder-dominated phase. We propose a global phase diagram of the Weyl semimetal in the presence of both electron-electron interactions and disorder based on our results.
[ 0, 1, 0, 0, 0, 0 ]
Title: Analysis of Peer Review Effectiveness for Academic Journals Based on Distributed Parallel System, Abstract: A simulation model based on parallel systems is established, aiming to explore the relation between the number of submissions and the overall quality of academic journals within a similar discipline under peer review. The model can effectively simulate the submission, review and acceptance behaviors of academic journals, in a distributed manner. According to the simulation experiments, it could possibly happen that the overall standard of academic journals may deteriorate due to excessive submissions.
[ 1, 0, 0, 0, 0, 0 ]
Title: Observational signatures of linear warps in circumbinary discs, Abstract: In recent years an increasing number of observational studies have hinted at the presence of warps in protoplanetary discs; however, a general, comprehensive description of the observational diagnostics of warped discs has been missing. We performed a series of 3D SPH hydrodynamic simulations and combined them with 3D radiative transfer calculations to study the observability of warps in circumbinary discs, whose plane is misaligned with respect to the orbital plane of the central binary. Our numerical hydrodynamic simulations confirm previous analytical results on the dependence of the warp structure on the viscosity and the initial misalignment between the binary and the disc. To study the observational signatures of warps we calculate images in the continuum at near-infrared and sub-millimetre wavelengths and in the pure rotational transition of CO in the sub-millimetre. Warped circumbinary discs show surface brightness asymmetry in near-infrared scattered light images as well as in optically thick gas lines at sub-millimetre wavelengths. The asymmetry is caused by self-shadowing of the disc by the inner warped regions, thus the strength of the asymmetry depends on the strength of the warp. The projected velocity field derived from line observations shows characteristic deviations from that of an unperturbed disc, namely twists and a change in the slope of the rotation curve. In extreme cases even the direction of rotation appears to change in the disc inwards of a characteristic radius. The strength of the kinematical signatures of warps decreases with increasing inclination. The strength of all warp signatures decreases with decreasing viscosity.
[ 0, 1, 0, 0, 0, 0 ]
Title: Multi-parameter One-Sided Monitoring Test, Abstract: Multi-parameter one-sided hypothesis test problems arise naturally in many applications. We are particularly interested in effective tests for monitoring multiple quality indices in forestry products. Our search reveals that there are many effective statistical methods in the literature for normal data, and that they can easily be adapted for non-normal data. We find that the beautiful likelihood ratio test is unsatisfactory, because in order to control the size, it must cope with the least favorable distributions at the cost of power. In this paper, we find a novel way to slightly ease the size control, obtaining a much more powerful test. Simulation confirms that the new test retains good control of the type I error and is markedly more powerful than the likelihood ratio test as well as many competitors based on normal data. The new method performs well in the context of monitoring multiple quality indices.
[ 0, 0, 1, 1, 0, 0 ]
Title: Statistical mechanics of low-rank tensor decomposition, Abstract: Often, large, high dimensional datasets collected across multiple modalities can be organized as a higher order tensor. Low-rank tensor decomposition then arises as a powerful and widely used tool to discover simple low dimensional structures underlying such data. However, we currently lack a theoretical understanding of the algorithmic behavior of low-rank tensor decompositions. We derive Bayesian approximate message passing (AMP) algorithms for recovering arbitrarily shaped low-rank tensors buried within noise, and we employ dynamic mean field theory to precisely characterize their performance. Our theory reveals the existence of phase transitions between easy, hard and impossible inference regimes, and displays an excellent match with simulations. Moreover, it reveals several qualitative surprises compared to the behavior of symmetric, cubic tensor decomposition. Finally, we compare our AMP algorithm to the most commonly used algorithm, alternating least squares (ALS), and demonstrate that AMP significantly outperforms ALS in the presence of noise.
[ 0, 0, 0, 0, 1, 0 ]
Title: A path integral based model for stocks and order dynamics, Abstract: We introduce a model for the short-term dynamics of financial assets based on an application to finance of quantum gauge theory, developing ideas of Ilinski. We present a numerical algorithm for the computation of the probability distribution of prices and compare the results with APPLE stock prices and the S&P500 index.
[ 0, 0, 0, 0, 0, 1 ]
Title: A New Algorithm to Automate Inductive Learning of Default Theories, Abstract: In inductive learning of a broad concept, an algorithm should be able to distinguish concept examples from exceptions and noisy data. An approach through recursively finding patterns in exceptions turns out to correspond to the problem of learning default theories. Default logic is what humans employ in common-sense reasoning. Therefore, learned default theories are better understood by humans. In this paper, we present new algorithms to learn default theories in the form of non-monotonic logic programs. Experiments reported in this paper show that our algorithms are a significant improvement over traditional approaches based on inductive logic programming.
[ 1, 0, 0, 0, 0, 0 ]
Title: On a result of Fel'dman on linear forms in the values of some E-functions, Abstract: We shall consider a result of Fel'dman, where a sharp Baker-type lower bound is obtained for linear forms in the values of some E-functions. Fel'dman's proof is based on an explicit construction of Padé approximations of the first kind for these functions. In the present paper we introduce Padé approximations of the second kind for the same functions and use these to obtain a slightly improved version of Fel'dman's result.
[ 0, 0, 1, 0, 0, 0 ]
Title: Learning Low-shot facial representations via 2D warping, Abstract: In this work, we mainly study the influence of the 2D warping module for one-shot face recognition.
[ 1, 0, 0, 0, 0, 0 ]
Title: Catalyzed bimolecular reactions in responsive nanoreactors, Abstract: We describe a general theory for surface-catalyzed bimolecular reactions in responsive nanoreactors, catalytically active nanoparticles coated by a stimuli-responsive 'gating' shell, whose permeability controls the activity of the process. We address two archetypal scenarios encountered in this system: The first, where two species diffusing from a bulk solution react at the catalyst's surface; the second where only one of the reactants diffuses from the bulk while the other one is produced at the nanoparticle surface, e.g., by light conversion. We find that in both scenarios the total catalytic rate has the same mathematical structure, once diffusion rates are properly redefined. Moreover, the diffusional fluxes of the different reactants are strongly coupled, providing a richer behavior than that arising in unimolecular reactions. We also show that in stark contrast to bulk reactions, the identification of a limiting reactant is not simply determined by the relative bulk concentrations but controlled by the nanoreactor shell permeability. Finally, we describe an application of our theory by analyzing experimental data on the reaction between hexacyanoferrate (III) and borohydride ions in responsive hydrogel-based core-shell nanoreactors.
[ 0, 1, 0, 0, 0, 0 ]
Title: Hierarchical Learning for Modular Robots, Abstract: We argue that hierarchical methods can become the key for modular robots achieving reconfigurability. We present a hierarchical approach for modular robots that allows a robot to simultaneously learn multiple tasks. Our evaluation results present an environment composed of two different modular robot configurations, namely 3 degrees-of-freedom (DoF) and 4DoF with two corresponding targets. During the training, we switch between configurations and targets aiming to evaluate the possibility of training a neural network that is able to select appropriate motor primitives and robot configuration to achieve the target. The trained neural network is then transferred and executed on a real robot with 3DoF and 4DoF configurations. We demonstrate how this technique generalizes to robots with different configurations and tasks.
[ 1, 0, 0, 0, 0, 0 ]
Title: Online Learning with Diverse User Preferences, Abstract: In this paper, we investigate the impact of diverse user preferences on learning under the stochastic multi-armed bandit (MAB) framework. We aim to show that when the user preferences are sufficiently diverse and each arm can be optimal for certain users, the O(log T) regret incurred by exploring the sub-optimal arms under the standard stochastic MAB setting can be reduced to a constant. Our intuition is that to achieve sub-linear regret, the number of times an optimal arm is pulled should scale linearly in time; when all arms are optimal for certain users and pulled frequently, the estimated arm statistics can quickly converge to their true values, thus reducing the need for exploration dramatically. We cast the problem into a stochastic linear bandits model, where both the user preferences and the states of the arms are modeled as independent and identically distributed (i.i.d.) d-dimensional random vectors. After receiving the user preference vector at the beginning of each time slot, the learner pulls an arm and receives a reward as the linear product of the preference vector and the arm state vector. We also assume that the state of the pulled arm is revealed to the learner once it is pulled. We propose a Weighted Upper Confidence Bound (W-UCB) algorithm and show that it can achieve a constant regret when the user preferences are sufficiently diverse. The performance of W-UCB under general setups is also completely characterized and validated with synthetic data.
[ 1, 0, 0, 1, 0, 0 ]
Title: The cohomology of rank two stable bundle moduli: mod two nilpotency & skew Schur polynomials, Abstract: We compute cup product pairings in the integral cohomology ring of the moduli space of rank two stable bundles with odd determinant over a Riemann surface using methods of Zagier. The resulting formula is related to a generating function for certain skew Schur polynomials. As an application, we compute the nilpotency degree of a distinguished degree two generator in the mod two cohomology ring. We then give descriptions of the mod two cohomology rings in low genus, and describe the subrings invariant under the mapping class group action.
[ 0, 0, 1, 0, 0, 0 ]
Title: Automated Formal Synthesis of Digital Controllers for State-Space Physical Plants, Abstract: We present a sound and automated approach to synthesize safe digital feedback controllers for physical plants represented as linear, time invariant models. Models are given as dynamical equations with inputs, evolving over a continuous state space and accounting for errors due to the digitalization of signals by the controller. Our approach has two stages, leveraging counterexample guided inductive synthesis (CEGIS) and reachability analysis. CEGIS synthesizes a static feedback controller that stabilizes the system under restrictions given by the safety of the reach space. Safety is verified either via BMC or abstract acceleration; if the verification step fails, we refine the controller by generalizing the counterexample. We synthesize stable and safe controllers for intricate physical plant models from the digital control literature.
[ 1, 0, 0, 0, 0, 0 ]
Title: High efficiently numerical simulation of the TDGL equation with reticular free energy in hydrogel, Abstract: In this paper, we focus on the numerical simulation of phase separation in macromolecule microsphere composite (MMC) hydrogel. The model equation is based on the Time-Dependent Ginzburg-Landau (TDGL) equation with a reticular free energy. We put forward two $L^2$-stable schemes to simulate the simplified TDGL equation. In numerical experiments, we observe that simulating the whole process of phase separation requires a considerably long time. We also notice that the total free energy changes significantly in the initial stage and varies only slightly afterwards. Based on these properties, we introduce an adaptive time-stepping strategy based on one of the stable schemes mentioned above. It is found that the introduction of time adaptivity can not only resolve the dynamical changes of the solution accurately but also significantly save CPU time for the long-time simulation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Entanglement verification protocols for distributed systems based on the Quantum Recursive Network Architecture, Abstract: In distributed systems based on the Quantum Recursive Network Architecture, quantum channels and quantum memories are used to establish entangled quantum states between node pairs. Such systems are robust against attackers that interact with the quantum channels. Conversely, weaknesses emerge when an attacker takes full control of a node and alters the configuration of the local quantum memory, either to make a denial-of-service attack or to reprogram the node. In such a scenario, entanglement verification over quantum memories is a means for detecting the intruder. Usually, entanglement verification approaches focus either on untrusted sources of entangled qubits (photons, in most cases) or on eavesdroppers that interfere with the quantum channel while entangled qubits are transmitted. Instead, in this work we assume that the source of entanglement is trusted, but parties may be dishonest. Looking for efficient entanglement verification protocols that only require classical channels and local quantum operations to work, we thoroughly analyze the one proposed by Nagy and Akl, that we denote as NA2010 for simplicity, and we define and analyze two entanglement verification protocols based on teleportation (denoted as AC1 and AC2), characterized by increasing efficiency in terms of intrusion detection probability versus sacrificed quantum resources.
[ 1, 0, 0, 0, 0, 0 ]
Title: Testing Equality of Autocovariance Operators for Functional Time Series, Abstract: We consider strictly stationary stochastic processes of Hilbert space-valued random variables and focus on tests of the equality of the lag-zero autocovariance operators of several independent functional time series. A moving block bootstrap-based testing procedure is proposed which generates pseudo random elements that satisfy the null hypothesis of interest. It is based on directly bootstrapping the time series of tensor products which overcomes some common difficulties associated with applications of the bootstrap to related testing problems. The suggested methodology can be potentially applied to a broad range of test statistics of the hypotheses of interest. As an example, we establish validity for approximating the distribution under the null of a fully functional test statistic based on the Hilbert-Schmidt distance of the corresponding sample lag-zero autocovariance operators, and show consistency under the alternative. As a prerequisite, we prove a central limit theorem for the moving block bootstrap procedure applied to the sample autocovariance operator which is of interest on its own. The finite sample size and power performance of the suggested moving block bootstrap-based testing procedure is illustrated through simulations and an application to a real-life dataset is discussed.
[ 0, 0, 1, 1, 0, 0 ]
Title: AdS4 backgrounds with N>16 supersymmetries in 10 and 11 dimensions, Abstract: We explore all warped $AdS_4\times_w M^{D-4}$ backgrounds with the most general allowed fluxes that preserve more than 16 supersymmetries in $D=10$- and $11$-dimensional supergravities. After imposing the assumption that either the internal space $M^{D-4}$ is compact without boundary or the isometry algebra of the background decomposes into that of AdS$_4$ and that of $M^{D-4}$, we find that there are no such backgrounds in IIB supergravity. Similarly in IIA supergravity, there is a unique such background with 24 supersymmetries locally isometric to $AdS_4\times \mathbb{CP}^3$, and in $D=11$ supergravity all such backgrounds are locally isometric to the maximally supersymmetric $AdS_4\times S^7$ solution.
[ 0, 0, 1, 0, 0, 0 ]
Title: Room Temperature Polariton Lasing in All-Inorganic Perovskites, Abstract: Polariton lasing is the coherent emission arising from a macroscopic polariton condensate, first proposed in 1996. Over the past two decades, polariton lasing has been demonstrated in a few inorganic and organic semiconductors at both low and room temperature. Polariton lasing in inorganic materials relies significantly on sophisticated epitaxial growth of crystalline gain medium layers sandwiched by two distributed Bragg reflectors, in which combating the built-in strain and mismatched thermal properties is nontrivial. On the other hand, organic active media usually suffer from large threshold densities and weak nonlinearity due to the Frenkel exciton nature. Further development of polariton lasing towards technologically significant applications demands more accessible materials, ease of device fabrication and broadly tunable emission at room temperature. Herein, we report the experimental realization of room-temperature polariton lasing based on an epitaxy-free all-inorganic cesium lead chloride perovskite microcavity. Polariton lasing is unambiguously evidenced by a superlinear power dependence, macroscopic ground-state occupation, blueshift of the ground-state emission, narrowing of the linewidth and the build-up of long-range spatial coherence. Our work suggests considerable promise of lead halide perovskites towards large-area, low-cost, high-performance room-temperature polariton devices and coherent light sources extending from the ultraviolet to the near-infrared range.
[ 0, 1, 0, 0, 0, 0 ]
Title: Probabilistic Sensor Fusion for Ambient Assisted Living, Abstract: There is a widely-accepted need to revise current forms of health-care provision, with particular interest in sensing systems in the home. Given a multiple-modality sensor platform with heterogeneous network connectivity, as is under development in the Sensor Platform for HEalthcare in Residential Environment (SPHERE) Interdisciplinary Research Collaboration (IRC), we face specific challenges relating to the fusion of the heterogeneous sensor modalities. We introduce Bayesian models for sensor fusion that aim to address these challenges. Using this approach we are able to identify the modalities that have most utility for each particular activity, and simultaneously identify which features within that modality are most relevant for a given activity. We further show how the two separate tasks of location prediction and activity recognition can be fused into a single model, which allows for simultaneous learning and prediction for both tasks. We analyse the performance of this model on data collected in the SPHERE house, and show its utility. We also compare against some benchmark models which do not have the full structure, and show how the proposed model compares favourably to these methods.
[ 1, 0, 0, 1, 0, 0 ]
Title: Bounded time computation on metric spaces and Banach spaces, Abstract: We extend the framework by Kawamura and Cook for investigating computational complexity for operators occurring in analysis. This model is based on second-order complexity theory for functions on the Baire space, which is lifted to metric spaces by means of representations. Time is measured in terms of the length of the input encodings and the required output precision. We propose the notions of a complete representation and of a regular representation. We show that complete representations ensure that any computable function has a time bound. Regular representations generalize Kawamura and Cook's more restrictive notion of a second-order representation, while still guaranteeing fast computability of the length of the encodings. Applying these notions, we investigate the relationship between purely metric properties of a metric space and the existence of a representation such that the metric is computable within bounded time. We show that a bound on the running time of the metric can be straightforwardly translated into size bounds of compact subsets of the metric space. Conversely, for compact spaces and for Banach spaces we construct a family of admissible, complete, regular representations that allow for fast computation of the metric and provide short encodings. Here it is necessary to trade the time bound off against the length of encodings.
[ 1, 0, 1, 0, 0, 0 ]
Title: Defect entropies and enthalpies in Barium Fluoride, Abstract: Various experimental techniques have revealed that the predominant intrinsic point defects in BaF$_2$ are anion Frenkel defects. Their formation enthalpy and entropy, as well as the corresponding parameters for fluorine vacancy and fluorine interstitial motion, have been determined. In addition, low-temperature dielectric relaxation measurements in BaF$_2$ doped with uranium yield the parameters $\tau_0$ and $E$ in the Arrhenius relation $\tau=\tau_0\exp(E/k_BT)$ for the relaxation time $\tau$. For the relaxation peak associated with a single tetravalent uranium, the migration entropy deduced from the pre-exponential factor $\tau_0$ is smaller than the anion Frenkel defect formation entropy by almost two orders of magnitude. We show that, despite their great variation, the defect entropies and enthalpies are interconnected through a model based on anharmonic properties of the bulk material that have recently been studied by employing density-functional theory and density-functional perturbation theory.
[ 0, 1, 0, 0, 0, 0 ]
Title: Surprise-Based Intrinsic Motivation for Deep Reinforcement Learning, Abstract: Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as $\epsilon$-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the $k$-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.
[ 1, 0, 0, 0, 0, 0 ]
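As a rough illustration of the surprisal variant described above, the snippet below computes an intrinsic bonus as the negative log-likelihood of an observed transition under a diagonal-Gaussian dynamics model. The model form, the bonus coefficient, and the toy dynamics are assumptions made for the sketch, not the paper's implementation.

```python
import numpy as np

def surprisal_bonus(s, a, s_next, model_mean, model_logstd):
    """Intrinsic reward as the surprisal -log p_model(s' | s, a) under an
    (assumed) diagonal-Gaussian learned dynamics model."""
    mu, logstd = model_mean(s, a), model_logstd(s, a)
    var = np.exp(2.0 * logstd)
    log_prob = -0.5 * np.sum((s_next - mu) ** 2 / var
                             + 2.0 * logstd + np.log(2.0 * np.pi))
    return -log_prob  # large when the transition is poorly predicted

# Toy usage with a fixed (untrained) model over a 3-dimensional state.
rng = np.random.default_rng(1)
model_mean = lambda s, a: s + 0.1 * a                   # hypothetical dynamics model
model_logstd = lambda s, a: np.full_like(s, -1.0)

s, a = rng.normal(size=3), rng.normal(size=3)
s_next = s + 0.1 * a + rng.normal(scale=0.5, size=3)    # noisier than the model expects
extrinsic, eta = 0.0, 0.1                               # eta scales the intrinsic term
shaped = extrinsic + eta * surprisal_bonus(s, a, s_next, model_mean, model_logstd)
print(f"shaped reward with intrinsic bonus: {shaped:.3f}")
```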
Title: Active model learning and diverse action sampling for task and motion planning, Abstract: The objective of this work is to augment the basic abilities of a robot by learning to use new sensorimotor primitives to enable the solution of complex long-horizon problems. Solving long-horizon problems in complex domains requires flexible generative planning that can combine primitive abilities in novel combinations to solve problems as they arise in the world. In order to plan to combine primitive actions, we must have models of the preconditions and effects of those actions: under what circumstances will executing this primitive achieve some particular effect in the world? We use, and develop novel improvements on, state-of-the-art methods for active learning and sampling. We use Gaussian process methods for learning the conditions of operator effectiveness from small numbers of expensive training examples collected by experimentation on a robot. We develop adaptive sampling methods for generating diverse elements of continuous sets (such as robot configurations and object poses) during planning for solving a new task, so that planning is as efficient as possible. We demonstrate these methods in an integrated system, combining newly learned models with an efficient continuous-space robot task and motion planner to learn to solve long horizon problems more efficiently than was previously possible.
[ 1, 0, 0, 1, 0, 0 ]
Title: Four Fundamental Questions in Probability Theory and Statistics, Abstract: This study addresses four questions that lie at the base of probability theory and statistics, and proceeds in two main steps. First, we conduct a textual analysis of the most significant works written by eminent probability theorists. The textual analysis turns out to be a rather innovative method of study in this domain, and shows how the sampled writers, whether frequentist or subjectivist, share a similar approach. Each author argues over the manifold aspects of probability and then establishes the mathematical theory on the basis of his intellectual conclusions. It may be said that mathematics ranks second. Hilbert foresees an approach far different from that used by the sampled authors: he proposes to axiomatize the probability calculus, notably to describe the probability concepts using purely mathematical criteria. In the second stage of the present research we address the four issues of probability theory and statistics following the recommendations of Hilbert. Specifically, we use two theorems that prove how the frequentist and the subjectivist models are not incompatible, as many believe. Probability has distinct meanings under different hypotheses, and in turn classical statistics and Bayesian statistics are available for adoption in different circumstances. Subsequently, these results are commented upon, followed by our conclusions.
[ 0, 0, 0, 1, 0, 0 ]
Title: Connections on parahoric torsors over curves, Abstract: We define parahoric $\mathcal{G}$--torsors for certain Bruhat--Tits group schemes $\mathcal{G}$ on a smooth complex projective curve $X$ when the weights are real, and also define connections on them. We prove that a $\mathcal{G}$--torsor is given by a homomorphism from $\pi_1(X\setminus D)$ to a maximal compact subgroup of $G$, where $D\, \subset\, X$ is the parabolic divisor, if and only if the torsor is polystable.
[ 0, 0, 1, 0, 0, 0 ]
Title: On Reduced Input-Output Dynamic Mode Decomposition, Abstract: The identification of reduced-order models from high-dimensional data is a challenging task, and even more so if the identified system should not only be suitable for a certain data set, but generally approximate the input-output behavior of the data source. In this work, we consider the input-output dynamic mode decomposition method for system identification. We compare excitation approaches for the data-driven identification process and describe an optimization-based stabilization strategy for the identified systems.
[ 1, 0, 0, 0, 0, 0 ]
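The identification step described above can be illustrated with a bare-bones input-output DMD fit: a least-squares solve for state and input matrices from snapshot data via a truncated SVD. The toy system, the truncation rank, and the absence of any stabilization post-processing are simplifications relative to the paper.

```python
import numpy as np

def io_dmd(X, Xp, U, r=10):
    """Least-squares fit Xp ~ A X + B U (a basic input-output DMD-style
    identification); r truncates the SVD of the stacked data matrix."""
    Omega = np.vstack([X, U])                       # stacked states and inputs
    Uo, s, Vt = np.linalg.svd(Omega, full_matrices=False)
    r = min(r, len(s))
    Uo, s, Vt = Uo[:, :r], s[:r], Vt[:r, :]
    G = Xp @ Vt.T @ np.diag(1.0 / s) @ Uo.T         # G = [A B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]

# Toy data from an assumed stable 2-state system with one input.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
x, X, Xp, U = np.zeros(2), [], [], []
for _ in range(200):
    u = rng.normal(size=1)
    x_next = A_true @ x + B_true @ u
    X.append(x); U.append(u); Xp.append(x_next)
    x = x_next
A_hat, B_hat = io_dmd(np.array(X).T, np.array(Xp).T, np.array(U).T)
print(np.round(A_hat, 3), np.round(B_hat, 3))
```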
Title: An assessment of Fe XX - Fe XXII emission lines in SDO/EVE data as diagnostics for high density solar flare plasmas using EUVE stellar observations, Abstract: The Extreme Ultraviolet Variability Experiment (EVE) on the Solar Dynamics Observatory obtains extreme-ultraviolet (EUV) spectra of the full-disk Sun at a spectral resolution of ~1 A and cadence of 10 s. Such a spectral resolution would normally be considered to be too low for the reliable determination of electron density (N_e) sensitive emission line intensity ratios, due to blending. However, previous work has shown that a limited number of Fe XXI features in the 90-60 A wavelength region of EVE do provide useful N_e-diagnostics at relatively low flare densities (N_e ~ 10^11-10^12 cm^-3). Here we investigate if additional highly ionised Fe line ratios in the EVE 90-160 A range may be reliably employed as N_e-diagnostics. In particular, the potential for such diagnostics to provide density estimates for high N_e (~10^13 cm^-3) flare plasmas is assessed. Our study employs EVE spectra for X-class flares, combined with observations of highly active late-type stars from the Extreme Ultraviolet Explorer (EUVE) satellite plus experimental data for well-diagnosed tokamak plasmas, both of which are similar in wavelength coverage and spectral resolution to those from EVE. Several ratios are identified in EVE data which yield consistent values of electron density, including Fe XX 113.35/121.85 and Fe XXII 114.41/135.79, with confidence in their reliability as N_e-diagnostics provided by the EUVE and tokamak results. These ratios also allow the determination of density in solar flare plasmas up to values of ~10^13 cm^-3.
[ 0, 1, 0, 0, 0, 0 ]
Title: Multi-Scale Spatially Weighted Local Histograms in O(1), Abstract: Weighting pixel contributions according to their location is a key feature in many fundamental image processing tasks, including filtering, object modeling and distance matching. Several techniques have been proposed that incorporate spatial information to increase the accuracy and boost the performance of detection, tracking and recognition systems, at the cost of speed. However, it is still not clear how to efficiently extract weighted local histograms in constant time using an integral histogram. This paper presents a novel algorithm to accurately compute multi-scale spatially weighted local histograms in constant time using a Spatially Weighted Integral Histogram (SWIH) for fast search. We applied our spatially weighted integral histogram approach to fast tracking and obtained more accurate and robust target localization results in comparison with using a plain histogram.
[ 1, 0, 0, 0, 0, 0 ]
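For reference, the snippet below shows the plain (unweighted) integral-histogram construction that makes constant-time window histograms possible; the spatial weighting that the paper adds on top is omitted here, and the binning scheme is an assumption.

```python
import numpy as np

def integral_histogram(img, n_bins=8):
    """Per-bin 2-D cumulative sums; any axis-aligned window histogram then
    costs four lookups per bin (the standard integral-histogram trick)."""
    bins = np.minimum((img.astype(np.int64) * n_bins) // 256, n_bins - 1)
    one_hot = (bins[..., None] == np.arange(n_bins)).astype(np.int64)
    ih = one_hot.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ih, ((1, 0), (1, 0), (0, 0)))     # zero row/column for easy indexing

def window_hist(ih, r0, c0, r1, c1):
    """Histogram of img[r0:r1, c0:c1] in O(1) per bin."""
    return ih[r1, c1] - ih[r0, c1] - ih[r1, c0] + ih[r0, c0]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(120, 160))
ih = integral_histogram(img)
h = window_hist(ih, 10, 20, 60, 90)
assert h.sum() == (60 - 10) * (90 - 20)             # counts every pixel in the window
print(h)
```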
Title: Limit theorems in bi-free probability theory, Abstract: In this paper additive bi-free convolution is defined for general Borel probability measures, and the limiting distributions for sums of bi-free pairs of selfadjoint commuting random variables in an infinitesimal triangular array are determined. These distributions are characterized by their bi-freely infinite divisibility, and moreover, a transfer principle is established for limit theorems in classical probability theory and Voiculescu's bi-free probability theory. Complete descriptions of bi-free stability and fullness of planar probability distributions are also set down. All these results reveal one important feature of bi-free probability theory: it parallels the classical theory perfectly well. The emphasis in the whole work is not on the tool of bi-free combinatorics but only on the analytic machinery.
[ 0, 0, 1, 0, 0, 0 ]
Title: Axiomatisability and hardness for universal Horn classes of hypergraphs, Abstract: We characterise finite axiomatisability and intractability of deciding membership for universal Horn classes generated by finite loop-free hypergraphs.
[ 1, 0, 1, 0, 0, 0 ]
Title: Dissipatively Coupled Waveguide Networks for Coherent Diffusive Photonics, Abstract: A photonic circuit is generally described as a structure in which light propagates by unitary exchange and transfers reversibly between channels. In contrast, the term `diffusive' is more akin to a chaotic propagation in scattering media, where light is driven out of coherence towards a thermal mixture. Based on the dynamics of open quantum systems, the combination of these two opposites can result in novel techniques for coherent light control. The crucial feature of these photonic structures is dissipative coupling between modes, via an interaction with a common reservoir. Here, we demonstrate experimentally that such systems can perform optical equalisation to smooth multimode light, or act as a distributor, guiding it into selected channels. Quantum thermodynamically, these systems can act as catalytic coherent reservoirs by performing perfect non-Landauer erasure. For lattice structures, localised stationary states can be supported in the continuum, similar to compacton-like states in conventional flat band lattices.
[ 0, 1, 0, 0, 0, 0 ]
Title: Inadequate Risk Analysis Might Jeopardize The Functional Safety of Modern Systems, Abstract: In the early 90s, researchers began to focus on security as an important property to address in combination with safety. Over the years, researchers have proposed approaches to harmonize activities within the safety and security disciplines. Despite the academic efforts to identify interdependencies and to propose combined approaches for safety and security, there is still a lack of integration between safety and security practices in the industrial context, as they have separate standards and independent processes, often addressed and assessed by different organizational teams and authorities. Specifically, security concerns are generally not covered in any detail in safety standards, potentially resulting in successfully safety-certified systems that are still open to security threats, e.g., from malicious intent of internal and external personnel or hackers, that may jeopardize safety. In recent years, security has again received increasing attention as an important issue in safety assurance, as the open, interconnected nature of emerging systems makes them susceptible to security threats to a much higher degree than existing, more confined products. This article presents initial ideas on how to extend safety work to include aspects of security during the context establishment and initial risk assessment procedures. The ambition of our proposal is to improve safety and increase the efficiency and effectiveness of safety work within the frame of current safety standards, i.e., raised security awareness in compliance with the current safety standards. We believe that our proposal is useful for raising security awareness in industrial contexts, although it is not a complete harmonization of the safety and security disciplines, as it merely provides applicable guidance to increase security awareness in a safety context.
[ 1, 0, 0, 0, 0, 0 ]
Title: A compactness theorem for four-dimensional shrinking gradient Ricci solitons, Abstract: Haslhofer and Müller proved a compactness theorem for four-dimensional shrinking gradient Ricci solitons, with the only assumption being that the entropy is uniformly bounded from below. However, the limit in their result could possibly be an orbifold Ricci shrinker. In this paper we prove a compactness theorem for noncompact four-dimensional shrinking gradient Ricci solitons with a topological restriction and a noncollapsing assumption, that is, we consider Ricci shrinkers that can be embedded in a closed four-manifold with vanishing second homology group over every field and are strongly $\kappa$-noncollapsed with respect to a universal $\kappa$. In particular, we do not need any curvature assumption and the limit is still a smooth nonflat shrinking gradient Ricci soliton.
[ 0, 0, 1, 0, 0, 0 ]
Title: An analytic formulation for positive-unlabeled learning via weighted integral probability metric, Abstract: We consider the problem of learning a binary classifier from only positive and unlabeled observations (PU learning). Although recent research in PU learning has succeeded in showing theoretical and empirical performance, most existing algorithms need to solve either a convex or a non-convex optimization problem and thus are not suitable for large-scale datasets. In this paper, we propose a simple yet theoretically grounded PU learning algorithm by extending the previous work proposed for supervised binary classification (Sriperumbudur et al., 2012). The proposed PU learning algorithm produces a closed-form classifier when the hypothesis space is a closed ball in reproducing kernel Hilbert space. In addition, we establish upper bounds of the estimation error and the excess risk. The obtained estimation error bound is sharper than existing results and the excess risk bound does not rely on an approximation error term. To the best of our knowledge, we are the first to explicitly derive the excess risk bound in the field of PU learning. Finally, we conduct extensive numerical experiments using both synthetic and real datasets, demonstrating improved accuracy, scalability, and robustness of the proposed algorithm.
[ 1, 0, 0, 1, 0, 0 ]
Title: Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor, Abstract: Knights Landing (KNL) is the code name for the second-generation Intel Xeon Phi product family. KNL has generated significant interest in the data analysis and machine learning communities because its new many-core architecture targets both of these workloads. The KNL many-core vector processor design enables it to exploit much higher levels of parallelism. At the Lincoln Laboratory Supercomputing Center (LLSC), the majority of users are running data analysis applications such as MATLAB and Octave. More recently, machine learning applications, such as the UC Berkeley Caffe deep learning framework, have become increasingly important to LLSC users. Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities. Our data analysis benchmarks of these applications on the Intel KNL processor indicate that single-core double-precision generalized matrix multiply (DGEMM) performance on KNL systems has improved by ~3.5x compared to prior Intel Xeon technologies. Our data analysis applications also achieved ~60% of the theoretical peak performance. Finally, a performance comparison of a machine learning application, Caffe, between two different Intel CPUs, the Xeon E5 v3 and the Xeon Phi 7210, demonstrated a 2.7x improvement on a KNL node.
[ 1, 1, 0, 0, 0, 0 ]
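A generic way to reproduce a single-node DGEMM measurement from Python (via whatever BLAS library NumPy is linked against) is sketched below; the matrix size, repeat count, and threading setup are arbitrary choices, and the result is not comparable to the paper's numbers without matching the hardware and BLAS build.

```python
import time
import numpy as np

def dgemm_gflops(n=2048, repeats=3):
    """Rough double-precision matrix-multiply throughput of the local BLAS."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b                                   # dispatched to BLAS DGEMM
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n ** 3 / best / 1e9            # 2*n^3 flops for an n x n product

if __name__ == "__main__":
    print(f"~{dgemm_gflops():.1f} GFLOP/s (double precision)")
```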
Title: Pseudo asymptotically periodic solutions for fractional integro-differential neutral equations, Abstract: In this paper, we study the existence and uniqueness of pseudo $S$-asymptotically $\omega$-periodic mild solutions of class $r$ for fractional integro-differential neutral equations. An example is presented to illustrate the application of the abstract results.
[ 0, 0, 1, 0, 0, 0 ]
Title: Updating the silent speech challenge benchmark with deep learning, Abstract: The 2010 Silent Speech Challenge benchmark is updated with new results obtained in a Deep Learning strategy, using the same input features and decoding strategy as in the original article. A Word Error Rate of 6.4% is obtained, compared to the published value of 17.4%. Additional results comparing new auto-encoder-based features with the original features at reduced dimensionality, as well as decoding scenarios on two different language models, are also presented. The Silent Speech Challenge archive has been updated to contain both the original and the new auto-encoder features, in addition to the original raw data.
[ 1, 0, 0, 0, 0, 0 ]
Title: On Optimization of Radiative Dipole Body Array Coils for 7 Tesla MRI, Abstract: In this contribution we present numerical and experimental results of a parametric study of radiative dipole antennas in a phased array configuration for efficient body magnetic resonance imaging at 7T via parallel transmit. For magnetic resonance imaging (MRI) at ultrahigh fields (7T and higher) dipole antennas are commonly used in phased arrays, particularly for body imaging targets. This study reveals the effects of dipole positioning in the array (elevation of dipoles above the subject and inter-dipole spacing) on their mutual coupling, $B_1^{+}$ per unit power and $B_1^{+}$ per maximum local SAR efficiencies as well as the RF-shimming capability. The results demonstrate the trade-off between low maximum local SAR and sensitivity to the subject variation and provide the working parameter range for practical body arrays composed of recently suggested fractionated dipoles.
[ 0, 1, 0, 0, 0, 0 ]
Title: Speaking Style Authentication Using Suprasegmental Hidden Markov Models, Abstract: The importance of speaking style authentication from human speech is gaining increasing attention and concern from the engineering community. The importance comes from the demand to enhance both the naturalness and efficiency of spoken-language human-machine interfaces. Our work in this research focuses on proposing, implementing, and testing speaker-dependent and text-dependent speaking style authentication (verification) systems that accept or reject the identity claim of a speaking style based on suprasegmental hidden Markov models (SPHMMs). Based on SPHMMs, our results show that the average speaking style authentication performance is: 99%, 37%, 85%, 60%, 61%, 59%, 41%, 61%, and 57%, belonging respectively to the speaking styles: neutral, shouted, slow, loud, soft, fast, angry, happy, and fearful.
[ 1, 0, 0, 0, 0, 0 ]
Title: Decomposing manifolds into Cartesian products, Abstract: The decomposability of a Cartesian product of two nondecomposable manifolds into products of lower dimensional manifolds is studied. For 3-manifolds we obtain an analog of a result due to Borsuk for surfaces, and in higher dimensions we show that similar analogs do not exist unless one imposes further restrictions such as simple connectivity.
[ 0, 0, 1, 0, 0, 0 ]
Title: Performance of the MAGIC telescopes under moonlight, Abstract: MAGIC, a system of two imaging atmospheric Cherenkov telescopes, achieves its best performance under dark conditions, i.e. in the absence of moonlight or twilight. Since operating the telescopes only during dark time would severely limit the duty cycle, observations are also performed when the Moon is present in the sky. Here we present a dedicated Moon-adapted analysis and characterize the performance of MAGIC under moonlight. We evaluate the energy threshold, angular resolution and sensitivity of MAGIC under different background light levels, based on Crab Nebula observations and tuned Monte Carlo simulations. This study includes observations taken under non-standard hardware configurations, such as reducing the camera photomultiplier tube gain by a factor $\sim$1.7 (reduced HV settings) with respect to standard settings (nominal HV) or using UV-pass filters to strongly reduce the amount of moonlight reaching the telescope cameras. The Crab Nebula spectrum is correctly reconstructed at all studied illumination levels, which reach up to 30 times the brightness of dark conditions. The main effect of moonlight is an increase in the analysis energy threshold and in the systematic uncertainties on the flux normalization. The sensitivity degradation is constrained to be below 10%, within 15-30%, and between 60 and 80% for nominal HV, reduced HV and UV-pass filter observations, respectively. No worsening of the angular resolution was found. Thanks to observations during moonlight, the duty cycle can be doubled, suppressing the need to stop observations around full Moon.
[ 0, 1, 0, 0, 0, 0 ]
Title: Driver Distraction Identification with an Ensemble of Convolutional Neural Networks, Abstract: The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted driver detection is concerned with a small set of distractions (mostly, cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves a 90% accuracy. The system consists of a genetically-weighted ensemble of convolutional neural networks; we show that a weighted ensemble of classifiers using a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localization, and skin segmentation. Finally, we present a thinned version of our ensemble that can achieve 84.64% classification accuracy and operate in a real-time environment.
[ 1, 0, 0, 1, 0, 0 ]
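The genetic weighting idea can be mimicked with a tiny mutation-only search over ensemble weights, as sketched below on synthetic class-probability outputs. The population size, mutation scale, and fitness definition are assumptions; the paper's actual genetic operators and CNN backbones are not reproduced.

```python
import numpy as np

def ensemble_predict(probs, w):
    """Weighted average of per-model class-probability tensors of shape (M, N, C)."""
    w = np.maximum(w, 0.0)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1).argmax(axis=1)

def genetic_weights(probs, y, pop=30, gens=40, sigma=0.1, seed=0):
    """Mutation-only evolutionary search over ensemble weights maximizing accuracy."""
    rng = np.random.default_rng(seed)
    population = rng.random((pop, probs.shape[0]))
    for _ in range(gens):
        fit = np.array([(ensemble_predict(probs, w) == y).mean() for w in population])
        parents = population[np.argsort(fit)[-pop // 2:]]            # keep the best half
        children = np.abs(parents + rng.normal(0.0, sigma, parents.shape))
        population = np.vstack([parents, children])
    fit = np.array([(ensemble_predict(probs, w) == y).mean() for w in population])
    return population[fit.argmax()]

# Toy data: 3 "models", 200 samples, 4 classes, with model 0 the most reliable.
rng = np.random.default_rng(1)
y = rng.integers(0, 4, size=200)
probs = rng.random((3, 200, 4))
probs[0, np.arange(200), y] += 1.5               # make model 0 informative
w = genetic_weights(probs, y)
print(np.round(w / w.sum(), 2), (ensemble_predict(probs, w) == y).mean())
```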
Title: A simple neural network module for relational reasoning, Abstract: Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
[ 1, 0, 0, 0, 0, 0 ]
Title: Bridging the Gap Between Value and Policy Based Reinforcement Learning, Abstract: We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both on- and off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks.
[ 1, 0, 0, 1, 0, 0 ]
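The central quantity in PCL is the soft path-consistency error over a sub-trajectory; a sketch of that error, as we understand it from the paper's description, is given below. Treat the exact discounting and indexing conventions, and the toy numbers, as assumptions.

```python
import numpy as np

def path_consistency_error(values, log_pis, rewards, gamma=0.99, tau=0.01):
    """Soft consistency error over one sub-trajectory s_0, ..., s_d:
    C = -V(s_0) + gamma^d V(s_d) + sum_j gamma^j (r_j - tau * log pi(a_j | s_j)).
    PCL minimizes C^2 over sub-trajectories drawn both on- and off-policy."""
    rewards, log_pis = np.asarray(rewards, float), np.asarray(log_pis, float)
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    return (-values[0] + gamma ** d * values[-1]
            + np.sum(discounts * (rewards - tau * log_pis)))

# Toy 3-step sub-trajectory with made-up value and log-probability estimates.
print(path_consistency_error(values=[0.5, 0.4, 0.6, 0.7],
                             log_pis=[-1.2, -0.8, -1.0],
                             rewards=[0.0, 1.0, 0.0]))
```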
Title: Speaker verification using end-to-end adversarial language adaptation, Abstract: In this paper we investigate the use of adversarial domain adaptation for addressing the problem of language mismatch between speaker recognition corpora. In the context of speaker verification, adversarial domain adaptation methods aim at minimizing certain divergences between the distribution that the utterance-level features follow (i.e. speaker embeddings) when drawn from source and target domains (i.e. languages), while preserving their capacity in recognizing speakers. Neural architectures for extracting utterance-level representations enable us to apply adversarial adaptation methods in an end-to-end fashion and train the network jointly with the standard cross-entropy loss. We examine several configurations, such as the use of (pseudo-)labels on the target domain as well as domain labels in the feature extractor, and we demonstrate the effectiveness of our method on the challenging NIST SRE16 and SRE18 benchmarks.
[ 1, 0, 0, 0, 0, 0 ]
Title: Omni $n$-Lie algebras and linearization of higher analogues of Courant algebroids, Abstract: In this paper, we introduce the notion of an omni $n$-Lie algebra and show that omni $n$-Lie algebras are linearizations of higher analogues of standard Courant algebroids. We also introduce the notion of a nonabelian omni $n$-Lie algebra and show that these are linearizations of higher analogues of Courant algebroids associated to Nambu-Poisson manifolds.
[ 0, 0, 1, 0, 0, 0 ]
Title: Asymptotics of Hankel determinants with a one-cut regular potential and Fisher-Hartwig singularities, Abstract: We obtain asymptotics of large Hankel determinants whose weight depends on a one-cut regular potential and any number of Fisher-Hartwig singularities. This generalises two results: 1) a result of Berestycki, Webb and Wong [5] for root-type singularities, and 2) a result of Its and Krasovsky [37] for a Gaussian weight with a single jump-type singularity. We show that when we apply a piecewise constant thinning on the eigenvalues of a random Hermitian matrix drawn from a one-cut regular ensemble, the gap probability in the thinned spectrum, as well as correlations of the characteristic polynomial of the associated conditional point process, can be expressed in terms of these determinants.
[ 0, 0, 1, 0, 0, 0 ]
Title: Beyond linear galaxy alignments, Abstract: Galaxy intrinsic alignments (IA) are a critical uncertainty for current and future weak lensing measurements. We describe a perturbative expansion of IA, analogous to the treatment of galaxy biasing. From an astrophysical perspective, this model includes the expected large-scale alignment mechanisms for galaxies that are pressure-supported (tidal alignment) and rotation-supported (tidal torquing) as well as the cross-correlation between the two. Alternatively, this expansion can be viewed as an effective model capturing all relevant effects up to the given order. We include terms up to second order in the density and tidal fields and calculate the resulting IA contributions to two-point statistics at one-loop order. For fiducial amplitudes of the IA parameters, we find the quadratic alignment and linear-quadratic cross terms can contribute order-unity corrections to the total intrinsic alignment signal at $k\sim0.1\,h^{-1}{\rm Mpc}$, depending on the source redshift distribution. These contributions can lead to significant biases on inferred cosmological parameters in Stage IV photometric weak lensing surveys. We perform forecasts for an LSST-like survey, finding that use of the standard "NLA" model for intrinsic alignments cannot remove these large parameter biases, even when allowing for a more general redshift dependence. The model presented here will allow for more accurate and flexible IA treatment in weak lensing and combined probes analyses, and an implementation is made available as part of the public FAST-PT code. The model also provides a more advanced framework for understanding the underlying IA processes and their relationship to fundamental physics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Modelling thermo-electro-mechanical effects in orthotropic cardiac tissue, Abstract: In this paper we introduce a new mathematical model for the active contraction of cardiac muscle, featuring different thermo-electric and nonlinear conductivity properties. The passive hyperelastic response of the tissue is described by an orthotropic exponential model, whereas the ionic activity dictates active contraction incorporated through the concept of orthotropic active strain. We use a fully incompressible formulation, and the generated strain modifies directly the conductivity mechanisms in the medium through the pull-back transformation. We also investigate the influence of thermo-electric effects in the onset of multiphysics emergent spatiotemporal dynamics, using nonlinear diffusion. It turns out that these ingredients have a key role in reproducing pathological chaotic dynamics such as ventricular fibrillation during inflammatory events, for instance. The specific structure of the governing equations suggests to cast the problem in mixed-primal form and we write it in terms of Kirchhoff stress, displacements, solid pressure, electric potential, activation generation, and ionic variables. We also propose a new mixed-primal finite element method for its numerical approximation, and we use it to explore the properties of the model and to assess the importance of coupling terms, by means of a few computational experiments in 3D.
[ 0, 0, 0, 0, 1, 0 ]
Title: Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions, Abstract: By building up on the recent theory that established the connection between implicit generative modeling and optimal transport, in this study, we propose a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them. The proposed algorithm is based on a functional optimization problem, which aims at finding a measure that is close to the data distribution as much as possible and also expressive enough for generative modeling purposes. We formulate the problem as a gradient flow in the space of probability measures. The connections between gradient flows and stochastic differential equations let us develop a computationally efficient algorithm for solving the optimization problem, where the resulting algorithm resembles the recent dynamics-based Markov Chain Monte Carlo algorithms. We provide formal theoretical analysis where we prove finite-time error guarantees for the proposed algorithm. Our experimental results support our theory and show that our algorithm is able to capture the structure of challenging distributions.
[ 0, 0, 0, 1, 0, 0 ]
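The building block of this approach, the sliced Wasserstein distance between two point clouds, can be estimated with random projections as sketched below; the flow/SDE machinery of the paper is not shown, and the number of projections is an arbitrary choice.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=100, seed=0):
    """Monte-Carlo sliced 2-Wasserstein distance between two equal-size point clouds:
    project onto random unit directions, then compare sorted 1-D projections."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px, py = np.sort(x @ theta.T, axis=0), np.sort(y @ theta.T, axis=0)
    return np.sqrt(np.mean((px - py) ** 2))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(500, 2))
b = rng.normal(2.0, 1.0, size=(500, 2))
# Distance to a permuted copy of itself is ~0; to a shifted cloud it is ~2.
print(sliced_wasserstein(a, a[::-1].copy()), sliced_wasserstein(a, b))
```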
Title: Dual Iterative Hard Thresholding: From Non-convex Sparse Minimization to Non-smooth Concave Maximization, Abstract: Iterative Hard Thresholding (IHT) is a class of projected gradient descent methods for optimizing sparsity-constrained minimization models, with the best known efficiency and scalability in practice. As far as we know, the existing IHT-style methods are designed for sparse minimization in primal form. It remains open to explore duality theory and algorithms in such a non-convex and NP-hard problem setting. In this paper, we bridge this gap by establishing a duality theory for sparsity-constrained minimization with $\ell_2$-regularized loss function and proposing an IHT-style algorithm for dual maximization. Our sparse duality theory provides a set of sufficient and necessary conditions under which the original NP-hard/non-convex problem can be equivalently solved in a dual formulation. The proposed dual IHT algorithm is a super-gradient method for maximizing the non-smooth dual objective. An interesting finding is that the sparse recovery performance of dual IHT is invariant to the Restricted Isometry Property (RIP), which is required by virtually all the existing primal IHT algorithms without sparsity relaxation. Moreover, a stochastic variant of dual IHT is proposed for large-scale stochastic optimization. Numerical results demonstrate the superiority of dual IHT algorithms to the state-of-the-art primal IHT-style algorithms in model estimation accuracy and computational efficiency.
[ 1, 0, 0, 1, 0, 0 ]
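For context, the snippet below implements the standard primal IHT iteration (gradient step plus hard thresholding) on a synthetic sparse least-squares problem; the paper's dual IHT variant maximizes a different, dual objective and is not reproduced here. The step size and problem dimensions are illustrative.

```python
import numpy as np

def iht(A, y, k, step=None, iters=200):
    """Primal iterative hard thresholding for min ||Ax - y||^2 s.t. ||x||_0 <= k:
    gradient step followed by keeping the k largest-magnitude coordinates."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    x = np.zeros(n)
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))          # gradient step
        keep = np.argpartition(np.abs(z), -k)[-k:]  # hard-thresholding projection
        x = np.zeros(n)
        x[keep] = z[keep]
    return x

rng = np.random.default_rng(0)
m, n, k = 80, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_hat = iht(A, A @ x_true, k)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```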
Title: Generalized connected sum formula for the Arnold invariants of generic plane curves, Abstract: We define the generalized connected sum for generic closed plane curves, generalizing the strange sum defined by Arnold, and completely describe how the Arnold invariants $J^{\pm}$ and $\mathit{St}$ behave under the generalized connected sums.
[ 0, 0, 1, 0, 0, 0 ]
Title: STAR: Spatio-Temporal Altimeter Waveform Retracking using Sparse Representation and Conditional Random Fields, Abstract: Satellite radar altimetry is one of the most powerful techniques for measuring sea surface height variations, with applications ranging from operational oceanography to climate research. Over open oceans, altimeter return waveforms generally correspond to the Brown model, and by inversion, estimated shape parameters provide mean surface height and wind speed. However, in coastal areas or over inland waters, the waveform shape is often distorted by land influence, resulting in peaks or fast decaying trailing edges. As a result, derived sea surface heights are then less accurate and waveforms need to be reprocessed by sophisticated algorithms. To this end, this work suggests a novel Spatio-Temporal Altimetry Retracking (STAR) technique. We show that STAR enables the derivation of sea surface heights over the open ocean as well as over coastal regions of at least the same quality as compared to existing retracking methods, but for a larger number of cycles and thus retaining more useful data. Novel elements of our method are (a) integrating information from spatially and temporally neighboring waveforms through a conditional random field approach, (b) sub-waveform detection, where relevant sub-waveforms are separated from corrupted or non-relevant parts through a sparse representation approach, and (c) identifying the final best set of sea surface heights from multiple likely heights using Dijkstra's algorithm. We apply STAR to data from the Jason-1, Jason-2 and Envisat missions for study sites in the Gulf of Trieste, Italy and in the coastal region of the Ganges-Brahmaputra-Meghna estuary, Bangladesh. We compare to several established and recent retracking methods, as well as to tide gauge data. Our experiments suggest that the obtained sea surface heights are significantly less affected by outliers when compared to results obtained by other approaches.
[ 0, 1, 0, 0, 0, 0 ]
Title: CoMID: Context-based Multi-Invariant Detection for Monitoring Cyber-Physical Software, Abstract: Cyber-physical software continually interacts with its physical environment for adaptation in order to deliver smart services. However, the interactions can be subject to various errors when the software's assumption on its environment no longer holds, thus leading to unexpected misbehavior or even failure. To address this problem, one promising way is to conduct runtime monitoring of invariants, so as to prevent cyber-physical software from entering such errors (a.k.a. abnormal states). To effectively detect abnormal states, we in this article present an approach, named Context-based Multi-Invariant Detection (CoMID), which consists of two techniques: context-based trace grouping and multi-invariant detection. The former infers contexts to distinguish different effective scopes for CoMID's derived invariants, and the latter conducts ensemble evaluation of multiple invariants to detect abnormal states. We experimentally evaluate CoMID on real-world cyber-physical software. The results show that CoMID achieves a 5.7-28.2% higher true-positive rate and a 6.8-37.6% lower false-positive rate in detecting abnormal states, as compared with state-of-the-art approaches (i.e., Daikon and ZoomIn). When deployed in field tests, CoMID's runtime monitoring improves the success rate of cyber-physical software in its task executions by 15.3-31.7%.
[ 1, 0, 0, 0, 0, 0 ]
Title: Asymptotics of multivariate contingency tables with fixed marginals, Abstract: We consider the asymptotic distribution of a cell in a 2 x ... x 2 contingency table as the fixed marginal totals tend to infinity. The asymptotic order of the cell variance is derived and a useful diagnostic is given for determining whether the cell has a Poisson limit or a Gaussian limit. There are three forms of Poisson convergence. The exact form is shown to be determined by the growth rates of the two smallest marginal totals. The results are generalized to contingency tables with arbitrary sizes and are further complemented with concrete examples.
[ 0, 0, 1, 1, 0, 0 ]
Title: Distribution of the periodic points of the Farey map, Abstract: We expand the cross section of the geodesic flow in the tangent bundle of the modular surface given by Series to produce another section whose return map under the geodesic flow is a double cover of the natural extension of the Farey map. We use this cross section to extend the correspondence between the closed geodesics on the modular surface and the periodic points of the Gauss map to include the periodic points of the Farey map. Then, analogous to the work of Pollicott, we prove an equidistribution result for the periodic points of the Farey map when they are ordered according to the length of their corresponding closed geodesics.
[ 0, 0, 1, 0, 0, 0 ]
Title: Marangoni effects on a thin liquid film coating a sphere with axial or radial thermal gradients, Abstract: We study the time evolution of a thin liquid film coating the outer surface of a sphere in the presence of gravity, surface tension and thermal gradients. We derive the fourth-order nonlinear partial differential equation that models the thin film dynamics, including Marangoni terms arising from the dependence of surface tension on temperature. We consider two different imposed temperature distributions with axial or radial thermal gradients. We analyze the stability of a uniform coating under small perturbations and carry out numerical simulations in COMSOL for a range of parameter values. In the case of an axial temperature gradient, we find steady states with either uniform film thickness, or with the fluid accumulating at the bottom or near the top of the sphere, depending on the total volume of liquid in the film, dictating whether gravity or Marangoni effects dominate. In the case of a radial temperature gradient, a stability analysis reveals the most unstable non-axisymmetric modes on an initially uniform coating film.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fooling the classifier: Ligand antagonism and adversarial examples, Abstract: Machine learning algorithms are sensitive to so-called adversarial perturbations. This is reminiscent of cellular decision-making where antagonist ligands may prevent correct signaling, like during the early immune response. We draw a formal analogy between neural networks used in machine learning and the general class of adaptive proofreading networks. We then apply simple adversarial strategies from machine learning to models of ligand discrimination. We show how kinetic proofreading leads to "boundary tilting" and identify three types of perturbation (adversarial, non-adversarial and ambiguous). We then use a gradient-descent approach to compare different adaptive proofreading models, and we reveal the existence of two qualitatively different regimes characterized by the presence or absence of a critical point. These regimes are reminiscent of the "feature-to-prototype" transition identified in machine learning, corresponding to two strategies in ligand antagonism (broad vs. specialized). Overall, our work connects evolved cellular decision-making to classification in machine learning, showing that behaviours close to the decision boundary can be understood through the same mechanisms.
[ 0, 0, 0, 1, 1, 0 ]
Title: Comparing Graph Clusterings: Set partition measures vs. Graph-aware measures, Abstract: In this paper, we propose a family of graph partition similarity measures that take the topology of the graph into account. These graph-aware measures are alternatives to using set partition similarity measures that are not specifically designed for graph partitions. The two types of measures, graph-aware and set partition measures, are shown to have opposite behaviors with respect to resolution issues and provide complementary information necessary to assess that two graph partitions are similar.
[ 0, 0, 0, 1, 0, 0 ]
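One concrete way to contrast the two families of measures is to compute a Rand-style agreement over all node pairs versus over graph edges only, as sketched below; this particular edge-restricted index is an illustrative instance, not necessarily the exact family of measures proposed in the paper.

```python
import itertools

def set_rand(part1, part2, nodes):
    """Classical Rand index over all node pairs (graph topology ignored)."""
    agree = sum((part1[u] == part1[v]) == (part2[u] == part2[v])
                for u, v in itertools.combinations(nodes, 2))
    n = len(nodes)
    return agree / (n * (n - 1) / 2)

def graph_aware_rand(part1, part2, edges):
    """Same agreement count, restricted to node pairs that are actual edges."""
    agree = sum((part1[u] == part1[v]) == (part2[u] == part2[v]) for u, v in edges)
    return agree / len(edges)

# Toy example: a path graph 0-1-2-3 and two clusterings that differ on node 2.
edges = [(0, 1), (1, 2), (2, 3)]
p1 = {0: "a", 1: "a", 2: "b", 3: "b"}
p2 = {0: "a", 1: "a", 2: "a", 3: "b"}
print(set_rand(p1, p2, list(p1)), graph_aware_rand(p1, p2, edges))  # 0.5 vs ~0.33
```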
Title: Competitive Resource Allocation in HetNets: the Impact of Small-cell Spectrum Constraints and Investment Costs, Abstract: Heterogeneous wireless networks with small-cell deployments in licensed and unlicensed spectrum bands are a promising approach for expanding wireless connectivity and service. As a result, wireless service providers (SPs) are adding small-cells to augment their existing macro-cell deployments. This added flexibility complicates network management, in particular, service pricing and spectrum allocations across macro- and small-cells. Further, these decisions depend on the degree of competition among SPs. Restrictions on shared spectrum access imposed by regulators, such as low power constraints that lead to small-cell deployments, along with the investment cost needed to add small cells to an existing network, also impact strategic decisions and market efficiency. If the revenue generated by small-cells does not cover the investment cost, then there will be no deployment even if it increases social welfare. We study the implications of such spectrum constraints and investment costs on resource allocation and pricing decisions by competitive SPs, along with the associated social welfare. Our results show that while the optimal resource allocation taking constraints and investment into account can be uniquely determined, adding those features with strategic SPs can have a substantial effect on the equilibrium market structure.
[ 1, 0, 0, 0, 0, 0 ]
Title: Untangling the hairball: fitness based asymptotic reduction of biological networks, Abstract: Complex mathematical models of interaction networks are routinely used for prediction in systems biology. However, it is difficult to reconcile network complexities with a formal understanding of their behavior. Here, we propose a simple procedure (called $\bar \varphi$) to reduce biological models to functional submodules, using statistical mechanics of complex systems combined with a fitness-based approach inspired by $\textit{in silico}$ evolution. $\bar \varphi$ works by putting parameters or combinations of parameters to some asymptotic limit, while keeping (or slightly improving) the model performance, and requires parameter symmetry breaking for more complex models. We illustrate $\bar \varphi$ on biochemical adaptation and on different models of immune recognition by T cells. An intractable model of immune recognition with close to a hundred individual transition rates is reduced to a simple two-parameter model. $\bar \varphi$ extracts three different mechanisms for early immune recognition, and automatically discovers similar functional modules in different models of the same process, allowing for model classification and comparison. Our procedure can be applied to biological networks based on rate equations using a fitness function that quantifies phenotypic performance.
[ 0, 1, 0, 0, 0, 0 ]
Title: Exotica and the status of the strong cosmic censor conjecture in four dimensions, Abstract: An immense class of physical counterexamples to the four dimensional strong cosmic censor conjecture---in its usual broad formulation---is exhibited. More precisely, out of any closed and simply connected 4-manifold an open Ricci-flat Lorentzian 4-manifold is constructed which is not globally hyperbolic and no perturbation of it, in any sense, can be globally hyperbolic. This very stable non-global-hyperbolicity is the consequence of our open spaces having a "creased end" i.e., an end diffeomorphic to an exotic ${\mathbb R}^4$. Open manifolds having an end like this are a typical phenomenon in four dimensions. The construction is based on a collection of results of Gompf and Taubes on exotic and self-dual spaces, respectively, as well as applying Penrose' non-linear graviton construction (i.e., twistor theory) to solve the Riemannian Einstein's equation. These solutions then are converted into stably non-globally-hyperbolic Lorentzian vacuum solutions. It follows that the plethora of vacuum solutions we found cannot be obtained via the initial value formulation of the Einstein's equation because they are "too long" in a certain sense (explained in the text). This different (i.e., not based on the initial value formulation but twistorial) technical background might partially explain why the existence of vacuum solutions of this kind has not been realized so far in spite of the fact that, apparently, their superabundance compared to the well-known globally hyperbolic vacuum solutions is overwhelming.
[ 0, 0, 1, 0, 0, 0 ]
Title: The $2$-nd Hessian type equation on almost Hermitian manifolds, Abstract: In this paper, we derive the second order estimate to the $2$-nd Hessian type equation on a compact almost Hermitian manifold.
[ 0, 0, 1, 0, 0, 0 ]