Columns: text (string, lengths 57 to 2.88k); labels (sequence of length 6)
Title: Machine Learning Molecular Dynamics for the Simulation of Infrared Spectra, Abstract: Machine learning has emerged as an invaluable tool in many research areas. In the present work, we harness this power to predict highly accurate molecular infrared spectra with unprecedented computational efficiency. To account for vibrational anharmonic and dynamical effects -- typically neglected by conventional quantum chemistry approaches -- we base our machine learning strategy on ab initio molecular dynamics simulations. While these simulations are usually extremely time-consuming even for small molecules, we overcome these limitations by leveraging the power of a variety of machine learning techniques, not only accelerating simulations by several orders of magnitude, but also greatly extending the size of systems that can be treated. To this end, we develop a molecular dipole moment model based on environment-dependent neural network charges and combine it with the neural network potentials of Behler and Parrinello. Contrary to the prevalent big-data philosophy, we are able to obtain very accurate machine learning models for the prediction of infrared spectra based on only a few hundred electronic structure reference points. This is made possible through the introduction of a fully automated sampling scheme and the use of molecular forces during neural network potential training. We demonstrate the power of our machine learning approach by applying it to model the infrared spectra of a methanol molecule, n-alkanes containing up to 200 atoms and the protonated alanine tripeptide, which at the same time represents the first application of machine learning techniques to simulate the dynamics of a peptide. In all these case studies we find excellent agreement between the infrared spectra predicted via machine learning models and the respective theoretical and experimental spectra.
[ 0, 1, 0, 1, 0, 0 ]
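The pipeline in the abstract above ends with a dipole time series produced along an ML-driven trajectory. A standard route (not necessarily the authors' exact post-processing) from that series to an IR spectrum is a Fourier transform of the dipole signal, since absorption is proportional to the transform of the dipole autocorrelation. A minimal numpy sketch with a synthetic single-mode dipole:

```python
import numpy as np

# Toy dipole trajectory: one vibrational mode at f0 (arbitrary units).
# In the paper's setting mu(t) would be the NN-predicted dipole,
# mu(t) = sum_i q_i(t) r_i(t); here we fake it with a cosine.
dt = 1e-3            # MD time step
f0 = 50.0            # mode frequency
t = np.arange(0, 4.096, dt)
mu = np.cos(2 * np.pi * f0 * t)          # dipole component mu_x(t)

# By Wiener-Khinchin, the transform of the autocorrelation equals the
# power spectrum |FFT(mu)|^2, which is proportional to IR absorption.
spectrum = np.abs(np.fft.rfft(mu)) ** 2
freqs = np.fft.rfftfreq(len(mu), d=dt)
peak_freq = freqs[np.argmax(spectrum)]   # should sit near f0
```

With a real trajectory one would typically window the signal and average over dipole components; this sketch only shows the transform step.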
Title: The cohomology of the full directed graph complex, Abstract: In his seminal paper "Formality conjecture", M. Kontsevich introduced a graph complex $GC_{1ve}$ closely connected with the problem of constructing a formality quasi-isomorphism for Hochschild cochains. In this paper, we express the cohomology of the full directed graph complex explicitly in terms of the cohomology of $GC_{1ve}$. Applications of our results include a recent work by the first author which completely characterizes homotopy classes of formality quasi-isomorphisms for Hochschild cochains in the stable setting.
[ 0, 0, 1, 0, 0, 0 ]
Title: On the Support Recovery of Jointly Sparse Gaussian Sources using Sparse Bayesian Learning, Abstract: In this work, we provide non-asymptotic, probabilistic guarantees for successful sparse support recovery by the multiple sparse Bayesian learning (M-SBL) algorithm in the multiple measurement vector (MMV) framework. For joint sparse Gaussian sources, we show that M-SBL perfectly recovers their common nonzero support with arbitrarily high probability using only finitely many MMVs. In fact, the support error probability decays exponentially fast with the number of MMVs, with the decay rate depending on the restricted isometry property of the self Khatri-Rao product of the measurement matrix. Our analysis theoretically confirms that M-SBL is capable of recovering supports of size as high as $\mathcal{O}(m^2)$, where $m$ is the number of measurements per sparse vector. In contrast, popular MMV algorithms in compressed sensing such as simultaneous orthogonal matching pursuit and row-LASSO can recover only $\mathcal{O}(m)$ sized supports. In the special case of noiseless measurements, we show that a single MMV suffices for perfect recovery of the $k$-sparse support in M-SBL, provided any $k + 1$ columns of the measurement matrix are linearly independent. Unlike existing support recovery guarantees for M-SBL, our sufficient conditions are non-asymptotic in nature, and do not require the orthogonality of the nonzero rows of the joint sparse signals.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Critical-like Collective State Leads to Long-range Cell Communication in Dictyostelium discoideum Aggregation, Abstract: The transition from single-cell to multicellular behavior is important in early development but rarely studied. The starvation-induced aggregation of the social amoeba Dictyostelium discoideum into a multicellular slug is known to result from single-cell chemotaxis towards emitted pulses of cyclic adenosine monophosphate (cAMP). However, how exactly do transient short-range chemical gradients lead to coherent collective movement at a macroscopic scale? Here, we use a multiscale model verified by quantitative microscopy to describe wide-ranging behaviors from chemotaxis and excitability of individual cells to aggregation of thousands of cells. To better understand the mechanism of long-range cell-cell communication and hence aggregation, we analyze cell-cell correlations, showing evidence for self-organization at the onset of aggregation (as opposed to following a leader cell). Surprisingly, cell collectives, despite their finite size, show features of criticality known from phase transitions in physical systems. Application of external cAMP perturbations in our simulations near the sensitive critical point allows steering cells into early aggregation and towards certain locations but not once an aggregation center has been chosen.
[ 0, 0, 0, 0, 1, 0 ]
Title: Strong convergence rates of probabilistic integrators for ordinary differential equations, Abstract: Probabilistic integration of a continuous dynamical system is a way of systematically introducing model error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al.\ (\textit{Stat.\ Comput.}, 2016), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially-bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.
[ 0, 0, 1, 1, 0, 0 ]
Title: RFCDE: Random Forests for Conditional Density Estimation, Abstract: Random forests is a common non-parametric regression technique which performs well for mixed-type data and irrelevant covariates, while being robust to monotonic variable transformations. Existing random forest implementations target regression or classification. We introduce the RFCDE package for fitting random forest models optimized for nonparametric conditional density estimation, including joint densities for multiple responses. This enables analysis of conditional probability distributions which is useful for propagating uncertainty and of joint distributions that describe relationships between multiple responses and covariates. RFCDE is released under the MIT open-source license and can be accessed at this https URL . Both R and Python versions, which call a common C++ library, are available.
[ 0, 0, 0, 1, 0, 0 ]
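RFCDE's actual R/Python API is not reproduced here; the following numpy-only sketch illustrates just the underlying idea of forest-based conditional density estimation: weight training responses by how often they share a leaf with the query point, then form a weighted kernel density estimate. The "trees" below are random bin partitions, a deliberate simplification of the CART-style trees RFCDE grows with a density-estimation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: z | x ~ N(x, 0.1^2), x uniform on [0, 1].
n = 2000
x = rng.uniform(0, 1, n)
z = x + 0.1 * rng.standard_normal(n)

# Stand-in "forest": each tree is a random set of split points on x,
# and a leaf is the bin a point falls into.
def leaf_ids(xs, splits):
    return np.searchsorted(splits, xs)

trees = [np.sort(rng.uniform(0, 1, 8)) for _ in range(50)]

def cde(x0, z_grid, bandwidth=0.05):
    # Co-leaf weights: training points sharing a leaf with x0, averaged
    # over trees, then a weighted Gaussian KDE over their responses.
    w = np.zeros(n)
    for splits in trees:
        w += (leaf_ids(x, splits) == leaf_ids(np.array([x0]), splits)[0])
    w /= w.sum()
    k = np.exp(-0.5 * ((z_grid[:, None] - z[None, :]) / bandwidth) ** 2)
    k /= bandwidth * np.sqrt(2 * np.pi)
    return k @ w

z_grid = np.linspace(-0.5, 1.5, 401)
dens = cde(0.5, z_grid)
mass = dens.sum() * (z_grid[1] - z_grid[0])   # should be close to 1
mode = z_grid[np.argmax(dens)]                # should sit near x0 = 0.5
```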
Title: The symplectic approach of gauged linear $σ$-model, Abstract: Witten's Gauged Linear $\sigma$-Model (GLSM) unifies the Gromov-Witten theory and the Landau-Ginzburg theory, and provides a global perspective on mirror symmetry. In this article, we summarize a mathematically rigorous construction of the GLSM in the geometric phase using methods from symplectic geometry.
[ 0, 0, 1, 0, 0, 0 ]
Title: Clicks and Cliques. Exploring the Soul of the Community, Abstract: In this paper we analyze 26 communities across the United States with the objective of understanding what attaches people to their community and how this attachment differs among communities. How different are attached people from unattached? What attaches people to their community? How different are the communities? What are the key drivers behind emotional attachment? To address these questions, graphical, supervised and unsupervised learning tools were used, and information from the Census Bureau and the Knight Foundation was combined. Using the same pre-processed variables as Knight (2010) would most likely drive the results towards the same conclusions as the Knight Foundation's, so this paper does not use those variables.
[ 0, 0, 0, 1, 0, 0 ]
Title: The Block Point Process Model for Continuous-Time Event-Based Dynamic Networks, Abstract: Many application settings involve the analysis of timestamped relations or events between a set of entities, e.g. messages between users of an on-line social network. Static and discrete-time network models are typically used as analysis tools in these settings; however, they discard a significant amount of information by aggregating events over time to form network snapshots. In this paper, we introduce a block point process model (BPPM) for dynamic networks evolving in continuous time in the form of events at irregular time intervals. The BPPM is inspired by the well-known stochastic block model (SBM) for static networks and is a simpler version of the recently-proposed Hawkes infinite relational model (IRM). We show that networks generated by the BPPM follow an SBM in the limit of a growing number of nodes and leverage this property to develop an efficient inference procedure for the BPPM. We fit the BPPM to several real network data sets, including a Facebook network with over 3,500 nodes and 130,000 events, several orders of magnitude larger than the Hawkes IRM and other existing point process network models.
[ 1, 0, 0, 1, 0, 0 ]
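The generative side of the model described above is easy to sketch: nodes carry block labels and events on each directed pair follow a homogeneous Poisson process whose rate depends only on the pair's blocks. All parameters below are invented for illustration; inference is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# BPPM sketch: two blocks, Poisson event rates per block pair.
n_nodes, T = 40, 10.0
blocks = rng.integers(0, 2, n_nodes)
rates = np.array([[1.0, 0.1],
                  [0.1, 0.5]])          # events per unit time per pair

events = []                             # (time, sender, receiver)
block_counts = np.zeros((2, 2))
block_pairs = np.zeros((2, 2))
for i in range(n_nodes):
    for j in range(n_nodes):
        if i == j:
            continue
        bi, bj = blocks[i], blocks[j]
        n_ev = rng.poisson(rates[bi, bj] * T)
        block_counts[bi, bj] += n_ev
        block_pairs[bi, bj] += 1
        events.extend((t, i, j) for t in np.sort(rng.uniform(0, T, n_ev)))
events.sort()

# Aggregating events over time recovers an SBM-like rate pattern,
# mirroring the paper's limit result.
emp_rates = block_counts / (block_pairs * T)
```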
Title: Investor Reaction to Financial Disclosures Across Topics: An Application of Latent Dirichlet Allocation, Abstract: This paper provides a holistic study of how stock prices vary in their response to financial disclosures across different topics. Thereby, we specifically shed light into the extensive amount of filings for which no a priori categorization of their content exists. For this purpose, we utilize an approach from data mining - namely, latent Dirichlet allocation - as a means of topic modeling. This technique facilitates our task of automatically categorizing, ex ante, the content of more than 70,000 regulatory 8-K filings from U.S. companies. We then evaluate the subsequent stock market reaction. Our empirical evidence suggests a considerable discrepancy among various types of news stories in terms of their relevance and impact on financial markets. For instance, we find a statistically significant abnormal return in response to earnings results and credit rating, but also for disclosures regarding business strategy, the health sector, as well as mergers and acquisitions. Our results yield findings that benefit managers, investors and policy-makers by indicating how regulatory filings should be structured and the topics most likely to precede changes in stock valuations.
[ 0, 0, 0, 0, 0, 1 ]
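The topic-modeling step described above can be illustrated with a tiny collapsed Gibbs sampler for LDA. The paper presumably used an off-the-shelf implementation on its 70,000+ filings; the corpus, vocabulary, and hyperparameters here are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "filings" corpus: word ids 0-2 mimic earnings language,
# word ids 3-5 mimic merger language.
docs = [[0, 1, 2, 0, 1], [0, 0, 2, 1, 1],
        [3, 4, 5, 3, 4], [5, 5, 3, 4, 3]]
V, K, alpha, beta = 6, 2, 0.1, 0.01

# Random initial topic assignments and count tables.
z = [[int(rng.integers(K)) for _ in d] for d in docs]
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

for _ in range(200):                      # collapsed Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

phi = (nkw + beta) / (nk[:, None] + V * beta)   # topic-word distributions
```

In the paper's workflow the inferred topic proportions per filing would then feed the event-study regression of abnormal returns.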
Title: Truncated Variational EM for Semi-Supervised Neural Simpletrons, Abstract: Inference and learning for probabilistic generative networks is often very challenging and typically prevents scaling to networks as large as those used for deep discriminative approaches. To obtain efficiently trainable, large-scale and well-performing generative networks for semi-supervised learning, we here combine two recent developments: a neural network reformulation of hierarchical Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM approach (TV-EM). TV-EM provides theoretical guarantees for learning in generative networks, and its application to Neural Simpletrons results in particularly compact, yet approximately optimal, modifications of learning equations. When applied to standard benchmarks, we empirically find that learning converges in fewer EM iterations, that the complexity per EM iteration is reduced, and that final likelihood values are higher on average. For the task of classification on data sets with few labels, learning improvements result in consistently lower error rates compared to applications without truncation. Experiments on the MNIST data set herein allow for comparison to standard and state-of-the-art models in the semi-supervised setting. Further experiments on the NIST SD19 data set show the scalability of the approach when a wealth of additional unlabeled data is available.
[ 0, 0, 0, 1, 0, 0 ]
Title: Opinion Polarization by Learning from Social Feedback, Abstract: We explore a new mechanism to explain polarization phenomena in opinion dynamics in which agents evaluate alternative views on the basis of the social feedback obtained on expressing them. High support of the favored opinion in the social environment is treated as positive feedback which reinforces the value associated with this opinion. In connected networks of sufficiently high modularity, different groups of agents can form strong convictions of competing opinions. Linking the social feedback process to standard equilibrium concepts, we analytically characterize sufficient conditions for the stability of bi-polarization. While previous models have emphasized the polarization effects of deliberative argument-based communication, our model highlights an affective experience-based route to polarization, without assumptions about negative influence or bounded confidence.
[ 1, 1, 0, 0, 0, 0 ]
Title: Estimating Quality in Multi-Objective Bandits Optimization, Abstract: Many real-world applications are characterized by a number of conflicting performance measures. As optimizing in a multi-objective setting leads to a set of non-dominated solutions, a preference function is required for selecting the solution with the appropriate trade-off between the objectives. The question is: how good do estimations of these objectives have to be in order for the solution maximizing the preference function to remain unchanged? In this paper, we introduce the concept of preference radius to characterize the robustness of the preference function and provide guidelines for controlling the quality of estimations in the multi-objective setting. More specifically, we provide a general formulation of multi-objective optimization under the bandits setting. We show how the preference radius relates to the optimal gap and we use this concept to provide a theoretical analysis of the Thompson sampling algorithm from multivariate normal priors. We finally present experiments to support the theoretical results and highlight the fact that one cannot simply scalarize multi-objective problems into single-objective problems.
[ 1, 0, 0, 1, 0, 0 ]
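The setting analyzed above, Thompson sampling from normal priors combined with a preference function over objective estimates, can be sketched with a linear scalarization as the simplest possible preference function. The paper's preference function is more general, and all arms, weights, and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# 3 arms, 2 objectives; preference f(mu) = w . mu.
true_means = np.array([[0.9, 0.2], [0.5, 0.5], [0.1, 0.9]])
w = np.array([0.7, 0.3])
n_arms, d = true_means.shape

# Independent normal posteriors per arm and objective:
# standard-normal prior, known observation noise variance 0.01.
post_mean = np.zeros((n_arms, d))
post_var = np.ones((n_arms, d))
counts = np.zeros(n_arms)

for _ in range(3000):
    # sample a mean matrix from the posterior, play the arm whose
    # sampled objectives maximize the preference function
    sample = post_mean + np.sqrt(post_var) * rng.standard_normal((n_arms, d))
    a = int(np.argmax(sample @ w))
    obs = true_means[a] + 0.1 * rng.standard_normal(d)
    prec = 1.0 / post_var[a] + 1.0 / 0.01      # conjugate normal update
    post_mean[a] = (post_mean[a] / post_var[a] + obs / 0.01) / prec
    post_var[a] = 1.0 / prec
    counts[a] += 1

best = int(np.argmax(true_means @ w))          # arm 0 for these values
```

The preference-radius question from the abstract corresponds to asking how accurate `post_mean` must be before `argmax(post_mean @ w)` stops changing.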
Title: Spin dynamics of quadrupole nuclei in InGaAs quantum dots, Abstract: Photoluminescence polarization is experimentally studied for samples with (In,Ga)As/GaAs self-assembled quantum dots in a transverse magnetic field (Hanle effect) under slow modulation of the excitation light polarization, from fractions of Hz to tens of kHz. The polarization reflects the evolution of the strongly coupled electron-nuclear spin system in the quantum dots. Strong modification of the Hanle curves under variation of the modulation period is attributed to peculiarities of the spin dynamics of quadrupole nuclei, whose states are split due to deformation of the crystal lattice in the quantum dots. Analysis of the Hanle curves is performed in the framework of a phenomenological model considering separately the dynamics of a nuclear field BNd determined by the +/- 1/2 nuclear spin states and of a nuclear field BNq determined by the split-off states +/- 3/2, +/- 5/2, etc. It is found that the characteristic relaxation time of the nuclear field BNd is of the order of 0.5 s, while the relaxation of the field BNq is faster by three orders of magnitude.
[ 0, 1, 0, 0, 0, 0 ]
Title: Generalized Coordinated Transaction Scheduling: A Market Approach to Seamless Interfaces, Abstract: A generalization of the coordinated transaction scheduling (CTS)---the state-of-the-art interchange scheduling---is proposed. Referred to as generalized coordinated transaction scheduling (GCTS), the proposed approach addresses major seams issues of CTS: the ad hoc use of proxy buses, the presence of loop flow as a result of proxy bus approximation, and difficulties in dealing with multiple interfaces. By allowing market participants to submit bids across market boundaries, GCTS also generalizes the joint economic dispatch that achieves seamless interchange without market participants. It is shown that GCTS asymptotically achieves seamless interface under certain conditions. GCTS is also shown to be revenue adequate in that each regional market has a non-negative net revenue that is equal to its congestion rent. Numerical examples are presented to illustrate the quantitative improvement of the proposed approach.
[ 0, 0, 1, 0, 0, 0 ]
Title: A survey on policy search algorithms for learning robot controllers in a handful of trials, Abstract: Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word "big-data", we refer to this challenge as "micro-data reinforcement learning". We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time.
[ 1, 0, 0, 1, 0, 0 ]
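The survey's second strategy, a data-driven surrogate of the expected reward that the optimizer queries instead of the robot, can be sketched with a tiny Gaussian-process loop and an upper-confidence-bound acquisition. This is a generic illustration, not any specific library's API; the 1-D "episode" function and kernel length-scale are invented.

```python
import numpy as np

# "Episode" on the real system: unknown to the optimizer, expensive
# on a physical robot, so we want very few evaluations of it.
def reward(theta):
    return -(theta - 0.3) ** 2

def rbf(a, b, ell=0.2):                  # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([0.0, 1.0])                 # two seed episodes
y = reward(X)
grid = np.linspace(0, 1, 201)

for _ in range(10):                      # a "handful of trials"
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)                       # GP mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))        # acquisition
    x_next = grid[np.argmax(ucb)]        # query chosen on the surrogate
    X = np.append(X, x_next)
    y = np.append(y, reward(x_next))     # only here is the robot used

best_theta = X[np.argmax(y)]             # should approach 0.3
```

The survey's first strategy would enter here as a prior mean function or a restricted policy parameterization rather than the zero-mean GP used above.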
Title: Non-linear Cyclic Codes that Attain the Gilbert-Varshamov Bound, Abstract: We prove that there exist non-linear binary cyclic codes that attain the Gilbert-Varshamov bound.
[ 1, 0, 1, 0, 0, 0 ]
Title: Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data, Abstract: We study the consistency of Lipschitz learning on graphs in the limit of infinite unlabeled data and finite labeled data. Previous work has conjectured that Lipschitz learning is well-posed in this limit, but is insensitive to the distribution of the unlabeled data, which is undesirable for semi-supervised learning. We first prove that this conjecture is true in the special case of a random geometric graph model with kernel-based weights. Then we go on to show that on a random geometric graph with self-tuning weights, Lipschitz learning is in fact highly sensitive to the distribution of the unlabeled data, and we show how the degree of sensitivity can be adjusted by tuning the weights. In both cases, our results follow from showing that the sequence of learned functions converges to the viscosity solution of an $\infty$-Laplace type equation, and studying the structure of the limiting equation.
[ 1, 0, 0, 0, 0, 0 ]
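Lipschitz learning as discussed above amounts to computing an infinity-harmonic extension of the labels: on an unweighted graph the learned function satisfies u(i) = (max neighbor + min neighbor)/2 at every unlabeled node, and this discrete infinity-Laplace equation can be solved by fixed-point iteration. A path-graph sketch:

```python
import numpy as np

# Path graph 0-1-2-...-10 with labels at the two endpoints.
n = 11
u = np.zeros(n)
u[0], u[-1] = 0.0, 1.0

# Gauss-Seidel iteration of the discrete infinity-Laplace equation:
# each unlabeled node moves to the midpoint of its extreme neighbors.
for _ in range(2000):
    for i in range(1, n - 1):
        nb = (u[i - 1], u[i + 1])
        u[i] = 0.5 * (max(nb) + min(nb))
```

On a path the limit is exact linear interpolation between the labels, independent of where unlabeled nodes sit, which is the distribution-insensitivity the conjecture describes; the self-tuned weights of the paper are what restore sensitivity to the data density.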
Title: Geometrical morphology, Abstract: We explore inflectional morphology as an example of the relationship of the discrete and the continuous in linguistics. The grammar requests a form of a lexeme by specifying a set of feature values, which corresponds to a corner M of a hypercube in feature value space. The morphology responds to that request by providing a morpheme, or a set of morphemes, whose vector sum is geometrically closest to the corner M. In short, the chosen morpheme $\mu$ is the morpheme (or set of morphemes) that maximizes the inner product of $\mu$ and M.
[ 1, 0, 0, 0, 0, 0 ]
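The selection rule in the abstract is a plain inner-product argmax over morpheme vectors. A toy instance, with feature axes and morpheme vectors invented purely for illustration:

```python
import numpy as np

# Hypothetical feature-value axes of the hypercube.
features = ["plural", "past", "3rd-person"]
M = np.array([1.0, 1.0, 0.0])      # requested corner: plural + past

# Hypothetical morpheme vectors in the same feature space.
morphemes = {
    "-s":  np.array([1.0, 0.0, 0.2]),
    "-ed": np.array([0.1, 1.0, 0.0]),
    "-en": np.array([0.6, 0.7, 0.0]),
}

# The morphology returns the morpheme maximizing <mu, M>.
chosen = max(morphemes, key=lambda m: morphemes[m] @ M)
```

Here "-en" wins with inner product 1.3, beating "-ed" (1.1) and "-s" (1.0); sets of morphemes would be handled by maximizing over vector sums.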
Title: Self-similar groups of type FP_{n}, Abstract: We construct new classes of self-similar groups: S-arithmetic groups, affine groups and metabelian groups. Most of the soluble ones are finitely presented and of type FP_{n} for appropriate n.
[ 0, 0, 1, 0, 0, 0 ]
Title: Operator Fitting for Parameter Estimation of Stochastic Differential Equations, Abstract: Estimation of parameters is a crucial part of model development. When models are deterministic, one can minimise the fitting error; for stochastic systems one must be more careful. Broadly, parameterisation methods for stochastic dynamical systems fall into maximum-likelihood-inspired and method-of-moments-inspired techniques. We propose a method in which one matches a finite dimensional approximation of the Koopman operator with the implied Koopman operator as generated by an extended dynamic mode decomposition approximation. One advantage of this approach is that the objective evaluation cost can be independent of the number of samples for some dynamical systems. We test our approach on two simple systems in the form of stochastic differential equations, compare to benchmark techniques, and consider limited eigen-expansions of the operators being approximated. Other small variations on the technique are also considered, and we discuss the advantages of our formulation.
[ 0, 0, 1, 1, 0, 0 ]
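The matching idea above can be shown in its smallest possible instance, under heavy simplification: for an Ornstein-Uhlenbeck process dX = -theta X dt + sigma dW, the Koopman operator over lag dt acts on the observable x as multiplication by exp(-theta dt), so a one-element EDMD dictionary {x} already identifies theta. The paper treats richer dictionaries and eigen-expansions; this is only a sanity-check example.

```python
import numpy as np

rng = np.random.default_rng(4)

theta_true, sigma, dt, n = 1.5, 0.5, 0.01, 200_000

# Euler-Maruyama simulation of the OU process.
xi = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
x = np.empty(n); x[0] = 1.0
for k in range(n - 1):
    x[k + 1] = x[k] * (1.0 - theta_true * dt) + xi[k]

# EDMD with dictionary psi(x) = x reduces to a least-squares ratio:
# K = <x_{t+1}, x_t> / <x_t, x_t>, the empirical one-step multiplier.
K_edmd = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

# Match against the implied operator exp(-theta dt) and invert.
theta_hat = -np.log(K_edmd) / dt
```

The estimate carries both sampling noise and an Euler discretisation bias of order dt, so `theta_hat` lands near, not exactly at, 1.5.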
Title: Strength Factors: An Uncertainty System for a Quantified Modal Logic, Abstract: We present a new system S for handling uncertainty in a quantified modal logic (first-order modal logic). The system is based on both probability theory and proof theory. The system is derived from Chisholm's epistemology. We concretize Chisholm's system by grounding his undefined and primitive (i.e. foundational) concept of reasonableness in probability and proof theory. S can be useful in systems that have to interact with humans and provide justifications for their uncertainty. As a demonstration of the system, we apply it to provide a solution to the lottery paradox. Another advantage of the system is that it can be used to provide uncertainty values for counterfactual statements. Counterfactuals are statements that an agent knows for sure are false. Among other cases, counterfactuals are useful when systems have to explain their actions to users. Uncertainties for counterfactuals fall out naturally from our system. Efficient reasoning in just simple first-order logic is a hard problem. Resolution-based first-order reasoning systems have made significant progress over the last several decades in building systems that have solved non-trivial tasks (even unsolved conjectures in mathematics). We present a sketch of a novel algorithm for reasoning that extends first-order resolution. Finally, while there have been many systems of uncertainty for propositional logics, first-order logics and propositional modal logics, there has been very little work in building systems of uncertainty for first-order modal logics. The work described here is in progress; once finished, it will address this gap.
[ 1, 0, 0, 0, 0, 0 ]
Title: Binary companions of nearby supernova remnants found with Gaia, Abstract: We search for runaway former companions of the progenitors of nearby Galactic core-collapse supernova remnants (SNRs) in the Tycho-Gaia astrometric solution (TGAS). We look for candidates for a sample of ten SNRs with distances less than $2\;\mathrm{kpc}$, taking astrometry and $G$ magnitude from TGAS and $B,V$ magnitudes from the AAVSO Photometric All-Sky Survey (APASS). A simple method of tracking back stars and finding the closest point to the SNR centre is shown to have several failings when ranking candidates. In particular, it neglects our expectation that massive stars preferentially have massive companions. We evolve a grid of binary stars to exploit these covariances in the distribution of runaway star properties in colour - magnitude - ejection velocity space. We construct an analytic model which predicts the properties of a runaway star, in which the model parameters are the properties of the progenitor binary and the properties of the SNR. Using nested sampling we calculate the Bayesian evidence for each candidate to be the runaway and simultaneously constrain the properties of that runaway and of the SNR itself. We identify four likely runaway companions of the Cygnus Loop, HB 21, S147 and the Monoceros Loop. HD 37424 has previously been suggested as the companion of S147; however, the other three stars are new candidates. The favoured companion of HB 21 is the Be star BD+50 3188, whose emission-line features could be explained by pre-supernova mass transfer from the primary. There is a small probability that the $2\;\mathrm{M}_{\odot}$ candidate runaway TYC 2688-1556-1 associated with the Cygnus Loop is a hypervelocity star. If the Monoceros Loop is related to the on-going star formation in the Mon OB2 association, the progenitor of the Monoceros Loop is required to be more massive than $40\;\mathrm{M}_{\odot}$, which is in tension with the posterior for HD 261393.
[ 0, 1, 0, 0, 0, 0 ]
Title: Some Distributions on Finite Rooted Binary Trees, Abstract: We introduce some natural families of distributions on rooted binary ranked plane trees with a view toward unifying ideas from various fields, including macroevolution, epidemiology, computational group theory, search algorithms and other fields. In the process we introduce the notions of split-exchangeability and plane-invariance of a general Markov splitting model in order to readily obtain probabilities over various equivalence classes of trees that arise in statistics, phylogenetics, epidemiology and group theory.
[ 0, 0, 1, 0, 0, 0 ]
Title: Time-delayed SIS epidemic model with population awareness, Abstract: This paper analyses the dynamics of an infectious disease with a concurrent spread of disease awareness. The model includes local awareness due to contacts with aware individuals, as well as global awareness due to reported cases of infection and awareness campaigns. We investigate the effects of time delay in the response of unaware individuals to available information on the epidemic dynamics by establishing conditions for a Hopf bifurcation of the endemic steady state of the model. Analytical results are supported by numerical bifurcation analysis and simulations.
[ 0, 1, 0, 0, 0, 0 ]
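A minimal numerical sketch of a delayed-awareness SIS model in the spirit of the abstract. The functional form of the awareness term and all parameter values are invented, and the paper's distinction between local and global awareness is collapsed into a single delayed reduction of the contact rate.

```python
import numpy as np

# SIS with awareness: the effective contact rate is reduced according
# to the infection level reported a time tau ago (delayed response).
beta, gamma, a, tau = 0.5, 0.2, 5.0, 2.0
dt, T = 0.01, 200.0
steps, lag = int(T / dt), int(tau / dt)

I = np.empty(steps); I[0] = 0.01
for k in range(steps - 1):
    I_del = I[max(k - lag, 0)]                 # delayed information
    contact = beta / (1.0 + a * I_del)         # awareness-reduced rate
    I[k + 1] = I[k] + dt * (contact * I[k] * (1.0 - I[k]) - gamma * I[k])

# Endemic steady state: beta (1 - I*) = gamma (1 + a I*)  =>  I* = 0.2.
# For this delay the state is stable; pushing tau up is how a Hopf
# bifurcation (sustained oscillations) would be probed numerically.
I_end = I[-1]
```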
Title: Uplink Performance Analysis in D2D-Enabled mmWave Cellular Networks, Abstract: In this paper, we provide an analytical framework to analyze the uplink performance of device-to-device (D2D)-enabled millimeter wave (mmWave) cellular networks. Signal-to-interference-plus-noise ratio (SINR) outage probabilities are derived for both cellular and D2D links using tools from stochastic geometry. The distinguishing features of mmWave communications such as directional beamforming and having different path loss laws for line-of-sight (LOS) and non-line-of-sight (NLOS) links are incorporated into the outage analysis by employing a flexible mode selection scheme and Nakagami fading. Also, the effect of beamforming alignment errors on the outage probability is investigated to get insight on the performance in practical scenarios.
[ 1, 0, 0, 0, 0, 0 ]
Title: The GAN Landscape: Losses, Architectures, Regularization, and Normalization, Abstract: Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of "tricks". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond it, fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub.
[ 0, 0, 0, 1, 0, 0 ]
Title: Period polynomials, derivatives of $L$-functions, and zeros of polynomials, Abstract: Period polynomials have long been fruitful tools for the study of values of $L$-functions in the context of major outstanding conjectures. In this paper, we survey some facets of this study from the perspective of Eichler cohomology. We discuss ways to incorporate non-cuspidal modular forms and values of derivatives of $L$-functions into the same framework. We further review investigations of the location of zeros of the period polynomial as well as of its analogue for $L$-derivatives.
[ 0, 0, 1, 0, 0, 0 ]
Title: Forming disc galaxies in major mergers II. The central mass concentration problem and a comparison of GADGET3 with GIZMO, Abstract: Context: In a series of papers, we study the major merger of two disk galaxies in order to establish whether or not such a merger can produce a disc galaxy. Aims: Our aim here is to describe in detail the technical aspects of our numerical experiments. Methods: We discuss the initial conditions of our major merger, which consist of two protogalaxies on a collision orbit. We show that such merger simulations can produce a non-realistic central mass concentration, and we propose simple, parametric, AGN-like feedback as a solution to this problem. Our AGN-like feedback algorithm is very simple: at each time-step we take all particles whose local volume density is above a given threshold value and increase their temperature to a preset value. We also compare the GADGET3 and GIZMO codes, by applying both of them to the same initial conditions. Results: We show that the evolution of isolated protogalaxies resembles the evolution of disk galaxies, thus arguing that our protogalaxies are well suited for our merger simulations. We demonstrate that the problem with the unphysical central mass concentration in our merger simulations is further aggravated when we increase the resolution. We show that our AGN-like feedback removes this non-physical central mass concentration, and thus allows the formation of realistic bars. Note that our AGN-like feedback mainly affects the central region of a model, without significantly modifying the rest of the galaxy. We demonstrate that, in the context of our kind of simulation, GADGET3 gives results which are very similar to those obtained with the PSPH (density independent SPH) flavor of GIZMO. 
Moreover, in the examples we tried, the differences between the results of the two flavors of GIZMO, namely PSPH and MFM (a mesh-less algorithm), are similar to and, in some comparisons, larger than the differences between the results of GADGET3 and PSPH.
[ 0, 1, 0, 0, 0, 0 ]
Title: Measuring Systematic Risk with Neural Network Factor Model, Abstract: In this paper, we measure systematic risk with a new nonparametric factor model, the neural network factor model. Suitable factors for systematic risk can be found naturally by feeding daily returns on a wide range of assets into the bottleneck network. Unlike parametric factor models, the network-based model is not tied to a probabilistic structure, and it needs no feature engineering because it selects notable features by itself. In addition, we compare performance between our model and the existing models using 20 years of data on S&P 100 components. Although the new model cannot outperform the best of the parametric factor models, owing to limitations of variational inference (the estimation method used for this study), it is still noteworthy in that it achieves nearly the performance of the best comparable models without any prior knowledge.
[ 0, 0, 0, 0, 0, 1 ]
Title: Lensing and the Warm Hot Intergalactic Medium, Abstract: The correlation of weak lensing and Cosmic Microwave Background (CMB) data traces the pressure distribution of the hot, ionized gas and the underlying matter density field. The measured correlation is dominated by baryons residing in halos. Detecting the contribution from unbound gas by measuring the residual cross-correlation after masking all known halos requires a theoretical understanding of this correlation and its dependence on model parameters. Our model assumes that the gas in filaments is well described by a log-normal probability distribution function, with temperatures $10^{5-7}$K and overdensities $\xi\le 100$. The lensing-comptonization cross-correlation is dominated by gas with overdensities in the range $\xi\approx[3-33]$; the signal is generated at redshifts $z\le 1$. If only 10\% of the measured cross-correlation is due to unbound gas, then the most recent measurements set an upper limit of $\bar{T}_e\lesssim 10^6$K on the mean temperature of the intergalactic medium. The amplitude is proportional to the baryon fraction stored in filaments. The lensing-comptonization power spectrum peaks at a different scale than the gas in halos making it possible to distinguish both contributions. To trace the distribution of the low density and low temperature plasma on cosmological scales, the effect of halos will have to be subtracted from the data, requiring observations with larger signal-to-noise ratio than currently available.
[ 0, 1, 0, 0, 0, 0 ]
Title: Guaranteed Simultaneous Asymmetric Tensor Decomposition via Orthogonalized Alternating Least Squares, Abstract: We consider the asymmetric orthogonal tensor decomposition problem, and present an orthogonalized alternating least square algorithm that converges to rank-$r$ of the true tensor factors simultaneously in $O(\log(\log(\frac{1}{\epsilon})))$ steps under our proposed Trace Based Initialization procedure. Trace Based Initialization requires $O(1/{\log (\frac{\lambda_{r}}{\lambda_{r+1}})})$ number of matrix subspace iterations to guarantee a "good" initialization for the simultaneous orthogonalized ALS method, where $\lambda_r$ is the $r$-th largest singular value of the tensor. We are the first to give a theoretical guarantee on orthogonal asymmetric tensor decomposition using Trace Based Initialization procedure and the orthogonalized alternating least squares. Our Trace Based Initialization also improves convergence for symmetric orthogonal tensor decomposition.
[ 0, 0, 0, 1, 0, 0 ]
Title: The average sizes of two-torsion subgroups in quotients of class groups of cubic fields, Abstract: We prove a generalization of a result of Bhargava regarding the average size of $\mathrm{Cl}(K)[2]$ as $K$ varies among cubic fields. For a fixed set of rational primes $S$, we obtain a formula for the average size of $\mathrm{Cl}(K)/\langle S \rangle[2]$ as $K$ varies among cubic fields with a fixed signature, where $\langle S \rangle$ is the subgroup of $\mathrm{Cl}(K)$ generated by the classes of primes of $K$ above primes in $S$. As a consequence, we are able to calculate the average sizes of $K_{2n}(\mathcal{O}_K)[2]$ for $n > 0$ and for the relaxed Selmer group $\mathrm{Sel}_2^S(K)$ as $K$ varies in these same families.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Strongly Consistent Finite Difference Scheme for Steady Stokes Flow and its Modified Equations, Abstract: We construct and analyze a strongly consistent second-order finite difference scheme for the steady two-dimensional Stokes flow. The pressure Poisson equation is explicitly incorporated into the scheme. Our approach suggested by the first two authors is based on a combination of the finite volume method, difference elimination, and numerical integration. We make use of the techniques of the differential and difference Janet/Groebner bases. In order to prove strong consistency of the generated scheme we correlate the differential ideal generated by the polynomials in the Stokes equations with the difference ideal generated by the polynomials in the constructed difference scheme. Additionally, we compute the modified differential system of the obtained scheme and analyze the scheme's accuracy and strong consistency by considering this system. An evaluation of our scheme against the established marker-and-cell method is carried out.
[ 1, 0, 0, 0, 0, 0 ]
Title: Robust stability analysis of DC microgrids with constant power loads, Abstract: This paper studies stability analysis of DC microgrids with uncertain constant power loads (CPLs). It is well known that CPLs have negative impedance effects, which may cause instability in a DC microgrid. Existing works often study the stability around a given equilibrium based on some nominal values of CPLs. However, in real applications, the equilibrium of a DC microgrid depends on the loading condition that often changes over time. Different from many previous results, this paper develops a framework that can analyze the DC microgrid stability for a given range of CPLs. The problem is formulated as a robust stability problem of a polytopic uncertain linear system. By exploiting the structure of the problem, we derive a set of sufficient conditions that can guarantee robust stability. The conditions can be efficiently checked by solving a convex optimization problem whose complexity does not grow with the number of buses in the microgrid. The effectiveness and non-conservativeness of the proposed framework are demonstrated using simulation examples.
[ 0, 0, 1, 0, 0, 0 ]
Title: Comparison results for first order linear operators with reflection and periodic boundary value conditions, Abstract: This work is devoted to the study of the first order operator $x'(t)+m\,x(-t)$ coupled with periodic boundary value conditions. We describe the eigenvalues of the operator and obtain the expression of its related Green's function in the non resonant case. We also obtain the range of the values of the real parameter $m$ for which the integral kernel, which provides the unique solution, has constant sign. In this way, we automatically establish maximum and anti-maximum principles for the equation. Some applications to the existence of nonlinear periodic boundary value problems are showed.
[ 0, 0, 1, 0, 0, 0 ]
Title: Quantitative stochastic homogenization and regularity theory of parabolic equations, Abstract: We develop a quantitative theory of stochastic homogenization for linear, uniformly parabolic equations with coefficients depending on space and time. Inspired by recent works in the elliptic setting, our analysis is focused on certain subadditive quantities derived from a variational interpretation of parabolic equations. These subadditive quantities are intimately connected to spatial averages of the fluxes and gradients of solutions. We implement a renormalization-type scheme to obtain an algebraic rate for their convergence, which is essentially a quantification of the weak convergence of the gradients and fluxes of solutions to their homogenized limits. As a consequence, we obtain estimates of the homogenization error for the Cauchy-Dirichlet problem which are optimal in stochastic integrability. We also develop a higher regularity theory for solutions of the heterogeneous equation, including a uniform $C^{0,1}$-type estimate and a Liouville theorem of every finite order.
[ 0, 0, 1, 0, 0, 0 ]
Title: Boundedness of the Bergman projection on generalized Fock-Sobolev spaces on ${\mathbb C}^n$, Abstract: In this paper we solve a problem posed by H. Bommier-Hato, M. Engliš and E.H. Youssfi in [3] on the boundedness of the Bergman-type projections in generalized Fock spaces. It will be a consequence of two facts: a full description of the embeddings between generalized Fock-Sobolev spaces and a complete characterization of the boundedness of the above Bergman type projections between weighted $L^p$-spaces related to generalized Fock-Sobolev spaces.
[ 0, 0, 1, 0, 0, 0 ]
Title: Support Vector Machines and generalisation in HEP, Abstract: We review the concept of Support Vector Machines (SVMs) and discuss examples of their use in a number of scenarios. Several SVM implementations have been used in HEP and we exemplify this algorithm using the Toolkit for Multivariate Analysis (TMVA) implementation. We discuss examples relevant to HEP including background suppression for $H\to\tau^+\tau^-$ at the LHC with several different kernel functions. Performance benchmarking leads to the issue of generalisation of hyper-parameter selection. The avoidance of fine tuning (over-training or over-fitting) in MVA hyper-parameter optimisation, i.e. the ability to ensure generalised performance of an MVA that is independent of the training, validation and test samples, is of utmost importance. We discuss this issue and compare and contrast the performance of hold-out and k-fold cross-validation. We have extended the SVM functionality and introduced tools to facilitate cross-validation in TMVA and present results based on these improvements.
[ 0, 1, 0, 0, 0, 0 ]
Title: A sequent calculus for the Tamari order, Abstract: We introduce a sequent calculus with a simple restriction of Lambek's product rules that precisely captures the classical Tamari order, i.e., the partial order on fully-bracketed words (equivalently, binary trees) induced by a semi-associative law (equivalently, tree rotation). We establish a focusing property for this sequent calculus (a strengthening of cut-elimination), which yields the following coherence theorem: every valid entailment in the Tamari order has exactly one focused derivation. One combinatorial application of this coherence theorem is a new proof of the Tutte-Chapoton formula for the number of intervals in the Tamari lattice $Y_n$. We also apply the sequent calculus and the coherence theorem to build a surprising bijection between intervals of the Tamari order and a certain fragment of lambda calculus, consisting of the $\beta$-normal planar lambda terms with no closed proper subterms.
[ 1, 0, 1, 0, 0, 0 ]
Title: Seebeck Effect in Nanoscale Ferromagnets, Abstract: We present a theory of the Seebeck effect in nanoscale ferromagnets with dimensions smaller than the spin diffusion length. The spin accumulation generated by a temperature gradient strongly affects the thermopower. We also identify a correction arising from the transverse temperature gradient induced by the anomalous Ettingshausen effect. The effect of an induced spin-heat accumulation gradient is considered as well. The importance of these effects for nanoscale ferromagnets is illustrated by ab initio calculations for dilute ferromagnetic alloys.
[ 0, 1, 0, 0, 0, 0 ]
Title: Fast Asymmetric Fronts Propagation for Image Segmentation, Abstract: In this paper, we introduce a generalized asymmetric fronts propagation model based on the geodesic distance maps and the Eikonal partial differential equations. One of the key ingredients for the computation of the geodesic distance map is the geodesic metric, which can govern the action of the geodesic distance level set propagation. We consider a Finsler metric with the Randers form, through which the asymmetry and anisotropy enhancements can be taken into account to prevent the fronts leaking problem during the fronts propagation. These enhancements can be derived from an image edge-dependent vector field such as the gradient vector flow. The numerical implementations are carried out by the Finsler variant of the fast marching method, leading to very efficient interactive segmentation schemes. We apply the proposed Finsler fronts propagation model to image segmentation applications. Specifically, the foreground and background segmentation is implemented by the Voronoi index map. In addition, for the application of tubularity segmentation, we exploit the level set lines of the geodesic distance map associated to the proposed Finsler metric, provided that a threshold value is given.
[ 1, 0, 0, 0, 0, 0 ]
Title: A step towards Twist Conjecture, Abstract: Under the assumption that a defining graph of a Coxeter group admits only twists in $\mathbb{Z}_2$ and is of type FC, we prove Mühlherr's Twist Conjecture.
[ 0, 0, 1, 0, 0, 0 ]
Title: Epidemic Threshold in Continuous-Time Evolving Networks, Abstract: Current understanding of the critical outbreak condition on temporal networks relies on approximations (time scale separation, discretization) that may bias the results. We propose a theoretical framework to compute the epidemic threshold in continuous time through the infection propagator approach. We introduce the {\em weak commutation} condition, allowing annealed networks, activity-driven networks, and time scale separation to be interpreted within one formalism. Our work provides a coherent connection between discrete and continuous time representations applicable to realistic scenarios.
[ 0, 1, 0, 0, 0, 0 ]
Title: Can Two-Way Direct Communication Protocols Be Considered Secure?, Abstract: We consider attacks on two-way quantum key distribution protocols in which an undetectable eavesdropper copies all messages in the message mode. We show that under the attacks there is no disturbance in the message mode and that the mutual information between the sender and the receiver is always constant and equal to one. It follows that recent proofs of security for two-way protocols cannot be considered complete since they do not cover the considered attacks.
[ 1, 0, 0, 0, 0, 0 ]
Title: Using Deep Neural Network Approximate Bayesian Network, Abstract: We present a new method to approximate posterior probabilities of Bayesian Network using Deep Neural Network. Experiment results on several public Bayesian Network datasets show that Deep Neural Network is capable of learning the joint probability distribution of a Bayesian Network by learning from a few observation and posterior probability distribution pairs with high accuracy. Compared with the traditional approximate method, the likelihood weighting sampling algorithm, our method is much faster and gains higher accuracy in medium sized Bayesian Networks. Another advantage of our method is that it can be parallelized much more easily on GPU without extra effort. We also explored the connection between the accuracy of our model and the number of training examples. The result shows that our model saturates as the number of training examples grows and we don't need many training examples to get a reasonably good result. Another contribution of our work is that we have shown a discriminative model like a Deep Neural Network can approximate a generative model like a Bayesian Network.
[ 0, 0, 0, 1, 0, 0 ]
Title: Simple Classification using Binary Data, Abstract: Binary, or one-bit, representations of data arise naturally in many applications, and are appealing in both hardware implementations and algorithm design. In this work, we study the problem of data classification from binary data and propose a framework with low computation and resource costs. We illustrate the utility of the proposed approach through stylized and realistic numerical experiments, and provide a theoretical analysis for a simple case. We hope that our framework and analysis will serve as a foundation for studying similar types of approaches.
[ 1, 0, 0, 1, 0, 0 ]
Title: Inverse regression for ridge recovery: A data-driven approach for parameter reduction in computer experiments, Abstract: Parameter reduction can enable otherwise infeasible design and uncertainty studies with modern computational science models that contain several input parameters. In statistical regression, techniques for sufficient dimension reduction (SDR) use data to reduce the predictor dimension of a regression problem. A computational scientist hoping to use SDR for parameter reduction encounters a problem: a computer prediction is best represented by a deterministic function of the inputs, so data comprised of computer simulation queries fail to satisfy the SDR assumptions. To address this problem, we interpret SDR methods sliced inverse regression (SIR) and sliced average variance estimation (SAVE) as estimating the directions of a ridge function, which is a composition of a low-dimensional linear transformation with a nonlinear function. Within this interpretation, SIR and SAVE estimate matrices of integrals whose column spaces are contained in the ridge directions' span; we analyze and numerically verify convergence of these column spaces as the number of computer model queries increases. Moreover, we show example functions that are not ridge functions but whose inverse conditional moment matrices are low-rank. Consequently, the computational scientist should beware when using SIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly suggest that truly important directions are unimportant.
[ 0, 0, 1, 0, 0, 0 ]
Title: Robust, high brightness, degenerate entangled photon source at room temperature, Abstract: We report on a compact, simple and robust high brightness entangled photon source at room temperature. Based on a 30 mm long periodically poled potassium titanyl phosphate (PPKTP), the source produces non-collinear, type-0 phase-matched, degenerate photons at 810 nm with a pair production rate as high as 39.13 MHz per mW at room temperature. To the best of our knowledge, this is the highest photon pair rate generated using bulk crystals pumped with a continuous-wave laser. Combined with the inherently stable polarization Sagnac interferometer, the source produces an entangled state violating Bell's inequality by nearly 10 standard deviations and a Bell state fidelity of 0.96. The compact footprint, simple and robust experimental design and room temperature operation make our source ideal for various quantum communication experiments including long distance free space and satellite communications.
[ 0, 1, 0, 0, 0, 0 ]
Title: Exploiting Spatial Degrees of Freedom for High Data Rate Ultrasound Communication with Implantable Devices, Abstract: We propose and demonstrate an ultrasonic communication link using spatial degrees of freedom to increase data rates for deeply implantable medical devices. Low attenuation and millimeter wavelengths make ultrasound an ideal communication medium for miniaturized low-power implants. While small spectral bandwidth has drastically limited achievable data rates in conventional ultrasonic implants, large spatial bandwidth can be exploited by using multiple transducers in a multiple-input/multiple-output system to provide spatial multiplexing gain without additional power, larger bandwidth, or complicated packaging. We experimentally verify the communication link in mineral oil with a transmitter and receiver 5 cm apart, each housing two custom-designed mm-sized piezoelectric transducers operating at the same frequency. Two streams of data modulated with quadrature phase-shift keying at 125 kbps are simultaneously transmitted and received on both channels, effectively doubling the data rate to 250 kbps with a measured bit error rate below 1e-4. We also evaluate the performance and robustness of the channel separation network by testing the communication link after introducing position offsets. These results demonstrate the potential of spatial multiplexing to enable more complex implant applications requiring higher data rates.
[ 1, 1, 0, 0, 0, 0 ]
Title: Unexpected Enhancement of Three-Dimensional Low-Energy Spin Correlations in Quasi-Two-Dimensional Fe$_{1+y}$Te$_{1-x}$Se$_{x}$ System at High Temperature, Abstract: We report inelastic neutron scattering measurements of low energy ($\hbar \omega < 10$ meV) magnetic excitations in the "11" system Fe$_{1+y}$Te$_{1-x}$Se$_{x}$. The spin correlations are two-dimensional (2D) in the superconducting samples at low temperature, but appear much more three-dimensional when the temperature rises well above $T_c \sim 15$ K, with a clear increase of the (dynamic) spin correlation length perpendicular to the Fe planes. The spontaneous change of dynamic spin correlations from 2D to 3D on warming is unexpected and cannot be naturally explained when only the spin degree of freedom is considered. Our results suggest that the low temperature physics in the "11" system, in particular the evolution of low energy spin excitations towards better satisfying the nesting condition for mediating superconducting pairing, is driven by changes in orbital correlations.
[ 0, 1, 0, 0, 0, 0 ]
Title: Three-dimensional image reconstruction in J-PET using Filtered Back Projection method, Abstract: We present a method and preliminary results of the image reconstruction in the Jagiellonian PET tomograph. Using GATE (Geant4 Application for Tomographic Emission), interactions of the 511 keV photons with a cylindrical detector were generated. Pairs of such photons, flying back-to-back, originate from e+e- annihilations inside a 1-mm spherical source. Spatial and temporal coordinates of hits were smeared using experimental resolutions of the detector. We incorporated the algorithm of the 3D Filtered Back Projection, implemented in the STIR and TomoPy software packages, which differ in approximation methods. Consistent results for the Point Spread Functions of ~5/7 mm and ~9/20 mm were obtained, using STIR, for transverse and longitudinal directions, respectively, with no time-of-flight information included.
[ 0, 1, 0, 0, 0, 0 ]
Title: Two-component domain decomposition scheme with overlapping subdomains for parabolic equations, Abstract: An iteration-free method of domain decomposition is considered for approximately solving a boundary value problem for a second-order parabolic equation. A standard approach to constructing domain decomposition schemes is based on a partition of unity for the domain under consideration. Here a new general approach is proposed for constructing domain decomposition schemes with overlapping subdomains based on indicator functions of subdomains. The basic peculiarity of this method is connected with a representation of the problem operator as the sum of two operators, which are constructed for two separate subdomains with the subtraction of the operator that is associated with the intersection of the subdomains. We develop a two-component factorized scheme, which can be treated as a generalization of the standard Alternating Direction Implicit (ADI) schemes to the case of a special three-component splitting. We obtain conditions for the unconditional stability of regionally additive schemes constructed using indicator functions of subdomains. Numerical results are presented for a model two-dimensional problem.
[ 1, 0, 0, 0, 0, 0 ]
Title: Dynamical correlations in the electronic structure of BiFeO$_{3}$, as revealed by dynamical mean field theory, Abstract: Using local density approximation plus dynamical mean-field theory (LDA+DMFT), we have computed the valence band photoelectron spectra of highly popular multiferroic BiFeO$_{3}$. Within DMFT, the local impurity problem is tackled by exact diagonalization (ED) solver. For comparison, we also present result from LDA+U approach, which is commonly used to compute physical properties of this compound. Our LDA+DMFT derived spectra match adequately with the experimental hard X-ray photoelectron spectroscopy (HAXPES) and resonant photoelectron spectroscopy (RPES) for Fe 3$d$ states, whereas the other theoretical method that we employed failed to capture the features of the measured spectra. Thus, our investigation shows the importance of accurately incorporating the dynamical aspects of electron-electron interaction among the Fe 3$d$ orbitals in calculations to produce the experimental excitation spectra, which establishes BiFeO$_{3}$ as a strongly correlated electron system. The LDA+DMFT derived density of states (DOSs) exhibit significant amount of Fe 3$d$ states at the energy of Bi lone-pairs, implying that the latter is not as alone as previously thought in the spectral scenario. Our study also demonstrates that the combination of orbital cross-sections for the constituent elements and broadening schemes for the calculated spectral function are pivotal to explain the detailed structures of the experimental spectra.
[ 0, 1, 0, 0, 0, 0 ]
Title: Robust Parameter Estimation of Regression Model with AR(p) Error Terms, Abstract: In this paper, we consider a linear regression model with AR(p) error terms, with the assumption that the error terms have a t distribution as a heavy tailed alternative to the normal distribution. We obtain the estimators for the model parameters by using the conditional maximum likelihood (CML) method. We employ an iteratively reweighting algorithm (IRA) to find the estimates for the parameters of interest. We provide a simulation study and three real data examples to illustrate the performance of the proposed robust estimators based on the t distribution.
[ 0, 0, 0, 1, 0, 0 ]
Title: Measuring filament orientation: a new quantitative, local approach, Abstract: The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one pixel wide filament skeleton that is output by filament identification algorithms such as \textsc{filfinder}. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the `Sobel-gradient method'. The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used in constructing radial profiles for filament width fitting. The proposed method facilitates automation in analysis of filament skeletons, which is imperative in this era of `big data'.
[ 0, 1, 0, 0, 0, 0 ]
Title: Motions about a fixed point by hypergeometric functions: new non-complex analytical solutions and integration of the herpolhode, Abstract: We study four problems in the dynamics of a body moving about a fixed point, providing a non-complex, analytical solution for all of them. For the first two, we will work on the motion first integrals. For the symmetrical heavy body, that is the Lagrange-Poisson case, we compute the second and third Euler angles in explicit and real forms by means of multiple hypergeometric functions (Lauricella functions). Releasing the weight load but adding the complication of the asymmetry, by means of elliptic integrals of the third kind, we provide the precession angle, completing some previous treatments of the Euler-Poinsot case. Integrating then the relevant differential equation, we reach the finite polar equation of a special trajectory named the {\it herpolhode}. In the last problem we keep the symmetry of the first problem, but without the weight, and take into account a viscous dissipation. The approach of first integrals is no longer practicable in this situation and the Euler equations are faced directly, leading to damped goniometric functions obtained as particular occurrences of Bessel functions of order $-1/2$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Spatial solitons in thermo-optical media from the nonlinear Schrodinger-Poisson equation and dark matter analogues, Abstract: We analyze theoretically the Schrodinger-Poisson equation in two transverse dimensions in the presence of a Kerr term. The model describes the nonlinear propagation of optical beams in thermo-optical media and can be regarded as an analogue system for a self-gravitating self-interacting wave. We compute numerically the family of radially symmetric ground state bright stationary solutions for focusing and defocusing local nonlinearity, keeping in both cases a focusing nonlocal nonlinearity. We also analyze excited states and oscillations induced by fixing the temperature at the borders of the material. We provide simulations of soliton interactions, drawing analogies with the dynamics of galactic cores in the scalar field dark matter scenario.
[ 0, 1, 0, 0, 0, 0 ]
Title: Instrument-Armed Bandits, Abstract: We extend the classic multi-armed bandit (MAB) model to the setting of noncompliance, where the arm pull is a mere instrument and the treatment applied may differ from it, which gives rise to the instrument-armed bandit (IAB) problem. The IAB setting is relevant whenever the experimental units are human since free will, ethics, and the law may prohibit unrestricted or forced application of treatment. In particular, the setting is relevant in bandit models of dynamic clinical trials and other controlled trials on human interventions. Nonetheless, the setting has not been fully investigated in the bandit literature. We show that there are various and divergent notions of regret in this setting, all of which coincide only in the classic MAB setting. We characterize the behavior of these regrets and analyze standard MAB algorithms. We argue for a particular kind of regret that captures the causal effect of treatments but show that standard MAB algorithms cannot achieve sublinear control on this regret. Instead, we develop new algorithms for the IAB problem, prove new regret bounds for them, and compare them to standard MAB algorithms in numerical examples.
[ 1, 0, 0, 1, 0, 0 ]
Title: Deep learning Inversion of Seismic Data, Abstract: In this paper, we propose a new method to tackle the mapping challenge from time-series data to spatial image in the field of seismic exploration, i.e., reconstructing the velocity model directly from seismic data by deep neural networks (DNNs). The conventional way to address this ill-posed seismic inversion problem is through iterative algorithms, which suffer from poor nonlinear mapping and strong non-uniqueness. Other attempts may either import human intervention errors or underuse seismic data. The challenge for DNNs mainly lies in the weak spatial correspondence, the uncertain reflection-reception relationship between seismic data and velocity model as well as the time-varying property of seismic data. To approach these challenges, we propose an end-to-end Seismic Inversion Networks (SeisInvNet for short) with novel components to make the best use of all seismic data. Specifically, we start with every seismic trace and enhance it with its neighborhood information, its observation setup and global context of its corresponding seismic profile. Then from enhanced seismic traces, the spatially aligned feature maps can be learned and further concatenated to reconstruct velocity model. In general, we let every seismic trace contribute to the reconstruction of the whole velocity model by finding spatial correspondence. The proposed SeisInvNet consistently produces improvements over the baselines and achieves promising performance on our proposed SeisInv dataset according to various evaluation metrics, and the inversion results are more consistent with the target from the aspects of velocity value, subsurface structure and geological interface. In addition to the superior performance, the mechanism is also carefully discussed, and some potential problems are identified for further study.
[ 1, 0, 0, 0, 0, 0 ]
Title: Refined open intersection numbers and the Kontsevich-Penner matrix model, Abstract: A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J. P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J. P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.
[ 0, 0, 1, 0, 0, 0 ]
Title: Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks, Abstract: We propose a method, called Label Embedding Network, which can learn label representation (label embedding) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original one-hot represented loss function is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process. As a result, the trained model can achieve substantially higher accuracy and faster convergence. Experimental results based on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embedding is reasonable and interpretable. The proposed method achieves comparable or even better results than the state-of-the-art systems. The source code is available at \url{this https URL}.
[ 1, 0, 0, 0, 0, 0 ]
Title: On the Performance of Multi-Instrument Solar Flare Observations During Solar Cycle 24, Abstract: The current fleet of space-based solar observatories offers us a wealth of opportunities to study solar flares over a range of wavelengths. Significant advances in our understanding of flare physics often come from coordinated observations between multiple instruments. Consequently, considerable efforts have been, and continue to be, made to coordinate observations among instruments (e.g. through the Max Millennium Program of Solar Flare Research). However, there has been no study to date that quantifies how many flares have been observed by combinations of various instruments. Here we describe a technique that retrospectively searches archival databases for flares jointly observed by RHESSI, SDO/EVE (MEGS-A and -B), Hinode/(EIS, SOT, and XRT), and IRIS. Out of the 6953 flares of GOES magnitude C1 or greater that we consider over the 6.5 years after the launch of SDO, 40 have been observed by six or more instruments simultaneously. Using each instrument's individual rate of success in observing flares, we show that the numbers of flares co-observed by three or more instruments are higher than the number expected under the assumption that the instruments operated independently of one another. In particular, the number of flares observed by larger numbers of instruments is much higher than expected. Our study illustrates that these missions often acted in cooperation, or at least had aligned goals. We also provide details on an interactive widget now available in SSWIDL that allows a user to search for flaring events that have been observed by a chosen set of instruments. This provides access to a broader range of events in order to answer specific science questions. The difficulty in scheduling coordinated observations for solar-flare research is discussed with respect to instruments projected to begin operations during Solar Cycle 25, such as DKIST, Solar Orbiter, and Parker Solar Probe.
[ 0, 1, 0, 0, 0, 0 ]
Title: Effective Extensible Programming: Unleashing Julia on GPUs, Abstract: GPUs and other accelerators are popular devices for accelerating compute-intensive, parallelizable applications. However, programming these devices is a difficult task. Writing efficient device code is challenging, and is typically done in a low-level programming language. High-level languages are rarely supported, or do not integrate with the rest of the high-level language ecosystem. To overcome this, we propose compiler infrastructure to efficiently add support for new hardware or environments to an existing programming language. We evaluate our approach by adding support for NVIDIA GPUs to the Julia programming language. By integrating with the existing compiler, we significantly lower the cost to implement and maintain the new compiler, and facilitate reuse of existing application code. Moreover, use of the high-level Julia programming language enables new and dynamic approaches for GPU programming. This greatly improves programmer productivity, while maintaining application performance similar to that of the official NVIDIA CUDA toolkit.
[ 1, 0, 0, 0, 0, 0 ]
Title: Maximum and minimum operators of convex integrands, Abstract: For given convex integrands $\gamma_{{}_{i}}: S^{n}\to \mathbb{R}_{+}$ (where $i=1, 2$), the functions $\gamma_{{}_{max}}$ and $\gamma_{{}_{min}}$ can be defined in a natural way. In this paper, we show that the Wulff shape of $\gamma_{{}_{max}}$ (resp. the Wulff shape of $\gamma_{{}_{min}}$) is exactly the convex hull of $(\mathcal{W}_{\gamma_{{}_{1}}}\cup \mathcal{W}_{\gamma_{{}_{2}}})$ (resp. $\mathcal{W}_{\gamma_{{}_{1}}}\cap \mathcal{W}_{\gamma_{{}_{2}}}$).
[ 0, 0, 1, 0, 0, 0 ]
Title: SEDIGISM: Structure, excitation, and dynamics of the inner Galactic interstellar medium, Abstract: The origin and life-cycle of molecular clouds are still poorly constrained, despite their importance for understanding the evolution of the interstellar medium. We have carried out a systematic, homogeneous, spectroscopic survey of the inner Galactic plane, in order to complement the many continuum Galactic surveys available with crucial distance and gas-kinematic information. Our aim is to combine this data set with recent infrared to sub-millimetre surveys at similar angular resolutions. The SEDIGISM survey covers 78 deg^2 of the inner Galaxy (-60 deg < l < +18 deg, |b| < 0.5 deg) in the J=2-1 rotational transition of 13CO. This isotopologue of CO is less abundant than 12CO by factors up to 100. Therefore, its emission has low to moderate optical depths, and higher critical density, making it an ideal tracer of the cold, dense interstellar medium. The data have been observed with the SHFI single-pixel instrument at APEX. The observational setup covers the 13CO(2-1) and C18O(2-1) lines, plus several transitions from other molecules. The observations have been completed. Data reduction is in progress, and the final data products will be made available in the near future. Here we give a detailed description of the survey and the dedicated data reduction pipeline. Preliminary results based on a science demonstration field covering -20 deg < l < -18.5 deg are presented. Analysis of the 13CO(2-1) data in this field reveals compact clumps, diffuse clouds, and filamentary structures at a range of heliocentric distances. By combining our data with data in the (1-0) transition of CO isotopologues from the ThrUMMS survey, we are able to compute a 3D realization of the excitation temperature and optical depth in the interstellar medium. Ultimately, this survey will provide a detailed, global view of the inner Galactic interstellar medium at an unprecedented angular resolution of ~30".
[ 0, 1, 0, 0, 0, 0 ]
Title: Imitating Driver Behavior with Generative Adversarial Networks, Abstract: The ability to accurately predict and simulate human driving behavior is critical for the development of intelligent transportation systems. Traditional modeling methods have employed simple parametric models and behavioral cloning. This paper adopts a method for overcoming the problem of cascading errors inherent in prior approaches, resulting in realistic behavior that is robust to trajectory perturbations. We extend Generative Adversarial Imitation Learning to the training of recurrent policies, and we demonstrate that our model outperforms rule-based controllers and maximum likelihood models in realistic highway simulations. Our model reproduces emergent behavior of human drivers, such as lane change rate, while maintaining realistic control over long time horizons.
[ 1, 0, 0, 0, 0, 0 ]
Title: Computing isomorphisms and embeddings of finite fields, Abstract: Let $\mathbb{F}_q$ be a finite field. Given two irreducible polynomials $f,g$ over $\mathbb{F}_q$, with $\mathrm{deg} f$ dividing $\mathrm{deg} g$, the finite field embedding problem asks to compute an explicit description of a field embedding of $\mathbb{F}_q[X]/f(X)$ into $\mathbb{F}_q[Y]/g(Y)$. When $\mathrm{deg} f = \mathrm{deg} g$, this is also known as the isomorphism problem. This problem, a special instance of polynomial factorization, plays a central role in computer algebra software. We review previous algorithms, due to Lenstra, Allombert, Rains, and Narayanan, and propose improvements and generalizations. Our detailed complexity analysis shows that our newly proposed variants are at least as efficient as previously known algorithms, and in many cases significantly better. We also implement most of the presented algorithms, compare them with the state of the art computer algebra software, and make the code available as open source. Our experiments show that our new variants consistently outperform available software.
[ 1, 0, 1, 0, 0, 0 ]
Title: Analysis of the current-driven domain wall motion in a ratchet ferromagnetic strip, Abstract: The current-driven domain wall motion in a ratchet memory due to spin-orbit torques is studied from both full micromagnetic simulations and the one dimensional model. Within the framework of this model, the integration of the anisotropy energy contribution leads to a new term in the well known q-$\Phi$ equations, this contribution being responsible for driving the domain wall to an equilibrium position. The comparison between the results drawn by the one dimensional model and full micromagnetic simulations proves the utility of such a model in order to predict the current-driven domain wall motion in the ratchet memory. Additionally, since current pulses are applied, the paper shows how the proper working of such a device requires an adequate balance of excitation and relaxation times, the latter being longer than the former. Finally, the current-driven regime of a ratchet memory is compared to the field-driven regime described elsewhere, highlighting the advantages of this current-driven regime.
[ 0, 1, 0, 0, 0, 0 ]
Title: End-to-End ASR-free Keyword Search from Speech, Abstract: End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid hidden Markov model (HMM)-deep neural network based automatic speech recognition (ASR) systems. Such E2E systems are attractive due to the lack of dependence on alignments between input acoustic and output grapheme or HMM state sequence during training. This paper explores the design of an ASR-free end-to-end system for text query-based keyword search (KWS) from speech trained with minimal supervision. Our E2E KWS system consists of three sub-systems. The first sub-system is a recurrent neural network (RNN)-based acoustic auto-encoder trained to reconstruct the audio through a finite-dimensional representation. The second sub-system is a character-level RNN language model using embeddings learned from a convolutional neural network. Since the acoustic and text query embeddings occupy different representation spaces, they are input to a third feed-forward neural network that predicts whether the query occurs in the acoustic utterance or not. This E2E ASR-free KWS system performs respectably despite lacking a conventional ASR system and trains much faster.
[ 1, 0, 0, 0, 0, 0 ]
Title: Schatten class Hankel and $\overline{\partial}$-Neumann operators on pseudoconvex domains in $\mathbb{C}^n$, Abstract: Let $\Omega$ be a $C^2$-smooth bounded pseudoconvex domain in $\mathbb{C}^n$ for $n\geq 2$ and let $\varphi$ be a holomorphic function on $\Omega$ that is $C^2$-smooth on the closure of $\Omega$. We prove that if $H_{\overline{\varphi}}$ is in Schatten $p$-class for $p\leq 2n$ then $\varphi$ is a constant function. As a corollary, we show that the $\overline{\partial}$-Neumann operator on $\Omega$ is not Hilbert-Schmidt.
[ 0, 0, 1, 0, 0, 0 ]
Title: Cross-Sectional Variation of Intraday Liquidity, Cross-Impact, and their Effect on Portfolio Execution, Abstract: The composition of natural liquidity has been changing over time. An analysis of intraday volumes for the S&P500 constituent stocks illustrates that (i) volume surprises, i.e., deviations from their respective forecasts, are correlated across stocks, and (ii) this correlation increases during the last few hours of the trading session. These observations could be attributed, in part, to the prevalence of portfolio trading activity that is implicit in the growth of ETF, passive and systematic investment strategies; and, to the increased trading intensity of such strategies towards the end of the trading session, e.g., due to execution of mutual fund inflows/outflows that are benchmarked to the closing price on each day. In this paper, we investigate the consequences of such portfolio liquidity on price impact and portfolio execution. We derive a linear cross-asset market impact from a stylized model that explicitly captures the fact that a certain fraction of natural liquidity providers only trade portfolios of stocks whenever they choose to execute. We find that due to cross-impact and its intraday variation, it is optimal for a risk-neutral, cost minimizing liquidator to execute a portfolio of orders in a coupled manner, as opposed to a separable VWAP-like execution that is often assumed. The optimal schedule couples the execution of the various orders so as to be able to take advantage of increased portfolio liquidity towards the end of the day. A worst case analysis shows that the potential cost reduction from this optimized execution schedule over the separable approach can be as high as 6% for plausible model parameters. Finally, we discuss how to estimate cross-sectional price impact if one had a dataset of realized portfolio transaction records that exploits the low-rank structure of its coefficient matrix suggested by our analysis.
[ 0, 0, 0, 0, 0, 1 ]
Title: An investigation of pulsar searching techniques with the Fast Folding Algorithm, Abstract: Here we present an in-depth study of the behaviour of the Fast Folding Algorithm, an alternative pulsar searching technique to the Fast Fourier Transform. Weaknesses in the Fast Fourier Transform, including a susceptibility to red noise, leave it insensitive to pulsars with long rotational periods (P > 1 s). This sensitivity gap has the potential to bias our understanding of the period distribution of the pulsar population. The Fast Folding Algorithm, a time-domain based pulsar searching technique, has the potential to overcome some of these biases. Modern distributed-computing frameworks now allow for the application of this algorithm to all-sky blind pulsar surveys for the first time. However, many aspects of the behaviour of this search technique remain poorly understood, including its responsiveness to variations in pulse shape and the presence of red noise. Using a custom CPU-based implementation of the Fast Folding Algorithm, ffancy, we have conducted an in-depth study into the behaviour of the Fast Folding Algorithm in both an ideal, white noise regime as well as a trial on observational data from the HTRU-S Low Latitude pulsar survey, including a comparison to the behaviour of the Fast Fourier Transform. We are able to both confirm and expand upon earlier studies that demonstrate the ability of the Fast Folding Algorithm to outperform the Fast Fourier Transform under ideal white noise conditions, and demonstrate a significant improvement in sensitivity to long-period pulsars in real observational data through the use of the Fast Folding Algorithm.
[ 0, 1, 0, 0, 0, 0 ]
Title: Approximate Collapsed Gibbs Clustering with Expectation Propagation, Abstract: We develop a framework for approximating collapsed Gibbs sampling in generative latent variable cluster models. Collapsed Gibbs is a popular MCMC method, which integrates out variables in the posterior to improve mixing. Unfortunately for many complex models, integrating out these variables is either analytically or computationally intractable. We efficiently approximate the necessary collapsed Gibbs integrals by borrowing ideas from expectation propagation. We present two case studies where exact collapsed Gibbs sampling is intractable: mixtures of Student-t's and time series clustering. Our experiments on real and synthetic data show that our approximate sampler enables a runtime-accuracy tradeoff in sampling these types of models, providing results with competitive accuracy much more rapidly than the naive Gibbs samplers one would otherwise rely on in these scenarios.
[ 0, 0, 0, 1, 0, 0 ]
Title: Thermal graphene metamaterials and epsilon-near-zero high temperature plasmonics, Abstract: The key feature of a thermophotovoltaic (TPV) emitter is the enhancement of thermal emission corresponding to energies just above the bandgap of the absorbing photovoltaic cell and simultaneous suppression of thermal emission below the bandgap. We show here that a single layer plasmonic coating can perform this task with high efficiency. Our key design principle involves tuning the epsilon-near-zero frequency (plasma frequency) of the metal acting as a thermal emitter to the electronic bandgap of the semiconducting cell. This approach utilizes the change in reflectivity of a metal near its plasma frequency (epsilon-near-zero frequency) to lead to spectrally selective thermal emission and can be adapted to large area coatings using high temperature plasmonic materials. We provide a detailed analysis of the spectral and angular performance of high temperature plasmonic coatings as TPV emitters. We show the potential of such high temperature plasmonic thermal emitter coatings (p-TECs) for narrowband near-field thermal emission. We also show the enhancement of near-surface energy density in graphene-multilayer thermal metamaterials due to a topological transition at an effective epsilon-near-zero frequency. This opens up spectrally selective thermal emission from graphene multilayers in the infrared frequency regime. Our design paves the way for the development of single layer p-TECs and graphene multilayers for spectrally selective radiative heat transfer applications.
[ 0, 1, 0, 0, 0, 0 ]
Title: Learning Convex Regularizers for Optimal Bayesian Denoising, Abstract: We propose a data-driven algorithm for the maximum a posteriori (MAP) estimation of stochastic processes from noisy observations. The primary statistical properties of the sought signal are specified by the penalty function (i.e., negative logarithm of the prior probability density function). Our alternating direction method of multipliers (ADMM)-based approach translates the estimation task into successive applications of the proximal mapping of the penalty function. Capitalizing on this direct link, we define the proximal operator as a parametric spline curve and optimize the spline coefficients by minimizing the average reconstruction error for a given training set. The key aspects of our learning method are that the associated penalty function is constrained to be convex and the convergence of the ADMM iterations is proven. As a result of these theoretical guarantees, adaptation of the proposed framework to different levels of measurement noise is extremely simple and does not require any retraining. We apply our method to estimation of both sparse and non-sparse models of Lévy processes for which the minimum mean square error (MMSE) estimators are available. We carry out a single training session and perform comparisons at various signal-to-noise ratio (SNR) values. Simulations illustrate that the performance of our algorithm is practically identical to that of the MMSE estimator irrespective of the noise power.
[ 1, 0, 0, 1, 0, 0 ]
Title: On the $L^p$ boundedness of wave operators for two-dimensional Schrödinger operators with threshold obstructions, Abstract: Let $H=-\Delta+V$ be a Schrödinger operator on $L^2(\mathbb R^2)$ with real-valued potential $V$, and let $H_0=-\Delta$. If $V$ has sufficient pointwise decay, the wave operators $W_{\pm}=s-\lim_{t\to \pm\infty} e^{itH}e^{-itH_0}$ are known to be bounded on $L^p(\mathbb R^2)$ for all $1< p< \infty$ if zero is not an eigenvalue or resonance. We show that if there is an s-wave resonance or an eigenvalue only at zero, then the wave operators are bounded on $L^p(\mathbb R^2)$ for $1 < p<\infty$. This result stands in contrast to results in higher dimensions, where the presence of zero energy obstructions is known to shrink the range of valid exponents $p$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bivariate Causal Discovery and its Applications to Gene Expression and Imaging Data Analysis, Abstract: The mainstream of research in genetics, epigenetics and imaging data analysis focuses on statistical association or exploring statistical dependence between variables. Despite significant progress in genetic research, understanding the etiology and mechanism of complex phenotypes remains elusive. Using association analysis as a major analytical platform for complex data analysis is a key issue that hampers the theoretical development of genomic science and its application in practice. Causal inference is an essential component for the discovery of mechanistic relationships among complex phenotypes. Many researchers suggest making the transition from association to causation. Despite its fundamental role in science, engineering and biomedicine, traditional methods for causal inference require at least three variables. However, quantitative genetic analysis such as QTL, eQTL, mQTL, and genomic-imaging data analysis requires exploring the causal relationships between two variables. This paper will focus on bivariate causal discovery. We will introduce independence of cause and mechanism (ICM) as a basic principle for causal inference, and algorithmic information theory and the additive noise model (ANM) as major tools for bivariate causal discovery. Large-scale simulations will be performed to evaluate the feasibility of the ANM for bivariate causal discovery. To further evaluate their performance for causal inference, the ANM will be applied to the construction of gene regulatory networks. Also, the ANM will be applied to trait-imaging data analysis to illustrate three scenarios: presence of both causation and association, presence of association while absence of causation, and presence of causation while lack of association between two variables.
[ 0, 0, 0, 0, 1, 0 ]
Title: Revisiting Distillation and Incremental Classifier Learning, Abstract: One of the key differences between the learning mechanism of humans and Artificial Neural Networks (ANNs) is the ability of humans to learn one task at a time. ANNs, on the other hand, can only learn multiple tasks simultaneously. Any attempts at learning new tasks incrementally cause them to completely forget about previous tasks. This lack of ability to learn incrementally, called Catastrophic Forgetting, is considered a major hurdle in building a true AI system. In this paper, our goal is to isolate the truly effective existing ideas for incremental learning from those that only work under certain conditions. To this end, we first thoroughly analyze the current state-of-the-art (iCaRL) method for incremental learning and demonstrate that the good performance of the system is not because of the reasons presented in the existing literature. We conclude that the success of iCaRL is primarily due to knowledge distillation and recognize a key limitation of knowledge distillation, i.e., it often leads to bias in classifiers. Finally, we propose a dynamic threshold moving algorithm that is able to successfully remove this bias. We demonstrate the effectiveness of our algorithm on CIFAR100 and MNIST datasets showing near-optimal results. Our implementation is available at this https URL.
[ 0, 0, 0, 1, 0, 0 ]
Title: Cavitation near the oscillating piezoelectric plate in water, Abstract: It is known that gas bubbles on the surface bounding a fluid flow can change the coefficient of friction and affect the parameters of the boundary layer. In this paper, we propose a method that allows us to create, in the near-wall region, a thin layer of liquid filled with bubbles. It will be shown that if there is an oscillating piezoelectric plate on the surface bounding a liquid, then, under certain conditions, cavitation develops in the boundary layer. The relationship between the parameters of cavitation and the characteristics of the piezoelectric plate oscillations is obtained. Possible applications are discussed.
[ 0, 1, 0, 0, 0, 0 ]
Title: Revised Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks, Abstract: Inverse problems correspond to a certain type of optimization problems formulated over appropriate input distributions. Recently, there has been a growing interest in understanding the computational hardness of these optimization problems, not only in the worst case, but in an average-complexity sense under this same input distribution. In this revised note, we are interested in studying another aspect of hardness, related to the ability to learn how to solve a problem by simply observing a collection of previously solved instances. These 'planted solutions' are used to supervise the training of an appropriate predictive model that parametrizes a broad class of algorithms, with the hope that the resulting model will provide good accuracy-complexity tradeoffs in the average sense. We illustrate this setup on the Quadratic Assignment Problem, a fundamental problem in Network Science. We observe that data-driven models based on Graph Neural Networks offer intriguingly good performance, even in regimes where standard relaxation based techniques appear to suffer.
[ 1, 0, 0, 1, 0, 0 ]
Title: On Completeness Results of Hoare Logic Relative to the Standard Model, Abstract: The general completeness problem of Hoare logic relative to the standard model $N$ of Peano arithmetic has been studied by Cook, and it allows for the use of arbitrary arithmetical formulas as assertions. In practice, the assertions would be simple arithmetical formulas, e.g. of a low level in the arithmetical hierarchy. In addition, we find that, by restricting inputs to $N$, the complexity of the minimal assertion theory for the completeness of Hoare logic to hold can be reduced. This paper further studies the completeness of Hoare logic relative to $N$ by restricting assertions to subclasses of arithmetical formulas (and by restricting inputs to $N$). Our completeness results refine Cook's result by reducing the complexity of the assertion theory.
[ 1, 0, 0, 0, 0, 0 ]
Title: Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks, Abstract: Deep neural networks (DNNs) have excellent representative power and are state of the art classifiers on many tasks. However, they often do not capture their own uncertainties well, making them less robust in the real world as they overconfidently extrapolate and do not notice domain shift. Gaussian processes (GPs) with RBF kernels, on the other hand, have better calibrated uncertainties and do not overconfidently extrapolate far from data in their training set. However, GPs have poor representational power and do not perform as well as DNNs on complex domains. In this paper we show that GP hybrid deep networks, GPDNNs, (GPs on top of DNNs and trained end-to-end) inherit the nice properties of both GPs and DNNs and are much more robust to adversarial examples. When extrapolating to adversarial examples and testing in domain shift settings, GPDNNs frequently output high entropy class probabilities corresponding to essentially "don't know". GPDNNs are therefore promising as deep architectures that know when they don't know.
[ 0, 0, 0, 1, 0, 0 ]
Title: Elliptic regularization of the isometric immersion problem, Abstract: We introduce an elliptic regularization of the PDE system representing the isometric immersion of a surface in $\mathbb R^{3}$. The regularization is geometric, and has a natural variational interpretation.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Debris Backwards Flow Simulation System for Malaysia Airlines Flight 370, Abstract: This paper presents a system based on a Two-Way Particle-Tracking Model to analyze possible crash positions of flight MH370. The particle simulator includes a simple flow simulation of the debris based on a Lagrangian approach and a module to extract appropriate ocean current data from netCDF files. The influence of wind, waves, immersion depth and hydrodynamic behavior is not considered in the simulation.
[ 1, 1, 0, 0, 0, 0 ]
Title: Energy network: towards an interconnected energy infrastructure for the future, Abstract: The fundamental theory of energy networks in different energy forms is established following an in-depth analysis of the nature of energy for comprehensive energy utilization. The definition of an energy network is given. Combining the generalized balance equation of energy in space and the Pfaffian equation, the generalized transfer equations of energy in lines (pipes) are proposed. The energy variation laws in the transfer processes are investigated. To establish the equations of energy networks, Kirchhoff's Law in electric networks is extended to energy networks, which is called the Generalized Kirchhoff's Law. According to the linear phenomenological law, the generalized equivalent energy transfer equations with lumped parameters are derived in terms of the characteristic equations of energy transfer in lines (pipes). The equations are finally unified into a complete energy network equation system and its solvability is further discussed. Experiments are carried out on a combined cooling, heating and power (CCHP) system in engineering, and the energy network theory proposed in this paper is used to model and analyze this system. By comparing the theoretical results obtained by our modeling approach with the data measured in experiments, the energy equations are validated.
[ 0, 1, 0, 0, 0, 0 ]
Title: Smoothing of transport plans with fixed marginals and rigorous semiclassical limit of the Hohenberg-Kohn functional, Abstract: We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained-search wavefunction tends weakly in the sense of probability measures to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N=2 [CFK11]. The correct limit problem has been derived in the physics literature by Seidl [Se99] and Seidl, Gori-Giorgi and Savin [SGS07]; in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density -- an infinite-dimensional quadratic expression in the wavefunction -- by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.
[ 0, 0, 1, 0, 0, 0 ]
Title: Standards for enabling heterogeneous IaaS cloud federations, Abstract: The technology market is continuing a rapid growth phase in which different resource providers and Cloud Management Frameworks are positioning themselves to provide ad-hoc solutions (in terms of management interfaces, information discovery or billing), trying to differentiate themselves from competitors but, as a result, remaining incompatible with one another when addressing more complex scenarios like federated clouds. Understanding the interoperability problems present in current infrastructures is therefore essential, and can be tackled by studying how existing and emerging standards could enhance user experience in the cloud ecosystem. In this paper we will review the current open challenges in Infrastructure as a Service cloud interoperability and federation, as well as point to the potential standards that should alleviate these problems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Transforming acoustic characteristics to deceive playback spoofing countermeasures of speaker verification systems, Abstract: Automatic speaker verification (ASV) systems use a playback detector to filter out playback attacks and ensure verification reliability. Since current playback detection models are almost always trained using genuine and played-back speech, it may be possible to degrade their performance by transforming the acoustic characteristics of the played-back speech to be closer to those of the genuine speech. One way to do this is to enhance speech "stolen" from the target speaker before playback. We tested the effectiveness of a playback attack using this method by using the speech enhancement generative adversarial network to transform acoustic characteristics. Experimental results showed that use of this "enhanced stolen speech" method significantly increases the equal error rates for the baseline used in the ASVspoof 2017 challenge and for a light convolutional neural network-based method. The results also showed that its use degrades the performance of a Gaussian mixture model-universal background model-based ASV system. This type of attack is thus an urgent problem needing to be solved.
[ 1, 0, 0, 0, 0, 0 ]
Title: Low frequency spectral energy distributions of radio pulsars detected with the Murchison Widefield Array, Abstract: We present low-frequency spectral energy distributions of 60 known radio pulsars observed with the Murchison Widefield Array (MWA) telescope. We searched the GaLactic and Extragalactic All-sky MWA (GLEAM) survey images for 200-MHz continuum radio emission at the position of all pulsars in the ATNF pulsar catalogue. For the 60 confirmed detections we have measured flux densities in 20 x 8 MHz bands between 72 and 231 MHz. We compare our results to existing measurements and show that the MWA flux densities are in good agreement.
[ 0, 1, 0, 0, 0, 0 ]
Title: On Geometry and Symmetry of Kepler Systems. I, Abstract: We study the Kepler metrics on Kepler manifolds from the point of view of Sasakian geometry and Hessian geometry. This establishes a link between the problem of classical gravity and the modern geometric methods in the study of AdS/CFT correspondence in string theory.
[ 0, 0, 1, 0, 0, 0 ]
Title: IoT Data Analytics Using Deep Learning, Abstract: Deep learning is a popular machine learning approach which has achieved a lot of progress in all traditional machine learning areas. Internet of Things (IoT) and Smart City deployments are generating large amounts of time-series sensor data in need of analysis. Applying deep learning to these domains has been an important topic of research. The Long-Short Term Memory (LSTM) network has been proven to be well suited for dealing with and predicting important events with long intervals and delays in the time series. LSTM networks have the ability to maintain long-term memory. In an LSTM network, a stacked LSTM hidden layer also makes it possible to learn a high-level temporal feature without the fine tuning and preprocessing that would be required by other techniques. In this paper, we construct a long-short term memory (LSTM) recurrent neural network structure and use the normal time-series training set to build the prediction model. We then use the prediction error from this model to construct a Gaussian naive Bayes model that detects whether the original sample is abnormal. This method is called LSTM-Gauss-NBayes for short. We use three real-world data sets, each of which involves long-term time dependence, short-term time dependence, or even very weak time dependence. The experimental results show that LSTM-Gauss-NBayes is an effective and robust model.
[ 1, 0, 0, 0, 0, 0 ]
Title: Viscous Dissipation in One-Dimensional Quantum Liquids, Abstract: We develop a theory of viscous dissipation in one-dimensional single-component quantum liquids at low temperatures. Such liquids are characterized by a single viscosity coefficient, the bulk viscosity. We show that for a generic interaction between the constituent particles this viscosity diverges in the zero-temperature limit. In the special case of integrable models, the viscosity is infinite at any temperature, which can be interpreted as a breakdown of the hydrodynamic description. Our consideration is applicable to all single-component Galilean-invariant one-dimensional quantum liquids, regardless of the statistics of the constituent particles and the interaction strength.
[ 0, 1, 0, 0, 0, 0 ]
Title: On the existence of homoclinic type solutions of inhomogenous Lagrangian systems, Abstract: We study the existence of homoclinic type solutions for second order Lagrangian systems of the type $\ddot{q}(t)-q(t)+a(t)\nabla G(q(t))=f(t)$, where $t\in\mathbb{R}$, $q\in\mathbb{R}^n$, $a\colon\mathbb{R}\to\mathbb{R}$ is a continuous positive bounded function, $G\colon\mathbb{R}^n\to\mathbb{R}$ is a $C^1$-smooth potential satisfying the Ambrosetti-Rabinowitz superquadratic growth condition and $f\colon\mathbb{R}\to\mathbb{R}^n$ is a continuous bounded square integrable forcing term. A homoclinic type solution is obtained as limit of $2k$-periodic solutions of an approximative sequence of second order differential equations.
[ 0, 0, 1, 0, 0, 0 ]
Title: Reactive User Behavior and Mobility Models, Abstract: In this paper, we present a set of simulation models to more realistically mimic the behaviour of users reading messages. We propose a User Behaviour Model in which a simulated user reacts to a message with a flexible set of possible reactions (e.g. ignore, read, like, save, etc.) and a mobility-based reaction (visit a place, run away from danger, etc.). We describe our models and their implementation in OMNeT++. We strongly believe that these models will significantly contribute to the state of the art of realistically simulating opportunistic networks.
[ 1, 0, 0, 0, 0, 0 ]
Title: The Minimal Resolution Conjecture on a general quartic surface in $\mathbb P^3$, Abstract: Mustaţă has given a conjecture for the graded Betti numbers in the minimal free resolution of the ideal of a general set of points on an irreducible projective algebraic variety. For surfaces in $\mathbb P^3$ this conjecture has been proven for points on quadric surfaces and on general cubic surfaces. In the latter case, Gorenstein liaison was the main tool. Here we prove the conjecture for general quartic surfaces. Gorenstein liaison continues to be a central tool, but to prove the existence of our links we make use of certain dimension computations. We also discuss the higher degree case, but now the dimension count does not force the existence of our links.
[ 0, 0, 1, 0, 0, 0 ]
Title: On Multilingual Training of Neural Dependency Parsers, Abstract: We show that a recently proposed neural dependency parser can be improved by joint training on multiple languages from the same family. The parser is implemented as a deep neural network whose only input is orthographic representations of words. In order to successfully parse, the network has to discover how linguistically relevant concepts can be inferred from word spellings. We analyze the representations of characters and words that are learned by the network to establish which properties of languages were accounted for. In particular we show that the parser has approximately learned to associate Latin characters with their Cyrillic counterparts and that it can group Polish and Russian words that have a similar grammatical function. Finally, we evaluate the parser on selected languages from the Universal Dependencies dataset and show that it is competitive with other recently proposed state-of-the-art methods, while having a simple structure.
[ 1, 0, 0, 0, 0, 0 ]
Title: Inside-Out Planet Formation. IV. Pebble Evolution and Planet Formation Timescales, Abstract: Systems with tightly-packed inner planets (STIPs) are very common. Chatterjee & Tan proposed Inside-Out Planet Formation (IOPF), an in situ formation theory, to explain these planets. IOPF involves sequential planet formation from pebble-rich rings that are fed from the outer disk and trapped at the pressure maximum associated with the dead zone inner boundary (DZIB). Planet masses are set by their ability to open a gap and cause the DZIB to retreat outwards. We present models for the disk density and temperature structures that are relevant to the conditions of IOPF. For a wide range of DZIB conditions, we evaluate the gap opening masses of planets in these disks that are expected to lead to truncation of pebble accretion onto the forming planet. We then consider the evolution of dust and pebbles in the disk, estimating that pebbles typically grow to sizes of a few cm during their radial drift from several tens of AU to the inner, $\lesssim1\:$AU-scale disk. A large fraction of the accretion flux of solids is expected to be in such pebbles. This allows us to estimate the timescales for individual planet formation and entire planetary system formation in the IOPF scenario. We find that to produce realistic STIPs within reasonable timescales similar to disk lifetimes requires disk accretion rates of $\sim10^{-9}\:M_\odot\:{\rm yr}^{-1}$ and relatively low viscosity conditions in the DZIB region, i.e., Shakura-Sunyaev parameter of $\alpha\sim10^{-4}$.
[ 0, 1, 0, 0, 0, 0 ]
Title: Bayesian Nonparametric Unmixing of Hyperspectral Images, Abstract: Hyperspectral imaging is an important tool in remote sensing, allowing for accurate analysis of vast areas. Due to a low spatial resolution, a pixel of a hyperspectral image rarely represents a single material, but rather a mixture of different spectra. Hyperspectral unmixing (HSU) aims at estimating the pure spectra present in the scene of interest, referred to as endmembers, and their fractions in each pixel, referred to as abundances. Today, many HSU algorithms have been proposed, based either on a geometrical or a statistical model. While most methods assume that the number of endmembers present in the scene is known, there is only little work on estimating this number from the observed data. In this work, we propose a Bayesian nonparametric framework that jointly estimates the number of endmembers, the endmembers themselves, and their abundances, by making use of the Indian Buffet Process as a prior for the endmembers. Simulation results and experiments on real data demonstrate the effectiveness of the proposed algorithm, yielding results comparable with state-of-the-art methods while being able to reliably infer the number of endmembers. In scenarios with strong noise, where other algorithms provide only poor results, the proposed approach tends to slightly overestimate the number of endmembers. The additional endmembers, however, often simply represent noisy replicas of present endmembers and could easily be merged in a post-processing step.
[ 1, 0, 0, 0, 0, 0 ]
Title: Dynamical control of electron-phonon interactions with high-frequency light, Abstract: This work addresses the one-dimensional problem of Bloch electrons when they are rapidly driven by a homogeneous time-periodic light and linearly coupled to vibrational modes. Starting from a generic time-periodic electron-phonon Hamiltonian, we derive a time-independent effective Hamiltonian that describes the stroboscopic dynamics up to the third order in the high-frequency limit. This yields nonequilibrium corrections to the electron-phonon coupling that are controllable dynamically via the driving strength. This shows in particular that local Holstein interactions in equilibrium are corrected by nonlocal Peierls interactions out of equilibrium, as well as by phonon-assisted hopping processes that make the dynamical Wannier-Stark localization of Bloch electrons impossible. Subsequently, we revisit the Holstein polaron problem out of equilibrium in terms of effective Green functions, and specify explicitly how the binding energy and effective mass of the polaron can be controlled dynamically. These tunable properties are reported within the weak- and strong-coupling regimes since both can be visited within the same material when varying the driving strength. This work provides some insight into controllable microscopic mechanisms that may be involved during the multicycle laser irradiations of organic molecular crystals in ultrafast pump-probe experiments, although it should also be suitable for realizations in shaken optical lattices of ultracold atoms.
[ 0, 1, 0, 0, 0, 0 ]