Dataset schema: text (string, lengths 57 to 2.88k); labels (sequence of length 6).
Title: Rectangular Photonic Crystal Nanobeam Cavities in Bulk Diamond, Abstract: We demonstrate the fabrication of photonic crystal nanobeam cavities with rectangular cross section into bulk diamond. In simulation, these cavities have an unloaded quality factor (Q) of over 1 million. Measured cavity resonances show fundamental modes with spectrometer-limited quality factors larger than 14,000 within 1 nm of the NV center's zero phonon line at 637 nm. We find high cavity yield across the full diamond chip, with deterministic resonance trends across the fabricated parameter sweeps.
[ 0, 1, 0, 0, 0, 0 ]
Title: Accelerated Primal-Dual Proximal Block Coordinate Updating Methods for Constrained Convex Optimization, Abstract: Block Coordinate Update (BCU) methods enjoy low per-update computational complexity because only one or a few block variables need to be updated at a time, among a possibly large number of blocks. They are also easily parallelized and have thus been particularly popular for solving problems involving large-scale datasets and/or variables. In this paper, we propose a primal-dual BCU method for solving linearly constrained convex programs in multi-block variables. The method is an accelerated version of a primal-dual algorithm proposed by the authors, which applies randomization in selecting the block variables to update and establishes an $O(1/t)$ convergence rate under a weak convexity assumption. We show that the rate can be accelerated to $O(1/t^2)$ if the objective is strongly convex. In addition, if one block variable is independent of the others in the objective, we show that the algorithm can be modified to achieve a linear rate of convergence. Numerical experiments show that the accelerated method performs stably with a single set of parameters, while the original method needs to tune the parameters for different datasets in order to achieve a comparable level of performance.
[ 1, 0, 1, 1, 0, 0 ]
Title: Weakly supervised training of deep convolutional neural networks for overhead pedestrian localization in depth fields, Abstract: Overhead depth map measurements capture a sufficient amount of information to enable human experts to track pedestrians accurately. However, fully automating this process using image analysis algorithms can be challenging. Even though hand-crafted image analysis algorithms are successful in many common cases, they fail frequently when there are complex interactions of multiple objects in the image. Many of the assumptions underpinning the hand-crafted solutions do not hold in these cases, and the multitude of exceptions is hard to model precisely. Deep Learning (DL) algorithms, on the other hand, do not require hand-crafted solutions and are the current state of the art in object localization in images. However, they require an exceedingly large amount of annotated data to produce successful models. In the case of object localization, these annotations are difficult and time consuming to produce. In this work we present an approach for developing pedestrian localization models using DL algorithms with efficient weak supervision from an expert. We circumvent the need to annotate a large corpus of data by annotating only a small number of patches and relying on synthetic data augmentation as a vehicle for injecting expert knowledge into the model training. This approach of weak supervision through expert selection of representative patches, suitable transformations, and synthetic data augmentations enables us to develop DL models for pedestrian localization efficiently.
[ 1, 1, 0, 0, 0, 0 ]
Title: Absence of magnetic long range order in Ba$_3$ZnRu$_2$O$_9$: A spin-liquid candidate in the $S=3/2$ dimer lattice, Abstract: We have discovered a novel candidate for a spin-liquid state in a ruthenium oxide composed of dimers of $S = $ 3/2 spins of Ru$^{5+}$, Ba$_3$ZnRu$_2$O$_9$. This compound lacks long-range order down to 37 mK, a temperature 5000 times lower than the magnetic interaction scale of around 200 K. Partial substitution for Zn can continuously vary the magnetic ground state from an antiferromagnetic order to a spin-gapped state through the liquid state. This indicates that the spin-liquid state emerges from a delicate balance of inter- and intra-dimer interactions, and that the spin state of the dimer plays a vital role. This unique feature should realize a new type of quantum magnetism.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deforming Representations of SL(2,R), Abstract: The spherical principal series representations $\pi(\nu)$ of SL(2,$\mathbb R$) form a family of infinite dimensional representations parametrized by $\nu\in\mathbb C$. The representation $\pi(\nu)$ is irreducible unless $\nu$ is an odd integer, in which case it is indecomposable. We find a new continuous family of representations $\Pi(\nu)$ such that $\pi(\nu)$ and $\Pi(\nu)$ have the same composition factors, and $\Pi(\nu)$ is completely reducible, for all $\nu$. We also describe a connection between this construction and families of invariant Hermitian forms on the representations.
[ 0, 0, 1, 0, 0, 0 ]
Title: 3D Reconstruction & Assessment Framework based on affordable 2D Lidar, Abstract: Lidar is extensively used in industry and the mass market. Owing to its measurement accuracy and its insensitivity to illumination compared with cameras, it is applied in a broad range of applications, such as geodetic engineering, self-driving cars, and virtual reality. However, multi-beam 3D Lidar is very expensive, and its massive measurement data cannot be fully leveraged on some constrained platforms. The purpose of this paper is to explore the possibility of using cheap, off-the-shelf 2D Lidar to perform complex 3D reconstruction; the quality of the generated 3D map is then evaluated by our proposed metrics. The 3D map is constructed in two ways: in the first, the scan is performed at known positions using an external rotary axis in another plane; in the second, one 2D Lidar for mapping and another for localization are placed on a trolley that is pushed across the ground arbitrarily. The maps generated by the different approaches are uniformly converted to octomaps before evaluation, and the similarity and differences between two maps are evaluated thoroughly by the proposed metrics. The whole mapping system is composed of several modular components: a 3D bracket was made to assemble the long-range Lidar, its driver, and the motor, and a cover platform was made for the IMU and a shorter-range but highly accurate 2D Lidar. The software is organized into separate ROS packages.
[ 1, 0, 0, 0, 0, 0 ]
Title: Probabilistic Relational Reasoning via Metrics, Abstract: The Fuzz programming language [Reed and Pierce, 2010] uses an elegant linear type system combined with a monad-like type to express and reason about probabilistic sensitivity properties, most notably $\epsilon$-differential privacy. We show how to extend Fuzz to capture more general relational properties of probabilistic programs, with approximate, or $(\epsilon, \delta)$-differential privacy serving as a leading example. Our technical contributions are threefold. First, we introduce the categorical notion of comonadic lifting of a monad to model composition properties of probabilistic divergences. Then, we show how to express relational properties in terms of sensitivity properties via an adjunction we call the path construction. Finally, we instantiate our semantics to model the terminating fragment of Fuzz extended with types carrying information about other divergences between distributions.
[ 1, 0, 0, 0, 0, 0 ]
Title: Upper bounds for constant slope $p$-adic families of modular forms, Abstract: We study $p$-adic families of eigenforms for which the $p$-th Hecke eigenvalue $a_p$ has constant $p$-adic valuation ("constant slope families"). We prove two separate upper bounds for the size of such families. The first is in terms of the logarithmic derivative of $a_p$ while the second depends only on the slope of the family. We also investigate the numerical relationship between our results and the former Gouvêa--Mazur conjecture.
[ 0, 0, 1, 0, 0, 0 ]
Title: Size dependence of the surface tension of a free surface of an isotropic fluid, Abstract: We report on the size dependence of the surface tension of a free surface of an isotropic fluid. The size dependence of the surface tension is evaluated based on the Gibbs-Tolman-Koenig-Buff equation for positive and negative values of the curvature and the Tolman length. For all combinations of signs of the curvature and the Tolman length, we obtain a continuous function, avoiding the previously existing discontinuity at zero curvature (flat interfaces). As an example, a water droplet in thermodynamic equilibrium with its vapor is analyzed in detail. The size dependence of the surface tension and the Tolman length are evaluated using experimental data of the International Association for the Properties of Water and Steam. The Tolman length evaluated with our approach is in good agreement with molecular dynamics and experimental data.
[ 0, 1, 0, 0, 0, 0 ]
Title: Bayesian Belief Updating of Spatiotemporal Seizure Dynamics, Abstract: Epileptic seizure activity shows complicated dynamics in both space and time. To understand the evolution and propagation of seizures, spatially extended sets of data need to be analysed. We have previously described an efficient filtering scheme using variational Laplace that can be used in the Dynamic Causal Modelling (DCM) framework [Friston, 2003] to estimate the temporal dynamics of seizures recorded using either invasive or non-invasive electrical recordings (EEG/ECoG). Spatiotemporal dynamics are modelled using a partial differential equation -- in contrast to the ordinary differential equation used in our previous work on temporal estimation of seizure dynamics [Cooray, 2016]. We provide the requisite theoretical background for the method and test the ensuing scheme on simulated seizure activity data and empirical invasive ECoG data. The method provides a framework to assimilate the spatial and temporal dynamics of seizure activity, an aspect of great physiological and clinical importance.
[ 1, 0, 0, 1, 0, 0 ]
Title: Split, Send, Reassemble: A Formal Specification of a CAN Bus Protocol Stack, Abstract: We present a formal model for a fragmentation and a reassembly protocol running on top of the standardised CAN bus, which is widely used in automotive and aerospace applications. Although the CAN bus comes with an in-built mechanism for prioritisation, we argue that this is not sufficient and provide another protocol to overcome this shortcoming.
[ 1, 0, 0, 0, 0, 0 ]
Title: An extinction free AGN selection by 18-band SED fitting in mid-infrared in the AKARI NEP deep field, Abstract: We have developed an efficient Active Galactic Nucleus (AGN) selection method using 18-band Spectral Energy Distribution (SED) fitting in the mid-infrared (mid-IR). AGNs are often obscured by gas and dust, and those obscured AGNs tend to be missed in optical, UV and soft X-ray observations. Mid-IR light can help us to recover them in an obscuration-free way using their thermal emission. On the other hand, Star-Forming Galaxies (SFG) also have strong PAH emission features in the mid-IR. Hence, establishing an accurate method to separate the AGN and SFG populations is important. However, in previous mid-IR surveys, only 3 or 4 filters were available, and thus the selection was limited. We combined AKARI's continuous 9 mid-IR bands with WISE and Spitzer data to create 18 mid-IR bands for AGN selection. Among 4682 galaxies in the AKARI NEP deep field, 1388 are selected to be AGN hosts, which implies an AGN fraction of 29.6$\pm$0.8$\%$ (among them 47$\%$ are Seyfert 1.8 and 2). Placing the results of the SED fitting onto WISE and Spitzer colour-colour diagrams reveals that Seyferts are often missed by previous studies. Our results have been tested by stacking the median magnitudes of each sample. Using X-ray data from Chandra, we compared the results of our SED fitting with WISE's colour-box selection and recovered 20$\%$ more X-ray-detected AGN than previous methods.
[ 0, 1, 0, 0, 0, 0 ]
Title: Spatial point processes intensity estimation with a diverging number of covariates, Abstract: Feature selection procedures for parametric intensity estimation of spatial point processes have recently been developed, since more and more applications involve a large number of covariates. In this paper, we investigate the setting where the number of covariates diverges as the domain of observation increases. In particular, we consider estimating equations based on Campbell theorems derived from Poisson and logistic regression likelihoods, regularized by a general penalty function. We prove that, under some conditions, consistency, sparsity, and asymptotic normality hold in this setting. We support the theoretical results with numerical ones obtained from simulation experiments and an application to forestry datasets.
[ 0, 0, 1, 1, 0, 0 ]
Title: A novel methodology on distributed representations of proteins using their interacting ligands, Abstract: The effective representation of proteins is a crucial task that directly affects the performance of many bioinformatics problems. Related proteins usually bind to similar ligands. Chemical characteristics of ligands are known to capture the functional and mechanistic properties of proteins, suggesting that a ligand-based approach can be utilized in protein representation. In this study, we propose SMILESVec, a SMILES-based method to represent ligands, and a novel method to compute the similarity of proteins by describing them based on their ligands. The proteins are defined utilizing the word embeddings of the SMILES strings of their ligands. The performance of the proposed protein description method is evaluated in a protein clustering task using the TransClust and MCL algorithms. Two other protein representation methods that utilize protein sequence, BLAST and ProtVec, and two compound-fingerprint-based protein representation methods are used for comparison. We show that ligand-based protein representation, which uses only the SMILES strings of the ligands that proteins bind to, performs as well as protein-sequence-based representation methods in protein clustering. The results suggest that ligand-based protein description can be an alternative to the traditional sequence- or structure-based representation of proteins, and that this novel approach can be applied to different bioinformatics problems such as the prediction of new protein-ligand interactions and protein function annotation.
[ 0, 0, 0, 1, 1, 0 ]
Title: Distributed Robust Set-Invariance for Interconnected Linear Systems, Abstract: We introduce a class of distributed control policies for networks of discrete-time linear systems with polytopic additive disturbances. The objective is to restrict the network-level state and controls to user-specified polyhedral sets for all times. This problem arises in many safety-critical applications. We consider two problems. First, given a communication graph characterizing the structure of the information flow in the network, we find the optimal distributed control policy by solving a single linear program. Second, we find the sparsest communication graph required for the existence of a distributed invariance-inducing control policy. Illustrative examples, including one on platooning, are presented.
[ 1, 0, 0, 0, 0, 0 ]
Title: Liouville's theorem and comparison results for solutions of degenerate elliptic equations in exterior domains, Abstract: A version of Liouville's theorem is proved for solutions of some degenerate elliptic equations defined in $\mathbb{R}^n\backslash K$, where $K$ is a compact set, provided the structure of this equation and the dimension $n$ are related. This result is a correction of a previous one established by Serrin, since some additional hypotheses are necessary. Theoretical and numerical examples are given. Furthermore, a comparison result and the uniqueness of solution are obtained for such equations in exterior domains.
[ 0, 0, 1, 0, 0, 0 ]
Title: An Optimal Control Problem for the Steady Nonhomogeneous Asymmetric Fluids, Abstract: We study an optimal boundary control problem for the two-dimensional stationary micropolar fluids system with variable density. We control the system by considering boundary controls, for the velocity vector and angular velocity of rotation of particles, on parts of the boundary of the flow domain. On the remaining part of the boundary, we consider mixed boundary conditions for the vector velocity (Dirichlet and Navier conditions) and Dirichlet boundary conditions for the angular velocity. We analyze the existence of a weak solution obtaining the fluid density as a scalar function of the stream function. We prove the existence of an optimal solution and, by using the Lagrange multipliers theorem, we state first-order optimality conditions. We also derive, through a penalty method, some optimality conditions satisfied by the optimal controls.
[ 0, 0, 1, 0, 0, 0 ]
Title: A thermodynamic view of dusty protoplanetary disks, Abstract: Small solids embedded in gaseous protoplanetary disks are subject to strong dust-gas friction. Consequently, tightly-coupled dust particles almost follow the gas flow. This near conservation of dust-to-gas ratio along streamlines is analogous to the near conservation of entropy along flows of (dust-free) gas with weak heating and cooling. We develop this thermodynamic analogy into a framework to study dusty gas dynamics in protoplanetary disks. We show that an isothermal dusty gas behaves like an adiabatic pure gas; and that finite dust-gas coupling may be regarded as an effective heating/cooling. We exploit this correspondence to deduce that 1) perfectly coupled, thin dust layers cannot cause axisymmetric instabilities; 2) radial dust edges are unstable if the dust is vertically well-mixed; 3) the streaming instability necessarily involves a gas pressure response that lags behind dust density; 4) dust-loading introduces buoyancy forces that generally stabilize the vertical shear instability associated with global radial temperature gradients. We also discuss dusty analogs of other hydrodynamic processes (e.g. Rossby wave instability, convective overstability, and zombie vortices), and how to simulate dusty protoplanetary disks with minor tweaks to existing codes for pure gas dynamics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Stable determination of a Lamé coefficient by one internal measurement of displacement, Abstract: In this paper we show that the shear modulus $\mu$ of an isotropic elastic body can be stably recovered by the knowledge of one single displacement field whose boundary data can be assigned independently of the unknown elasticity tensor.
[ 0, 0, 1, 0, 0, 0 ]
Title: Mordell-Weil Groups of Linear Systems and the Hitchin Fibration, Abstract: In this paper, we study rational sections of the relative Picard scheme of a linear system on a smooth projective variety. We prove that if the linear system is basepoint-free and the locus of non-integral divisors has codimension at least two, then all rational sections of the relative Picard scheme come from restrictions of line bundles on the variety. As a consequence, we describe the group of sections of the Hitchin fibration for moduli spaces of Higgs bundles on curves.
[ 0, 0, 1, 0, 0, 0 ]
Title: Subgradients of Minimal Time Functions without Calmness, Abstract: In recent years there has been great interest in variational analysis of a class of nonsmooth functions called the minimal time function. In this paper we continue this line of research by providing new results on generalized differentiation of this class of functions, relaxing assumptions imposed on the functions and sets involved for the results. In particular, we focus on the singular subdifferential and the limiting subdifferential of this class of functions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Multiple Kernel Learning and Automatic Subspace Relevance Determination for High-dimensional Neuroimaging Data, Abstract: Alzheimer's disease is a major cause of dementia. Its diagnosis requires accurate biomarkers that are sensitive to disease stages. In this respect, we regard probabilistic classification as a method of designing a probabilistic biomarker for disease staging. Probabilistic biomarkers naturally support the interpretation of decisions and evaluation of uncertainty associated with them. In this paper, we obtain probabilistic biomarkers via Gaussian Processes. Gaussian Processes enable probabilistic kernel machines that offer flexible means to accomplish Multiple Kernel Learning. Exploiting this flexibility, we propose a new variation of Automatic Relevance Determination and tackle the challenges of high dimensionality through multiple kernels. Our research results demonstrate that the Gaussian Process models are competitive with or better than the well-known Support Vector Machine in terms of classification performance even in the cases of single kernel learning. Extending the basic scheme towards the Multiple Kernel Learning, we improve the efficacy of the Gaussian Process models and their interpretability in terms of the known anatomical correlates of the disease. For instance, the disease pathology starts in and around the hippocampus and entorhinal cortex. Through the use of Gaussian Processes and Multiple Kernel Learning, we have automatically and efficiently determined those portions of neuroimaging data. In addition to their interpretability, our Gaussian Process models are competitive with recent deep learning solutions under similar settings.
[ 1, 0, 0, 1, 0, 0 ]
Title: Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems, Abstract: For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
[ 0, 1, 1, 0, 0, 0 ]
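To make the auxiliary-function bound from the abstract above concrete, here is the standard one-line derivation (a sketch, writing the dynamics as $\dot x = F(x)$ and letting $V$ be any differentiable auxiliary function): because $[V(x(T)) - V(x(0))]/T \to 0$ on bounded trajectories, adding $\frac{d}{dt}V$ to $f$ does not change its long-time average, so

\[
\overline{f} \;=\; \lim_{T\to\infty}\frac{1}{T}\int_0^T \Big[ f(x(t)) + \nabla V(x(t))\cdot F(x(t)) \Big]\,dt \;\le\; \sup_{x}\ \big[ f(x) + \nabla V(x)\cdot F(x) \big].
\]

Minimizing the right-hand side over $V$ is the convex problem referred to in the abstract; for polynomial $f$ and $F$, replacing the pointwise supremum constraint by a sum-of-squares certificate yields the semidefinite program used for the Lorenz example.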
Title: Unsupervised Learning with Stein's Unbiased Risk Estimator, Abstract: Learning from unlabeled and noisy data is one of the grand challenges of machine learning. As such, it has seen a flurry of research with new ideas proposed continuously. In this work, we revisit a classical idea: Stein's Unbiased Risk Estimator (SURE). We show that, in the context of image recovery, SURE and its generalizations can be used to train convolutional neural networks (CNNs) for a range of image denoising and recovery problems {\em without any ground truth data.} Specifically, our goal is to reconstruct an image $x$ from a {\em noisy} linear transformation (measurement) of the image. We consider two scenarios: one where no additional data is available and one where we have measurements of other images that are drawn from the same noisy distribution as $x$, but have no access to the clean images. Such is the case, for instance, in the context of medical imaging, microscopy, and astronomy, where noise-less ground truth data is rarely available. We show that in this situation, SURE can be used to estimate the mean-squared-error loss associated with an estimate of $x$. Using this estimate of the loss, we train networks to perform denoising and compressed sensing recovery. In addition, we use the SURE framework to partially explain and improve upon an intriguing result presented by Ulyanov et al. in "Deep Image Prior": that a network initialized with random weights and fit to a single noisy image can effectively denoise that image.
[ 0, 0, 0, 1, 0, 0 ]
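To illustrate how SURE substitutes for the inaccessible ground-truth MSE in the abstract above, here is a minimal Monte Carlo SURE sketch for Gaussian denoising (the single-probe divergence estimator is standard MC-SURE practice and an assumption of this sketch, not a detail taken from the paper):

```python
import numpy as np

def mc_sure(f, y, sigma, delta=1e-3, rng=np.random.default_rng(0)):
    """Monte Carlo SURE for a denoiser f applied to y = x + N(0, sigma^2 I).

    Returns an unbiased estimate of the per-pixel MSE E||f(y) - x||^2 / n
    without any access to the clean signal x."""
    n = y.size
    eps = rng.standard_normal(y.shape)
    # randomized finite-difference estimate of the divergence of f at y
    div = eps.ravel() @ (f(y + delta * eps) - f(y)).ravel() / delta
    return np.sum((y - f(y)) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# sanity check: for the identity "denoiser" the estimate is ~sigma^2,
# which is indeed the true MSE of returning y unchanged
y = np.random.default_rng(1).standard_normal(10000)
print(mc_sure(lambda v: v, y, sigma=1.0))
```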
Title: Cellular function given parametric variation: excitability in the Hodgkin-Huxley model, Abstract: How is reliable physiological function maintained in cells despite considerable variability in the values of key parameters of multiple interacting processes that govern that function? Here we use the classic Hodgkin-Huxley formulation of the squid giant axon action potential to propose a possible approach to this problem. Although the full Hodgkin-Huxley model is very sensitive to fluctuations that independently occur in its many parameters, the outcome is in fact determined by simple combinations of these parameters along two physiological dimensions: Structural and Kinetic (denoted $S$ and $K$). Structural parameters describe the properties of the cell, including its capacitance and the densities of its ion channels. Kinetic parameters are those that describe the opening and closing of the voltage-dependent conductances. The impacts of parametric fluctuations on the dynamics of the system, seemingly complex in the high dimensional representation of the Hodgkin-Huxley model, are tractable when examined within the $S-K$ plane. We demonstrate that slow inactivation, a ubiquitous activity-dependent feature of ionic channels, is a powerful local homeostatic control mechanism that stabilizes excitability amid changes in structural and kinetic parameters.
[ 0, 0, 0, 0, 1, 0 ]
Title: P4K: A Formal Semantics of P4 and Applications, Abstract: Programmable packet processors and P4 as a programming language for such devices have gained significant interest, because their flexibility enables rapid development of a diverse set of applications that work at line rate. However, this flexibility, combined with the complexity of devices and networks, increases the chance of introducing subtle bugs that are hard to discover manually. Worse, this is a domain where bugs can have catastrophic consequences, yet formal analysis tools for P4 programs / networks are missing. We argue that formal analysis tools must be based on a formal semantics of the target language, rather than on its informal specification. To this end, we provide an executable formal semantics of the P4 language in the K framework. Based on this semantics, K provides an interpreter and various analysis tools including a symbolic model checker and a deductive program verifier for P4. This paper overviews our formal K semantics of P4, as well as several P4 language design issues that we found during our formalization process. We also discuss some applications resulting from the tools provided by K for P4 programmers and network administrators as well as language designers and compiler developers, such as detection of unportable code, state space exploration of P4 programs and of networks, bug finding using symbolic execution, data plane verification, program verification, and translation validation.
[ 1, 0, 0, 0, 0, 0 ]
Title: Global aspects of polarization optics and coset space geometry, Abstract: We use group theoretic ideas and coset space methods to deal with problems in polarization optics of a global nature. These include the possibility of a globally smooth phase convention for electric fields for all points on the Poincaré sphere, and a similar possibility of real or complex bases of transverse electric vectors for all possible propagation directions. It is shown that these methods help in understanding some known results in an effective manner, and in answering new questions as well. We find that apart from the groups $SU(2)$ and $SO(3)$ which occur naturally in these problems, the group $SU(3)$ also plays an important role.
[ 0, 1, 0, 0, 0, 0 ]
Title: Nonlinear Large Deviations: Beyond the Hypercube, Abstract: We present a framework to calculate large deviations for nonlinear functions of independent random variables supported on compact sets in Banach spaces, extending the result of Chatterjee and Dembo [6]. Previous research on nonlinear large deviations has focused only on random variables supported on $\{-1,+1\}^{n}$, a small subset of the random objects usually studied, so it is of natural interest to develop the corresponding theory for random variables with general distributions. Since our results put fewer constraints on the random variables, they have considerable flexibility in application. To show this, we provide examples with continuous and high-dimensional random variables. Our framework can also be used to verify the mathematical rigor of the mean field approximation method; to demonstrate, we verify the mean field approximation for a class of spin vector models.
[ 0, 0, 1, 0, 0, 0 ]
Title: Force and torque of a string on a pulley, Abstract: Every university introductory physics course considers the problem of Atwood's machine taking into account the mass of the pulley. In the usual treatment the tensions at the two ends of the string are offhandedly taken to act on the pulley and be responsible for its rotation. However such a free-body diagram of the forces on the pulley is not {\it a priori} justified, inducing students to construct wrong hypotheses such as that the string transfers its tension to the pulley or that some symmetry is in operation. We reexamine this problem by integrating the contact forces between each element of the string and the pulley and show that although the pulley does behave as if the tensions were acting on it, this comes only as the end result of a detailed analysis. We also address the question of how much friction is needed to prevent the string from slipping over the pulley. Finally, we deal with the case in which the string is on the verge of sliding and show that this will never happen unless certain conditions are met by the coefficient of friction and the masses involved.
[ 0, 1, 0, 0, 0, 0 ]
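A compressed version of the integration argument in the abstract above, as a sketch: for a massless string in contact with a pulley of radius $R$, tangential equilibrium of each string element gives $dT = df$, where $df$ is the friction exerted by the pulley on that element. Summing the reaction torques about the axle over the contact arc,

\[
\tau \;=\; \int R\, df \;=\; R \int_{T_1}^{T_2} dT \;=\; R\,(T_2 - T_1),
\]

so the pulley rotates exactly as if the two end tensions acted directly on its rim, even though they act only on the string.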
Title: Imagining Probabilistic Belief Change as Imaging (Technical Report), Abstract: Imaging is a form of probabilistic belief change which could be employed for both revision and update. In this paper, we propose a new framework for probabilistic belief change based on imaging, called Expected Distance Imaging (EDI). EDI is sufficiently general to define Bayesian conditioning and other forms of imaging previously defined in the literature. We argue that, and investigate how, EDI can be used for both revision and update. EDI's definition depends crucially on a weight function whose properties are studied and whose effect on belief change operations is analysed. Finally, four EDI instantiations are proposed, two for revision and two for update, and probabilistic rationality postulates are suggested for their analysis.
[ 1, 0, 0, 0, 0, 0 ]
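For readers unfamiliar with imaging, a minimal sketch of classical Lewis-style imaging over a finite set of worlds follows; EDI generalizes this by weighting mass transfer via a weight function over distances, and the tie-breaking to the first closest world here is an illustrative simplification:

```python
def image(P, worlds, satisfies_A, dist):
    """Lewis imaging on new information A: each world transfers its whole
    probability mass to its closest A-world (ties broken by list order)."""
    a_worlds = [w for w in worlds if satisfies_A(w)]
    Q = {w: 0.0 for w in worlds}
    for w, p in P.items():
        Q[min(a_worlds, key=lambda v: dist(w, v))] += p
    return Q

# toy example: worlds on a line, new information A = "world value is even"
worlds = [0, 1, 3, 4]
P = {0: 0.1, 1: 0.4, 3: 0.2, 4: 0.3}
print(image(P, worlds, lambda w: w % 2 == 0, lambda u, v: abs(u - v)))
# mass moves to the nearest even worlds: {0: 0.5, 1: 0.0, 3: 0.0, 4: 0.5}
```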
Title: Sparse Rational Function Interpolation with Finitely Many Values for the Coefficients, Abstract: In this paper, we give new sparse interpolation algorithms for black box univariate and multivariate rational functions h=f/g whose coefficients are integers with an upper bound. The main idea is as follows: choose a proper integer beta and let h(beta) = a/b with gcd(a,b)=1. Then f and g can be computed by solving the polynomial interpolation problems f(beta)=ka and g(beta)=kb for some integer k. It is shown that the univariate interpolation algorithm is almost optimal, while the multivariate interpolation algorithm has low complexity in T but data size exponential in n.
[ 1, 0, 0, 0, 0, 0 ]
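The evaluation-at-a-large-integer idea in the abstract above reduces, in the polynomial case, to base-beta digit recovery; the toy sketch below recovers bounded integer coefficients from a single black-box value (an illustration of the primitive, not the authors' full rational-function algorithm):

```python
def decode_coeffs(value, beta):
    """Recover integer coefficients c_i with |c_i| <= beta // 2 from
    value = sum_i c_i * beta**i, using balanced base-beta digits."""
    coeffs = []
    while value != 0:
        d = value % beta
        if d > beta // 2:          # fold into the balanced range (-beta/2, beta/2]
            d -= beta
        coeffs.append(d)
        value = (value - d) // beta
    return coeffs                  # coeffs[i] is the coefficient of x**i

# black-box f(x) = 3x^2 - 2x + 7 with coefficients bounded by 10: any beta > 2*10 works
f = lambda x: 3 * x ** 2 - 2 * x + 7
print(decode_coeffs(f(21), 21))    # [7, -2, 3]
```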
Title: Research Opportunities and Visions for Smart and Pervasive Health, Abstract: Improving the health of the nation's population and increasing the capabilities of the US healthcare system to support diagnosis, treatment, and prevention of disease is a critical national and societal priority. In the past decade, tremendous advances in expanding computing capabilities--sensors, data analytics, networks, advanced imaging, and cyber-physical systems--have, and will continue to, enhance healthcare and health research, with resulting improvements in health and wellness. However, the cost and complexity of healthcare continues to rise alongside the impact of poor health on productivity and quality of life. What is lacking are transformative capabilities that address significant health and healthcare trends: the growing demands and costs of chronic disease, the greater responsibility placed on patients and informal caregivers, and the increasing complexity of health challenges in the US, including mental health, that are deeply rooted in a person's social and environmental context.
[ 1, 0, 0, 0, 0, 0 ]
Title: Training L1-Regularized Models with Orthant-Wise Passive Descent Algorithms, Abstract: $L_1$-regularized models are widely used for sparse regression and classification tasks. In this paper, we propose the orthant-wise passive descent algorithm (OPDA) for optimizing $L_1$-regularized models, as an improved substitute for proximal algorithms, which are the standard tools for optimizing such models today. OPDA uses a stochastic variance-reduced gradient (SVRG) to initialize the descent direction, then applies a novel alignment operator that encourages each element to keep the same sign after one update, so that the parameter remains in the same orthant as before. It also explicitly suppresses the magnitude of each element to impose sparsity. A quasi-Newton update can be utilized to incorporate curvature information and accelerate convergence. We prove a linear convergence rate for OPDA on general smooth and strongly convex loss functions. Through experiments on $L_1$-regularized logistic regression and convolutional neural networks, we show that OPDA outperforms state-of-the-art stochastic proximal algorithms, implying a wide range of applications in training sparse models.
[ 1, 0, 0, 1, 0, 0 ]
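The sign-alignment step can be sketched as an orthant projection (a hypothetical helper in the spirit of orthant-wise methods; the paper's actual operator is applied together with the SVRG direction and the magnitude suppression):

```python
import numpy as np

def align_to_orthant(x_new, x_ref):
    """Zero out any coordinate of x_new whose sign disagrees with x_ref,
    keeping the iterate in the same orthant as the reference point."""
    return np.where(np.sign(x_new) == np.sign(x_ref), x_new, 0.0)
```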
Title: Zero-Delay Source-Channel Coding with a One-Bit ADC Front End and Correlated Side Information at the Receiver, Abstract: Zero-delay transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel is considered with a one-bit analog-to-digital converter (ADC) front end and correlated side information at the receiver. The design of the optimal encoder and decoder is studied for two performance criteria, namely, the mean squared error (MSE) distortion and the distortion outage probability (DOP), under an average power constraint on the channel input. For both criteria, necessary optimality conditions for the encoder and the decoder are derived. Using these conditions, it is observed that the numerically optimized encoder (NOE) under the MSE distortion criterion is periodic, and its period increases with the correlation between the source and the receiver side information. For the DOP, it is instead seen that the NOE mappings periodically take positive and negative values that decay to zero with increasing source magnitude, and that the interval over which the mapping takes non-zero values becomes wider with the correlation between the source and the side information.
[ 1, 0, 0, 0, 0, 0 ]
Title: Non-Oscillatory Pattern Learning for Non-Stationary Signals, Abstract: This paper proposes a novel non-oscillatory pattern (NOP) learning scheme for several oscillatory data analysis problems including signal decomposition, super-resolution, and signal sub-sampling. To the best of our knowledge, the proposed NOP is the first algorithm for these problems with fully non-stationary oscillatory data with close and crossover frequencies, and general oscillatory patterns. NOP is capable of handling complicated situations while existing algorithms fail; even in simple cases, e.g., stationary cases with trigonometric patterns, numerical examples show that NOP admits competitive or better performance in terms of accuracy and robustness than several state-of-the-art algorithms.
[ 0, 0, 0, 1, 0, 0 ]
Title: Analysis of Set-Valued Stochastic Approximations: Applications to Noisy Approximate Value and Fixed point Iterations, Abstract: The main aim of this paper is the development of Lyapunov function based sufficient conditions for stability (almost sure boundedness) and convergence of stochastic approximation algorithms (SAAs) with set-valued mean-fields, a class of model-free algorithms that have become important in recent times. In this paper we provide a complete analysis of such algorithms under three different, yet related, sets of sufficient conditions, based on the existence of an associated global/local Lyapunov function. Unlike previous Lyapunov function based approaches, we provide a simple recipe for explicitly constructing the Lyapunov function needed for analysis. Our work builds on the works of Abounadi, Bertsekas and Borkar (2002), Munos (2005) and Ramaswamy and Bhatnagar (2016). An important motivation for the flavor of our assumptions comes from the need to understand approximate dynamic programming and reinforcement learning algorithms that use deep neural networks (DNNs) for function approximations and parameterizations. These algorithms are popularly known as deep reinforcement learning algorithms. As an important application of our theory we provide a complete analysis of the stochastic approximation counterpart of approximate value iteration (AVI), an important dynamic programming method designed to tackle Bellman's curse of dimensionality. Although motivated by the need to understand deep reinforcement learning algorithms, our theory is more generally applicable. It is further used to develop the first SAA for finding fixed points of contractive set-valued maps and provide a comprehensive analysis of the same.
[ 1, 0, 0, 1, 0, 0 ]
Title: Predicting Surgery Duration with Neural Heteroscedastic Regression, Abstract: Scheduling surgeries is a challenging task due to the fundamental uncertainty of the clinical environment, as well as the risks and costs associated with under- and over-booking. We investigate neural regression algorithms to estimate the parameters of surgery case durations, focusing on the issue of heteroscedasticity. We seek to simultaneously estimate the duration of each surgery as well as a surgery-specific notion of our uncertainty about its duration. Estimating this uncertainty can lead to more nuanced and effective scheduling strategies, as we are able to schedule surgeries more efficiently while allowing an informed and case-specific margin of error. Using surgery records from a large United States health system, we demonstrate potential improvements on the order of 20% (in terms of minutes overbooked) compared to current scheduling techniques. Moreover, we demonstrate that surgery durations are indeed heteroscedastic. We show that models that estimate case-specific uncertainty better fit the data (in log-likelihood). Additionally, we show that the heteroscedastic predictions can more optimally trade off between over- and under-booking minutes, especially when idle minutes and scheduling collisions confer disparate costs.
[ 1, 0, 0, 1, 0, 0 ]
Title: Multichannel End-to-end Speech Recognition, Abstract: The field of speech recognition is in the midst of a paradigm shift: end-to-end neural networks are challenging the dominance of hidden Markov models as a core technology. Using an attention mechanism in a recurrent encoder-decoder architecture solves the dynamic time alignment problem, allowing joint end-to-end training of the acoustic and language modeling components. In this paper we extend the end-to-end framework to encompass microphone array signal processing for noise suppression and speech enhancement within the acoustic encoding network. This allows the beamforming components to be optimized jointly within the recognition architecture to improve the end-to-end speech recognition objective. Experiments on the noisy speech benchmarks (CHiME-4 and AMI) show that our multichannel end-to-end system outperformed the attention-based baseline with input from a conventional adaptive beamformer.
[ 1, 0, 0, 0, 0, 0 ]
Title: Universal Rules for Fooling Deep Neural Networks based Text Classification, Abstract: Recently, deep learning based natural language processing techniques have been used extensively to deal with spam mail, censorship evaluation in social networks, and other tasks. However, there are only a couple of works evaluating the vulnerabilities of such deep neural networks. Here, we go beyond attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and could therefore turn any text sample into an adversarial one. In fact, the universal rules use no information from the method itself (neither gradient information nor training-dataset information is used), making them black-box universal attacks. In other words, the universal rules are sample and method agnostic. By proposing a coevolutionary optimization algorithm, we show that it is possible to create universal rules that can automatically craft imperceptible adversarial samples (fewer than five perturbations, each close to a misspelling, are inserted into the text sample). A comparison with a random search algorithm further justifies the strength of the method. Thus, universal rules for fooling networks are shown to exist. Hopefully, the results from this work will impact the development of yet more sample- and model-agnostic attacks as well as their defenses, culminating in perhaps a new age for artificial intelligence.
[ 1, 0, 0, 1, 0, 0 ]
Title: Synthetic Observations of 21cm HI Line Profiles from Inhomogeneous Turbulent Interstellar HI Gas with Magnetic Field, Abstract: We carried out synthetic observations of interstellar atomic hydrogen at 21 cm wavelength by utilizing the magneto-hydrodynamical numerical simulations of the inhomogeneous turbulent interstellar medium (ISM) of Inoue and Inutsuka (2012). The cold neutral medium (CNM) shows a significantly clumpy distribution with a small volume filling factor of 3.5%, whereas the warm neutral medium (WNM) shows a distinctly different, smooth distribution with a large filling factor of 96.5%. In projection on the sky, the CNM exhibits a highly filamentary distribution with sub-pc width, whereas the WNM shows a smooth extended distribution. In HI optical depth the CNM is dominant and the contribution of the WNM is negligibly small. The CNM has an area covering factor of 30% in projection, while the WNM has a covering factor of 70%. Consequently, emission-absorption measurements toward compact radio continuum sources tend to sample the WNM with a probability of 70%, yielding smaller HI optical depths and smaller HI column densities than those of the bulk HI gas. The emission-absorption measurements, which are significantly affected by the small-scale large fluctuations of the CNM properties, are not suitable for characterizing the bulk HI gas. Larger-beam emission measurements, which are able to fully sample the HI gas, will provide a better tool for that purpose, if a reliable proxy for hydrogen column density, possibly dust optical depth or gamma rays, is available.
[ 0, 1, 0, 0, 0, 0 ]
Title: Clarifying Trust in Social Internet of Things, Abstract: A social approach can be exploited for the Internet of Things (IoT) to manage a large number of connected objects. These objects operate as autonomous agents to request and provide information and services to users. Establishing trustworthy relationships among the objects greatly improves the effectiveness of node interaction in the social IoT and helps nodes overcome perceptions of uncertainty and risk. However, there are limitations in the existing trust models. In this paper, a comprehensive model of trust is proposed that is tailored to the social IoT. The model includes ingredients such as trustor, trustee, goal, trustworthiness evaluation, decision, action, result, and context. Building on this trust model, we clarify the concept of trust in the social IoT in five aspects: (1) mutuality of trustor and trustee, (2) inferential transfer of trust, (3) transitivity of trust, (4) trustworthiness update, and (5) trustworthiness affected by a dynamic environment. With network connectivities taken from real-world social networks, a series of simulations is conducted to evaluate the performance of the social IoT operated with the proposed trust model. An experimental IoT network is used to further validate the proposed trust model.
[ 1, 0, 0, 0, 0, 0 ]
Title: The Leave-one-out Approach for Matrix Completion: Primal and Dual Analysis, Abstract: In this paper, we introduce a powerful technique, Leave-One-Out, to the analysis of low-rank matrix completion problems. Using this technique, we develop a general approach for obtaining fine-grained, entry-wise bounds on iterative stochastic procedures. We demonstrate the power of this approach in analyzing two of the most important algorithms for matrix completion: the non-convex approach based on Singular Value Projection (SVP), and the convex relaxation approach based on nuclear norm minimization (NNM). In particular, we prove for the first time that the original form of SVP, without re-sampling or sample splitting, converges linearly in the infinity norm. We further apply our leave-one-out approach to an iterative procedure that arises in the analysis of the dual solutions of NNM. Our results show that NNM recovers the true $ d $-by-$ d $ rank-$ r $ matrix with $\mathcal{O}(\mu^2 r^3d \log d )$ observed entries, which has optimal dependence on the dimension and is independent of the condition number of the matrix. To the best of our knowledge, this is the first sample complexity result for a tractable matrix completion algorithm that satisfies these two properties simultaneously.
[ 0, 0, 0, 1, 0, 0 ]
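For reference, the SVP iteration analyzed in the abstract above fits in a few lines (a generic sketch with a fixed step size; step-size schedules and stopping criteria are omitted):

```python
import numpy as np

def svp(M_obs, mask, r, eta=1.0, iters=200):
    """Singular Value Projection for matrix completion: a gradient step on
    the observed entries followed by projection onto the rank-r manifold.

    M_obs: observed matrix (zeros off-support); mask: 0/1 indicator of Omega."""
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(iters):
        Y = X + eta * mask * (M_obs - X)                 # gradient step on Omega
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]                  # best rank-r approximation
    return X
```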
Title: Weyl calculus with respect to the Gaussian measure and restricted $L^p$-$L^q$ boundedness of the Ornstein-Uhlenbeck semigroup in complex time, Abstract: In this paper, we introduce a Weyl functional calculus $a \mapsto a(Q,P)$ for the position and momentum operators $Q$ and $P$ associated with the Ornstein-Uhlenbeck operator $ L = -\Delta + x\cdot \nabla$, and give a simple criterion for restricted $L^p$-$L^q$ boundedness of operators in this functional calculus. The analysis of this non-commutative functional calculus is simpler than the analysis of the functional calculus of $L$. It allows us to recover, unify, and extend, old and new results concerning the boundedness of $\exp(-zL)$ as an operator from $L^p(\mathbb{R}^d,\gamma_{\alpha})$ to $L^q(\mathbb{R}^d,\gamma_{\beta})$ for suitable values of $z\in \mathbb{C}$ with $\Re z>0$, $p,q\in [1,\infty)$, and $\alpha,\beta>0$. Here, $\gamma_\tau$ denotes the centred Gaussian measure on $\mathbb{R}^d$ with density $(2\pi\tau)^{-d/2}\exp(-|x|^2/2\tau)$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Model-Based Value Estimation for Efficient Model-Free Reinforcement Learning, Abstract: Recent model-free reinforcement learning algorithms have proposed incorporating learned dynamics models as a source of additional data with the intention of reducing sample complexity. Such methods hold the promise of incorporating imagined data coupled with a notion of model uncertainty to accelerate the learning of continuous control tasks. Unfortunately, they rely on heuristics that limit usage of the dynamics model. We present model-based value expansion, which controls for uncertainty in the model by only allowing imagination to fixed depth. By enabling wider use of learned dynamics models within a model-free reinforcement learning algorithm, we improve value estimation, which, in turn, reduces the sample complexity of learning.
[ 0, 0, 0, 1, 0, 0 ]
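The fixed-depth mechanism is easy to state; the sketch below computes an H-step value-expansion target given black-box model, reward, policy, and value callables (the names and signatures are illustrative assumptions, not the paper's API):

```python
def mve_target(s, model, reward, policy, value, H, gamma):
    """H-step model-based value expansion: roll the learned dynamics model
    out to a fixed depth H, then bootstrap with the model-free value estimate."""
    total, discount = 0.0, 1.0
    for _ in range(H):
        a = policy(s)
        total += discount * reward(s, a)
        s = model(s, a)              # imagined next state from the learned model
        discount *= gamma
    return total + discount * value(s)
```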
Title: A new method of joint nonparametric estimation of probability density and its support, Abstract: In this paper we propose a new method for the joint nonparametric estimation of a probability density and its support. As is well known, the nonparametric kernel density estimator suffers from a "boundary bias problem" when the support of the population density is not the whole real line. To avoid the unknown boundary effects, our estimator detects the boundary and simultaneously eliminates the boundary bias of the estimator. Moreover, we describe an extension to a simple multivariate case and propose an improved estimator free from the unknown boundary bias.
[ 0, 0, 1, 1, 0, 0 ]
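For context on the boundary bias discussed above, the classical reflection estimator removes it when the boundary is known; the paper's contribution is to estimate the unknown boundary jointly, which this sketch does not attempt:

```python
import numpy as np

def reflection_kde(data, grid, h, lower=0.0):
    """Gaussian kernel density estimate with reflection at a known lower
    boundary -- the textbook fix for boundary bias on [lower, infinity)."""
    phi = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    x = np.asarray(data)[:, None]
    g = np.asarray(grid)[None, :]
    # each point contributes its kernel plus the kernel of its mirror image
    dens = (phi((g - x) / h) + phi((g - (2 * lower - x)) / h)).mean(axis=0) / h
    return np.where(np.asarray(grid) >= lower, dens, 0.0)
```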
Title: An OpenCL(TM) Deep Learning Accelerator on Arria 10, Abstract: Convolutional neural nets (CNNs) have become a practical means to perform vision tasks, particularly in the area of image classification. FPGAs are well known to be able to perform convolutions efficiently; however, most recent efforts to run CNNs on FPGAs have shown limited advantages over other devices such as GPUs. Previous approaches on FPGAs have often been memory bound due to the limited external memory bandwidth on the FPGA device. We show a novel architecture written in OpenCL(TM), which we refer to as a Deep Learning Accelerator (DLA), that maximizes data reuse and minimizes external memory bandwidth. Furthermore, we show how we can use the Winograd transform to significantly boost the performance of the FPGA. As a result, when running our DLA on Intel's Arria 10 device, we achieve a performance of 1020 img/s, or 23 img/s/W, on the AlexNet CNN benchmark. This corresponds to 1382 GFLOPS and is 10x faster, with 8.4x more GFLOPS and 5.8x better efficiency, than the state of the art on FPGAs. Additionally, 23 img/s/W is competitive with the best publicly known implementation of AlexNet on nVidia's TitanX GPU.
[ 1, 0, 0, 0, 0, 0 ]
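The Winograd transform mentioned above trades multiplications for additions. The smallest useful instance, F(2,3), computes two outputs of a 3-tap filter with four multiplications instead of six; a sketch using the standard transform matrices (as in Lavin and Gray) follows:

```python
import numpy as np

# Winograd F(2,3) transform matrices (input, filter, and output transforms)
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G  = np.array([[1, 0, 0], [.5, .5, .5], [.5, -.5, .5], [0, 0, 1]], float)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

def winograd_f23(d, g):
    """Two outputs of correlating 4 inputs d with a 3-tap filter g, using
    only the 4 elementwise multiplications in the transform domain."""
    return AT @ ((G @ g) * (BT @ d))

d, g = np.array([1., 2., 3., 4.]), np.array([.5, 1., -1.])
print(winograd_f23(d, g))                  # [-0.5, 0.0]
print(np.correlate(d, g, 'valid'))         # same result, computed directly
```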
Title: Two-channel conduction in YbPtBi, Abstract: We investigated transport, magnetotransport, and broadband optical properties of the half-Heusler compound YbPtBi. Hall measurements evidence two types of charge carriers: highly mobile electrons with a temperature-dependent concentration, and low-mobility holes whose concentration stays almost constant within the investigated temperature range from 2.5 to 300 K. The optical spectra (10 meV - 2.7 eV) can be naturally decomposed into contributions from intra- and interband absorption processes, the former manifesting themselves as two Drude bands with very different scattering rates, corresponding to the charges with different mobilities. The results of the optical measurements allow us to separate the contributions of electrons and holes to the total conductivity and to implement a two-channel-conduction model for the description of the magnetotransport data. In this approach, the electron and hole mobilities are found to be around 50000 and 10 cm$^{2}$/Vs, respectively, at the lowest temperatures (2.5 K).
[ 0, 1, 0, 0, 0, 0 ]
Title: Cosmological constraints on scalar-tensor gravity and the variation of the gravitational constant, Abstract: We present cosmological constraints on the scalar-tensor theory of gravity by analyzing the angular power spectrum data of the cosmic microwave background obtained from the Planck 2015 results together with the baryon acoustic oscillations (BAO) data. We find that the inclusion of the BAO data improves the constraints on the time variation of the effective gravitational constant by more than $10\%$, that is, the time variation of the effective gravitational constant between the recombination and the present epochs is constrained as $G_{\rm rec}/G_0-1 <1.9\times 10^{-3}\ (95.45\%\ {\rm C.L.})$ and $G_{\rm rec}/G_0-1 <5.5\times 10^{-3}\ (99.99 \%\ {\rm C.L.})$. We also discuss the dependence of the constraints on the choice of the prior.
[ 0, 1, 0, 0, 0, 0 ]
Title: Extending Bayesian structural time-series estimates of causal impact to many-household conservation initiatives, Abstract: Government agencies offer economic incentives to citizens for conservation actions, such as rebates for installing efficient appliances and compensation for modifications to homes. The intention of these conservation actions is frequently to reduce the consumption of a utility. Measuring the conservation impact of incentives is important for guiding policy, but doing so is technically difficult. However, methods for estimating the impact of public outreach efforts have seen substantial development in consumer marketing in recent years, as marketers seek to substantiate the value of their services. One such method uses Bayesian Structural Time Series (BSTS) to compare a market exposed to an advertising campaign with control markets identified through a matching procedure. This paper introduces an extension of the matching/BSTS method for impact estimation, making it applicable to general conservation program impact estimation when multi-household data are available. This is accomplished by household-level matching/BSTS steps to obtain conservation estimates, followed by a meta-regression step to aggregate the findings. A case study examining the impact of rebates for household turf removal on water consumption in multiple Californian water districts is conducted to illustrate the workflow of this method.
[ 0, 0, 0, 1, 0, 0 ]
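The aggregation step can be as simple as inverse-variance pooling of the per-household impact estimates (a minimal fixed-effect sketch; the paper's meta-regression step can additionally model household covariates):

```python
import numpy as np

def pool_impacts(estimates, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted mean of the
    per-household BSTS impact estimates, with its standard error."""
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * np.asarray(estimates, float)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# three hypothetical household-level impact estimates (and their variances)
print(pool_impacts([-12.0, -8.0, -15.0], [4.0, 9.0, 16.0]))
```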
Title: Absorption probabilities for Gaussian polytopes, and regular spherical simplices, Abstract: The Gaussian polytope $\mathcal P_{n,d}$ is the convex hull of $n$ independent standard normally distributed points in $\mathbb R^d$. We derive explicit expressions for the probability that $\mathcal P_{n,d}$ contains a fixed point $x\in\mathbb R^d$ as a function of the Euclidean norm of $x$, and the probability that $\mathcal P_{n,d}$ contains the point $\sigma X$, where $\sigma\geq 0$ is constant and $X$ is a standard normal vector independent of $\mathcal P_{n,d}$. As a by-product, we also compute the expected number of $k$-faces and the expected volume of $\mathcal P_{n,d}$, thus recovering the results of Affentranger and Schneider [Discr. and Comput. Geometry, 1992] and Efron [Biometrika, 1965], respectively. All formulas are in terms of the volumes of regular spherical simplices, which, in turn, can be expressed through the standard normal distribution function $\Phi(z)$ and its complex version $\Phi(iz)$. The main tool used in the proofs is the conic version of the Crofton formula.
[ 0, 0, 1, 0, 0, 0 ]
Title: Modeling news spread as an SIR process over temporal networks, Abstract: News spread in internet media outlets can be seen as a contagious process generating temporal networks that represent the influence between published articles. In this article we propose a methodology based on natural language analysis of the articles to reconstruct the spread network. From the reconstructed network, we show that the dynamics of the news spread can be approximated by classical SIR epidemiological dynamics on the network. From the results obtained, we argue that the proposed methodology can be used to make predictions about media repercussion and to detect viral memes in news streams.
[ 1, 1, 0, 0, 0, 0 ]
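A discrete-time SIR step on a contact network can be written directly from the adjacency matrix; the sketch below is an illustrative simulation (swap in a different adjacency matrix per step for a temporal network), not the paper's reconstruction pipeline:

```python
import numpy as np

def sir_step(adj, state, beta, gamma, rng):
    """One step of network SIR. state: 0 = susceptible, 1 = infectious,
    2 = recovered; adj is a 0/1 adjacency matrix."""
    infectious = (state == 1)
    pressure = adj @ infectious                  # infectious neighbours per node
    p_infect = 1.0 - (1.0 - beta) ** pressure    # independent per-contact risk
    new_i = (state == 0) & (rng.random(state.size) < p_infect)
    new_r = infectious & (rng.random(state.size) < gamma)
    out = state.copy()
    out[new_i], out[new_r] = 1, 2
    return out
```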
Title: Price dynamics on a risk-averse market with asymmetric information, Abstract: A market with asymmetric information can be viewed as a repeated exchange game between the informed sector and the uninformed one. In a market with risk-neutral agents, De Meyer [2010] proves that the price process should be a particular kind of Brownian martingale called CMMV. This type of dynamics is due to the strategic use of their private information by the informed agents. In the current paper, we consider the more realistic case where agents on the market are risk-averse. This case is much more complex to analyze as it leads to a non-zero-sum game. Our main result is that the price process is still a CMMV under a martingale equivalent measure. This paper provides thus a theoretical justification for the use of the CMMV class of dynamics in financial analysis. This class contains as a particular case the Black and Scholes dynamics.
[ 1, 0, 1, 0, 0, 0 ]
Title: Learning to Generalize: Meta-Learning for Domain Generalization, Abstract: Domain shift refers to the well-known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model-agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
[ 1, 0, 0, 0, 0, 0 ]
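A minimal sketch of the meta-optimization objective described above, on a toy least-squares model: one held-out domain per step plays the role of the synthesized virtual testing domain, and a first-order approximation (dropping the Hessian term) is used for the meta-gradient. The synthetic domains, step sizes, and the first-order simplification are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(theta, X, y):
    """Gradient of mean squared error for a linear model."""
    r = X @ theta - y
    return 2 * X.T @ r / len(y)

# Three synthetic "domains" sharing a true parameter but with shifted inputs.
theta_true = np.array([1.0, -2.0])
domains = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(200, 2))
    y = X @ theta_true + 0.1 * rng.normal(size=200)
    domains.append((X, y))

theta = np.zeros(2)
alpha, lr, beta = 0.05, 0.05, 1.0
for step in range(500):
    # Simulate domain shift: hold one domain out as the virtual "meta-test".
    test_idx = step % len(domains)
    train = [d for i, d in enumerate(domains) if i != test_idx]
    g_train = np.mean([loss_grad(theta, X, y) for X, y in train], axis=0)
    theta_virtual = theta - alpha * g_train          # virtual inner update
    g_test = loss_grad(theta_virtual, *domains[test_idx])
    theta = theta - lr * (g_train + beta * g_test)   # first-order meta update
print(theta)  # should approach theta_true
```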
Title: Divide and Conquer: Recovering Contextual Information of Behaviors in Android Apps around Limited-quantity Audit Logs, Abstract: Android users are now suffering serious threats from various unwanted apps. The analysis of apps' audit logs is one of the critical methods for some device manufacturers to unveil the underlying malice of apps. We propose and implement DroidHolmes, a novel system that recovers contextual information around limited-quantity audit logs. It can also help improve the performance of existing analysis tools, such as FlowDroid and IccTA. The key module of DroidHolmes is finding a path matched with the logs on the app's control-flow graph. The challenge, however, is that the limited-quantity logs may incur high computational complexity in log matching, where there is a large number of candidates caused by the coupling relation of successive logs. To address the challenge, we propose a divide-and-conquer algorithm for effectively positioning each log record individually. In our evaluation, DroidHolmes helps existing tools to achieve 94.87% and 100% in precision and recall, respectively, on 132 apps from open-source test suites. Based on the results of DroidHolmes, the contextual information in the behaviors of 500 real-world apps is also recovered. Meanwhile, DroidHolmes incurs negligible performance overhead on the smartphone.
[ 1, 0, 0, 0, 0, 0 ]
Title: Homotopy Theoretic Classification of Symmetry Protected Phases, Abstract: We classify a number of symmetry protected phases using Freed-Hopkins' homotopy theoretic classification. Along the way we compute the low-dimensional homotopy groups of a number of novel cobordism spectra.
[ 0, 1, 1, 0, 0, 0 ]
Title: Automatic Renal Segmentation in DCE-MRI using Convolutional Neural Networks, Abstract: Kidney function evaluation using dynamic contrast-enhanced MRI (DCE-MRI) images could help in the diagnosis and treatment of kidney diseases in children. Automatic segmentation of renal parenchyma is an important step in this process. In this paper, we propose a time- and memory-efficient fully automated segmentation method which achieves high segmentation accuracy with running time on the order of seconds in both normal kidneys and kidneys with hydronephrosis. The proposed method is based on a cascaded application of two 3D convolutional neural networks that employ spatial and temporal information at the same time in order to learn the tasks of localization and segmentation of kidneys, respectively. Segmentation performance is evaluated on both normal and abnormal kidneys with varying levels of hydronephrosis. We achieved mean Dice coefficients of 91.4 and 83.6 for normal and abnormal kidneys of pediatric patients, respectively.
[ 0, 0, 0, 1, 0, 0 ]
Title: Self-adjoint approximations of the degenerate Schrodinger operator, Abstract: We consider the problem of constructing a quantum mechanical evolution for the Schrodinger equation with a degenerate Hamiltonian, a symmetric operator that does not admit self-adjoint extensions. Self-adjoint regularization of the Hamiltonian does not lead to a probability-preserving limiting evolution for vectors in the Hilbert space, but it can be used to construct a limiting evolution of states on a C*-algebra of compact operators and on an abelian subalgebra of operators in the Hilbert space. The limiting evolution of the states on the abelian algebra can be presented by a Kraus decomposition with two terms, which correspond to the unitary and shift components of the Wold decomposition of the isometric semigroup generated by the degenerate Hamiltonian. Properties of the limiting evolution of the states on the C*-algebras are investigated, and it is shown that pure states can evolve into mixed states.
[ 0, 0, 1, 0, 0, 0 ]
Title: Inelastic deformation during sill and laccolith emplacement: Insights from an analytic elastoplastic model, Abstract: Numerous geological observations show that inelastic deformation occurs during sill and laccolith emplacement. However, most models of sill and laccolith emplacement neglect inelastic processes by assuming purely elastic deformation of the host rock. This assumption has never been tested, so the role of inelastic deformation in the growth dynamics of magma intrusions remains poorly understood. In this paper, we introduce the first analytical model of shallow sill and laccolith emplacement that accounts for elasto-plastic deformation of the host rock. It considers the intrusion's overburden as a thin elastic bending plate attached to an elastic-perfectly-plastic foundation. We find that, for geologically realistic values of the model parameters, the horizontal extent of the plastic zone $l_p$ is much smaller than the radius of the intrusion $a$. By modeling the quasi-static growth of a sill, we find that the ratio $l_p/a$ decreases during propagation, as $1/\sqrt{a^4 \Delta P}$, with $\Delta P$ the magma overpressure. The model also shows that the extent of the plastic zone decreases with the intrusion's depth, while it increases if the host rock is weaker. Comparison between our elasto-plastic model and existing purely elastic models shows that plasticity can have a significant effect on intrusion propagation dynamics, with e.g. up to a doubling of the overpressure necessary for the sill to grow. Our results suggest that plasticity effects might be small for large sills, but conversely that they might be substantial for early sill propagation.
[ 0, 1, 0, 0, 0, 0 ]
Title: On the magnetic shield for a Vlasov-Poisson plasma, Abstract: We study the screening of a bounded body $\Gamma$ against the effect of a wind of charged particles, by means of a shield produced by a magnetic field which becomes infinite on the border of $\Gamma$. The charged wind is modeled by a Vlasov-Poisson plasma, the bounded body by a torus, and the external magnetic field is taken close to the border of $\Gamma$. We study two models: a plasma composed of different species with positive or negative charges, and finite total mass of each species, and another made of many species of the same sign, each having infinite mass. We investigate the time evolution of both systems, showing in particular that the plasma particles cannot reach the body. Finally we discuss possible extensions to more general initial data. We also show that when the magnetic field lines are straight lines (which forces the body to be unbounded), the previous results can be improved.
[ 0, 0, 1, 0, 0, 0 ]
Title: Judicious partitions of uniform hypergraphs, Abstract: The vertices of any graph with $m$ edges may be partitioned into two parts so that each part meets at least $\frac{2m}{3}$ edges. Bollobás and Thomason conjectured that the vertices of any $r$-uniform hypergraph with $m$ edges may likewise be partitioned into $r$ classes such that each part meets at least $\frac{r}{2r-1}m$ edges. In this paper we prove the weaker statement that, for each $r\ge 4$, a partition into $r$ classes may be found in which each class meets at least $\frac{r}{3r-4}m$ edges, a substantial improvement on previous bounds.
[ 0, 0, 1, 0, 0, 0 ]
Title: Full and maximal squashed flat antichains of minimum weight, Abstract: A full squashed flat antichain (FSFA) in the Boolean lattice $B_n$ is a family $\mathcal{A}\cup\mathcal{B}$ of subsets of $[n]=\{1,2,\dots,n\}$ such that, for some $k\in [n]$ and $0\le m\le \binom n k$, $\mathcal{A}$ is the family of the first $m$ $k$-sets in squashed (reverse-lexicographic) order and $\mathcal{B}$ contains exactly those $(k-1)$-subsets of $[n]$ that are not contained in some $A\in\mathcal{A}$. If, in addition, every $k$-subset of $[n]$ which is not in $\mathcal{A}$ contains some $B\in\mathcal{B}$, then $\mathcal{A}\cup\mathcal{B}$ is a maximal squashed flat antichain (MSFA). For any $n,k$ and positive real numbers $\alpha,\beta$, we determine all FSFA and all MSFA of minimum weight $\alpha\cdot|\mathcal{A}|+\beta\cdot|\mathcal{B}|$. Based on this, asymptotic results on MSFA with minimum size and minimum BLYM value, respectively, are derived.
[ 1, 0, 0, 0, 0, 0 ]
Title: On the phantom barrier crossing and the bounds on the speed of sound in non-minimal derivative coupling theories, Abstract: In this paper we investigate the so-called "phantom barrier crossing" issue in a cosmological model based on the scalar-tensor theory with non-minimal derivative coupling to the Einstein tensor. Special attention is paid to the physical bounds on the squared sound speed. The numerical results are geometrically illustrated by means of a qualitative procedure of analysis based on the mapping of the orbits in the phase plane onto the surfaces that represent physical quantities in the extended phase space, that is, the phase plane complemented with an additional dimension relative to the given physical parameter. We find that the cosmological model based on the non-minimal derivative coupling theory -- this includes both the quintessence and the pure derivative coupling cases -- has serious causality problems related with superluminal propagation of the scalar and tensor perturbations. Even more disturbing is the finding that, despite the underlying theory being free of the Ostrogradsky instability, the corresponding cosmological model is plagued by the Laplacian (classical) instability related with negative squared sound speed. This instability leads to an uncontrollable growth of the energy density of the perturbations that is inversely proportional to their wavelength. We show that, independently of the self-interaction potential, for positive coupling the tensor perturbations propagate superluminally, while for negative coupling a Laplacian instability arises. This latter instability invalidates the possibility for the model to describe primordial inflation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Detecting tropical defects of polynomial equations, Abstract: We introduce the notion of tropical defects, certificates that a system of polynomial equations is not a tropical basis, and provide algorithms for finding them around affine spaces of complementary dimension to the zero set. We use these techniques to solve open problems regarding del Pezzo surfaces of degree 3 and realizability of valuated gaussoids of rank 4.
[ 1, 0, 0, 0, 0, 0 ]
Title: Wasserstein Variational Inference, Abstract: This paper introduces Wasserstein variational inference, a new form of approximate Bayesian inference based on optimal transport theory. Wasserstein variational inference uses a new family of divergences that includes both f-divergences and the Wasserstein distance as special cases. The gradients of the Wasserstein variational loss are obtained by backpropagating through the Sinkhorn iterations. This technique results in a very stable likelihood-free training method that can be used with implicit distributions and probabilistic programs. Using the Wasserstein variational inference framework, we introduce several new forms of autoencoders and test their robustness and performance against existing variational autoencoding techniques.
[ 0, 0, 0, 1, 0, 0 ]
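Since the Wasserstein variational loss is obtained by backpropagating through Sinkhorn iterations, the core computational primitive is entropy-regularized optimal transport. Below is a minimal numpy sketch of the Sinkhorn fixed-point iterations on two small discrete distributions; the regularization strength and ground cost are illustrative choices, and a real implementation would run this inside an autodiff framework.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """a, b: source/target histograms; C: cost matrix. Returns transport plan."""
    K = np.exp(-C / eps)          # Gibbs kernel of the ground cost
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Two small discrete distributions on a line.
x = np.linspace(0, 1, 5)
a = np.full(5, 0.2)
b = np.array([0.1, 0.1, 0.2, 0.3, 0.3])
C = (x[:, None] - x[None, :]) ** 2        # squared-distance ground cost
P = sinkhorn(a, b, C)
print("regularized OT cost:", np.sum(P * C))
```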
Title: Probabilistic counting on the Furstenberg boundary, Abstract: Let $G$ be a real linear semisimple algebraic group without compact factors and $\Gamma$ a Zariski dense subgroup of $G$. In this paper, we use a probabilistic counting in order to study the asymptotic properties of $\Gamma$ acting on the Furstenberg boundary of $G$. First, we show that the $K$ components of the elements of $\Gamma$ in the KAK decomposition of $G$ become asymptotically independent. This result is an analog of a result of Gorodnik-Oh in the context of the Archimedean counting. Then, we give a new proof of a result of Guivarc'h concerning the positivity of the Hausdorff dimension of the unique stationary probability measure on the Furstenberg boundary of $G$. Finally, we show how these results can be combined to give a probabilistic proof of the Tits alternative, namely that two independent random walks on $\Gamma$ will eventually generate a free subgroup. This result answered a question of Guivarc'h and was published earlier by the author. Since we work over the field of real numbers, we give here a more direct proof and a more general statement.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bures-Hall Ensemble: Spectral Densities and Average Entropies, Abstract: We consider an ensemble of random density matrices distributed according to the Bures measure. The corresponding joint probability density of eigenvalues is described by the fixed trace Bures-Hall ensemble of random matrices which, in turn, is related to its unrestricted trace counterpart via a Laplace transform. We investigate the spectral statistics of both these ensembles and, in particular, focus on the level density, for which we obtain exact closed-form results involving Pfaffians. In the fixed trace case, the level density expression is used to obtain an exact result for the average Havrda-Charvát-Tsallis (HCT) entropy as a finite sum. Averages of von Neumann entropy, linear entropy and purity follow by considering appropriate limits in the average HCT expression. Based on exact evaluations of the average von Neumann entropy and the average purity, we also conjecture very simple formulae for these, which are similar to those in the Hilbert-Schmidt ensemble.
[ 0, 0, 0, 1, 0, 0 ]
Title: Electron and Nucleon Localization Functions of Oganesson: Approaching the Thomas-Fermi Limit, Abstract: Fermion localization functions are used to discuss electronic and nucleonic shell structure effects in the superheavy element oganesson, the heaviest element discovered to date. Spin-orbit splitting in the $7p$ electronic shell becomes so large ($\sim$ 10 eV) that Og is expected to show uniform-gas-like behavior in the valence region with a rather large dipole polarizability compared to the lighter rare gas elements. The nucleon localization in Og is also predicted to undergo a transition to the Thomas-Fermi gas behavior in the valence region. This effect, particularly strong for neutrons, is due to the high density of single-particle orbitals.
[ 0, 1, 0, 0, 0, 0 ]
Title: Constrained Bayesian Optimization with Noisy Experiments, Abstract: Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting their applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that optimization performance on noisy, constrained problems outperforms existing methods. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.
[ 1, 0, 0, 1, 0, 0 ]
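A minimal sketch of the quasi-Monte Carlo idea in the abstract: with noisy observations the incumbent best is itself random, so expected improvement is estimated by jointly sampling the posterior at the observed points and the candidate using low-discrepancy (Sobol) draws. The toy posterior mean and covariance below are assumptions standing in for a fitted Gaussian process.

```python
import numpy as np
from scipy.stats import qmc, norm

def qmc_noisy_ei(mu, cov, n_obs, m=256):
    """mu, cov: joint posterior over [observed points..., candidate].
    n_obs: number of observed points. Returns a qMC estimate of EI."""
    d = len(mu)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(d))
    sobol = qmc.Sobol(d=d, scramble=True, seed=0)
    u = sobol.random(m)                       # low-discrepancy uniforms
    z = norm.ppf(np.clip(u, 1e-9, 1 - 1e-9))  # map to Gaussian draws
    samples = mu + z @ L.T
    best_obs = samples[:, :n_obs].max(axis=1) # random incumbent per draw
    improvement = np.maximum(samples[:, n_obs] - best_obs, 0.0)
    return improvement.mean()

mu = np.array([0.0, 0.2, 0.5])                # two observed points + candidate
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.04, 0.02],
                [0.00, 0.02, 0.09]])
print(qmc_noisy_ei(mu, cov, n_obs=2))
```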
Title: On synthetic data with predetermined subject partitioning and cluster profiling, and pre-specified categorical variable marginal dependence structure, Abstract: A standard approach for assessing the performance of partition or mixture models is to create synthetic data sets with a pre-specified clustering structure, and assess how well the model reveals this structure. A common format is that subjects are assigned to different clusters, with variable observations simulated so that subjects within the same cluster have similar profiles, allowing for some variability. In this manuscript, we consider observations from nominal, ordinal and interval categorical variables. Theoretical and empirical results are utilized to explore the dependence structure between the variables, in relation to the clustering structure for the subjects. A novel approach is proposed that allows one to control the marginal association or correlation structure of the variables, and to specify exact correlation values. Practical examples are shown, and additional theoretical results are derived for interval data, commonly observed in cohort studies, including observations that emulate Single Nucleotide Polymorphisms. We compare a synthetic dataset to a real one, to demonstrate similarities and differences.
[ 0, 0, 0, 1, 0, 0 ]
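One standard construction consistent with the abstract's goal, sketched below under assumptions: draw correlated Gaussians and discretize each coordinate at quantile cutoffs (a Gaussian copula), which induces a dependence structure while preserving the specified marginals. The manuscript's approach controls the categorical association exactly; this latent-correlation shortcut only induces it approximately, and the target correlation and category probabilities here are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
latent_corr = np.array([[1.0, 0.6], [0.6, 1.0]])
z = rng.multivariate_normal(np.zeros(2), latent_corr, size=5000)

def discretize(u, probs):
    """Map standard-normal draws to categories with the given marginals."""
    cuts = norm.ppf(np.cumsum(probs)[:-1])   # quantile cutpoints
    return np.digitize(u, cuts)

x1 = discretize(z[:, 0], [0.3, 0.4, 0.3])    # ordinal, 3 levels
x2 = discretize(z[:, 1], [0.5, 0.5])         # binary
print(np.corrcoef(x1, x2)[0, 1])             # induced association
```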
Title: Multi-Agent Diverse Generative Adversarial Networks, Abstract: We propose MAD-GAN, an intuitive generalization of Generative Adversarial Networks (GANs) and their conditional variants to address the well-known problem of mode collapse. First, MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Second, to enforce that different generators capture diverse high-probability modes, the discriminator of MAD-GAN is designed such that, along with finding the real and fake samples, it is also required to identify the generator that generated a given fake sample. Intuitively, to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. We perform extensive experiments on synthetic and real datasets and compare MAD-GAN with different variants of GAN. We show high-quality diverse sample generations for challenging tasks such as image-to-image translation and face generation. In addition, we also show that MAD-GAN is able to disentangle different modalities when trained using a highly challenging diverse-class dataset (e.g. a dataset with images of forests, icebergs, and bedrooms). In the end, we show its efficacy on the unsupervised feature representation task. In the appendix, we introduce a similarity-based competing objective (MAD-GAN-Sim) which encourages different generators to generate diverse samples based on a user-defined similarity metric. We show its performance on image-to-image translation, and also show its effectiveness on the unsupervised feature representation task.
[ 1, 0, 0, 1, 0, 0 ]
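A minimal sketch of the discriminator objective described above: a (k+1)-way softmax over {generator 1, ..., generator k, real}, so that identifying the generating agent is part of the discriminator's task. The random logits stand in for the outputs of a real discriminator network; this illustrates the loss, not the paper's training code.

```python
import numpy as np

def softmax_xent(logits, label):
    """Cross-entropy of a softmax over logits against an integer label."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

k = 3                                   # number of generators
rng = np.random.default_rng(0)
real_logits = rng.normal(size=k + 1)    # discriminator output on a real sample
fake_logits = rng.normal(size=k + 1)    # ... on a sample from generator 1

# Real samples get class k; a fake from generator j gets class j.
d_loss = softmax_xent(real_logits, k) + softmax_xent(fake_logits, 1)
# Each generator is trained to make its fakes look like class k ("real").
g_loss = softmax_xent(fake_logits, k)
print(d_loss, g_loss)
```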
Title: Analysis of Extremely Obese Individuals Using Deep Learning Stacked Autoencoders and Genome-Wide Genetic Data, Abstract: The aetiology of polygenic obesity is multifactorial, which indicates that life-style and environmental factors may influence multiple genes to aggravate this disorder. Several low-risk single nucleotide polymorphisms (SNPs) have been associated with BMI. However, identified loci only explain a small proportion of the variation observed for this phenotype. The linear nature of genome-wide association studies (GWAS) used to identify associations between genetic variants and the phenotype has had limited success in explaining the heritability variation of BMI and has shown low predictive capacity in classification studies. GWAS ignores the epistatic interactions that less significant variants have on the phenotypic outcome. In this paper we utilise a novel deep learning-based methodology to reduce the high-dimensional space in GWAS and find epistatic interactions between SNPs for classification purposes. SNPs were filtered based on the effects associations have with BMI. Since Bonferroni adjustment for multiple testing is highly conservative, an important proportion of SNPs involved in SNP-SNP interactions are ignored. Therefore, only SNPs with p-values < 1x10-2 were considered for subsequent epistasis analysis using stacked autoencoders (SAE). This allows the nonlinearity present in SNP-SNP interactions to be discovered through progressively smaller hidden layer units and to initialise a multi-layer feedforward artificial neural network (ANN) classifier. The classifier is fine-tuned to classify extremely obese and non-obese individuals. The best results were obtained with 2000 compressed units (SE=0.949153, SP=0.933014, Gini=0.949936, Logloss=0.1956, AUC=0.97497 and MSE=0.054057). Using 50 compressed units it was possible to achieve (SE=0.785311, SP=0.799043, Gini=0.703566, Logloss=0.476864, AUC=0.85178 and MSE=0.156315).
[ 0, 0, 0, 0, 1, 0 ]
Title: Packet Throughput Analysis of Static and Dynamic TDD in Small Cell Networks, Abstract: We develop an analytical framework for the performance comparison of small cell networks operating under static time division duplexing (S-TDD) and dynamic TDD (D-TDD). While in S-TDD downlink/uplink (DL/UL) cell transmissions are synchronized, in D-TDD each cell dynamically allocates resources to the most demanding direction. By leveraging stochastic geometry and queuing theory, we derive closed-form expressions for the UL and DL packet throughput, also capturing the impact of random traffic arrivals and packet retransmissions. Through our analysis, which is validated via simulations, we confirm that D-TDD outperforms S-TDD in DL, with the vice versa occurring in UL, since asymmetric transmissions reduce DL interference at the expense of an increased UL interference. We also find that in asymmetric scenarios, where most of the traffic is in DL, D-TDD provides a DL packet throughput gain by better controlling the queuing delay, and that such gain vanishes in the light-traffic regime.
[ 1, 0, 0, 0, 0, 0 ]
Title: False Discovery Rate Control via Debiased Lasso, Abstract: We consider the problem of variable selection in high-dimensional statistical models where the goal is to report a set of variables, out of many predictors $X_1, \dotsc, X_p$, that are relevant to a response of interest. For the linear high-dimensional model, where the number of parameters exceeds the number of samples $(p>n)$, we propose a procedure for variable selection and prove that it controls the \emph{directional} false discovery rate (FDR) below a pre-assigned significance level $q\in [0,1]$. We further analyze the statistical power of our framework and show that for designs with subgaussian rows and a common precision matrix $\Omega\in\mathbb{R}^{p\times p}$, if the minimum nonzero parameter $\theta_{\min}$ satisfies $$\sqrt{n} \theta_{\min} - \sigma \sqrt{2(\max_{i\in [p]}\Omega_{ii})\log\left(\frac{2p}{qs_0}\right)} \to \infty\,,$$ then this procedure achieves asymptotic power one. Our framework is built upon the debiasing approach and assumes the standard condition $s_0 = o(\sqrt{n}/(\log p)^2)$, where $s_0$ indicates the number of true positives among the $p$ features. Notably, this framework achieves exact directional FDR control without any assumption on the amplitude of unknown regression parameters, and does not require any knowledge of the distribution of covariates or the noise level. We test our method in synthetic and real data experiments to assess its performance and to corroborate our theoretical results.
[ 0, 0, 0, 1, 0, 0 ]
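A minimal sketch of the debiasing-plus-thresholding pipeline the abstract builds on, in the simplest possible setting where the precision matrix is the identity (so no node-wise regression is needed) and the noise level is known. The BH-style threshold on two-sided p-values is an illustrative stand-in for the paper's directional FDR calibration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, s0, sigma = 400, 100, 5, 1.0
theta = np.zeros(p); theta[:s0] = 0.6
X = rng.normal(size=(n, p))
y = X @ theta + sigma * rng.normal(size=n)

lasso = Lasso(alpha=sigma * np.sqrt(2 * np.log(p) / n)).fit(X, y)
# Debiased estimator: one correction step, here with Omega = identity.
theta_d = lasso.coef_ + X.T @ (y - X @ lasso.coef_) / n
z = np.sqrt(n) * theta_d / sigma          # approx standard normal under nulls
pvals = 2 * norm.sf(np.abs(z))

q = 0.1                                   # target FDR level
order = np.argsort(pvals)
# Benjamini-Hochberg step-up: largest i with p_(i) <= q * i / p.
cutoff = max((i + 1 for i in range(p)
              if pvals[order[i]] <= q * (i + 1) / p), default=0)
selected = order[:cutoff]
print(sorted(selected))                   # should recover indices 0..4
```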
Title: Relativistic Astronomy, Abstract: The "Breakthrough Starshot" initiative aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to relativistic effects, a trans-relativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the trans-relativistic camera. We suggest that observing celestial objects using a trans-relativistic camera may allow one to study astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples in which trans-relativistic cameras may make important contributions to astrophysics, and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
[ 0, 1, 0, 0, 0, 0 ]
Title: Connection between Fermi contours of zero-field electrons and $ν=\frac12$ composite fermions in two-dimensional systems, Abstract: We investigate the relation between the Fermi sea (FS) of zero-field carriers in two-dimensional systems and the FS of the corresponding composite fermions which emerge in a high magnetic field at filling $\nu = \frac{1}{2}$, as the kinetic energy dispersion is varied. We study cases both with and without rotational symmetry, and find that there is generally no straightforward relation between the geometric shapes and topologies of the two FSs. In particular, we show analytically that the composite Fermi liquid (CFL) is completely insensitive to a wide range of changes to the zero-field dispersion which preserve rotational symmetry, including ones that break the zero-field FS into multiple disconnected pieces. In the absence of rotational symmetry, we show that the notion of `valley pseudospin' in many-valley systems is generically not transferred to the CFL, in agreement with experimental observations. We also discuss how a rotationally symmetric band structure can induce a reordering of the Landau levels, opening interesting possibilities of observing higher-Landau-level physics in the high-field regime.
[ 0, 1, 0, 0, 0, 0 ]
Title: An instrumental intelligibility metric based on information theory, Abstract: We propose a monaural intrusive instrumental intelligibility metric called speech intelligibility in bits (SIIB). SIIB is an estimate of the amount of information shared between a talker and a listener in bits per second. Unlike existing information theoretic intelligibility metrics, SIIB accounts for talker variability and statistical dependencies between time-frequency units. Our evaluation shows that relative to state-of-the-art intelligibility metrics, SIIB is highly correlated with the intelligibility of speech that has been degraded by noise and processed by speech enhancement algorithms.
[ 1, 0, 0, 0, 0, 0 ]
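To fix intuition for "information shared between a talker and a listener in bits per second", here is a toy stand-in under a Gaussian channel assumption: per-feature mutual information is -(1/2)*log2(1 - rho^2) bits, summed over features and scaled by the frame rate. This is not the published SIIB estimator, which additionally models talker variability and statistical dependencies between time-frequency units.

```python
import numpy as np

def gaussian_mi_bits_per_sec(clean, degraded, frames_per_sec=100.0):
    """clean, degraded: (frames, features) arrays of matched speech features.
    Returns a Gaussian-channel estimate of shared information in bits/s."""
    mi = 0.0
    for j in range(clean.shape[1]):
        rho = np.corrcoef(clean[:, j], degraded[:, j])[0, 1]
        mi += -0.5 * np.log2(max(1.0 - rho ** 2, 1e-12))
    return mi * frames_per_sec

rng = np.random.default_rng(0)
clean = rng.normal(size=(1000, 20))
degraded = 0.8 * clean + 0.6 * rng.normal(size=(1000, 20))  # noisy channel
print(gaussian_mi_bits_per_sec(clean, degraded))
```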
Title: Scattering of kinks in a non-polynomial model, Abstract: We study a model described by a single real scalar field in the two-dimensional space-time. The model is specified by a potential which is non-polynomial and supports analytical kink-like solutions that are similar to the standard kink-like solutions that appear in the $\varphi^4$ model when it develops spontaneous symmetry breaking. We investigate the kink-antikink scattering problem in the non-polynomial model numerically and highlight some specific features, which are not present in the standard case.
[ 0, 1, 0, 0, 0, 0 ]
Title: On the Specification of Constraints for Dynamic Architectures, Abstract: In dynamic architectures, component activation and connections between components may vary over time. With the emergence of mobile computing such architectures have become increasingly important, and several techniques have emerged to support their specification. These techniques usually allow for the specification of concrete architecture instances. Sometimes, however, it is desirable to focus on the specification of constraints, rather than concrete architectures. Specifications of architecture patterns, especially, usually focus on a few important constraints, leaving out the details of the concrete architecture implementing the pattern. With this article we introduce an approach to specify such constraints for dynamic architectures. To this end, we introduce the notion of configuration traces as an abstract model for dynamic architectures. Then, we introduce the notion of configuration trace assertions as a formal language based on linear temporal logic to specify constraints for such architectures. In addition, we also introduce the notion of configuration diagrams to specify interfaces and certain common activation and connection constraints in one single, graphical notation. The approach is well-suited to specify patterns for dynamic architectures and verify them by means of formal analyses. This is demonstrated by applying the approach to specify and verify the Blackboard pattern for dynamic architectures.
[ 1, 0, 0, 0, 0, 0 ]
Title: Objectness Scoring and Detection Proposals in Forward-Looking Sonar Images with Convolutional Neural Networks, Abstract: Forward-looking sonar can capture high-resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, especially for small and unknown objects. In comparison, detection proposal algorithms have produced top-performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score the objectness of image windows in forward-looking sonar images; by thresholding objectness, we generate detection proposals. On our dataset of marine garbage objects, we obtain 94% recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training on such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
[ 1, 0, 0, 0, 0, 0 ]
Title: Rgtsvm: Support Vector Machines on a GPU in R, Abstract: Rgtsvm provides a fast and flexible support vector machine (SVM) implementation for the R language. The distinguishing feature of Rgtsvm is that support vector classification and support vector regression tasks are implemented on a graphical processing unit (GPU), allowing the libraries to scale to millions of examples with >100-fold improvement in performance over existing implementations. Nevertheless, Rgtsvm retains feature parity and has an interface that is compatible with the popular e1071 SVM package in R. Altogether, Rgtsvm enables large SVM models to be created by both experienced and novice practitioners.
[ 1, 0, 0, 1, 0, 0 ]
Title: Geometric Bijections for Regular Matroids, Zonotopes, and Ehrhart Theory, Abstract: Let $M$ be a regular matroid. The Jacobian group ${\rm Jac}(M)$ of $M$ is a finite abelian group whose cardinality is equal to the number of bases of $M$. This group generalizes the definition of the Jacobian group (also known as the critical group or sandpile group) ${\rm Jac}(G)$ of a graph $G$ (in which case bases of the corresponding regular matroid are spanning trees of $G$). There are many explicit combinatorial bijections in the literature between the Jacobian group of a graph ${\rm Jac}(G)$ and spanning trees. However, most of the known bijections use vertices of $G$ in some essential way and are inherently "non-matroidal". In this paper, we construct a family of explicit and easy-to-describe bijections between the Jacobian group of a regular matroid $M$ and bases of $M$, many instances of which are new even in the case of graphs. We first describe our family of bijections in a purely combinatorial way in terms of orientations; more specifically, we prove that the Jacobian group of $M$ admits a canonical simply transitive action on the set ${\mathcal G}(M)$ of circuit-cocircuit reversal classes of $M$, and then define a family of combinatorial bijections $\beta_{\sigma,\sigma^*}$ between ${\mathcal G}(M)$ and bases of $M$. (Here $\sigma$ (resp. $\sigma^*$) is an acyclic signature of the set of circuits (resp. cocircuits) of $M$.) We then give a geometric interpretation of each such map $\beta=\beta_{\sigma,\sigma^*}$ in terms of zonotopal subdivisions which is used to verify that $\beta$ is indeed a bijection. Finally, we give a combinatorial interpretation of lattice points in the zonotope $Z$; by passing to dilations we obtain a new derivation of Stanley's formula linking the Ehrhart polynomial of $Z$ to the Tutte polynomial of $M$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Cross-Domain Recommendation for Cold-Start Users via Neighborhood Based Feature Mapping, Abstract: Collaborative Filtering (CF) is a widely adopted technique in recommender systems. Traditional CF models mainly focus on predicting a user's preference for items in a single domain, such as the movie domain or the music domain. A major challenge for such models is the data sparsity problem; in particular, CF cannot make accurate predictions for cold-start users who have no ratings at all. Although Cross-Domain Collaborative Filtering (CDCF) has been proposed for effectively transferring users' rating preferences across different domains, it is still difficult for existing CDCF models to tackle cold-start users in the target domain due to the extreme data sparsity. In this paper, we propose a Cross-Domain Latent Feature Mapping (CDLFM) model for cold-start users in the target domain. First, in order to better characterize users in sparse domains, we take the users' similarity relationship on rating behaviors into consideration and propose Matrix Factorization by incorporating User Similarities (MFUS), in which three similarity measures are proposed. Next, to perform knowledge transfer across domains, we propose a neighborhood-based gradient boosting trees method to learn the cross-domain user latent feature mapping function. For each cold-start user, we learn his/her feature mapping function based on the latent feature pairs of those linked users who have similar rating behaviors with the cold-start user in the auxiliary domain. The preference of the cold-start user in the target domain can then be predicted based on the mapping function and his/her latent features in the auxiliary domain. Experimental results on two real data sets extracted from Amazon transaction data demonstrate the superiority of our proposed model against other state-of-the-art methods.
[ 1, 0, 0, 0, 0, 0 ]
Title: Deep Interest Network for Click-Through Rate Prediction, Abstract: Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed which follow a similar Embedding\&MLP paradigm. In these methods, large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together and fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector becomes a bottleneck, making it difficult for Embedding\&MLP methods to capture users' diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model, the Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data adaptive activation function, which help with training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
[ 1, 0, 0, 1, 0, 0 ]
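A minimal numpy sketch of a local activation unit in the spirit of DIN: each historical behavior embedding is weighted by a small scorer applied to the behavior, the candidate ad, and their interaction, and the weighted behaviors are sum-pooled, so the resulting user vector varies per ad. The tiny one-layer scorer and random embeddings are illustrative stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
behaviors = rng.normal(size=(10, d))     # embeddings of 10 historical behaviors
ad = rng.normal(size=d)                  # candidate ad embedding

# Activation weight from [behavior, behavior*ad, ad] through a one-layer scorer.
W = rng.normal(size=(3 * d,)) * 0.1
def activation_weight(b, a):
    feats = np.concatenate([b, b * a, a])  # include the interaction term
    return np.maximum(feats @ W, 0.0)      # ReLU, weights left unnormalized

weights = np.array([activation_weight(b, ad) for b in behaviors])
user_vector = (weights[:, None] * behaviors).sum(axis=0)
print(user_vector.shape)                 # ad-dependent user representation
```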
Title: Nonlinear Dirac Cones, Abstract: Physics arising from two-dimensional~(2D) Dirac cones has been a topic of great theoretical and experimental interest to studies of gapless topological phases and to simulations of relativistic systems. Such $2$D Dirac cones are often characterized by a $\pi$ Berry phase and are destroyed by a perturbative mass term. By considering mean-field nonlinearity in a minimal two-band Chern insulator model, we obtain a novel type of Dirac cones that are robust to local perturbations without symmetry restrictions. Due to a different pseudo-spin texture, the Berry phase of the Dirac cone is no longer quantized in $\pi$, and can be continuously tuned as an order parameter. Furthermore, in an Aharonov-Bohm~(AB) interference setup to detect such Dirac cones, the adiabatic AB phase is found to be $\pi$ both theoretically and computationally, offering an observable topological invariant and a fascinating example where the Berry phase and AB phase are fundamentally different. We hence discover a nonlinearity-induced quantum phase transition from a known topological insulating phase to an unusual gapless topological phase.
[ 0, 1, 0, 0, 0, 0 ]
Title: Portfolio Optimization in Fractional and Rough Heston Models, Abstract: We consider a fractional version of the Heston volatility model which is inspired by [16]. Within this model we treat portfolio optimization problems for power utility functions. Using a suitable representation of the fractional part, followed by a reasonable approximation we show that it is possible to cast the problem into the classical stochastic control framework. This approach is generic for fractional processes. We derive explicit solutions and obtain as a by-product the Laplace transform of the integrated volatility. In order to get rid of some undesirable features we introduce a new model for the rough path scenario which is based on the Marchaud fractional derivative. We provide a numerical study to underline our results.
[ 0, 0, 0, 0, 0, 1 ]
Title: Vanishing in stable motivic homotopy sheaves, Abstract: We determine systematic regions in which the bigraded homotopy sheaves of the motivic sphere spectrum vanish.
[ 0, 0, 1, 0, 0, 0 ]
Title: Classifying Character Degree Graphs With 6 Vertices, Abstract: We investigate prime character degree graphs of solvable groups that have six vertices. There are one hundred twelve non-isomorphic connected graphs with six vertices, of which all except nine are classified in this paper. We also completely classify the disconnected graphs with six vertices.
[ 0, 0, 1, 0, 0, 0 ]
Title: On the Table of Marks of a Direct Product of Finite Groups, Abstract: We present a method for computing the table of marks of a direct product of finite groups. In contrast to the character table of a direct product of two finite groups, its table of marks is not simply the Kronecker product of the tables of marks of the two groups. Based on a decomposition of the inclusion order on the subgroup lattice of a direct product as a relation product of three smaller partial orders, we describe the table of marks of the direct product essentially as a matrix product of three class incidence matrices. Each of these matrices is in turn described as a sparse block diagonal matrix. As an application, we use a variant of this matrix product to construct a ghost ring and a mark homomorphism for the rational double Burnside algebra of the symmetric group~$S_3$.
[ 0, 0, 1, 0, 0, 0 ]
Title: An Adiabatic Decomposition of the Hodge Cohomology of Manifolds Fibred over Graphs, Abstract: In this article we use the combinatorial and geometric structure of manifolds with embedded cylinders in order to develop an adiabatic decomposition of the Hodge cohomology of these manifolds. We will on the one hand describe the adiabatic behaviour of spaces of harmonic forms by means of a certain Čech-de Rham complex and on the other hand generalise the Cappell-Lee-Miller splicing map to the case of a finite number of edges, thus combining the topological and the analytic viewpoint. In parts, this work is a generalisation of works of Cappell, Lee and Miller in which a single-edged graph is considered, but it is more specific since only the Gauss-Bonnet operator is studied.
[ 0, 0, 1, 0, 0, 0 ]
Title: Search Intelligence: Deep Learning For Dominant Category Prediction, Abstract: Deep Neural Networks, and specifically fully-connected convolutional neural networks, are achieving remarkable results across a wide variety of domains. They have been trained to achieve state-of-the-art performance when applied to problems such as speech recognition, image classification, natural language processing and bioinformatics. Most of these deep learning models, when applied to classification, employ the softmax activation function for prediction and aim to minimize cross-entropy loss. In this paper, we propose a supervised model for dominant category prediction to improve search recall across all eBay classifieds platforms. The dominant category label for each query in the last 90 days is first calculated by summing the total number of collaborative clicks among all categories. The category having the highest number of collaborative clicks for the given query is considered its dominant category. Second, each query is transformed into a numeric vector by mapping each unique word in the query document to a unique integer value; all vectors are padded to equal length based on the maximum document length within the pre-defined vocabulary size. A fully-connected deep convolutional neural network (CNN) is then applied for classification. The proposed model achieves very high classification accuracy compared to other state-of-the-art machine learning techniques.
[ 1, 0, 0, 1, 0, 0 ]
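A minimal sketch of the query preprocessing step described above: each unique word is mapped to an integer id and queries are zero-padded to a common length before entering the embedding layer of the CNN. The vocabulary construction, the reservation of id 0 for padding/out-of-vocabulary words, and the sample queries are assumptions for illustration.

```python
from collections import Counter

def build_vocab(queries, vocab_size=10000):
    counts = Counter(w for q in queries for w in q.lower().split())
    # id 0 is reserved for padding / out-of-vocabulary words
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common(vocab_size))}

def encode(query, vocab, max_len):
    ids = [vocab.get(w, 0) for w in query.lower().split()][:max_len]
    return ids + [0] * (max_len - len(ids))   # zero-pad to fixed length

queries = ["used mountain bike", "iphone 7 unlocked", "bike helmet kids"]
vocab = build_vocab(queries)
max_len = max(len(q.split()) for q in queries)
encoded = [encode(q, vocab, max_len) for q in queries]
print(encoded)   # fixed-length integer vectors, ready for an embedding + CNN
```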
Title: Grouped Convolutional Neural Networks for Multivariate Time Series, Abstract: Analyzing multivariate time series data is important for many applications such as automated control, fault diagnosis and anomaly detection. One of the key challenges is to learn latent features automatically from dynamically changing multivariate input. In visual recognition tasks, convolutional neural networks (CNNs) have been successful to learn generalized feature extractors with shared parameters over the spatial domain. However, when high-dimensional multivariate time series is given, designing an appropriate CNN model structure becomes challenging because the kernels may need to be extended through the full dimension of the input volume. To address this issue, we present two structure learning algorithms for deep CNN models. Our algorithms exploit the covariance structure over multiple time series to partition input volume into groups. The first algorithm learns the group CNN structures explicitly by clustering individual input sequences. The second algorithm learns the group CNN structures implicitly from the error backpropagation. In experiments with two real-world datasets, we demonstrate that our group CNNs outperform existing CNN based regression methods.
[ 1, 0, 0, 0, 0, 0 ]
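The explicit structure-learning step can be sketched as follows: estimate the correlation between input series and cluster them so that each group feeds its own convolutional branch. The choice of hierarchical clustering on a correlation distance and the number of groups are assumptions about one reasonable instantiation, not necessarily the paper's exact clustering algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
t = 500
base = rng.normal(size=(t, 2))
# Six series: channels 0-2 driven by factor 0, channels 3-5 by factor 1.
series = np.column_stack([base[:, i // 3] + 0.3 * rng.normal(size=t)
                          for i in range(6)])

corr = np.corrcoef(series.T)
dist = squareform(1 - np.abs(corr), checks=False)  # correlation distance
groups = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(groups)   # e.g. [1 1 1 2 2 2]: one CNN branch per recovered group
```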
Title: A mathematical bridge between discretized gauge theories in quantum physics and approximate reasoning in pairwise comparisons, Abstract: We describe a mathematical link between aspects of information theory, called pairwise comparisons, and discretized gauge theories. The link is made by the notion of holonomy along the edges of a simplex. This correspondence leads to open questions in both fields.
[ 0, 0, 1, 0, 0, 0 ]
Title: Ten Simple Rules for Reproducible Research in Jupyter Notebooks, Abstract: Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.
[ 1, 0, 0, 0, 0, 0 ]
Title: No-But-Semantic-Match: Computing Semantically Matched XML Keyword Search Results, Abstract: Users are rarely familiar with the content of a data source they are querying, and therefore cannot avoid using keywords that do not exist in the data source. Traditional systems may respond with an empty result, causing dissatisfaction, while the data source in effect holds semantically related content. In this paper we study this no-but-semantic-match problem on XML keyword search and propose a solution which enables us to present the top-$k$ semantically related results to the user. Our solution involves two steps: (a) extracting semantically related candidate queries from the original query and (b) processing candidate queries and retrieving the top-$k$ semantically related results. Candidate queries are generated by replacement of non-mapped keywords with candidate keywords obtained from an ontological knowledge base. Candidate results are scored using their cohesiveness and their similarity to the original query. Since the number of queries to process can be large, with each result having to be analyzed, we propose pruning techniques to retrieve the top-$k$ results efficiently. We develop two query processing algorithms based on our pruning techniques. Further, we exploit a property of the candidate queries to propose a technique for processing multiple queries in batch, which improves the performance substantially. Extensive experiments on two real datasets verify the effectiveness and efficiency of the proposed approaches.
[ 1, 0, 0, 0, 0, 0 ]
Title: Avalanches and Plastic Flow in Crystal Plasticity: An Overview, Abstract: Crystal plasticity is mediated through dislocations, which form knotted configurations in a complex energy landscape. Once they disentangle and move, they may also be impeded by permanent obstacles with finite energy barriers or frustrating long-range interactions. The outcome of such complexity is the emergence of dislocation avalanches as the basic mechanism of plastic flow in solids at the nanoscale. While the deformation behavior of bulk materials appears smooth, a predictive model should clearly be based upon the character of these dislocation avalanches and their associated strain bursts. We provide here a comprehensive overview of experimental observations, theoretical models and computational approaches that have been developed to unravel the multiple aspects of dislocation avalanche physics and the phenomena leading to strain bursts in crystal plasticity.
[ 0, 1, 0, 0, 0, 0 ]
Title: Explicit expression for the stationary distribution of reflected Brownian motion in a wedge, Abstract: For Brownian motion in a (two-dimensional) wedge with negative drift and oblique reflection on the axes, we derive an explicit formula for the Laplace transform of its stationary distribution (when it exists), in terms of Cauchy integrals and generalized Chebyshev polynomials. To that purpose we solve a Carleman-type boundary value problem on a hyperbola, satisfied by the Laplace transforms of the boundary stationary distribution.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Data-Driven Framework for Assessing Cold Load Pick-up Demand in Service Restoration, Abstract: Cold load pick-up (CLPU) has been a critical concern to utilities. Researchers and industry practitioners have underlined the impact of CLPU on distribution system design and service restoration. The recent large-scale deployment of smart meters has provided the industry with a huge amount of data that is highly granular, both temporally and spatially. In this paper, a data-driven framework is proposed for assessing CLPU demand of residential customers using smart meter data. The proposed framework consists of two interconnected layers: 1) At the feeder level, a nonlinear auto-regression model is applied to estimate the diversified demand during the system restoration and calculate the CLPU demand ratio. 2) At the customer level, Gaussian Mixture Models (GMM) and probabilistic reasoning are used to quantify the CLPU demand increase. The proposed methodology has been verified using real smart meter data and outage cases.
[ 1, 0, 0, 0, 0, 0 ]
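A minimal sketch of the customer-level step under assumptions: fit a two-component Gaussian mixture to a customer's post-restoration demand samples and compare the pick-up component against normal-day demand to quantify the CLPU increase. The synthetic kW samples below stand in for real smart meter data, and the simple peak/mean ratio is an illustrative summary, not the paper's full probabilistic reasoning.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_kw = rng.normal(1.2, 0.3, size=(300, 1))            # typical demand
clpu_kw = np.vstack([rng.normal(1.2, 0.3, size=(150, 1)),  # unaffected portion
                     rng.normal(2.4, 0.4, size=(150, 1))]) # pick-up spike

gmm = GaussianMixture(n_components=2, random_state=0).fit(clpu_kw)
peak = gmm.means_.max()                 # mean of the pick-up component
ratio = peak / normal_kw.mean()
print(f"estimated CLPU demand ratio: {ratio:.2f}")
```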
Title: On the topological complexity of aspherical spaces, Abstract: The well-known theorem of Eilenberg and Ganea expresses the Lusternik-Schnirelmann category of an aspherical space as the cohomological dimension of its fundamental group. In this paper we study the similar problem of determining algebraically the topological complexity of Eilenberg-MacLane spaces. One of our main results states that, in the case when the fundamental group is hyperbolic in the sense of Gromov, the topological complexity of an aspherical space $K(\pi, 1)$ either equals, or is one larger than, the cohomological dimension of $\pi\times \pi$. We approach the problem by studying essential cohomology classes, i.e. classes which can be obtained from the powers of the canonical class via coefficient homomorphisms. We describe a spectral sequence which allows one to specify a full set of obstructions for a cohomology class to be essential. In the case of a hyperbolic group we establish a vanishing property of this spectral sequence which leads to the main result.
[ 0, 0, 1, 0, 0, 0 ]
Title: Computable Isomorphisms for Certain Classes of Infinite Graphs, Abstract: We investigate (2,1):1 structures, which consist of a countable set $A$ together with a function $f: A \to A$ such that for every element $x$ in $A$, $f$ maps either exactly one element or exactly two elements of $A$ to $x$. These structures extend the notions of injection structures, 2:1 structures, and (2,0):1 structures studied by Cenzer, Harizanov, and Remmel, all of which can be thought of as infinite directed graphs. We look at various computability-theoretic properties of (2,1):1 structures, most notably that of computable categoricity. We say that a structure $\mathcal{A}$ is computably categorical if there exists a computable isomorphism between any two computable copies of $\mathcal{A}$. We give a sufficient condition under which a (2,1):1 structure is computably categorical, and present some examples of (2,1):1 structures with different computability-theoretic properties.
[ 0, 0, 1, 0, 0, 0 ]
Title: Tug-of-War: Observations on Unified Content Handling, Abstract: Modern applications and Operating Systems vary greatly with respect to how they register and identify different types of content. These discrepancies lead to exploits and inconsistencies in user experience. In this paper, we highlight the issues arising in the modern content handling ecosystem, and examine how the operating system can be used to achieve unified and consistent content identification.
[ 1, 0, 0, 0, 0, 0 ]