Dataset schema: each record below consists of a paper title, its abstract, and six binary subject labels.

- title: string, 7 to 239 characters
- abstract: string, 7 to 2.76k characters
- cs: int64, 0 or 1
- phy: int64, 0 or 1
- math: int64, 0 or 1
- stat: int64, 0 or 1
- quantitative biology: int64, 0 or 1
- quantitative finance: int64, 0 or 1

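Since the schema above is the only structural documentation, here is a minimal sketch of loading and sanity-checking records in this layout with pandas; the file name `papers.csv` and the assumption of a CSV export are mine, not part of the source.

```python
import pandas as pd

# Hypothetical CSV export of the records below; the file name and the
# CSV format are assumptions, not part of the original dataset dump.
df = pd.read_csv("papers.csv")

label_cols = ["cs", "phy", "math", "stat",
              "quantitative biology", "quantitative finance"]

# Sanity check against the schema: every label is a binary int.
assert df[label_cols].isin([0, 1]).all().all()

# Records can carry multiple labels; count labels per paper.
df["num_labels"] = df[label_cols].sum(axis=1)
print(df["num_labels"].value_counts())

# Per-subject frequency across the dataset.
print(df[label_cols].mean().sort_values(ascending=False))
```
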
Title: An upper bound on the distinguishing index of graphs with minimum degree at least two
Abstract: The distinguishing index of a simple graph $G$, denoted by $D'(G)$, is the least number of labels in an edge labeling of $G$ not preserved by any non-trivial automorphism. It was conjectured by Pilśniak (2015) that $D'(G) \leq \lceil \sqrt{\Delta (G)}\rceil +1$ for any 2-connected graph $G$. We prove a more general result for the distinguishing index of graphs with minimum degree at least two, from which the conjecture follows. We also present graphs $G$ for which $D'(G)\leq \lceil \sqrt{\Delta }\rceil$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

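As a worked instance of the conjectured bound (the value $\Delta(G) = 10$ is an illustrative choice of mine, not from the paper):

```latex
% Illustrative instance of the conjectured bound, taking \Delta(G) = 10:
D'(G) \le \left\lceil \sqrt{\Delta(G)} \right\rceil + 1
      = \lceil \sqrt{10}\, \rceil + 1 = 4 + 1 = 5 .
```
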
Title: Adversarial Pseudo Healthy Synthesis Needs Pathology Factorization
Abstract: Pseudo healthy synthesis, i.e. the creation of a subject-specific 'healthy' image from a pathological one, could be helpful in tasks such as anomaly detection, understanding changes induced by pathology and disease, or even as data augmentation. We treat this task as a factor decomposition problem: we aim to separate what appears to be healthy and where disease is (as a map). The two factors are then recombined (by a network) to reconstruct the input disease image. We train our models in an adversarial way using either paired or unpaired settings, where we pair disease images and maps (as segmentation masks) when available. We quantitatively evaluate the quality of pseudo healthy images. We show in a series of experiments, performed on the ISLES and BraTS datasets, that our method is better than conditional GAN and CycleGAN, highlighting challenges in using adversarial methods in the image translation task of pseudo healthy image generation.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

Title: Closure operators, frames, and neatest representations
Abstract: Given a poset $P$ and a standard closure operator $\Gamma:\wp(P)\to\wp(P)$ we give a necessary and sufficient condition for the lattice of $\Gamma$-closed sets of $\wp(P)$ to be a frame in terms of the recursive construction of the $\Gamma$-closure of sets. We use this condition to show that given a set $\mathcal{U}$ of distinguished joins from $P$, the lattice of $\mathcal{U}$-ideals of $P$ fails to be a frame if and only if it fails to be $\sigma$-distributive, with $\sigma$ depending on the cardinalities of sets in $\mathcal{U}$. From this we deduce that if a poset has the property that whenever $a\wedge(b\vee c)$ is defined for $a,b,c\in P$ it is necessarily equal to $(a\wedge b)\vee (a\wedge c)$, then it has an $(\omega,3)$-representation. This answers a question from the literature.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: The structure of rationally factorized Lax type flows and their analytical integrability
Abstract: The work is devoted to constructing a wide class of differential-functional dynamical systems whose rich algebraic structure makes their integrability analytically effective. In particular, we analyze in detail operator Lax-type equations for factorized seed elements, prove an important theorem about their operator factorization, and describe the related analytical solution scheme for the corresponding nonlinear differential-functional dynamical systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Learning Policy Representations in Multiagent Systems
Abstract: Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by hand-engineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a small amount of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging high-dimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

Title: Jamming-Resistant Receivers for the Massive MIMO Uplink
Abstract: We design a jamming-resistant receiver scheme to enhance the robustness of a massive MIMO uplink system against jamming. We assume that a jammer attacks the system both in the pilot and data transmission phases. The key feature of the proposed scheme is that, in the pilot phase, we estimate not only the legitimate channel, but also the jamming channel by exploiting a purposely unused pilot sequence. The jamming channel estimate is used to construct linear receive filters that reject the impact of the jamming signal. The performance of the proposed scheme is analytically evaluated using asymptotic properties of massive MIMO. The optimal regularized zero-forcing receiver and the optimal power allocation are also studied. Numerical results are provided to verify our analysis and show that the proposed scheme greatly improves the achievable rates, as compared to conventional receivers. Interestingly, the proposed scheme works particularly well under strong jamming attacks, since the improved estimate of the jamming channel outweighs the extra jamming power.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Multiplex core-periphery organization of the human connectome
Abstract: The behavior of many complex systems is determined by a core of densely interconnected units. While many methods are available to identify the core of a network when connections between nodes are all of the same type, a principled approach to define the core when multiple types of connectivity are allowed is still lacking. Here we introduce a general framework to define and extract the core-periphery structure of multi-layer networks by explicitly taking into account the connectivity of the nodes at each layer. We show how our method works on synthetic networks with different sizes, densities, and overlaps between the cores at the different layers. We then apply the method to multiplex brain networks whose layers encode information both on the anatomical and the functional connectivity among regions of the human cortex. Results confirm the presence of the main known hubs, but also suggest the existence of novel brain core regions that have been discarded by previous analyses focused exclusively on the structural layer. Our work is a step forward in the identification of the core of the human connectome, and contributes to shedding light on a fundamental question in modern neuroscience.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0

Title: Towards Learned Clauses Database Reduction Strategies Based on Dominance Relationship
Abstract: Clause learning is one of the most important components of a conflict driven clause learning (CDCL) SAT solver that is effective on industrial instances. Since the number of learned clauses is proved to be exponential in the worst case, it is necessary to identify the most relevant clauses to maintain and delete the irrelevant ones. As reported in the literature, several learned-clause deletion strategies have been proposed. However, both the number of clauses removed at each reduction step and the results obtained vary so widely between strategies that it is hard to determine which criterion is best. Thus, the problem of selecting which learned clauses to remove during the search remains very challenging. In this paper, we propose a novel approach to identify the most relevant learned clauses without favoring or excluding any of the proposed measures, but by adopting the notion of a dominance relationship among those measures. Our approach bypasses the problem of the diversity of results and reaches a compromise between the assessments of these measures. Furthermore, the proposed approach also avoids another non-trivial problem, namely the number of clauses to be deleted at each reduction of the learned-clause database.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

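A minimal sketch of the dominance idea described above: a learned clause is kept only if no other clause dominates it on all quality measures simultaneously. The three measures used here (LBD, activity, size) and the quadratic scan are illustrative assumptions, not the paper's exact procedure.

```python
from dataclasses import dataclass

@dataclass
class Clause:
    lbd: int         # literal block distance: lower is better
    activity: float  # recent usefulness: higher is better
    size: int        # number of literals: lower is better

def dominates(a: Clause, b: Clause) -> bool:
    """a dominates b if a is at least as good on every measure
    and strictly better on at least one."""
    at_least = a.lbd <= b.lbd and a.activity >= b.activity and a.size <= b.size
    strictly = a.lbd < b.lbd or a.activity > b.activity or a.size < b.size
    return at_least and strictly

def reduce_db(clauses):
    """Keep only non-dominated clauses (O(n^2) scan for clarity)."""
    return [c for c in clauses
            if not any(dominates(d, c) for d in clauses if d is not c)]

db = [Clause(2, 0.9, 5), Clause(4, 0.1, 9), Clause(3, 0.8, 4)]
print(reduce_db(db))  # the second clause is dominated and dropped
```
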
Title: Integrable structure of products of finite complex Ginibre random matrices
Abstract: We consider the squared singular values of the product of $M$ standard complex Gaussian matrices. Since the squared singular values form a determinantal point process with a particular Meijer G-function kernel, the gap probabilities are given by a Fredholm determinant based on this kernel. It was shown by Strahov [St14] that a hard edge scaling limit of the gap probabilities is described by Hamiltonian differential equations which can be formulated as an isomonodromic deformation system similar to the theory of the Kyoto school. We generalize this result to the case of finite matrices by first finding a representation of the finite kernel in integrable form. As a result we obtain the Hamiltonian structure for finite-size matrices and formulate it in terms of an $(M+1) \times (M+1)$ matrix Schlesinger system. The case $M=1$ reproduces the Tracy and Widom theory, which results in the Painlevé V equation for the $(0,s)$ gap probability. Some integrals of motion for $M = 2$ are identified, and a coupled system of differential equations in two unknowns is presented which uniquely determines the corresponding $(0,s)$ gap probability.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: Global well-posedness of the 3D primitive equations with horizontal viscosity and vertical diffusivity
Abstract: In this paper, we consider the 3D primitive equations of oceanic and atmospheric dynamics with only horizontal eddy viscosities in the horizontal momentum equations and only vertical diffusivity in the temperature equation. Global well-posedness of strong solutions is established for any initial data such that the initial horizontal velocity $v_0\in H^2(\Omega)$ and the initial temperature $T_0\in H^1(\Omega)\cap L^\infty(\Omega)$ with $\nabla_HT_0\in L^q(\Omega)$, for some $q\in(2,\infty)$. Moreover, the strong solutions enjoy correspondingly more regularities if the initial temperature belongs to $H^2(\Omega)$. The main difficulties are the absence of the vertical viscosity and the lack of the horizontal diffusivity, which interact with each other and thus cause the "mismatching" of regularities between the horizontal momentum and temperature equations. To handle this "mismatching" of regularities, we introduce several auxiliary functions, i.e., $\eta, \theta, \varphi,$ and $\psi$ in the paper, which are the horizontal curls or some appropriate combinations of the temperature with the horizontal divergences of the horizontal velocity $v$ or its vertical derivative $\partial_zv$. To overcome the difficulties caused by the absence of the horizontal diffusivity, which leads to the requirement of some $L^1_t(W^{1,\infty}_\textbf{x})$-type a priori estimates on $v$, we decompose the velocity into the "temperature-independent" and temperature-dependent parts and deal with them in different ways, by using the logarithmic Sobolev inequalities of the Brézis-Gallouet-Wainger and Beale-Kato-Majda types, respectively. Specifically, a logarithmic Sobolev inequality of the limiting type, introduced in our previous work [12], is used, and a new logarithmic type Gronwall inequality is exploited.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: Superintegrable systems on 3-dimensional curved spaces: Eisenhart formalism and separability
Abstract: The Eisenhart geometric formalism, which transforms a Euclidean natural Hamiltonian $H=T+V$ into a geodesic Hamiltonian ${\cal T}$ with one additional degree of freedom, is applied to the four families of quadratically superintegrable systems with multiple separability in the Euclidean plane. Firstly, the separability and superintegrability of these four geodesic Hamiltonians ${\cal T}_r$ ($r=a,b,c,d$) in a three-dimensional curved space are studied, and then these four systems are modified with the addition of a potential ${\cal U}_r$, leading to ${\cal H}_r={\cal T}_r +{\cal U}_r$. Secondly, we study the superintegrability of the four Hamiltonians $\widetilde{\cal H}_r= {\cal H}_r/ \mu_r$, where $\mu_r$ is a certain position-dependent mass, which enjoy the same separability as the original systems ${\cal H}_r$. All the Hamiltonians studied here describe superintegrable systems on non-Euclidean three-dimensional manifolds with broken spherical symmetry.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: An Incentive-Based Online Optimization Framework for Distribution Grids
Abstract: This paper formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, the network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. The stability of the proposed schemes is analytically established and numerically corroborated.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: Ricci solitons on Ricci pseudosymmetric $(LCS)_n$-manifolds
Abstract: The object of the present paper is to study some types of Ricci pseudosymmetric $(LCS)_n$-manifolds whose metric is a Ricci soliton. We find the conditions under which a Ricci soliton on a concircular Ricci pseudosymmetric, projective Ricci pseudosymmetric, $W_{3}$-Ricci pseudosymmetric, conharmonic Ricci pseudosymmetric, or conformal Ricci pseudosymmetric $(LCS)_n$-manifold is shrinking, steady, or expanding. We also construct an example of a concircular Ricci pseudosymmetric $(LCS)_3$-manifold whose metric is a Ricci soliton.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: Optimal Gossip Algorithms for Exact and Approximate Quantile Computations
Abstract: This paper gives drastically faster gossip algorithms to compute exact and approximate quantiles. Gossip algorithms, which allow each node to contact a uniformly random other node in each round, have been intensely studied and been adopted in many applications due to their fast convergence and their robustness to failures. Kempe et al. [FOCS'03] gave gossip algorithms to compute important aggregate statistics if every node is given a value. In particular, they gave a beautiful $O(\log n + \log \frac{1}{\epsilon})$ round algorithm to $\epsilon$-approximate the sum of all values and an $O(\log^2 n)$ round algorithm to compute the exact $\phi$-quantile, i.e., the $\lceil \phi n \rceil$-th smallest value. We give a quadratically faster and in fact optimal gossip algorithm for the exact $\phi$-quantile problem which runs in $O(\log n)$ rounds. We furthermore show that one can achieve an exponential speedup if one allows for an $\epsilon$-approximation. We give an $O(\log \log n + \log \frac{1}{\epsilon})$ round gossip algorithm which computes a value of rank between $\phi n$ and $(\phi+\epsilon)n$ at every node. Our algorithms are extremely simple and very robust: they can be operated with the same running times even if every transmission fails with a, potentially different, constant probability. We also give a matching $\Omega(\log \log n + \log \frac{1}{\epsilon})$ lower bound which shows that our algorithm is optimal for all values of $\epsilon$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

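To illustrate the gossip communication model used above with the simplest possible quantile, the sketch below spreads the minimum (the 0-quantile) by uniform random pairwise contacts, which succeeds in O(log n) rounds with high probability. This is a toy for the model only, not the paper's φ-quantile algorithm.

```python
import math
import random

def gossip_minimum(values, rounds=None):
    """Each round, every node contacts a uniformly random node and both
    keep the smaller of their two current estimates (push-pull)."""
    n = len(values)
    est = list(values)
    rounds = rounds or 4 * math.ceil(math.log2(n))  # generous O(log n) budget
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)
            est[i] = est[j] = min(est[i], est[j])
    return est

vals = [random.random() for _ in range(1000)]
print(min(vals), set(gossip_minimum(vals)))  # all estimates equal the minimum
```
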
Title: Influence of Heat Treatment on the Corrosion Behavior of Purified Magnesium and AZ31 Alloy
Abstract: Magnesium and its alloys are ideal for biodegradable implants due to their biocompatibility and their low stress shielding. However, they can corrode too rapidly in the biological environment. The objective of this research was to develop heat treatments to slow the corrosion of high-purity magnesium (HP-Mg) and AZ31 alloy in simulated body fluid at 37°C. Heat treatments were performed at different temperatures and times. Hydrogen evolution, weight loss, potentiodynamic polarization (PDP), and electrochemical impedance spectroscopy (EIS) methods were used to measure the corrosion rates. Results show that heat treating can increase the corrosion resistance of HP-Mg by 2x and AZ31 by 10x.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Towards a scientific blockchain framework for reproducible data analysis
Abstract: Publishing reproducible analyses is a long-standing and widespread challenge for the scientific community, funding bodies and publishers. Although a definitive solution is still elusive, the problem is recognized to affect all disciplines and to lead to critical system inefficiency. Here, we propose a blockchain-based approach to enhance scientific reproducibility, with a focus on life science studies and precision medicine. While the value of permanently encoding all key study information (including endpoints, data and metadata, protocols, analytical methods and all findings) into an immutable ledger has already been highlighted, here we apply the blockchain approach to solve the issue of rewarding the time and expertise of scientists who commit to verifying reproducibility. Our mechanism builds a trustless ecosystem of researchers, funding bodies and publishers cooperating to guarantee digital and permanent access to information and reproducible results. As a natural byproduct, a procedure to quantify scientists' and institutions' reputations for ranking purposes is obtained.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Time Series Anomaly Detection: Detection of anomalous drops with limited features and sparse examples in noisy highly periodic data
Abstract: Google uses continuous streams of data from industry partners in order to deliver accurate results to users. Unexpected drops in traffic can be an indication of an underlying issue and may be an early warning that remedial action may be necessary. Detecting such drops is non-trivial because streams are variable and noisy, with roughly regular spikes (in many different shapes) in traffic data. We investigated the question of whether or not we can predict anomalies in these data streams. Our goal is to utilize Machine Learning and statistical approaches to classify anomalous drops in periodic, but noisy, traffic patterns. Since we do not have a large body of labeled examples to directly apply supervised learning for anomaly classification, we approached the problem in two parts. First, we used TensorFlow to train our various models, including DNNs, RNNs, and LSTMs, to perform regression and predict the expected value in the time series. Second, we created anomaly detection rules that compare the actual values to predicted values. Since the problem requires finding sustained anomalies, rather than just short delays or momentary inactivity in the data, our two detection methods focused on continuous sections of activity rather than just single points. We tried multiple combinations of our models and rules and found that using the intersection of our two anomaly detection methods proved to be an effective method of detecting anomalies on almost all of our models. In the process we also found that not all data fell within our experimental assumptions, as one data stream had no periodicity, and therefore no time based model could predict it.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

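A minimal sketch of the second stage described above: compare actual traffic against model predictions and flag only sustained deviations, not single points. The threshold, the minimum run length, and the use of the clean signal as a stand-in "prediction" are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def sustained_drops(actual, predicted, rel_threshold=0.3, min_length=5):
    """Flag runs of at least `min_length` consecutive points where the
    actual value falls more than `rel_threshold` below the prediction."""
    low = actual < (1.0 - rel_threshold) * predicted
    anomalies, start = [], None
    for t, flag in enumerate(low):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_length:
                anomalies.append((start, t))
            start = None
    if start is not None and len(low) - start >= min_length:
        anomalies.append((start, len(low)))
    return anomalies

# Stand-in for a trained model: the "prediction" is simply the clean
# periodic signal, so the example isolates the detection rule itself.
clean = np.sin(np.linspace(0, 20, 400)) + 2.0
traffic = clean.copy()
traffic[200:220] *= 0.5  # injected sustained drop
print(sustained_drops(traffic, clean))  # -> [(200, 220)]
```
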
Title: A parallel approach to bi-objective integer programming
Abstract: To obtain a better understanding of the trade-offs between various objectives, Bi-Objective Integer Programming (BOIP) algorithms calculate the set of all non-dominated vectors and present these as the solution to a BOIP problem. Historically, these algorithms have been compared in terms of the number of single-objective IPs solved and total CPU time taken to produce the solution to a problem. This is equitable, as researchers can often have access to widely differing amounts of computing power. However, the real world has recently seen a large uptake of multi-core processors in computers, laptops, tablets and even mobile phones. With this in mind, we look at how to best utilise parallel processing to improve the elapsed time of optimisation algorithms. We present two methods of parallelising the recursive algorithm presented by Ozlen, Burton and MacRae. Both new methods utilise two threads and improve running times. One of the new methods, the Meeting algorithm, halves running time to achieve near-perfect parallelisation. The results are compared with the efficiency of parallelisation within the commercial IP solver IBM ILOG CPLEX, and the new methods are both shown to perform better.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: The adaptive zero-error capacity for a class of channels with noisy feedback
Abstract: The adaptive zero-error capacity of discrete memoryless channels (DMC) with noiseless feedback has been shown to be positive whenever there exists at least one channel output "disprover", i.e. a channel output that cannot be reached from at least one of the inputs. Furthermore, whenever there exists a disprover, the adaptive zero-error capacity attains the Shannon (small-error) capacity. Here, we study the zero-error capacity of a DMC when the channel feedback is noisy rather than perfect. We show that the adaptive zero-error capacity with noisy feedback is lower bounded by the forward channel's zero-undetected error capacity, and show that under certain conditions this is tight.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Comparison of Self-Aware and Organic Computing Systems
Abstract: With the increasing complexity and heterogeneity of computing devices, it has become crucial for systems to be autonomous, adaptive to dynamic environments, robust, flexible, and to have the so-called self-* properties. These autonomous systems are called organic computing (OC) systems. OC systems were proposed as a solution for tackling complex systems. Design-time decisions have been shifted to run time in highly complex and interconnected systems, as it is very hard to consider all scenarios and their appropriate actions in advance. Consequently, self-awareness becomes crucial for these adaptive autonomous systems. To cope with evolving environments and changing user needs, a system needs to have knowledge about itself and its surroundings. A literature review shows that for autonomous and intelligent systems, researchers are concerned with knowledge acquisition, representation and learning, which are necessary for a system to adapt. This paper compares self-awareness and organic computing by discussing their definitions, properties and architectures.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: First Order Methods beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems
Abstract: We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first order methods (FOM), and was recently circumvented for convex composite optimization by Bauschke, Bolte and Teboulle, through a simple and elegant framework which captures, all at once, the geometry of the function and of the feasible set. Building on this work, we tackle genuine nonconvex problems. We first complement and extend their approach to derive a full extended descent lemma by introducing the notion of smooth adaptable functions. We then consider a Bregman-based proximal gradient method for the nonconvex composite model with smooth adaptable functions, which is proven to globally converge to a critical point under natural assumptions on the problem's data. To illustrate the power and potential of our general framework and results, we consider a broad class of quadratic inverse problems with sparsity constraints which arises in many fundamental applications, and we apply our approach to derive new globally convergent schemes for this class.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: An Army of Me: Sockpuppets in Online Discussion Communities
Abstract: In online discussion communities, users can interact and share information and opinions on a wide variety of topics. However, some users may create multiple identities, or sockpuppets, and engage in undesired behavior by deceiving others or manipulating discussions. In this work, we study sockpuppetry across nine discussion communities, and show that sockpuppets differ from ordinary users in terms of their posting behavior, linguistic traits, as well as social network structure. Sockpuppets tend to start fewer discussions, write shorter posts, use more personal pronouns such as "I", and have more clustered ego-networks. Further, pairs of sockpuppets controlled by the same individual are more likely to interact on the same discussion at the same time than pairs of ordinary users. Our analysis suggests a taxonomy of deceptive behavior in discussion communities. Pairs of sockpuppets can vary in their deceptiveness, i.e., whether they pretend to be different users, or their supportiveness, i.e., if they support arguments of other sockpuppets controlled by the same user. We apply these findings to a series of prediction tasks, notably, to identify whether a pair of accounts belongs to the same underlying user or not. Altogether, this work presents a data-driven view of deception in online discussion communities and paves the way towards the automatic detection of sockpuppets.
Labels: cs=1, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0

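A minimal sketch of turning the behavioral signals named above (fewer discussions started, shorter posts, more first-person pronouns, more clustered ego-networks) into a pair-of-accounts style classifier. The synthetic data, feature distributions, and logistic model are all my assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400

def make_features(is_sock):
    """Features echoing the abstract: discussions started, mean post length,
    first-person pronoun rate, ego-network clustering coefficient."""
    return np.column_stack([
        rng.poisson(3 if is_sock else 8, n),            # fewer discussions
        rng.normal(40 if is_sock else 80, 10, n),       # shorter posts
        rng.normal(0.09 if is_sock else 0.05, 0.01, n), # more "I"/"me" usage
        rng.normal(0.5 if is_sock else 0.3, 0.08, n),   # clustered ego-nets
    ])

X = np.vstack([make_features(True), make_features(False)])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```
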
Title: Robust Bayesian Optimization with Student-t Likelihood
Abstract: Bayesian optimization has recently attracted the attention of the automatic machine learning community for its excellent results in hyperparameter tuning. BO is characterized by the sample efficiency with which it can optimize expensive black-box functions. The efficiency is achieved in a similar fashion to the learning to learn methods: surrogate models (typically in the form of Gaussian processes) learn the target function and perform intelligent sampling. This surrogate model can be applied even in the presence of noise; however, as with most regression methods, it is very sensitive to outlier data. This can result in erroneous predictions and, in the case of BO, biased and inefficient exploration. In this work, we present a GP model that is robust to outliers, using a Student-t likelihood to segregate outliers and robustly conduct Bayesian optimization. We present numerical results evaluating the proposed method on both artificial functions and real problems.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

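A minimal numpy/scipy sketch of why a Student-t observation model resists outliers where a Gaussian one does not, fitting a line by maximum likelihood under each; this illustrates the likelihood choice only, not the paper's GP surrogate or BO loop, and the data and df=3 are my assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05, 50)
y[::10] += 3.0  # inject gross outliers

def fit(logpdf):
    """Maximum-likelihood line fit (slope, intercept, scale) by Nelder-Mead."""
    nll = lambda p: -logpdf(y - (p[0] * x + p[1]), p[2]).sum()
    return minimize(nll, x0=[0.0, 0.0, 0.1], method="Nelder-Mead").x[:2]

gauss = fit(lambda r, s: norm.logpdf(r, scale=abs(s) + 1e-9))
student = fit(lambda r, s: t.logpdf(r, df=3, scale=abs(s) + 1e-9))
print("Gaussian fit (slope, intercept):", gauss)    # dragged by outliers
print("Student-t fit (slope, intercept):", student)  # close to (2.0, 0.5)
```
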
Title: Vehicle Localization and Control on Roads with Prior Grade Map
Abstract: We propose a map-aided vehicle localization method for GPS-denied environments. This approach exploits prior knowledge of the road grade map and vehicle on-board sensor measurements to accurately estimate the longitudinal position of the vehicle. Real-time localization is crucial to systems that utilize position-dependent information for planning and control. We validate the effectiveness of the localization method on a hierarchical control system. The higher level planner optimizes the vehicle velocity to minimize the energy consumption for a given route by employing traffic condition and road grade data. The lower level is a cruise control system that tracks the position-dependent optimal reference velocity. Performance of the proposed localization algorithm is evaluated using both simulations and experiments.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Estimating Tactile Data for Adaptive Grasping of Novel Objects
Abstract: We present an adaptive grasping method that finds stable grasps on novel objects. The main contribution of this paper is the computation of the probability of success of grasps in the vicinity of an already applied grasp. Our method performs grasp adaptions by simulating tactile data for grasps in the vicinity of the current grasp. The simulated data is used to evaluate hypothetical grasps and thereby guide us toward better grasps. We demonstrate the applicability of our method by constructing a system that can plan, apply and adapt grasps on novel objects. Experiments are conducted on objects from the YCB object set, and the success rate of our method is 88%. Our experiments show that the application of our grasp adaption method improves grasp stability significantly.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical Viewpoints
Abstract: Algorithm-dependent generalization error bounds are central to statistical learning theory. A learning algorithm may use a large hypothesis space, but the limited number of iterations controls its model capacity and generalization error. The impacts of stochastic gradient methods on generalization error for non-convex learning problems not only have important theoretical consequences, but are also critical to generalization errors of deep learning. In this paper, we study the generalization errors of Stochastic Gradient Langevin Dynamics (SGLD) with non-convex objectives. Two theories are proposed with non-asymptotic discrete-time analysis, using Stability and PAC-Bayesian results respectively. The stability-based theory obtains a bound of $O\left(\frac{1}{n}L\sqrt{\beta T_k}\right)$, where $L$ is the uniform Lipschitz parameter, $\beta$ is the inverse temperature, and $T_k$ is the aggregated step size. For the PAC-Bayesian theory, though the bound has a slower $O(1/\sqrt{n})$ rate, the contribution of each step is shown with an exponentially decaying factor by imposing $\ell^2$ regularization, and the uniform Lipschitz constant is also replaced by actual norms of gradients along the trajectory. Our bounds have no implicit dependence on dimensions, norms or other capacity measures of the parameters, which elegantly characterizes the phenomenon of "Fast Training Guarantees Generalization" in non-convex settings. This is the first algorithm-dependent result with reasonable dependence on aggregated step sizes for non-convex learning, and has important implications for statistical learning aspects of stochastic gradient methods in complicated models such as deep learning.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0

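For reference, one SGLD iteration in the abstract's notation, where β is the inverse temperature and η the step size; this is the generic update, not anything specific to the paper's analysis, and the toy quadratic loss is my choice.

```python
import numpy as np

def sgld_step(theta, grad_fn, eta, beta, rng):
    """One Stochastic Gradient Langevin Dynamics update:
    theta <- theta - eta * grad + sqrt(2 * eta / beta) * Gaussian noise,
    where grad_fn returns a stochastic gradient of the empirical loss."""
    noise = rng.standard_normal(theta.shape)
    return theta - eta * grad_fn(theta) + np.sqrt(2.0 * eta / beta) * noise

# Toy usage: sample around the minimum of the quadratic loss f(x) = |x|^2 / 2,
# whose gradient is simply x itself.
rng = np.random.default_rng(0)
theta = np.ones(5)
for _ in range(2000):
    theta = sgld_step(theta, lambda th: th, eta=1e-2, beta=100.0, rng=rng)
print(theta)  # hovers near 0, with noise scale set by the temperature 1/beta
```
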
Title: Near-UV OH Prompt Emission in the Innermost Coma of 103P/Hartley 2
Abstract: The Deep Impact spacecraft fly-by of comet 103P/Hartley 2 occurred on 2010 November 4, one week after perihelion with a closest approach (CA) distance of about 700 km. We used narrowband images obtained by the Medium Resolution Imager (MRI) onboard the spacecraft to study the gas and dust in the innermost coma. We derived an overall dust reddening of 15%/100 nm between 345 and 749 nm and identified a blue enhancement in the dust coma in the sunward direction within 5 km from the nucleus, which we interpret as a localized enrichment in water ice. OH column density maps show an anti-sunward enhancement throughout the encounter except for the highest resolution images, acquired at CA, where a radial jet becomes visible in the innermost coma, extending up to 12 km from the nucleus. The OH distribution in the inner coma is very different from that expected for a fragment species. Instead, it correlates well with the water vapor map derived by the HRI-IR instrument onboard Deep Impact (A'Hearn et al. 2011). Radial profiles of the OH column density and derived water production rates show an excess of OH emission during CA that cannot be explained with pure fluorescence. We attribute this excess to a prompt emission process where photodissociation of H$_2$O directly produces excited OH*($A^2\it{\Sigma}^+$) radicals. Our observations provide the first direct imaging of Near-UV prompt emission of OH. We therefore suggest the use of a dedicated filter centered at 318.8 nm to directly trace the water in the coma of comets.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Effective gravity and effective quantum equations in a system inspired by walking droplets experiments
Abstract: In this paper we suggest a macroscopic toy system in which a potential-like energy is generated by a non-uniform pulsation of the medium (i.e. pulsation of transverse standing oscillations that the elastic medium of the system tends to support at each point). This system is inspired by walking droplets experiments with submerged barriers. We first show that a Poincaré-Lorentz covariant formalization of the system causes inconsistency and contradiction. The contradiction is solved by using a general covariant formulation and by assuming a relation between the metric associated with the elastic medium and the pulsation of the medium. (Calculations are performed in a Newtonian-like metric, constant in time). We find ($i$) an effective Schrödinger equation with external potential, ($ii$) an effective de Broglie-Bohm guidance formula and ($iii$) an energy of the `particle' which has a direct counterpart in general relativity as well as in quantum mechanics. We analyze the wave and the `particle' in an effective free fall and with a harmonic potential. This potential-like energy is an effective gravitational potential, rooted in the pulsation of the medium at each point. The latter, also conceivable as a natural clock, makes it easy to understand why proper time varies from place to place.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: A gradient estimate for nonlocal minimal graphs
Abstract: We consider the class of measurable functions defined in all of $\mathbb{R}^n$ that give rise to a nonlocal minimal graph over a ball of $\mathbb{R}^n$. We establish that the gradient of any such function is bounded in the interior of the ball by a power of its oscillation. This estimate, together with previously known results, leads to the $C^\infty$ regularity of the function in the ball. While the smoothness of nonlocal minimal graphs was known for $n = 1, 2$ (but without a quantitative bound), in higher dimensions only their continuity had been established. To prove the gradient bound, we show that the normal to a nonlocal minimal graph is a supersolution of a truncated fractional Jacobi operator, for which we prove a weak Harnack inequality. To this end, we establish a new universal fractional Sobolev inequality on nonlocal minimal surfaces. Our estimate provides an extension to the fractional setting of the celebrated gradient bounds of Finn and of Bombieri, De Giorgi & Miranda for solutions of the classical mean curvature equation.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: The GAPS Programme with HARPS-N@TNG XIV. Investigating giant planet migration history via improved eccentricity and mass determination for 231 transiting planets
Abstract: We carried out a Bayesian homogeneous determination of the orbital parameters of 231 transiting giant planets (TGPs) that are alone or have distant companions; we employed DE-MCMC methods to analyse radial-velocity (RV) data from the literature and 782 new high-accuracy RVs obtained with the HARPS-N spectrograph for 45 systems over 3 years. Our work yields the largest sample of systems with a transiting giant exoplanet and coherently determined orbital, planetary, and stellar parameters. We found that the orbital parameters of TGPs in non-compact planetary systems are clearly shaped by tides raised by their host stars. Indeed, the most eccentric planets have relatively large orbital separations and/or high mass ratios, as expected from the equilibrium tide theory. This feature would be the outcome of high-eccentricity migration (HEM). The distribution of $\alpha=a/a_R$, where $a$ and $a_R$ are the semi-major axis and the Roche limit, for well-determined circular orbits peaks at 2.5; this also agrees with expectations from the HEM. The few planets of our sample with circular orbits and $\alpha >5$ values may have migrated through disc-planet interactions instead of HEM. By comparing circularisation times with stellar ages, we found that hot Jupiters with $a < 0.05$ au have modified tidal quality factors $10^{5} < Q'_p < 10^{9}$, and that stellar $Q'_s > 10^{6}-10^{7}$ are required to explain the presence of eccentric planets at the same orbital distance. As a by-product of our analysis, we detected a non-zero eccentricity for HAT-P-29; we determined that five planets that were previously regarded as having hints of non-zero eccentricity have circular orbits or undetermined eccentricities; we unveiled curvatures caused by distant companions in the RV time series of HAT-P-2, HAT-P-22, and HAT-P-29; and we revised the planetary parameters of CoRoT-1b.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Injective stabilization of additive functors. I. Preliminaries
Abstract: This paper is the first one in a series of three dealing with the concept of injective stabilization of the tensor product and its applications. Its primary goal is to collect known facts and establish a basic operational calculus that will be used in the subsequent parts. This is done in greater generality than is necessary for the stated goal. Several results of independent interest are also established. They include, among other things, connections with satellites, an explicit construction of the stabilization of a finitely presented functor, various exactness properties of the injectively stable functors, and a construction, from a functor and a short exact sequence, of a doubly-infinite exact sequence by splicing the injective stabilization of the functor and its derived functors. When specialized to the tensor product with a finitely presented module, the injective stabilization with coefficients in the ring is isomorphic to the 1-torsion functor. The Auslander-Reiten formula is extended to a more general formula, which holds for arbitrary (i.e., not necessarily finite) modules over arbitrary associative rings with identity. Weakening of the assumptions in the theorems of Eilenberg and Watts leads to characterizations of the requisite zeroth derived functors. The subsequent papers provide applications of the developed techniques. Part II deals with new notions of the torsion module and cotorsion module of a module. This is done for arbitrary modules over arbitrary rings. Part III introduces a new concept, called the asymptotic stabilization of the tensor product. The result is closely related to different variants of stable homology (these are generalizations of Tate homology to arbitrary rings). A comparison transformation from Vogel homology to the asymptotic stabilization of the tensor product is constructed and shown to be epic.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: Three hypergraph eigenvector centralities
Abstract: Eigenvector centrality is a standard network analysis tool for determining the importance of (or ranking of) entities in a connected system that is represented by a graph. However, many complex systems and datasets have natural multi-way interactions that are more faithfully modeled by a hypergraph. Here we extend the notion of graph eigenvector centrality to uniform hypergraphs. Traditional graph eigenvector centralities are given by a positive eigenvector of the adjacency matrix, which is guaranteed to exist by the Perron-Frobenius theorem under some mild conditions. The natural representation of a hypergraph is a hypermatrix (colloquially, a tensor). Using recently established Perron-Frobenius theory for tensors, we develop three tensor eigenvector centralities for hypergraphs, each with different interpretations. We show that these centralities can reveal different information on real-world data by analyzing hypergraphs constructed from n-gram frequencies, co-tagging on stack exchange, and drug combinations observed in patient emergency room visits.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

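A minimal sketch of power iteration on the adjacency tensor of a 3-uniform hypergraph, with an entrywise (m−1)-st root in the spirit of an H-eigenvector centrality; the toy hyperedges are made up, and the convergence conditions and the other two centrality variants from the paper are omitted.

```python
import numpy as np

def apply_adjacency(edges, x, n):
    """(Ax)_i = sum over hyperedges {i, j, k} of x_j * x_k (3-uniform case)."""
    y = np.zeros(n)
    for (i, j, k) in edges:
        y[i] += x[j] * x[k]
        y[j] += x[i] * x[k]
        y[k] += x[i] * x[j]
    return y

def h_eigenvector_centrality(edges, n, iters=200):
    """Normalized iteration x <- (Ax)^(1/(m-1)) for m = 3; positivity is
    preserved because every node lies in at least one hyperedge here."""
    x = np.ones(n) / n
    for _ in range(iters):
        y = apply_adjacency(edges, x, n)
        x = np.sqrt(y)        # entrywise (m-1)-st root, m - 1 = 2
        x /= x.sum()
    return x

edges = [(0, 1, 2), (1, 2, 3), (0, 2, 3), (2, 3, 4)]  # toy 3-uniform hypergraph
print(h_eigenvector_centrality(edges, n=5))  # node 2 scores highest
```
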
Title: Reinforcement Learning using Augmented Neural Networks
Abstract: Neural networks allow Q-learning reinforcement learning agents such as deep Q-networks (DQN) to approximate complex mappings from state spaces to value functions. However, this also brings drawbacks compared to other function approximators, such as tile coding or its generalisation, radial basis functions (RBF): neural networks introduce instability as a side effect of their globalised updates. This instability does not vanish even in neural networks without hidden layers. In this paper, we show that simple modifications to the structure of the neural network can improve the stability of DQN learning when a multi-layer perceptron is used for function approximation.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

Title: Instantons and Fluctuations in a Lagrangian Model of Turbulence
Abstract: We perform a detailed analytical study of the Recent Fluid Deformation (RFD) model for the onset of Lagrangian intermittency, within the context of the Martin-Siggia-Rose-Janssen-de Dominicis (MSRJD) path integral formalism. The model is based, as a key point, upon local closures for the pressure Hessian and the viscous dissipation terms in the stochastic dynamical equations for the velocity gradient tensor. We carry out a power counting hierarchical classification of the several perturbative contributions associated to fluctuations around the instanton-evaluated MSRJD action, along the lines of the cumulant expansion. The most relevant Feynman diagrams are then integrated out into the renormalized effective action, for the computation of velocity gradient probability distribution functions (vgPDFs). While the subleading perturbative corrections do not affect the global shape of the vgPDFs in an appreciable qualitative way, it turns out that they have a significant role in the accurate description of their non-Gaussian cores.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: The heavy path approach to Galton-Watson trees with an application to Apollonian networks
Abstract: We study the heavy path decomposition of conditional Galton-Watson trees. In a standard Galton-Watson tree conditional on its size $n$, we order all children by their subtree sizes, from large (heavy) to small. A node is marked if it is among the $k$ heaviest nodes among its siblings. Unmarked nodes and their subtrees are removed, leaving only a tree of marked nodes, which we call the $k$-heavy tree. We study various properties of these trees, including their size and the maximal distance from any original node to the $k$-heavy tree. In particular, under some moment condition, the $2$-heavy tree is with high probability larger than $cn$ for some constant $c > 0$, and the maximal distance from the $k$-heavy tree is $O(n^{1/(k+1)})$ in probability. As a consequence, for uniformly random Apollonian networks of size $n$, the expected size of the longest simple path is $\Omega(n)$.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

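A minimal sketch of the construction described above: compute subtree sizes, then keep at every node only its k children with the largest subtrees. The child-list tree representation is an assumption for illustration.

```python
def subtree_sizes(children, root=0):
    """children[v] lists v's children; returns the size of each subtree."""
    size = [1] * len(children)
    def visit(v):
        for c in children[v]:
            visit(c)
            size[v] += size[c]
    visit(root)
    return size

def k_heavy_tree(children, k, root=0):
    """Keep, at every node, only its k children with the largest subtrees."""
    size = subtree_sizes(children, root)
    kept = [[] for _ in children]
    def visit(v):
        kept[v] = sorted(children[v], key=lambda c: -size[c])[:k]
        for c in kept[v]:
            visit(c)
    visit(root)
    return kept

tree = [[1, 2, 3], [4, 5], [], [6], [], [], []]  # toy rooted tree
print(k_heavy_tree(tree, k=2))  # node 0 keeps children 1 and 3; node 2 is cut
```
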
Title: Perishability of Data: Dynamic Pricing under Varying-Coefficient Models
Abstract: We consider a firm that sells a large number of products to its customers in an online fashion. Each product is described by a high dimensional feature vector, and the market value of a product is assumed to be linear in the values of its features. Parameters of the valuation model are unknown and can change over time. The firm sequentially observes a product's features and can use the historical sales data (binary sale/no sale feedback) to set the price of the current product, with the objective of maximizing the collected revenue. We measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance. We propose a pricing policy based on projected stochastic gradient descent (PSGD) and characterize its regret in terms of time $T$, features dimension $d$, and the temporal variability in the model parameters, $\delta_t$. We consider two settings. In the first one, feature vectors are chosen antagonistically by nature and we prove that the regret of the PSGD pricing policy is of order $O(\sqrt{T} + \sum_{t=1}^T \sqrt{t}\delta_t)$. In the second setting (referred to as the stochastic features model), the feature vectors are drawn independently from an unknown distribution. We show that in this case, the regret of the PSGD pricing policy is of order $O(d^2 \log T + \sum_{t=1}^T t\delta_t/d)$.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

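A minimal sketch of a projected stochastic gradient pricing loop in the spirit of the abstract: a logistic sale-probability model stands in for the noisy linear valuation, and the projection radius, step sizes, and pricing rule are my assumptions, not the paper's exact policy or its regret-optimal tuning.

```python
import numpy as np

def project_ball(theta, radius):
    """Euclidean projection onto the ball of the given radius."""
    nrm = np.linalg.norm(theta)
    return theta if nrm <= radius else theta * (radius / nrm)

rng = np.random.default_rng(0)
d, radius = 5, 10.0
theta_true = rng.normal(size=d)  # unknown (here static) valuation parameters
theta = np.zeros(d)

for t in range(1, 5001):
    x = rng.normal(size=d) / np.sqrt(d)   # observed product features
    price = theta @ x                      # post the current value estimate
    # Binary feedback: a sale occurs iff the price is below the
    # (logistically noisy) market value theta_true @ x.
    sale = rng.random() < 1 / (1 + np.exp(-(theta_true @ x - price)))
    # Stochastic log-likelihood gradient of the logistic sale model.
    grad = (float(sale) - 1 / (1 + np.exp(-(theta @ x - price)))) * x
    theta = project_ball(theta + grad / np.sqrt(t), radius)

print(np.linalg.norm(theta - theta_true))  # shrinks as feedback accumulates
```
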
Title: A fast algorithm for maximal propensity score matching
Abstract: We present a new algorithm which detects the maximal possible number of matched disjoint pairs satisfying a given caliper when a bipartite matching is done with respect to a scalar index (e.g., propensity score), and constructs a corresponding matching. Variable width calipers are compatible with the technique, provided that the width of the caliper is a Lipschitz function of the index. If the observations are ordered with respect to the index then the matching needs $O(N)$ operations, where $N$ is the total number of subjects to be matched. The case of 1-to-$n$ matching is also considered. We also offer a new fast algorithm for optimal complete one-to-one matching on a scalar index when the treatment and control groups are of the same size. This allows us to improve greedy nearest neighbor matching on a scalar index.
Keywords: propensity score matching, nearest neighbor matching, matching with caliper, variable width caliper.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

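A minimal sketch of greedy one-to-one matching on a sorted scalar index with a fixed-width caliper, running in O(N) after sorting; this is the classical baseline the abstract refers to, not the paper's new maximal-matching or variable-caliper algorithm.

```python
def caliper_match(treated, control, caliper):
    """Greedy one-to-one matching of two score lists: a pair is admissible
    when the scores differ by at most `caliper`. Linear scan after sorting."""
    treated, control = sorted(treated), sorted(control)
    i = j = 0
    pairs = []
    while i < len(treated) and j < len(control):
        if abs(treated[i] - control[j]) <= caliper:
            pairs.append((treated[i], control[j]))
            i += 1
            j += 1
        elif treated[i] < control[j]:
            i += 1  # every later control is even farther from this unit
        else:
            j += 1  # every later treated unit is even farther from this control
    return pairs

print(caliper_match([0.10, 0.40, 0.41], [0.12, 0.39, 0.80], caliper=0.05))
# -> [(0.1, 0.12), (0.4, 0.39)]; 0.41 and 0.80 stay unmatched
```
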
Title: Fractional quantum Hall systems near nematicity: bimetric theory, composite fermions, and Dirac brackets
Abstract: We perform a detailed comparison of the Dirac composite fermion and the recently proposed bimetric theory for quantum Hall Jain states near half filling. By tuning the composite Fermi liquid to the vicinity of a nematic phase transition, we find that the two theories are equivalent to each other. We verify that the single mode approximation for the response functions and the static structure factor becomes reliable near the phase transition. We show that the dispersion relation of the nematic mode near the phase transition can be obtained from the Dirac brackets between the components of the nematic order parameter. The dispersion is quadratic at low momenta and has a magnetoroton minimum at a finite momentum, which is not related to any nearby inhomogeneous phase.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Recognizing Objects In-the-wild: Where Do We Stand?
Abstract: The ability to recognize objects is an essential skill for a robotic system acting in human-populated environments. Despite decades of effort from the robotic and vision research communities, robots are still missing good visual perceptual systems, preventing the use of autonomous agents for real-world applications. The progress is slowed down by the lack of a testbed able to accurately represent the world perceived by the robot in-the-wild. In order to fill this gap, we introduce a large-scale, multi-view object dataset collected with an RGB-D camera mounted on a mobile robot. The dataset embeds the challenges faced by a robot in a real-life application and provides a useful tool for validating object recognition algorithms. Besides describing the characteristics of the dataset, the paper evaluates the performance of a collection of well-established deep convolutional networks on the new dataset and analyzes the transferability of deep representations from Web images to robotic data. Despite the promising results obtained with such representations, the experiments demonstrate that object classification with real-life robotic data is far from being solved. Finally, we provide a comparative study to analyze and highlight the open challenges in robot vision, explaining the discrepancies in the performance.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Generalizing Geometric Brownian Motion
Abstract: To convert standard Brownian motion $Z$ into a positive process, Geometric Brownian motion (GBM) $e^{\beta Z_t}, \beta >0$ is widely used. We generalize this positive process by introducing an asymmetry parameter $ \alpha \geq 0$ which describes the instantaneous volatility whenever the process reaches a new low. For our new process, $\beta$ is the instantaneous volatility as prices become arbitrarily high. Our generalization preserves the positivity, constant proportional drift, and tractability of GBM, while expressing the instantaneous volatility as a randomly weighted $L^2$ mean of $\alpha$ and $\beta$. The running minimum and relative drawup of this process are also analytically tractable. Letting $\alpha = \beta$, our positive process reduces to Geometric Brownian motion. By adding a jump to default to the new process, we introduce a non-negative martingale with the same tractabilities. Assuming a security's dynamics are driven by these processes in risk neutral measure, we price several derivatives including vanilla, barrier and lookback options.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1

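For reference, a minimal simulation of the baseline process being generalized, $S_t = e^{\beta Z_t}$; the grid and parameters are arbitrary choices of mine, and the paper's asymmetric α/β volatility mechanism is not implemented here.

```python
import numpy as np

def simulate_gbm_exponential(beta, T=1.0, n_steps=1000, n_paths=5, seed=0):
    """Paths of S_t = exp(beta * Z_t) for standard Brownian motion Z,
    discretized on a uniform grid with Gaussian increments."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dZ = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    Z = np.cumsum(dZ, axis=1)
    return np.exp(beta * Z)

paths = simulate_gbm_exponential(beta=0.3)
print(paths[:, -1])       # terminal values, one per path
print(paths.min(axis=1))  # running minimum, a quantity the abstract analyzes
```
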
Title: Challenges testing the no-hair theorem with gravitational waves
Abstract: General relativity's no-hair theorem states that isolated astrophysical black holes are described by only two numbers: mass and spin. As a consequence, there are strict relationships between the frequency and damping time of the different modes of a perturbed Kerr black hole. Testing the no-hair theorem has been a longstanding goal of gravitational-wave astronomy. The recent detection of gravitational waves from black hole mergers would seem to make such tests imminent. We investigate how constraints on black hole ringdown parameters scale with the loudness of the ringdown signal, subject to the constraint that the post-merger remnant must be allowed to settle into a perturbative, Kerr-like state. In particular, we require that, for a given detector, the gravitational waveform predicted by numerical relativity is indistinguishable from an exponentially damped sine after time $t^\text{cut}$. By requiring the post-merger remnant to settle into such a perturbative state, we find that confidence intervals for ringdown parameters do not necessarily shrink with louder signals. In at least some cases, more sensitive measurements probe later times without necessarily providing tighter constraints on ringdown frequencies and damping times. Preliminary investigations are unable to explain this result in terms of a numerical relativity artifact.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Speculation On a Source of Dark Matter
Abstract: By drawing an analogy with superfluid $^4$He vortices we suggest that dark matter may consist of irreducibly small remnants of cosmic strings.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Analyzing Cloud Optical Properties Using Sky Cameras
Abstract: Clouds play a significant role in the fluctuation of solar radiation received by the earth's surface. It is important to study the various cloud properties, as they impact the total solar irradiance falling on the earth's surface. One such important optical property of clouds is the Cloud Optical Thickness (COT), which quantifies the amount of light that can pass through a cloud. COT values are generally obtained from satellite images. However, satellite images have low temporal and spatial resolutions, and are not suitable for applications such as solar energy generation and forecasting. Therefore, ground-based sky cameras are now becoming popular in such fields. In this paper, we analyze cloud optical thickness values derived from ground-based sky cameras, and provide future research directions.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Response Formulae for $n$-point Correlations in Statistical Mechanical Systems and Application to a Problem of Coarse Graining
Abstract: Predicting the response of a system to perturbations is a key challenge in mathematical and natural sciences. Under suitable conditions on the nature of the system, of the perturbation, and of the observables of interest, response theories allow one to construct operators describing the smooth change of the invariant measure of the system of interest as a function of the small parameter controlling the intensity of the perturbation. In particular, response theories can be developed both for stochastic and chaotic deterministic dynamical systems, where in the latter case stricter conditions imposing some degree of structural stability are required. In this paper we extend previous findings and derive general response formulae describing how n-point correlations are affected by perturbations to the vector flow. We also show how to compute the response of the spectral properties of the system to perturbations. We then apply our results to the seemingly unrelated problem of coarse graining in multiscale systems: we find explicit formulae describing the change in the terms describing the parameterisation of the neglected degrees of freedom resulting from applying perturbations to the full system. All the terms envisioned by the Mori-Zwanzig theory (the deterministic, stochastic, and non-Markovian terms) are affected at first order in the perturbation. The obtained results provide a more comprehensive understanding of the response of statistical mechanical systems to perturbations, contribute to the goal of constructing accurate and robust parameterisations, and are of potential relevance for fields like molecular dynamics, condensed matter, and geophysical fluid dynamics. We envision possible applications of our general results to the study of the response of climate variability to anthropogenic and natural forcing and to the study of the equivalence of thermostatted statistical mechanical systems.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: The Australian PCEHR system: Ensuring Privacy and Security through an Improved Access Control Mechanism
Abstract: An Electronic Health Record (EHR) is designed to store diverse data accurately from a range of health care providers and to capture the status of a patient by a range of health care providers across time. Realising the numerous benefits of the system, EHR adoption is growing globally and many countries invest heavily in electronic health systems. In Australia, the Government invested $467 million to build key components of the Personally Controlled Electronic Health Record (PCEHR) system in July 2012. However, in the last three years, the uptake by individuals and health care providers has not been satisfactory. Unauthorised access to the PCEHR was one of the major barriers. We propose an improved access control model for the PCEHR system to resolve the unauthorised access issue. We discuss the unauthorised access issue with real examples and present a potential solution to overcome the issue to make the PCEHR system a success in Australia.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Title: Randomized Kernel Methods for Least-Squares Support Vector Machines
Abstract: The least-squares support vector machine is a frequently used kernel method for non-linear regression and classification tasks. Here we discuss several approximation algorithms for the least-squares support vector machine classifier. The proposed methods are based on randomized block kernel matrices, and we show that they provide good accuracy and reliable scaling for multi-class classification problems with relatively large data sets. Also, we present several numerical experiments that illustrate the practical applicability of the proposed methods.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

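A minimal sketch of one way to combine a low-rank kernel approximation with the least-squares SVM linear system; the Nyström landmark construction below is a stand-in under my own assumptions (and the bias term of the standard LS-SVM dual is omitted for brevity), not necessarily the paper's randomized block scheme.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_nystrom(X, y, m=40, reg=1.0, gamma=1.0, seed=0):
    """Solve (K_approx + I/reg) alpha = y with K approximated from m random
    landmark columns via the Nystrom formula K ~= C W^+ C^T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)              # n x m block of K
    W = C[idx]                                    # m x m landmark block
    K_approx = C @ np.linalg.pinv(W) @ C.T
    return np.linalg.solve(K_approx + np.eye(len(X)) / reg, y)

# Toy two-class problem with +-1 labels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([-np.ones(100), np.ones(100)])
alpha = lssvm_nystrom(X, y)
pred = np.sign(rbf_kernel(X, X) @ alpha)  # f(x) = sum_i alpha_i K(x, x_i)
print("train accuracy:", (pred == y).mean())
```
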
Title: Optimal Transport: Fast Probabilistic Approximation with Exact Solvers
Abstract: We propose a simple subsampling scheme for fast randomized approximate computation of optimal transport distances. This scheme operates on a random subset of the full data and can use any exact algorithm as a black-box back-end, including state-of-the-art solvers and entropically penalized versions. It is based on averaging the exact distances between empirical measures generated from independent samples from the original measures and can easily be tuned towards higher accuracy or shorter computation times. To this end, we give non-asymptotic deviation bounds for its accuracy in the case of discrete optimal transport problems. In particular, we show that in many important instances, including images (2D-histograms), the approximation error is independent of the size of the full problem. We present numerical experiments that demonstrate that a very good approximation in typical applications can be obtained in a computation time that is several orders of magnitude smaller than what is required for exact computation of the full problem.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

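A minimal sketch of the subsampling scheme described above: draw equal-size independent subsamples, solve the exact OT problem between the resulting uniform empirical measures (which reduces to a linear assignment problem), and average. Here `linear_sum_assignment` plays the role of the exact black-box solver; the sample sizes and Gaussian test data are my choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_ot_empirical(xs, ys):
    """Exact squared-Wasserstein-2 cost between uniform empirical measures
    of equal size: a linear assignment problem on the pairwise cost matrix."""
    cost = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def subsampled_ot(X, Y, s=100, repeats=10, seed=0):
    """Average exact OT costs over independent subsamples of size s."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(repeats):
        xs = X[rng.choice(len(X), s, replace=True)]
        ys = Y[rng.choice(len(Y), s, replace=True)]
        estimates.append(exact_ot_empirical(xs, ys))
    return float(np.mean(estimates))

X = np.random.default_rng(1).normal(0, 1, (5000, 2))
Y = np.random.default_rng(2).normal(1, 1, (5000, 2))
# True W_2^2 between these Gaussians is 2; the estimate carries sampling bias.
print(subsampled_ot(X, Y, s=100))
```
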
Title: Reliable Clustering of Bernoulli Mixture Models
Abstract: A Bernoulli Mixture Model (BMM) is a finite mixture of random binary vectors with independent Bernoulli dimensions. The problem of clustering BMM data arises in a variety of real-world applications, ranging from population genetics to activity analysis in social networks. In this paper, we have analyzed the information-theoretic PAC-learnability of BMMs, when the number of clusters is unknown. In particular, we stipulate certain conditions on both sample complexity and the dimension of the model in order to guarantee the Probably Approximately Correct (PAC)-clusterability of a given dataset. To the best of our knowledge, these findings are the first non-asymptotic (PAC) bounds on the sample complexity of learning BMMs.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0

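For concreteness, a minimal EM sketch for fitting a Bernoulli mixture model of the kind defined above; EM is the standard estimator for BMMs, though the paper's contribution is PAC-learnability bounds rather than this algorithm, and the toy data is mine.

```python
import numpy as np

def bmm_em(X, k, iters=100, seed=0):
    """EM for a mixture of k product-Bernoulli components.
    X: (n, d) array of 0/1 values. Returns mixing weights and means."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    mu = rng.uniform(0.25, 0.75, size=(k, d))
    for _ in range(iters):
        # E-step: responsibilities from per-component log-likelihoods.
        log_p = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and Bernoulli means (clipped to keep logs finite).
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, mu

# Toy data: two clusters of 10-dimensional binary vectors.
rng = np.random.default_rng(3)
A = (rng.random((200, 10)) < 0.8).astype(float)
B = (rng.random((200, 10)) < 0.2).astype(float)
pi, mu = bmm_em(np.vstack([A, B]), k=2)
print(pi.round(2), mu.round(2))  # weights near 0.5; means near 0.8 and 0.2
```
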
Title: Short-time behavior of the heat kernel and Weyl's law on $RCD^*(K, N)$-spaces
Abstract: In this paper, we prove pointwise convergence of heat kernels for mGH-convergent sequences of $RCD^*(K,N)$-spaces. We obtain as a corollary results on the short-time behavior of the heat kernel in $RCD^*(K,N)$-spaces. We then use these results to initiate the study of Weyl's law in the $RCD$ setting.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0

Title: Football and Beer - a Social Media Analysis on Twitter in Context of the FIFA Football World Cup 2018
Abstract: In many societies, alcohol is a legal, common, and socially accepted recreational substance. Alcohol consumption often comes along with social events as it helps people to increase their sociability and to overcome their inhibitions. On the other hand, we know that increased alcohol consumption can lead to serious health issues, such as cancer, cardiovascular diseases and diseases of the digestive system, to mention a few. This work examines alcohol consumption during the FIFA Football World Cup 2018, particularly the usage of alcohol related information on Twitter. For this we analyse the tweeting behaviour and show that the tournament strongly increases the interest in beer. Furthermore, we show that countries that had to leave the tournament at an early stage might have done their fans a favour, as the interest in beer decreased again.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0

Cross-stream migration of a surfactant-laden deformable droplet in a Poiseuille flow
The motion of a viscous deformable droplet suspended in an unbounded Poiseuille flow in the presence of bulk-insoluble surfactants is studied analytically. Assuming the convective transport of fluid and heat to be negligible, we perform a small-deformation perturbation analysis to obtain the droplet migration velocity. The droplet dynamics strongly depends on the distribution of surfactants along the droplet interface, which is governed by the relative strength of the convective transport of surfactants as compared with their diffusive transport. The present study focuses on the following two limits: (i) when the surfactant transport is dominated by surface diffusion, and (ii) when the surfactant transport is dominated by surface convection. In the first limiting case, it is seen that the axial velocity of the droplet decreases with an increase in the advection of the surfactants along the surface. The variation of the cross-stream migration velocity, on the other hand, is analyzed over three different regimes based on the ratio of the viscosity of the droplet phase to that of the carrier phase. In the first regime, the migration velocity decreases with an increase in the surface advection of the surfactants, although there is no change in the direction of droplet migration. In the second regime, the direction of the cross-stream migration of the droplet changes depending on different parameters. In the third regime, the migration velocity is barely affected by any change in the surfactant distribution. In the opposite limit, where surface advection dominates surface diffusion of the surfactants, the axial velocity of the droplet is found to be independent of the surfactant distribution. However, the cross-stream velocity is found to decrease with an increase in the non-uniformity of the surfactant distribution.
0
1
0
0
0
0
PCA in Data-Dependent Noise (Correlated-PCA): Nearly Optimal Finite Sample Guarantees
We study Principal Component Analysis (PCA) in a setting where a part of the corrupting noise is data-dependent and, as a result, the noise and the true data are correlated. Under a boundedness assumption on the true data and the noise, and a simple assumption on data-noise correlation, we obtain a nearly optimal sample complexity bound for the most commonly used PCA solution, singular value decomposition (SVD). This bound is a significant improvement over the bound obtained by Vaswani and Guo in recent work (NIPS 2016) where this "correlated-PCA" problem was first studied; and it holds under a significantly weaker data-noise correlation assumption than the one used for this earlier result.
1
0
0
1
0
0
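The "most commonly used PCA solution" analyzed above is the SVD of the centered data matrix; a minimal sketch follows, together with a sin-theta-style subspace error of the kind such sample complexity bounds control. The error metric shown is a standard choice, not taken from the paper.

```python
import numpy as np

def pca_svd(Y, r):
    # Y: (n, d) observed data (true data + possibly data-dependent noise).
    # Returns a (d, r) orthonormal basis of the estimated principal subspace.
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Vt[:r].T

def subspace_error(P, P_hat):
    # sin-theta distance between two orthonormal bases (d, r):
    # spectral norm of the component of P outside span(P_hat).
    return np.linalg.norm(P - P_hat @ (P_hat.T @ P), 2)
```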
Using a Predator-Prey Model to Explain Variations of Cloud Spot Price
The spot pricing scheme has been considered resource-efficient for providers and cost-effective for consumers in the Cloud market. Nevertheless, unlike the static and straightforward strategies for trading on-demand and reserved Cloud services, the market-driven mechanism for trading spot services is complicated both to implement and to understand. The largely invisible market activities and their complex interactions can especially make Cloud consumers hesitate to enter the spot market. To reduce the complexity in understanding the Cloud spot market, we set out to reveal the backend information behind spot price variations. Inspired by the methodology of reverse engineering, we developed a Predator-Prey model that can simulate the interactions between demand and resources based on the visible spot price traces. The simulation results show some basic regular patterns of market activities with respect to Amazon's spot instance type m3.large. Although the findings of this study need further validation with practical data, our work essentially suggests a promising approach (i.e., using a Predator-Prey model) to investigate spot market activities.
1
0
0
0
0
0
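A classical Lotka-Volterra system of the kind the record above invokes, simulated with SciPy. Which of demand and resource plays predator versus prey, and all coefficients, are illustrative assumptions rather than the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import odeint

def lotka_volterra(state, t, a, b, c, d):
    # Prey grows at rate a, is consumed at rate b; predator dies at rate c,
    # grows by consumption at rate d. Here prey ~ "demand", predator ~ "resource".
    prey, predator = state
    return [a * prey - b * prey * predator,
            -c * predator + d * prey * predator]

t = np.linspace(0, 50, 2000)
traj = odeint(lotka_volterra, [1.0, 0.5], t, args=(1.0, 0.4, 0.8, 0.3))
# traj[:, 0] and traj[:, 1] oscillate with a phase lag, the qualitative
# pattern one would try to match against observed spot price traces.
```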
Separatrix crossing in rotation of a body with changing geometry of masses
We consider free rotation of a body whose parts move slowly with respect to each other under the action of internal forces. This problem can be considered as a perturbation of the Euler-Poinsot problem. The dynamics has an approximate conservation law - an adiabatic invariant. This allows us to describe the evolution of rotation in the adiabatic approximation. The evolution leads to an overturn in the rotation of the body: the vector of angular velocity crosses the separatrix of the Euler-Poinsot problem. This crossing leads to quasi-random scattering in the body's dynamics. We obtain formulas for the probabilities of capture into different domains in the phase space at separatrix crossings.
0
1
0
0
0
0
SING: Symbol-to-Instrument Neural Generator
Recent progress in deep learning for audio synthesis opens the way to models that directly produce the waveform, shifting away from the traditional paradigm of relying on vocoders or MIDI synthesizers for speech or music generation. Despite their successes, current state-of-the-art neural audio synthesizers such as WaveNet and SampleRNN suffer from prohibitive training and inference times because they are based on autoregressive models that generate audio samples one at a time at a rate of 16 kHz. In this work, we study the more computationally efficient alternative of generating the waveform frame-by-frame with large strides. We present SING, a lightweight neural audio synthesizer for the original task of generating musical notes given a desired instrument, pitch, and velocity. Our model is trained end-to-end to generate notes from nearly 1000 instruments with a single decoder, thanks to a new loss function that minimizes the distances between the log spectrograms of the generated and target waveforms. On the generalization task of synthesizing notes for pairs of pitch and instrument not seen during training, SING produces audio with significantly improved perceptual quality compared to a state-of-the-art autoencoder based on WaveNet, as measured by a Mean Opinion Score (MOS), and is about 32 times faster for training and 2,500 times faster for inference.
1
0
0
0
0
0
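A sketch of a spectral loss in the spirit described above: an L1 distance between log power spectrograms of generated and target waveforms, written in PyTorch. The FFT size, hop length, and the exact distance used are assumptions, not the paper's published configuration.

```python
import torch

def log_spectrogram_loss(y_hat, y, n_fft=1024, hop=256, eps=1e-6):
    # y_hat, y: float tensors of shape (batch, samples).
    window = torch.hann_window(n_fft)
    S_hat = torch.stft(y_hat, n_fft, hop, window=window, return_complex=True).abs()
    S = torch.stft(y, n_fft, hop, window=window, return_complex=True).abs()
    # L1 between log power spectrograms; eps avoids log(0) on silent bins.
    return (torch.log(S_hat ** 2 + eps) - torch.log(S ** 2 + eps)).abs().mean()
```

A loss of this form is differentiable end-to-end, which is what allows a frame-by-frame decoder to be trained without an autoregressive sampling loop.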
Path-by-path regularization by noise for scalar conservation laws
We prove a path-by-path regularization by noise result for scalar conservation laws. In particular, this proves regularizing properties for scalar conservation laws driven by fractional Brownian motion and generalizes the respective results obtained in [Gess, Souganidis; Comm. Pure Appl. Math. (2017)]. In addition, we introduce a new path-by-path scaling property which is shown to be sufficient to imply regularizing effects.
0
0
1
0
0
0
An End-to-End Trainable Neural Network Model with Belief Tracking for Task-Oriented Dialog
We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to a knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioned on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experimental results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.
1
0
0
0
0
0
Neural IR Meets Graph Embedding: A Ranking Model for Product Search
Recently, neural models for information retrieval have become increasingly popular. They provide effective approaches for product search due to their competitive advantages in semantic matching. However, it is challenging to use graph-based features, though they have proved very useful in the IR literature, in these neural approaches. In this paper, we leverage recent advances in graph embedding techniques to enable neural retrieval models to exploit graph-structured data for automatic feature extraction. The proposed approach can not only help to overcome the long-tail problem of click-through data, but also incorporate external heterogeneous information to improve search results. Extensive experiments on a real-world e-commerce dataset demonstrate significant improvements achieved by our proposed approach over multiple strong baselines, both as an individual retrieval model and as a feature used in learning-to-rank frameworks.
1
0
0
0
0
0
Scaling up the software development process, a case study highlighting the complexities of large team software development
Diamond Light Source is the UK's National Synchrotron Facility and as such provides access to world-class experimental services for UK and international researchers. As a user facility, that is, one that focuses on providing a good user experience to its varied visitors, Diamond invests heavily in software infrastructure and staff. Over 100 members of the 600-strong workforce consider software development a significant tool to help them achieve their primary role. These staff work on a diverse set of software packages, providing support for installation and configuration, maintenance and bug fixing, as well as additional research and development of software when required. This paper focuses on one of the software projects undertaken to unify and improve the user experience of several experiments. The "mapping project" is a large two-year, multi-group project targeting collection and processing experiments which involve scanning an X-ray beam over a sample and building up an image of that sample, similar to the way that Google Maps brings together small pieces of information to produce a full map of the world. The project itself is divided into several work packages, with teams ranging in size from one to five or six people and with varying levels of time commitment to the project. This paper explores one of these work packages as a case study, highlighting the experiences of the project team, the methodologies employed, their outcomes, and the lessons learnt from the experience.
1
0
0
0
0
0
Lyapunov exponents for products of matrices
Let ${\bf M}=(M_1,\ldots, M_k)$ be a tuple of real $d\times d$ matrices. Under certain irreducibility assumptions, we give checkable criteria for deciding whether ${\bf M}$ possesses the following property: there exist two constants $\lambda\in {\Bbb R}$ and $C>0$ such that for any $n\in {\Bbb N}$ and any $i_1, \ldots, i_n \in \{1,\ldots, k\}$, either $M_{i_1} \cdots M_{i_n}={\bf 0}$ or $C^{-1} e^{\lambda n} \leq \| M_{i_1} \cdots M_{i_n} \| \leq C e^{\lambda n}$, where $\|\cdot\|$ is a matrix norm. The proof is based on symbolic dynamics and the thermodynamic formalism for matrix products. As applications, we are able to check the absolute continuity of a class of overlapping self-similar measures on ${\Bbb R}$, the absolute continuity of certain self-affine measures in ${\Bbb R}^d$ and the dimensional regularity of a class of sofic affine-invariant sets in the plane.
0
0
1
0
0
0
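Under the property stated in the record above, every nonzero product of length $n$ has norm comparable to $e^{\lambda n}$, so $\lambda$ can be estimated numerically by renormalized random products. A Monte Carlo sketch, assuming the matrices are invertible so products never vanish:

```python
import numpy as np

def top_lyapunov(mats, probs, n=10000, rng=None):
    # Estimates lambda = lim (1/n) E[ log || M_{i_1} ... M_{i_n} || ]
    # by tracking a renormalised vector to avoid overflow/underflow.
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(mats[0].shape[0])
    v /= np.linalg.norm(v)
    log_sum = 0.0
    for _ in range(n):
        M = mats[rng.choice(len(mats), p=probs)]
        v = M @ v
        nv = np.linalg.norm(v)      # assumes invertible matrices: nv > 0
        log_sum += np.log(nv)
        v /= nv
    return log_sum / n
```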
A definitive improvement of a game-theoretic bound and the long tightness game
The main goal of the paper is the full proof of a cardinal inequality for a space with points $G_\delta $, obtained with the help of a long version of the Menger game. This result, which improves a similar one of Scheepers and Tall, was already established by the authors under the Continuum Hypothesis. The paper is completed by a few remarks on a long version of the tightness game.
0
0
1
0
0
0
Group-Server Queues
By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting {\it Group-Server Queues}, and establishes two representative group-server queues through loss networks and impatient customers, respectively. These two group-server queues are given model descriptions and the necessary interpretation. A brief mathematical discussion is also provided, and simulations are carried out to study the expected queue lengths, the expected sojourn times, and the expected virtual service times. In addition, this paper shows that this class of group-server queues is often encountered in many other practical areas, including communication networks, manufacturing systems, transportation networks, financial networks, and healthcare systems. Note that group-server queues can be used to design effective dynamic control mechanisms by regrouping and recombining the many servers in a large-scale service system, for example through bilateral threshold control and customer transfers to buffer or server groups. This divides the large-scale service system into several adaptive and self-organizing subsystems through the scheduling of batch customers and the regrouping of service resources, which makes the middle layer of the service system more effectively managed and strengthened under a dynamic, real-time, and even reward-optimal framework. On this basis, the performance of such a large-scale service system may be improved greatly by introducing and analyzing such group-server queues. Therefore, not only is the analysis of group-server queues a new and interesting research direction, but there also exist many theoretical challenges, basic difficulties, and open problems in the area of queueing networks.
1
0
0
0
0
0
MC$^2$: Multi-wavelength and dynamical analysis of the merging galaxy cluster ZwCl 0008.8+5215: An older and less massive Bullet Cluster
We analyze a rich dataset including Subaru/SuprimeCam, HST/ACS and WFC3, Keck/DEIMOS, Chandra/ACIS-I, and JVLA/C and D array for the merging galaxy cluster ZwCl 0008.8+5215. With a joint Subaru/HST weak gravitational lensing analysis, we identify two dominant subclusters and estimate the masses to be M$_{200}=\text{5.7}^{+\text{2.8}}_{-\text{1.8}}\times\text{10}^{\text{14}}\,\text{M}_{\odot}$ and 1.2$^{+\text{1.4}}_{-\text{0.6}}\times10^{14}$ M$_{\odot}$. We estimate the projected separation between the two subclusters to be 924$^{+\text{243}}_{-\text{206}}$ kpc. We perform a clustering analysis on confirmed cluster member galaxies and estimate the line of sight velocity difference between the two subclusters to be 92$\pm$164 km s$^{-\text{1}}$. We further motivate, discuss, and analyze the merger scenario through an analysis of the 42 ks of Chandra/ACIS-I and JVLA/C and D polarization data. The X-ray surface brightness profile reveals a remnant core reminiscent of the Bullet Cluster. The X-ray luminosity in the 0.5-7.0 keV band is 1.7$\pm$0.1$\times$10$^{\text{44}}$ erg s$^{-\text{1}}$ and the X-ray temperature is 4.90$\pm$0.13 keV. The radio relics are polarized up to 40$\%$. We implement a Monte Carlo dynamical analysis and estimate the merger velocity at pericenter to be 1800$^{+\text{400}}_{-\text{300}}$ km s$^{-\text{1}}$. ZwCl 0008.8+5215 is a low-mass version of the Bullet Cluster and therefore may prove useful in testing alternative models of dark matter. We do not find significant offsets between dark matter and galaxies, as the uncertainties are large with the current lensing data. Furthermore, in the east, the BCG is offset from other luminous cluster galaxies, which poses a puzzle for defining dark matter -- galaxy offsets.
0
1
0
0
0
0
Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints
Adaptive designs for multi-armed clinical trials have become increasingly popular recently in many areas of medical research because of their potential to shorten development times and to increase patient response. However, developing response-adaptive trial designs that offer patient benefit while ensuring that the resulting trial avoids bias and provides a statistically rigorous comparison of the different treatments included is highly challenging. In this paper, the theory of Multi-Armed Bandit Problems is used to define a family of near-optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint with known variance. Through simulation studies based on an ongoing trial as motivation, we report the operating characteristics (type I error, power, bias) and patient benefit of these approaches and compare them to traditional and existing alternative designs. These results are then compared to those recently published in the context of Bernoulli endpoints. Many limitations and advantages are similar in both cases, but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the observed type I error inflation that bandit-based and adaptive rules can induce. The results presented extend recent work by considering a normally distributed endpoint, a very common case in clinical practice yet mostly ignored in the response-adaptive theoretical literature, and illustrate the potential advantages of using these methods in a rare disease context. We also recommend a suitable modified implementation of the bandit-based adaptive designs for the case of common diseases.
0
0
0
1
0
0
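As a toy illustration of response-adaptive allocation with normally distributed endpoints: each incoming patient is assigned to the arm maximizing an index. Computing the true Gittins index is involved, so the sketch below substitutes a simple mean-plus-exploration-bonus index; it illustrates the allocation mechanics only, not the paper's design or its type I error correction.

```python
import numpy as np

def adaptive_trial(true_means, n_patients=200, sigma=1.0, rng=None):
    rng = np.random.default_rng(rng)
    K = len(true_means)
    counts = np.ones(K)
    sums = np.array([rng.normal(m, sigma) for m in true_means])  # one burn-in pull per arm
    for t in range(K, n_patients):
        means = sums / counts
        # UCB-style index as a crude stand-in for the Gittins index
        index = means + sigma * np.sqrt(2 * np.log(t + 1) / counts)
        arm = int(np.argmax(index))
        sums[arm] += rng.normal(true_means[arm], sigma)
        counts[arm] += 1
    return counts, sums / counts   # allocation pattern and observed arm means
```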
Convexification of Queueing Formulas by Mixed-Integer Second-Order Cone Programming: An Application to a Discrete Location Problem with Congestion
Mixed-Integer Second-Order Cone Programs (MISOCPs) form a nice class of mixed-integer convex programs, which can be solved very efficiently due to recent advances in optimization solvers. Our paper bridges the gap between modeling a class of optimization problems and using MISOCP solvers. It is shown how various performance metrics of M/G/1 queues can be modeled by different MISOCPs. To motivate our method practically, it is first applied to a challenging stochastic location problem with congestion, which is broadly used to design socially optimal service networks. Four different MISOCPs are developed and compared on sets of benchmark test problems. The new formulations efficiently solve large-size test problems that cannot be solved by the best existing method. Then, the general applicability of our method is shown for similar optimization problems that use queue-theoretic performance measures to address customer satisfaction and service quality.
1
0
0
0
0
0
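The M/G/1 performance metrics being convexified above are classical; for instance, the Pollaczek-Khinchine formula gives the mean waiting time from the arrival rate and the first two moments of the service time. A direct evaluation for reference (the MISOCP reformulation itself is not reproduced here):

```python
def mg1_metrics(lam, es, es2):
    # Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    #   Wq = lam * E[S^2] / (2 * (1 - rho)),   rho = lam * E[S] < 1.
    rho = lam * es
    assert rho < 1, "queue must be stable (rho < 1)"
    wq = lam * es2 / (2 * (1 - rho))
    return {"utilization": rho, "mean_wait": wq, "mean_sojourn": wq + es}

print(mg1_metrics(lam=0.8, es=1.0, es2=2.0))  # e.g. exponential service, rho = 0.8
```

The nonlinearity in `1 / (1 - rho)` is exactly the kind of queueing expression that second-order cone constraints can capture.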
Discriminative Metric Learning with Deep Forest
A Discriminative Deep Forest (DisDF), a metric learning algorithm, is proposed in this paper. It is based on the Deep Forest, or gcForest, proposed by Zhou and Feng, and can be viewed as a gcForest modification. The case of fully supervised learning is studied, where the class labels of individual training examples are known. The main idea underlying the algorithm is to assign weights to the decision trees in a random forest in order to reduce distances between objects from the same class and to increase them between objects from different classes. The weights are training parameters. A specific objective function, which combines Euclidean and Manhattan distances and simplifies the optimization problem for training the DisDF, is proposed. Numerical experiments illustrate the proposed distance metric algorithm.
1
0
0
1
0
0
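One way to picture a weighted-tree metric of the kind described above: objects are compared by which leaves they reach across the forest, with learned per-tree weights. This agreement-based form is a simplified stand-in for the paper's objective combining Euclidean and Manhattan distances, not its actual formulation.

```python
import numpy as np

def forest_distance(leaf_a, leaf_b, w):
    # leaf_a, leaf_b: leaf indices reached by two objects in each of T trees;
    # w: learned nonnegative tree weights. Trees where the objects disagree
    # contribute their weight to the distance.
    agree = (np.asarray(leaf_a) == np.asarray(leaf_b)).astype(float)
    return float(np.sum(np.asarray(w) * (1.0 - agree)))
```

Training then amounts to choosing `w` so that same-class pairs get small distances and different-class pairs get large ones.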
An accurate and robust genuinely multidimensional Riemann solver for Euler equations based on TV flux splitting
A simple, robust, genuinely multidimensional, convective-pressure-split (CPS), contact-preserving, shock-stable Riemann solver (GM-K-CUSP-X) for the Euler equations of gas dynamics is developed. The convective and pressure components of the Euler system are separated following the Toro-Vazquez type PDE flux splitting [Toro et al, 2012]. Upwind discretization of these components is achieved using the framework of Mandal et al [Mandal et al, 2015]. The robustness of the scheme is studied on a few two-dimensional test problems. The results demonstrate the efficacy of the scheme over the corresponding conventional two-state version of the solver. Results from two classic strong shock test cases associated with the infamous Carbuncle phenomenon indicate that the present solver is completely free of any such numerical instabilities while still possessing contact resolution abilities. Such a finding reinforces the pre-existing notion that multidimensional flow modelling may have positive effects towards curing shock instabilities.
0
1
1
0
0
0
Strong Convergence Rate of Splitting Schemes for Stochastic Nonlinear Schrödinger Equations
We prove the optimal strong convergence rate of a fully discrete scheme, based on a splitting approach, for a stochastic nonlinear Schrödinger (NLS) equation. The main novelty of our method lies in the uniform a priori estimate and exponential integrability of a sequence of splitting processes which are used to approximate the solution of the stochastic NLS equation. We show that the splitting processes converge to the solution with strong order $1/2$. Then we use the Crank--Nicolson scheme to temporally discretize the splitting process and get the temporal splitting scheme which also possesses strong order $1/2$. To obtain a full discretization, we apply this splitting Crank--Nicolson scheme to the spatially discrete equation which is achieved through the spectral Galerkin approximation. Furthermore, we establish the convergence of this fully discrete scheme with optimal strong convergence rate $\mathcal{O}(N^{-2}+\tau^\frac12)$, where $N$ denotes the dimension of the approximate space and $\tau$ denotes the time step size. To the best of our knowledge, this is the first result about strong convergence rates of temporally numerical approximations and fully discrete schemes for stochastic NLS equations, or even for stochastic partial differential equations (SPDEs) with non-monotone coefficients. Numerical experiments verify our theoretical result.
0
0
1
0
0
0
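The splitting idea underlying the scheme above can be shown on the deterministic cubic NLS, where the linear flow is exact in Fourier space and the nonlinear flow is an exact phase rotation. A Strang-splitting sketch (the stochastic forcing, the Crank-Nicolson temporal discretization, and the spectral Galerkin truncation of the paper are omitted; equation signs follow $i\,\partial_t u = -\partial_x^2 u + \lambda |u|^2 u$):

```python
import numpy as np

def strang_step(u, dt, k, lam=1.0):
    u = u * np.exp(-1j * lam * np.abs(u) ** 2 * dt / 2)          # half nonlinear flow (exact)
    u = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(u))   # full linear flow (exact in Fourier)
    u = u * np.exp(-1j * lam * np.abs(u) ** 2 * dt / 2)          # half nonlinear flow
    return u

N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.exp(1j * x) / (1 + 0.1 * np.cos(x))   # smooth periodic initial datum
for _ in range(1000):
    u = strang_step(u, 1e-3, k)
```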
Controllability of Conjunctive Boolean Networks with Application to Gene Regulation
A Boolean network is a finite state discrete time dynamical system. At each step, each variable takes a value from a binary set. The value update rule for each variable is a local function which depends only on a selected subset of variables. Boolean networks have been used in modeling gene regulatory networks. We focus in this paper on a special class of Boolean networks, namely the conjunctive Boolean networks (CBNs), whose value update rule is comprised of only logic AND operations. It is known that any trajectory of a Boolean network will enter a periodic orbit. Periodic orbits of a CBN are completely understood. In this paper, we investigate the orbit-controllability and state-controllability of a CBN: We ask the question of how one can steer a CBN to enter any periodic orbit or to reach any final state, from any initial state. We establish necessary and sufficient conditions for a CBN to be orbit-controllable and state-controllable. Furthermore, explicit control laws are presented along the analysis.
0
0
1
0
0
0
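The update rule of a conjunctive Boolean network is easy to state in code: each variable becomes the AND of its in-neighbors' current values. A synchronous-update sketch on a 3-node cycle, whose trajectory visibly enters a periodic orbit (the example network is our own):

```python
def cbn_step(state, in_neighbors):
    # One synchronous update of a conjunctive Boolean network:
    # each node takes the AND of its in-neighbors' current values.
    return [int(all(state[j] for j in in_neighbors[i])) for i in range(len(state))]

# 3-node cycle 0 -> 1 -> 2 -> 0: each node reads its predecessor.
in_neighbors = {0: [2], 1: [0], 2: [1]}
state = [1, 0, 1]
for _ in range(6):
    state = cbn_step(state, in_neighbors)
    print(state)   # the pattern rotates around the cycle: a periodic orbit
```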
Fast-neutron and gamma-ray imaging with a capillary liquid xenon converter coupled to a gaseous photomultiplier
Gamma-ray and fast-neutron imaging was performed with a novel liquid xenon (LXe) scintillation detector read out by a Gaseous Photomultiplier (GPM). The 100 mm diameter detector prototype comprised a capillary-filled LXe converter/scintillator, coupled to a triple-THGEM imaging-GPM, with its first electrode coated by a CsI UV-photocathode, operated in Ne/5%CH4 at cryogenic temperatures. Radiation localization in 2D was derived from scintillation-induced photoelectron avalanches, measured on the GPM's segmented anode. The localization properties of Co-60 gamma-rays and a mixed fast-neutron/gamma-ray field from an AmBe neutron source were derived from irradiation of a Pb edge absorber. Spatial resolutions of 12+/-2 mm and 10+/-2 mm (FWHM) were reached with Co-60 and AmBe sources, respectively. The experimental results are in good agreement with GEANT4 simulations. The calculated ultimate expected resolutions for our application-relevant 4.4 and 15.1 MeV gamma-rays and 1-15 MeV neutrons are 2-4 mm and ~2 mm (FWHM), respectively. These results indicate the potential applicability of the new detector concept to Fast-Neutron Resonance Radiography (FNRR) and Dual-Discrete-Energy Gamma Radiography (DDEGR) of large objects.
0
1
0
0
0
0
Is Task Board Customization Beneficial? - An Eye Tracking Study
The task board is an essential artifact in many agile development approaches. It provides a good overview of the project status. Teams often customize their task boards according to the team members' needs. They modify the structure of boards, define colored codings for different purposes, and introduce different card sizes. Although the customizations are intended to improve the task board's usability and effectiveness, they may also complicate its comprehension and use. The increased effort impedes the work of both the team and team externals. Hence, task board customization is in conflict with the agile practice of a fast and easy overview for everyone. In an eye tracking study with 30 participants, we compared an original task board design with three customized ones to investigate which design shortened the required time to identify a particular story card. Our findings show that only the customized task board design with modified structures reduces the required time. The original task board design is more beneficial than individual colored codings and changed card sizes. According to our findings, agile teams should rethink their current task board design. They may be better served by focusing on the original task board design and applying only carefully selected adjustments. In the case of customization, a task board's structure should be adjusted, since this is the only beneficial kind of customization and it additionally complies more closely with the concept of a fast and easy project overview.
1
0
0
0
0
0
Characterizing the impact of model error in hydrogeologic time series recovery inverse problems
Hydrogeologic models are commonly over-smoothed relative to reality, owing to the difficulty of obtaining accurate high-resolution information about the subsurface. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased "observation noise" term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. Here, model error is considered for a hydrogeologically important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. A possible diagnostic criterion for blind transfer function characterization is also uncovered.
0
0
1
0
0
0
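The inverse problem described above is deconvolution against an approximately known impulse response, i.e. a lower-triangular Toeplitz system. A minimal sketch of the well-determined case with Tikhonov regularization; the regularization weight is an assumption, and the paper's bounds concern how the mismatch between `g_approx` and the true transfer function propagates into the recovered series.

```python
import numpy as np
from scipy.linalg import toeplitz

def recover_series(g_approx, y, reg=1e-3):
    # y = T(g_true) @ s + noise, with T(g) the lower-triangular Toeplitz
    # (convolution) matrix of impulse response g; only g_approx is known.
    n = len(y)
    g = np.zeros(n)
    m = min(n, len(g_approx))
    g[:m] = np.asarray(g_approx, dtype=float)[:m]
    T = toeplitz(g, np.zeros(n))          # lower-triangular convolution matrix
    # Tikhonov-regularised least squares; model error makes plain inversion fragile
    return np.linalg.solve(T.T @ T + reg * np.eye(n), T.T @ y)
```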
Suszko's Problem: Mixed Consequence and Compositionality
Suszko's problem is the problem of finding the minimal number of truth values needed to semantically characterize a syntactic consequence relation. Suszko proved that every Tarskian consequence relation can be characterized using only two truth values. Malinowski showed that this number can equal three if some of Tarski's structural constraints are relaxed. By so doing, Malinowski introduced a case of so-called mixed consequence, allowing the notion of a designated value to vary between the premises and the conclusions of an argument. In this paper we give a more systematic perspective on Suszko's problem and on mixed consequence. First, we prove general representation theorems relating structural properties of a consequence relation to their semantic interpretation, uncovering the semantic counterpart of substitution-invariance, and establishing that (intersective) mixed consequence is fundamentally the semantic counterpart of the structural property of monotonicity. We use these results to derive maximum-rank results proved recently in a different setting by French and Ripley, as well as by Blasio, Marcos and Wansing, for logics with various structural properties (reflexivity, transitivity, none, or both). We strengthen these results into exact rank results for non-permeable logics (roughly, those which distinguish the role of premises and conclusions). We discuss the underlying notion of rank, and the associated reduction proposed independently by Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve compositionality in general, meaning that the resulting semantics is no longer truth-functional. We propose a modification of that notion of reduction, allowing us to prove that over compact logics with what we call regular connectives, rank results are maintained even if we request the preservation of truth-functionality and additional semantic properties.
1
0
1
0
0
0
Optimization of distributions differences for classification
In this paper we introduce a new classification algorithm called Optimization of Distributions Differences (ODD). The algorithm aims to find a transformation from the feature space to a new space where the instances in the same class are as close as possible to one another, while the centers of gravity of these classes are as far as possible from one another. This aim is formulated as a multiobjective optimization problem that is solved by a hybrid of an evolutionary strategy and the Quasi-Newton method. The choice of the transformation function is flexible and could be any continuous function. We experiment with a linear and a non-linear transformation in this paper. We show that the algorithm can outperform 6 other state-of-the-art classification methods, namely naive Bayes, support vector machines, linear discriminant analysis, multi-layer perceptrons, decision trees, and k-nearest neighbors, on 12 standard classification datasets. Our results show that the method is less sensitive to an imbalanced number of instances compared with these methods. We also show that ODD maintains its performance better than the other classification methods on these datasets and hence offers better generalization ability.
1
0
0
1
0
0
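A scalarized stand-in for the ODD objective described above: intra-class compactness in Euclidean distance minus inter-centroid separation in Manhattan distance, evaluated in the transformed space. The paper treats this as a multiobjective problem solved by an evolution strategy plus a Quasi-Newton method; the single-number combination below is our own simplification.

```python
import numpy as np

def odd_objective(Z, y):
    # Z: (n, d) instances after the learned transformation; y: class labels.
    classes = np.unique(y)
    centroids = np.array([Z[y == c].mean(axis=0) for c in classes])
    # Intra-class spread (squared Euclidean): smaller is better.
    intra = sum(((Z[y == c] - centroids[i]) ** 2).sum()
                for i, c in enumerate(classes))
    # Inter-centroid separation (Manhattan): larger is better.
    inter = sum(np.abs(centroids[i] - centroids[j]).sum()
                for i in range(len(classes))
                for j in range(i + 1, len(classes)))
    return intra - inter   # the transformation is chosen to minimize this
```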
Galois descent of semi-affinoid spaces
We study the Galois descent of semi-affinoid non-archimedean analytic spaces. These are the non-archimedean analytic spaces which admit an affine special formal scheme as model over a complete discrete valuation ring, such as for example open or closed polydiscs or polyannuli. Using Weil restrictions and Galois fixed loci for semi-affinoid spaces and their formal models, we describe a formal model of a $K$-analytic space $X$, provided that $X\otimes_KL$ is semi-affinoid for some finite tamely ramified extension $L$ of $K$. As an application, we study the forms of analytic annuli that are trivialized by a wide class of Galois extensions that includes totally tamely ramified extensions. In order to do so, we first establish a Weierstrass preparation result for analytic functions on annuli, and use it to linearize finite order automorphisms of annuli. Finally, we explain how from these results one can deduce a non-archimedean analytic proof of the existence of resolutions of singularities of surfaces in characteristic zero.
0
0
1
0
0
0
Synthesis, Crystal Structure, and Physical Properties of New Layered Oxychalcogenide La2O2Bi3AgS6
We have synthesized a new layered oxychalcogenide La2O2Bi3AgS6. From synchrotron X-ray diffraction and Rietveld refinement, the crystal structure of La2O2Bi3AgS6 was refined using a model of the P4/nmm space group with a = 4.0644(1) {\AA} and c = 19.412(1) {\AA}, which is similar to the related compound LaOBiPbS3, while the interlayer bonds (M2-S1 bonds) are apparently shorter in La2O2Bi3AgS6. The transmission electron microscopy (TEM) image confirmed the lattice constant derived from Rietveld refinement (c ~ 20 {\AA}). The electrical resistivity and Seebeck coefficient suggested that the electronic states of La2O2Bi3AgS6 are more metallic than those of LaOBiS2 and LaOBiPbS3. The insertion of a rock-salt-type chalcogenide into the van der Waals gap of BiS2-based layered compounds, such as LaOBiS2, will be a useful strategy for designing new layered functional materials in the layered chalcogenide family.
0
1
0
0
0
0
Planar Graph Perfect Matching is in NC
Is perfect matching in NC? That is, is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in theoretical computer science for over three decades, ever since the discovery of RNC matching algorithms. Within this question, the case of planar graphs has remained an enigma: On the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution. In this paper, we give an NC algorithm for finding a perfect matching in a planar graph. Our algorithm uses the above-stated fact about counting matchings in a crucial way. Our main new idea is an NC algorithm for finding a face of the perfect matching polytope at which $\Omega(n)$ new conditions, involving constraints of the polytope, are simultaneously satisfied. Several other ideas are also needed, such as finding a point in the interior of the minimum weight face of this polytope and finding a balanced tight odd set in NC.
1
0
0
0
0
0
Recommendation under Capacity Constraints
In this paper, we investigate the common scenario where every candidate item for recommendation is characterized by a maximum capacity, i.e., the number of seats in a Point-of-Interest (POI) or the size of an item's inventory. Despite the prevalence of the task of recommending items under capacity constraints in a variety of settings, to the best of our knowledge, none of the known recommender methods is designed to respect capacity constraints. To close this gap, we extend three state-of-the-art latent factor recommendation approaches: probabilistic matrix factorization (PMF), geographical matrix factorization (GeoMF), and Bayesian personalized ranking (BPR), to optimize for both recommendation accuracy and expected item usage that respects the capacity constraints. We introduce the useful concepts of user propensity to listen and item capacity. Our experimental results on real-world datasets, both for the domain of item recommendation and POI recommendation, highlight the benefit of our method for the setting of recommendation under capacity constraints.
1
0
0
1
0
0
Simulation to scaled city: zero-shot policy transfer for traffic control via autonomous vehicles
Using deep reinforcement learning, we train control policies for autonomous vehicles leading a platoon of vehicles onto a roundabout. Using Flow, a library for deep reinforcement learning in micro-simulators, we train two policies, one policy with noise injected into the state and action space and one without any injected noise. In simulation, the autonomous vehicle learns an emergent metering behavior for both policies in which it slows to allow for smoother merging. We then directly transfer this policy without any tuning to the University of Delaware Scaled Smart City (UDSSC), a 1:25 scale testbed for connected and automated vehicles. We characterize the performance of both policies on the scaled city. We show that the noise-free policy winds up crashing and only occasionally metering. However, the noise-injected policy consistently performs the metering behavior and remains collision-free, suggesting that the noise helps with the zero-shot policy transfer. Additionally, the transferred, noise-injected policy leads to a 5% reduction of average travel time and a reduction of 22% in maximum travel time in the UDSSC. Videos of the controllers can be found at this https URL.
1
0
0
0
0
0
Relative Singularity Categories
We study the following generalization of singularity categories. Let X be a quasi-projective Gorenstein scheme with isolated singularities and A a non-commutative resolution of singularities of X in the sense of Van den Bergh. We introduce the relative singularity category as the Verdier quotient of the bounded derived category of coherent sheaves on A modulo the category of perfect complexes on X. We view it as a measure for the difference between X and A. The main results of this thesis are the following. (i) We prove an analogue of Orlov's localization result in our setup. If X has isolated singularities, then this reduces the study of the relative singularity categories to the affine case. (ii) We prove Hom-finiteness and idempotent completeness of the relative singularity categories in the complete local situation and determine its Grothendieck group. (iii) We give a complete and explicit description of the relative singularity categories when X has only nodal singularities and the resolution is given by a sheaf of Auslander algebras. (iv) We study relations between relative singularity categories and classical singularity categories. For a simple hypersurface singularity and its Auslander resolution, we show that these categories determine each other. (v) The developed technique leads to the following `purely commutative' application: a description of Iyama & Wemyss triangulated category for rational surface singularities in terms of the singularity category of the rational double point resolution. (vi) We give a description of singularity categories of gentle algebras.
0
0
1
0
0
0
Challenges to Keeping the Computer Industry Centered in the US
It is undeniable that the center of the worldwide computer industry is the US, specifically Silicon Valley. Much of the reason for the success of Silicon Valley has to do with Moore's Law: the observation by Intel co-founder Gordon Moore that the number of transistors on a microchip doubled approximately every two years. According to the International Technology Roadmap for Semiconductors, Moore's Law will end in 2021. How can we rethink computing technology to restart the historic explosive performance growth? Since 2012, the IEEE Rebooting Computing Initiative (IEEE RCI) has been working with industry and the US government to find new computing approaches to answer this question. In parallel, the CCC has held a number of workshops addressing similar questions. This whitepaper summarizes some of the IEEE RCI and CCC findings. The challenge for the US is to lead this new era of computing. Our international competitors are not sitting still: China has invested significantly in a variety of approaches, such as neuromorphic computing, chip fabrication facilities, computer architecture, and high-performance simulation and data analytics computing. We must act now; otherwise, the center of the computer industry will move away from Silicon Valley and likely offshore entirely.
1
0
0
0
0
0
Contiguous Relations, Laplace's Methods and Continued Fractions for 3F2(1)
Using contiguous relations, we construct an infinite number of continued fraction expansions for ratios of generalized hypergeometric series 3F2(1). We establish exact error-term estimates for their approximants and prove their rapid convergence. To do so, we develop a discrete version of Laplace's method for hypergeometric series, in addition to using the ordinary (continuous) Laplace method for Euler's hypergeometric integrals.
0
0
1
0
0
0
Arimoto-Rényi Conditional Entropy and Bayesian $M$-ary Hypothesis Testing
This paper gives upper and lower bounds on the minimum error probability of Bayesian $M$-ary hypothesis testing in terms of the Arimoto-Rényi conditional entropy of an arbitrary order $\alpha$. The improved tightness of these bounds over their specialized versions with the Shannon conditional entropy ($\alpha=1$) is demonstrated. In particular, in the case where $M$ is finite, we show how to generalize Fano's inequality under both the conventional and list-decision settings. As a counterpart to the generalized Fano's inequality, allowing $M$ to be infinite, a lower bound on the Arimoto-Rényi conditional entropy is derived as a function of the minimum error probability. Explicit upper and lower bounds on the minimum error probability are obtained as a function of the Arimoto-Rényi conditional entropy for both positive and negative $\alpha$. Furthermore, we give upper bounds on the minimum error probability as functions of the Rényi divergence. In the setup of discrete memoryless channels, we analyze the exponentially vanishing decay of the Arimoto-Rényi conditional entropy of the transmitted codeword given the channel output when averaged over a random coding ensemble.
1
0
1
1
0
0
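For reference, the central quantity in the record above is Arimoto's conditional Rényi entropy; in the discrete case the standard definition reads as follows, with the Shannon conditional entropy recovered in the limit $\alpha \to 1$ (the paper also treats negative orders):

```latex
\[
  H_\alpha(X \mid Y)
  \;=\; \frac{\alpha}{1-\alpha}\,
  \log \sum_{y} P_Y(y)
  \Bigl( \sum_{x} P_{X\mid Y}(x \mid y)^{\alpha} \Bigr)^{1/\alpha},
  \qquad \alpha \neq 0, 1 .
\]
```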
A note on the violation of Bell's inequality
With Bell's inequalities one has a formal expression to show how essentially all local theories of natural phenomena that are formulated within the framework of realism may be tested using a simple experimental arrangement. For the case of entangled pairs of spin-1/2 particles, we propose an alternative measurement setup which is consistent with the necessary assumptions underlying the derivation of the Bell inequalities. We find that the Bell inequalities are never violated with respect to our suggested measurement process.
0
1
0
0
0
0
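For context, the inequality most commonly tested with entangled spin-1/2 pairs is the CHSH form of Bell's inequality, which any local realistic theory must satisfy for correlation functions $E$ at detector settings $a, a'$ and $b, b'$:

```latex
\[
  \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2 .
\]
```

Quantum mechanics predicts values up to $2\sqrt{2}$ at suitable settings (Tsirelson's bound); the claim of the record above is that its proposed measurement process never produces a violation.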
Real-Time Model Predictive Control for Energy Management in Autonomous Underwater Vehicle
Improving endurance is crucial for extending the spatial and temporal operation range of autonomous underwater vehicles (AUVs). Considering the hardware constraints and the performance requirements, an intelligent energy management system is required to extend the operation range of AUVs. This paper presents a novel model predictive control (MPC) framework for energy-optimal point-to-point motion control of an AUV. In this scheme, the energy management problem of an AUV is reformulated as a surge motion optimization problem in two stages. First, a system-level energy minimization problem is solved by managing the trade-off between the energies required for overcoming the positive buoyancy and the surge drag force in static optimization. Next, an MPC with a special cost function formulation is proposed to deal with transients and system dynamics. A switching logic for handling the transition between the static and dynamic stages is incorporated to reduce the computational effort. Simulation results show that the proposed method is able to achieve near-optimal energy consumption with considerably lower computational complexity.
1
0
0
0
0
0
Surjective H-Colouring over Reflexive Digraphs
The Surjective H-Colouring problem is to test if a given graph allows a vertex-surjective homomorphism to a fixed graph H. The complexity of this problem has been well studied for undirected (partially) reflexive graphs. We introduce endo-triviality, the property of a structure that all of its endomorphisms that do not have range of size 1 are automorphisms, as a means to obtain complexity-theoretic classifications of Surjective H-Colouring in the case of reflexive digraphs. Chen [2014] proved, in the setting of constraint satisfaction problems, that Surjective H-Colouring is NP-complete if H has the property that all of its polymorphisms are essentially unary. We give the first concrete application of his result by showing that every endo-trivial reflexive digraph H has this property. We then use the concept of endo-triviality to prove, as our main result, a dichotomy for Surjective H-Colouring when H is a reflexive tournament: if H is transitive, then Surjective H-Colouring is in NL, otherwise it is NP-complete. By combining this result with some known and new results we obtain a complexity classification for Surjective H-Colouring when H is a partially reflexive digraph of size at most 3.
1
0
0
0
0
0
A modal typing system for self-referential programs and specifications
This paper proposes a modal typing system that enables us to handle self-referential formulae, including ones with negative self-references, which on one hand, would introduce a logical contradiction, namely Russell's paradox, in the conventional setting, while on the other hand, are necessary to capture a certain class of programs such as fixed-point combinators and objects with so-called binary methods in object-oriented programming. The proposed system provides a basis for axiomatic semantics of such a wider range of programs and a new framework for natural construction of recursive programs in the proofs-as-programs paradigm.
1
0
0
0
0
0
Scalable Realistic Recommendation Datasets through Fractal Expansions
Recommender System research currently suffers from a disconnect between the size of academic data sets and the scale of industrial production systems. In order to bridge that gap, we propose to generate more massive user/item interaction data sets by expanding pre-existing public data sets. User/item incidence matrices record interactions between users and items on a given platform as a large sparse matrix whose rows correspond to users and whose columns correspond to items. Our technique expands such matrices to larger numbers of rows (users), columns (items), and nonzero values (interactions) while preserving key higher-order statistical properties. We adapt Kronecker Graph Theory to user/item incidence matrices and show that the corresponding fractal expansions preserve the fat-tailed distributions of user engagements, item popularity, and singular value spectra of user/item interaction matrices. Preserving such properties is key to building large realistic synthetic data sets which in turn can be employed reliably to benchmark Recommender Systems and the systems employed to train them. We provide algorithms to produce such expansions and apply them to the MovieLens 20 million data set comprising 20 million ratings of 27K movies by 138K users. The resulting expanded data set has 10 billion ratings, 2 million items and 864K users in its smaller version and can be scaled up or down. A larger version features 655 billion ratings, 7 million items and 17 million users.
1
0
0
1
0
0
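The core mechanism of the expansion above is the Kronecker product of a small pattern block with the original user/item incidence matrix. A sparse sketch follows; the paper's actual expansions add randomization to preserve the statistical properties listed, which this raw Kronecker product does not attempt.

```python
import numpy as np
from scipy.sparse import csr_matrix, kron

def fractal_expand(R, B):
    # Kronecker-style expansion of a user/item incidence matrix R by a small
    # pattern block B: a (u, i) matrix grows to (u*p, i*q) for B of shape (p, q).
    return kron(csr_matrix(B), csr_matrix(R), format="csr")

R = csr_matrix(np.array([[5, 0, 3], [0, 4, 0]]))   # toy ratings matrix (2 users, 3 items)
B = np.array([[1, 0], [1, 1]])                      # expansion pattern
R_big = fractal_expand(R, B)
print(R_big.shape, R_big.nnz)   # (4, 6), 9 nonzeros: structured sparsity preserved
```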
Implementation of a Distributed Coherent Quantum Observer
This paper considers the problem of implementing a previously proposed distributed direct coupling quantum observer for a closed linear quantum system. By modifying the form of the previously proposed observer, the paper proposes a possible experimental implementation of the observer-plant system using a non-degenerate parametric amplifier and a chain of optical cavities which are coupled together via optical interconnections. It is shown that the distributed observer converges to a consensus in a time-averaged sense, in which an output of each element of the observer estimates the specified output of the quantum plant.
1
0
1
0
0
0
Stacking and stability
Stacking is a general approach for combining multiple models toward greater predictive accuracy. It has found various applications across different domains, owing to its meta-learning nature. Our understanding of how and why stacking works, nevertheless, remains intuitive and lacking in theoretical insight. In this paper, we use the stability of learning algorithms as an elemental analysis framework suitable for addressing the issue. To this end, we analyze the hypothesis stability of stacking, bag-stacking, and dag-stacking and establish a connection between bag-stacking and weighted bagging. We show that the hypothesis stability of stacking is a product of the hypothesis stability of each of the base models and the combiner. Moreover, in bag-stacking and dag-stacking, the hypothesis stability depends on the sampling strategy used to generate the training set replicates. Our findings suggest that 1) subsampling and bootstrap sampling improve the stability of stacking, and 2) stacking improves the stability of both subbagging and bagging.
1
0
0
1
0
0
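A plain stacking sketch for binary classification, to fix the terms used above: out-of-fold base-model predictions become the features on which the combiner (meta-learner) is trained. Bag-stacking would instead fit each base model on bootstrap replicates of the training set, and dag-stacking on subsamples; the base models and meta-learner below are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def stack_fit_predict(X, y, X_test):
    base = [DecisionTreeClassifier(max_depth=3), KNeighborsClassifier()]
    # Out-of-fold probabilities avoid leaking training labels into the combiner.
    Z = np.column_stack([
        cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
        for m in base
    ])
    combiner = LogisticRegression().fit(Z, y)
    for m in base:
        m.fit(X, y)                      # refit base models on the full training set
    Z_test = np.column_stack([m.predict_proba(X_test)[:, 1] for m in base])
    return combiner.predict(Z_test)
```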
Accelerating Discrete Wavelet Transforms on GPUs
The two-dimensional discrete wavelet transform has a huge number of applications in image-processing techniques. Several papers have compared the performance of this transform on graphics processing units (GPUs). However, all of them dealt only with the lifting and convolution computation schemes. In this paper, we show that the corresponding horizontal and vertical lifting parts of the lifting scheme can be merged into non-separable lifting units, which halves the number of steps. We also discuss an optimization strategy leading to a reduction in the number of arithmetic operations. The schemes were assessed using OpenCL and pixel shaders. The proposed non-separable lifting scheme outperforms the existing schemes in many cases, irrespective of its higher complexity.
1
0
0
0
0
0
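The lifting scheme referred to above factors a wavelet filter into predict and update steps; the simplest instance is the Haar transform, sketched here in 1-D (even signal length assumed). The paper's contribution is merging the horizontal and vertical passes of the 2-D version into non-separable units, which this separable baseline does not do.

```python
import numpy as np

def haar_lifting_forward(x):
    # One level of the 1-D Haar transform in lifting form.
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict step: detail coefficients
    s = even + d / 2        # update step: running averages (low-pass)
    return s, d

def haar_lifting_inverse(s, d):
    even = s - d / 2
    odd = d + even
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x
```

The 2-D separable transform applies this along rows and then columns; fusing those two passes is what halves the number of steps on a GPU.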
Accelerated Consensus via Min-Sum Splitting
We apply the Min-Sum message-passing protocol to solve the consensus problem in distributed optimization. We show that while the ordinary Min-Sum algorithm does not converge, a modified version of it known as Splitting yields convergence to the problem solution. We prove that a proper choice of the tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated convergence rates, matching the rates obtained by shift-register methods. The acceleration scheme embodied by Min-Sum Splitting for the consensus problem bears similarities to lifted Markov chain techniques and to multi-step first-order methods in convex optimization.
0
0
1
0
0
0
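The shift-register/multi-step acceleration referred to above can be illustrated on plain linear consensus: a memory term turns the diffusive iteration x <- Wx into a two-step recursion with faster mixing. Min-Sum Splitting itself is a message-passing protocol, so the sketch below only mirrors the acceleration mechanism, not the algorithm.

```python
import numpy as np

def consensus(W, x0, iters=200, momentum=0.0):
    # W: doubly stochastic mixing matrix of the network; momentum = 0 gives
    # ordinary diffusive consensus, momentum > 0 a heavy-ball / shift-register
    # style two-step recursion.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = W @ x + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x   # approaches the all-average vector under standard conditions
```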
Chemception: A Deep Neural Network with Minimal Chemistry Knowledge Matches the Performance of Expert-developed QSAR/QSPR Models
In the last few years, we have seen the transformative impact of deep learning in many applications, particularly in speech recognition and computer vision. Inspired by Google's Inception-ResNet deep convolutional neural network (CNN) for image classification, we have developed "Chemception", a deep CNN for the prediction of chemical properties, using just the images of 2D drawings of molecules. We develop Chemception without providing any additional explicit chemistry knowledge, such as basic concepts like periodicity, or advanced features like molecular descriptors and fingerprints. We then show how Chemception can serve as a general-purpose neural network architecture for predicting toxicity, activity, and solvation properties when trained on a modest database of 600 to 40,000 compounds. When compared to multi-layer perceptron (MLP) deep neural networks trained with ECFP fingerprints, Chemception slightly outperforms them in activity and solvation prediction and slightly underperforms them in toxicity prediction. Having matched the performance of expert-developed QSAR/QSPR deep learning models, our work demonstrates the plausibility of using deep neural networks to assist in computational chemistry research, where the feature engineering process is performed primarily by a deep learning algorithm.
1
0
0
1
0
0
Constraining the Milky Way assembly history with Galactic Archaeology. Ludwig Biermann Award Lecture 2015
The aim of Galactic Archaeology is to recover the evolutionary history of the Milky Way from its present day kinematical and chemical state. Because stars move away from their birth sites, the current dynamical information alone is not sufficient for this task. The chemical composition of stellar atmospheres, on the other hand, is largely preserved over the stellar lifetime and, together with accurate ages, can be used to recover the birthplaces of stars currently found at the same Galactic radius. In addition to the availability of large stellar samples with accurate 6D kinematics and chemical abundance measurements, this requires detailed modeling with both dynamical and chemical evolution taken into account. An important first step is to understand the variety of dynamical processes that can take place in the Milky Way, including the perturbative effects of both internal (bar and spiral structure) and external (infalling satellites) agents. We discuss here (1) how to constrain the Galactic bar, spiral structure, and merging satellites by their effect on the local and global disc phase-space, (2) the effect of multiple patterns on the disc dynamics, and (3) the importance of radial migration and merger perturbations for the formation of the Galactic thick disc. Finally, we discuss the construction of Milky Way chemo-dynamical models and relate to observations.
0
1
0
0
0
0
A unified view of entropy-regularized Markov decision processes
We propose a general framework for entropy-regularized average-reward reinforcement learning in Markov decision processes (MDPs). Our approach is based on extending the linear-programming formulation of policy optimization in MDPs to accommodate convex regularization functions. Our key result is showing that using the conditional entropy of the joint state-action distributions as regularization yields a dual optimization problem closely resembling the Bellman optimality equations. This result enables us to formalize a number of state-of-the-art entropy-regularized reinforcement learning algorithms as approximate variants of Mirror Descent or Dual Averaging, and thus to argue about the convergence properties of these methods. In particular, we show that the exact version of the TRPO algorithm of Schulman et al. (2015) actually converges to the optimal policy, while the entropy-regularized policy gradient methods of Mnih et al. (2016) may fail to converge to a fixed point. Finally, we illustrate empirically the effects of using various regularization techniques on learning performance in a simple reinforcement learning setup.
1
0
0
1
0
0
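A minimal sketch of an entropy-regularised ("soft") Bellman backup of the kind the record above connects to dual optimization problems. The paper works with average reward and conditional-entropy regularisation; the discounted soft value iteration below is the simplest relative, shown only to make the softmax structure concrete.

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(P, R, tau=0.1, gamma=0.95, iters=500):
    # P: (S, A, S) transition kernel; R: (S, A) rewards; tau: entropy temperature.
    # Soft backup: V(s) = tau * log sum_a exp( (R[s,a] + gamma * P[s,a] @ V) / tau ).
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V                  # (S, A)
        V = tau * logsumexp(Q / tau, axis=1)   # numerically stable log-sum-exp
    policy = np.exp((Q - V[:, None]) / tau)    # softmax (Boltzmann) policy
    return V, policy / policy.sum(axis=1, keepdims=True)
```

As tau -> 0 the backup recovers the ordinary Bellman optimality operator and the policy becomes greedy.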
Arithmetic Circuits for Multilevel Qudits Based on Quantum Fourier Transform
We present some basic integer arithmetic quantum circuits, such as adders and multiplier-accumulators of various forms, as well as diagonal operators, which operate on multilevel qudits. The integers to be processed are represented in an alternative basis after they have been Fourier transformed. Several arithmetic circuits operating on Fourier-transformed integers have appeared in the literature for two-level qubits. Here we extend these techniques to multilevel qudits, as they may offer some advantages relative to qubit implementations. The arithmetic circuits presented can be used as basic building blocks for higher-level algorithms such as quantum phase estimation, quantum simulation, and quantum optimization, but they can also be used in the implementation of a quantum fractional Fourier transform, as shown in a companion work presented separately.
1
0
0
0
0
0
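The basis change underlying these circuits is the quantum Fourier transform over $d$-level systems; on a single qudit it acts as follows (the standard definition), and the arithmetic circuits then implement additions as phase rotations on integers represented in this transformed basis:

```latex
\[
  \mathrm{QFT}_d : \; |j\rangle \;\longmapsto\; \frac{1}{\sqrt{d}}
  \sum_{k=0}^{d-1} \omega_d^{\,jk}\, |k\rangle ,
  \qquad \omega_d = e^{2\pi i / d}, \quad j \in \{0, \dots, d-1\}.
\]
```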
Quantifying the distribution of editorial power and manuscript decision bias at the mega-journal PLOS ONE
We analyzed the longitudinal activity of nearly 7,000 editors at the mega-journal PLOS ONE over the 10-year period 2006-2015. Using the article-editor associations, we develop editor-specific measures of power, activity, article acceptance time, citation impact, and editorial remuneration (an analogue to self-citation). We observe remarkably high levels of power inequality among the PLOS ONE editors, with the top-10 editors responsible for 3,366 articles -- corresponding to 2.4% of the 141,986 articles we analyzed. Such high inequality levels suggest the presence of unintended incentives, which may reinforce unethical behavior in the form of decision-level biases at the editorial level. Our results indicate that editors may become apathetic in judging the quality of articles and susceptible to modes of power-driven misconduct. We used the longitudinal dimension of editor activity to develop two panel regression models which test and verify the presence of editor-level bias. In the first model we analyzed the citation impact of articles, and in the second model we modeled the decision time between an article being submitted and ultimately accepted by the editor. We focused on two variables that represent social factors capturing potential conflicts of interest: (i) we accounted for the social ties between editors and authors by developing a measure of repeat authorship among an editor's article set, and (ii) we accounted for the rate of citations directed towards the editor's own publications in the reference list of each article he/she oversaw. Our results indicate that these two factors play a significant role in the editorial decision process. Moreover, these two effects appear to increase with editor age, which is consistent with behavioral studies concerning the evolution of misbehavior and response to temptation in power-driven environments.
1
1
0
0
0
0
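As a hedged illustration of the kind of panel regression described above (not the authors' actual specification or data), the sketch below fits an OLS model with editor fixed effects on synthetic records; the column names `repeat_author`, `editor_self_cites`, and `decision_time` are hypothetical, and the "bias" coefficients are planted in the simulated data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch only: a fixed-effects panel regression in the spirit
# of the models described above, fit on synthetic data. All variable names
# and effect sizes are assumptions planted for demonstration.
rng = np.random.default_rng(42)
n_editors, per_editor = 50, 40
editor = np.repeat(np.arange(n_editors), per_editor)
repeat_author = rng.binomial(1, 0.2, size=editor.size)     # prior tie to editor
editor_self_cites = rng.poisson(0.5, size=editor.size)     # cites to editor's work
editor_effect = rng.normal(0, 10, size=n_editors)[editor]  # editor fixed effect
decision_time = (120 + editor_effect
                 - 15 * repeat_author                      # planted bias
                 - 5 * editor_self_cites
                 + rng.normal(0, 20, size=editor.size))

df = pd.DataFrame(dict(editor=editor, repeat_author=repeat_author,
                       editor_self_cites=editor_self_cites,
                       decision_time=decision_time))
model = smf.ols("decision_time ~ repeat_author + editor_self_cites + C(editor)",
                data=df).fit()
print(model.params[["repeat_author", "editor_self_cites"]])
```

The `C(editor)` term absorbs editor-level baselines, so the two reported coefficients isolate the within-editor association between the conflict-of-interest proxies and decision time.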
Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes
The beyond worst-case synthesis problem was introduced recently by Bruyère et al. [BFRR14]: it aims at building system controllers that provide strict worst-case performance guarantees against an antagonistic environment while ensuring higher expected performance against a stochastic model of the environment. Our work extends the framework of [BFRR14] and follow-up papers, which focused on quantitative objectives, by addressing the case of $\omega$-regular conditions encoded as parity objectives, a natural way to represent functional requirements of systems. We build strategies that satisfy a main parity objective on all plays, while ensuring a secondary one with sufficient probability. This setting raises new challenges in comparison to quantitative objectives, as one cannot easily mix different strategies without endangering the functional properties of the system. We establish that, for all variants of this problem, deciding the existence of a strategy lies in ${\sf NP} \cap {\sf coNP}$, the same complexity class as classical parity games. Hence, our framework provides additional modeling power while staying in the same complexity class. [BFRR14] Véronique Bruyère, Emmanuel Filiot, Mickael Randour, and Jean-François Raskin. Meet your expectations with guarantees: Beyond worst-case synthesis in quantitative games. In Ernst W. Mayr and Natacha Portier, editors, 31st International Symposium on Theoretical Aspects of Computer Science, STACS 2014, March 5-8, 2014, Lyon, France, volume 25 of LIPIcs, pages 199-213. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2014.
1
0
1
0
0
0
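For readers unfamiliar with parity objectives, the minimal sketch below (an illustration under stated assumptions, not the paper's synthesis algorithm) checks the parity condition on an ultimately-periodic play of a finite graph: the play is winning iff the minimum priority seen infinitely often, i.e. on the lasso's cycle, is even.

```python
# Tiny illustrative sketch (not the paper's algorithm): checking a parity
# objective on an ultimately-periodic ("lasso") play. Only the states on
# the repeating cycle occur infinitely often, so the play satisfies the
# parity condition iff the minimum priority on the cycle is even.

def satisfies_parity(priority, cycle):
    """priority: dict state -> non-negative int; cycle: states repeated forever."""
    return min(priority[s] for s in cycle) % 2 == 0

# Hypothetical 4-state example; priorities chosen arbitrarily.
priority = {"s0": 3, "s1": 2, "s2": 1, "s3": 0}
print(satisfies_parity(priority, ["s1", "s0"]))        # min = 2, even -> True
print(satisfies_parity(priority, ["s2", "s1", "s0"]))  # min = 1, odd  -> False
```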
GLSR-VAE: Geodesic Latent Space Regularization for Variational AutoEncoder Architectures
VAEs (Variational AutoEncoders) have proved to be powerful in the context of density modeling and have been used in a variety of contexts for creative purposes. In many settings, the data we model possesses continuous attributes that we would like to take into account at generation time. We propose in this paper GLSR-VAE, a Geodesic Latent Space Regularization for the Variational AutoEncoder architecture and its generalizations, which allows fine control over the embedding of the data into the latent space. When the VAE loss is augmented with this regularization, changes in the learned latent space reflect changes in the attributes of the data. This deeper understanding of the VAE latent space structure makes it possible to modulate the attributes of the generated data in a continuous way. We demonstrate its effectiveness on a monophonic music generation task, where we generate variations of discrete sequences in an intended and playful way.
1
0
0
1
0
0
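One way to picture a latent-space regularization of the flavor described above is sketched below; this is a hedged stand-in, not the paper's exact geodesic loss. It penalizes pairs of encoded points whose ordering along one latent dimension contradicts the ordering of a known scalar attribute, and the resulting term would simply be added to the usual VAE loss. All tensor shapes and names are illustrative assumptions.

```python
import torch

# Hedged sketch, not the authors' exact loss: regularize a VAE latent space
# so that moving along latent dimension 0 monotonically tracks a known
# scalar attribute. A hinge penalty fires whenever the pairwise latent
# ordering contradicts the pairwise attribute ordering.

def latent_attribute_regularizer(z: torch.Tensor, attr: torch.Tensor) -> torch.Tensor:
    """z: (batch, latent_dim) encodings; attr: (batch,) attribute values."""
    dz = z[:, None, 0] - z[None, :, 0]   # pairwise latent differences
    da = attr[:, None] - attr[None, :]   # pairwise attribute differences
    return torch.relu(-dz * torch.sign(da)).mean()

z = torch.randn(8, 16, requires_grad=True)  # fake batch of encodings
attr = torch.rand(8)                        # fake attribute (e.g. note density)
reg = latent_attribute_regularizer(z, attr)
reg.backward()                              # would be added to the ELBO loss
print(float(reg))
```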
Timing Solution and Single-pulse Properties for Eight Rotating Radio Transients
Rotating radio transients (RRATs), loosely defined as objects that are discovered through only their single pulses, are sporadic pulsars that have a wide range of emission properties. For many of them, we must measure their periods and determine timing solutions by timing their individual pulses, while some of the less sporadic RRATs can be timed using the folding techniques applied to other pulsars. Here, based on Parkes and Green Bank Telescope (GBT) observations, we present results on eight RRATs, including their timing-derived rotation parameters, positions, and dispersion measures (DMs), along with a comparison of the spin-down properties of RRATs and normal pulsars. Using data for 24 RRATs, we find that their period derivatives are generally larger than those of normal pulsars, independent of any intrinsic correlation with period, indicating that RRATs' highly sporadic emission may be associated with intrinsically larger magnetic fields. We carry out Lomb-Scargle tests to search for long-timescale periodicities in RRATs' pulse detection times. Periodicities are detected for all targets, with significant candidates of roughly 3.4 hr for PSR J1623$-$0841 and 0.7 hr for PSR J1839$-$0141. We also analyze their single-pulse amplitude distributions, finding that log-normal distributions provide the best fits, as is the case for most pulsars. However, several RRATs exhibit power-law tails, as seen for pulsars emitting giant pulses. This, together with the selection effects against the detection of weak pulses, implies that RRAT pulses generally represent the tail of a normal intensity distribution.
0
1
0
0
0
0
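The two statistical tools named in the abstract, Lomb-Scargle periodicity searches and log-normal amplitude fits, are easy to sketch on synthetic data. The snippet below is such an illustration, not the authors' pipeline; every number in it (the 3.4 hr period, the sampling, the log-normal sigma) is planted for demonstration.

```python
import numpy as np
from scipy import stats, signal

# Illustrative sketch, not the paper's pipeline: (i) a Lomb-Scargle
# periodogram of an irregularly sampled signal with a hidden 3.4 hr period,
# and (ii) a log-normal fit to single-pulse amplitudes. All data synthetic.
rng = np.random.default_rng(1)

# (i) uneven sampling of a periodic signal, as for sporadic pulse detections
P_true = 3.4 * 3600                                     # 3.4 hr in seconds
t = np.sort(rng.uniform(0, 10 * P_true, 500))           # uneven sample times
y = np.cos(2 * np.pi * t / P_true) + rng.normal(0, 0.5, t.size)
freqs = np.linspace(0.1, 5, 2000) * 2 * np.pi / P_true  # angular frequencies
power = signal.lombscargle(t, y - y.mean(), freqs, normalize=True)
best_period = 2 * np.pi / freqs[np.argmax(power)]
print(f"recovered period: {best_period / 3600:.2f} hr (true 3.40 hr)")

# (ii) log-normal fit to synthetic pulse amplitudes
amps = rng.lognormal(mean=0.0, sigma=0.6, size=1000)
shape, loc, scale = stats.lognorm.fit(amps, floc=0)     # shape == sigma
print(f"fitted sigma = {shape:.2f} (true 0.60)")
```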