title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Rank Two Non-Commutative Laurent Phenomenon and Pseudo-Positivity | We study polynomial generalizations of the Kontsevich automorphisms acting on
the skew-field of formal rational expressions in two non-commuting variables.
Our main result is the Laurentness and pseudo-positivity of iterations of these
automorphisms. The resulting expressions are described combinatorially using a
generalization of the combinatorics of compatible pairs in a maximal Dyck path
developed by Lee, Li, and Zelevinsky. By specializing to quasi-commuting
variables we obtain pseudo-positive expressions for rank 2 quantum generalized
cluster variables. In the case that all internal exchange coefficients are
zero, this quantum specialization provides a combinatorial construction of
counting polynomials for Grassmannians of submodules in exceptional
representations of valued quivers with two vertices.
| 0 | 0 | 1 | 0 | 0 | 0 |
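A classical corollary of the (commutative, rank-2) Laurent phenomenon that this abstract generalizes is easy to check directly: every term of the exchange recurrence $x_{n+1}=(x_n^2+1)/x_{n-1}$ is an integer when $x_1=x_2=1$, even though the recurrence divides at each step. A minimal sketch (the noncommutative, pseudo-positive setting of the paper goes far beyond this):

```python
from fractions import Fraction

# Rank-2 cluster recurrence x_{n+1} = (x_n^2 + 1) / x_{n-1}.
# Laurentness implies every term is an integer when x_1 = x_2 = 1.
x_prev, x_cur = Fraction(1), Fraction(1)
terms = [x_prev, x_cur]
for _ in range(10):
    x_prev, x_cur = x_cur, (x_cur ** 2 + 1) / x_prev
    terms.append(x_cur)

print([t.denominator for t in terms])  # all 1: every division is exact
```

The terms 1, 1, 2, 5, 13, 34, ... are the odd-indexed Fibonacci numbers; integrality would fail for a generic recurrence with division.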
Faster Coordinate Descent via Adaptive Importance Sampling | Coordinate descent methods employ random partial updates of decision
variables in order to solve huge-scale convex optimization problems. In this
work, we introduce new adaptive rules for the random selection of their
updates. By adaptive, we mean that our selection rules are based on the dual
residual or the primal-dual gap estimates and can change at each iteration. We
theoretically characterize the performance of our selection rules, demonstrate improvements over the state of the art, and extend our theory and algorithms to general convex objectives. Numerical evidence with hinge-loss support vector machines and Lasso confirms that practice follows the theory.
| 1 | 0 | 1 | 1 | 0 | 0 |
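The adaptive-selection idea in the abstract above can be sketched on a least-squares problem: pick the coordinate to update with probability proportional to the magnitude of its current partial derivative. This residual-based rule is an illustrative assumption, not the authors' exact algorithm:

```python
import numpy as np

def adaptive_cd(A, b, steps=3000, seed=1):
    """Coordinate descent for 0.5 * ||A x - b||^2 where the updated
    coordinate is sampled with probability proportional to the magnitude
    of its current partial derivative (an adaptive importance-sampling
    rule in the spirit of residual-based selection)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    lip = (A ** 2).sum(axis=0)        # coordinate-wise Lipschitz constants
    for _ in range(steps):
        grad = A.T @ (A @ x - b)      # full gradient (fine for a sketch)
        p = np.abs(grad) + 1e-12      # adaptive selection probabilities
        p /= p.sum()
        i = rng.choice(n, p=p)
        x[i] -= grad[i] / lip[i]      # exact minimization along coordinate i
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
err = np.linalg.norm(adaptive_cd(A, b) - x_true)
print(err < 1e-4)
```

Once a coordinate is minimized exactly its gradient vanishes, so the rule automatically shifts probability mass to the coordinates that still matter.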
On the role of synaptic stochasticity in training low-precision neural networks | Stochasticity and limited precision of synaptic weights in neural network
models are key aspects of both biological and hardware modeling of learning
processes. Here we show that a neural network model with stochastic binary
weights naturally gives prominence to exponentially rare dense regions of
solutions with a number of desirable properties such as robustness and good
generalization performance, while typical solutions are isolated and hard to
find. Binary solutions of the standard perceptron problem are obtained from a
simple gradient descent procedure on a set of real values parametrizing a
probability distribution over the binary synapses. Both analytical and
numerical results are presented. An algorithmic extension aimed at training
discrete deep neural networks is also investigated.
| 1 | 1 | 0 | 1 | 0 | 0 |
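The core trick in the abstract above (gradient descent on real values parametrizing a distribution over binary synapses) can be sketched in a mean-field form: let tanh(theta) be the mean binary weight, train theta on a smooth surrogate loss, and read off the binary solution as sign(theta). The surrogate and the teacher setup here are simplifying assumptions for illustration:

```python
import numpy as np

def train_binary_perceptron(X, y, epochs=300, lr=0.1, seed=0):
    """Gradient descent on real parameters theta, where tanh(theta) is the
    mean of a distribution over binary weights w in {-1, +1}; the trained
    binary classifier is sign(theta). Logistic surrogate loss on the mean
    weights (a mean-field simplification of the approach in the abstract)."""
    rng = np.random.default_rng(seed)
    theta = 0.1 * rng.standard_normal(X.shape[1])
    losses = []
    for _ in range(epochs):
        m = np.tanh(theta)                      # mean binary weight
        z = y * (X @ m) / np.sqrt(X.shape[1])
        losses.append(np.log1p(np.exp(-z)).mean())
        # d(loss)/d(theta) via the chain rule through tanh
        g = -(y / (1 + np.exp(z)))[:, None] * X / np.sqrt(X.shape[1])
        theta -= lr * g.mean(axis=0) * (1 - m ** 2)
    w_binary = np.sign(theta)                   # most likely binary synapses
    acc = (np.sign(X @ w_binary) == y).mean()
    return acc, losses

rng = np.random.default_rng(1)
w_teacher = rng.choice([-1.0, 1.0], 15)
X = rng.choice([-1.0, 1.0], (200, 15))
y = np.sign(X @ w_teacher)
acc, losses = train_binary_perceptron(X, y)
print(losses[-1] < losses[0])
```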
Secants, bitangents, and their congruences | A congruence is a surface in the Grassmannian $\mathrm{Gr}(1,\mathbb{P}^3)$
of lines in projective $3$-space. To a space curve $C$, we associate the Chow
hypersurface in $\mathrm{Gr}(1,\mathbb{P}^3)$ consisting of all lines which
intersect $C$. We compute the singular locus of this hypersurface, which
contains the congruence of all secants to $C$. A surface $S$ in $\mathbb{P}^3$
defines the Hurwitz hypersurface in $\mathrm{Gr}(1,\mathbb{P}^3)$ of all lines
which are tangent to $S$. We show that its singular locus has two components
for general enough $S$: the congruence of bitangents and the congruence of
inflectional tangents. We give new proofs for the bidegrees of the secant,
bitangent and inflectional congruences, using geometric techniques such as
duality, polar loci and projections. We also study the singularities of these
congruences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Convergence of Stochastic Approximation Monte Carlo and modified Wang-Landau algorithms: Tests for the Ising model | We investigate the behavior of the deviation of the estimator for the density
of states (DOS) with respect to the exact solution in the course of Wang-Landau
and Stochastic Approximation Monte Carlo (SAMC) simulations of the
two-dimensional Ising model. We find that the deviation saturates in the
Wang-Landau case. This can be cured by adjusting the refinement scheme. To this
end, the 1/t-modification of the Wang-Landau algorithm has been suggested. A
similar choice of refinement scheme is employed in the SAMC algorithm. The
convergence behavior of all three algorithms is examined. It turns out that the
convergence of the SAMC algorithm is very sensitive to the onset of the
refinement. Finally, the internal energy and specific heat of the Ising model
are calculated from the SAMC DOS and compared to exact values.
| 0 | 1 | 0 | 0 | 0 | 0 |
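The 1/t refinement scheme discussed above can be sketched on a toy system with a known density of states: E = number of up spins among 10 independent spins, so the exact DOS is binomial. The SAMC-style schedule and the plateau length t0 are illustrative choices, not the paper's exact protocol:

```python
import numpy as np
from math import comb, log

def wl_one_over_t(n_spins=10, steps=300_000, t0=1_000, seed=0):
    """Estimate ln g(E) with a Wang-Landau random walk whose refinement
    factor decays as t0/t after an initial plateau (SAMC-style schedule)."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, 2, n_spins)    # 0/1 spins
    E = int(spins.sum())
    lng = np.zeros(n_spins + 1)            # running estimate of ln g(E)
    for t in range(1, steps + 1):
        i = rng.integers(n_spins)
        E_new = E + (1 - 2 * int(spins[i]))    # flipping spin i changes E by +-1
        # Wang-Landau acceptance: favor rarely visited energies
        if rng.random() < np.exp(lng[E] - lng[E_new]):
            spins[i] ^= 1
            E = E_new
        lng[E] += t0 / max(t0, t)          # refinement ~ 1/t at late times
    return lng - lng[0]                    # normalize: g(0) = 1 exactly

lng = wl_one_over_t()
exact = np.array([log(comb(10, k)) for k in range(11)])
err = np.abs(lng - exact).max()
print(err < 1.0)
```

With a fixed refinement factor the deviation would saturate, as the abstract notes for plain Wang-Landau; the decaying schedule lets the estimate keep converging.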
On the approximation by convolution type double singular integral operators | In this paper, we prove the pointwise convergence and the rate of pointwise
convergence for a family of singular integral operators in two-dimensional
setting in the following form: \begin{equation*} L_{\lambda }\left(
f;x,y\right) =\underset{D}{\iint }f\left( t,s\right) K_{\lambda }\left(
t-x,s-y\right) dsdt,\text{ }\left( x,y\right) \in D, \end{equation*} where
$D=\left \langle a,b\right \rangle \times \left \langle c,d\right \rangle $ is
an arbitrary closed, semi-closed or open rectangle in $\mathbb{R}^{2}$ and $%
\lambda \in \Lambda ,$ $\Lambda $ is a set of non-negative indices with
accumulation point $\lambda_{0}$. Also, we provide an example to support these
theoretical results. In contrast to previous works, the kernel function
$K_{\lambda }\left( t,s\right) $ does not have to be even, positive or 2$\pi
-$periodic.
| 0 | 0 | 1 | 0 | 0 | 0 |
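The pointwise convergence claim above can be checked numerically for one admissible kernel choice. Here $K_\lambda$ is taken to be a Gaussian with $\lambda_0 = 0$ (an assumption for illustration; the paper's kernels need not be of this form), and the error at a fixed point shrinks as $\lambda \to 0$:

```python
import numpy as np

def L_lam(f, lam, x, y, half=0.5, n=401):
    """Numerically evaluate L_lambda(f; x, y) for a Gaussian kernel,
    truncating the double integral to a window of half-width `half`
    around (x, y) and using a Riemann sum on an n-by-n grid."""
    t = np.linspace(x - half, x + half, n)
    s = np.linspace(y - half, y + half, n)
    T, S = np.meshgrid(t, s, indexing="ij")
    K = np.exp(-((T - x) ** 2 + (S - y) ** 2) / (2 * lam ** 2)) / (2 * np.pi * lam ** 2)
    dt = t[1] - t[0]
    return (f(T, S) * K).sum() * dt * dt

f = lambda t, s: t ** 2 + s
target = f(0.5, 0.5)       # the pointwise limit as lambda -> lambda_0 = 0
errs = [abs(L_lam(f, lam, 0.5, 0.5) - target) for lam in (0.2, 0.1, 0.05)]
print(errs[0] > errs[1] > errs[2])
```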
Characterization of polynomials whose large powers have all positive coefficients | We give a criterion which characterizes a homogeneous real multi-variate
polynomial to have the property that all sufficiently large powers of the
polynomial (as well as their products with any given positive homogeneous
polynomial) have positive coefficients. Our result generalizes a result of De
Angelis, which corresponds to the case of homogeneous bi-variate polynomials,
as well as a classical result of Pólya, which corresponds to the case of a
specific linear polynomial. As an application, we also give a characterization
of certain polynomial beta functions, which are the spectral radius functions
of the defining matrix functions of Markov chains.
| 0 | 0 | 1 | 0 | 0 | 0 |
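Pólya's classical result mentioned above is easy to verify in a small case: $p(x,y)=x^2-xy+y^2$ is strictly positive on the simplex, and multiplying by $(x+y)^N$ produces all-positive coefficients once $N$ is large enough (here $N=3$, checkable by direct expansion):

```python
def mul(p, q):
    """Multiply two homogeneous bivariate polynomials given as coefficient
    lists [c_0, ..., c_d] of x^d, x^(d-1) y, ..., y^d."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polya_exponent(p, n_max=50):
    """Smallest N such that all coefficients of (x + y)^N * p are positive."""
    q = list(p)
    for n in range(n_max + 1):
        if all(c > 0 for c in q):
            return n
        q = mul(q, [1, 1])  # multiply by (x + y)
    return None

p = [1, -1, 1]  # x^2 - x y + y^2, strictly positive on the simplex
print(polya_exponent(p))  # 3: (x+y)^3 * p = x^5 + 2x^4y + x^3y^2 + x^2y^3 + 2xy^4 + y^5
```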
The Moon Illusion explained by the Projective Consciousness Model | The Moon often appears larger near the perceptual horizon and smaller high in
the sky though the visual angle subtended is invariant. We show how this
illusion results from the optimization of a projective geometrical frame for
conscious perception through free energy minimization, as articulated in the
Projective Consciousness Model. The model accounts for all documented
modulations of the illusion without anomalies (e.g., the size-distance
paradox), surpasses other theories in explanatory power, makes sense of inter-
and intra-subjective variability vis-a-vis the illusion, and yields new
quantitative and qualitative predictions.
| 0 | 0 | 0 | 0 | 1 | 0 |
Bloch line dynamics within moving domain walls in 3D ferromagnets | We study field-driven magnetic domain wall dynamics in garnet strips by
large-scale three-dimensional micromagnetic simulations. The domain wall
propagation velocity as a function of the applied field exhibits a low-field
linear part terminated by a sudden velocity drop at a threshold field
magnitude, related to the onset of excitations of internal degrees of freedom
of the domain wall magnetization. By considering a wide range of strip
thicknesses from 30 nm to 1.89 $\mu$m, we find a non-monotonic thickness
dependence of the threshold field for the onset of this instability, proceeding
via nucleation and propagation of Bloch lines within the domain wall. We
identify a critical strip thickness above which the velocity drop is due to
nucleation of horizontal Bloch lines, while for thinner strips and depending on
the boundary conditions employed, either generation of vertical Bloch lines, or
close-to-uniform precession of the domain wall internal magnetization takes
place. For strips of intermediate thicknesses, the vertical Bloch lines assume
a deformed structure due to demagnetizing fields at the strip surfaces,
breaking the symmetry between the top and bottom faces of the strip, and
resulting in circulating Bloch line dynamics along the perimeter of the domain
wall.
| 0 | 1 | 0 | 0 | 0 | 0 |
Approaching the UCT problem via crossed products of the Razak-Jacelon algebra | We show that the UCT problem for separable, nuclear $\mathrm C^*$-algebras
relies only on whether the UCT holds for crossed products of certain finite
cyclic group actions on the Razak-Jacelon algebra. This observation is
analogous to and in fact recovers a characterization of the UCT problem in
terms of finite group actions on the Cuntz algebra $\mathcal O_2$ established
in previous work by the authors. Although based on a similar approach, the new
conceptual ingredients in the finite context are the recent advances in the
classification of stably projectionless $\mathrm C^*$-algebras, as well as a
known characterization of the UCT problem in terms of certain tracially AF
$\mathrm C^*$-algebras due to Dadarlat.
| 0 | 0 | 1 | 0 | 0 | 0 |
Some Remarks about the Complexity of Epidemics Management | Recent outbreaks of Ebola, H1N1 and other infectious diseases have shown that
the assumptions underlying the established theory of epidemics management are
too idealistic. For an improvement of procedures and organizations involved in
fighting epidemics, extended models of epidemics management are required. The
necessary extensions consist in a representation of the management loop and the
potential frictions influencing the loop. The effects of the non-deterministic
frictions can be taken into account by including the measures of robustness and
risk in the assessment of management options. Thus, besides the increased
structural complexity resulting from the model extensions, the computational
complexity of the task of epidemics management - interpreted as an optimization
problem - is increased as well. This is a serious obstacle for analyzing the
model and may require an additional pre-processing enabling a simplification of
the analysis process. The paper closes with an outlook discussing some
forthcoming problems.
| 0 | 1 | 0 | 0 | 0 | 0 |
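The assessment idea sketched above (scoring management options by expected cost plus measures of risk and robustness under non-deterministic frictions) can be illustrated with a toy Monte-Carlo evaluation; the options, friction model, and weights below are invented for illustration:

```python
import numpy as np

def assess(option_cost, n_runs=5000, risk_weight=1.0, seed=0):
    """Score a management option under random frictions: expected cost,
    a risk-adjusted cost (mean + weighted std), and a robustness proxy
    (worst observed case)."""
    rng = np.random.default_rng(seed)
    costs = np.array([option_cost(rng) for _ in range(n_runs)])
    return {
        "expected": costs.mean(),
        "risk_adjusted": costs.mean() + risk_weight * costs.std(),
        "worst_case": costs.max(),
    }

# Two hypothetical options: A is cheap on average but fragile to frictions,
# B costs more but is robust.
option_a = lambda rng: 10 + (40 if rng.random() < 0.1 else 0)  # rare blow-up
option_b = lambda rng: 16 + rng.normal(0, 1)

score_a, score_b = assess(option_a), assess(option_b)
print(score_a["expected"] < score_b["expected"],
      score_a["risk_adjusted"] > score_b["risk_adjusted"])
```

The ranking of options flips once risk enters the objective, which is exactly why the optimization problem becomes harder than in the deterministic idealization.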
Entanglement Entropy in Excited States of the Quantum Lifshitz Model | We investigate the entanglement properties of an infinite class of excited
states in the quantum Lifshitz model (QLM). The presence of a conformal quantum
critical point in the QLM makes it unusually tractable for a model above one
spatial dimension, enabling the ground state entanglement entropy for an
arbitrary domain to be expressed in terms of geometrical and topological
quantities. Here we extend this result to excited states and find that the
entanglement can be naturally written in terms of quantities which we dub
"entanglement propagator amplitudes" (EPAs). EPAs are geometrical probabilities
that we explicitly calculate and interpret. A comparison of lattice and
continuum results demonstrates that EPAs are universal. This work shows that
the QLM is an example of a 2+1d field theory where the universal behavior of
excited-state entanglement may be computed analytically.
| 0 | 1 | 0 | 0 | 0 | 0 |
Poisson-Fermi Formulation of Nonlocal Electrostatics in Electrolyte Solutions | We present a nonlocal electrostatic formulation of nonuniform ions and water
molecules with interstitial voids that uses a Fermi-like distribution to
account for steric and correlation effects in electrolyte solutions. The
formulation is based on the volume exclusion of hard spheres leading to a
steric potential and Maxwell's displacement field with Yukawa-type interactions
resulting in a nonlocal electric potential. The classical Poisson-Boltzmann
model fails to describe steric and correlation effects important in a variety
of chemical and biological systems, especially in high field or large
concentration conditions found in and near binding sites, ion channels, and
electrodes. Steric effects and correlations are apparent when we compare
nonlocal Poisson-Fermi results to Poisson-Boltzmann calculations in the electric double layer and to experimental measurements on the selectivity of potassium
channels for K+ over Na+. The present theory links atomic scale descriptions of
the crystallized KcsA channel with macroscopic bulk conditions. Atomic
structures and macroscopic conditions determine complex functions of great
importance in biology, nanotechnology, and electrochemistry.
| 0 | 1 | 0 | 0 | 0 | 0 |
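The saturation that distinguishes a Fermi-like distribution from the Boltzmann one can be illustrated with a Bikerman-type closure. This is a deliberate simplification: the paper's formulation also includes voids, correlations, and the Yukawa-type nonlocality, none of which appear here:

```python
import numpy as np

phi = np.linspace(-8, 0, 200)   # dimensionless electric potential
c0, nu = 0.1, 0.25              # bulk concentration; excluded volume (both illustrative)

boltz = c0 * np.exp(-phi)                             # classical Poisson-Boltzmann
fermi = boltz / (1 + nu * c0 * (np.exp(-phi) - 1))    # Bikerman-style saturation

# Boltzmann concentration grows without bound at strong attraction,
# while the Fermi-like form saturates at the close-packing limit 1/nu.
print(boltz.max() > 1 / nu, fermi.max() <= 1 / nu)
```

This unbounded Boltzmann growth at high field or large concentration is precisely the failure mode the abstract attributes to the classical model near binding sites, channels, and electrodes.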
Learning to Transfer | Transfer learning borrows knowledge from a source domain to facilitate
learning in a target domain. Two primary issues to be addressed in transfer
learning are what and how to transfer. For a pair of domains, adopting
different transfer learning algorithms results in different knowledge
transferred between them. To discover the optimal transfer learning algorithm
that maximally improves the learning performance in the target domain,
researchers have to exhaustively explore all existing transfer learning
algorithms, which is computationally intractable. As a trade-off, a sub-optimal algorithm is selected in an ad-hoc way, which requires considerable expertise.
Meanwhile, it is widely accepted in educational psychology that human beings
improve transfer learning skills of deciding what to transfer through
meta-cognitive reflection on inductive transfer learning practices. Motivated
by this, we propose a novel transfer learning framework known as Learning to
Transfer (L2T) to automatically determine what and how best to transfer by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we first learn a reflection function encoding
transfer learning skills from experiences; and 2) we infer what and how to
transfer for a newly arrived pair of domains by optimizing the reflection
function. Extensive experiments demonstrate the L2T's superiority over several
state-of-the-art transfer learning algorithms and its effectiveness on
discovering more transferable knowledge.
| 1 | 0 | 0 | 1 | 0 | 0 |
Strong Metric Subregularity of Mappings in Variational Analysis and Optimization | Although the property of strong metric subregularity of set-valued mappings
has been present in the literature under various names and with various
definitions for more than two decades, it has attracted much less attention
than its older "siblings", the metric regularity and the strong metric
regularity. The purpose of this paper is to show that the strong metric
subregularity shares the main features of these two most popular regularity
properties and is not less instrumental in applications. We show that the
strong metric subregularity of a mapping F acting between metric spaces is
stable under perturbations of the form f + F, where f is a function with a
small calmness constant. This result is parallel to the Lyusternik-Graves
theorem for metric regularity and to the Robinson theorem for strong
regularity, where the perturbations are represented by a function f with a
small Lipschitz constant. Then we study perturbation stability of the same kind
for mappings acting between Banach spaces, where f is not necessarily
differentiable but admits a set-valued derivative-like approximation. Strong
metric q-subregularity is also considered, where q is a positive real constant appearing as an exponent in the definition. Rockafellar's criterion for strong
metric subregularity involving injectivity of the graphical derivative is
extended to mappings acting in infinite-dimensional spaces. A sufficient
condition for strong metric subregularity is established in terms of
surjectivity of the Frechet coderivative. Various versions of Newton's method
for solving generalized equations are considered including inexact and
semismooth methods, for which superlinear convergence is shown under strong
metric subregularity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Systematic Identification of LAEs for Visible Exploration and Reionization Research Using Subaru HSC (SILVERRUSH). I. Program Strategy and Clustering Properties of ~2,000 Lya Emitters at z=6-7 over the 0.3-0.5 Gpc$^2$ Survey Area | We present the SILVERRUSH program strategy and clustering properties
investigated with $\sim 2,000$ Ly$\alpha$ emitters at $z=5.7$ and $6.6$ found
in the early data of the Hyper Suprime-Cam (HSC) Subaru Strategic Program
survey exploiting the carefully designed narrowband filters. We derive angular
correlation functions with the unprecedentedly large samples of LAEs at $z=6-7$
over the large total area of $14-21$ deg$^2$ corresponding to $0.3-0.5$
comoving Gpc$^2$. We obtain the average large-scale bias values of $b_{\rm
avg}=4.1\pm 0.2$ ($4.5\pm 0.6$) at $z=5.7$ ($z=6.6$) for $\gtrsim L^*$ LAEs,
indicating the weak evolution of LAE clustering from $z=5.7$ to $6.6$. We
compare the LAE clustering results with two independent theoretical models that
suggest an increase of an LAE clustering signal by the patchy ionized bubbles
at the epoch of reionization (EoR), and estimate the neutral hydrogen fraction
to be $x_{\rm HI}=0.15^{+0.15}_{-0.15}$ at $z=6.6$. Based on the halo
occupation distribution models, we find that the $\gtrsim L^*$ LAEs are hosted
by the dark-matter halos with the average mass of $\log (\left < M_{\rm h}
\right >/M_\odot) =11.1^{+0.2}_{-0.4}$ ($10.8^{+0.3}_{-0.5}$) at $z=5.7$
($6.6$) with a Ly$\alpha$ duty cycle of 1 % or less, where the results of
$z=6.6$ LAEs may be slightly biased, due to the increase of the clustering
signal at the EoR. Our clustering analysis reveals the low-mass nature of
$\gtrsim L^*$ LAEs at $z=6-7$, and that these LAEs probably evolve into massive
super-$L^*$ galaxies in the present-day universe.
| 0 | 1 | 0 | 0 | 0 | 0 |
Experimental study of mini-magnetosphere | Magnetosphere at ion kinetic scales, or mini-magnetosphere, possesses unusual
features as predicted by numerical simulations. However, there are practically
no data on the subject from space observations and the data which are available
are far too incomplete. In the present work we describe results of laboratory
experiment on interaction of plasma flow with magnetic dipole with parameters
such that ion inertia length is smaller than a size of observed magnetosphere.
A detailed structure of non-coplanar or out-of-plane component of magnetic
field has been obtained in meridian plane. Independence of this component on
dipole moment reversal, as was reported in previous work, has been verified. In
the tail distinct lobes and central current sheet have been observed. It was
found that lobe regions adjacent to boundary layer are dominated by
non-coplanar component of magnetic field. Tail-ward oriented electric current
in plasma associated with that component appears to be equal to ion current in
the frontal part of magnetosphere and in the tail current sheet implying that
electrons are stationary in those regions while ions flow by. Obtained data
strongly support the proposed model of mini-magnetosphere based on two-fluid
effects as described by the Hall term.
| 0 | 1 | 0 | 0 | 0 | 0 |
Matrix KP: tropical limit and Yang-Baxter maps | We study soliton solutions of matrix Kadomtsev-Petviashvili (KP) equations in
a tropical limit, in which their support at fixed time is a planar graph and
polarizations are attached to its constituting lines. There is a subclass of
"pure line soliton solutions" for which we find that, in this limit, the
distribution of polarizations is fully determined by a Yang-Baxter map. For a
vector KP equation, this map is given by an R-matrix, whereas it is a
non-linear map in the case of a more general matrix KP equation. We also consider the corresponding Korteweg-de Vries (KdV) reduction. Furthermore, exploiting the
fine structure of soliton interactions in the tropical limit, we obtain a new
solution of the tetrahedron (or Zamolodchikov) equation. Moreover, a solution
of the functional tetrahedron equation arises from the parameter-dependence of
the vector KP R-matrix.
| 0 | 1 | 0 | 0 | 0 | 0 |
Poster Abstract: LPWA-MAC - a Low Power Wide Area network MAC protocol for cyber-physical systems | Low-Power Wide-Area Networks (LPWANs) are being successfully used for the
monitoring of large-scale systems that are delay-tolerant and which have
low-bandwidth requirements. The next step would be instrumenting these for the
control of Cyber-Physical Systems (CPSs) distributed over large areas which
require more bandwidth, bounded delays and higher reliability or at least more
rigorous guarantees therein. This paper presents LPWA-MAC, a novel Low Power
Wide-Area network MAC protocol, which ensures bounded end-to-end delays and high channel utility, and supports many of the different traffic patterns and data rates typical of CPSs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Asymptotic Enumeration of Compacted Binary Trees | A compacted tree is a graph created from a binary tree such that repeatedly
occurring subtrees in the original tree are represented by pointers to existing
ones, and hence every subtree is unique. Such representations form a special
class of directed acyclic graphs. We are interested in the asymptotic number of
compacted trees of given size, where the size of a compacted tree is given by
the number of its internal nodes. Due to its superexponential growth this
problem poses many difficulties. Therefore we restrict our investigations to
compacted trees of bounded right height, which is the maximal number of edges
going to the right on any path from the root to a leaf.
We solve the asymptotic counting problem for this class as well as a closely
related, further simplified class.
For this purpose, we develop a calculus on exponential generating functions
for compacted trees of bounded right height and for relaxed trees of bounded
right height, which differ from compacted trees by dropping the above described
uniqueness condition. This enables us to derive a recursively defined sequence
of differential equations for the exponential generating functions. The
coefficients can then be determined by performing a singularity analysis of the
solutions of these differential equations.
Our main results are the computation of the asymptotic numbers of relaxed as
well as compacted trees of bounded right height and given size, when the size
tends to infinity.
| 1 | 0 | 0 | 0 | 0 | 0 |
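The compaction described in the first sentence of the abstract above (repeated subtrees stored once and shared via pointers) is essentially hash-consing, which can be sketched in a few lines. The tuple encoding of trees is an illustrative choice:

```python
def compact(tree, pool=None):
    """Hash-cons a binary tree: 'tree' is None (a leaf) or a pair
    (left, right). Identical subtrees are stored once in 'pool' and
    shared, so the result is a DAG in which every subtree is unique."""
    if pool is None:
        pool = {}
    if tree is None:
        return None
    node = (compact(tree[0], pool), compact(tree[1], pool))
    return pool.setdefault(node, node)   # reuse an existing identical subtree

def dag_size(node, seen=None):
    """Number of distinct internal nodes in the compacted structure."""
    if seen is None:
        seen = set()
    if node is None or id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + dag_size(node[0], seen) + dag_size(node[1], seen)

# A complete binary tree with 7 internal nodes compacts to a chain of 3,
# one shared node per level.
leaf_pair = (None, None)
level2 = (leaf_pair, leaf_pair)
full = (level2, level2)
c = compact(full)
print(dag_size(c))  # 3
```

The asymptotic question studied in the paper is how many such compacted structures of a given size exist, which grows superexponentially.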
The STAR MAPS-based PiXeL detector | The PiXeL detector (PXL) for the Heavy Flavor Tracker (HFT) of the STAR
experiment at RHIC is the first application of the state-of-the-art thin
Monolithic Active Pixel Sensors (MAPS) technology in a collider environment.
Custom built pixel sensors, their readout electronics and the detector
mechanical structure are described in detail. Selected detector design aspects
and production steps are presented. The detector operations during the three
years of data taking (2014-2016) and the overall performance exceeding the
design specifications are discussed in the conclusive sections of this paper.
| 0 | 1 | 0 | 0 | 0 | 0 |
Complete intersection monomial curves and the Cohen-Macaulayness of their tangent cones | Let $C({\bf n})$ be a complete intersection monomial curve in the
4-dimensional affine space. In this paper we study the complete intersection
property of the monomial curve $C({\bf n}+w{\bf v})$, where $w>0$ is an integer
and ${\bf v} \in \mathbb{N}^{4}$. Also we investigate the Cohen-Macaulayness of
the tangent cone of $C({\bf n}+w{\bf v})$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Stable and unstable vortex knots in a trapped Bose-Einstein condensate | The dynamics of a quantum vortex torus knot ${\cal T}_{P,Q}$ and similar
knots in an atomic Bose-Einstein condensate at zero temperature in the
Thomas-Fermi regime has been considered in the hydrodynamic approximation. The
condensate has a spatially nonuniform equilibrium density profile $\rho(z,r)$
due to an external axisymmetric potential. It is assumed that $z_*=0$, $r_*=1$
is a maximum point for function $r\rho(z,r)$, with $\delta
(r\rho)\approx-(\alpha-\epsilon) z^2/2 -(\alpha+\epsilon) (\delta r)^2/2$ at
small $z$ and $\delta r$. The configuration of a knot in cylindrical coordinates
is specified by a complex $2\pi P$-periodic function
$A(\varphi,t)=Z(\varphi,t)+i [R(\varphi,t)-1]$. In the case $|A|\ll 1$ the
system is described by relatively simple approximate equations for re-scaled
functions $W_n(\varphi)\propto A(2\pi n+\varphi)$, where $n=0,\dots,P-1$, and
$iW_{n,t}=-(W_{n,\varphi\varphi}+\alpha W_n -\epsilon W_n^*)/2-\sum_{j\neq
n}1/(W_n^*-W_j^*)$. At $\epsilon=0$, numerical examples of stable solutions as
$W_n=\theta_n(\varphi-\gamma t)\exp(-i\omega t)$ with non-trivial topology have
been found for $P=3$. Besides that, dynamics of various non-stationary knots
with $P=3$ was simulated, and in some cases a tendency towards a finite-time
singularity has been detected. For $P=2$ at small $\epsilon\neq 0$, configurations rotating around the $z$ axis, of the form $(W_0-W_1)\approx
B_0\exp(i\zeta)+\epsilon C(B_0,\alpha)\exp(-i\zeta) + \epsilon
D(B_0,\alpha)\exp(3i\zeta)$ have been investigated, where $B_0>0$ is an
arbitrary constant, $\zeta=k_0\varphi -\Omega_0 t+\zeta_0$, $k_0=Q/2$,
$\Omega_0=(k_0^2-\alpha)/2-2/B_0^2$. In the parameter space $(\alpha, B_0)$,
wide stability regions for such solutions have been found. In unstable bands, a
recurrence of the vortex knot to a weakly excited state has been noted to be
possible.
| 0 | 1 | 0 | 0 | 0 | 0 |
Design of Configurable Sequential Circuits in Quantum-dot Cellular Automata | Quantum-dot cellular automata (QCA) is a likely candidate for future low
power nano-scale electronic devices. Sequential circuits in QCA attract particular attention due to their numerous applications in the digital industry. On the other hand, configurable devices provide low device cost and efficient utilization of device area. Since the fundamental building block of any sequential logic circuit is the flip-flop, constructing configurable, multi-purpose QCA flip-flops is of prime importance in current research. This work proposes a design of a configurable flip-flop (CFF), the first of its kind in the QCA domain. The proposed flip-flop can be configured to D, T and JK flip-flop by configuring its control inputs. In addition, to make a more efficient configurable flip-flop, a clock pulse generator (CPG) is designed
which can trigger all types of edges (falling, rising and dual) of a clock. The
same CFF design is used to realize an edge-configurable (dual/rising/falling) flip-flop with the help of the CPG. The biggest advantage of the edge-configurable (dual/rising/falling) flip-flop is that it can be used in 9 different ways using the same single circuit. All the proposed designs are
verified using QCADesigner simulator.
| 1 | 0 | 0 | 0 | 0 | 0 |
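The D/T/JK configurability described above can be sketched at the behavioral level; this truth-table model is only an illustration, since the paper's contribution is the QCA cell layout, not the logic itself:

```python
def cff_next(mode, q, a, b=0):
    """Next-state function of a configurable flip-flop.
    mode selects the behavior; input a plays the role of D, T, or J,
    and b plays the role of K."""
    if mode == "D":
        return int(a)
    if mode == "T":
        return q ^ a
    if mode == "JK":
        return int((a and not q) or (not b and q))
    raise ValueError(mode)

# JK behavior over all inputs: hold, reset, set, toggle (q = 0, then q = 1).
jk = [cff_next("JK", q, j, k) for q in (0, 1) for j in (0, 1) for k in (0, 1)]
print(jk)  # [0, 0, 1, 1, 1, 0, 1, 0]
```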
ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent | Two major momentum-based techniques that have achieved tremendous success in
optimization are Polyak's heavy ball method and Nesterov's accelerated
gradient. A crucial step in all momentum-based methods is the choice of the
momentum parameter $m$ which is always suggested to be set to less than $1$.
Although the choice of $m < 1$ is justified only under very strong theoretical
assumptions, it works well in practice even when the assumptions do not
necessarily hold. In this paper, we propose a new momentum based method
$\textit{ADINE}$, which relaxes the constraint of $m < 1$ and allows the
learning algorithm to use adaptive higher momentum. We motivate our hypothesis
on $m$ by experimentally verifying that a higher momentum ($\ge 1$) can help
escape saddles much faster. Using this motivation, we propose our method
$\textit{ADINE}$ that helps weigh the previous updates more (by setting the
momentum parameter $> 1$), evaluate our proposed algorithm on deep neural
networks and show that $\textit{ADINE}$ helps the learning algorithm to
converge much faster without compromising on the generalization error.
| 1 | 0 | 0 | 1 | 0 | 0 |
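The claim above that momentum $\ge 1$ helps escape saddles faster can be illustrated on the toy saddle $f(x,y)=x^2-y^2$. This is not the ADINE algorithm itself, only a heavy-ball comparison of a conventional versus a higher momentum parameter:

```python
import numpy as np

def steps_to_escape(m, lr=0.01, start=(1.0, 1e-3), max_steps=10_000):
    """Heavy-ball iterations on the saddle f(x, y) = x**2 - y**2,
    counting steps until the iterate leaves |y| <= 1 (escapes the saddle)."""
    x = np.array(start)
    v = np.zeros(2)
    for t in range(1, max_steps + 1):
        grad = np.array([2 * x[0], -2 * x[1]])
        v = m * v - lr * grad          # momentum parameter m, possibly > 1
        x = x + v
        if abs(x[1]) > 1:
            return t
    return max_steps

slow = steps_to_escape(0.9)   # conventional momentum, m < 1
fast = steps_to_escape(1.5)   # higher-momentum regime, m > 1
print(fast < slow)
```

Along the unstable direction the velocity is amplified by the factor m at each step, so m > 1 leaves the saddle's neighborhood in far fewer iterations.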
Symplectic rational $G$-surfaces and equivariant symplectic cones | We give characterizations of a finite group $G$ acting symplectically on a
rational surface ($\mathbb{C}P^2$ blown up at two or more points). In
particular, we obtain a symplectic version of the dichotomy of $G$-conic
bundles versus $G$-del Pezzo surfaces for the corresponding $G$-rational
surfaces, analogous to a classical result in algebraic geometry. Besides the
characterizations of the group $G$ (which is completely determined for the case
of $\mathbb{C}P^2\# N\overline{\mathbb{C}P^2}$, $N=2,3,4$), we also investigate
the equivariant symplectic minimality and equivariant symplectic cone of a
given $G$-rational surface.
| 0 | 0 | 1 | 0 | 0 | 0 |
Biderivations of the twisted Heisenberg-Virasoro algebra and their applications | In this paper, the biderivations without the skew-symmetric condition of the
twisted Heisenberg-Virasoro algebra are presented. We find some non-inner and
non-skew-symmetric biderivations. As applications, the characterizations of the
forms of linear commuting maps and the commutative post-Lie algebra structures
on the twisted Heisenberg-Virasoro algebra are given. It is also proved that
every biderivation of the graded twisted Heisenberg-Virasoro left-symmetric
algebra is trivial.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Data Science Approach to Understanding Residential Water Contamination in Flint | When the residents of Flint learned that lead had contaminated their water
system, the local government made water-testing kits available to them free of
charge. The city government published the results of these tests, creating a
valuable dataset that is key to understanding the causes and extent of the lead
contamination event in Flint. This is the nation's largest dataset on lead in a
municipal water system.
In this paper, we predict the lead contamination for each household's water
supply, and we study several related aspects of Flint's water troubles, many of
which generalize well beyond this one city. For example, we show that elevated
lead risks can be (weakly) predicted from observable home attributes. Then we
explore the factors associated with elevated lead. These risk assessments were
developed in part via a crowdsourced prediction challenge at the University of
Michigan. To inform Flint residents of these assessments, they have been
incorporated into a web and mobile application funded by \texttt{Google.org}.
We also explore questions of self-selection in the residential testing program,
examining which factors are linked to when and how frequently residents
voluntarily sample their water.
| 1 | 0 | 0 | 1 | 0 | 0 |
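The statement above that elevated lead risk is (weakly) predictable from observable home attributes can be illustrated with a logistic-regression sketch. The features, coefficients, and data below are synthetic inventions; the real analysis used the city's published test results:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(0, 1, n)        # e.g. normalized home age (hypothetical feature)
service = rng.integers(0, 2, n)   # e.g. suspected lead service line (hypothetical)
X = np.column_stack([np.ones(n), age, service])
logit = -2 + 1.5 * age + 1.0 * service          # weak signal by construction
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

w = fit_logistic(X, y)
p = 1 / (1 + np.exp(-(X @ w)))
# A weak but real predictor: positive recovered coefficients, and higher
# predicted risk for homes that actually tested high.
print(w[1] > 0, w[2] > 0, p[y == 1].mean() > p[y == 0].mean())
```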
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems | In this paper, we introduce a stochastic projected subgradient method for
weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions---a
wide class of functions which includes the additive and convex composite
classes. At a high-level, the method is an inexact proximal point iteration in
which the strongly convex proximal subproblems are quickly solved with a
specialized stochastic projected subgradient method. The primary contribution
of this paper is a simple proof that the proposed algorithm converges at the
same rate as the stochastic gradient method for smooth nonconvex problems. This
result appears to be the first convergence rate analysis of a stochastic (or
even deterministic) subgradient method for the class of weakly convex
functions.
| 1 | 0 | 1 | 0 | 0 | 0 |
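The inexact proximal-point scheme described above can be sketched on the weakly convex toy function $f(x)=|x^2-1|$ (2-weakly convex): each outer step approximately minimizes the strongly convex model $f(y)+\tfrac{\rho}{2}(y-x)^2$ with a few stochastic subgradient steps. The step sizes, $\rho$, and the noise model are illustrative choices, not the paper's:

```python
import numpy as np

def f_subgrad(x):
    """A subgradient of f(x) = |x**2 - 1| (nonsmooth, nonconvex, 2-weakly convex)."""
    return np.sign(x ** 2 - 1) * 2 * x

def prox_guided(x0=3.0, rho=4.0, outer=20, inner=800, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(outer):
        y, y_avg = x, 0.0
        for t in range(1, inner + 1):
            g = f_subgrad(y) + rho * (y - x) + noise * rng.standard_normal()
            y -= g / ((rho - 2) * t)     # 1/(mu t) step; mu = rho - 2 here
            y_avg += (y - y_avg) / t     # running average of inner iterates
        x = y_avg                        # inexact proximal step
    return x

x = prox_guided()
print(0.6 < x < 1.4)   # the minimizers of f are x = +-1
```

Because $\rho$ exceeds the weak-convexity constant, each inner subproblem is strongly convex, which is what makes the cheap inner subgradient solver reliable.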
Solvable Integration Problems and Optimal Sample Size Selection | We compute the integral of a function or the expectation of a random variable
with minimal cost; both our new algorithm and the upper bounds of the
complexity use i.i.d. samples. Under certain assumptions it is possible to
select a sample size based on a variance estimate or, more generally, on an
estimate of a (central absolute) $p$-moment. That way one can guarantee a
small absolute error with high probability; the problem is thus called
solvable. The expected cost of the method depends on the $p$-moment of the
random variable, which can be arbitrarily large.
In order to prove the optimality of our algorithm we also provide lower
bounds. These bounds apply not only to methods based on i.i.d. samples but also
to general randomized algorithms. They show that -- up to constants -- the cost
of the algorithm is optimal in terms of accuracy, confidence level, and norm of
the particular input random variable. Since the considered classes of random
variables or integrands are very large, the worst case cost would be infinite.
Nevertheless one can define adaptive stopping rules such that for each input
the expected cost is finite.
We contrast these positive results with examples of integration problems that
are not solvable.
| 0 | 0 | 0 | 1 | 0 | 0 |
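The variance-based sample-size selection discussed above can be illustrated by a minimal two-stage Monte Carlo sketch (the $p=2$ case). The pilot size and the Chebyshev bound used here are simplifications of the sharper moment-based stopping rules the paper analyzes, so treat this as a toy version of the idea, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_mean(sample, eps=0.05, delta=0.05, pilot=1000):
    """Two-stage estimate of E[X] from i.i.d. draws: a pilot sample
    estimates the variance, which then fixes the total sample size so
    that Chebyshev's bound var / (n * eps^2) is at most delta."""
    x = sample(pilot)
    var = x.var(ddof=1)
    n = max(pilot, int(np.ceil(var / (eps ** 2 * delta))))
    extra = sample(n - pilot) if n > pilot else np.empty(0)
    return np.concatenate([x, extra]).mean(), n

mean, n = adaptive_mean(lambda k: rng.exponential(scale=2.0, size=k))
# Exponential(scale=2) has mean 2 and variance 4, so n adapts to the variance
```

The point the abstract makes is visible here: the cost n is not fixed in advance but is finite for each input, driven by the estimated second moment of that input.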
Label Stability in Multiple Instance Learning | We address the problem of \emph{instance label stability} in multiple
instance learning (MIL) classifiers. These classifiers are trained only on
globally annotated images (bags), but often can provide fine-grained
annotations for image pixels or patches (instances). This is interesting for
computer aided diagnosis (CAD) and other medical image analysis tasks for which
only a coarse labeling is provided. Unfortunately, the instance labels may be
unstable. This means that a slight change in training data could potentially
lead to abnormalities being detected in different parts of the image, which is
undesirable from a CAD point of view. Despite MIL gaining popularity in the CAD
literature, this issue has not yet been addressed. We investigate the stability
of instance labels provided by several MIL classifiers on 5 different datasets,
of which 3 are medical image datasets (breast histopathology, diabetic
retinopathy and computed tomography lung images). We propose an unsupervised
measure to evaluate instance stability, and demonstrate that a
performance-stability trade-off can be made when comparing MIL classifiers.
| 1 | 0 | 0 | 1 | 0 | 0 |
New indicators for assessing the quality of in silico produced biomolecules: the case study of the aptamer-Angiopoietin-2 complex | Computational procedures to foresee the 3D structure of aptamers are in
continuous progress. They constitute a crucial input to research, especially
when no crystallographic counterpart of the structures produced in silico is
available. At present, many codes can perform structure and binding
prediction, although their ability to score the results remains rather weak.
In this paper, we propose a novel procedure to complement the ranking outcomes
of a free docking code, applying it to a set of anti-angiopoietin aptamers
whose performance is known. We rank the configurations produced in silico by
adopting a maximum likelihood estimate based on their topological and
electrical properties. From the analysis, two principal kinds of conformers are
identified, whose ability to mimic the binding features of the natural
receptor is discussed. The procedure is easily generalizable to many
biomolecules and is useful for increasing the chances of success in designing
high-specificity biosensors (aptasensors).
| 0 | 0 | 0 | 0 | 1 | 0 |
A finite element method for elliptic problems with observational boundary data | In this paper we propose a finite element method for solving elliptic
equations with observational Dirichlet boundary data that may be subject to
random noise. The method is based on the weak formulation with a Lagrange
multiplier. We show convergence of the random finite element error in
expectation and, when the noise is sub-Gaussian, in the Orlicz-2 norm, which
implies that the probability that the finite element error estimates are
violated decays exponentially. Numerical examples are included.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sum-Product Graphical Models | This paper introduces a new probabilistic architecture called Sum-Product
Graphical Model (SPGM). SPGMs combine traits from Sum-Product Networks (SPNs)
and Graphical Models (GMs): Like SPNs, SPGMs always enable tractable inference
using a class of models that incorporate context specific independence. Like
GMs, SPGMs provide a high-level model interpretation in terms of conditional
independence assumptions and corresponding factorizations. Thus, the new
architecture represents a class of probability distributions that combines, for
the first time, the semantics of graphical models with the evaluation
efficiency of SPNs. We also propose a novel algorithm for learning both the
structure and the parameters of SPGMs. A comparative empirical evaluation
demonstrates competitive performances of our approach in density estimation.
| 1 | 0 | 0 | 1 | 0 | 0 |
Dependence of the Martian radiation environment on atmospheric depth: Modeling and measurement | The energetic particle environment on the Martian surface is influenced by
solar and heliospheric modulation and changes in the local atmospheric pressure
(or column depth). The Radiation Assessment Detector (RAD) on board the Mars
Science Laboratory rover Curiosity on the surface of Mars has been measuring
this effect for over four Earth years (about two Martian years). The
anticorrelation between the recorded surface Galactic Cosmic Ray-induced dose
rates and pressure changes has been investigated by Rafkin et al. (2014) and
the long-term solar modulation has also been empirically analyzed and modeled
by Guo et al. (2015). This paper employs the newly updated HZETRN2015 code to
model the Martian atmospheric shielding effect on the accumulated dose rates
and the change of this effect under different solar modulation and atmospheric
conditions. The modeled results are compared with the most up-to-date (from 14
August 2012 to 29 June 2016) observations of the RAD instrument on the surface
of Mars. Both model and measurements agree reasonably well and show the
atmospheric shielding effect under weak solar modulation conditions and the
decline of this effect as solar modulation becomes stronger. This result is
important for better risk estimations of future human explorations to Mars
under different heliospheric and Martian atmospheric conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Transactional Partitioning: A New Abstraction for Main-Memory Databases | The growth in variety and volume of OLTP (Online Transaction Processing)
applications poses a challenge to OLTP systems to meet performance and cost
demands in the existing hardware landscape. These applications are highly
interactive (latency sensitive) and require update consistency. They target
commodity hardware for deployment and demand scalability in throughput with
increasing clients and data. Currently, OLTP systems used by these applications
provide trade-offs in performance and ease of development over a variety of
applications. In order to bridge the gap between performance and ease of
development, we propose an intuitive, high-level programming model which allows
OLTP applications to be modeled as a cluster of application logic units. By
extending transactions guaranteeing full ACID semantics to provide the proposed
model, we maintain ease of application development. The model allows the
application developer to reason about program performance, and to influence it
without the involvement of OLTP system designers (database designers) and/or
DBAs. As a result, the database designer is free to focus on efficient running
of programs to ensure optimal cluster resource utilization.
| 1 | 0 | 0 | 0 | 0 | 0 |
Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign | Until recently, social media was seen to promote democratic discourse on
social and political issues. However, this powerful communication platform has
come under scrutiny for allowing hostile actors to exploit online discussions
in an attempt to manipulate public opinion. A case in point is the U.S.
Congress' ongoing investigation of Russian interference in the 2016 U.S. election
campaign, with Russia accused of using trolls (malicious accounts created to
manipulate) and bots to spread misinformation and politically biased
information. In this study, we explore the effects of this manipulation
campaign, taking a closer look at users who re-shared the posts produced on
Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress
investigation. We collected a dataset with over 43 million election-related
posts shared on Twitter between September 16 and October 21, 2016, by about 5.7
million distinct users. This dataset included accounts associated with the
identified Russian trolls. We use label propagation to infer the ideology of
all users based on the news sources they shared. This method enables us to
classify a large number of users as liberal or conservative with precision and
recall above 90%. Conservatives retweeted Russian trolls about 31 times more
often than liberals and produced 36x more tweets. Additionally, most retweets
of troll content originated from two Southern states: Tennessee and Texas.
Using state-of-the-art bot detection techniques, we estimated that about 4.9%
and 6.2% of liberal and conservative users respectively were bots. Text
analysis on the content shared by trolls reveals that they had a mostly
conservative, pro-Trump agenda. Although an ideologically broad swath of
Twitter users was exposed to Russian trolls in the period leading up to the
2016 U.S. Presidential election, it was mainly conservatives who helped amplify
their message.
| 1 | 0 | 0 | 0 | 0 | 0 |
Event-Radar: Real-time Local Event Detection System for Geo-Tagged Tweet Streams | Local event detection uses geo-tagged messages posted on social networks to
reveal ongoing events and their locations. Recent
studies have demonstrated that the geo-tagged tweet stream serves as an
unprecedentedly valuable source for local event detection. Nevertheless, how to
effectively extract local events from large geo-tagged tweet streams in real
time remains challenging. A robust and efficient cloud-based real-time local
event detection software system would benefit various aspects in the real-life
society, from shopping recommendation for customer service providers to
disaster alarming for emergency departments. We use the preliminary research
GeoBurst as a starting point, which proposed a novel method to detect local
events. GeoBurst+ leverages a novel cross-modal authority measure to identify
several pivots in the query window. Such pivots reveal different geo-topical
activities and naturally attract related tweets to form candidate events. It
further summarises the continuous stream and compares the candidates against
the historical summaries to pinpoint truly interesting local events. Our main
contribution is a web demonstration system, Event-Radar, with an improved
algorithm that shows real-time local events online for the public. Better
still, as the query window shifts, our method can update the event list at
little time cost, thus achieving continuous monitoring of the stream.
| 1 | 0 | 0 | 0 | 0 | 0 |
Computing metric hulls in graphs | We prove that, given a closure function, the smallest preimage of a closed set
can be computed in polynomial time in the number of closed sets. This
confirms a conjecture of Albenque and Knauer and implies that there is a
polynomial-time algorithm to compute the convex hull-number of a graph when
all its convex subgraphs are given as input. We then show that deciding
whether the smallest preimage of a closed set is logarithmic in the size of
the ground set is LOGSNP-complete if only the ground set is given. A special
instance of this problem is computing the dimension of a poset given its
linear extension graph, which was conjectured to be in P.
The attempt to show that the latter problem is LOGSNP-complete leads to
several interesting questions and to the definition of the isometric hull,
i.e., a smallest isometric subgraph containing a given set of vertices $S$.
While for $|S|=2$ an isometric hull is just a shortest path, we show that
computing the isometric hull of a set of vertices is NP-complete even if
$|S|=3$. Finally, we consider the problem of computing the isometric
hull-number of a graph and show that it is $\Sigma^P_2$-complete.
| 1 | 0 | 0 | 0 | 0 | 0 |
Evolutionary games on cycles with strong selection | Evolutionary games on graphs describe how strategic interactions and
population structure determine evolutionary success, quantified by the
probability that a single mutant takes over a population. Graph structures,
compared to the well-mixed case, can act as amplifiers or suppressors of
selection by increasing or decreasing the fixation probability of a beneficial
mutant. Properties of the associated mean fixation times can be more intricate,
especially when selection is strong. The intuition is that fixation of a
beneficial mutant happens fast (in a dominance game), that fixation takes very
long (in a coexistence game), and that strong selection eliminates demographic
noise. Here we show that these intuitions can be misleading in structured
populations. We analyze mean fixation times on the cycle graph under strong
frequency-dependent selection for two different microscopic evolutionary update
rules (death-birth and birth-death). We establish exact analytical results for
fixation times under strong selection, and show that there are coexistence
games in which fixation occurs in time polynomial in population size. Depending
on the underlying game, we observe persistence of demographic noise even under
strong selection, if the process is driven by random death before selection for
birth of an offspring (death-birth update). In contrast, if selection for an
offspring occurs before random removal (birth-death update), strong selection
can remove demographic noise almost entirely.
| 0 | 1 | 0 | 0 | 0 | 0 |
Epsilon-shapes: characterizing, detecting and thickening thin features in geometric models | We focus on the analysis of planar shapes and solid objects having thin
features and propose a new mathematical model to characterize them. Based on
our model, that we call an epsilon-shape, we show how thin parts can be
effectively and efficiently detected by an algorithm, and propose a novel
approach to thicken these features while leaving all the other parts of the
shape unchanged. When compared with state-of-the-art solutions, our proposal
proves to be particularly flexible, efficient and stable, and does not require
any unintuitive parameter to fine-tune the process. Furthermore, our method is
able to detect thin features both in the object and in its complement, thus
providing a useful tool to detect thin cavities and narrow channels. We discuss
the importance of this kind of analysis in the design of robust structures and
in the creation of geometry to be fabricated with modern additive manufacturing
technology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Central Moment Discrepancy (CMD) for Domain-Invariant Representation Learning | The learning of domain-invariant representations in the context of domain
adaptation with neural networks is considered. We propose a new regularization
method that minimizes the discrepancy between domain-specific latent feature
representations directly in the hidden activation space. Although some standard
distribution matching approaches exist that can be interpreted as the matching
of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit
order-wise matching of higher order moments has not been considered before. We
propose to match the higher order central moments of probability distributions
by means of order-wise moment differences. Our model does not require
computationally expensive distance and kernel matrix computations. We utilize
the equivalent representation of probability distributions by moment sequences
to define a new distance function, called Central Moment Discrepancy (CMD). We
prove that CMD is a metric on the set of probability distributions on a compact
interval. We further prove that convergence of probability distributions on
compact intervals w.r.t. the new metric implies convergence in distribution of
the respective random variables. We test our approach on two different
benchmark data sets for object recognition (Office) and sentiment analysis of
product reviews (Amazon reviews). CMD achieves a new state-of-the-art
performance on most domain adaptation tasks of Office and outperforms networks
trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural
Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity
analysis shows that the new approach is stable w.r.t. parameter changes in a
certain interval. The source code of the experiments is publicly available.
| 0 | 0 | 0 | 1 | 0 | 0 |
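The CMD distance described above is cheap to compute empirically since it needs no kernel or pairwise distance matrices. Below is a sketch for samples whose coordinates lie in a compact interval $[a, b]$, following the order-wise central-moment matching the abstract describes; the number of moments used is an illustrative choice.

```python
import numpy as np

def cmd(x, y, k_max=5, a=0.0, b=1.0):
    """Empirical Central Moment Discrepancy between samples x, y of shape
    (n, d) with coordinates in [a, b]: the distance between the means plus
    order-wise distances between central moments up to k_max, each scaled
    by a power of the interval width."""
    scale = b - a
    mx, my = x.mean(0), y.mean(0)
    d = np.linalg.norm(mx - my) / scale
    for k in range(2, k_max + 1):
        ck_x = ((x - mx) ** k).mean(0)       # k-th central moments, per coord
        ck_y = ((y - my) ** k).mean(0)
        d += np.linalg.norm(ck_x - ck_y) / scale ** k
    return d

rng = np.random.default_rng(2)
p = rng.uniform(0, 1, (2000, 3))
q = rng.uniform(0, 1, (2000, 3))
r = rng.beta(2.0, 5.0, (2000, 3))            # a different law on [0, 1]
# cmd(p, q) is near zero, while cmd(p, r) is clearly larger
```

In the domain-adaptation setting of the paper, x and y would be the source- and target-domain hidden activations of a network, and cmd(x, y) would be added to the task loss as the regularizer.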
Understanding Geometry of Encoder-Decoder CNNs | Encoder-decoder networks using convolutional neural network (CNN)
architectures have been used extensively in the deep learning literature thanks
to their excellent performance on various inverse problems in computer vision,
medical imaging, etc. However, it is still difficult to obtain a coherent
geometric view of why such an architecture gives the desired performance.
Inspired by recent theoretical understanding of the generalizability,
expressivity, and optimization landscape of neural networks, as well as the
theory of convolutional framelets, here we provide a unified theoretical
framework that leads to a better understanding of the geometry of
encoder-decoder CNNs. Our unified mathematical framework shows that the
encoder-decoder CNN architecture is closely related to nonlinear basis
representation using combinatorial convolution frames, whose expressivity
increases exponentially with the network depth. We also demonstrate the
importance of skip connections in terms of expressivity and optimization
landscape.
| 1 | 0 | 0 | 1 | 0 | 0 |
AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks | ShuffleNet is a state-of-the-art lightweight convolutional neural network
architecture. Its basic operations include grouped channel-wise convolution and
channel shuffling. However, channel shuffling is designed manually and empirically.
Mathematically, shuffling is a multiplication by a permutation matrix. In this
paper, we propose to automate channel shuffling by learning permutation
matrices in network training. We introduce an exact Lipschitz continuous
non-convex penalty so that it can be incorporated in the stochastic gradient
descent to approximate permutation at high precision. Exact permutations are
obtained by simple rounding at the end of training and are used in inference.
The resulting network, referred to as AutoShuffleNet, achieved improved
classification accuracies on CIFAR-10 and ImageNet data sets. In addition, we
found experimentally that the standard convex relaxation of permutation
matrices into stochastic matrices leads to poor performance. We prove
theoretically the exactness (error bounds) in recovering permutation matrices
when our penalty function is zero (very small). We present examples of
permutation optimization through graph matching and two-layer neural network
models where the loss functions are calculated in closed analytical form. In
the examples, convex relaxation failed to capture permutations whereas our
penalty succeeded.
| 1 | 0 | 0 | 1 | 0 | 0 |
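A penalty of the kind this abstract describes can be sketched with the l1-minus-l2 construction on matrix rows and columns: among doubly stochastic matrices it vanishes exactly on permutations, and it is Lipschitz continuous since both norms are. This is a sketch of the idea under that assumption; the paper's exact penalty and its error bounds may differ in detail.

```python
import numpy as np

def perm_penalty(P):
    """For a nonnegative vector v, ||v||_1 - ||v||_2 >= 0 with equality
    iff v has at most one nonzero entry. Summing this gap over all rows
    and columns of a doubly stochastic matrix therefore gives a Lipschitz
    continuous penalty that is zero exactly on permutation matrices."""
    rows = np.abs(P).sum(axis=1) - np.linalg.norm(P, axis=1)
    cols = np.abs(P).sum(axis=0) - np.linalg.norm(P, axis=0)
    return rows.sum() + cols.sum()

perm = np.eye(4)[[2, 0, 3, 1]]      # a 4x4 permutation matrix
dbl = np.full((4, 4), 0.25)         # the uniform doubly stochastic matrix
# perm_penalty(perm) is exactly 0, while perm_penalty(dbl) is strictly positive
```

In training one would add perm_penalty(P) (suitably weighted) to the loss over a doubly stochastic relaxation of the shuffle matrix, then round to an exact permutation at the end, as the abstract outlines.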
ACDC: Altering Control Dependence Chains for Automated Patch Generation | Once a failure is observed, the primary concern of the developer is to
identify what caused it in order to repair the code that induced the incorrect
behavior. Until a permanent repair is afforded, code repair patches are
invaluable. The aim of this work is to devise an automated patch generation
technique that proceeds as follows: Step 1) It identifies a set of
failure-causing control dependence chains that are minimal in terms of number
and length. Step 2) It identifies a set of predicates within the chains, along
with associated execution instances, such that negating the predicates at the
given instances would yield correct behavior. Step 3) For each candidate
predicate, it creates a classifier that dictates when the predicate should be
negated to yield correct program behavior. Step 4) Prior to each candidate
predicate, the faulty program is injected with a call to its corresponding
classifier, passing it the program state and getting a return value that
predictively indicates whether or not to negate the predicate. The role of the classifiers
is to ensure that: 1) the predicates are not negated during passing runs; and
2) the predicates are negated at the appropriate instances within failing runs.
We implemented our patch generation approach for the Java platform and
evaluated our toolset using 148 defects from the Introclass and Siemens
benchmarks. The toolset identified 56 full patches and another 46 partial
patches, and the classification accuracy averaged 84%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Proceedings Eighth Workshop on Intersection Types and Related Systems | This volume contains a final and revised selection of papers presented at the
Eighth Workshop on Intersection Types and Related Systems (ITRS 2016), held on
June 26, 2016 in Porto, in affiliation with FSCD 2016.
| 1 | 0 | 0 | 0 | 0 | 0 |
Microlensing of Extremely Magnified Stars near Caustics of Galaxy Clusters | Recent observations of lensed galaxies at cosmological distances have
detected individual stars that are extremely magnified when crossing the
caustics of lensing clusters. In idealized cluster lenses with smooth mass
distributions, two images of a star of radius $R$ approaching a caustic
brighten as $t^{-1/2}$ and reach a peak magnification $\sim 10^{6}\, (10\,
R_{\odot}/R)^{1/2}$ before merging on the critical curve. We show that a mass
fraction ($\kappa_\star \gtrsim \, 10^{-4.5}$) in microlenses inevitably
disrupts the smooth caustic into a network of corrugated microcaustics, and
produces light curves with numerous peaks. Using analytical calculations and
numerical simulations, we derive the characteristic width of the network,
caustic-crossing frequencies, and peak magnifications. For the lens parameters
of a recent detection and a population of intracluster stars with $\kappa_\star
\sim 0.01$, we find a source-plane width of $\sim 20 \, {\rm pc}$ for the
caustic network, which spans $0.2 \, {\rm arcsec}$ on the image plane. A source
star takes $\sim 2\times 10^4$ years to cross this width, with a total of $\sim
6 \times 10^4$ crossings, each one lasting for $\sim 5\,{\rm
hr}\,(R/10\,R_\odot)$ with typical peak magnifications of $\sim 10^{4} \left(
R/ 10\,R_\odot \right)^{-1/2}$. The exquisite sensitivity of caustic-crossing
events to the granularity of the lens-mass distribution makes them ideal probes
of dark matter components, such as compact halo objects and ultralight axion
dark matter.
| 0 | 1 | 0 | 0 | 0 | 0 |
NFFT meets Krylov methods: Fast matrix-vector products for the graph Laplacian of fully connected networks | The graph Laplacian is a standard tool in data science, machine learning, and
image processing. The corresponding matrix inherits the complex structure of
the underlying network and is in certain applications densely populated. This
makes computations, in particular matrix-vector products, with the graph
Laplacian a hard task. A typical application is the computation of a number of
its eigenvalues and eigenvectors. Standard methods become infeasible as the
number of nodes in the graph is too large. We propose the use of the fast
summation based on the nonequispaced fast Fourier transform (NFFT) to perform
the dense matrix-vector product with the graph Laplacian fast without ever
forming the whole matrix. The enormous flexibility of the NFFT algorithm allows
us to embed the accelerated multiplication into Lanczos-based eigenvalue
routines or iterative linear system solvers, and even to consider kernels
other than the standard Gaussian. We illustrate the feasibility of our approach on a
number of test problems from image segmentation to semi-supervised learning
based on graph-based PDEs. In particular, we compare our approach with the
Nyström method. Moreover, we present and test an enhanced, hybrid version of
the Nyström method, which internally uses the NFFT.
| 0 | 0 | 0 | 1 | 0 | 0 |
Convergence rate of a simulated annealing algorithm with noisy observations | In this paper we propose a modified version of the simulated annealing
algorithm for solving a stochastic global optimization problem. More precisely,
we address the problem of finding a global minimizer of a function with noisy
evaluations. We provide a rate of convergence and its optimized parametrization
to ensure a minimal number of evaluations for a given accuracy and a confidence
level close to 1. This work is completed by a set of numerical experiments
assessing the practical performance both on benchmark test cases and on
real-world examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
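The setting of the abstract above, annealing when every function evaluation is noisy, can be sketched as follows. The cooling schedule, the proposal width, and the rule for averaging more noisy observations as the temperature drops are all illustrative choices; the paper derives the optimized parametrization, which this sketch does not claim to reproduce.

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_f(x):
    # Noisy evaluation of the target f(x) = (x - 1)^2, global minimizer x = 1.
    return (x - 1.0) ** 2 + 0.1 * rng.standard_normal()

def noisy_annealing(x0, iters=3000):
    """Simulated annealing with noisy observations: each candidate is
    scored by averaging several noisy evaluations (more as the chain
    cools) and accepted with the usual Metropolis rule."""
    x, fx = x0, noisy_f(x0)
    for t in range(1, iters + 1):
        temp = 1.0 / np.log(1.0 + t)           # logarithmic cooling
        reps = 1 + t // 500                    # average more samples when cold
        y = x + 0.5 * rng.standard_normal()    # local Gaussian proposal
        fy = np.mean([noisy_f(y) for _ in range(reps)])
        if fy < fx or rng.random() < np.exp(-(fy - fx) / temp):
            x, fx = y, fy
    return x

x_min = noisy_annealing(5.0)
# x_min should end up near the global minimizer 1.0 despite the noisy oracle
```

Increasing the number of repeated evaluations as the temperature falls is the essential trade-off the abstract refers to: the evaluation noise must shrink relative to the temperature for the acceptance rule to remain meaningful.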
Trans-allelic model for prediction of peptide:MHC-II interactions | Major histocompatibility complex class II (MHC-II) molecules are
transmembrane proteins and key components of the cellular immune system. Upon
recognition of foreign peptides presented in the MHC-II binding groove, helper
T cells mount an immune response against invading pathogens. Therefore,
mechanistic identification and knowledge of physico-chemical features that
govern interactions between peptides and MHC-II molecules is useful for the
design of effective epitope-based vaccines, as well as for understanding of
immune responses. In this paper, we present a comprehensive trans-allelic
prediction model, a generalized version of our previous biophysical model, that
can predict peptide interactions for all three human MHC-II loci (HLA-DR,
HLA-DP and HLA-DQ), using both peptide sequence data and structural information
of MHC-II molecules. The advantage of this approach over other machine learning
models is that it offers a simple and plausible physical explanation for
peptide-MHC-II interactions. We train the model using a benchmark experimental
dataset, and measure its predictive performance using novel data. Despite its
relative simplicity, we find that the model has comparable performance to the
state-of-the-art method. Focusing on the physical bases of peptide-MHC binding,
we find support for previous theoretical predictions about the contributions of
certain binding pockets to the binding energy. Additionally, we find that
binding pockets P4 and P5 of HLA-DP, which were not previously considered as
primary anchors, do make strong contributions to the binding energy. Together,
the results indicate that our model can serve as a useful complement to
alternative approaches to predicting peptide-MHC interactions.
| 0 | 0 | 0 | 1 | 0 | 0 |
Improved Speech Reconstruction from Silent Video | Speechreading is the task of inferring phonetic information from visually
observed articulatory facial movements, and is a notoriously difficult task for
humans to perform. In this paper we present an end-to-end model based on a
convolutional neural network (CNN) for generating an intelligible and
natural-sounding acoustic speech signal from silent video frames of a speaking
person. We train our model on speakers from the GRID and TCD-TIMIT datasets,
and evaluate the quality and intelligibility of reconstructed speech using
common objective measurements. We show that speech predictions from the
proposed model attain scores which indicate significantly improved quality over
existing models. In addition, we show promising results towards reconstructing
speech from an unconstrained dictionary.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cluster-based Haldane state in edge-shared tetrahedral spin-cluster chain: Fedotovite K$_2$Cu$_3$O(SO$_4$)$_3$ | Fedotovite K$_2$Cu$_3$O(SO$_4$)$_3$ is a candidate for a new class of quantum spin
systems, in which edge-shared tetrahedral (EST) spin clusters consisting of
Cu$^{2+}$ are connected by weak inter-cluster couplings to form a one-dimensional
array. Comprehensive experimental studies by magnetic susceptibility,
magnetization, heat capacity, and inelastic neutron scattering measurements
reveal the presence of an effective $S$ = 1 Haldane state below $T \cong 4$ K.
Rigorous theoretical studies provide an insight into the magnetic state of
K$_2$Cu$_3$O(SO$_4$)$_3$: an EST cluster forms a triplet in its ground state,
and a one-dimensional chain of ESTs induces a cluster-based Haldane state. We
predict that the cluster-based Haldane state emerges whenever the number of
tetrahedra in the EST is even.
| 0 | 1 | 0 | 0 | 0 | 0 |
Super-Isolated Elliptic Curves and Abelian Surfaces in Cryptography | We call a simple abelian variety over $\mathbb{F}_p$ super-isolated if its
($\mathbb{F}_p$-rational) isogeny class contains no other varieties. The
motivation for considering these varieties comes from concerns about isogeny
based attacks on the discrete log problem. We heuristically estimate that the
number of super-isolated elliptic curves over $\mathbb{F}_p$ with prime order
and $p \leq N$, is roughly $\tilde{\Theta}(\sqrt{N})$. In contrast, we prove
that there are only 2 super-isolated surfaces of cryptographic size and
near-prime order.
| 1 | 0 | 1 | 0 | 0 | 0 |
Long Short-Term Memory (LSTM) networks with jet constituents for boosted top tagging at the LHC | Multivariate techniques based on engineered features have found wide adoption
in the identification of jets resulting from hadronic top decays at the Large
Hadron Collider (LHC). Recent Deep Learning developments in this area include
the treatment of the calorimeter activation as an image or supplying a list of
jet constituent momenta to a fully connected network. This latter approach
lends itself well to the use of Recurrent Neural Networks. In this work the
applicability of architectures incorporating Long Short-Term Memory (LSTM)
networks is explored. Several network architectures, methods of ordering of jet
constituents, and input pre-processing are studied. The best performing LSTM
network achieves a background rejection of 100 for 50% signal efficiency. This
represents more than a factor of two improvement over a fully connected Deep
Neural Network (DNN) trained on similar types of inputs.
| 1 | 0 | 0 | 1 | 0 | 0 |
Tangent measures of elliptic harmonic measure and applications | Tangent measure and blow-up methods are powerful tools for understanding the
relationship between the infinitesimal structure of the boundary of a domain
and the behavior of its harmonic measure. We introduce a method for studying
tangent measures of elliptic measures in arbitrary domains associated with
(possibly non-symmetric) elliptic operators in divergence form whose
coefficients have vanishing mean oscillation at the boundary. In this setting,
we show the following for domains $ \Omega \subset \mathbb{R}^{n+1}$:
1. We extend the results of Kenig, Preiss, and Toro [KPT09] by showing that
mutual absolute continuity of the interior and exterior elliptic measures for
{\it any} domains implies that the tangent measures are a.e. flat and the
elliptic measures have dimension $n$.
2. We generalize the work of Kenig and Toro [KT06] and show that VMO
equivalence of doubling interior and exterior elliptic measures for general
domains implies the tangent measures are always elliptic polynomials.
3. In a uniform domain that satisfies the capacity density condition and
whose boundary is locally finite and has a.e. positive lower $n$-Hausdorff
density, we show that if the elliptic measure is absolutely continuous with
respect to $n$-Hausdorff measure then the boundary is rectifiable. This
generalizes the work of Akman, Badger, Hofmann, and Martell [ABHM17].
Finally, we generalize one of the main results of [Bad11] by showing that if
$\omega$ is a Radon measure for which all tangent measures at a point are
harmonic polynomials vanishing at the origin, then they are all homogeneous
harmonic polynomials.
| 0 | 0 | 1 | 0 | 0 | 0 |
Aroma: Code Recommendation via Structural Code Search | Programmers often write code that is similar to code written
elsewhere. A tool that could help programmers search for such similar code
would be immensely useful. Such a tool could help programmers extend partially
written code snippets to completely implement necessary functionality,
discover extensions to the partial code that are commonly made by other
programmers, cross-check against similar code written by other programmers, or
add extra code that would avoid common mistakes and errors. We propose Aroma,
a tool and technique for code recommendation via structural code search. Aroma
indexes a huge code corpus including thousands of open-source projects, takes
a partial code snippet as input, searches the indexed method bodies that
contain the partial code snippet, and clusters and intersects the search
results to recommend a small set of succinct code snippets that contain the
query snippet and appear as part of several programs in the corpus. We
evaluated Aroma on several randomly selected queries created from the corpus,
as well as on queries derived from code snippets obtained from Stack Overflow,
a popular website for discussing code. We found that Aroma was able to
retrieve and recommend the most relevant code snippets efficiently.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Hoffman's conjectural identity | In this paper, we shall prove the equality \[
\zeta(3,\{2\}^{n},1,2)=\zeta(\{2\}^{n+3})+2\zeta(3,3,\{2\}^{n}) \] conjectured
by Hoffman using certain identities among iterated integrals on
$\mathbb{P}^{1}\setminus\{0,1,\infty,z\}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Parametric gain and wavelength conversion via third order nonlinear optics in a CMOS compatible waveguide | We demonstrate sub-picosecond wavelength conversion in the C-band via four
wave mixing in a 45cm long high index doped silica spiral waveguide. We achieve
an on/off conversion efficiency (signal to idler) of +16.5dB as well as a
parametric gain of +15dB for a peak pump power of 38W over a wavelength range
of 100nm. Furthermore, we demonstrated a minimum gain of +5dB over a wavelength
range as large as 200nm.
| 0 | 1 | 0 | 0 | 0 | 0 |
Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization | In this paper, we develop a new accelerated stochastic gradient method for
efficiently solving the convex regularized empirical risk minimization problem
in mini-batch settings. The use of mini-batches is becoming a gold standard
in the machine learning community, because mini-batch settings stabilize the
gradient estimate and can easily make good use of parallel computing. The core
of our proposed method is the incorporation of our new "double acceleration"
technique and variance reduction technique. We theoretically analyze our
proposed method and show that it substantially improves the mini-batch
efficiency of previous accelerated stochastic methods, and essentially needs
mini-batches of size only $\sqrt{n}$ to achieve the optimal iteration
complexities for both non-strongly and strongly convex objectives, where $n$ is
the training set size. Further, we show that even in non-mini-batch settings,
our method achieves the best known convergence rate for both non-strongly and
strongly convex objectives.
| 1 | 0 | 1 | 1 | 0 | 0 |
Automatically Leveraging MapReduce Frameworks for Data-Intensive Applications | MapReduce is a popular programming paradigm for developing large-scale,
data-intensive computation. Many frameworks that implement this paradigm have
recently been developed. To leverage these frameworks, however, developers must
become familiar with their APIs and rewrite existing code. Casper is a new tool
that automatically translates sequential Java programs into the MapReduce
paradigm. Casper identifies potential code fragments to rewrite and translates
them in two steps: (1) Casper uses program synthesis to search for a program
summary (i.e., a functional specification) of each code fragment. The summary
is expressed using a high-level intermediate language resembling the MapReduce
paradigm and verified to be semantically equivalent to the original using a
theorem prover. (2) Casper generates executable code from the summary, using
either the Hadoop, Spark, or Flink API. We evaluated Casper by automatically
converting real-world, sequential Java benchmarks to MapReduce. The resulting
benchmarks perform up to 48.2x faster compared to the original.
| 1 | 0 | 0 | 0 | 0 | 0 |
Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise | In this paper, we propose a unified view of gradient-based algorithms for
stochastic convex composite optimization. By extending the concept of estimate
sequence introduced by Nesterov, we interpret a large class of stochastic
optimization methods as procedures that iteratively minimize a surrogate of the
objective. This point of view covers stochastic gradient descent (SGD), the
variance-reduction approaches SAGA, SVRG, and MISO, and their proximal variants, and
has several advantages: (i) we provide a simple generic proof of convergence
for all of the aforementioned methods; (ii) we naturally obtain new algorithms
with the same guarantees; (iii) we derive generic strategies to make these
algorithms robust to stochastic noise, which is useful when data is corrupted
by small random perturbations. Finally, we show that this viewpoint is useful
to obtain accelerated algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Results of the first NaI scintillating calorimeter prototypes by COSINUS | Over almost three decades the TAUP conference has seen a remarkable momentum
gain in direct dark matter searches. An important accelerator was the first
indication of a modulating signal rate in the DAMA/NaI experiment, reported in
1997. Today the presence of an annual modulation, which matches in period and
phase the expectation for dark matter, is supported at > 9$\sigma$ confidence.
The underlying nature of dark matter, however, is still considered an open and
fundamental question of particle physics. No other direct dark matter search
could confirm the DAMA claim up to now; moreover, numerous null-results are in
clear contradiction under so-called standard assumptions for the dark matter
halo and the interaction mechanism of dark with ordinary matter. As both bear a
dependence on the target material, resolving this controversial situation
convincingly will only be possible with an experiment using sodium iodide
(NaI) as the target. COSINUS aims to go even a step further by combining NaI
with a novel detection approach: operating NaI as a cryogenic calorimeter
reading both the scintillation light and the phonon/heat signal. Two distinct
advantages arise from this approach: a substantially lower energy threshold for nuclear
recoils and particle identification on an event-by-event basis. These key
benefits will allow COSINUS to clarify a possible nuclear recoil origin of the
DAMA signal with comparatively little exposure of O(100kg days) and, thereby,
answer a long-standing question of particle physics. Today COSINUS is in its
R&D phase; in this contribution we show results from the second prototype,
the first one with the final foreseen detector design. The key finding of this
measurement is that pure, undoped NaI is a truly excellent scintillator at low
temperatures: We measure 13.1% of the total deposited energy in the NaI crystal
in the form of scintillation light (in the light detector).
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep SNP: An End-to-end Deep Neural Network with Attention-based Localization for Break-point Detection in SNP Array Genomic data | Diagnosis and risk stratification of cancer and many other diseases require
the detection of genomic breakpoints as a prerequisite of calling copy number
alterations (CNA). This, however, is still challenging and requires
time-consuming manual curation. As deep-learning methods outperformed classical
state-of-the-art algorithms in various domains and have also been successfully
applied to life science problems including medicine and biology, we here
propose Deep SNP, a novel Deep Neural Network to learn from genomic data.
Specifically, we used a manually curated dataset from 12 genomic single
nucleotide polymorphism array (SNPa) profiles as a truth set and aimed at
predicting the presence or absence of genomic breakpoints, an indicator of
structural chromosomal variations, in windows of 40,000 probes. We compare our
results with well-known neural network models as well as with Rawcopy, a tool
designed to predict breakpoints and, in addition, genomic segments with
high sensitivity. We show that Deep SNP is capable of successfully predicting
the presence or absence of a breakpoint in large genomic windows and
outperforms state-of-the-art neural network models. Qualitative examples
suggest that integration of a localization unit may enable breakpoint detection
and prediction of genomic segments, even if the breakpoint coordinates were not
provided for network training. These results warrant further evaluation of
DeepSNP for breakpoint localization and subsequent calling of genomic segments.
| 0 | 0 | 0 | 0 | 1 | 0 |
$(L,M)$-fuzzy convex structures | In this paper, the notion of $(L,M)$-fuzzy convex structures is introduced.
It is a generalization of $L$-convex structures and $M$-fuzzifying convex
structures. In our definition of $(L,M)$-fuzzy convex structures, each
$L$-fuzzy subset can be regarded as an $L$-convex set to some degree. The
notion of convexity preserving functions is also generalized to lattice-valued
case. Moreover, under the framework of $(L,M)$-fuzzy convex structures, the
concepts of quotient structures, substructures and products are presented and
their fundamental properties are discussed. Finally, we create a functor
$\omega$ from $\mathbf{MYCS}$ to $\mathbf{LMCS}$ and show that there exists an
adjunction between $\mathbf{MYCS}$ and $\mathbf{LMCS}$, where $\mathbf{MYCS}$
and $\mathbf{LMCS}$ denote the category of $M$-fuzzifying convex structures,
and the category of $(L,M)$-fuzzy convex structures, respectively.
| 0 | 0 | 1 | 0 | 0 | 0 |
Passive Compliance Control of Aerial Manipulators | This paper presents a passive compliance control for aerial manipulators to
achieve stable environmental interactions. The main challenge is the absence of
actuation along body-planar directions of the aerial vehicle which might be
required during the interaction to preserve passivity. The controller proposed
in this paper guarantees passivity of the manipulator through a proper choice
of end-effector coordinates, and that of the vehicle fuselage is guaranteed by
exploiting the time-domain passivity technique. Simulation studies validate the
proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
The phase retrieval problem for solutions of the Helmholtz equation | In this paper we consider the phase retrieval problem for Herglotz functions,
that is, solutions of the Helmholtz equation $\Delta u+\lambda^2u=0$ on domains
$\Omega\subset\mathbb{R}^d$, $d\geq2$. In dimension $d=2$, if $u,v$ are two
such solutions then $|u|=|v|$ implies that either $u=cv$ or $u=c\bar v$ for
some $c\in\mathbb{C}$ with $|c|=1$. In dimension $d\geq3$, the same conclusion
holds under some restriction on $u$ and $v$: either they are real-valued or
zonal functions, or they have non-vanishing mean.
| 0 | 0 | 1 | 0 | 0 | 0 |
Counting intersecting and pairs of cross-intersecting families | A family of subsets of $\{1,\ldots,n\}$ is called {\it intersecting} if any
two of its sets intersect. A classical result in extremal combinatorics due to
Erdős, Ko, and Rado determines the maximum size of an intersecting family
of $k$-subsets of $\{1,\ldots, n\}$. In this paper we study the following
problem: how many intersecting families of $k$-subsets of $\{1,\ldots, n\}$ are
there? Improving a result of Balogh, Das, Delcourt, Liu, and Sharifzadeh, we
determine this quantity asymptotically for $n\ge 2k+2+2\sqrt{k\log k}$ and
$k\to \infty$. Moreover, under the same assumptions we also determine
asymptotically the number of {\it non-trivial} intersecting families, that is,
intersecting families for which the intersection of all sets is empty. We
obtain analogous results for pairs of cross-intersecting families.
| 1 | 0 | 1 | 0 | 0 | 0 |
Zero-field Skyrmions with a High Topological Number in Itinerant Magnets | Magnetic skyrmions are swirling spin textures with topologically protected
noncoplanarity. Recently, skyrmions with the topological number of unity have
been extensively studied in both experiment and theory. We here show that a
skyrmion crystal with an unusually high topological number of two is stabilized
in itinerant magnets at zero magnetic field. The results are obtained for a
minimal Kondo lattice model on a triangular lattice by an unrestricted
large-scale numerical simulation and variational calculations. We find that the
topological number can be switched by a magnetic field as $2\leftrightarrow
1\leftrightarrow 0$. The skyrmion crystals are formed by the superpositions of
three spin density waves induced by the Fermi surface effect, and hence, the
size of skyrmions can be controlled by the band structure and electron filling.
We also discuss the charge and spin textures of itinerant electrons in the
skyrmion crystals which are directly obtained in our numerical simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
DoShiCo Challenge: Domain Shift in Control Prediction | Training deep neural network policies end-to-end for real-world applications
so far requires big demonstration datasets in the real world or big sets
consisting of a large variety of realistic and closely related 3D CAD models.
These real or virtual data should, moreover, have very similar characteristics
to the conditions expected at test time. These stringent requirements, and the
time-consuming data collection processes that they entail, are currently the
most important impediment that keeps deep reinforcement learning from being
deployed in real-world applications. Therefore, in this work we advocate an
alternative approach, where instead of avoiding any domain shift by carefully
selecting the training data, the goal is to learn a policy that can cope with
it. To this end, we propose the DoShiCo challenge: to train a model in very
basic synthetic environments, far from realistic, in such a way that it can be
applied in more realistic environments as well as make control decisions on
real-world data. In particular, we focus on the task of collision avoidance for
drones. We created a set of simulated environments that can be used as
benchmark and implemented a baseline method, exploiting depth prediction as an
auxiliary task to help overcome the domain shift. Even though the policy is
trained in very basic environments, it can learn to fly without collisions in a
very different realistic simulated environment. Of course several benchmarks
for reinforcement learning already exist - but they never include a large
domain shift. On the other hand, several benchmarks in computer vision focus on
the domain shift, but they take the form of static datasets instead of
simulated environments. In this work we claim that it is crucial to take the
two challenges together in one benchmark.
| 1 | 0 | 0 | 0 | 0 | 0 |
Knowledge Discovery from Layered Neural Networks based on Non-negative Task Decomposition | Interpretability has become an important issue in the machine learning field,
along with the success of layered neural networks in various practical tasks.
Since a trained layered neural network consists of complex nonlinear
relationships among a large number of parameters, it is hard to understand how
it achieves input-output mappings with a given data set. In this paper,
we propose the non-negative task decomposition method, which applies
non-negative matrix factorization to a trained layered neural network. This
enables us to decompose the inference mechanism of a trained layered neural
network into multiple principal tasks of input-output mapping, and reveal the
roles of hidden units in terms of their contribution to each principal task.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal Stopping for Interval Estimation in Bernoulli Trials | We propose an optimal sequential methodology for obtaining confidence
intervals for a binomial proportion $\theta$. Assuming that an i.i.d. random
sequence of Bernoulli($\theta$) trials is observed sequentially, we are
interested in designing a)~a stopping time $T$ that will decide when it is
best to stop sampling the process, and b)~an optimum estimator
$\hat{\theta}_{T}$ that will provide the optimum center of the interval
estimate of $\theta$. We follow a semi-Bayesian approach, where we assume that
there exists a prior distribution for $\theta$, and our goal is to minimize the
average number of samples while we guarantee a minimal coverage probability
level. The solution is obtained by applying standard optimal stopping theory
and computing the optimum pair $(T,\hat{\theta}_{T})$ numerically. Regarding
the optimum stopping time component $T$, we demonstrate that it enjoys certain
very uncommon characteristics not encountered in solutions of other classical
optimal stopping problems. Finally, we compare our method with the optimum
fixed-sample-size procedure but also with existing alternative sequential
schemes.
| 0 | 0 | 0 | 1 | 0 | 0 |
Derivation of a Non-autonomous Linear Boltzmann Equation from a Heterogeneous Rayleigh Gas | A linear Boltzmann equation with nonautonomous collision operator is
rigorously derived in the Boltzmann-Grad limit for the deterministic dynamics
of a Rayleigh gas where a tagged particle is undergoing hard-sphere collisions
with heterogeneously distributed background particles, which do not interact
among each other. The validity of the linear Boltzmann equation holds for
arbitrary long times under moderate assumptions on spatial continuity and
higher moments of the initial distributions of the tagged particle and the
heterogeneous, non-equilibrium distribution of the background. The empirical
particle dynamics are compared to the Boltzmann dynamics using evolution
semigroups for Kolmogorov equations of associated probability measures on
collision histories.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Blockwise Symmetric Matchgate Signatures and Higher Domain \#CSP | For any $n\geq 3$ and $ q\geq 3$, we prove that the {\sc Equality} function
$(=_n)$ on $n$ variables over a domain of size $q$ cannot be realized by
matchgates under holographic transformations. This is a consequence of our
theorem on the structure of blockwise symmetric matchgate signatures. This
has the implication that the standard holographic algorithms based on
matchgates, a methodology known to be universal for \#CSP over the Boolean
domain, cannot produce P-time algorithms for planar \#CSP over any higher
domain $q\geq 3$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deconfined quantum critical points: symmetries and dualities | The deconfined quantum critical point (QCP), separating the Néel and
valence bond solid phases in a 2D antiferromagnet, was proposed as an example
of $2+1$D criticality fundamentally different from standard
Landau-Ginzburg-Wilson-Fisher criticality. In this work we present multiple
equivalent descriptions of deconfined QCPs, and use these to address the
possibility of enlarged emergent symmetries in the low energy limit. The
easy-plane deconfined QCP, besides its previously discussed self-duality, is
dual to $N_f = 2$ fermionic quantum electrodynamics (QED), which has its own
self-duality and hence may have an O(4)$\times Z_2^T$ symmetry. We propose
several dualities for the deconfined QCP with ${\mathrm{SU}(2)}$ spin symmetry
which together make natural the emergence of a previously suggested $SO(5)$
symmetry rotating the Néel and VBS orders. These emergent symmetries are
implemented anomalously. The associated infra-red theories can also be viewed
as surface descriptions of 3+1D topological paramagnets, giving further insight
into the dualities. We describe a number of numerical tests of these dualities.
We also discuss the possibility of "pseudocritical" behavior for deconfined
critical points, and the meaning of the dualities and emergent symmetries in
such a scenario.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks | Deep learning has become the state-of-the-art approach in many machine
learning problems such as classification. It has recently been shown that deep
learning is highly vulnerable to adversarial perturbations. Taking the camera
systems of self-driving cars as an example, small adversarial perturbations can
cause the system to make errors in important tasks, such as classifying traffic
signs or detecting pedestrians. Hence, in order to use deep learning without
safety concerns, a proper defense strategy is required. We propose to use
ensemble methods as a defense strategy against adversarial perturbations. We
find that an attack leading one model to misclassify does not imply the same
for other networks performing the same task. This makes ensemble methods an
attractive defense strategy against adversarial attacks. We empirically show
for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve
the accuracy of neural networks on test data but also increase their robustness
against adversarial perturbations.
| 0 | 0 | 0 | 1 | 0 | 0 |
The unrolled quantum group inside Lusztig's quantum group of divided powers | In this letter we prove that the unrolled small quantum group, appearing in
quantum topology, is a Hopf subalgebra of Lusztig's quantum group of divided
powers. We do so by writing down non-obvious primitive elements with the right
adjoint action. We also construct a new larger Hopf algebra that contains the
full unrolled quantum group. In fact this Hopf algebra contains both the
enveloping algebra of the Lie algebra and the ring of functions on the Lie group, and
it should be interesting in its own right. We finally explain how this gives a
realization of the unrolled quantum group as operators on a conformal field
theory and match some calculations on this side. Our result extends to other
Nichols algebras of diagonal type, including super Lie algebras.
| 0 | 0 | 1 | 0 | 0 | 0 |
Forward Collision Vehicular Radar with IEEE 802.11: Feasibility Demonstration through Measurements | Increasing safety and automation in transportation systems has led to the
proliferation of radar and IEEE 802.11 dedicated short range communication
(DSRC) in vehicles. Current implementations of vehicular radar devices,
however, are expensive, use a substantial amount of bandwidth, and are
susceptible to multiple security risks. We consider the feasibility of using an
IEEE 802.11 orthogonal frequency division multiplexing (OFDM) communications
waveform to perform radar functions. In this paper, we present an approach that
determines the mean-normalized channel energy from frequency domain channel
estimates and models it as a direct sinusoidal function of target range,
enabling closest target range estimation. In addition, we propose an
alternative to vehicular forward collision detection by extending IEEE 802.11
dedicated short-range communications (DSRC) and WiFi technology to radar,
providing a foundation for a joint communications and radar framework.
Furthermore, we perform an experimental demonstration using existing IEEE
802.11 devices with minimal modification through algorithm processing on
frequency-domain channel estimates. The results of this paper show that our
solution delivers similar accuracy and reliability to mmWave radar devices with
as little as 20 MHz of spectrum (doubling DSRC's 10 MHz allocation), indicating
significant potential for industrial devices with joint vehicular
communications and radar capabilities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Link Before You Share: Managing Privacy Policies through Blockchain | With the advent of numerous online content providers, utilities and
applications, each with their own specific version of privacy policies and its
associated overhead, it is becoming increasingly difficult for concerned users
to manage and track the confidential information that they share with the
providers. Users consent to providers gathering and sharing their Personally
Identifiable Information (PII). We have developed a novel framework to
automatically track details about how a user's PII data is stored, used and
shared by the provider. We have integrated our Data Privacy ontology with the
properties of blockchain, to develop an automated access control and audit
mechanism that enforces users' data privacy policies when sharing their data
across third parties. We have also validated this framework by implementing a
working system, LinkShare. In this paper, we describe our framework in detail
along with the LinkShare system. Our approach can be adopted by Big Data users
to automatically apply their privacy policy on data operations and track the
flow of that data across various stakeholders.
| 1 | 0 | 0 | 0 | 0 | 0 |
Holistic Interstitial Lung Disease Detection using Deep Convolutional Neural Networks: Multi-label Learning and Unordered Pooling | Accurately predicting and detecting interstitial lung disease (ILD) patterns
given any computed tomography (CT) slice without any pre-processing
prerequisites, such as manually delineated regions of interest (ROIs), is a
clinically desirable, yet challenging goal. The majority of existing work
relies on manually-provided ILD ROIs to extract sampled 2D image patches from
CT slices and, from there, performs patch-based ILD categorization. Acquiring
manual ROIs is labor intensive and serves as a bottleneck towards
fully-automated CT imaging ILD screening over large-scale populations.
Furthermore, despite the considerably high frequency of more than one ILD
pattern on a single CT slice, previous works are only designed to detect one
ILD pattern per slice or patch.
To tackle these two critical challenges, we present multi-label deep
convolutional neural networks (CNNs) for detecting ILDs from holistic CT slices
(instead of ROIs or sub-images). Conventional single-labeled CNN models can be
augmented to cope with the possible presence of multiple ILD pattern labels,
via 1) continuous-valued deep regression based robust norm loss functions or 2)
a categorical objective as the sum of element-wise binary logistic losses. Our
methods are evaluated and validated using a publicly available database of 658
patient CT scans under five-fold cross-validation, achieving promising
performance on detecting four major ILD patterns: Ground Glass, Reticular,
Honeycomb, and Emphysema. We also investigate the effectiveness of a CNN
activation-based deep-feature encoding scheme using Fisher vector encoding,
which treats ILD detection as spatially-unordered deep texture classification.
| 1 | 0 | 0 | 0 | 0 | 0 |
Convergence Results for Neural Networks via Electrodynamics | We study whether a depth two neural network can learn another depth two
network using gradient descent. Assuming a linear output node, we show that the
question of whether gradient descent converges to the target function is
equivalent to the following question in electrodynamics: Given $k$ fixed
protons in $\mathbb{R}^d,$ and $k$ electrons, each moving due to the attractive
force from the protons and repulsive force from the remaining electrons,
whether at equilibrium all the electrons will be matched up with the protons,
up to a permutation. Under the standard electrical force, this follows from the
classic Earnshaw's theorem. In our setting, the force is determined by the
activation function and the input distribution. Building on this equivalence,
we prove the existence of an activation function such that gradient descent
learns at least one of the hidden nodes in the target network. Iterating, we
show that gradient descent can be used to learn the entire network one node at
a time.
| 1 | 1 | 0 | 0 | 0 | 0 |
The norm residue symbol for higher local fields | Since the development of higher local class field theory, several explicit
reciprocity laws have been constructed. In particular, there are formulas
describing the higher-dimensional Hilbert symbol given, among others, by M.
Kurihara, A. Zinoviev and S. Vostokov. K. Kato also has explicit formulas for
the higher-dimensional Kummer pairing associated to certain (one-dimensional)
$p$-divisible groups.
In this paper we construct an explicit reciprocity law describing the Kummer
pairing associated to any (one-dimensional) formal group. The formulas are a
generalization to higher-dimensional local fields of Kolyvagin's reciprocity
laws. The formulas obtained describe the values of the pairing in terms of
multidimensional $p$-adic differentiation, the logarithm of the formal group,
the generalized trace and the norm on Milnor K-groups.
In the second part of this paper, we will apply the results obtained here to
give explicit formulas for the generalized Hilbert symbol and the Kummer
pairing associated to a Lubin-Tate formal group. The results obtained in the
second part constitute a generalization to higher local fields of the
formulas of Artin-Hasse, K. Iwasawa and A. Wiles.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimizing tree decompositions in MSO | The classic algorithm of Bodlaender and Kloks [J. Algorithms, 1996] solves
the following problem in linear fixed-parameter time: given a tree
decomposition of a graph of (possibly suboptimal) width $k$, compute an
optimum-width tree decomposition of the graph. In this work, we prove that this
problem can also be solved in MSO in the following sense: for every positive
integer $k$, there is an MSO transduction from tree decompositions of width $k$
to tree decompositions of optimum width. Together with our recent results [LICS
2016], this implies that for every $k$ there exists an MSO transduction which
inputs a graph of treewidth $k$, and nondeterministically outputs its tree
decomposition of optimum width. We also show that MSO transductions can be
implemented in linear fixed-parameter time, which enables us to derive the
algorithmic result of Bodlaender and Kloks as a corollary of our main result.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonparametric relative error estimation of the regression function for censored data | Let $ (T_i)_i$ be a sequence of independent identically distributed (i.i.d.)
random variables (r.v.) of interest distributed as $ T$ and $(X_i)_i$ be a
corresponding vector of covariates taking values on $ \mathbb{R}^d$. In
censorship models the r.v. $T$ is subject to random censoring by another r.v.
$C$. In this paper we build a new kernel estimator, based on the so-called
synthetic data, of the mean squared relative error for the regression function.
We establish the uniform almost sure convergence with rate over a compact set
and its asymptotic normality. The asymptotic variance is explicitly given and,
as a by-product, we provide confidence bands. A simulation study has been
conducted to confirm our theoretical results.
| 0 | 0 | 1 | 1 | 0 | 0 |
Intelligent Home Energy Management System for Distributed Renewable Generators, Dispatchable Residential Loads and Distributed Energy Storage Devices | This paper presents an intelligent home energy management system integrated
with dispatchable loads (e.g., clothes washers and dryers), distributed
renewable generators (e.g., roof-top solar panels), and distributed energy
storage devices (e.g., plug-in electric vehicles). The overall goal is to
reduce the total operating costs and the carbon emissions for a future
residential house, while satisfying the end-users' comfort levels. This paper
models a wide variety of home appliances and formulates the economic operation
problem using mixed integer linear programming. Case studies are performed to
validate and demonstrate the effectiveness of the proposed solution algorithm.
Simulation results also show the positive impact of dispatchable loads,
distributed renewable generators, and distributed energy storage devices on a
future residential house.
| 0 | 0 | 1 | 0 | 0 | 0 |
Interpretable Structure-Evolving LSTM | This paper develops a general framework for learning interpretable data
representation via Long Short-Term Memory (LSTM) recurrent neural networks over
hierarchical graph structures. Instead of learning LSTM models over the pre-fixed
structures, we propose to further learn the intermediate interpretable
multi-level graph structures in a progressive and stochastic way from data
during the LSTM network optimization. We thus call this model the
structure-evolving LSTM. In particular, starting with an initial element-level
graph representation where each node is a small data element, the
structure-evolving LSTM gradually evolves the multi-level graph representations
by stochastically merging the graph nodes with high compatibilities along the
stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two
connected nodes from their corresponding LSTM gate outputs, which is used to
generate a merging probability. The candidate graph structures are accordingly
generated where the nodes are grouped into cliques with their merging
probabilities. We then produce the new graph structure with a
Metropolis-Hasting algorithm, which alleviates the risk of getting stuck in
local optimums by stochastic sampling with an acceptance probability. Once a
graph structure is accepted, a higher-level graph is then constructed by taking
the partitioned cliques as its nodes. During the evolving process,
representation becomes more abstracted in higher-levels where redundant
information is filtered out, allowing more efficient propagation of long-range
data dependencies. We evaluate the effectiveness of structure-evolving LSTM in
the application of semantic object parsing and demonstrate its advantage over
state-of-the-art LSTM models on standard benchmarks.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Optimization over Tail Distributions | We investigate the use of optimization to compute bounds for extremal
performance measures. This approach takes a non-parametric viewpoint that aims
to alleviate the issue of model misspecification possibly encountered by
conventional methods in extreme event analysis. We make two contributions
towards solving these formulations, paying special attention to the arising
tail issues. First, we provide a technique in parallel to Choquet's theory, via
a combination of integration by parts and change of measures, to transform
shape constrained problems (e.g., monotonicity of derivatives) into families of
moment problems. Second, we show how a moment problem cast over infinite
support can be reformulated into a problem over compact support with an
additional slack variable. In the context of optimization over tail
distributions, the latter helps resolve the issue of non-convergence of
solutions when using algorithms such as generalized linear programming. We
further demonstrate the applicability of this result to problems with
infinite-value constraints, which can arise in modeling heavy tails.
| 0 | 0 | 0 | 1 | 0 | 0 |
Isotropic covariance functions on graphs and their edges | We develop parametric classes of covariance functions on linear networks and
their extension to graphs with Euclidean edges, i.e., graphs with edges viewed
as line segments or more general sets with a coordinate system allowing us to
consider points on the graph which are vertices or points on an edge. Our
covariance functions are defined on the vertices and edge points of these
graphs and are isotropic in the sense that they depend only on the geodesic
distance or on a new metric called the resistance metric (which extends the
classical resistance metric developed in electrical network theory on the
vertices of a graph to the continuum of edge points). We discuss the advantages
of using the resistance metric in comparison with the geodesic metric as well
as the restrictions these metrics impose on the investigated covariance
functions. In particular, many of the commonly used isotropic covariance
functions in the spatial statistics literature (the power exponential,
Matérn, generalized Cauchy, and Dagum classes) are shown to be valid with
respect to the resistance metric for any graph with Euclidean edges, whilst
they are only valid with respect to the geodesic metric in more special cases.
| 0 | 0 | 1 | 1 | 0 | 0 |
Towards Probabilistic Formal Modeling of Robotic Cell Injection Systems | Cell injection is a technique in the domain of biological cell
micro-manipulation for the delivery of small volumes of samples into the
suspended or adherent cells. It has been widely applied in various areas, such
as gene injection, in-vitro fertilization (IVF), intracytoplasmic sperm
injection (ICSI) and drug development. However, the existing manual and
semi-automated cell injection systems require lengthy training and suffer from
high probability of contamination and low success rate. In the recently
introduced fully automated cell injection systems, the injection force plays a
vital role in the success of the process since even a tiny excessive force can
destroy the membrane or tissue of the biological cell. Traditionally, the force
control algorithms are analyzed using simulation, which is inherently
non-exhaustive and incomplete in terms of detecting system failures. Moreover,
the uncertainties in the system are generally ignored in the analysis. To
overcome these limitations, we present a formal analysis methodology based on
probabilistic model checking to analyze a robotic cell injection system
utilizing the impedance force control algorithm. The proposed methodology,
developed using the PRISM model checker, allowed us to find a discrepancy in the
algorithm that was not detected by any of the previous analyses using
traditional methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Representations of language in a model of visually grounded speech signal | We present a visually grounded model of speech perception which projects
spoken utterances and images to a joint semantic space. We use a multi-layer
recurrent highway network to model the temporal nature of spoken speech, and
show that it learns to extract both form and meaning-based linguistic knowledge
from the input signal. We carry out an in-depth analysis of the representations
used by different components of the trained model and show that encoding of
semantic aspects tends to become richer as we go up the hierarchy of layers,
whereas encoding of form-related aspects of the language input tends to
initially increase and then plateau or decrease.
| 1 | 0 | 0 | 0 | 0 | 0 |
Long-Term Video Interpolation with Bidirectional Predictive Network | This paper considers the challenging task of long-term video interpolation.
Unlike most existing methods that only generate few intermediate frames between
existing adjacent ones, we attempt to speculate or imagine the procedure of an
episode and further generate multiple frames between two non-consecutive frames
in videos. In this paper, we present a novel deep architecture called
bidirectional predictive network (BiPN) that predicts intermediate frames from
two opposite directions. The bidirectional architecture allows the model to
learn scene transformation with time as well as generate longer video
sequences. Besides, our model can be extended to predict multiple possible
procedures by sampling different noise vectors. A joint loss composed of clues
in image and feature spaces and adversarial loss is designed to train our
model. We demonstrate the advantages of BiPN on two benchmarks Moving 2D Shapes
and UCF101 and report competitive results to recent approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Evolutionary dynamics of cooperation in neutral populations | Cooperation is a difficult proposition in the face of Darwinian selection.
Those that defect have an evolutionary advantage over cooperators who should
therefore die out. However, spatial structure enables cooperators to survive
through the formation of homogeneous clusters, which is the hallmark of network
reciprocity. Here we go beyond this traditional setup and study the
spatiotemporal dynamics of cooperation in a population of populations. We use
the prisoner's dilemma game as the mathematical model and show that considering
several populations simultaneously gives rise to fascinating spatiotemporal
dynamics and pattern formation. Even the simplest assumption that strategies
between different populations are payoff-neutral with one another results in
the spontaneous emergence of cyclic dominance, where defectors of one
population become prey of cooperators in the other population, and vice versa.
Moreover, if social interactions within different populations are characterized
by significantly different temptations to defect, we observe that defectors in
the population with the largest temptation counterintuitively vanish the
fastest, while cooperators that hang on eventually take over the whole
available space. Our results reveal that considering the simultaneous presence
of different populations significantly expands the complexity of evolutionary
dynamics in structured populations, and it allows us to understand the stability
of cooperation under adverse conditions that could never be bridged by network
reciprocity alone.
| 1 | 0 | 0 | 0 | 0 | 0 |
Large global-in-time solutions to a nonlocal model of chemotaxis | We consider the parabolic-elliptic model for the chemotaxis with fractional
(anomalous) diffusion. Global-in-time solutions are constructed under (nearly)
optimal assumptions on the size of radial initial data. Moreover, criteria for
blowup of radial solutions in terms of suitable Morrey spaces norms are
derived.
| 0 | 0 | 1 | 0 | 0 | 0 |
Microservices: Granularity vs. Performance | Microservice Architectures (MA) have the potential to increase the agility of
software development. In an era where businesses require software applications
to evolve to support software emerging requirements, particularly for Internet
of Things (IoT) applications, we examine the issue of microservice granularity
and explore its effect upon application latency. Two approaches to microservice
deployment are simulated; the first with microservices in a single container,
and the second with microservices partitioned across separate containers. We
observed a negligible increase in service latency for the multiple container
deployment over a single container.
| 1 | 0 | 0 | 0 | 0 | 0 |
The strictly-correlated electron functional for spherically symmetric systems revisited | The strong-interaction limit of the Hohenberg-Kohn functional defines a
multimarginal optimal transport problem with Coulomb cost. From physical
arguments, the solution of this limit is expected to yield strictly-correlated
particle positions, related to each other by co-motion functions (or optimal
maps), but the existence of such a deterministic solution in the general
three-dimensional case is still an open question. A conjecture for the
co-motion functions for radially symmetric densities was presented in
Phys.~Rev.~A {\bf 75}, 042511 (2007), and later used to build approximate
exchange-correlation functionals for electrons confined in low-density quantum
dots. Colombo and Stra [Math.~Models Methods Appl.~Sci., {\bf 26} 1025 (2016)]
have recently shown that these conjectured maps are not always optimal. Here we
revisit the whole issue both from the formal and numerical point of view,
finding that even if the conjectured maps are not always optimal, they still
yield an interaction energy (cost) that is numerically very close to the true
minimum. We also prove that the functional built from the conjectured maps has
the expected functional derivative also when they are not optimal.
| 0 | 1 | 0 | 0 | 0 | 0 |
A stable numerical strategy for Reynolds-Rayleigh-Plesset coupling | The coupling of Reynolds and Rayleigh-Plesset equations has been used in
several works to simulate lubricated devices considering cavitation. The
numerical strategies proposed so far are variants of a staggered strategy where
Reynolds equation is solved considering the bubble dynamics frozen, and then
the Rayleigh-Plesset equation is solved to update the bubble radius with the
pressure frozen. We show that this strategy has severe stability issues and a
stable methodology is proposed. The performance of the proposed methodology is
assessed in two physical settings. The first one concerns the propagation of a
decompression wave along a fracture considering the presence of cavitation
nuclei. The second one is a typical journal bearing, in which the coupled model
is compared with the Elrod-Adams model.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accelerated Sparse Subspace Clustering | State-of-the-art algorithms for sparse subspace clustering perform spectral
clustering on a similarity matrix typically obtained by representing each data
point as a sparse combination of other points using either basis pursuit (BP)
or orthogonal matching pursuit (OMP). BP-based methods are often prohibitive in
practice while the performance of OMP-based schemes is unsatisfactory,
especially in settings where data points are highly similar. In this paper, we
propose a novel algorithm that exploits an accelerated variant of orthogonal
least-squares to efficiently find the underlying subspaces. We show that under
certain conditions the proposed algorithm returns a subspace-preserving
solution. Simulation results illustrate that the proposed method compares
favorably with BP-based method in terms of running time while being
significantly more accurate than OMP-based schemes.
| 1 | 0 | 0 | 1 | 0 | 0 |
Prediction of helium vapor quality in steady state Two-phase operation for SST-1 Toroidal field magnets | Steady State Superconducting Tokamak (SST-1) at the Institute for Plasma
Research (IPR) is an operational device and is the first superconducting
Tokamak in India. The Superconducting Magnets System (SCMS) in SST-1 comprises
sixteen Toroidal field (TF) magnets and nine Poloidal Field (PF) magnets
manufactured using NbTi/Cu based cable-in-conduit-conductor (CICC) concept.
SST-1, superconducting TF magnets are operated in a Cryo-stable manner being
cooled with two-phase (TP) flow helium. The typical operating pressure of the
TP helium is 1.6 bar (a) at corresponding saturation temperature. The SCMS has
a typical cool-down time of about 14 days from 300 K down to 4.5 K using a helium
plant with an equivalent cooling capacity of 1350 W at 4.5 K. Using the onset of
experimental data from the HRL, we estimated the vapor quality for the input
heat load on to the TF magnets system. In this paper, we report the
characteristics of two-phase flow for given thermo-hydraulic conditions during
long steady state operation of the SST-1 TF magnets. Finally, the
experimentally obtained results have been compared with the well-known
correlations of two-phase flow.
| 0 | 1 | 0 | 0 | 0 | 0 |
Synthesis of Highly Anisotropic Semiconducting GaTe Nanomaterials and Emerging Properties Enabled by Epitaxy | Pseudo-one dimensional (pseudo-1D) materials are a new-class of materials
where atoms are arranged in chain like structures in two-dimensions (2D).
Examples include recently discovered black phosphorus, ReS2 and ReSe2 from
transition metal dichalcogenides, TiS3 and ZrS3 from transition metal
trichalcogenides and most recently GaTe. The presence of structural anisotropy
impacts their physical properties and leads to direction dependent light-matter
interactions, dichroic optical responses, high mobility channels, and
anisotropic thermal conduction. Despite the numerous reports on the vapor phase
growth of isotropic TMDCs and post transition metal chalcogenides such as MoS2
and GaSe, the synthesis of pseudo-1D materials is particularly difficult due to
the anisotropy in interfacial energy, which stabilizes dendritic growth rather
than single crystalline growth with well-defined orientation. The growth of
monoclinic GaTe has been demonstrated on flexible mica substrates with superior
photodetecting performance. In this work, we demonstrate that pseudo-1D
monoclinic GaTe layers can be synthesized on a variety of other substrates
including GaAs (111), Si (111) and c-cut sapphire by physical vapor transport
techniques. High resolution transmission electron microscopy (HRTEM)
measurements, together with angle resolved micro-PL and micro-Raman techniques,
provide, for the first time, atomic-scale resolution experiments on
pseudo-1D structures in monoclinic GaTe and their anisotropic properties.
Interestingly, GaTe nanomaterials grown on sapphire exhibit highly efficient
and narrow localized emission peaks below the band gap energy, which are found
to be related to select types of line and point defects as evidenced by PL and
Raman mapping scans. This makes the samples grown on sapphire more prominent than
those grown on GaAs and Si, which exhibit more regular properties.
| 0 | 1 | 0 | 0 | 0 | 0 |
Criticality as It Could Be: organizational invariance as self-organized criticality in embodied agents | This paper outlines a methodological approach for designing adaptive agents
driving themselves near points of criticality. Using a synthetic approach we
construct a conceptual model that, instead of specifying mechanistic
requirements to generate criticality, exploits the maintenance of an
organizational structure capable of reproducing critical behavior. Our approach
exploits the well-known principle of universality, which classifies critical
phenomena inside a few universality classes of systems independently of their
specific mechanisms or topologies. In particular, we implement an artificial
embodied agent controlled by a neural network maintaining a correlation
structure randomly sampled from a lattice Ising model at a critical point. We
evaluate the agent in two classical reinforcement learning scenarios: the
Mountain Car benchmark and the Acrobot double pendulum, finding that in both
cases the neural controller reaches a point of criticality, which coincides
with a transition point between two regimes of the agent's behaviour,
maximizing the mutual information between neurons and sensorimotor patterns.
Finally, we discuss the possible applications of this synthetic approach to the
comprehension of deeper principles connected to the pervasive presence of
criticality in biological and cognitive systems.
| 1 | 1 | 0 | 0 | 0 | 0 |
Projection Free Rank-Drop Steps | The Frank-Wolfe (FW) algorithm has been widely used in solving nuclear norm
constrained problems, since it does not require projections. However, FW often
yields high rank intermediate iterates, which can be very expensive in time and
space costs for large problems. To address this issue, we propose a rank-drop
method for nuclear norm constrained problems. The goal is to generate descent
steps that lead to rank decreases, maintaining low-rank solutions throughout
the algorithm. Moreover, the optimization problems are constrained to ensure
that the rank-drop step is also feasible and can be readily incorporated into a
projection-free minimization method, e.g., Frank-Wolfe. We demonstrate that by
incorporating rank-drop steps into the Frank-Wolfe algorithm, the rank of the
solution is greatly reduced compared to the original Frank-Wolfe or its common
variants.
| 0 | 0 | 0 | 1 | 0 | 0 |