title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Continuous Authentication of Smartphones Based on Application Usage | An empirical investigation of active/continuous authentication for
smartphones is presented in this paper by exploiting users' unique application
usage data, i.e., distinct patterns of use, modeled by a Markovian process.
Variations of Hidden Markov Models (HMMs) are evaluated for continuous user
verification, and challenges due to the sparsity of session-wise data, an
explosion of states, and handling unforeseen events in the test data are
tackled. Unlike traditional approaches, the proposed formulation does not
depend on the top N apps but rather uses the complete app-usage information to
achieve low latency. Through experimentation, the impact of unforeseen events,
i.e., unknown applications and unforeseen observations, on user verification
is assessed empirically via a modified edit-distance algorithm for simple
sequence matching. It is found that for enhanced verification performance,
unforeseen events should be incorporated into the models by adopting smoothing
techniques with HMMs. For validation, extensive experiments on two distinct
datasets are performed. The marginal smoothing technique is the most effective
for user verification in terms of equal error rate (EER); with a sampling rate
of 1/30 s^{-1} and 30 minutes of historical data, the method is capable of
detecting an intrusion within ~2.5 minutes of application use.
| 0 | 0 | 0 | 1 | 0 | 0 |
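The smoothing idea in the abstract above can be illustrated with plain add-alpha (Laplace) smoothing of an app-emission distribution. This is a sketch, not the paper's marginal smoothing technique; the function name, toy vocabulary, and app names are illustrative assumptions.

```python
from collections import Counter

def smoothed_emission_probs(observations, vocab_size, alpha=1.0):
    """Add-alpha (Laplace) smoothed app-emission distribution, so that apps
    never seen in the training sessions still receive nonzero probability.
    (Illustrative sketch; the abstract's marginal smoothing refines this.)"""
    counts = Counter(observations)
    total = len(observations) + alpha * vocab_size
    return lambda app: (counts.get(app, 0) + alpha) / total

# training sequence over a vocabulary of 5 known apps; "browser" is unseen
p = smoothed_emission_probs(["mail", "maps", "mail"], vocab_size=5)
```

An unseen app such as `"browser"` gets probability 1/8 rather than zero, which is exactly what lets an HMM score test sessions containing unforeseen events.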
On the Universality of Invariant Networks | Constraining linear layers in neural networks to respect symmetry
transformations from a group $G$ is a common design principle for invariant
networks that has found many applications in machine learning.
In this paper, we consider a fundamental question that has received little
attention to date: Can these networks approximate any (continuous) invariant
function?
We tackle the rather general case where $G\leq S_n$ (an arbitrary subgroup of
the symmetric group) that acts on $\mathbb{R}^n$ by permuting coordinates. This
setting includes several recent popular invariant networks. We present two main
results: First, $G$-invariant networks are universal if high-order tensors are
allowed. Second, there are groups $G$ for which higher-order tensors are
unavoidable for obtaining universality.
$G$-invariant networks consisting of only first-order tensors are of special
interest due to their practical value. We conclude the paper by proving a
necessary condition for the universality of $G$-invariant networks that
incorporate only first-order tensors. Lastly, we propose a conjecture stating
that this condition is also sufficient.
| 1 | 0 | 0 | 1 | 0 | 0 |
Are Deep Policy Gradient Algorithms Truly Policy Gradient Algorithms? | We study how the behavior of deep policy gradient algorithms reflects the
conceptual framework motivating their development. We propose a fine-grained
analysis of state-of-the-art methods based on key aspects of this framework:
gradient estimation, value prediction, optimization landscapes, and trust
region enforcement. We find that from this perspective, the behavior of deep
policy gradient algorithms often deviates from what their motivating framework
would predict. Our analysis suggests first steps towards solidifying the
foundations of these algorithms, and in particular indicates that we may need
to move beyond the current benchmark-centric evaluation methodology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Achieving Privacy in the Adversarial Multi-Armed Bandit | In this paper, we improve the previously best known regret bound to achieve
$\epsilon$-differential privacy in oblivious adversarial bandits from
$\mathcal{O}{(T^{2/3}/\epsilon)}$ to $\mathcal{O}{(\sqrt{T} \ln T /\epsilon)}$.
This is achieved by combining a Laplace Mechanism with EXP3. We show that
though EXP3 is already differentially private, it leaks a linear amount of
information in $T$. However, we can improve this privacy by relying on its
intrinsic exponential mechanism for selecting actions. This allows us to reach
$\mathcal{O}{(\sqrt{\ln T})}$-DP, with a regret of $\mathcal{O}{(T^{2/3})}$
that holds against an adaptive adversary, an improvement from the best known of
$\mathcal{O}{(T^{3/4})}$. This is done by using an algorithm that run EXP3 in a
mini-batch loop. Finally, we run experiments that clearly demonstrate the
validity of our theoretical analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
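The base algorithm in the abstract above is standard EXP3, whose softmax action choice is itself an exponential mechanism; that intrinsic randomness is what the privacy argument exploits. The sketch below is plain EXP3 without the Laplace-mechanism or mini-batch modifications, and the reward function and parameters are illustrative.

```python
import math
import random

def exp3(n_arms, T, reward_fn, gamma=0.1, seed=0):
    """Standard EXP3: exponential weights over arms with importance-weighted
    reward estimates; actions are drawn from a softmax mixed with uniform
    exploration."""
    rng = random.Random(seed)
    w = [1.0] * n_arms
    total = 0.0
    for t in range(T):
        s = sum(w)
        probs = [(1 - gamma) * wi / s + gamma / n_arms for wi in w]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        r = reward_fn(arm, t)                      # reward in [0, 1]
        total += r
        # importance-weighted exponential update for the chosen arm only
        w[arm] *= math.exp(gamma * r / (probs[arm] * n_arms))
    return total

# arm 1 always pays 1: EXP3 should collect most of the available reward
reward = exp3(3, 2000, lambda a, t: 1.0 if a == 1 else 0.0)
```

The `gamma / n_arms` exploration floor is also what bounds how concentrated the action distribution can become, which matters for the differential-privacy analysis.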
Proximity Variational Inference | Variational inference is a powerful approach for approximate posterior
inference. However, it is sensitive to initialization and can be subject to
poor local optima. In this paper, we develop proximity variational inference
(PVI). PVI is a new method for optimizing the variational objective that
constrains subsequent iterates of the variational parameters to robustify the
optimization path. Consequently, PVI is less sensitive to initialization and
optimization quirks and finds better local optima. We demonstrate our method on
three proximity statistics. We study PVI on a Bernoulli factor model and
sigmoid belief network with both real and synthetic data and compare to
deterministic annealing (Katahira et al., 2008). We highlight the flexibility
of PVI by designing a proximity statistic for Bayesian deep learning models
such as the variational autoencoder (Kingma and Welling, 2014; Rezende et al.,
2014). Empirically, we show that PVI consistently finds better local optima and
gives better predictive performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
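The proximity idea can be illustrated with a single gradient step that adds a pull toward the previous iterate of the variational parameters. The squared-distance proximity statistic, step size, and strength used here are assumptions for illustration, not the paper's exact update or its three proximity statistics.

```python
def pvi_step(lam, grad_elbo, lam_prev, k=0.1, lr=0.05):
    """One proximity-VI style update: follow the ELBO gradient but add the
    gradient of a penalty -k * (lam - lam_prev)^2 pulling the new iterate
    toward the previous one, keeping the optimization path smooth.
    (Scalar-parameter sketch with a squared-distance proximity statistic.)"""
    prox_grad = -2.0 * k * (lam - lam_prev)
    return lam + lr * (grad_elbo(lam) + prox_grad)

# toy objective -lam^2 / 2 (gradient -lam), starting at lam = lam_prev = 2
new_lam = pvi_step(2.0, lambda l: -l, 2.0)
```

When `lam == lam_prev` the proximity term vanishes and the step reduces to plain gradient ascent; it only activates when an iterate tries to jump far from its predecessor.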
Mandarin tone modeling using recurrent neural networks | We propose an Encoder-Classifier framework to model the Mandarin tones using
recurrent neural networks (RNN). In this framework, extracted frames of
features for tone classification are fed into the RNN and cast into a
fixed-dimensional vector (tone embedding), which is then classified into tone types using a
softmax layer along with other auxiliary inputs. We investigate various
configurations that help to improve the model, including pooling, feature
splicing, and utilization of syllable-level tone embeddings. In addition, tone
embeddings and durations of the contextual syllables are exploited to
facilitate tone classification. Experimental results on Mandarin tone
classification show the proposed network setups improve tone classification
accuracy. The results indicate that the RNN encoder-classifier based tone model
flexibly accommodates heterogeneous inputs (sequential and segmental) and hence
has the advantages from both the sequential classification tone models and
segmental classification tone models.
| 1 | 0 | 0 | 0 | 0 | 0 |
Shallow water models with constant vorticity | We modify the nonlinear shallow water equations, the Korteweg-de Vries
equation, and the Whitham equation, to permit constant vorticity, and examine
wave breaking, or the lack thereof. By wave breaking, we mean that the solution
remains bounded but its slope becomes unbounded in finite time. We show that a
solution of the vorticity-modified shallow water equations breaks if it carries
an increase of elevation; the breaking time decreases to zero as the size of
vorticity increases. We propose a full-dispersion shallow water model, which
combines the dispersion relation of water waves and the nonlinear shallow water
equations in the constant vorticity setting, and which extends the Whitham
equation to permit bidirectional propagation. We show that its small amplitude
and periodic traveling wave is unstable to long wavelength perturbations if the
wave number is greater than a critical value, and stable otherwise, similarly
to the Benjamin-Feir instability in the irrotational setting; the critical wave
number grows unboundedly large with the size of vorticity. The result agrees
with that from a multiple scale expansion of the physical problem. We show that
vorticity considerably alters the modulational stability and instability in the
presence of the effects of surface tension.
| 0 | 1 | 1 | 0 | 0 | 0 |
Tensor Balancing on Statistical Manifold | We solve tensor balancing, rescaling an Nth order nonnegative tensor by
multiplying N tensors of order N - 1 so that every fiber sums to one. This
generalizes a fundamental process of matrix balancing used to compare matrices
in a wide range of applications from biology to economics. We present an
efficient balancing algorithm with quadratic convergence using Newton's method
and show in numerical experiments that the proposed algorithm is several orders
of magnitude faster than existing ones. To theoretically prove the correctness
of the algorithm, we model tensors as probability distributions in a
statistical manifold and realize tensor balancing as projection onto a
submanifold. The key to our algorithm is that the gradient of the manifold,
used as a Jacobian matrix in Newton's method, can be analytically obtained
using the Möbius inversion formula, a cornerstone of combinatorial
mathematics. Our model is not limited to tensor balancing, but has a wide
applicability as it includes various statistical and machine learning models
such as weighted DAGs and Boltzmann machines.
| 1 | 0 | 0 | 1 | 0 | 0 |
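The N = 2 case of tensor balancing is classical matrix balancing, for which the well-known Sinkhorn iteration (not the paper's quadratically convergent Newton method) gives a minimal sketch:

```python
import numpy as np

def sinkhorn_balance(A, iters=200):
    """Classical Sinkhorn iteration: alternately rescale the rows and the
    columns of a positive matrix until every row and column sums to one.
    This is the N = 2 instance of the tensor balancing problem."""
    A = A.astype(float).copy()
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)   # make row sums 1
        A /= A.sum(axis=0, keepdims=True)   # make column sums 1
    return A

B = sinkhorn_balance(np.array([[2.0, 1.0], [1.0, 3.0]]))
```

Sinkhorn converges only linearly; the paper's contribution is a Newton iteration on the statistical manifold whose Jacobian comes analytically from Möbius inversion, which is why it is orders of magnitude faster.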
Learning Unknown Markov Decision Processes: A Thompson Sampling Approach | We consider the problem of learning an unknown Markov Decision Process (MDP)
that is weakly communicating in the infinite horizon setting. We propose a
Thompson Sampling-based reinforcement learning algorithm with dynamic episodes
(TSDE). At the beginning of each episode, the algorithm generates a sample from
the posterior distribution over the unknown model parameters. It then follows
the optimal stationary policy for the sampled model for the rest of the
episode. The duration of each episode is dynamically determined by two stopping
criteria. The first stopping criterion controls the growth rate of episode
length. The second stopping criterion happens when the number of visits to any
state-action pair is doubled. We establish $\tilde O(HS\sqrt{AT})$ bounds on
expected regret under a Bayesian setting, where $S$ and $A$ are the sizes of
the state and action spaces, $T$ is time, and $H$ is the bound of the span.
This regret bound matches the best available bound for weakly communicating
MDPs. Numerical results show it to perform better than existing algorithms for
infinite horizon MDPs.
| 1 | 0 | 0 | 0 | 0 | 0 |
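The two stopping criteria above can be sketched as an episode-schedule generator. The visit-doubling criterion is abstracted into an external predicate here, which is an illustrative simplification of the actual state-action visit counters.

```python
def tsde_episode_schedule(T, visits_doubled):
    """Dynamic episodes in the spirit of TSDE: start a new episode when the
    current episode is one step longer than the previous one (criterion 1)
    or when some state-action visit count has doubled (criterion 2, supplied
    here as a predicate on the time step for illustration)."""
    starts, prev_len, start = [0], 0, 0
    for t in range(1, T):
        if (t - start > prev_len) or visits_doubled(t):
            starts.append(t)
            prev_len, start = t - start, t
    return starts

# with no doubling events, episode lengths grow linearly: 1, 2, 3, ...
s = tsde_episode_schedule(20, lambda t: False)
```

Criterion 1 alone yields triangular-number episode starts, bounding the number of episodes by roughly the square root of the horizon; criterion 2 forces a fresh posterior sample whenever the model evidence for some state-action pair has changed substantially.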
Models of Random Knots | The study of knots and links from a probabilistic viewpoint provides insight
into the behavior of "typical" knots, and opens avenues for new constructions
of knots and other topological objects with interesting properties. The
knotting of random curves arises also in applications to the natural sciences,
such as in the context of the structure of polymers. We present here several
known and new randomized models of knots and links. We review the main known
results on the knot distribution in each model. We discuss the nature of these
models and the properties of the knots they produce. Of particular interest to
us are finite type invariants of random knots, and the recently studied
Petaluma model. We report on rigorous results and numerical experiments
concerning the asymptotic distribution of such knot invariants. Our approach
raises questions of universality and classification of the various random knot
models.
| 0 | 0 | 1 | 0 | 0 | 0 |
Revival structures of coherent states for Xm exceptional orthogonal polynomials of the Scarf I potential within position-dependent effective mass | The revival structures for the X_m exceptional orthogonal polynomials of the
Scarf I potential endowed with position-dependent effective mass are studied in
the context of the generalized Gazeau-Klauder coherent states. It is shown that
in the case of the constant mass, the deduced coherent states mimic full and
fractional revival phenomena. However, in the case of position-dependent
effective mass, although full revivals take place during the time evolution,
there are no fractional revivals in the usual sense. These
properties are illustrated numerically by means of some specific profile mass
functions, with and without singularities. We have also observed a close
connection between the coherence time $\tau_{\mathrm{coh}}^{m}$ and the mass
parameter.
| 0 | 0 | 1 | 0 | 0 | 0 |
Autonomous drone cinematographer: Using artistic principles to create smooth, safe, occlusion-free trajectories for aerial filming | Autonomous aerial cinematography has the potential to enable automatic
capture of aesthetically pleasing videos without requiring human intervention,
empowering individuals with the capability of high-end film studios. Current
approaches either only handle off-line trajectory generation, or offer
strategies that reason over short time horizons and simplistic representations
for obstacles, which result in jerky movement and low real-life applicability.
In this work we develop a method for aerial filming that is able to trade off
shot smoothness, occlusion, and cinematography guidelines in a principled
manner, even under noisy actor predictions. We present a novel algorithm for
real-time covariant gradient descent that we use to efficiently find the
desired trajectories by optimizing a set of cost functions. Experimental
results show that our approach creates attractive shots, avoiding obstacles and
occlusion 65 times over 1.25 hours of flight time, re-planning at 5 Hz with a
10 s time horizon. We robustly film human actors, cars and bicycles performing
different motion among obstacles, using various shot types.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fifth order finite volume WENO in general orthogonally-curvilinear coordinates | High order reconstruction in the finite volume (FV) approach is achieved by a
more fundamental form of the fifth order WENO reconstruction in the framework
of orthogonally-curvilinear coordinates, for solving the hyperbolic
conservation equations. The derivation employs a piecewise parabolic polynomial
approximation to the zone averaged values to reconstruct the right, middle, and
left interface values. The grid dependent linear weights of the WENO are
recovered by inverting a Vandermonde-like linear system of equations with
spatially varying coefficients. A scheme for calculating the linear weights,
optimal weights, and smoothness indicator on a regularly- and
irregularly-spaced grid in orthogonally-curvilinear coordinates is proposed. A
grid independent relation for evaluating the smoothness indicator is derived
from the basic definition. Finally, the procedures for the source term
integration and extension to multi-dimensions are proposed. Analytical values
of the linear and optimal weights, and also the weights required for the source
term integration and flux averaging, are provided for a regularly-spaced grid
in Cartesian, cylindrical, and spherical coordinates. Conventional fifth order
WENO reconstruction for the regularly-spaced grids in the Cartesian coordinates
can be fully recovered in the case of limiting curvature. The fifth order
finite volume WENO-C (orthogonally-curvilinear version of WENO) reconstruction
scheme is tested for several 1D and 2D benchmark test cases involving smooth
and discontinuous flows in cylindrical and spherical coordinates.
| 0 | 1 | 0 | 0 | 0 | 0 |
Traffic Minimizing Caching and Latent Variable Distributions of Order Statistics | Given a statistical model for the request frequencies and sizes of data
objects in a caching system, we derive the probability density of the size of
the file that accounts for the largest amount of data traffic. This is
equivalent to finding the required size of the cache for a caching placement
that maximizes the expected byte hit ratio for given file size and popularity
distributions. The file that maximizes the expected byte hit ratio is the file
for which the product of its size and popularity is the highest -- thus, it is
the file that incurs the greatest load on the network. We generalize this
theoretical problem to cover factors and addends of arbitrary order statistics
for given parent distributions. Further, we study the asymptotic behavior of
these distributions. We give several factor and addend densities of widely-used
distributions, and verify our results by extensive computer simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
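The observation that the heaviest file is the one maximizing the size-popularity product is a one-liner; the sizes and popularities below are toy numbers for illustration.

```python
def heaviest_file(sizes, popularities):
    """Return the index of the file with the largest size * popularity
    product, i.e. the file that accounts for the most traffic and therefore
    maximizes the expected byte hit ratio when cached."""
    loads = [s * p for s, p in zip(sizes, popularities)]
    return max(range(len(loads)), key=loads.__getitem__)

# file 1 is large but rarely requested; it still carries the most bytes
idx = heaviest_file([10, 100, 5], [0.5, 0.1, 0.9])
```

The paper's contribution is the distribution of this argmax-product statistic (and of general factors and addends of order statistics), not the argmax computation itself.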
Maximizing the information learned from finite data selects a simple model | We use the language of uninformative Bayesian prior choice to study the
selection of appropriately simple effective models. We advocate for the prior
which maximizes the mutual information between parameters and predictions,
learning as much as possible from limited data. When many parameters are poorly
constrained by the available data, we find that this prior puts weight only on
boundaries of the parameter manifold. Thus it selects a lower-dimensional
effective theory in a principled way, ignoring irrelevant parameter directions.
In the limit where there is sufficient data to tightly constrain any number of
parameters, this reduces to Jeffreys prior. But we argue that this limit is
pathological when applied to the hyper-ribbon parameter manifolds generic in
science, because it leads to dramatic dependence on effects invisible to
experiment.
| 0 | 0 | 1 | 1 | 0 | 0 |
Aging Feynman-Kac Equation | Aging, the process of growing old or maturing, is one of the most widely seen
natural phenomena in the world. For stochastic processes, the influence of
aging sometimes cannot be ignored. For example, in this paper, by analyzing
the functional distribution of the trajectories of aging particles performing
anomalous diffusion, we reveal that for the fraction of the occupation time
$T^+/t$ of strongly aging particles, $\langle (T^+(t))^2\rangle=\frac{1}{2}t^2$
with coefficient $\frac{1}{2}$, independent of the aging time $t_a$ and of
$\alpha$, in complete contrast to the case of weak (no) aging.
In fact, we first build the models governing the corresponding functional
distributions, i.e., the aging forward and backward Feynman-Kac equations; the
above result is one of the applications of the models. Another application of
the models is to solve the asymptotic behaviors of the distribution of the
first passage time, $g(t_a,t)$. The striking discovery is that for weakly aging
systems, $g(t_a,t)\sim t_a^{\frac{\alpha}{2}}t^{-1-\frac{\alpha}{2}}$, while
for strongly aging systems, $g(t_a,t)$ behaves as $ t_a^{\alpha-1}t^{-\alpha}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods | We present and apply a general-purpose, multi-start algorithm for improving
the performance of low-energy samplers used for solving optimization problems.
The algorithm iteratively fixes the value of a large portion of the variables
to values that have a high probability of being optimal. The resulting problems
are smaller and less connected, and samplers tend to give better low-energy
samples for these problems. The algorithm is trivially parallelizable, since
each start in the multi-start algorithm is independent, and could be applied to
any heuristic solver that can be run multiple times to give a sample. We
present results for several classes of hard problems solved using simulated
annealing, path-integral quantum Monte Carlo, parallel tempering with
isoenergetic cluster moves, and a quantum annealer, and show that the success
metrics as well as the scaling are improved substantially. When combined with
this algorithm, the quantum annealer's scaling was substantially improved for
native Chimera graph problems. In addition, with this algorithm the scaling of
the time to solution of the quantum annealer is comparable to the Hamze--de
Freitas--Selby algorithm on the weak-strong cluster problems introduced by
Boixo et al. Parallel tempering with isoenergetic cluster moves was able to
consistently solve 3D spin glass problems with 8000 variables when combined
with our method, whereas without our method it could not solve any.
| 1 | 1 | 0 | 0 | 0 | 0 |
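The variable-fixing step of the sample-persistence algorithm can be sketched as a persistence vote over low-energy samples. The 0.9 agreement threshold and the toy samples are illustrative assumptions, not the paper's tuned settings.

```python
from collections import Counter

def persistent_fix(samples, threshold=0.9):
    """Sample-persistence heuristic: a variable that takes the same value in
    at least `threshold` of the low-energy samples is fixed to that value.
    The unfixed variables form a smaller, less connected residual problem
    for the next solver round."""
    n = len(samples[0])
    fixed = {}
    for i in range(n):
        val, cnt = Counter(s[i] for s in samples).most_common(1)[0]
        if cnt / len(samples) >= threshold:
            fixed[i] = val
    return fixed

# four low-energy samples over three binary variables
samples = [(1, 0, 1), (1, 1, 1), (1, 0, 1), (1, 0, 0)]
fixed = persistent_fix(samples)
```

Only variable 0, which agrees across all samples, is fixed; the procedure is then iterated, and since each multi-start run is independent it parallelizes trivially.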
BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning | We introduce a novel method to train agents of reinforcement learning (RL) by
sharing knowledge in a way similar to the concept of using a book. The recorded
information in the form of a book is the main means by which humans learn
knowledge. Nevertheless, the conventional deep RL methods have mainly focused
either on experiential learning where the agent learns through interactions
with the environment from the start or on imitation learning that tries to
mimic the teacher. Contrary to these, our proposed book learning shares key
information among different agents in a book-like manner by delving into the
following two characteristic features: (1) By defining the linguistic function,
input states can be clustered semantically into a relatively small number of
core clusters, which are forwarded to other RL agents in a prescribed manner.
(2) By defining state priorities and the contents for recording, core
experiences can be selected and stored in a small container. We call this
container a `BOOK'. Our method learns hundreds to thousands of times faster than
the conventional methods by learning only a handful of core cluster
information, which shows that deep RL agents can effectively learn through the
shared knowledge from other agents.
| 1 | 0 | 0 | 0 | 0 | 0 |
A global scavenging and circulation ocean model of thorium-230 and protactinium-231 with realistic particle dynamics (NEMO-ProThorP 0.1) | In this paper, we set forth a 3-D ocean model of the radioactive trace
isotopes Th-230 and Pa-231. The interest arises from the fact that these
isotopes are extensively used for investigating particle transport in the ocean
and reconstructing past ocean circulation. The tracers are reversibly scavenged
by biogenic and lithogenic particles.
Our simulations of Th-230 and Pa-231 are based on the NEMO-PISCES ocean
biogeochemistry general circulation model, which includes biogenic particles,
namely small and big particulate organic carbon, calcium carbonate and biogenic
silica. Small and big lithogenic particles from dust deposition are included in
our model as well. Their distributions generally compare well with the small
and big lithogenic particle concentrations from recent observations from the
GEOTRACES programme, except for boundary nepheloid layers, for which, to date,
there are no non-trivial prognostic models available on a global scale.
Our simulations reproduce Th-230 and Pa-231 dissolved concentrations: they
compare well with recent GEOTRACES observations in many parts of the ocean.
Particulate Th-230 and Pa-231 concentrations are significantly improved
compared to previous studies, but they are still too low because of missing
particles from nepheloid layers. Our simulation reproduces the main
characteristics of the Pa-231/Th-230 ratio observed in the sediments, and
supports a moderate affinity of Pa-231 to biogenic silica as suggested by
recent observations, relative to Th-230.
Future model development may further improve understanding, especially when
this will include a more complete representation of all particles, including
different size classes, manganese hydroxides and nepheloid layers. This can be
done based on our model, as its source code is readily available.
| 1 | 1 | 0 | 0 | 0 | 0 |
The Dynamic Geometry of Interaction Machine: A Call-by-need Graph Rewriter | Girard's Geometry of Interaction (GoI), a semantics designed for linear logic
proofs, has been also successfully applied to programming language semantics.
One way is to use abstract machines that pass a token on a fixed graph along a
path indicated by the GoI. These token-passing abstract machines are space
efficient, because they handle duplicated computation by repeating the same
moves of a token on the fixed graph. Although they can be adapted to obtain
sound models with regard to the equational theories of various evaluation
strategies for the lambda calculus, it can be at the expense of significant
time costs. In this paper we show a token-passing abstract machine that can
implement evaluation strategies for the lambda calculus, with certified time
efficiency. Our abstract machine, called the Dynamic GoI Machine (DGoIM),
rewrites the graph to avoid replicating computation, using the token to find
the redexes. The flexibility of interleaving token transitions and graph
rewriting allows the DGoIM to balance the trade-off of space and time costs.
This paper shows that the DGoIM can implement call-by-need evaluation for the
lambda calculus by using a strategy of interleaving token passing with as much
graph rewriting as possible. Our quantitative analysis confirms that the DGoIM
with this strategy of interleaving the two kinds of possible operations on
graphs can be classified as "efficient" following Accattoli's taxonomy of
abstract machines.
| 1 | 0 | 0 | 0 | 0 | 0 |
Classification of finite W-groups | We determine the structure of the W-group $\mathcal{G}_F$, the small Galois
quotient of the absolute Galois group $G_F$ of the Pythagorean formally real
field $F$ when the space of orderings $X_F$ has finite order. Based on
Marshall's work (1979), we reduce the structure of $\mathcal{G}_F$ to that of
$\mathcal{G}_{\bar{F}}$, the W-group of the residue field $\bar{F}$ when $X_F$
is a connected space. In the disconnected case, the structure of
$\mathcal{G}_F$ is the free product of the W-groups $\mathcal{G}_{F_i}$
corresponding to the connected components $X_i$ of $X_F$. We also give a
completely Galois theoretic proof for Marshall's Basic Lemma.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices | Principal component pursuit (PCP) is a state-of-the-art approach for
background estimation problems. Due to their higher computational cost, PCP
algorithms, such as robust principal component analysis (RPCA) and its
variants, are not feasible for processing high-definition videos. To avoid the
curse of dimensionality in those algorithms, several methods have been proposed
to solve the background estimation problem in an incremental manner. We propose
a batch-incremental background estimation model using a special weighted
low-rank approximation of matrices. Through experiments with real and synthetic
video sequences, we demonstrate that our method is superior to the
state-of-the-art background estimation algorithms such as GRASTA, ReProCS,
incPCP, and GFL.
| 1 | 0 | 1 | 0 | 0 | 0 |
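A plain rank-1 truncated SVD is the simplest low-rank background model and illustrates the decomposition the abstract builds on; the paper's weighted, batch-incremental formulation refines this. The toy "video" below is a static scene, an illustrative assumption.

```python
import numpy as np

# toy "video": 5 frames of the same static scene, stacked as matrix columns
scene = np.arange(8.0)
M = np.tile(scene[:, None], (1, 5))

# rank-1 truncated SVD: the leading singular component is the background,
# the residual would contain any moving foreground objects
U, s, Vt = np.linalg.svd(M, full_matrices=False)
background = s[0] * np.outer(U[:, 0], Vt[0])
foreground = M - background
```

Because the static scene is exactly rank one, the leading component recovers it and the residual vanishes; RPCA-style methods such as those cited above make this split robust when the foreground is sparse rather than absent.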
Chromospheric Activity of HAT-P-11: an Unusually Active Planet-Hosting K Star | Kepler photometry of the hot Neptune host star HAT-P-11 suggests that its
spot latitude distribution is comparable to the Sun's near solar maximum. We
search for evidence of an activity cycle in the CaII H & K chromospheric
emission $S$-index with archival Keck/HIRES spectra and observations from the
echelle spectrograph on the ARC 3.5 m Telescope at APO. The chromospheric
emission of HAT-P-11 is consistent with a $\gtrsim 10$ year activity cycle,
which plateaued near maximum during the Kepler mission. In the cycle that we
observed, the star seemed to spend more time near active maximum than minimum.
We compare the $\log R^\prime_{HK}$ normalized chromospheric emission index of
HAT-P-11 with other stars. HAT-P-11 has unusually strong chromospheric emission
compared to planet-hosting stars of similar effective temperature and rotation
period, perhaps due to tides raised by its planet.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automated Detection of Serializability Violations under Weak Consistency | While a number of weak consistency mechanisms have been developed in recent
years to improve performance and ensure availability in distributed, replicated
systems, ensuring correctness of transactional applications running on top of
such systems remains a difficult and important problem. Serializability is a
well-understood correctness criterion for transactional programs; understanding
whether applications are serializable when executed in a weakly-consistent
environment, however, remains a challenging exercise. In this work, we combine
the dependency graph-based characterization of serializability and the
framework of abstract executions to develop a fully automated approach for
statically finding bounded serializability violations under \emph{any} weak
consistency model. We reduce the problem of serializability to satisfiability
of a formula in First-Order Logic, which allows us to harness the power of
existing SMT solvers. We provide rules to automatically construct the FOL
encoding from programs written in SQL (allowing loops and conditionals) and the
consistency specification written as a formula in FOL. In addition to detecting
bounded serializability violations, we also provide two orthogonal schemes to
reason about unbounded executions by providing sufficient conditions (in the
form of FOL formulae) whose satisfiability would imply the absence of anomalies
in any arbitrary execution. We have applied the proposed technique on TPC-C, a
real world database program with complex application logic, and were able to
discover anomalies under Parallel Snapshot Isolation, and verify
serializability for unbounded executions under Snapshot Isolation, two
consistency mechanisms substantially weaker than serializability.
| 1 | 0 | 0 | 0 | 0 | 0 |
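The dependency-graph characterization reduces conflict serializability to acyclicity of the transaction dependency graph. A minimal in-memory cycle check conveys the idea; the paper instead encodes this search, together with the weak-consistency specification, as a first-order formula for an SMT solver.

```python
def has_cycle(edges, nodes):
    """Conflict-serializability check on a transaction dependency graph:
    the execution is serializable iff the graph is acyclic. Uses the
    standard three-color DFS (0 = unseen, 1 = on stack, 2 = done)."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
    state = {n: 0 for n in nodes}
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(state[n] == 0 and dfs(n) for n in nodes)

# a dependency cycle T1 -> T2 -> T1 means the execution is not serializable
cyclic = has_cycle([("T1", "T2"), ("T2", "T1")], ["T1", "T2"])
```

Encoding the same acyclicity condition in FOL is what lets the paper search over all bounded executions permitted by a consistency model, rather than checking one concrete execution as this sketch does.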
Andreev reflection in 2D relativistic materials with realistic tunneling transparency in normal-metal-superconductor junctions | The Andreev conductance across 2d normal metal (N)/superconductor (SC)
junctions with relativistic Dirac spectrum is investigated theoretically in the
Blonder-Tinkham-Klapwijk formalism. It is shown that for relativistic
materials, due to the Klein tunneling instead of impurity potentials, the local
strain in the junction is the key factor that determines the transparency of
the junction. The local strain is shown to generate an effective Dirac
$\delta$-gauge field. A remarkable suppression of the conductance is observed
as the strength of the gauge field increases. The behavior of the conductance
is in good agreement with the results obtained in the case of a 1d N/SC
junction. We also study the Andreev reflection in a topological material near
the chiral-to-helical phase transition in the presence of a local strain. The N
side of the N/SC junction is modeled by the doped Kane-Mele (KM) model. The SC
region is a doped correlated KM t-J (KMtJ) model, which has been shown to
feature d+id'-wave spin-singlet pairing. With increasing intrinsic spin-orbit
(SO) coupling, the doped KMtJ system undergoes a topological phase transition
from the chiral d-wave superconductivity to the spin-Chern superconducting
phase with helical Majorana fermions at edges. We explore the Andreev
conductance at the two inequivalent Dirac points, respectively and predict the
distinctive behaviors for the Andreev conductance across the topological phase
transition. Relevance of our results for the adatom-doped graphene is
discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multi-Task Learning for Contextual Bandits | Contextual bandits are a form of multi-armed bandit in which the agent has
access to predictive side information (known as the context) for each arm at
each time step, and have been used to model personalized news recommendation,
ad placement, and other applications. In this work, we propose a multi-task
learning framework for contextual bandit problems. Like multi-task learning in
the batch setting, the goal is to leverage similarities in contexts for
different arms so as to improve the agent's ability to predict rewards from
contexts. We propose an upper confidence bound-based multi-task learning
algorithm for contextual bandits, establish a corresponding regret bound, and
interpret this bound to quantify the advantages of learning in the presence of
high task (arm) similarity. We also describe an effective scheme for estimating
task similarity from data, and demonstrate our algorithm's performance on
several data sets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Admissible topologies on $C(Y,Z)$ and ${\cal O}_Z(Y)$ | Let $Y$ and $Z$ be two given topological spaces, ${\cal O}(Y)$ (respectively,
${\cal O}(Z)$) the set of all open subsets of $Y$ (respectively, $Z$), and
$C(Y,Z)$ the set of all continuous maps from $Y$ to $Z$. We study Scott type
topologies on ${\mathcal O}(Y)$ and we construct admissible topologies on
$C(Y,Z)$ and ${\mathcal O}_Z(Y)=\{f^{-1}(U)\in {\mathcal O}(Y): f\in C(Y,Z)\
{\rm and}\ U\in {\mathcal O}(Z)\}$, introducing new problems in the field.
| 0 | 0 | 1 | 0 | 0 | 0 |
Weighted spherical means generated by generalized translation and general Euler-Poisson-Darboux equation | We consider the spherical mean generated by a multidimensional generalized
translation and the general Euler-Poisson-Darboux equation corresponding to this
mean. The Asgeirsson property of solutions of the ultrahyperbolic equation that
includes singular differential Bessel operators acting in each variable is
established.
| 0 | 0 | 1 | 0 | 0 | 0 |
K-Means Clustering using Tabu Search with Quantized Means | The Tabu Search (TS) metaheuristic has been proposed for K-Means clustering
as an alternative to Lloyd's algorithm, which, for all its ease of
implementation and fast runtime, has the major drawback of being trapped at
local optima. While the TS approach can yield superior performance, it involves
high computational complexity. Moreover, the difficulty of parameter
selection in the existing TS approach makes it even less attractive.
This paper presents an alternative, low-complexity formulation of the TS
optimization procedure for K-Means clustering. This approach does not require
many parameter settings. We initially constrain the centers to points in the
dataset. We then aim at evolving these centers using a unique neighborhood
structure that makes use of gradient information of the objective function.
This results in an efficient exploration of the search space, after which the
means are refined. The proposed scheme is implemented in MATLAB and tested on
four real-world datasets, and it achieves a significant improvement over the
existing TS approach in terms of the intra-cluster sum of squares and
computational time.
| 1 | 0 | 0 | 0 | 0 | 0 |
In-silico Feedback Control of a MIMO Synthetic Toggle Switch via Pulse-Width Modulation | The synthetic toggle switch, first proposed by Gardner & Collins [1], is a
MIMO control system that can be controlled by varying the concentrations of two
inducer molecules, aTc and IPTG, to achieve a desired level of expression of
the two genes it comprises. It has been shown [2] that this can be accomplished
through an open-loop external control strategy where the two inputs are
selected as mutually exclusive periodic pulse waves of appropriate amplitude
and duty-cycle. In this paper, we use a recently derived average model of the
genetic toggle switch subject to these inputs to synthesize new feedback
control approaches that adjust the inputs' duty-cycle in real-time via two
different possible strategies: a model-based hybrid PI-PWM approach and a
so-called Zero-Average Dynamics (ZAD) controller. The controllers are validated
in-silico via both deterministic and stochastic simulations (SSA), illustrating
the advantages and limitations of each strategy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Holomorphic foliations tangent to Levi-flat subsets | We study Segre varieties associated to Levi-flat subsets in complex manifolds
and apply them to establish local and global results on the integration of
tangent holomorphic foliations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Real-Time Panoramic Tracking for Event Cameras | Event cameras are a paradigm shift in camera technology. Instead of full
frames, the sensor captures a sparse set of events caused by intensity changes.
Since only the changes are transferred, those cameras are able to capture quick
movements of objects in the scene or of the camera itself. In this work we
propose a novel method to perform camera tracking of event cameras in a
panoramic setting with three degrees of freedom. We propose a direct camera
tracking formulation, similar to state-of-the-art in visual odometry. We show
that the minimal information needed for simultaneous tracking and mapping is
the spatial position of events, without using the appearance of the imaged
scene point. We verify the robustness to fast camera movements and dynamic
objects in the scene on a recently proposed dataset and self-recorded
sequences.
| 1 | 0 | 0 | 0 | 0 | 0 |
Halo nonlinear reconstruction | We apply the nonlinear reconstruction method to simulated halo fields. For
halo number density $2.77\times 10^{-2}$ $(h^{-1} {\rm Mpc})^{-3}$ at $z=0$,
corresponding to the SDSS main sample density, we find the scale where the
noise saturates the linear signal is improved to $k\gtrsim0.36\ h {\rm
Mpc}^{-1}$, a factor of $2.29$ improvement in scale, or $12$ in number of
linear modes. The improvement is less for higher redshift or lower halo
density. We expect this to substantially improve the BAO accuracy of dense, low
redshift surveys, including the SDSS main sample, 6dFGS and 21cm intensity
mapping initiatives.
| 0 | 1 | 0 | 0 | 0 | 0 |
Molecular semimetallic hydrogen | Establishing metallic hydrogen has been a goal of intensive theoretical and
experimental work since 1935, when Wigner and Huntington [1] predicted that
insulating molecular hydrogen would dissociate at high pressures and transform
into a metal. This metal is predicted to be a superconductor with a very high
critical temperature [2]. In another scenario, metallization can be
realized through the overlap of electronic bands in molecular hydrogen in a
similar pressure range of 400 - 500 GPa [3-5]. The calculations are not accurate
enough to predict which option will be realized. Our data are consistent with a
transformation of hydrogen to a semimetal through closing of the indirect band
gap in the molecular phase III at a pressure of ~ 360 GPa. Above this pressure,
metallic behaviour appears in the electrical conductivity, and the reflectance
significantly increases. With pressure, the electrical conductivity strongly
increases, as measured up to 440 GPa. Raman measurements indicate that hydrogen
remains in the molecular phase III at pressures at least up to 440 GPa. At
higher pressures, measured up to 480 GPa, the Raman signal gradually disappears,
indicating a further transformation to a good molecular metal or to an atomic state.
| 0 | 1 | 0 | 0 | 0 | 0 |
Order-Sensitivity and Equivariance of Scoring Functions | The relative performance of competing point forecasts is usually measured in
terms of loss or scoring functions. It is widely accepted that these scoring
functions should be strictly consistent in the sense that the expected score is
minimized by the correctly specified forecast for a certain statistical
functional such as the mean, median, or a certain risk measure. Thus, strict
consistency opens the way to meaningful forecast comparison, but is also
important in regression and M-estimation. Usually, strictly consistent scoring
functions for an elicitable functional are not unique. To give guidance on the
choice of a scoring function, this paper introduces two additional quality
criteria. Order-sensitivity makes it possible to compare two deliberately
misspecified forecasts given that the forecasts are ordered in a certain sense.
On the other hand, equivariant scoring functions obey similar equivariance
properties as the functional at hand - such as translation invariance or
positive homogeneity. In our study, we consider scoring functions for popular
functionals, putting special emphasis on vector-valued functionals, e.g. the
pair (mean, variance) or (Value at Risk, Expected Shortfall).
| 0 | 0 | 1 | 1 | 0 | 0 |
Light in Power: A General and Parameter-free Algorithm for Caustic Design | We present in this paper a generic and parameter-free algorithm to
efficiently build a wide variety of optical components, such as mirrors or
lenses, that satisfy some light energy constraints. In all of our problems, one
is given a collimated or point light source and a desired illumination after
reflection or refraction and the goal is to design the geometry of a mirror or
lens which transports exactly the light emitted by the source onto the target.
We first propose a general framework and show that eight different optical
component design problems amount to solving a light energy conservation
equation that involves the computation of visibility diagrams. We then show
that these diagrams all have the same structure and can be obtained by
intersecting a 3D Power diagram with a planar or spherical domain. This allows
us to propose an efficient and fully generic algorithm capable of solving these
eight optical component design problems. The support of the prescribed target
illumination can be a set of directions or a set of points located at a finite
distance. Our solutions satisfy design constraints such as convexity or
concavity. We show the effectiveness of our algorithm on simulated and
fabricated examples.
| 1 | 0 | 0 | 0 | 0 | 0 |
Developing a machine learning framework for estimating soil moisture with VNIR hyperspectral data | In this paper, we investigate the potential of estimating the soil-moisture
content based on VNIR hyperspectral data combined with LWIR data. Measurements
from a multi-sensor field campaign represent the benchmark dataset which
contains measured hyperspectral, LWIR, and soil-moisture data collected on a
grassland site. We introduce a regression framework with three steps consisting
of feature selection, preprocessing, and well-chosen regression models. The
latter are mainly supervised machine learning models. An exception is the
self-organizing map, which combines unsupervised and supervised learning.
analyze the impact of the distinct preprocessing methods on the regression
results. Of all regression models, the extremely randomized trees model without
preprocessing provides the best estimation performance. Our results reveal the
potential of the respective regression framework combined with the VNIR
hyperspectral data to estimate soil moisture measured under real-world
conditions. In conclusion, the results of this paper provide a basis for
further improvements in different research directions.
| 0 | 0 | 0 | 1 | 0 | 0 |
Distilling a Neural Network Into a Soft Decision Tree | Deep neural networks have proved to be a very effective way to perform
classification tasks. They excel when the input data is high dimensional, the
relationship between the input and the output is complicated, and the number of
labeled training examples is large. But it is hard to explain why a learned
network makes a particular classification decision on a particular test case.
This is due to their reliance on distributed hierarchical representations. If
we could take the knowledge acquired by the neural net and express the same
knowledge in a model that relies on hierarchical decisions instead, explaining
a particular decision would be much easier. We describe a way of using a
trained neural net to create a type of soft decision tree that generalizes
better than one learned directly from the training data.
| 1 | 0 | 0 | 1 | 0 | 0 |
A General Scheme Implicit Force Control for a Flexible-Link Manipulator | In this paper we propose an implicit force control scheme for a one-link
flexible manipulator that interacts with a compliant environment. The controller
is based on the mathematical model of the manipulator, considering the
dynamics of the flexible beam and the gravitational force. With this method,
the controller parameters are obtained from the structural parameters of the
beam (link) of the manipulator. The controller ensures stability based on
Lyapunov theory. The proposed controller has two closed loops: the inner
loop is a tracking controller with compensation of the gravitational force and
vibration frequencies, and the outer loop is an implicit force controller. To
evaluate the performance of the controller, we considered three different
manipulators (with modified length and diameter) and three environments
with modified compliance. The results obtained from simulations verify the
asymptotic tracking and regulation of position and force, respectively, and the
suppression of the beam vibrations in finite time.
| 1 | 1 | 0 | 0 | 0 | 0 |
CNN-based MultiChannel End-to-End Speech Recognition for everyday home environments | Casual conversations involving multiple speakers and noises from surrounding
devices are part of everyday environments and pose challenges for automatic
speech recognition systems. These challenges in speech recognition are targeted
by the CHiME-5 challenge. In the present study, an attempt is made to overcome
these challenges by employing a convolutional neural network (CNN)-based
multichannel end-to-end speech recognition system. The system comprises an
attention-based encoder-decoder neural network that directly generates a text
as an output from a sound input. The multichannel CNN encoder, which uses
residual connections and batch renormalization, is trained with augmented data,
including white noise injection. The experimental results show that the word
error rate (WER) was reduced by 11.9% absolute from the end-to-end baseline.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision | Tool manipulation is vital for enabling robots to complete challenging
task goals. It requires reasoning about the desired effect of the task and thus
properly grasping and manipulating the tool to achieve the task. Task-agnostic
grasping optimizes for grasp robustness while ignoring crucial task-specific
constraints. In this paper, we propose the Task-Oriented Grasping Network
(TOG-Net) to jointly optimize both task-oriented grasping of a tool and the
manipulation policy for that tool. The training process of the model is based
on large-scale simulated self-supervision with procedurally generated tool
objects. We perform both simulated and real-world experiments on two tool-based
manipulation tasks: sweeping and hammering. Our model achieves an overall 71.1%
task success rate for sweeping and 80.0% task success rate for hammering.
Supplementary material is available at: bit.ly/task-oriented-grasp
| 1 | 0 | 0 | 1 | 0 | 0 |
Classification of Minimal Separating Sets in Low Genus Surfaces | Consider a surface $S$ and let $M\subset S$. If $S\setminus M$ is not
connected, then we say $M$ \emph{separates} $S$, and we refer to $M$ as a
\emph{separating set} of $S$. If $M$ separates $S$, and no proper subset of $M$
separates $S$, then we say $M$ is a \emph{minimal separating set} of $S$. In
this paper we use methods of computational combinatorial topology to classify
the minimal separating sets of the orientable surfaces of genus $g=2$ and
$g=3$. The classification for genus 0 and 1 was done in earlier work, using
methods of algebraic topology.
| 0 | 0 | 1 | 0 | 0 | 0 |
A feasibility study on SSVEP-based interaction with motivating and immersive virtual and augmented reality | Non-invasive steady-state visual evoked potential (SSVEP) based
brain-computer interface (BCI) systems offer high bandwidth compared to other
BCI types and require only minimal calibration and training. Virtual reality
(VR) has been already validated as effective, safe, affordable and motivating
feedback modality for BCI experiments. Augmented reality (AR) enhances the
physical world by superimposing informative, context-sensitive,
computer-generated content. In the context of BCI, AR can be used as a friendlier and
more intuitive real-world user interface, thereby facilitating a more seamless
and goal directed interaction. This can improve practicality and usability of
BCI systems and may help to compensate for their low bandwidth. In this
feasibility study, three healthy participants had to finish a complex
navigation task in immersive VR and AR conditions using an online SSVEP BCI.
Two out of three subjects were successful in all conditions. To our knowledge,
this is the first work to present an SSVEP BCI that operates using target
stimuli integrated in immersive VR and AR (head-mounted display and camera).
This research direction can benefit patients by introducing more intuitive and
effective real-world interaction (e.g. smart home control). It may also be
relevant for user groups that require or benefit from hands-free operation
(e.g. due to temporary situational disability).
| 1 | 0 | 0 | 0 | 0 | 0 |
Intelligent User Interfaces - A Tutorial | IUIs aim to incorporate intelligent automated capabilities in human computer
interaction, where the net impact is a human-computer interaction that improves
performance or usability in critical ways. It also involves designing and
implementing an artificial intelligence (AI) component that effectively
leverages human skills and capabilities, so that human performance with an
application excels. IUIs embody capabilities that have traditionally been
associated more strongly with humans than with computers: how to perceive,
interpret, learn, use language, reason, plan, and decide.
| 1 | 0 | 0 | 0 | 0 | 0 |
Graph Partitioning with Acyclicity Constraints | Graphs are widely used to model execution dependencies in applications. In
particular, the NP-complete problem of partitioning a graph under constraints
receives enormous attention from researchers because of its applicability in
multiprocessor scheduling. We identified the additional constraint of acyclic
dependencies between blocks when mapping computer vision and imaging
applications to a heterogeneous embedded multiprocessor. Existing algorithms
and heuristics do not address this requirement and deliver results that are not
applicable for our use-case. In this work, we show that this more constrained
version of the graph partitioning problem is NP-complete and present heuristics
that achieve a close approximation of the optimal solution found by an
exhaustive search for small problem instances and much better scalability for
larger instances. In addition, we can show a positive impact on the schedule of
a real imaging application that improves communication volume and execution
time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Sublinear-Time Indexing for Nearest Neighbor Search | Most of the efficient sublinear-time indexing algorithms for the
high-dimensional nearest neighbor search problem (NNS) are based on space
partitions of the ambient space $\mathbb{R}^d$. Inspired by recent theoretical
work on NNS for general metric spaces [Andoni, Naor, Nikolov, Razenshteyn,
Waingarten STOC 2018, FOCS 2018], we develop a new framework for constructing
such partitions that reduces the problem to balanced graph partitioning
followed by supervised classification. We instantiate this general approach
with the KaHIP graph partitioner [Sanders, Schulz SEA 2013] and neural
networks, respectively, to obtain a new partitioning procedure called Neural
Locality-Sensitive Hashing (Neural LSH). On several standard benchmarks for
NNS, our experiments show that the partitions found by Neural LSH consistently
outperform partitions found by quantization- and tree-based methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Lempel-Ziv Jaccard Distance, an Effective Alternative to Ssdeep and Sdhash | Recent work has proposed the Lempel-Ziv Jaccard Distance (LZJD) as a method
to measure the similarity between binary byte sequences for malware
classification. We propose and test LZJD's effectiveness as a similarity digest
hash for digital forensics. To do so we develop a high performance Java
implementation with the same command-line arguments as sdhash, making it easy
to integrate into existing workflows. Our testing shows that LZJD is effective
for this task, and significantly outperforms sdhash and ssdeep in its ability
to match related file fragments and files corrupted with random noise. In
addition, LZJD is up to 60x faster than sdhash at comparison time.
| 1 | 0 | 0 | 0 | 0 | 0 |
Graded Steinberg algebras and their representations | We study the category of left unital graded modules over the Steinberg
algebra of a graded ample Hausdorff groupoid. In the first part of the paper,
we show that this category is isomorphic to the category of unital left modules
over the Steinberg algebra of the skew-product groupoid arising from the
grading. To do this, we show that the Steinberg algebra of the skew product is
graded isomorphic to a natural generalisation of the Cohen-Montgomery smash
product of the Steinberg algebra of the underlying groupoid with the grading
group. In the second part of the paper, we study the minimal (that is,
irreducible) representations in the category of graded modules of a Steinberg
algebra, and establish a connection between the annihilator ideals of these
minimal representations, and effectiveness of the groupoid.
Specialising our results, we produce a representation of the monoid of graded
finitely generated projective modules over a Leavitt path algebra. We deduce
that the lattice of order-ideals in the $K_0$-group of the Leavitt path algebra
is isomorphic to the lattice of graded ideals of the algebra. We also
investigate the graded monoid for Kumjian--Pask algebras of row-finite
$k$-graphs with no sources. We prove that these algebras are graded von Neumann
regular rings, and record some structural consequences of this.
| 0 | 0 | 1 | 0 | 0 | 0 |
Movement of time-delayed hot spots in Euclidean space for special initial states | We consider the Cauchy problem for the damped wave equation under the initial
state that the sum of an initial position and an initial velocity vanishes.
When the initial position is non-zero, non-negative and compactly supported, we
study the large time behavior of the spatial null, critical, maximum and
minimum sets of the solution. The spatial null set becomes a smooth
hyper-surface homeomorphic to a sphere after a large enough time. The spatial
critical set has at least three points after a large enough time. The set of
spatial maximum points escapes from the convex hull of the support of the
initial position. The set of spatial minimum points consists of one point after
a large time, and the unique spatial minimum point converges to the centroid of
the initial position at time infinity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hierarchical Attention-Based Recurrent Highway Networks for Time Series Prediction | Time series prediction has been studied in a variety of domains. However, it
is still challenging to predict future series given historical observations and
past exogenous data. Existing methods either fail to consider the interactions
among different components of exogenous variables which may affect the
prediction accuracy, or cannot model the correlations between exogenous data
and target data. Besides, the inherent temporal dynamics of exogenous data are
also related to the target series prediction, and thus should be considered as
well. To address these issues, we propose an end-to-end deep learning model,
i.e., Hierarchical attention-based Recurrent Highway Network (HRHN), which
incorporates spatio-temporal feature extraction of exogenous variables and
temporal dynamics modeling of target variables into a single framework.
Moreover, by introducing the hierarchical attention mechanism, HRHN can
adaptively select the relevant exogenous features in different semantic levels.
We carry out comprehensive empirical evaluations with various methods over
several datasets, and show that HRHN outperforms the state of the arts in time
series prediction, especially in capturing sudden changes and sudden
oscillations of time series.
| 0 | 0 | 0 | 1 | 0 | 0 |
Mass distribution and skewness for passive scalar transport in pipes with polygonal and smooth cross-sections | We extend our previous results characterizing the loading properties of a
diffusing passive scalar advected by a laminar shear flow in ducts and channels
to more general cross-sectional shapes, including regular polygons and smoothed
corner ducts originating from deformations of ellipses. For the case of the
triangle, short time skewness is calculated exactly to be positive, while
long-time asymptotics shows it to be negative. Monte-Carlo simulations confirm
these predictions, and document the time scale for sign change. Interestingly,
the equilateral triangle is the only regular polygon with this property, all
other polygons possess positive skewness at all times, although this cannot
be proved at finite times due to the lack of closed-form flow solutions
for such geometries. Alternatively, closed form flow solutions can be
constructed for smooth deformations of ellipses, and illustrate how the
possibility of multiple sign switching in time is unrelated to domain corners.
Exact conditions relating the median and the skewness to the mean are developed
which guarantee when the sign for the skewness implies front (back) loading
properties of the evolving tracer distribution along the pipe. Short and long
time asymptotics confirm this condition, and Monte-Carlo simulations verify
this at all times.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Optics-Based Approach to Thermal Management of Photovoltaics: Selective-Spectral and Radiative Cooling | For commercial one-sun solar modules, up to 80% of the incoming sunlight may
be dissipated as heat, potentially raising the temperature 20 C - 30 C higher
than the ambient. In the long term, extreme self-heating erodes efficiency and
shortens lifetime, thereby dramatically reducing the total energy output.
Therefore, it is critically important to develop effective and practical (and
preferably passive) cooling methods to reduce operating temperature of PV
modules. In this paper, we explore two fundamental (but often overlooked)
origins of PV self-heating, namely, sub-bandgap absorption and imperfect
thermal radiation. The analysis suggests that we redesign the optical
properties of the solar module to eliminate parasitic absorption
(selective-spectral cooling) and enhance thermal emission (radiative cooling).
Our comprehensive opto-electro-thermal simulation shows that the proposed
techniques would cool one-sun and low-concentration terrestrial solar
modules by up to 10 C and 20 C, respectively. This self-cooling would
substantially extend the lifetime of solar modules, with the corresponding
increase in energy yields and reduced LCOE.
| 0 | 1 | 0 | 0 | 0 | 0 |
GOLDRUSH. II. Clustering of Galaxies at $z\sim 4-6$ Revealed with the Half-Million Dropouts Over the 100 deg$^2$ Area Corresponding to 1 Gpc$^3$ | We present clustering properties from 579,492 Lyman break galaxies (LBGs) at
z~4-6 over the 100 deg^2 sky (corresponding to a 1.4 Gpc^3 volume) identified
in early data of the Hyper Suprime-Cam (HSC) Subaru strategic program survey.
We derive angular correlation functions (ACFs) of the HSC LBGs with
unprecedentedly high statistical accuracies at z~4-6, and compare them with the
halo occupation distribution (HOD) models. We clearly identify significant ACF
excesses in 10"<$\theta$<90", the transition scale between 1- and 2-halo terms,
suggestive of the existence of the non-linear halo bias effect. Combining the
HOD models and previous clustering measurements of faint LBGs at z~4-7, we
investigate dark-matter halo mass (Mh) of the z~4-7 LBGs and its correlation
with various physical properties including the star-formation rate (SFR), the
stellar-to-halo mass ratio (SHMR), and the dark matter accretion rate (dotMh)
over a wide-mass range of Mh/M$_\odot$=4x10^10-4x10^12. We find that the SHMR
increases from z~4 to 7 by a factor of ~4 at Mh~1x10^11 M$_\odot$, while the
SHMR shows no strong evolution in the similar redshift range at Mh~1x10^12
M$_\odot$. Interestingly, we identify a tight relation of SFR/dotMh-Mh showing
no significant evolution beyond 0.15 dex in this wide-mass range over z~4-7.
This weak evolution suggests that the SFR/dotMh-Mh relation is a fundamental
relation in high-redshift galaxy formation whose star formation activities are
regulated by the dark matter mass assembly. Assuming this fundamental relation,
we calculate the cosmic SFR densities (SFRDs) over z=0-10 (a.k.a. Madau-Lilly
plot). The cosmic SFRD evolution based on the fundamental relation agrees with
the one obtained by observations, suggesting that the cosmic SFRD increase from
z~10 to 4-2 (decrease from z~4-2 to 0) is mainly driven by the increase of the
halo abundance (the decrease of the accretion rate).
| 0 | 1 | 0 | 0 | 0 | 0 |
The Least-Area Tetrahedral Tile of Space | We determine the least-area, unit-volume tetrahedral tile of Euclidean space,
without the assumption of Gallagher et al. that the tiling uses only
orientation-preserving images of the tile. The winner remains Sommerville's
type 4v.
| 0 | 0 | 1 | 0 | 0 | 0 |
Renormalization group theory for percolation in time-varying networks | Motivated by multi-hop communication in unreliable wireless networks, we
present a percolation theory for time-varying networks. We develop a
renormalization group theory for a prototypical network on a regular grid,
where individual links switch stochastically between active and inactive
states. The question whether a given source node can communicate with a
destination node along paths of active links is equivalent to a percolation
problem. Our theory maps the temporal existence of multi-hop paths on an
effective two-state Markov process. We show analytically how this Markov
process converges towards a memory-less Bernoulli process as the hop distance
between source and destination node increases. Our work extends classical
percolation theory to the dynamic case and elucidates temporal correlations of
message losses. Quantification of temporal correlations has implications for
the design of wireless communication and control protocols, e.g. in
cyber-physical systems such as self-organized swarms of drones or smart traffic
networks.
| 1 | 1 | 0 | 0 | 0 | 0 |
Spectral properties and breathing dynamics of a few-body Bose-Bose mixture in a 1D harmonic trap | We investigate a few-body mixture of two bosonic components, each consisting
of two particles confined in a quasi one-dimensional harmonic trap. By means of
exact diagonalization with a correlated basis approach we obtain the low-energy
spectrum and eigenstates for the whole range of repulsive intra- and
inter-component interaction strengths. We analyse the eigenvalues as a function
of the inter-component coupling, thereby covering all the limiting regimes, and
characterize the behaviour in-between these regimes by exploiting the
symmetries of the Hamiltonian. Provided with this knowledge we study the
breathing dynamics in the linear-response regime by slightly quenching the trap
frequency symmetrically for both components. Depending on the choice of
interaction strengths, we identify 1 to 3 monopole modes besides the breathing
mode of the center of mass coordinate. For the uncoupled mixture each monopole
mode corresponds to the breathing oscillation of a specific relative
coordinate. Increasing the inter-component coupling first leads to multi-mode
oscillations in each relative coordinate, which turn into single-mode
oscillations of the same frequency in the composite-fermionization regime.
| 0 | 1 | 0 | 0 | 0 | 0 |
Study of secondary neutron interactions with $^{232}$Th, $^{129}$I, and $^{127}$I nuclei with the uranium assembly "QUINTA" at 2, 4, and 8 GeV deuteron beams of the JINR Nuclotron accelerator | The natural uranium assembly, "QUINTA", was irradiated with 2, 4, and 8 GeV
deuterons. The $^{232}$Th, $^{127}$I, and $^{129}$I samples have been exposed
to secondary neutrons produced in the assembly at a 20-cm radial distance from
the deuteron beam axis. The spectra of gamma rays emitted by the activated
$^{232}$Th, $^{127}$I, and $^{129}$I samples have been analyzed and several
tens of product nuclei have been identified. For each of those products,
neutron-induced reaction rates have been determined. The transmutation power
for the $^{129}$I samples is estimated. Experimental results were compared to
those calculated with well-known stochastic and deterministic codes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Short Proofs for Slow Consistency | Let $\operatorname{Con}(\mathbf T)\!\restriction\!x$ denote the finite
consistency statement "there are no proofs of contradiction in $\mathbf T$ with
$\leq x$ symbols". For a large class of natural theories $\mathbf T$, Pudlák
has shown that the lengths of the shortest proofs of
$\operatorname{Con}(\mathbf T)\!\restriction\!n$ in the theory $\mathbf T$
itself are bounded by a polynomial in $n$. At the same time he conjectures that
$\mathbf T$ does not have polynomial proofs of the finite consistency
statements $\operatorname{Con}(\mathbf T+\operatorname{Con}(\mathbf
T))\!\restriction\!n$. In contrast we show that Peano arithmetic
($\mathbf{PA}$) has polynomial proofs of
$\operatorname{Con}(\mathbf{PA}+\operatorname{Con}^*(\mathbf{PA}))\!\restriction\!n$,
where $\operatorname{Con}^*(\mathbf{PA})$ is the slow consistency statement for
Peano arithmetic, introduced by S.-D. Friedman, Rathjen and Weiermann. We also
obtain a new proof of the result that the usual consistency statement
$\operatorname{Con}(\mathbf{PA})$ is equivalent to $\varepsilon_0$ iterations
of slow consistency. Our argument is proof-theoretic, while previous
investigations of slow consistency relied on non-standard models of arithmetic.
| 0 | 0 | 1 | 0 | 0 | 0 |
Response to Comment on "Cell nuclei have lower refractive index and mass density than cytoplasm" | In a recent study entitled "Cell nuclei have lower refractive index and mass
density than cytoplasm", we provided strong evidence indicating that the
nuclear refractive index (RI) is lower than the RI of the cytoplasm for several
cell lines. In a complementary study in 2017, entitled "Is the nuclear
refractive index lower than cytoplasm? Validation of phase measurements and
implications for light scattering technologies", Steelman et al. observed a
lower nuclear RI also for other cell lines and ruled out methodological error
sources such as phase wrapping and scattering effects. Recently, Yurkin
composed a comment on these 2 publications, entitled "How a phase image of a
cell with nucleus refractive index smaller than that of the cytoplasm should
look like?", putting into question the methods used for measuring the cellular
and nuclear RI in the aforementioned publications by suggesting that a lower
nuclear RI would produce a characteristic dip in the measured phase profile in
situ. We point out the difficulty of identifying this dip in the presence of
other cell organelles, noise, or blurring due to the imaging point spread
function. Furthermore, we mitigate Yurkin's concerns regarding the ability of
the simple-transmission approximation to compare cellular and nuclear RI by
analyzing a set of phase images with a novel, scattering-based approach. We
conclude that the absence of a characteristic dip in the measured phase
profiles does not contradict the usage of the simple-transmission approximation
for the determination of the average cellular or nuclear RI. Our response can
be regarded as an addition to the response by Steelman, Eldridge and Wax. We
kindly ask the reader to attend to their thorough ascertainment prior to
reading our response.
| 0 | 0 | 0 | 0 | 1 | 0 |
On the composition of an arbitrary collection of $SU(2)$ spins: An Enumerative Combinatoric Approach | The whole enterprise of spin compositions can be recast as simple enumerative
combinatoric problems. We show here that enumerative combinatorics
(EC)\citep{book:Stanley-2011} is a natural setting for spin composition, and
easily leads to very general analytic formulae -- many of which are hitherto
absent from the literature. Based on these, we propose three general methods for
computing spin multiplicities; namely, 1) the multi-restricted composition, 2)
the generalized binomial and 3) the generating function methods. Symmetric and
anti-symmetric compositions of $SU(2)$ spins are also discussed, using
generating functions. Of particular importance is the observation that while
the common Clebsch-Gordan decomposition (CGD) -- which considers the spins as
distinguishable -- is related to integer compositions, the symmetric and
anti-symmetric compositions (where one considers the spins as
indistinguishable) are obtained considering integer partitions. The integers in
question here are none other than the occupation numbers of the
Holstein-Primakoff bosons.
\par The pervasiveness of $q-$analogues in our approach is a testament to the
fundamental role they play in spin compositions. In the appendix, some new
results in the power series representation of Gaussian polynomials (or
$q-$binomial coefficients) -- relevant to symmetric and antisymmetric
compositions -- are presented.
| 1 | 0 | 1 | 0 | 0 | 0 |
Perceived Performance of Webpages In the Wild: Insights from Large-scale Crowdsourcing of Above-the-Fold QoE | Clearly, no one likes webpages with poor quality of experience (QoE). Being
perceived as slow or fast is a key element in the overall perceived QoE of web
applications. While extensive effort has been put into optimizing web
applications (both in industry and academia), little work exists on
characterizing which aspects of the webpage loading process truly influence the
human end-user's perception of the "Speed" of a page. In this paper we present
"SpeedPerception", a large-scale web performance crowdsourcing framework
focused on understanding the perceived loading performance of above-the-fold
(ATF) webpage content. Our end goal is to create free open-source benchmarking
datasets to advance the systematic analysis of how humans perceive webpage
loading process. In Phase-1 of our "SpeedPerception" study using Internet
Retailer Top 500 (IR 500) websites
(this https URL), we found that commonly used
navigation metrics such as "onLoad" and "Time To First Byte (TTFB)" fail (less
than 60% match) to represent majority human perception when comparing the speed
of two webpages. We present a simple 3-variable-based machine learning model
that explains the majority of end-user choices better (with $87 \pm 2\%$
accuracy). In addition, our results suggest that the time needed by end-users
to evaluate the relative perceived speed of a webpage is far less than the time of
its "visualComplete" event.
| 1 | 0 | 0 | 1 | 0 | 0 |
Entanglement and exotic superfluidity in spin-imbalanced lattices | We investigate the properties of entanglement in one-dimensional fermionic
lattices at the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superfluid regime. By
analyzing occupation probabilities, which are concepts closely related to FFLO
and entanglement, we obtain approximate analytical expressions for the
spin-flip processes at the FFLO regime. We also apply density matrix
renormalization group calculations to obtain the exact ground-state
entanglement of the system in superfluid and non-superfluid regimes. Our
results reveal a pair-breaking avalanche appearing precisely at the
FFLO-normal phase transition. We find that entanglement is non-monotonic in
superfluid regimes, a feature that could be used as a signature of exotic
superfluidity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Informed Non-convex Robust Principal Component Analysis with Features | We revisit the problem of robust principal component analysis with features
acting as prior side information. To this aim, a novel, elegant, non-convex
optimization approach is proposed to decompose a given observation matrix into
a low-rank core and the corresponding sparse residual. Rigorous theoretical
analysis of the proposed algorithm results in exact recovery guarantees with
low computational complexity. Aptly designed synthetic experiments demonstrate
that our method is the first to wholly harness the power of non-convexity over
convexity in terms of both recoverability and speed. That is, the proposed
non-convex approach is more accurate and faster compared to the best available
algorithms for the problem under study. Two real-world applications, namely
image classification and face denoising further exemplify the practical
superiority of the proposed method.
| 0 | 0 | 0 | 1 | 0 | 0 |
A note on self orbit equivalences of Anosov flows and bundles with fiberwise Anosov flows | We show that a self orbit equivalence of a transitive Anosov flow on a
$3$-manifold which is homotopic to the identity has to either preserve every orbit
or the Anosov flow is $\mathbb{R}$-covered and the orbit equivalence has to be
of a specific type. This result shows that one can remove a relatively
unnatural assumption in a result of Farrell and Gogolev about the topological
rigidity of bundles supporting a fiberwise Anosov flow when the fiber is
$3$-dimensional.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Metallicity of the Intracluster Medium Over Cosmic Time: Further Evidence for Early Enrichment | We use Chandra X-ray data to measure the metallicity of the intracluster
medium (ICM) in 245 massive galaxy clusters selected from X-ray and
Sunyaev-Zel'dovich (SZ) effect surveys, spanning redshifts $0<z<1.2$.
Metallicities were measured in three different radial ranges, spanning cluster
cores through their outskirts. We explore trends in these measurements as a
function of cluster redshift, temperature, and surface brightness "peakiness"
(a proxy for gas cooling efficiency in cluster centers). The data at large
radii (0.5--1 $r_{500}$) are consistent with a constant metallicity, while at
intermediate radii (0.1--0.5 $r_{500}$) we see a late-time increase in
enrichment, consistent with the expected production and mixing of metals in
cluster cores. In cluster centers, there are strong trends of metallicity with
temperature and peakiness, reflecting enhanced metal production in the
lowest-entropy gas. Within the cool-core/sharply peaked cluster population,
there is a large intrinsic scatter in central metallicity and no overall
evolution, indicating significant astrophysical variations in the efficiency of
enrichment. The central metallicity in clusters with flat surface brightness
profiles is lower, with a smaller intrinsic scatter, but increases towards
lower redshifts. Our results are consistent with other recent measurements of
ICM metallicity as a function of redshift. They reinforce the picture implied
by observations of uniform metal distributions in the outskirts of nearby
clusters, in which most of the enrichment of the ICM takes place before cluster
formation, with significant later enrichment taking place only in cluster
centers, as the stellar populations of the central galaxies evolve.
| 0 | 1 | 0 | 0 | 0 | 0 |
Graphical Models: An Extension to Random Graphs, Trees, and Other Objects | In this work, we consider an extension of graphical models to random graphs,
trees, and other objects. To do this, many fundamental concepts for
multivariate random variables (e.g., marginal variables, Gibbs distribution,
Markov properties) must be extended to other mathematical objects; it turns out
that this extension is possible, as we will discuss, if we have a consistent,
complete system of projections on a given object. Each projection defines a
marginal random variable, allowing one to specify independence assumptions
between them. Furthermore, these independencies can be specified in terms of a
small subset of these marginal variables (which we call the atomic variables),
allowing the compact representation of independencies by a directed graph.
Projections also define factors, functions on the projected object space, and
hence a projection family defines a set of possible factorizations for a
distribution; these can be compactly represented by an undirected graph.
The invariances used in graphical models are essential for learning
distributions, not just on multivariate random variables, but also on other
objects. When they are applied to random graphs and random trees, the result is
a general class of models that is applicable to a broad range of problems,
including those in which the graphs and trees have complicated edge structures.
These models need not be conditioned on a fixed number of vertices, as is often
the case in the literature for random graphs, and can be used for problems in
which attributes are associated with vertices and edges. For graphs,
applications include the modeling of molecules, neural networks, and relational
real-world scenes; for trees, applications include the modeling of infectious
diseases, cell fusion, the structure of language, and the structure of objects
in visual scenes. Many classic models are particular instances of this
framework.
| 1 | 0 | 0 | 1 | 0 | 0 |
Boundary Control method and De Branges spaces. Schrödinger equation, Dirac system and Discrete Schrödinger operator | In the framework of the application of the Boundary Control method to solving
the inverse dynamical problems for the one-dimensional Schrödinger and Dirac
operators on the half-line and semi-infinite discrete Schrödinger operator,
we establish connections with the method of De Branges: for each of the
systems we construct the De Branges space and give a natural dynamical
interpretation of all its ingredients: the set of functions the De Branges space
consists of, the scalar product, and the reproducing kernel.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the evolution of galaxy spin in a cosmological hydrodynamic simulation of galaxy clusters | The traditional view of the morphology-spin connection is being challenged by
recent integral-field-unit observations, as the majority of early-type galaxies
are found to have a rotational component that is often as large as a dispersion
component. Mergers are often suspected to be critical in galaxy spin evolution,
yet the details of their roles are still unclear. We present the first results
on the spin evolution of galaxies in cluster environments through a
cosmological hydrodynamic simulation. Galaxies spin down globally with cosmic
evolution. Major (mass ratios > 1/4) and minor (1/4 $\geq$ mass ratios > 1/50)
mergers are important contributors to the spin down in particular in massive
galaxies. Minor mergers appear to have stronger cumulative effects than major
mergers. Surprisingly, the dominant driver of galaxy spin down seems to be
environmental effects rather than mergers. However, since multiple processes
act in combination, it is difficult to separate their individual roles. We
briefly discuss the caveats and future studies that are called for.
| 0 | 1 | 0 | 0 | 0 | 0 |
Diagrammatic Approach to Multiphoton Scattering | We present a method to systematically study multi-photon transmission in one
dimensional systems composed of correlated quantum emitters coupled to input
and output waveguides. Within the Green's function approach of the scattering
matrix (S-matrix), we develop a diagrammatic technique to analytically obtain
the system's scattering amplitudes while at the same time visualise all the
possible absorption and emission processes. Our method helps to reduce the
significant effort in finding the general response of a many-body bosonic
system, particularly the nonlinear response embedded in the Green's functions.
We demonstrate our proposal through physically relevant examples involving
scattering of multi-photon states from two-level emitters as well as from
arrays of correlated Kerr nonlinear resonators in the Bose-Hubbard model.
| 0 | 1 | 0 | 0 | 0 | 0 |
Efficient Computation of Feedback Control for Constrained Systems | A method is presented for solving the discrete-time finite-horizon Linear
Quadratic Regulator (LQR) problem subject to auxiliary linear equality
constraints, such as fixed end-point constraints. The method explicitly
determines an affine relationship between the control and state variables, as
in standard Riccati recursion, giving rise to feedback control policies that
account for constraints. Since the linearly-constrained LQR problem arises
commonly in robotic trajectory optimization, having a method that can
efficiently compute these solutions is important. We demonstrate some of the
useful properties and interpretations of said control policies, and we compare
the computation time of our method against existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Stochastic Large-scale Machine Learning Algorithm for Distributed Features and Observations | As the size of modern data sets exceeds the disk and memory capacities of a
single computer, machine learning practitioners have resorted to parallel and
distributed computing. Given that optimization is one of the pillars of machine
learning and predictive modeling, distributed optimization methods have
recently garnered ample attention, in particular when either observations or
features are distributed, but not both. We propose a general stochastic
algorithm where observations, features, and gradient components can be sampled
in a double distributed setting, i.e., with both features and observations
distributed. Very technical analyses establish convergence properties of the
algorithm under different conditions on the learning rate (diminishing to zero
or constant). Computational experiments in Spark demonstrate a superior
performance of our algorithm versus a benchmark in early iterations of the
algorithm, which is due to the stochastic components of the algorithm.
| 0 | 0 | 0 | 1 | 0 | 0 |
Unified Model of D-Term Inflation | Hybrid inflation, driven by a Fayet-Iliopoulos (FI) D term, is an intriguing
inflationary model. In its usual formulation, it however suffers from several
shortcomings. These pertain to the origin of the FI mass scale, the stability
of scalar fields during inflation, gravitational corrections in supergravity,
as well as to the latest constraints from the cosmic microwave background. We
demonstrate that these issues can be remedied if D-term inflation is realized
in the context of strongly coupled supersymmetric gauge theories. We suppose
that the D term is generated in consequence of dynamical supersymmetry
breaking. Moreover, we assume canonical kinetic terms in the Jordan frame as
well as an approximate shift symmetry along the inflaton direction. This
provides us with a unified picture of D-term inflation and high-scale
supersymmetry breaking. The D term may be associated with a gauged U(1)_B-L, so
that the end of inflation spontaneously breaks B-L in the visible sector.
| 0 | 1 | 0 | 0 | 0 | 0 |
Signal Recovery from Unlabeled Samples | In this paper, we study the recovery of a signal from a set of noisy linear
projections (measurements), when such projections are unlabeled, that is, the
correspondence between the measurements and the set of projection vectors
(i.e., the rows of the measurement matrix) is not known a priori. We consider a
special case of unlabeled sensing referred to as Unlabeled Ordered Sampling
(UOS) where the ordering of the measurements is preserved. We identify a
natural duality between this problem and classical Compressed Sensing (CS),
where we show that the unknown support (location of nonzero elements) of a
sparse signal in CS corresponds to the unknown indices of the measurements in
UOS. While in CS it is possible to recover a sparse signal from an
under-determined set of linear equations (fewer equations than the signal
dimension), successful recovery in UOS requires taking more samples than the
dimension of the signal. Motivated by this duality, we develop a Restricted
Isometry Property (RIP) similar to that in CS. We also design a low-complexity
Alternating Minimization algorithm that achieves a stable signal recovery under
the established RIP. We analyze our proposed algorithm for different signal
dimensions and number of measurements theoretically and investigate its
performance empirically via simulations. The results are reminiscent of a
phase transition similar to that occurring in CS.
| 1 | 0 | 0 | 1 | 0 | 0 |
Piecewise linear generalized Alexander's theorem in dimension at most 5 | We study piecewise linear co-dimension two embeddings of closed oriented
manifolds in Euclidean space, and show that any such embedding can always be
isotoped to be a closed braid as long as the ambient dimension is at most five,
extending results of Alexander (in ambient dimension three), and Viro and
independently Kamada (in ambient dimension four). We also show an analogous
result for higher co-dimension embeddings.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sample complexity of population recovery | The problem of population recovery refers to estimating a distribution based
on incomplete or corrupted samples. Consider a random poll of sample size $n$
conducted on a population of individuals, where each pollee is asked to answer
$d$ binary questions. We consider one of the two polling impediments: (a) in
lossy population recovery, a pollee may skip each question with probability
$\epsilon$, (b) in noisy population recovery, a pollee may lie on each question
with probability $\epsilon$. Given $n$ lossy or noisy samples, the goal is to
estimate the probabilities of all $2^d$ binary vectors simultaneously within
accuracy $\delta$ with high probability.
This paper settles the sample complexity of population recovery. For the lossy
model, the optimal sample complexity is
$\tilde\Theta(\delta^{-2\max\{\frac{\epsilon}{1-\epsilon},1\}})$, improving the
state of the art by Moitra and Saks in several ways: a lower bound is
established, the upper bound is improved and the result depends at most on the
logarithm of the dimension. Surprisingly, the sample complexity undergoes a
phase transition from parametric to nonparametric rate when $\epsilon$ exceeds
$1/2$. For noisy population recovery, the sharp sample complexity turns out to
be more sensitive to dimension and scales as $\exp(\Theta(d^{1/3}
\log^{2/3}(1/\delta)))$ except for the trivial cases of $\epsilon=0,1/2$ or
$1$.
For both models, our estimators simply compute the empirical mean of a
certain function, which is found by pre-solving a linear program (LP).
Curiously, the dual LP can be understood as Le Cam's method for lower-bounding
the minimax risk, thus establishing the statistical optimality of the proposed
estimators. The value of the LP is determined by complex-analytic methods.
| 1 | 0 | 1 | 1 | 0 | 0 |
Asymptotic Exponentiality of the First Exit Time of the Shiryaev-Roberts Diffusion with Constant Positive Drift | We consider the first exit time of a Shiryaev-Roberts diffusion with constant
positive drift from the interval $[0,A]$ where $A>0$. We show that the moment
generating function (Laplace transform) of a suitably standardized version of
the first exit time converges to that of the unit-mean exponential distribution
as $A\to+\infty$. The proof is explicit in that the moment generating function
of the first exit time is first expressed analytically and in a closed form,
and then the desired limit as $A\to+\infty$ is evaluated directly. The result
is of importance in the area of quickest change-point detection, and its
discrete-time counterpart has been previously established - although in a
different manner - by Pollak and Tartakovsky (2009).
| 0 | 0 | 1 | 1 | 0 | 0 |
Learning to Teach Reinforcement Learning Agents | In this article we study the transfer learning model of action advice under a
budget. We focus on reinforcement learning teachers providing action advice to
heterogeneous students playing the game of Pac-Man under a limited advice
budget. First, we examine several critical factors affecting advice quality in
this setting, such as the average performance of the teacher, its variance and
the importance of reward discounting in advising. The experiments show the
non-trivial importance of the coefficient of variation (CV) as a statistic for
choosing policies that generate advice. The CV statistic relates variance to
the corresponding mean. Second, the article studies policy learning for
distributing advice under a budget. Whereas most methods in the relevant
literature rely on heuristics for advice distribution we formulate the problem
as a learning one and propose a novel RL algorithm capable of learning when to
advise, adapting to the student and the task at hand. Furthermore, we argue
that learning to advise under a budget is an instance of a more generic
learning problem: Constrained Exploitation Reinforcement Learning.
| 1 | 0 | 0 | 0 | 0 | 0 |
Giant room-temperature barocaloric effects in PDMS rubber at low pressures | The barocaloric effect is still an incipient scientific topic, but it has
been attracting an increasing attention in the last years due to the promising
perspectives for its application in alternative cooling devices. Here, we
present giant values of barocaloric entropy change and temperature change
induced by low pressures in PDMS elastomer around room temperature. Adiabatic
temperature changes of 12.0 K and 28.5 K were directly measured for pressure
changes of 173 MPa and 390 MPa, respectively, associated with large normalized
temperature changes ($\sim$70 K GPa$^{-1}$). From adiabatic temperature change data, we
obtained entropy change values larger than 140 J kg$^{-1}$ K$^{-1}$. We found barocaloric
effect values that exceed those previously reported for any promising
barocaloric materials from direct measurements of temperature change around
room temperature. Our results stimulate the study of the barocaloric effect in
elastomeric polymers and broaden the pathway to use this effect in solid-state
cooling technologies.
| 0 | 1 | 0 | 0 | 0 | 0 |
$({\mathfrak{gl}}_M, {\mathfrak{gl}}_N)$-Dualities in Gaudin Models with Irregular Singularities | We establish $({\mathfrak{gl}}_M, {\mathfrak{gl}}_N)$-dualities between
quantum Gaudin models with irregular singularities. Specifically, for any $M, N
\in {\mathbb Z}_{\geq 1}$ we consider two Gaudin models: the one associated
with the Lie algebra ${\mathfrak{gl}}_M$ which has a double pole at infinity
and $N$ poles, counting multiplicities, in the complex plane, and the same
model but with the roles of $M$ and $N$ interchanged. Both models can be
realized in terms of Weyl algebras, i.e., free bosons; we establish that, in
this realization, the algebras of integrals of motion of the two models
coincide. At the classical level we establish two further generalizations of
the duality. First, we show that there is also a duality for realizations in
terms of free fermions. Second, in the bosonic realization we consider the
classical cyclotomic Gaudin model associated with the Lie algebra
${\mathfrak{gl}}_M$ and its diagram automorphism, with a double pole at
infinity and $2N$ poles, counting multiplicities, in the complex plane. We
prove that it is dual to a non-cyclotomic Gaudin model associated with the Lie
algebra ${\mathfrak{sp}}_{2N}$, with a double pole at infinity and $M$ simple
poles in the complex plane. In the special case $N=1$ we recover the well-known
self-duality in the Neumann model.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dynamic coupling of a finite element solver to large-scale atomistic simulations | We propose a method for efficiently coupling the finite element method with
atomistic simulations, while using molecular dynamics or kinetic Monte Carlo
techniques. Our method can dynamically build an optimized unstructured mesh
that follows the geometry defined by atomistic data. On this mesh, different
multiphysics problems can be solved to obtain distributions of physical
quantities of interest, which can be fed back to the atomistic system. The
simulation flow is optimized to maximize computational efficiency while
maintaining good accuracy. This is achieved by providing the modules for a)
optimization of the density of the generated mesh according to requirements of
a specific geometry and b) efficient extension of the finite element domain
without a need to extend the atomistic one. Our method is organized as an
open-source C++ code. In the current implementation, an efficient Laplace
equation solver for the calculation of the electric field distribution near a rough
atomistic surface demonstrates the capability of the suggested approach.
| 1 | 1 | 0 | 0 | 0 | 0 |
A study of periodograms standardized using training data sets and application to exoplanet detection | When the noise affecting time series is colored with unknown statistics, a
difficulty for sinusoid detection is to control the true significance level of
the test outcome. This paper investigates the possibility of using training
data sets of the noise to improve this control. Specifically, we analyze the
performances of various detectors applied to periodograms standardized using
training data sets. Emphasis is put on sparse detection in the Fourier domain
and on the limitation posed by the necessarily finite size of the training sets
available in practice. We study the resulting false alarm and detection rates
and show that standardization leads in some cases to powerful constant false
alarm rate tests. The study is both analytical and numerical. Although
analytical results are derived in an asymptotic regime, numerical results show
that theory accurately describes the tests' behaviour for moderately large
sample sizes. Throughout the paper, an application of the considered
periodogram standardization is presented for exoplanet detection in radial
velocity data.
| 0 | 1 | 1 | 1 | 0 | 0 |
An Efficient Deep Learning Technique for the Navier-Stokes Equations: Application to Unsteady Wake Flow Dynamics | We present an efficient deep learning technique for the model reduction of
the Navier-Stokes equations for unsteady flow problems. The proposed technique
relies on the Convolutional Neural Network (CNN) and the stochastic gradient
descent method. Of particular interest is to predict the unsteady fluid forces
for different bluff body shapes at low Reynolds number. The discrete
convolution process with a nonlinear rectification is employed to approximate
the mapping between the bluff-body shape and the fluid forces. The deep neural
network is fed by the Euclidean distance function as the input and the target
data generated by the full-order Navier-Stokes computations for primitive bluff
body shapes. The convolutional networks are iteratively trained using the
stochastic gradient descent method with the momentum term to predict the fluid
force coefficients of different geometries and the results are compared with
the full-order computations. We attempt to provide a physical analogy of the
stochastic gradient method with the momentum term with the simplified form of
the incompressible Navier-Stokes momentum equation. We also construct a direct
relationship between the CNN-based deep learning and the Mori-Zwanzig formalism
for the model reduction of a fluid dynamical system. A systematic convergence
and sensitivity study is performed to identify the effective dimensions of the
deep-learned CNN process such as the convolution kernel size, the number of
kernels and the convolution layers. Within the error threshold, the prediction
based on our deep convolutional network has a speed-up nearly four orders of
magnitude compared to the full-order results and consumes an insignificant
fraction of computational resources. The proposed CNN-based approximation
procedure has a profound impact on the parametric design of bluff bodies and
the feedback control of separated flows.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accurate fast computation of steady two-dimensional surface gravity waves in arbitrary depth | This paper describes an efficient algorithm for computing steady
two-dimensional surface gravity waves in irrotational motion. The algorithm
complexity is O(N log N), N being the number of Fourier modes. The algorithm
allows the arbitrary precision computation of waves in arbitrary depth, i.e.,
it works efficiently for Stokes, cnoidal and solitary waves, even for quite
large steepnesses. The method is based on conformal mapping, Babenko equation
rewritten in a suitable way, pseudo-spectral method and Petviashvili's
iterations. The efficiency of the algorithm is illustrated via some relevant
numerical examples. The code is open source, so interested readers can easily
check the claims, use and modify the algorithm.
| 0 | 1 | 1 | 0 | 0 | 0 |
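The combination of a pseudo-spectral discretization with Petviashvili's iterations, as used in the abstract above, can be illustrated on a simpler model problem: the KdV solitary wave, where the exact sech² solution is known. This is a sketch on a periodic domain, not the paper's Babenko-equation solver:

```python
import numpy as np

def petviashvili_kdv(c=1.0, L=40.0, n=512, iters=100):
    """Petviashvili iteration for the KdV solitary-wave equation
        u'' - c*u + 3*u**2 = 0,
    solved pseudo-spectrally on the periodic domain [-L/2, L/2).
    In Fourier space the equation reads (c + k^2) * u_hat = 3 * fft(u**2).
    """
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    sym = c + k**2                      # symbol of the linear operator
    u = np.exp(-x**2)                   # rough initial guess
    for _ in range(iters):
        nl_hat = 3 * np.fft.fft(u**2)   # nonlinear term in Fourier space
        u_hat = np.fft.fft(u)
        # Stabilizing factor; exponent gamma = 2 for a quadratic nonlinearity.
        s = np.sum(sym * np.abs(u_hat)**2) / np.real(
            np.sum(np.conj(u_hat) * nl_hat))
        u = np.real(np.fft.ifft(s**2 * nl_hat / sym))
    return x, u

x, u = petviashvili_kdv()
exact = 0.5 / np.cosh(x / 2)**2   # known solitary wave u = (c/2) sech^2(sqrt(c) x / 2)
```

The stabilizing factor `s` tends to 1 at the fixed point; without it, the plain fixed-point map would diverge for this kind of equation.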
AutoWIG: Automatic Generation of Python Bindings for C++ Libraries | Most of Python and R scientific packages incorporate compiled scientific
libraries to speed up the code and reuse legacy libraries. While several
semi-automatic solutions exist to wrap these compiled libraries, the process of
wrapping a large library is cumbersome and time consuming. In this paper, we
introduce AutoWIG, a Python package that automatically wraps compiled libraries
into high-level languages using LLVM/Clang technologies and the Mako templating
engine. Our approach is automatic, extensible, and applies to complex C++
libraries, composed of thousands of classes or incorporating modern
meta-programming constructs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fate of Weyl semimetals in the presence of incommensurate potentials | We investigate the effect of the incommensurate potential on Weyl semimetal,
which is proposed to be realized in ultracold atomic systems trapped in
three-dimensional optical lattices. For the system without the Fermi arc, we
find that the Weyl points are robust against the incommensurate potential and
the system enters into a metallic phase only when the incommensurate potential
strength exceeds a critical value. We unveil the transition by analysing the
properties of wave functions and the density of states as a function of the
incommensurate potential strength. We also study the system with Fermi arcs and
find that the Fermi arcs are sensitive to the incommensurate potential and can
be destroyed by a weak incommensurate potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
Word equations in linear space | Word equations are an important problem on the intersection of formal
languages and algebra. Given two sequences consisting of letters and variables
we are to decide whether there is a substitution for the variables that turns
this equation into true equality of strings. The computational complexity of
this problem remains unknown, with the best lower and upper bounds being NP and
PSPACE. Recently, a novel technique of recompression was applied to this
problem, simplifying the known proofs and lowering the space complexity to
(nondeterministic) O(n log n). In this paper we show that word equations are in
nondeterministic linear space, thus the language of satisfiable word equations
is context-sensitive. We use the known recompression-based algorithm and
additionally employ Huffman coding for letters. The proof, however, uses
analysis of how the fragments of the equation depend on each other as well as a
new strategy for nondeterministic choices of the algorithm, which uses several
new ideas to limit the space occupied by the letters.
| 1 | 0 | 0 | 0 | 0 | 0 |
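The word-equations abstract above additionally employs Huffman coding for letters to reach linear space. A standard Huffman-code construction (an illustrative sketch, not the paper's encoding):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code for the letters of `text`; returns {letter: bitstring}."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-letter alphabet
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {letter: code-so-far}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
encoded_len = sum(len(code[ch]) for ch in "abracadabra")
```

Frequent letters get short codewords, which is the property the abstract exploits to bound the total space occupied by letters in the equation.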
Selective Strong Structural Minimum Cost Resilient Co-Design for Regular Descriptor Linear Systems | This paper addresses the problem of minimum cost resilient
actuation-sensing-communication co-design for regular descriptor systems while
ensuring selective strong structural system's properties. More specifically,
the problem consists of determining the minimum cost deployment of actuation
and sensing technology, as well as communication between these, such that
decentralized control approaches are viable for an arbitrary realization of
regular descriptor systems satisfying a pre-specified selective structure,
i.e., some entries can be zero, nonzero, or either zero/nonzero. Towards this
goal, we rely on strong structural systems theory and extend it to cope with
the selective structure that casts resiliency/robustness properties and
uncertainty properties of system's model. Upon such framework, we introduce the
notion of selective strong structural fixed modes as a characterization of the
feasibility of decentralized control laws. Also, we provide necessary and
sufficient conditions for this property to hold, and show how these conditions
can be leveraged to determine the minimum cost resilient placement of
actuation-sensing-communication technology ensuring feasible solutions. In
particular, we study the minimum cost resilient actuation and sensing
placement, upon which we construct the solution to our problem. Finally, we
illustrate the applicability of the main results of this paper on an electric
power grid example.
| 0 | 0 | 1 | 0 | 0 | 0 |
Maximum a Posteriori Policy Optimisation | We introduce a new algorithm for reinforcement learning called Maximum
a posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative
entropy objective. We show that several existing methods can directly be
related to our derivation. We develop two off-policy algorithms and demonstrate
that they are competitive with the state-of-the-art in deep reinforcement
learning. In particular, for continuous control, our method outperforms
existing methods with respect to sample efficiency, premature convergence and
robustness to hyperparameter settings while achieving similar or better final
performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
On Geodesic Completeness for Riemannian Metrics on Smooth Probability Densities | The geometric approach to optimal transport and information theory has
triggered the interpretation of probability densities as an
infinite-dimensional Riemannian manifold. The most studied Riemannian
structures are Otto's metric, yielding the $L^2$-Wasserstein distance of
optimal mass transport, and the Fisher--Rao metric, predominant in the theory
of information geometry. On the space of smooth probability densities, none of
these Riemannian metrics are geodesically complete---a property desirable for
example in imaging applications. That is, the existence interval for solutions
to the geodesic flow equations cannot be extended to the whole real line. Here
we study a class of Hamilton--Jacobi-like partial differential equations
arising as geodesic flow equations for higher-order Sobolev type metrics on the
space of smooth probability densities. We give order conditions for global
existence and uniqueness, thereby providing geodesic completeness. The system
we study is an interesting example of a flow equation with loss of derivatives,
which is well-posed in the smooth category, yet non-parabolic and fully
non-linear. On a more general note, the paper establishes a link between
geometric analysis on the space of probability densities and analysis of
Euler-Arnold equations in topological hydrodynamics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hacker Combat: A Competitive Sport from Programmatic Dueling & Cyberwarfare | The history of humankind has included competitive activities of many
different forms. Sports have offered many benefits beyond that of
entertainment. At the time of this article, there exists no competitive
ecosystem for cyber security beyond that of conventional capture-the-flag
competitions, and the like. This paper introduces a competitive framework with
a foundation on computer science, and hacking. This proposed competitive
landscape encompasses the ideas underlying information security, software
engineering, and cyber warfare. We also demonstrate the opportunity to rank,
score, & categorize actionable skill levels into tiers of capability.
Physiological metrics are analyzed from participants during gameplay. These
analyses provide support regarding the intricacies required for competitive
play, and analysis of play. We use these intricacies to build a case for an
organized competitive ecosystem. Using previous player behavior from gameplay,
we also demonstrate the generation of an artificial agent purposed with
gameplay at a competitive level.
| 1 | 0 | 0 | 0 | 0 | 0 |
Providing Effective Real-time Feedback in Simulation-based Surgical Training | Virtual reality simulation is becoming popular as a training platform in
surgical education. However, one important aspect of simulation-based surgical
training that has not received much attention is the provision of automated
real-time performance feedback to support the learning process. Performance
feedback is actionable advice that improves novice behaviour. In simulation,
automated feedback is typically extracted from prediction models trained using
data mining techniques. Existing techniques suffer from either low
effectiveness or low efficiency resulting in their inability to be used in
real-time. In this paper, we propose a random forest based method that finds a
balance between effectiveness and efficiency. Experimental results in a
temporal bone surgery simulation show that the proposed method is able to
extract highly effective feedback at a high level of efficiency.
| 1 | 0 | 0 | 0 | 0 | 0 |
Large-Scale Classification of Structured Objects using a CRF with Deep Class Embedding | This paper presents a novel deep learning architecture to classify structured
objects in datasets with a large number of visually similar categories. We
model sequences of images as linear-chain CRFs, and jointly learn the
parameters from both local-visual features and neighboring classes. The visual
features are computed by convolutional layers, and the class embeddings are
learned by factorizing the CRF pairwise potential matrix. This forms a highly
nonlinear objective function which is trained by optimizing a local likelihood
approximation with batch-normalization. This model overcomes the difficulties
of existing CRF methods to learn the contextual relationships thoroughly when
there is a large number of classes and the data is sparse. The performance of
the proposed method is illustrated on a huge dataset that contains images of
retail-store product displays, taken in varying settings and viewpoints, and
shows significantly improved results compared to linear CRF modeling and
unnormalized likelihood optimization.
| 1 | 0 | 0 | 0 | 0 | 0 |
code2seq: Generating Sequences from Structured Representations of Code | The ability to generate natural language sequences from source code snippets
has a variety of applications such as code summarization, documentation, and
retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine
translation (NMT), have achieved state-of-the-art performance on these tasks by
treating source code as a sequence of tokens. We present ${\rm {\scriptsize
CODE2SEQ}}$: an alternative approach that leverages the syntactic structure of
programming languages to better encode source code. Our model represents a code
snippet as the set of compositional paths in its abstract syntax tree (AST) and
uses attention to select the relevant paths while decoding. We demonstrate the
effectiveness of our approach for two tasks, two programming languages, and
four datasets of up to $16$M examples. Our model significantly outperforms
previous models that were specifically designed for programming languages, as
well as state-of-the-art NMT models. An interactive online demo of our model is
available at this http URL. Our code, data and trained models are
available at this http URL.
| 1 | 0 | 0 | 1 | 0 | 0 |
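The code2seq abstract above represents a snippet as the set of paths between leaf pairs in its abstract syntax tree. A rough Python-only analogue using the standard `ast` module (the path format here is a simplification assumed for illustration; the real model also compresses paths and learns embeddings):

```python
import ast
from itertools import combinations

def ast_path_contexts(code, max_contexts=200):
    """Sketch of code2seq-style path contexts: each context is
    (leaf token, AST path between the leaves, leaf token), where the path
    runs up from the first leaf through their lowest common ancestor and
    down to the second leaf."""
    tree = ast.parse(code)
    leaves = []  # (token, list of AST nodes from the root down to the leaf)

    def collect(node, trail):
        trail = trail + [node]
        if isinstance(node, ast.Name):
            leaves.append((node.id, trail))
        elif isinstance(node, ast.Constant):
            leaves.append((repr(node.value), trail))
        for child in ast.iter_child_nodes(node):
            collect(child, trail)

    collect(tree, [])
    names = lambda nodes: [type(n).__name__ for n in nodes]
    contexts = []
    for (tok1, p1), (tok2, p2) in combinations(leaves, 2):
        i = 0  # length of the shared root prefix (compared by node identity)
        while i < min(len(p1), len(p2)) and p1[i] is p2[i]:
            i += 1
        path = names(p1[i:][::-1]) + names([p1[i - 1]]) + names(p2[i:])
        contexts.append((tok1, "|".join(path), tok2))
    return contexts[:max_contexts]

contexts = ast_path_contexts("def add(a, b):\n    return a + b")
```

For `return a + b`, this yields the context `("a", "Name|BinOp|Name", "b")`, i.e., the two identifiers connected through their `BinOp` ancestor.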
Learning Neural Parsers with Deterministic Differentiable Imitation Learning | We explore the problem of learning to decompose spatial tasks into segments,
as exemplified by the problem of a painting robot covering a large object.
Inspired by the ability of classical decision tree algorithms to construct
structured partitions of their input spaces, we formulate the problem of
decomposing objects into segments as a parsing approach. We make the insight
that the derivation of a parse-tree that decomposes the object into segments
closely resembles a decision tree constructed by ID3, which can be built when
the ground truth is available. We learn to imitate an expert parsing oracle, such
that our neural parser can generalize to parse natural images without ground
truth. We introduce a novel deterministic policy gradient update, DRAG (i.e.,
DeteRministically AGgrevate) in the form of a deterministic actor-critic
variant of AggreVaTeD, to train our neural parser. From another perspective,
our approach is a variant of the Deterministic Policy Gradient suitable for the
imitation learning setting. The deterministic policy representation offered by
training our neural parser with DRAG allows it to outperform state of the art
imitation and reinforcement learning approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
Dimension of the minimum set for the real and complex Monge-Ampère equations in critical Sobolev spaces | We prove that the zero set of a nonnegative plurisubharmonic function that
solves $\det (\partial \overline{\partial} u) \geq 1$ in $\mathbb{C}^n$ and is
in $W^{2, \frac{n(n-k)}{k}}$ contains no analytic sub-variety of dimension $k$
or larger. Along the way we prove an analogous result for the real
Monge-Ampère equation, which is also new. These results are sharp in view of
well-known examples of Pogorelov and B{\l}ocki. As an application, in the real
case we extend interior regularity results to the case that $u$ lies in a
critical Sobolev space (or more generally, certain Sobolev-Orlicz spaces).
| 0 | 0 | 1 | 0 | 0 | 0 |
Road Detection Technique Using Filters with Application to Autonomous Driving System | Autonomous driving systems are widely used in industry and in our daily
lives; they assist in production, but are mainly used for exploration of
dangerous or unfamiliar locations. Thus, for successful exploration, navigation
plays a significant role. Road detection is an essential factor that helps
autonomous robots achieve reliable navigation.
Various techniques using camera sensors have been proposed by numerous scholars
with inspiring results, but their techniques are still vulnerable to these
environmental noises: rain, snow, light intensity and shadow. In addressing
these problems, this paper proposes to enhance the road detection system with
filtering algorithms to overcome these limitations. The Normalized Difference
Index (NDI) and morphological operations are the filtering algorithms used to
address the effect of shadow; guidance and re-guidance image filtering
algorithms are used to address the effect of rain and/or snow; and dark channel
image and specular-to-diffuse filters are used to address light-intensity
effects. The experimental performance of the road detection system with
filtering algorithms was tested qualitatively and quantitatively using the
following evaluation schemes: False Negative Rate (FNR) and False Positive Rate
(FPR). Comparison of the road detection system with and without the
filtering algorithms shows the filtering algorithms' capability to suppress the
effect of environmental noises, because better road/non-road classification is
achieved by the road detection system with the filtering algorithms. This
improvement further enhances path planning/region classification for
autonomous driving systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
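The Normalized Difference Index named in the abstract above is simple to sketch. The particular channel pair (green vs. red) is an assumption made here for illustration; the key property is that a ratio of channel differences to channel sums cancels a uniform intensity scaling such as a shadow:

```python
import numpy as np

def normalized_difference_index(image, eps=1e-8):
    """Normalized Difference Index over an RGB image:
        NDI = (G - R) / (G + R + eps)
    NDI is bounded in [-1, 1] and invariant to a uniform scaling of
    intensity (e.g., a shadow dimming both channels by the same factor).
    The G/R channel pair is a hypothetical choice for this sketch.
    """
    img = image.astype(float)
    r, g = img[..., 0], img[..., 1]
    return (g - r) / (g + r + eps)

# A shadow multiplies intensities by a constant, so NDI is unchanged:
patch = np.array([[[100.0, 120.0, 90.0]]])
shadowed = 0.4 * patch
```

This invariance is why such indices help separate road pixels from shadow artifacts before classification.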
Relativistic Spacecraft Propelled by Directed Energy | Achieving relativistic flight to enable extrasolar exploration is one of the
dreams of humanity and the long term goal of our NASA Starlight program. We
derive a fully relativistic solution for the motion of a spacecraft propelled
by radiation pressure from a directed energy system. Depending on the system
parameters, low mass spacecraft can achieve relativistic speeds; thereby
enabling interstellar exploration. The diffraction of the directed energy
system plays an important role and limits the maximum speed of the spacecraft.
We consider 'photon recycling' as a possible method to achieving higher speeds.
We also discuss recent claims that our previous work on this topic is incorrect
and show that these claims arise from an improper treatment of causality.
| 0 | 1 | 0 | 0 | 0 | 0 |
Outward Influence and Cascade Size Estimation in Billion-scale Networks | Estimating cascade size and nodes' influence is a fundamental task in social,
technological, and biological networks. Yet this task is extremely challenging
due to the sheer size and the structural heterogeneity of networks. We
investigate a new influence measure, termed outward influence (OI), defined as
the (expected) number of nodes that a subset of nodes $S$ will activate,
excluding the nodes in $S$. Thus, OI equals the influence spread of $S$ (the
de facto standard measure) minus $|S|$. OI is not only more informative for nodes with
small influence, but also critical in designing new effective sampling and
statistical estimation methods.
Based on OI, we propose SIEA/SOIEA, novel methods to estimate influence
spread/outward influence at scale and with rigorous theoretical guarantees. The
proposed methods are built on two novel components: 1) IICP, an importance
sampling method for outward influence, and 2) RSA, a robust mean estimation
method that minimizes the number of samples by analyzing the variance and range
of the random variables. Compared to the state of the art for influence estimation,
SIEA is $\Omega(\log^4 n)$ times faster in theory and up to several orders of
magnitude faster in practice. For the first time, influence of nodes in the
networks of billions of edges can be estimated with high accuracy within a few
minutes. Our comprehensive experiments on real-world networks also give
evidence against the popular practice of using a fixed number, e.g. 10K or 20K,
of samples to compute the "ground truth" for influence spread.
| 1 | 0 | 0 | 0 | 0 | 0 |
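The definition of outward influence in the abstract above can be checked with a naive Monte Carlo simulation under the independent cascade model. This brute-force sketch is only a baseline definition check, not the paper's IICP/RSA estimators:

```python
import random

def outward_influence(graph, seeds, p=0.5, runs=20000, rng_seed=1):
    """Monte Carlo estimate of outward influence under the independent
    cascade model: the expected number of activated nodes OUTSIDE the
    seed set S, i.e., influence spread of S minus |S|."""
    rng = random.Random(rng_seed)
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:                       # propagate one cascade
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active) - len(seeds)     # exclude the seeds themselves
    return total / runs

# Tiny line graph a -> b -> c with activation probability 0.5:
# expected outside activations = P(b active) + P(c active) = 0.5 + 0.25 = 0.75.
g = {"a": ["b"], "b": ["c"]}
oi = outward_influence(g, ["a"])
```

On billion-edge networks this naive approach is hopeless, which is exactly the gap the SIEA/SOIEA methods in the abstract address.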
Estimation of a noisy subordinated Brownian Motion via two-scales power variations | High frequency based estimation methods for a semiparametric pure-jump
subordinated Brownian motion exposed to a small additive microstructure noise
are developed building on the two-scales realized variations approach
originally developed by Zhang et al. (2005) for the estimation of the
integrated variance of a continuous Ito process. The proposed estimators are
shown to be robust against the noise and, surprisingly, to attain better rates
of convergence than their precursors, method of moment estimators, even in the
absence of microstructure noise. Our main results give approximate optimal
values for the number K of regular sparse subsamples to be used, which is an
important tune-up parameter of the method. Finally, a data-driven plug-in
procedure is devised to implement the proposed estimators with the optimal
K-value. The developed estimators exhibit superior performance as illustrated
by Monte Carlo simulations and a real high-frequency data application.
| 0 | 0 | 1 | 1 | 0 | 0 |
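The two-scales construction referenced in the abstract above is easy to sketch for the baseline case of Zhang et al. (2005): average the realized variance over K offset sparse subgrids, then subtract a bias correction built from the full-grid realized variance, which is dominated by the noise. The simulation parameters below are hypothetical:

```python
import numpy as np

def tsrv(prices, K):
    """Two-scales realized variance: averaged sparse-grid realized
    variance, bias-corrected by the full-grid realized variance."""
    n = len(prices) - 1
    rv_full = np.sum(np.diff(prices) ** 2)               # noise-dominated
    rv_sparse = np.mean([np.sum(np.diff(prices[k::K]) ** 2)
                         for k in range(K)])             # K offset subgrids
    n_bar = (n - K + 1) / K                              # avg subgrid size
    return rv_sparse - (n_bar / n) * rv_full

rng = np.random.default_rng(0)
n, sigma, noise_sd = 23400, 0.2, 0.0005
X = np.cumsum(rng.normal(0, sigma / np.sqrt(n), n + 1))  # efficient log-price
Y = X + rng.normal(0, noise_sd, n + 1)                   # observed with noise
iv_hat = tsrv(Y, K=30)                                   # true IV = sigma**2 = 0.04
```

The abstract's contribution is extending this idea to pure-jump subordinated Brownian motion and giving an approximately optimal choice of K, which is fixed by hand in this sketch.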
Mirror actuation design for the interferometer control of the KAGRA gravitational wave telescope | KAGRA is a 3-km cryogenic interferometric gravitational wave telescope
located at an underground site in Japan. In order to achieve its target
sensitivity, the relative positions of the mirrors of the interferometer must
be finely adjusted with attached actuators. We have developed a model to
simulate the length control loops of the KAGRA interferometer with realistic
suspension responses and various noises for mirror actuation. Using our model,
we have designed the actuation parameters to have sufficient force range to
acquire lock as well as to control all the length degrees of freedom without
introducing excess noise.
| 0 | 1 | 0 | 0 | 0 | 0 |