text (string, lengths 57 to 2.88k) | labels (sequence of length 6) |
---|---|
Title: Transverse Shift in Andreev Reflection,
Abstract: An incoming electron is reflected back as a hole at a
normal-metal-superconductor interface, a process known as Andreev reflection.
We predict that there exists a universal transverse shift in this process due
to the effect of spin-orbit coupling in the normal metal. In particular, using
both the scattering approach and the argument of angular momentum conservation,
we demonstrate that the shifts are pronounced for lightly-doped Weyl
semimetals, and are opposite for incoming electrons with different chirality,
generating a chirality-dependent Hall effect for the reflected holes. The
predicted shift is not limited to Weyl systems, but exists for a general
three-dimensional spin-orbit-coupled metal interfaced with a superconductor. | [
0,
1,
0,
0,
0,
0
] |
Title: Dynamics of resonances and equilibria of Low Earth Objects,
Abstract: The nearby space surrounding the Earth is densely populated by artificial
satellites and instruments, whose orbits are distributed within the
Low-Earth-Orbit region (LEO), at altitudes ranging between 90 and 2,000 km. As
a consequence of collisions and fragmentations, many space debris of different
sizes are left in the LEO region. Given the threat posed by possible collisions
of debris with operational or manned satellites, the study of their dynamics is
nowadays mandatory. This work focuses on the existence of equilibria and the
dynamics of resonances in LEO. We base our results on a simplified model which
includes the geopotential and the atmospheric drag. Using this model, we make a
qualitative study of the resonances and the equilibrium positions, including
their location and stability. The dissipative effect of the atmosphere provokes
a tidal decay, but we give examples of different behaviors, namely a
straightforward passage through the resonance or rather a temporary capture. We
also investigate the effect of the solar cycle, which is responsible for
fluctuations of the atmospheric density, and we analyze the influence of the
Sun and the Moon on LEO
objects. | [
0,
1,
0,
0,
0,
0
] |
Title: A null test of General Relativity: New limits on Local Position Invariance and the variation of fundamental constants,
Abstract: We compare the long-term fractional frequency variation of four hydrogen
masers that are part of an ensemble of clocks comprising the National Institute
of Standards and Technology (NIST), Boulder, timescale with the fractional
frequencies of primary frequency standards operated by leading metrology
laboratories in the United States, France, Germany, Italy and the United
Kingdom for a period extending more than 14 years. The measure of the assumed
variation of the non-gravitational interaction (LPI parameter $\beta$) within
the atoms of H and Cs over time, as the Earth orbits the Sun, has been
constrained to $\beta=(2.2 \pm 2.5)\times 10^{-7}$, a factor-of-two improvement
over previous estimates. Using our results together with the previous best
estimates of $\beta$ based on Rb vs. Cs, and Rb vs. H comparisons, we impose
the most stringent limits to date on the dimensionless coupling constants that
relate the variation of fundamental constants such as the fine-structure
constant and the scaled quark mass with the strong (QCD) interaction to the
variation in the local gravitational potential. For any metric theory of
gravity $\beta=0$. | [
0,
1,
0,
0,
0,
0
] |
Title: SG1120-1202: Mass-Quenching as Tracked by UV Emission in the Group Environment at z=0.37,
Abstract: We use the Hubble Space Telescope to obtain WFC3/F390W imaging of the
supergroup SG1120-1202 at z=0.37, mapping the UV emission of 138
spectroscopically confirmed members. We measure total (F390W-F814W) colors and
visually classify the UV morphology of individual galaxies as "clumpy" or
"smooth." Approximately 30% of the members have pockets of UV emission (clumpy)
and we identify for the first time in the group environment galaxies with UV
morphologies similar to the jellyfish galaxies observed in massive clusters. We
stack the clumpy UV members and measure a shallow internal color gradient,
which indicates unobscured star formation is occurring throughout these
galaxies. We also stack the four galaxy groups and measure a strong trend of
decreasing UV emission with decreasing projected group distance ($R_{proj}$).
We find that the strong correlation between decreasing UV emission and
increasing stellar mass can fully account for the observed trend in
(F390W-F814W) - $R_{proj}$, i.e., mass-quenching is the dominant mechanism for
extinguishing UV emission in group galaxies. Our extensive multi-wavelength
analysis of SG1120-1202 indicates that stellar mass is the primary predictor of
UV emission, but that the increasing fraction of massive (red/smooth) galaxies
at $R_{proj} < 2R_{200}$ and the existence of jellyfish candidates are due to
the group environment. | [
0,
1,
0,
0,
0,
0
] |
Title: Convergence rate bounds for a proximal ADMM with over-relaxation stepsize parameter for solving nonconvex linearly constrained problems,
Abstract: This paper establishes convergence rate bounds for a variant of the proximal
alternating direction method of multipliers (ADMM) for solving nonconvex
linearly constrained optimization problems. The variant of the proximal ADMM
allows the inclusion of an over-relaxation stepsize parameter belonging to the
interval $(0,2)$. To the best of our knowledge, all related papers in the
literature only consider the case where the over-relaxation parameter lies in
the interval $(0,(1+\sqrt{5})/2)$. | [
0,
0,
1,
0,
0,
0
] |
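The over-relaxation step the abstract refers to can be sketched on a convex toy problem; the paper's setting is nonconvex and far more general, so the problem, step sizes, and names below are purely illustrative:

```python
# Toy sketch of ADMM with an over-relaxation stepsize theta in (0, 2),
# on the convex problem: minimize 0.5*x^2 + 0.5*z^2 subject to x + z = 1.
# (The paper treats nonconvex linearly constrained problems; this convex
# toy only illustrates the over-relaxation step itself.)
def admm_over_relaxed(theta=1.8, rho=1.0, iters=200):
    x = z = lam = 0.0
    for _ in range(iters):
        # x-update: argmin_x 0.5*x^2 + lam*(x + z - 1) + rho/2*(x + z - 1)^2
        x = (rho * (1.0 - z) - lam) / (1.0 + rho)
        # over-relaxation: blend the new x with the residual target (1 - z)
        x_hat = theta * x + (1.0 - theta) * (1.0 - z)
        # z-update uses the relaxed iterate x_hat
        z = (rho * (1.0 - x_hat) - lam) / (1.0 + rho)
        # dual ascent on the constraint x + z = 1
        lam += rho * (x_hat + z - 1.0)
    return x, z

x, z = admm_over_relaxed()
# the toy optimum is x = z = 0.5
```

On this toy problem the iterates converge even for theta = 1.8, inside the interval $(0,2)$ that the paper analyzes.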
Title: Global Sensitivity Analysis of High Dimensional Neuroscience Models: An Example of Neurovascular Coupling,
Abstract: The complexity and size of state-of-the-art cell models have significantly
increased in part due to the requirement that these models possess complex
cellular functions which are thought--but not necessarily proven--to be
important. Modern cell models often involve hundreds of parameters; the values
of these parameters come, more often than not, from animal experiments whose
relationship to the human physiology is weak with very little information on
the errors in these measurements. The concomitant uncertainties in parameter
values result in uncertainties in the model outputs or Quantities of Interest
(QoIs). Global Sensitivity Analysis (GSA) aims at apportioning to individual
parameters (or sets of parameters) their relative contribution to output
uncertainty thereby introducing a measure of influence or importance of said
parameters. New GSA approaches are required to deal with increased model size
and complexity; a three-stage methodology consisting of screening (dimension
reduction), surrogate modeling, and computing Sobol' indices is presented. The
methodology is used to analyze a physiologically validated numerical model of
neurovascular coupling which possesses 160 uncertain parameters. The sensitivity
analysis investigates three quantities of interest (QoIs), the average value of
$K^+$ in the extracellular space, the average volumetric flow rate through the
perfusing vessel, and the minimum value of the actin/myosin complex in the
smooth muscle cell. GSA provides a measure of the influence of each parameter,
for each of the three QoIs, giving insight into areas of possible physiological
dysfunction and areas of further investigation. | [
0,
0,
0,
0,
1,
0
] |
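The "computing Sobol' indices" stage can be illustrated with a minimal pick-freeze Monte Carlo estimator on a toy linear model; the model, sample size, and names are illustrative stand-ins, not the paper's 160-parameter neurovascular model:

```python
import random

# Minimal pick-freeze Monte Carlo estimator of a first-order Sobol' index.
# Toy model y = 3*x1 + 1*x2 with independent uniform inputs; the exact
# first-order index of x1 is 9/10. (Model and names are illustrative.)
def model(x1, x2):
    return 3.0 * x1 + 1.0 * x2

def sobol_first_order(n=200_000, seed=0):
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x1 = rng.random()                      # frozen input
        x2, x2p = rng.random(), rng.random()   # re-sampled input
        ya.append(model(x1, x2))
        yb.append(model(x1, x2p))              # same x1, fresh x2
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov = sum((a - mean) * (b - mean) for a, b in zip(ya, yb)) / n
    return cov / var                           # Var(E[Y|X1]) / Var(Y)

s1 = sobol_first_order()
```

The estimate approaches the analytic value $S_1 = 9/10$ as the sample size grows; surrogate modeling (stage two of the methodology) exists precisely to make these evaluations cheap.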
Title: Propagation in media as a probe for topological properties,
Abstract: The central goal of this thesis is to develop methods to experimentally study
topological phases. We do so by applying the powerful toolbox of quantum
simulation techniques with cold atoms in optical lattices. To this day, a
complete classification of topological phases remains elusive. In this context,
experimental studies are key, both for studying the interplay between topology
and complex effects and for identifying new forms of topological order. It is
therefore crucial to find complementary means to measure topological properties
in order to reach a fundamental understanding of topological phases. In
one-dimensional chiral systems, we suggest a new way to construct and identify
topologically protected bound states, which are the smoking gun of these
materials. In two-dimensional Hofstadter strips (i.e., systems which are very
short along one dimension), we suggest a new way to measure the topological
invariant directly from the atomic dynamics. | [
0,
1,
0,
0,
0,
0
] |
Title: SOTER: Programming Safe Robotics System using Runtime Assurance,
Abstract: Autonomous robots increasingly depend on third-party off-the-shelf components
and complex machine-learning techniques. This trend makes it challenging to
provide strong design-time certification of correct operation. To address this
challenge, we present SOTER, a programming framework that integrates the core
principles of runtime assurance to enable the use of uncertified controllers,
while still providing safety guarantees.
Runtime Assurance (RTA) is an approach used for safety-critical systems where
design-time analysis is coupled with run-time techniques to switch between
unverified advanced controllers and verified simple controllers. In this paper,
we present a runtime assurance programming framework for modular design of
provably-safe robotics software. SOTER provides language primitives to
declaratively construct an RTA module consisting of an advanced controller
(untrusted), a safe controller (trusted), and the desired safety specification
(S). If the RTA module is well-formed, then the framework provides a formal
guarantee that it satisfies property S. The compiler generates code for
monitoring system state and switching control between the advanced and safe
controller in order to guarantee S. RTA allows complex systems to be
constructed through the composition of RTA modules.
To demonstrate the efficacy of our framework, we consider a real-world
case-study of building a safe drone surveillance system. Our experiments both
in simulation and on actual drones show that SOTER-enabled RTA ensures safety of
the system, including when untrusted third-party components have bugs or
deviate from the desired behavior. | [
1,
0,
0,
0,
0,
0
] |
Title: Reconsidering Experiments,
Abstract: Experiments may not reveal their full import at the time that they are
performed. The scientists who perform them usually are testing a specific
hypothesis and quite often have specific expectations limiting the possible
inferences that can be drawn from the experiment. Nonetheless, as Hacking has
said, experiments have lives of their own. Those lives do not end with the
initial report of the results and consequences of the experiment. Going back
and rethinking the consequences of the experiment in a new context, theoretical
or empirical, has great merit as a strategy for investigation and for
scientific problem analysis. I apply this analysis to the interplay between
Fizeau's classic optical experiments and the building of special relativity.
Einstein's understanding of the problems facing classical electrodynamics and
optics, in part, was informed by Fizeau's 1851 experiments. However, between
1851 and 1905, Fizeau's experiments were duplicated and reinterpreted by a
succession of scientists, including Hertz, Lorentz, and Michelson. Einstein's
analysis of the consequences of the experiments is tied closely to this
theoretical and experimental tradition. However, Einstein's own inferences from
the experiments differ greatly from the inferences drawn by others in that
tradition. | [
0,
1,
0,
0,
0,
0
] |
Title: Streaming Kernel PCA with $\tilde{O}(\sqrt{n})$ Random Features,
Abstract: We study the statistical and computational aspects of kernel principal
component analysis using random Fourier features and show that under mild
assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve
$O(1/\epsilon^2)$ sample complexity. Furthermore, we give a memory efficient
streaming algorithm based on classical Oja's algorithm that achieves this rate. | [
0,
0,
0,
1,
0,
0
] |
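The streaming component, classical Oja's rule, can be sketched in isolation. In the paper this update runs in a random Fourier feature space to approximate kernel PCA; the feature map is omitted here, so this toy only recovers the top principal component of raw 2-D data, and all names are illustrative:

```python
import math, random

# Oja's rule for the top principal component of a data stream: project,
# take a gradient step toward the projection direction, renormalize.
def oja_top_component(stream, lr=0.05):
    w = [1.0, 0.0]
    for x in stream:
        y = w[0] * x[0] + w[1] * x[1]                      # projection <w, x>
        w = [w[i] + lr * y * (x[i] - y * w[i]) for i in range(2)]
        norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
        w = [wi / norm for wi in w]                        # stay on the unit sphere
    return w

# synthetic stream with dominant variance along (1, 1)/sqrt(2)
rng = random.Random(0)
stream = []
for _ in range(5000):
    t = rng.gauss(0.0, 3.0)      # strong component along (1, 1)
    n = rng.gauss(0.0, 0.3)      # weak component along (1, -1)
    stream.append(((t + n) / math.sqrt(2), (t - n) / math.sqrt(2)))
w = oja_top_component(stream)
```

Each point is seen once and discarded, which is what makes the algorithm memory efficient in the streaming setting.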
Title: A note on species realizations and nondegeneracy of potentials,
Abstract: In this note we show that a mutation theory of species with potential can be
defined so that a certain class of skew-symmetrizable integer matrices have a
species realization admitting a non-degenerate potential. This gives a partial
affirmative answer to a question raised by Jan Geuenich and Daniel
Labardini-Fragoso. We also provide an example of a class of skew-symmetrizable
$4 \times 4$ integer matrices, which are not globally unfoldable nor strongly
primitive, and that have a species realization admitting a non-degenerate
potential. | [
0,
0,
1,
0,
0,
0
] |
Title: Vortex states and spin textures of rotating spin-orbit-coupled Bose-Einstein condensates in a toroidal trap,
Abstract: We consider the ground-state properties of Rashba spin-orbit-coupled
pseudo-spin-1/2 Bose-Einstein condensates (BECs) in a rotating two-dimensional
(2D) toroidal trap. In the absence of spin-orbit coupling (SOC), the increasing
rotation frequency enhances the creation of giant vortices for the initially
miscible BECs, while it can lead to the formation of semiring density patterns
with irregular hidden vortex structures for the initially immiscible BECs.
Without rotation, strong 2D isotropic SOC yields a heliciform-stripe phase for
the initially immiscible BECs. Combined effects of rotation, SOC, and
interatomic interactions on the vortex structures and typical spin textures of
the ground state of the system are discussed systematically. In particular, for
fixed rotation frequency above the critical value, the increasing isotropic SOC
favors a visible vortex ring in each component which is accompanied by a hidden
giant vortex plus a (several) hidden vortex ring(s) in the central region. In
the case of 1D anisotropic SOC, large SOC strength results in the generation of
hidden linear vortex string and the transition from initial phase separation
(phase mixing) to phase mixing (phase separation). Furthermore, the peculiar
spin textures including skyrmion lattice, skyrmion pair and skyrmion string are
revealed in this system. | [
0,
1,
0,
0,
0,
0
] |
Title: Semi-Global Weighted Least Squares in Image Filtering,
Abstract: Solving the global method of Weighted Least Squares (WLS) model in image
filtering is both time- and memory-consuming. In this paper, we present an
alternative approximation in a time- and memory-efficient manner which is
denoted as Semi-Global Weighted Least Squares (SG-WLS). Instead of solving a
large linear system, we propose to iteratively solve a sequence of subsystems
which are one-dimensional WLS models. Although each subsystem is
one-dimensional, it can take two-dimensional neighborhood information into
account due to the proposed special neighborhood construction. We show such a
desirable property makes our SG-WLS achieve close performance to the original
two-dimensional WLS model but with much less time and memory cost. While
previous related methods mainly focus on the 4-connected/8-connected
neighborhood system, our SG-WLS can handle a more general and larger
neighborhood system thanks to the proposed fast solution. We show such a
generalization can achieve better performance than the 4-connected/8-connected
neighborhood system in some applications. Our SG-WLS is $\sim20$ times faster
than the WLS model. For an image of size $M\times N$, the memory cost of SG-WLS
is at most on the order of $\max\{\frac{1}{M}, \frac{1}{N}\}$ of that of the
WLS model. We show the effectiveness and efficiency of our SG-WLS in a range of
applications. | [
1,
0,
0,
0,
0,
0
] |
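Each one-dimensional WLS subsystem is a tridiagonal linear system solvable exactly in O(n). A minimal sketch with constant smoothness weights follows; the real SG-WLS derives the weights from the guide image and adds the special two-dimensional neighborhood construction, so everything here is illustrative:

```python
# One 1-D WLS subsystem: minimize
#   sum_i (u_i - g_i)^2 + lam * sum_i w_i (u_i - u_{i+1})^2
# over a single row, solved exactly via the Thomas algorithm for the
# resulting tridiagonal system (I + lam * L) u = g.
def wls_1d(g, lam=5.0):
    n = len(g)
    w = [1.0] * (n - 1)                                   # smoothness weights
    a = [0.0] + [-lam * w[i] for i in range(n - 1)]       # sub-diagonal
    c = [-lam * w[i] for i in range(n - 1)] + [0.0]       # super-diagonal
    b = [1.0 + lam * ((w[i - 1] if i > 0 else 0.0)
                      + (w[i] if i < n - 1 else 0.0)) for i in range(n)]
    # forward sweep (Thomas algorithm)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], g[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (g[i] - a[i] * dp[i - 1]) / m
    # back substitution
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

u = wls_1d([0.0, 0.0, 10.0, 0.0, 0.0])   # smooth an impulse
```

Because the 1-D graph Laplacian has zero row sums, the smoothing preserves the total mass of the signal while spreading the impulse to its neighbors.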
Title: Deep Residual Networks and Weight Initialization,
Abstract: The Residual Network (ResNet) is a state-of-the-art architecture that enables
successful training of very deep neural networks. It is also known that good
weight initialization of a neural network avoids the problem of
vanishing/exploding gradients. In this paper, simplified models of ResNets are
analyzed. We argue that the effectiveness of ResNets is correlated with the fact that they are relatively
insensitive to choice of initial weights. We also demonstrate how batch
normalization improves backpropagation of deep ResNets without tuning initial
values of weights. | [
1,
0,
0,
1,
0,
0
] |
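The claimed insensitivity to the weight scale can be demonstrated numerically with 1-D linear "layers", a deliberately minimal caricature of the paper's simplified ResNet models; the depth, scale, and names are illustrative:

```python
import random

# With identity skip connections the forward signal survives a wide range
# of weight scales; a plain deep stack of the same layers shrinks (or
# explodes). Scalar linear "layers" keep the arithmetic transparent.
def final_magnitude(depth, scale, residual, seed=0):
    rng = random.Random(seed)
    x = 1.0
    for _ in range(depth):
        w = rng.gauss(0.0, scale)         # a freshly initialized layer
        fx = w * x                        # the layer's output f(x)
        x = x + fx if residual else fx    # skip connection vs plain stack
    return abs(x)

plain = final_magnitude(depth=50, scale=0.2, residual=False)
resnet = final_magnitude(depth=50, scale=0.2, residual=True)
```

The plain stack multiplies fifty small factors and collapses toward zero, while the residual stack multiplies factors near one and keeps the signal at a usable magnitude, without any tuning of the initialization.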
Title: One- and two-channel Kondo model with logarithmic Van Hove singularity: a numerical renormalization group solution,
Abstract: A simple scaling consideration and an NRG solution of the one- and two-channel
Kondo model in the presence of a logarithmic Van Hove singularity at the Fermi
level are given. The temperature dependences of the local and impurity magnetic
susceptibility and impurity entropy are calculated. The low-temperature
behavior of the impurity susceptibility and impurity entropy turns out to be
non-universal in the Kondo sense and independent of the $s-d$ coupling $J$. The
resonant level model solution in the strong coupling regime confirms the NRG
results. In the two-channel case the local susceptibility demonstrates a
non-Fermi-liquid power-law behavior. | [
0,
1,
0,
0,
0,
0
] |
Title: Simulations and measurements of the impact of collective effects on dynamic aperture,
Abstract: We describe a benchmark study of collective and nonlinear dynamics in the APS
storage ring. A 1-mm-long bunch was assumed in the calculation of the wakefield,
and element-by-element particle tracking with the wakefield components
distributed along the ring was performed in an Elegant simulation. The result of
the Elegant simulation differed by less than 5% from the experimental
measurement. | [
0,
1,
0,
0,
0,
0
] |
Title: Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation,
Abstract: Deep learning approaches such as convolutional neural nets have consistently
outperformed previous methods on challenging tasks such as dense, semantic
segmentation. However, the various proposed networks perform differently, with
behaviour largely influenced by architectural choices and training settings.
This paper explores Ensembles of Multiple Models and Architectures (EMMA) for
robust performance through aggregation of predictions from a wide range of
methods. The approach reduces the influence of the meta-parameters of
individual models and the risk of overfitting the configuration to a particular
database. EMMA can be seen as an unbiased, generic deep learning model which is
shown to yield excellent performance, winning the first position in the BRATS
2017 competition among 50+ participating teams. | [
1,
0,
0,
0,
0,
0
] |
Title: Estimating Phase Duration for SPaT Messages,
Abstract: A SPaT (Signal Phase and Timing) message describes for each lane the current
phase at a signalized intersection together with an estimate of the residual
time of that phase. Accurate SPaT messages can be used to construct a speed
profile for a vehicle that reduces its fuel consumption as it approaches or
leaves an intersection. This paper presents SPaT estimation algorithms at an
intersection with a semi-actuated signal, using real-time signal phase
measurements. The algorithms are evaluated using high-resolution data from two
intersections in Montgomery County, MD. The algorithms can be readily
implemented at signal controllers. The study supports three findings. First,
real-time information dramatically improves the accuracy of the prediction of
the residual time compared with prediction based on historical data alone.
Second, as time increases the prediction of the residual time may increase or
decrease. Third, as drivers differently weight errors in predicting `end of
green' and `end of red', drivers on two different approaches may prefer
different estimates of the residual time of the same phase. | [
0,
0,
0,
1,
0,
0
] |
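The first and second findings can be illustrated with a toy residual-time estimator: conditioning the historical duration distribution on the elapsed time sharpens the prediction, and the conditional mean is naturally non-monotone in time. All durations below are invented, not Montgomery County data:

```python
# Given historical phase durations, a natural residual-time estimate at
# elapsed time t is the conditional mean E[D - t | D > t], which updates
# as the phase keeps running. (Durations are made up for illustration.)
def residual_time(history, elapsed):
    alive = [d for d in history if d > elapsed]
    if not alive:                       # phase outlived every precedent
        return 0.0
    return sum(d - elapsed for d in alive) / len(alive)

history = [20, 22, 25, 30, 40, 45, 60]   # past green durations (s)
static = residual_time(history, 0)       # historical-only prediction
late = residual_time(history, 35)        # after observing 35 s of green
```

Note that between 44 s and 46 s of elapsed green the estimate rises, because surviving past 45 s rules out all but the longest historical duration; this is the "prediction may increase or decrease" behavior reported in the abstract.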
Title: TS-MPC for Autonomous Vehicles including a dynamic TS-MHE-UIO,
Abstract: In this work, a novel approach is presented to solve the problem of tracking
trajectories in autonomous vehicles. This approach is based on the use of a
cascade control where the external loop solves the position control using a
novel Takagi-Sugeno Model Predictive Control (TS-MPC) approach and the
internal loop is in charge of the dynamic control of the vehicle using a
Takagi-Sugeno Linear Quadratic Regulator technique designed via Linear Matrix
Inequalities (TS-LMI-LQR). Both techniques use a TS representation of the
kinematic and dynamic models of the vehicle. In addition, a novel Takagi-Sugeno
estimator, a Moving Horizon Estimator-Unknown Input Observer (TS-MHE-UIO), is
presented. This method estimates the dynamic states of the vehicle optimally as
well as the force of friction acting on the vehicle that is used to reduce the
control efforts. The innovative contribution of the TS-MPC and TS-MHE-UIO
techniques is that using the TS model formulation of the vehicle allows us to
solve the nonlinear problem as if it were linear, reducing computation times by
40-50 times. To demonstrate the potential of the TS-MPC we propose a comparison
between three methods of solving the kinematic control problem: using the
non-linear MPC formulation (NL-MPC), using TS-MPC without updating the
prediction model and using updated TS-MPC with the references of the planner. | [
1,
0,
0,
0,
0,
0
] |
Title: Complexity of short Presburger arithmetic,
Abstract: We study the complexity of short sentences in Presburger arithmetic (Short-PA).
Here by "short" we mean sentences with a bounded number of variables,
quantifiers, inequalities and Boolean operations; the input consists only of
the integers involved in the inequalities. We prove that assuming Kannan's
partition can be found in polynomial time, the satisfiability of Short-PA
sentences can be decided in polynomial time. Furthermore, under the same
assumption, we show that the number of satisfying assignments of short
Presburger sentences can also be computed in polynomial time. | [
1,
0,
1,
0,
0,
0
] |
Title: Deep neural network based speech separation optimizing an objective estimator of intelligibility for low latency applications,
Abstract: Mean square error (MSE) has been the preferred choice as loss function in the
current deep neural network (DNN) based speech separation techniques. In this
paper, we propose a new cost function with the aim of optimizing the extended
short time objective intelligibility (ESTOI) measure. We focus on applications
where low algorithmic latency ($\leq 10$ ms) is important. We use long
short-term memory networks (LSTM) and evaluate our proposed approach on four
sets of two-speaker mixtures from extended Danish hearing in noise (HINT)
dataset. We show that the proposed loss function can offer improved or at par
objective intelligibility (in terms of ESTOI) compared to an MSE optimized
baseline while resulting in lower objective separation performance (in terms of
the source to distortion ratio (SDR)). We then proceed to propose an approach
where the network is first initialized with weights optimized for MSE criterion
and then trained with the proposed ESTOI loss criterion. This approach
mitigates some of the losses in objective separation performance while
preserving the gains in objective intelligibility. | [
1,
0,
0,
0,
0,
0
] |
Title: Characterization and control of linear coupling using turn-by-turn beam position monitor data in storage rings,
Abstract: We introduce a new application of measuring symplectic generators to
characterize and control the linear betatron coupling in storage rings. From
synchronized and consecutive BPM (Beam Position Monitor) turn-by-turn (TbT)
readings, symplectic Lie generators describing the coupled linear dynamics are
extracted. Four plane-crossing terms in the generators directly characterize
the coupling between the horizontal and the vertical planes. Coupling control
can be accomplished by utilizing the dependency of these plane-crossing terms
on skew quadrupoles. The method has been successfully demonstrated to reduce
the vertical effective emittance down to the diffraction limit in the newly
constructed National Synchrotron Light Source II (NSLS-II) storage ring. This
method can be automated to realize linear coupling feedback control with
negligible disturbance on machine operation. | [
0,
1,
0,
0,
0,
0
] |
Title: Adaptive Inferential Method for Monotone Graph Invariants,
Abstract: We consider the problem of undirected graphical model inference. In many
applications, instead of perfectly recovering the unknown graph structure, a
more realistic goal is to infer some graph invariants (e.g., the maximum
degree, the number of connected subgraphs, the number of isolated nodes). In
this paper, we propose a new inferential framework for testing nested multiple
hypotheses and constructing confidence intervals of the unknown graph
invariants under undirected graphical models. Compared to perfect graph
recovery, our methods require significantly weaker conditions. This paper makes
two major contributions: (i) Methodologically, for testing nested multiple
hypotheses, we propose a skip-down algorithm on the whole family of monotone
graph invariants (invariants that are non-decreasing under the addition of
edges). We further show that the same skip-down algorithm also provides valid
confidence intervals for the targeted graph invariants. (ii) Theoretically, we
prove that the lengths of the obtained confidence intervals are optimal and
adaptive to the unknown signal strength. We also prove generic lower bounds for
the confidence interval length for various invariants. Numerical results on
both synthetic simulations and a brain imaging dataset are provided to
illustrate the usefulness of the proposed method. | [
0,
0,
1,
1,
0,
0
] |
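The monotonicity the skip-down algorithm exploits is easy to check concretely: an invariant such as the maximum degree can only grow when an edge is added, which is what orders the nested hypotheses. The graph below is a toy example, purely illustrative:

```python
# A monotone graph invariant never decreases under edge addition; maximum
# degree is one such invariant (so are the number of connected subgraphs
# and the negated number of isolated nodes).
def max_degree(edges, n):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg)

edges = [(0, 1), (1, 2)]                  # path on 3 of the 4 nodes
before = max_degree(edges, 4)
after = max_degree(edges + [(1, 3)], 4)   # add one more edge at node 1
```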
Title: Methodological variations in lagged regression for detecting physiologic drug effects in EHR data,
Abstract: We studied how lagged linear regression can be used to detect the physiologic
effects of drugs from data in the electronic health record (EHR). We
systematically examined the effect of methodological variations ((i) time
series construction, (ii) temporal parameterization, (iii) intra-subject
normalization, (iv) differencing (lagged rates of change achieved by taking
differences between consecutive measurements), (v) explanatory variables, and
(vi) regression models) on performance of lagged linear methods in this
context. We generated two gold standards (one knowledge-base derived, one
expert-curated) for expected pairwise relationships between 7 drugs and 4 labs,
and evaluated how the 64 unique combinations of methodological perturbations
reproduce gold standards. Our 28 cohorts included patients in Columbia
University Medical Center/NewYork-Presbyterian Hospital clinical database. The
most accurate methods achieved AUROC of 0.794 for knowledge-base derived gold
standard (95%CI [0.741, 0.847]) and 0.705 for expert-curated gold standard (95%
CI [0.629, 0.781]). We observed a 0.633 mean AUROC (95%CI [0.610, 0.657],
expert-curated gold standard) across all methods that re-parameterize time
according to sequence and use either a joint autoregressive model with
differencing or an independent lag model without differencing. The complement
of this set of methods achieved a mean AUROC close to 0.5, indicating the
importance of these choices. We conclude that time-series analysis of EHR data
will likely rely on some of the beneficial pre-processing and modeling
methodologies identified, and will certainly benefit from continued careful
analysis of methodological perturbations. This study found that methodological
variations, such as pre-processing and representations, significantly affect
results, exposing the importance of evaluating these components when comparing
machine-learning methods. | [
0,
0,
0,
1,
1,
0
] |
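One cell of the methodological grid, an independent lag model with differencing, can be sketched on synthetic data; the gain, lag, and series below are invented, not EHR measurements:

```python
# Independent-lag linear regression with differencing: first-difference
# both series, then regress the lab change at time t on the drug change
# at time t - lag, by ordinary least squares (no intercept for brevity).
def fit_lag_coeff(drug, lab, lag):
    d_drug = [b - a for a, b in zip(drug, drug[1:])]
    d_lab = [b - a for a, b in zip(lab, lab[1:])]
    x = d_drug[:len(d_drug) - lag] if lag else d_drug
    y = d_lab[lag:]
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    return sxy / sxx

# synthetic data: the "lab" responds to the "drug" with gain 2.0 at lag 2
drug = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
lab = [10.0] * len(drug)
for t in range(2, len(drug)):
    lab[t] = 10.0 + 2.0 * drug[t - 2]
coeff = fit_lag_coeff(drug, lab, lag=2)
```

Scanning `lag` and keeping the best-fitting coefficient is one way such pairwise drug-lab relationships can be scored against a gold standard.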
Title: Exploring nucleon spin structure through neutrino neutral-current interactions in MicroBooNE,
Abstract: The net contribution of the strange quark spins to the proton spin, $\Delta
s$, can be determined from neutral current elastic neutrino-proton interactions
at low momentum transfer combined with data from electron-proton scattering.
The probability of neutrino-proton interactions depends in part on the axial
form factor, which represents the spin structure of the proton and can be
separated into its quark flavor contributions. Low momentum transfer neutrino
neutral current interactions can be measured in MicroBooNE, a high-resolution
liquid argon time projection chamber (LArTPC) in its first year of running in
the Booster Neutrino Beamline at Fermilab. The signal for these interactions in
MicroBooNE is a single short proton track. We present our work on the automated
reconstruction and classification of proton tracks in LArTPCs, an important
step in the determination of neutrino-nucleon cross sections and the
measurement of $\Delta s$. | [
0,
1,
0,
0,
0,
0
] |
Title: A unimodular Liouville hyperbolic souvlaki --- an appendix to [arXiv:1603.06712],
Abstract: Carmesin, Federici, and Georgakopoulos [arXiv:1603.06712] constructed a
transient hyperbolic graph that has no transient subtrees and that has the
Liouville property for harmonic functions. We modify their construction to get
a unimodular random graph with the same properties. | [
0,
0,
1,
0,
0,
0
] |
Title: Comparison Based Nearest Neighbor Search,
Abstract: We consider machine learning in a comparison-based setting where we are given
a set of points in a metric space, but we have no access to the actual
distances between the points. Instead, we can only ask an oracle whether the
distance between two points $i$ and $j$ is smaller than the distance between
the points $i$ and $k$. We are concerned with data structures and algorithms to
find nearest neighbors based on such comparisons. We focus on a simple yet
effective algorithm that recursively splits the space by first selecting two
random pivot points and then assigning all other points to the closer of the
two (comparison tree). We prove that if the metric space satisfies certain
expansion conditions, then with high probability the height of the comparison
tree is logarithmic in the number of points, leading to efficient search
performance. We also provide an upper bound for the failure probability to
return the true nearest neighbor. Experiments show that the comparison tree is
competitive with algorithms that have access to the actual distance values, and
needs fewer triplet comparisons than other competitors. | [
1,
0,
0,
1,
0,
0
] |
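The recursive pivot split can be sketched using only the comparison oracle. For the demo the oracle is backed by 1-D coordinates that the algorithm itself never reads; the point set, leaf size, and names are illustrative:

```python
import random

# Comparison tree: split recursively by drawing two random pivots and
# sending every other point to the closer one, using ONLY the oracle
# "is d(i, j) < d(i, k)?". Actual distances stay hidden from the tree.
pts = [float(i) for i in range(64)]
def closer(i, j, k):
    """Oracle: is point i closer to point j than to point k?"""
    return abs(pts[i] - pts[j]) < abs(pts[i] - pts[k])

def build(indices, rng, leaf=4):
    if len(indices) <= leaf:
        return indices                        # leaf: bucket of candidates
    p, q = rng.sample(indices, 2)             # two random pivots
    left = [i for i in indices if closer(i, p, q)]
    right = [i for i in indices if not closer(i, p, q)]
    if not left or not right:                 # degenerate split, stop here
        return indices
    return (p, q, build(left, rng, leaf), build(right, rng, leaf))

def query(tree, i):
    while isinstance(tree, tuple):            # descend to i's leaf
        p, q, left, right = tree
        tree = left if closer(i, p, q) else right
    return tree                               # candidate neighbors of i

rng = random.Random(1)
tree = build(list(range(64)), rng)
candidates = query(tree, 10)
```

A query descends by the same comparisons used during construction, so a stored point always reaches the leaf that contains it; the nearest neighbor is then searched among the few candidates in that bucket.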
Title: Message-passing algorithm of quantum annealing with nonstoquastic Hamiltonian,
Abstract: Quantum annealing (QA) is a generic method for solving optimization problems
using fictitious quantum fluctuation. The current device performing QA involves
controlling the transverse field; it is classically simulatable by using the
standard technique for mapping the quantum spin systems to the classical ones.
In this sense, the current system for QA is not powerful despite utilizing
quantum fluctuation. Hence, we developed a system with a time-dependent
Hamiltonian consisting of a combination of the formulated Ising model and the
"driver" Hamiltonian with only quantum fluctuation. In the previous study, for
a fully connected spin model, quantum fluctuation can be addressed in a
relatively simple way. We proved that the fully connected antiferromagnetic
interaction can be transformed into a fluctuating transverse field and is thus
classically simulatable at sufficiently low temperatures. Using the fluctuating
transverse field, we established several ways to simulate part of the
nonstoquastic Hamiltonian on classical computers. We formulated a
message-passing algorithm in the present study. This algorithm is capable of
assessing the performance of QA with part of the nonstoquastic Hamiltonian
having a large number of spins. In other words, we developed a different
approach for simulating the nonstoquastic Hamiltonian without using the quantum
Monte Carlo technique. Our results were validated by comparison to the results
obtained by the replica method. | [
1,
0,
0,
1,
0,
0
] |
Title: Comparison of Polynomial Chaos and Gaussian Process surrogates for uncertainty quantification and correlation estimation of spatially distributed open-channel steady flows,
Abstract: Data assimilation is widely used to improve flood forecasting capability,
especially through parameter inference requiring statistical information on the
uncertain input parameters (upstream discharge, friction coefficient) as well
as on the variability of the water level and its sensitivity with respect to
the inputs. For particle filter or ensemble Kalman filter, stochastically
estimating probability density function and covariance matrices from a Monte
Carlo random sampling requires a large ensemble of model evaluations, limiting
their use in real-time application. To tackle this issue, fast surrogate models
based on Polynomial Chaos and Gaussian Process can be used to represent the
spatially distributed water level in place of solving the shallow water
equations. This study investigates the use of these surrogates to estimate
probability density functions and covariance matrices at a reduced
computational cost and without loss of accuracy, with a view toward
ensemble-based data assimilation. This study focuses on 1-D steady state flow
simulated with MASCARET over the Garonne River (South-West France). Results
show that both surrogates feature similar performance to the Monte-Carlo random
sampling, but for a much smaller computational budget; a few MASCARET
simulations (on the order of 10-100) are sufficient to accurately retrieve
covariance matrices and probability density functions all along the river, even
where the flow dynamic is more complex due to heterogeneous bathymetry. This
paves the way for the design of surrogate strategies suitable for representing
unsteady open-channel flows in data assimilation. | [
0,
1,
0,
1,
0,
0
] |
Title: Thermal Sunyaev-Zel'dovich effect in the intergalactic medium with primordial magnetic fields,
Abstract: The presence of ubiquitous magnetic fields in the universe is suggested from
observations of radiation and cosmic rays from galaxies or the intergalactic
medium (IGM). One possible origin of cosmic magnetic fields is the
magnetogenesis in the primordial universe. Such magnetic fields are called
primordial magnetic fields (PMFs), and are considered to affect the evolution
of matter density fluctuations and the thermal history of the IGM gas. Hence
the information of PMFs is expected to be imprinted on the anisotropies of the
cosmic microwave background (CMB) through the thermal Sunyaev-Zel'dovich (tSZ)
effect in the IGM. In this study, given an initial power spectrum of PMFs as
$P(k)\propto B_{\rm 1Mpc}^2 k^{n_{B}}$, we calculate dynamical and thermal
evolutions of the IGM under the influence of PMFs, and compute the resultant
angular power spectrum of the Compton $y$-parameter on the sky. As a result, we
find that two physical processes driven by PMFs dominantly determine the power
spectrum of the Compton $y$-parameter; (i) the heating due to the ambipolar
diffusion effectively works to increase the temperature and the ionization
fraction, and (ii) the Lorentz force drastically enhances the density contrast
just after the recombination epoch. These facts result in making the tSZ
angular power spectrum induced by the PMFs more remarkable at $\ell >10^4$ than
that by galaxy clusters even with $B_{\rm 1Mpc}=0.1$ nG and $n_{B}=-1.0$
because the contribution from galaxy clusters decreases with increasing $\ell$.
The measurement of the tSZ angular power spectrum on high $\ell$ modes can
provide the stringent constraint on PMFs. | [
0,
1,
0,
0,
0,
0
] |
Title: A lower bound on the positive semidefinite rank of convex bodies,
Abstract: The positive semidefinite rank of a convex body $C$ is the size of its
smallest positive semidefinite formulation. We show that the positive
semidefinite rank of any convex body $C$ is at least $\sqrt{\log d}$ where $d$
is the smallest degree of a polynomial that vanishes on the boundary of the
polar of $C$. This improves on the existing bound which relies on results from
quantifier elimination. The proof relies on the Bézout bound applied to the
Karush-Kuhn-Tucker conditions of optimality. We discuss the connection with the
algebraic degree of semidefinite programming and show that the bound is tight
(up to constant factor) for random spectrahedra of suitable dimension. | [
1,
0,
1,
0,
0,
0
] |
Title: Selective Classification for Deep Neural Networks,
Abstract: Selective classification techniques (also known as reject option) have not
yet been considered in the context of deep neural networks (DNNs). These
techniques can potentially improve DNN prediction performance significantly by
trading-off coverage. In this paper we propose a method to construct a
selective classifier given a trained neural network. Our method allows a user
to set a desired risk level. At test time, the classifier rejects instances as
needed, to grant the desired risk (with high probability). Empirical results
over CIFAR and ImageNet convincingly demonstrate the viability of our method,
which opens up possibilities to operate DNNs in mission-critical applications.
For example, using our method an unprecedented 2% error in top-5 ImageNet
classification can be guaranteed with probability 99.9%, and almost 60% test
coverage. | [
1,
0,
0,
0,
0,
0
] |
Title: Chaotic laser based physical random bit streaming system with a computer application interface,
Abstract: We demonstrate a random bit streaming system that uses a chaotic laser as its
physical entropy source. By performing real-time bit manipulation for bias
reduction, we were able to provide the memory of a personal computer with a
constant supply of ready-to-use physical random bits at a throughput of up to 4
Gbps. We pay special attention to the end-to-end entropy source model
describing how the entropy from physical sources is converted into bit entropy.
We confirmed the statistical quality of the generated random bits by revealing
the pass rate of the NIST SP800-22 test suite to be 65 % to 75 %, which is
commonly considered acceptable for a reliable random bit generator. We also
confirmed the stable operation of our random bit streaming system with long-term
bias monitoring. | [
1,
1,
0,
0,
0,
0
] |
Title: Observation of Intrinsic Half-metallic Behavior of CrO$_2$ (100) Epitaxial Films by Bulk-sensitive Spin-resolved PES,
Abstract: We have investigated the electronic states and spin polarization of
half-metallic ferromagnet CrO$_2$ (100) epitaxial films by bulk-sensitive
spin-resolved photoemission spectroscopy with a focus on non-quasiparticle
(NQP) states derived from electron-magnon interactions. We found that the
averaged values of the spin polarization are approximately 100% and 40% at 40 K
and 300 K, respectively. This is consistent with the previously reported result
[H. Fujiwara et al., Appl. Phys. Lett. 106, 202404 (2015).]. At 100 K, peculiar
spin depolarization was observed at the Fermi level ($E_{F}$), which is
supported by theoretical calculations predicting NQP states. This suggests the
possible appearance of NQP states in CrO$_2$. We also compare the temperature
dependence of our spin polarizations with that of the magnetization. | [
0,
1,
0,
0,
0,
0
] |
Title: A deep search for metals near redshift 7: the line-of-sight towards ULAS J1120+0641,
Abstract: We present a search for metal absorption line systems at the highest
redshifts to date using a deep (30h) VLT/X-Shooter spectrum of the z = 7.084
quasi-stellar object (QSO) ULAS J1120+0641. We detect seven intervening systems
at z > 5.5, with the highest-redshift system being a C IV absorber at z = 6.51.
We find tentative evidence that the mass density of C IV remains flat or
declines with redshift at z < 6, while the number density of C II systems
remains relatively flat over 5 < z < 7. These trends are broadly consistent
with models of chemical enrichment by star formation-driven winds that include
a softening of the ultraviolet background towards higher redshifts. We find a
larger number of weak ( W_rest < 0.3A ) Mg II systems over 5.9 < z < 7.0 than
predicted by a power-law fit to the number density of stronger systems. This is
consistent with trends in the number density of weak Mg II systems at z = 2.5,
and suggests that the mechanisms that create these absorbers are already in
place at z = 7. Finally, we investigate the associated narrow Si IV, C IV, and
N V absorbers located near the QSO redshift, and find that at least one
component shows evidence of partial covering of the continuum source. | [
0,
1,
0,
0,
0,
0
] |
Title: Energy-Performance Trade-offs in Mobile Data Transfers,
Abstract: By year 2020, the number of smartphone users globally will reach 3 Billion
and the mobile data traffic (cellular + WiFi) will exceed PC internet traffic
for the first time. As the number of smartphone users and the amount of data
transferred per smartphone grow exponentially, limited battery power is
becoming an increasingly critical problem for mobile devices which increasingly
depend on network I/O. Despite the growing body of research in power management
techniques for the mobile devices at the hardware layer as well as the lower
layers of the networking stack, there has been little work focusing on saving
energy at the application layer for the mobile systems during network I/O. In
this paper, to the best of our knowledge, we are first to provide an in depth
analysis of the effects of application layer data transfer protocol parameters
on the energy consumption of mobile phones. We show that significant energy
savings can be achieved with application layer solutions at the mobile systems
during data transfer with no or minimal performance penalty. In many cases,
performance increase and energy savings can be achieved simultaneously. | [
1,
0,
0,
0,
0,
0
] |
Title: A stability result on optimal Skorokhod embedding,
Abstract: Motivated by the model-independent pricing of derivatives calibrated to the
real market, we consider an optimization problem similar to the optimal
Skorokhod embedding problem, where the embedded Brownian motion needs only to
reproduce a finite number of prices of Vanilla options. We derive in this paper
the corresponding dualities and the geometric characterization of optimizers.
Then we show a stability result, i.e. when more and more Vanilla options are
given, the optimization problem converges to an optimal Skorokhod embedding
problem, which constitutes the basis of the numerical computation in practice.
In addition, by means of different metrics on the space of probability
measures, a convergence rate analysis is provided under suitable conditions. | [
0,
0,
1,
0,
0,
0
] |
Title: On Symmetric Losses for Learning from Corrupted Labels,
Abstract: This paper aims to provide a better understanding of a symmetric loss. First,
we show that using a symmetric loss is advantageous in the balanced error rate
(BER) minimization and area under the receiver operating characteristic curve
(AUC) maximization from corrupted labels. Second, we prove general theoretical
properties of symmetric losses, including a classification-calibration
condition, excess risk bound, conditional risk minimizer, and AUC-consistency
condition. Third, since all nonnegative symmetric losses are non-convex, we
propose a convex barrier hinge loss that benefits significantly from the
symmetric condition, although it is not symmetric everywhere. Finally, we
conduct experiments on BER and AUC optimization from corrupted labels to
validate the relevance of the symmetric condition. | [
1,
0,
0,
1,
0,
0
] |
Title: A finite Q-bad space,
Abstract: We prove that for a free noncyclic group $F$, $H_2(\hat F_\mathbb Q, \mathbb
Q)$ is an uncountable $\mathbb Q$-vector space. Here $\hat F_\mathbb Q$ is the
$\mathbb Q$-completion of $F$. This answers a problem of A.K. Bousfield for the
case of rational coefficients. As a direct consequence of this result it
follows that a wedge of circles is $\mathbb Q$-bad in the sense of
Bousfield-Kan. The same methods as used in the proof of the above results allow
us to show that the homology $H_2(\hat F_\mathbb Z,\mathbb Z)$ is not a divisible
group, where $\hat F_\mathbb Z$ is the integral pronilpotent completion of $F$. | [
0,
0,
1,
0,
0,
0
] |
Title: Increasing Papers' Discoverability with Precise Semantic Labeling: the sci.AI Platform,
Abstract: The number of published findings in biomedicine increases continually. At the
same time, the specifics of the domain's terminology complicate the task of
retrieving relevant publications. In the current research, we investigate the
influence of terms' variability and ambiguity on a paper's likelihood of being
retrieved. We obtained statistics that demonstrate the significance of the issue
and its challenges, followed by presenting the sci.AI platform, which allows
precise terms labeling as a resolution. | [
1,
0,
0,
0,
0,
0
] |
Title: On topological obstructions to global stabilization of an inverted pendulum,
Abstract: We consider a classical problem of control of an inverted pendulum by means
of a horizontal motion of its pivot point. We suppose that the control law can
be non-autonomous and non-periodic w.r.t. the position of the pendulum. It is
shown that global stabilization of the vertical upward position of the pendulum
cannot be obtained for any Lipschitz control law, under some natural
assumptions. Moreover, we show that there always exists a solution separated
from the vertical position and along which the pendulum never becomes
horizontal. Hence, we also prove that global stabilization cannot be obtained
in the system where the pendulum can impact the horizontal plane (for any
mechanical model of impact). Similar results are presented for several
analogous systems: a pendulum on a cart, a spherical pendulum, and a pendulum
with an additional torque control. | [
0,
1,
1,
0,
0,
0
] |
Title: Regularization of the Kernel Matrix via Covariance Matrix Shrinkage Estimation,
Abstract: The kernel trick concept, formulated as an inner product in a feature space,
facilitates powerful extensions to many well-known algorithms. While the kernel
matrix involves inner products in the feature space, the sample covariance
matrix of the data requires outer products. Therefore, their spectral
properties are tightly connected. This allows us to examine the kernel matrix
through the sample covariance matrix in the feature space and vice versa. The
use of kernels often involves a large number of features, compared to the
number of observations. In this scenario, the sample covariance matrix is not
well-conditioned nor is it necessarily invertible, mandating a solution to the
problem of estimating high-dimensional covariance matrices under small sample
size conditions. We tackle this problem through the use of a shrinkage
estimator that offers a compromise between the sample covariance matrix and a
well-conditioned matrix (also known as the "target") with the aim of minimizing
the mean-squared error (MSE). We propose a distribution-free kernel matrix
regularization approach that is tuned directly from the kernel matrix, avoiding
the need to address the feature space explicitly. Numerical simulations
demonstrate that the proposed regularization is effective in classification
tasks. | [
0,
0,
0,
1,
0,
0
] |
Title: Supervised Typing of Big Graphs using Semantic Embeddings,
Abstract: We propose a supervised algorithm for generating type embeddings in the same
semantic vector space as a given set of entity embeddings. The algorithm is
agnostic to the derivation of the underlying entity embeddings. It does not
require any manual feature engineering, generalizes well to hundreds of types
and achieves near-linear scaling on Big Graphs containing many millions of
triples and instances by virtue of an incremental execution. We demonstrate the
utility of the embeddings on a type recommendation task, outperforming a
non-parametric feature-agnostic baseline while achieving 15x speedup and
near-constant memory usage on a full partition of DBpedia. Using
state-of-the-art visualization, we illustrate the agreement of our
extensionally derived DBpedia type embeddings with the manually curated domain
ontology. Finally, we use the embeddings to probabilistically cluster about 4
million DBpedia instances into 415 types in the DBpedia ontology. | [
1,
0,
0,
0,
0,
0
] |
Title: Attention Please: Consider Mockito when Evaluating Newly Proposed Automated Program Repair Techniques,
Abstract: Automated program repair (APR) has attracted widespread attention in recent
years with substantial techniques being proposed. Meanwhile, a number of
benchmarks have been established for evaluating the performance of APR
techniques, among which Defects4J is one of the most widely used benchmarks.
However, bugs in Mockito, a project added in a later version of Defects4J,
have not received much attention in recent research. In this paper, we aim at
investigating the necessity of considering Mockito bugs when evaluating APR
techniques. Our findings show that: 1) Mockito bugs are not more complex to
repair than bugs from other projects; 2) the bugs repaired by the
state-of-the-art tools share the same repair patterns compared with those
patterns required to repair Mockito bugs; however, 3) the state-of-the-art
tools perform poorly on Mockito bugs (Nopol can only correctly fix one bug
while SimFix and CapGen cannot fix any bug in Mockito even if all the buggy
locations have been exposed). We conclude from these results that existing APR
techniques may be overfitting to their evaluated subjects and we should
consider Mockito, or even more bugs from other projects, when evaluating newly
proposed APR techniques. We further identify a unique repair action required to
repair Mockito bugs, named external package addition. Importing the external
packages from the test code associated with the source code is feasible for
enlarging the search space and this action can be augmented with existing
repair actions to advance existing techniques. | [
1,
0,
0,
0,
0,
0
] |
Title: Learning in anonymous nonatomic games with applications to first-order mean field games,
Abstract: We introduce a model of anonymous games with player-dependent action
sets. We propose several learning procedures based on the well-known Fictitious
Play and Online Mirror Descent and prove their convergence to equilibrium under
the classical monotonicity condition. Typical examples are first-order mean
field games. | [
0,
0,
1,
0,
0,
0
] |
Title: Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums,
Abstract: In this work, we study the problem of minimizing the sum of strongly convex
functions split over a network of $n$ nodes. We propose the decentralized and
asynchronous algorithm ADFS to tackle the case when local functions are
themselves finite sums with $m$ components. ADFS converges linearly when local
functions are smooth, and matches the rates of the best known finite sum
algorithms when executed on a single machine. On several machines, ADFS enjoys
a $O (\sqrt{n})$ or $O(n)$ speed-up depending on the leading complexity term as
long as the diameter of the network is not too big with respect to $m$. This
also leads to a $\sqrt{m}$ speed-up over state-of-the-art distributed batch
methods, which is the expected speed-up for finite sum algorithms. In terms of
communication times and network parameters, ADFS scales as well as optimal
distributed batch algorithms. As a side contribution, we give a generalized
version of the accelerated proximal coordinate gradient algorithm using
arbitrary sampling that we apply to a well-chosen dual problem to derive ADFS.
Yet, ADFS uses primal proximal updates that only require solving
one-dimensional problems for many standard machine learning applications.
Finally, ADFS can be formulated for non-smooth objectives with equally good
scaling properties. We illustrate the improvement of ADFS over state-of-the-art
approaches with simulations. | [
1,
0,
0,
0,
0,
0
] |
Title: A new fractional derivative of variable order with non-singular kernel and fractional differential equations,
Abstract: In this paper, we introduce two new non-singular kernel fractional
derivatives and present a class of other fractional derivatives derived from
the new formulations. We present some important results on uniformly convergent
sequences of continuous functions, in particular the comparison principle,
and others that allow the study of the limitation of fractional nonlinear
differential equations. | [
0,
0,
1,
0,
0,
0
] |
Title: Safe Semi-Supervised Learning of Sum-Product Networks,
Abstract: In several domains obtaining class annotations is expensive while at the same
time unlabelled data are abundant. While most semi-supervised approaches
enforce restrictive assumptions on the data distribution, recent work has
managed to learn semi-supervised models in a non-restrictive regime. However,
so far such approaches have only been proposed for linear models. In this work,
we introduce semi-supervised parameter learning for Sum-Product Networks
(SPNs). SPNs are deep probabilistic models admitting inference in linear time
in the number of network edges. Our approach has several advantages, as it (1)
allows generative and discriminative semi-supervised learning, (2) guarantees
that adding unlabelled data can increase, but not degrade, the performance
(safe), and (3) is computationally efficient and does not enforce restrictive
assumptions on the data distribution. We show on a variety of data sets that
safe semi-supervised learning with SPNs is competitive compared to
state-of-the-art and can lead to a better generative and discriminative
objective value than a purely supervised approach. | [
1,
0,
0,
1,
0,
0
] |
Title: A Structured Approach to the Analysis of Remote Sensing Images,
Abstract: The number of studies for the analysis of remote sensing images has been
growing exponentially in the last decades. Many studies, however, only report
results---in the form of certain performance metrics---by a few selected
algorithms on a training and testing sample. While this often yields valuable
insights, it tells little about some important aspects. For example, one might
be interested in understanding the nature of a study by the interaction of
algorithm, features, and the sample as these collectively contribute to the
outcome; among these three, which would be a more productive direction in
improving a study; how to assess the sample quality or the value of a set of
features etc. With a focus on land-use classification, we advocate the use of a
structured analysis. The output of a study is viewed as the result of the
interplay among three input dimensions: feature, sample, and algorithm.
Similarly, another dimension, the error, can be decomposed into error along
each input dimension. Such a structural decomposition of the inputs or error
could help better understand the nature of the problem and potentially suggest
directions for improvement. We use the analysis of a remote sensing image at a
study site in Guangzhou, China, to demonstrate how such a structured analysis
could be carried out and what insights it generates. The structured analysis
could be applied to a new study, or as a diagnosis to an existing one. We
expect this will inform practice in the analysis of remote sensing images, and
help advance the state-of-the-art of land-use classification. | [
0,
0,
0,
1,
0,
0
] |
Title: Approximations of the Restless Bandit Problem,
Abstract: The multi-armed restless bandit problem is studied in the case where the
pay-off distributions are stationary $\varphi$-mixing. This version of the
problem provides a more realistic model for most real-world applications, but
cannot be optimally solved in practice, since it is known to be PSPACE-hard.
The objective of this paper is to characterize a sub-class of the problem where
{\em good} approximate solutions can be found using tractable approaches.
Specifically, it is shown that under some conditions on the $\varphi$-mixing
coefficients, a modified version of UCB can prove effective. The main challenge
is that, unlike in the i.i.d. setting, the distributions of the sampled
pay-offs may not have the same characteristics as those of the original bandit
arms. In particular, the $\varphi$-mixing property does not necessarily carry
over. This is overcome by carefully controlling the effect of a sampling policy
on the pay-off distributions. Some of the proof techniques developed in this
paper can be more generally used in the context of online sampling under
dependence. The proposed algorithms are accompanied by a corresponding regret
analysis. | [
0,
0,
1,
1,
0,
0
] |
Title: Memory effects, transient growth, and wave breakup in a model of paced atrium,
Abstract: The mechanisms underlying cardiac fibrillation have been investigated for
over a century, but we are still finding surprising results that change our
view of this phenomenon. The present study focuses on the transition from
normal rhythm to atrial fibrillation associated with a gradual increase in the
pacing rate. While some of our findings are consistent with existing
experimental, numerical, and theoretical studies of this problem, one result
appears to contradict the accepted picture. Specifically we show that, in a
two-dimensional model of paced homogeneous atrial tissue, transition from
discordant alternans to conduction block, wave breakup, reentry, and spiral
wave chaos is associated with transient growth of finite amplitude disturbances
rather than a conventional instability. It is mathematically very similar to
subcritical, or bypass, transition from laminar fluid flow to turbulence, which
allows many of the tools developed in the context of fluid turbulence to be
used for improving our understanding of cardiac arrhythmias. | [
0,
1,
0,
0,
0,
0
] |
Title: Probabilistic Generative Adversarial Networks,
Abstract: We introduce the Probabilistic Generative Adversarial Network (PGAN), a new
GAN variant based on a new kind of objective function. The central idea is to
integrate a probabilistic model (a Gaussian Mixture Model, in our case) into
the GAN framework which supports a new kind of loss function (based on
likelihood rather than classification loss), and at the same time gives a
meaningful measure of the quality of the outputs generated by the network.
Experiments with MNIST show that the model learns to generate realistic images,
and at the same time computes likelihoods that are correlated with the quality
of the generated images. We show that PGAN is better able to cope with
instability problems that are usually observed in the GAN training procedure.
We investigate this from three aspects: the probability landscape of the
discriminator, gradients of the generator, and the perfect discriminator
problem. | [
1,
0,
0,
1,
0,
0
] |
Title: Model comparison for Gibbs random fields using noisy reversible jump Markov chain Monte Carlo,
Abstract: The reversible jump Markov chain Monte Carlo (RJMCMC) method offers an
across-model simulation approach for Bayesian estimation and model comparison,
by exploring the sampling space that consists of several models of possibly
varying dimensions. A naive implementation of RJMCMC to models like Gibbs
random fields suffers from computational difficulties: the posterior
distribution for each model is termed doubly-intractable since computation of
the likelihood function is rarely available. Consequently, it is simply
impossible to simulate a transition of the Markov chain in the presence of
likelihood intractability. A variant of RJMCMC is presented, called noisy
RJMCMC, where the underlying transition kernel is replaced with an
approximation based on unbiased estimators. Based on previous theoretical
developments, convergence guarantees for the noisy RJMCMC algorithm are
provided. The experiments show that the noisy RJMCMC algorithm can be much more
efficient than other exact methods, provided that an estimator with controlled
Monte Carlo variance is used, a fact which is in agreement with the theoretical
analysis. | [
0,
0,
0,
1,
0,
0
] |
Title: Functorial compactification of linear spaces,
Abstract: We define compactifications of vector spaces which are functorial with
respect to certain linear maps. These "many-body" compactifications are
manifolds with corners, and the linear maps lift to b-maps in the sense of
Melrose. We derive a simple criterion under which the lifted maps are in fact
b-fibrations, and identify how these restrict to boundary hypersurfaces. This
theory is an application of a general result on the iterated blow-up of cleanly
intersecting submanifolds which extends related results in the literature. | [
0,
0,
1,
0,
0,
0
] |
Title: The Authority of "Fair" in Machine Learning,
Abstract: In this paper, we argue for the adoption of a normative definition of
fairness within the machine learning community. After characterizing this
definition, we review the current literature of Fair ML in light of its
implications. We end by suggesting ways to incorporate a broader community and
generate further debate around how to decide what is fair in ML. | [
1,
0,
0,
0,
0,
0
] |
Title: The Social Bow Tie,
Abstract: Understanding tie strength in social networks, and the factors that influence
it, have received much attention in a myriad of disciplines for decades.
Several models incorporating indicators of tie strength have been proposed and
used to quantify relationships in social networks, and a standard set of
structural network metrics have been applied to predominantly online social
media sites to predict tie strength. Here, we introduce the concept of the
"social bow tie" framework, a small subgraph of the network that consists of a
collection of nodes and ties that surround a tie of interest, forming a
topological structure that resembles a bow tie. We also define several
intuitive and interpretable metrics that quantify properties of the bow tie. We
use random forests and regression models to predict categorical and continuous
measures of tie strength from different properties of the bow tie, including
nodal attributes. We also investigate what aspects of the bow tie are most
predictive of tie strength in two distinct social networks: a collection of 75
rural villages in India and a nationwide call network of European mobile phone
users. Our results indicate several of the bow tie metrics are highly
predictive of tie strength, and we find the more the social circles of two
individuals overlap, the stronger their tie, consistent with previous findings.
However, we also find that the more tightly-knit their non-overlapping social
circles, the weaker the tie. This new finding complements our current
understanding of what drives the strength of ties in social networks. | [
0,
0,
0,
1,
0,
0
] |
Title: Response Regimes in Equivalent Mechanical Model of Moderately Nonlinear Liquid Sloshing,
Abstract: The paper considers non-stationary responses in reduced-order model of
partially liquid-filled tank under external forcing. The model involves one
common degree of freedom for the tank and the non-sloshing portion of the
liquid, and the other one -- for the sloshing portion of the liquid. The
coupling between these degrees of freedom is nonlinear, with the lowest-order
potential dictated by symmetry considerations. Since the mass of the sloshing
liquid in realistic conditions does not exceed 10% of the total mass of the
system, the reduced-order model turns out to be formally equivalent to well-studied
oscillatory systems with nonlinear energy sinks (NES). Exploiting this analogy,
and applying the methodology known from the studies of the systems with the
NES, we predict a multitude of possible non-stationary responses in the
considered model. These responses conform, at least on the qualitative level,
to the responses observed in experimental sloshing settings, multi-modal
theoretical models and full-scale numeric simulations. | [
0,
1,
0,
0,
0,
0
] |
Title: Markov Decision Processes with Continuous Side Information,
Abstract: We consider a reinforcement learning (RL) setting in which the agent
interacts with a sequence of episodic MDPs. At the start of each episode the
agent has access to some side-information or context that determines the
dynamics of the MDP for that episode. Our setting is motivated by applications
in healthcare where baseline measurements of a patient at the start of a
treatment episode form the context that may provide information about how the
patient might respond to treatment decisions. We propose algorithms for
learning in such Contextual Markov Decision Processes (CMDPs) under an
assumption that the unobserved MDP parameters vary smoothly with the observed
context. We also give lower and upper PAC bounds under the smoothness
assumption. Because our lower bound has an exponential dependence on the
dimension, we consider a tractable linear setting where the context is used to
create linear combinations of a finite set of MDPs. For the linear setting, we
give a PAC learning algorithm based on KWIK learning techniques. | [
1,
0,
0,
1,
0,
0
] |
Title: Computational Thinking in Patch,
Abstract: With the future likely to see even more pervasive computation, computational
thinking (problem-solving skills incorporating computing knowledge) is now
being recognized as a fundamental skill needed by all students. Computational
thinking is conceptualizing as opposed to programming; it promotes a natural
human thinking style rather than algorithmic reasoning, complements and combines
mathematical and engineering thinking, and emphasizes ideas, not artifacts.
In this paper, we outline a new visual language, called Patch, using which
students are able to express their solutions to eScience computational problems
in abstract visual tools. Patch is closer to high-level procedural languages
such as C++ or Java than to Scratch or Snap!, but is similar to them in ease of
use, and combines simplicity and expressive power in a single platform. | [
1,
0,
0,
0,
0,
0
] |
Title: Skoda's Ideal Generation from Vanishing Theorem for Semipositive Nakano Curvature and Cauchy-Schwarz Inequality for Tensors,
Abstract: Skoda's 1972 result on ideal generation is a crucial ingredient in the
analytic approach to the finite generation of the canonical ring and the
abundance conjecture. Special analytic techniques developed by Skoda, other
than applications of the usual vanishing theorems and L2 estimates for the
d-bar equation, are required for its proof. This note (which is part of a
lecture given in the 60th birthday conference for Lawrence Ein) gives a
simpler, more straightforward proof of Skoda's result, which makes it a natural
consequence of the standard techniques in vanishing theorems and solving d-bar
equation with L2 estimates. The proof involves the following three ingredients:
(i) one particular Cauchy-Schwarz inequality for tensors with a special factor
which accounts for the exponent of the denominator in the formulation of the
integral condition for Skoda's ideal generation, (ii) the nonnegativity of
Nakano curvature of the induced metric of a special co-rank-1 subbundle of a
trivial vector bundle twisted by a special scalar weight function, and (iii)
the vanishing theorem and solvability of d-bar equation with L2 estimates for
vector bundles of nonnegative Nakano curvature on a strictly pseudoconvex
domain. Our proof gives readily other similar results on ideal generation. | [
0,
0,
1,
0,
0,
0
] |
Title: Method for Computationally Efficient Design of Dielectric Laser Accelerators,
Abstract: Dielectric microstructures have generated much interest in recent years as a
means of accelerating charged particles when powered by solid state lasers. The
acceleration gradient (or particle energy gain per unit length) is an important
figure of merit. To design structures with high acceleration gradients, we
explore the adjoint variable method, a highly efficient technique used to
compute the sensitivity of an objective with respect to a large number of
parameters. With this formalism, the sensitivity of the acceleration gradient
of a dielectric structure with respect to its entire spatial permittivity
distribution is calculated by the use of only two full-field electromagnetic
simulations, the original and adjoint. The adjoint simulation corresponds
physically to the reciprocal situation of a point charge moving through the
accelerator gap and radiating. Using this formalism, we perform numerical
optimizations aimed at maximizing acceleration gradients, which generate
fabricable structures of greatly improved performance in comparison to
previously examined geometries. | [
0,
1,
0,
0,
0,
0
] |
Title: DiVM: Model Checking with LLVM and Graph Memory,
Abstract: In this paper, we introduce the concept of a virtual machine with
graph-organised memory as a versatile backend for both explicit-state and
abstraction-driven verification of software. Our virtual machine uses the LLVM
IR as its instruction set, enriched with a small set of hypercalls. We show
that the provided hypercalls are sufficient to implement a small operating
system, which can then be linked with applications to provide a
POSIX-compatible verification environment. Finally, we demonstrate the
viability of the approach through a comparison with a more
traditionally-designed LLVM model checker. | [
1,
0,
0,
0,
0,
0
] |
Title: Justifications in Constraint Handling Rules for Logical Retraction in Dynamic Algorithms,
Abstract: We present a straightforward source-to-source transformation that introduces
justifications for user-defined constraints into the CHR programming language.
Then a scheme of two rules suffices to allow for logical retraction (deletion,
removal) of constraints during computation. Without the need to recompute from
scratch, these rules not only remove the constraint but also undo all
consequences of the rule applications that involved the constraint. We prove a
confluence result concerning the rule scheme and show its correctness. When
algorithms are written in CHR, constraints represent both data and operations.
CHR is already incremental by nature, i.e. constraints can be added at runtime.
Logical retraction adds decrementality. Hence any algorithm written in CHR with
justifications will become fully dynamic. Operations can be undone and data can
be removed at any point in the computation without compromising the correctness
of the result. We present two classical examples of dynamic algorithms, written
in our prototype implementation of CHR with justifications that is available
online: maintaining the minimum of a changing set of numbers and shortest paths
in a graph whose edges change. | [
1,
0,
0,
0,
0,
0
] |
Title: Semi-decidable equivalence relations obtained by composition and lattice join of decidable equivalence relations,
Abstract: Composition and lattice join (transitive closure of a union) of equivalence
relations are operations taking pairs of decidable equivalence relations to
relations that are semi-decidable, but not necessarily decidable. This article
addresses the question: is every semi-decidable equivalence relation obtainable
in those ways from a pair of decidable equivalence relations? It is shown that
every semi-decidable equivalence relation, of which every equivalence class is
infinite, is obtainable as both a composition and a lattice join of decidable
equivalence relations having infinite equivalence classes. An example is
constructed of a semi-decidable, but not decidable, equivalence relation having
finite equivalence classes that can be obtained from decidable equivalence
relations, both by composition and also by lattice join. Another example is
constructed, in which such a relation cannot be obtained from decidable
equivalence relations in either of the two ways. | [
0,
0,
1,
0,
0,
0
] |
Title: Crawling migration under chemical signalling: a stochastic particle model,
Abstract: Cell migration is a fundamental process involved in physiological phenomena
such as the immune response and morphogenesis, but also in pathological
processes, such as the development of tumor metastasis. These functions are
effectively ensured because cells are active systems that adapt to their
environment. In this work, we consider a migrating cell as an active particle,
where its intracellular activity is responsible for motion. Such a system was
already described in a previous model, in which the protrusion activity of the
cell was modeled by a stochastic Markovian jump process. The model was proven able
to capture the diversity in observed trajectories. Here, we add a description
of the effect of an external chemical attractive signal on the protrusion
dynamics, that may vary in time. We show that the resulting stochastic model is
a well-posed non-homogeneous Markovian process, and provide cell trajectories
in different settings, illustrating the effects of the signal on long-term
trajectories. | [
0,
0,
0,
0,
1,
0
] |
Title: Towards a Deeper Understanding of Adversarial Losses,
Abstract: Recent work has proposed various adversarial losses for training generative
adversarial networks. Yet, it remains unclear what certain types of functions
are valid adversarial loss functions, and how these loss functions perform
against one another. In this paper, we aim to gain a deeper understanding of
adversarial losses by decoupling the effects of their component functions and
regularization terms. We first derive some necessary and sufficient conditions
of the component functions such that the adversarial loss is a divergence-like
measure between the data and the model distributions. In order to
systematically compare different adversarial losses, we then propose DANTest, a
new, simple framework based on discriminative adversarial networks. With this
framework, we evaluate an extensive set of adversarial losses by combining
different component functions and regularization approaches. This study leads
to some new insights into the adversarial losses. For reproducibility, all
source code is available at this https URL . | [
1,
0,
0,
1,
0,
0
] |
Title: Transit Visibility Zones of the Solar System Planets,
Abstract: The detection of thousands of extrasolar planets by the transit method
naturally raises the question of whether potential extrasolar observers could
detect the transits of the Solar System planets. We present a comprehensive
analysis of the regions in the sky from where transit events of the Solar
System planets can be detected. We specify how many different Solar System
planets can be observed from any given point in the sky, and find the maximum
number to be three. We report the probability that a randomly positioned
external observer would be able to observe single and multiple Solar System
planet transits; specifically, we find a probability of 2.518% of being able to observe
at least one transiting planet, 0.229% for at least two transiting planets, and
0.027% for three transiting planets. We identify 68 known exoplanets that have
a favourable geometric perspective to allow transit detections in the Solar
System and we show how the ongoing K2 mission will extend this list. We use
occurrence rates of exoplanets to estimate that there are $3.2\pm1.2$ and
$6.6^{+1.3}_{-0.8}$ temperate Earth-sized planets orbiting GK and M dwarf stars
brighter than $V=13$ and $V=16$ respectively, that are located in the Earth's
transit zone. | [
0,
1,
0,
0,
0,
0
] |
Title: Nearest-neighbour Markov point processes on graphs with Euclidean edges,
Abstract: We define nearest-neighbour point processes on graphs with Euclidean edges
and linear networks. They can be seen as the analogues of renewal processes on
the real line. We show that the Delaunay neighbourhood relation on a tree
satisfies the Baddeley--M{\o}ller consistency conditions and provide a
characterisation of Markov functions with respect to this relation. We show
that a modified relation defined in terms of the local geometry of the graph
satisfies the consistency conditions for all graphs with Euclidean edges. | [
0,
0,
1,
1,
0,
0
] |
Title: Essentially Finite Vector Bundles on Normal Pseudo-proper Algebraic Stacks,
Abstract: Let $X$ be a normal, connected and projective variety over an algebraically
closed field $k$. It is known that a vector bundle $V$ on $X$ is essentially
finite if and only if it is trivialized by a proper surjective morphism $f:Y\to
X$. In this paper we introduce a different approach to this problem which
allows us to extend the results to normal, connected and strongly pseudo-proper
algebraic stacks of finite type over an arbitrary field $k$. | [
0,
0,
1,
0,
0,
0
] |
Title: Species tree estimation using ASTRAL: how many genes are enough?,
Abstract: Species tree reconstruction from genomic data is increasingly performed using
methods that account for sources of gene tree discordance such as incomplete
lineage sorting. One popular method for reconstructing species trees from
unrooted gene tree topologies is ASTRAL. In this paper, we derive theoretical
sample complexity results for the number of genes required by ASTRAL to
guarantee reconstruction of the correct species tree with high probability. We
also validate those theoretical bounds in a simulation study. Our results
indicate that ASTRAL requires $\mathcal{O}(f^{-2} \log n)$ gene trees to
reconstruct the species tree correctly with high probability where n is the
number of species and f is the length of the shortest branch in the species
tree. Our simulations, which are the first to test ASTRAL explicitly under the
anomaly zone, show trends consistent with the theoretical bounds and also
provide some practical insights on the conditions where ASTRAL works well. | [
1,
0,
1,
1,
0,
0
] |
Title: Schoenberg Representations and Gramian Matrices of Matérn Functions,
Abstract: We represent Matérn functions in terms of Schoenberg's integrals,
which ensure positive definiteness, and prove that the systems of translates of
Matérn functions form Riesz sequences in $L^2(\R^n)$ or Sobolev spaces. Our
approach is based on a new class of integral transforms that generalize Fourier
transforms for radial functions. We also consider inverse multi-quadrics and
obtain similar results. | [
0,
0,
1,
0,
0,
0
] |
Title: High-precision measurement of the proton's atomic mass,
Abstract: We report on the precise measurement of the atomic mass of a single proton
with a purpose-built Penning-trap system. With a precision of 32
parts-per-trillion, our result not only improves on the current CODATA
literature value by a factor of three, but also disagrees with it at a level of
about 3 standard deviations. | [
0,
1,
0,
0,
0,
0
] |
Title: Unconditional bases of subspaces related to non-self-adjoint perturbations of self-adjoint operators,
Abstract: Assume that $T$ is a self-adjoint operator on a Hilbert space $\mathcal{H}$
and that the spectrum of $T$ is confined in the union $\bigcup_{j\in
J}\Delta_j$, $J\subseteq\mathbb{Z}$, of segments $\Delta_j=[\alpha_j,
\beta_j]\subset\mathbb{R}$ such that $\alpha_{j+1}>\beta_j$ and $$ \inf_{j}
\left(\alpha_{j+1}-\beta_j\right) = d > 0. $$ If $B$ is a bounded (in general
non-self-adjoint) perturbation of $T$ with $\|B\|=:b<d/2$ then the spectrum of
the perturbed operator $A=T+B$ lies in the union $\bigcup_{j\in J}
U_{b}(\Delta_j)$ of the mutually disjoint closed $b$-neighborhoods
$U_{b}(\Delta_j)$ of the segments $\Delta_j$ in $\mathbb{C}$. Let $Q_j$ be the
Riesz projection onto the invariant subspace of $A$ corresponding to the part
of the spectrum of $A$ lying in $U_{b}\left(\Delta_j\right)$, $j\in J$. Our
main result is as follows: The subspaces $\mathcal{L}_j=Q_j(\mathcal H)$, $j\in
J$, form an unconditional basis in the whole space $\mathcal H$. | [
0,
0,
1,
0,
0,
0
] |
Title: Typesafe Abstractions for Tensor Operations,
Abstract: We propose a typesafe abstraction for tensors (i.e. multidimensional arrays)
exploiting the type-level programming capabilities of Scala through
heterogeneous lists (HList), and showcase typesafe abstractions of common
tensor operations and various neural layers such as convolution or recurrent
neural networks. This abstraction could lay the foundation of future typesafe
deep learning frameworks that run on Scala/JVM. | [
1,
0,
0,
0,
0,
0
] |
Title: Structurally Sparsified Backward Propagation for Faster Long Short-Term Memory Training,
Abstract: Exploiting sparsity enables hardware systems to run neural networks faster
and more energy-efficiently. However, most prior sparsity-centric optimization
techniques only accelerate the forward pass of neural networks and usually
require an even longer training process with iterative pruning and retraining.
We observe that artificially inducing sparsity in the gradients of the gates in
an LSTM cell has little impact on the training quality. Further, we can enforce
structured sparsity in the gate gradients to make the LSTM backward pass up to
45% faster than the state-of-the-art dense approach and 168% faster than the
state-of-the-art sparsifying method on modern GPUs. Though the structured
sparsifying method can impact the accuracy of a model, this performance gap can
be eliminated by mixing our sparse training method and the standard dense
training method. Experimental results show that the mixed method can achieve
comparable results in a shorter time span than using purely dense training. | [
0,
0,
0,
1,
0,
0
] |
Title: An inexact subsampled proximal Newton-type method for large-scale machine learning,
Abstract: We propose a fast proximal Newton-type algorithm for minimizing regularized
finite sums that returns an $\epsilon$-suboptimal point in
$\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS,
where $n$ is number of samples, $d$ is feature dimension, and $\kappa$ is the
condition number. As long as $n > d$, the proposed method is more efficient
than state-of-the-art accelerated stochastic first-order methods for non-smooth
regularizers which requires $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa
n})\log(\frac{1}{\epsilon}))$ FLOPS. The key idea is to form the subsampled
Newton subproblem in a way that preserves the finite sum structure of the
objective, thereby allowing us to leverage recent developments in stochastic
first-order methods to solve the subproblem. Experimental results verify that
the proposed algorithm outperforms previous algorithms for $\ell_1$-regularized
logistic regression on real datasets. | [
1,
0,
0,
1,
0,
0
] |
Title: Future Energy Consumption Prediction Based on Grey Forecast Model,
Abstract: We use a grey forecast model to predict the future energy consumption
of four states in the U.S., and make some improvements to the model. | [
0,
0,
0,
1,
0,
0
] |
Title: The cosmic shoreline: the evidence that escape determines which planets have atmospheres, and what this may mean for Proxima Centauri b,
Abstract: The planets of the Solar System divide neatly between those with atmospheres
and those without when arranged by insolation ($I$) and escape velocity
($v_{\mathrm{esc}}$). The dividing line goes as $I \propto v_{\mathrm{esc}}^4$.
Exoplanets with reported masses and radii are shown to crowd against the
extrapolation of the Solar System trend, making a metaphorical cosmic shoreline
that unites all the planets. The $I \propto v_{\mathrm{esc}}^4$ relation may
implicate thermal escape. We therefore address the general behavior of
hydrodynamic thermal escape models ranging from Pluto to highly-irradiated
Extrasolar Giant Planets (EGPs). Energy-limited escape is harder to test
because copious XUV radiation is mostly a feature of young stars, and hence
requires extrapolating to historic XUV fluences ($I_{\mathrm{xuv}}$) using
proxies and power laws. An energy-limited shoreline should scale as
$I_{\mathrm{xuv}} \propto v_{\mathrm{esc}}^3\sqrt{\rho}$, which differs
distinctly from the apparent $I_{\mathrm{xuv}} \propto v_{\mathrm{esc}}^4$
relation. Energy-limited escape does provide good quantitative agreement to the
highly irradiated EGPs. Diffusion-limited escape implies that no planet can
lose more than 1% of its mass as H$_2$. Impact erosion, to the extent that
impact velocities $v_{\mathrm{imp}}$ can be estimated for exoplanets, fits to a
$v_{\mathrm{imp}} \approx 4\,-\,5\, v_{\mathrm{esc}}$ shoreline. The
proportionality constant is consistent with what the collision of comet
Shoemaker-Levy 9 showed us we should expect of modest impacts in deep
atmospheres. With respect to the shoreline, Proxima Centauri b is on the
metaphorical beach. Known hazards include its rapid energetic accretion, high
impact velocities, its early life on the wrong side of the runaway greenhouse,
and Proxima Centauri's XUV radiation. In its favor is a vast phase space of
unknown unknowns. | [
0,
1,
0,
0,
0,
0
] |
Title: Agent-based computing from multi-agent systems to agent-based Models: a visual survey,
Abstract: Agent-Based Computing is a diverse research domain concerned with the
building of intelligent software based on the concept of "agents". In this
paper, we use Scientometric analysis to analyze all sub-domains of agent-based
computing. Our data consists of 1,064 journal articles indexed in the ISI web
of knowledge published during a twenty year period: 1990-2010. These were
retrieved using a topic search with various keywords commonly used in
sub-domains of agent-based computing. In our proposed approach, we have
employed a combination of two applications for analysis, namely Network
Workbench and CiteSpace: Network Workbench allowed for the analysis of complex
network aspects of the domain, while detailed visualization-based analysis of
the bibliographic data was performed using CiteSpace. Our results include the
identification of the largest cluster based on keywords, the timeline of
publication of index terms, the core journals and key subject categories. We
also identify the core authors, top countries of origin of the manuscripts
along with core research institutes. Finally, our results have interestingly
revealed the strong presence of agent-based computing in a number of
non-computing related scientific domains including Life Sciences, Ecological
Sciences and Social Sciences. | [
1,
1,
0,
0,
0,
0
] |
Title: Mott metal-insulator transition in the Doped Hubbard-Holstein model,
Abstract: Motivated by the current interest in the understanding of the Mott insulators
away from half filling, observed in many perovskite oxides, we study the Mott
metal-insulator transition (MIT) in the doped Hubbard-Holstein model using the
Hartree-Fock mean-field theory. The Hubbard-Holstein model is the simplest model
containing both the Coulomb and the electron-lattice interactions, which are
important ingredients in the physics of the perovskite oxides. In contrast to
the half-filled Hubbard model, which always results in a single phase (either
metallic or insulating), our results show that away from half-filling, a mixed
phase of metallic and insulating regions occurs. As the dopant concentration is
increased, the metallic part progressively grows in volume, until it exceeds
the percolation threshold, leading to percolative conduction. This happens
above a critical dopant concentration $\delta_c$, which, depending on the
strength of the electron-lattice interaction, can be a significant fraction of
unity. This means that the material could be insulating even for a substantial
amount of doping, in contrast to the expectation that doped holes would destroy
the insulating behavior of the half-filled Hubbard model. Our theory provides a
framework for the understanding of the density-driven metal-insulator
transition observed in many complex oxides. | [
0,
1,
0,
0,
0,
0
] |
Title: ASDA: Analyseur Syntaxique du Dialecte Algérien dans un but d'analyse sémantique,
Abstract: Opinion mining and sentiment analysis in social media is a research
issue of great interest to the scientific community. However, before beginning
this analysis, we are faced with a set of problems, in particular the problem of
the richness of languages and dialects within these media. To address this
problem, we propose in this paper an approach to the construction and
implementation of a syntactic analyzer named ASDA. This tool is a parser for the
Algerian dialect that labels the terms of a given corpus. We thus construct a
labeling table containing, for each term, its stem and its different prefixes
and suffixes, allowing us to determine the different grammatical parts, a sort
of POS tagging. This labeling will later serve the semantic processing of the
Algerian dialect, such as automatic translation of this dialect or sentiment
analysis. | [
1,
0,
0,
0,
0,
0
] |
Title: Latent Intention Dialogue Models,
Abstract: Developing a dialogue agent that is capable of making autonomous decisions
and communicating by natural language is one of the long-term goals of machine
learning research. Traditional approaches either rely on hand-crafting a small
state-action set for applying reinforcement learning that is not scalable or
constructing deterministic models for learning dialogue sentences that fail to
capture natural conversational variability. In this paper, we propose a Latent
Intention Dialogue Model (LIDM) that employs a discrete latent variable to
learn underlying dialogue intentions in the framework of neural variational
inference. In a goal-oriented dialogue scenario, these latent intentions can be
interpreted as actions guiding the generation of machine responses, which can
be further refined autonomously by reinforcement learning. The experimental
evaluation of LIDM shows that the model outperforms published benchmarks for
both corpus-based and human evaluation, demonstrating the effectiveness of
discrete latent variable models for learning goal-oriented dialogues. | [
1,
0,
0,
1,
0,
0
] |
Title: Quasiconvex elastodynamics: weak-strong uniqueness for measure-valued solutions,
Abstract: A weak-strong uniqueness result is proved for measure-valued solutions to the
system of conservation laws arising in elastodynamics. The main novelty brought
forward by the present work is that the underlying stored-energy function of
the material is assumed strongly quasiconvex. The proof employs tools from the
calculus of variations to establish general convexity-type bounds on
quasiconvex functions and recasts them in order to adapt the relative entropy
method to quasiconvex elastodynamics. | [
0,
0,
1,
0,
0,
0
] |
Title: Human experts vs. machines in taxa recognition,
Abstract: The step of expert taxa recognition currently slows down the response time of
many bioassessments. Shifting to quicker and cheaper state-of-the-art machine
learning approaches is still met with expert scepticism towards the ability and
logic of machines. In our study, we investigate both the differences in
accuracy and in the identification logic of taxonomic experts and machines. We
propose a systematic approach utilizing deep Convolutional Neural Nets with the
transfer learning paradigm and extensively evaluate it over a multi-label and
multi-pose taxonomic dataset specifically created for this comparison. We also
study the prediction accuracy on different ranks of taxonomic hierarchy in
detail. Our results revealed that human experts using actual specimens yield
the lowest classification error. However, our proposed, much faster, automated
approach using deep Convolutional Neural Nets comes very close to human
accuracy. Contrary to previous findings in the literature, we find that
machines following a typical flat classification approach commonly used in
machine learning perform better than machines forced to adopt the hierarchical,
local per-parent-node approach used by human taxonomic experts. Finally, we
publicly share our unique dataset to serve as a public benchmark dataset in
this field. | [
1,
0,
0,
1,
0,
0
] |
Title: Angular momentum evolution of galaxies over the past 10-Gyr: A MUSE and KMOS dynamical survey of 400 star-forming galaxies from z=0.3-1.7,
Abstract: We present a MUSE and KMOS dynamical study of 405 star-forming
galaxies at redshift z=0.28-1.65 (median redshift z=0.84). Our sample is representative of
star-forming, main-sequence galaxies, with star-formation rates of
SFR=0.1-30Mo/yr and stellar masses M=10^8-10^11Mo. For 49+/-4% of our sample,
the dynamics suggest rotational support, 24+/-3% are unresolved systems and
5+/-2% appear to be early-stage major mergers with components on 8-30kpc
scales. The remaining 22+/-5% appear to be dynamically complex, irregular (or
face-on systems). For galaxies whose dynamics suggest rotational support, we
derive inclination corrected rotational velocities and show these systems lie
on a similar scaling between stellar mass and specific angular momentum as
local spirals with j*=J/M*\propto M^(2/3) but with a redshift evolution that
scales as j*\propto M^{2/3}(1+z)^(-1). We identify a correlation between
specific angular momentum and disk stability such that galaxies with the
highest specific angular momentum, log(j*/M^(2/3))>2.5, are the most stable,
with Toomre Q=1.10+/-0.18, compared to Q=0.53+/-0.22 for galaxies with
log(j*/M^(2/3))<2.5. At a fixed mass, the HST morphologies of galaxies with the
highest specific angular momentum resemble spiral galaxies, whilst those with
low specific angular momentum are morphologically complex and dominated by
several bright star-forming regions. This suggests that angular momentum plays
a major role in defining the stability of gas disks: at z~1, massive galaxies
that have disks with low specific angular momentum, appear to be globally
unstable, clumpy and turbulent systems. In contrast, galaxies with high specific
angular momentum have evolved into stable disks with spiral structures. | [
0,
1,
0,
0,
0,
0
] |
Title: Iterative Object and Part Transfer for Fine-Grained Recognition,
Abstract: The aim of fine-grained recognition is to identify subordinate categories in
images like different species of birds. Existing works have confirmed that, in
order to capture the subtle differences across the categories, automatic
localization of objects and parts is critical. Most approaches for object and
part localization relied on the bottom-up pipeline, where thousands of region
proposals are generated and then filtered by pre-trained object/part models.
This is computationally expensive and not scalable once the number of
objects/parts becomes large. In this paper, we propose a nonparametric
data-driven method for object and part localization. Given an unlabeled test
image, our approach transfers annotations from a few similar images retrieved
in the training set. In particular, we propose an iterative transfer strategy
that gradually refines the predicted bounding boxes. Based on the located
objects and parts, deep convolutional features are extracted for recognition.
We evaluate our approach on the widely-used CUB200-2011 dataset and a new and
large dataset called Birdsnap. On both datasets, we achieve better results than
many state-of-the-art approaches, including a few using oracle (manually
annotated) bounding boxes in the test images. | [
1,
0,
0,
0,
0,
0
] |
Title: On measures of edge-uncolorability of cubic graphs: A brief survey and some new results,
Abstract: There are many hard conjectures in graph theory, like Tutte's 5-flow
conjecture, and the 5-cycle double cover conjecture, which would be true in
general if they would be true for cubic graphs. Since most of them are
trivially true for 3-edge-colorable cubic graphs, cubic graphs which are not
3-edge-colorable, often called {\em snarks}, play a key role in this context.
Here, we survey parameters measuring how far a non-3-edge-colorable graph
is from being 3-edge-colorable. We study their interrelation and prove some new
results. Besides getting new insight into the structure of snarks, we show that
such measures give partial results with respect to these important conjectures.
The paper closes with a list of open problems and conjectures. | [
0,
0,
1,
0,
0,
0
] |
Title: Distributed Newton Methods for Deep Neural Networks,
Abstract: Deep learning involves a difficult non-convex optimization problem with a
large number of weights between any two adjacent layers of a deep structure. To
handle large data sets or complicated networks, distributed training is needed,
but the calculation of function, gradient, and Hessian is expensive. In
particular, the communication and the synchronization cost may become a
bottleneck. In this paper, we focus on situations where the model is
distributedly stored, and propose a novel distributed Newton method for
training deep neural networks. Through variable- and feature-wise data partitions
and careful design, we are able to use the Jacobian matrix explicitly for
matrix-vector products in the Newton method. Some techniques are incorporated
to reduce the running time as well as the memory consumption. First, to reduce
the communication cost, we propose a diagonalization method such that an
approximate Newton direction can be obtained without communication between
machines. Second, we consider subsampled Gauss-Newton matrices for reducing the
running time as well as the communication cost. Third, to reduce the
synchronization cost, we terminate the process of finding an approximate Newton
direction even though some nodes have not finished their tasks. Details of some
implementation issues in distributed environments are thoroughly investigated.
Experiments demonstrate that the proposed method is effective for the
distributed training of deep neural networks. Compared with stochastic
gradient methods, it is more robust and may give better test accuracy. | [
0,
0,
0,
1,
0,
0
] |
Title: Being Robust (in High Dimensions) Can Be Practical,
Abstract: Robust estimation is much more challenging in high dimensions than it is in
one dimension: Most techniques either lead to intractable optimization problems
or estimators that can tolerate only a tiny fraction of errors. Recent work in
theoretical computer science has shown that, in appropriate distributional
models, it is possible to robustly estimate the mean and covariance with
polynomial time algorithms that can tolerate a constant fraction of
corruptions, independent of the dimension. However, the sample and time
complexity of these algorithms is prohibitively large for high-dimensional
applications. In this work, we address both of these issues by establishing
sample complexity bounds that are optimal, up to logarithmic factors, as well
as giving various refinements that allow the algorithms to tolerate a much
larger fraction of corruptions. Finally, we show on both synthetic and real
data that our algorithms have state-of-the-art performance and suddenly make
high-dimensional robust estimation a realistic possibility. | [
1,
0,
0,
1,
0,
0
] |
Title: Properties of In-Plane Graphene/MoS2 Heterojunctions,
Abstract: The graphene/MoS2 heterojunction formed by joining the two components
laterally in a single plane promises to exhibit a low-resistance contact
according to the Schottky-Mott rule. Here we provide an atomic-scale
description of the structural, electronic, and magnetic properties of this type
of junction. We first identify the energetically favorable structures in which
the preference of forming C-S or C-Mo bonds at the boundary depends on the
chemical conditions. We find that significant charge transfer between graphene
and MoS2 is localized at the boundary. We show that the abundant 1D boundary
states substantially pin the Fermi level in the lateral contact between
graphene and MoS2, in close analogy to the effect of 2D interfacial states in
the contacts between 3D materials. Furthermore, we propose specific ways in
which these effects can be exploited to achieve spin-polarized currents. | [
0,
1,
0,
0,
0,
0
] |
Title: Genetic and Memetic Algorithm with Diversity Equilibrium based on Greedy Diversification,
Abstract: The lack of diversity in a genetic algorithm's population may lead to poor
performance of the genetic operators, since there is no equilibrium between
exploration and exploitation. In such cases, genetic algorithms exhibit fast,
premature convergence.
In this paper we develop a novel hybrid genetic algorithm that attempts to
strike a balance between exploration and exploitation. It confronts the
diversity problem using the proposed greedy diversification operator. Furthermore,
the algorithm applies a competition between parents and children so as
to exploit the high-quality visited solutions. These operators are complemented
by a simple selection mechanism designed to preserve and take advantage of the
population diversity.
Additionally, we extend our proposal to the field of memetic algorithms,
obtaining an improved model with outstanding results in practice.
The experimental study shows the validity of the approach, as well as how
important it is to take exploration and exploitation into account when
designing an evolutionary algorithm. | [
1,
0,
0,
0,
0,
0
] |
Title: Cherlin's conjecture for almost simple groups of Lie rank 1,
Abstract: We prove Cherlin's conjecture, concerning binary primitive permutation
groups, for those groups with socle isomorphic to $\mathrm{PSL}_2(q)$,
${^2\mathrm{B}_2}(q)$, ${^2\mathrm{G}_2}(q)$ or $\mathrm{PSU}_3(q)$. Our method
uses the notion of a "strongly non-binary action". | [
0,
0,
1,
0,
0,
0
] |
Title: Network Essence: PageRank Completion and Centrality-Conforming Markov Chains,
Abstract: Jiří Matoušek (1963-2015) had many breakthrough contributions in
mathematics and algorithm design. His milestone results are not only profound
but also elegant. By going beyond the original objects --- such as Euclidean
spaces or linear programs --- Jirka found the essence of the challenging
mathematical/algorithmic problems as well as beautiful solutions that were
natural to him, but were surprising discoveries to the field.
In this short exploration article, I will first share with readers my initial
encounter with Jirka and discuss one of his fundamental geometric results from
the early 1990s. In the age of social and information networks, I will then
turn the discussion from geometric structures to network structures, attempting
to take a humble step towards the holy grail of network science, that is to
understand the network essence that underlies the observed
sparse-and-multifaceted network data. I will discuss a simple result which
summarizes some basic algebraic properties of personalized PageRank matrices.
Unlike the traditional transitive closure of binary relations, the personalized
PageRank matrices take "accumulated Markovian closure" of network data. Some of
these algebraic properties are known in various contexts. But I hope featuring
them together in a broader context will help to illustrate the desirable
properties of this Markovian completion of networks, and motivate systematic
developments of a network theory for understanding vast and ubiquitous
multifaceted network data. | [
1,
0,
0,
1,
0,
0
] |
Title: Sentiment Perception of Readers and Writers in Emoji use,
Abstract: Previous research has traditionally analyzed emoji sentiment from the point
of view of the reader of the content, not the author. Here, we analyze emoji
sentiment from the point of view of the author and present an emoji sentiment
benchmark built from an employee happiness dataset in which emoji are
annotated with the daily happiness of the comment's author. The data
spans three years and 4,000 employees of 56 companies based in Barcelona. We
compare the sentiment of writers to that of readers. Results indicate an 82%
agreement in how emoji sentiment is perceived by readers and writers. Finally,
we report that when authors use emoji they report higher levels of happiness.
Emoji use was not found to be correlated with differences in author moodiness. | [
1,
0,
0,
0,
0,
0
] |
Title: Confidence Intervals for Quantiles from Histograms and Other Grouped Data,
Abstract: Interval estimation of quantiles has been treated by many in the literature.
However, to the best of our knowledge there has been no consideration of
interval estimation when the data are available only in grouped format. Motivated by
this, we introduce several methods to obtain confidence intervals for quantiles
when only grouped data is available. Our preferred method for interval
estimation is to approximate the underlying density using the Generalized
Lambda Distribution (GLD) to both estimate the quantiles and variance of the
quantile estimators. We compare the GLD method with some other methods that we
also introduce which are based on a frequency approximation approach and a
linear interpolation approximation of the density. Our methods are strongly
supported by simulations showing that excellent coverage can be achieved for a
wide range of distributions. These include highly skewed
distributions such as the log-normal, Dagum and Singh-Maddala distributions. We
also apply our methods to real data and show that inference can be carried out
on published outcomes that have been summarized only by a histogram. Our
methods are therefore useful for a broad range of applications. We have also
created a web application that can be used to conveniently calculate the
estimators. | [
0,
0,
0,
1,
0,
0
] |
Title: Exact solution of a two-species quantum dimer model for pseudogap metals,
Abstract: We present an exact ground state solution of a quantum dimer model introduced
in Ref.[1], which features ordinary bosonic spin-singlet dimers as well as
fermionic dimers that can be viewed as bound states of spinons and holons in a
hole-doped resonating valence bond liquid. Interestingly, this model captures
several essential properties of the metallic pseudogap phase in high-$T_c$
cuprate superconductors. We identify a line in parameter space where the exact
ground state wave functions can be constructed at an arbitrary density of
fermionic dimers. At this exactly solvable line the ground state has a huge
degeneracy, which can be interpreted as a flat band of fermionic excitations.
Perturbing around the exactly solvable line, this degeneracy is lifted and the
ground state is a fractionalized Fermi liquid with a small pocket Fermi surface
in the low doping limit. | [
0,
1,
0,
0,
0,
0
] |
Title: Strong instability of standing waves for nonlinear Schrödinger equations with a partial confinement,
Abstract: We study the instability of standing wave solutions for nonlinear
Schrödinger equations with a one-dimensional harmonic potential in
dimension $N\ge 2$. We prove that if the nonlinearity is $L^2$-critical or
supercritical in dimension $N-1$, then any ground states are strongly unstable
by blowup. | [
0,
0,
1,
0,
0,
0
] |
Title: Multi-armed Bandit Problems with Strategic Arms,
Abstract: We study a strategic version of the multi-armed bandit problem, where each
arm is an individual strategic agent and we, the principal, pull one arm each
round. When pulled, the arm receives some private reward $v_a$ and can choose
an amount $x_a$ to pass on to the principal (keeping $v_a-x_a$ for itself). All
non-pulled arms get reward $0$. Each strategic arm tries to maximize its own
utility over the course of $T$ rounds. Our goal is to design an algorithm for
the principal incentivizing these arms to pass on as much of their private
rewards as possible.
When private rewards are stochastically drawn each round ($v_a^t \leftarrow
D_a$), we show that:
- Algorithms that perform well in the classic adversarial multi-armed bandit
setting necessarily perform poorly: For all algorithms that guarantee low
regret in an adversarial setting, there exist distributions $D_1,\ldots,D_k$
and an approximate Nash equilibrium for the arms where the principal receives
reward $o(T)$.
- Still, there exists an algorithm for the principal that induces a game
among the arms where each arm has a dominant strategy. When each arm plays its
dominant strategy, the principal sees expected reward $\mu'T - o(T)$, where
$\mu'$ is the second-largest of the means $\mathbb{E}[D_{a}]$. This algorithm
maintains its guarantee if the arms are non-strategic ($x_a = v_a$), and also
if there is a mix of strategic and non-strategic arms. | [
1,
0,
0,
1,
0,
0
] |
Title: Emerging Topics in Assistive Reading Technology: From Presentation to Content Accessibility,
Abstract: With the recent focus in the accessibility field, researchers from academia
and industry have been very active in developing innovative techniques and
tools for assistive technology. With handheld devices growing ever more
powerful and able to recognize the user's voice, with screen magnification for
individuals with low vision, and with eye-tracking devices used in studies with
individuals with physical and intellectual disabilities, the field is
quickly adapting and producing findings as well as products that help. In this
paper, we focus on new technology and tools that make reading
easier, including reformatting document presentation (for people with physical
vision impairments) and text simplification to make the information itself easier
to interpret (for people with intellectual disabilities). A real-world case
study is reported based on our experience to make documents more accessible. | [
1,
0,
0,
0,
0,
0
] |
Title: Predictive and Prescriptive Analytics for Location Selection of Add-on Retail Products,
Abstract: In this paper, we study an analytical approach to selecting expansion
locations for retailers selling add-on products whose demand is derived from
the demand of another base product. Demand for the add-on product is realized
only as a supplement to the demand of the base product. In our context, either
of the two products could be subject to spatial autocorrelation where demand at
a given location is impacted by demand at other locations. Using data from an
industrial partner selling add-on products, we build predictive models for
understanding the derived demand of the add-on product and establish an
optimization framework for automating expansion decisions to maximize expected
sales. Interestingly, spatial autocorrelation and the complexity of the
predictive model impact the complexity and the structure of the prescriptive
optimization model. Our results indicate that the models formulated are highly
effective in predicting add-on product sales, and that using the optimization
framework built on the predictive model can result in substantial increases in
expected sales over baseline policies. | [
0,
0,
0,
1,
0,
0
] |