title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Nonequilibrium photonic transport and phase transition in an array of optical cavities | We characterize photonic transport in a boundary driven array of nonlinear
optical cavities. We find that the output field suddenly drops when the chain
length is increased beyond a threshold. After this threshold a highly chaotic
and unstable regime emerges, which marks the onset of a super-diffusive
photonic transport. We show the scaling of the threshold with pump intensity
and nonlinearity. Finally, we address the competition between disorder and
nonlinearity, presenting a diffusive-insulator phase transition.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Unusual Effectiveness of Averaging in GAN Training | We show empirically that the optimal strategy of parameter averaging in a
minmax convex-concave game setting is also strikingly effective in the non
convex-concave GAN setting, specifically alleviating the convergence issues
associated with cycling behavior observed in GANs. We show that averaging over
generator parameters outside of the training loop consistently improves
inception and FID scores on different architectures and for different GAN
objectives. We provide comprehensive experimental results across a range of
datasets, bilinear games, mixture of Gaussians, CIFAR-10, STL-10, CelebA and
ImageNet, to demonstrate its effectiveness. We achieve state-of-the-art results
on CIFAR-10 and produce clean CelebA face images, demonstrating that averaging
is one of the most effective techniques for training highly performant GANs.
| 0 | 0 | 0 | 1 | 0 | 0 |
A null test of General Relativity: New limits on Local Position Invariance and the variation of fundamental constants | We compare the long-term fractional frequency variation of four hydrogen
masers that are part of an ensemble of clocks comprising the National Institute
of Standards and Technology (NIST), Boulder, timescale with the fractional
frequencies of primary frequency standards operated by leading metrology
laboratories in the United States, France, Germany, Italy and the United
Kingdom for a period extending more than 14 years. The measure of the assumed
variation of the non-gravitational interaction (LPI parameter $\beta$) within
the atoms of H and Cs, over time as the Earth orbits the Sun, has been
constrained to $\beta=(2.2 \pm 2.5)\times 10^{-7}$, a factor of two improvement
over previous estimates. Using our results together with the previous best
estimates of $\beta$ based on Rb vs. Cs, and Rb vs. H comparisons, we impose
the most stringent limits to date on the dimensionless coupling constants that
relate the variation of fundamental constants such as the fine-structure
constant and the scaled quark mass with the strong (QCD) interaction to the
variation in the local gravitational potential. For any metric theory of
gravity $\beta=0$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stellarator bootstrap current and plasma flow velocity at low collisionality | The bootstrap current and flow velocity of a low-collisionality stellarator
plasma are calculated. As far as possible, the analysis is carried out in a
uniform way across all low-collisionality regimes in general stellarator
geometry, assuming only that the confinement is good enough that the plasma is
approximately in local thermodynamic equilibrium. It is found that conventional
expressions for the ion flow speed and bootstrap current in the
low-collisionality limit are accurate only in the $1/\nu$-collisionality regime
and need to be modified in the $\sqrt{\nu}$-regime. The correction due to
finite collisionality is also discussed and is found to scale as $\nu^{2/5}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
SG1120-1202: Mass-Quenching as Tracked by UV Emission in the Group Environment at z=0.37 | We use the Hubble Space Telescope to obtain WFC3/F390W imaging of the
supergroup SG1120-1202 at z=0.37, mapping the UV emission of 138
spectroscopically confirmed members. We measure total (F390W-F814W) colors and
visually classify the UV morphology of individual galaxies as "clumpy" or
"smooth." Approximately 30% of the members have pockets of UV emission (clumpy)
and we identify for the first time in the group environment galaxies with UV
morphologies similar to the jellyfish galaxies observed in massive clusters. We
stack the clumpy UV members and measure a shallow internal color gradient,
which indicates unobscured star formation is occurring throughout these
galaxies. We also stack the four galaxy groups and measure a strong trend of
decreasing UV emission with decreasing projected group distance ($R_{proj}$).
We find that the strong correlation between decreasing UV emission and
increasing stellar mass can fully account for the observed trend in
(F390W-F814W) - $R_{proj}$, i.e., mass-quenching is the dominant mechanism for
extinguishing UV emission in group galaxies. Our extensive multi-wavelength
analysis of SG1120-1202 indicates that stellar mass is the primary predictor of
UV emission, but that the increasing fraction of massive (red/smooth) galaxies
at $R_{proj}$ < 2$R_{200}$ and the existence of jellyfish candidates are due to
the group environment.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamical patterns in individual trajectories toward extremism | Society faces a fundamental global problem of understanding which individuals
are currently developing strong support for some extremist entity such as ISIS
(Islamic State) -- even if they never end up doing anything in the real world.
The importance of online connectivity in developing intent has been confirmed
by recent case-studies of already convicted terrorists. Here we identify
dynamical patterns in the online trajectories that individuals take toward
developing a high level of extremist support -- specifically, for ISIS. Strong
memory effects emerge among individuals whose transition is fastest, and hence
may become 'out of the blue' threats in the real world. A generalization of
diagrammatic expansion theory helps quantify these characteristics, including
the impact of changes in geographical location, and can facilitate prediction
of future risks. By quantifying the trajectories that individuals follow on
their journey toward expressing high levels of pro-ISIS support -- irrespective
of whether they then carry out a real-world attack or not -- our findings can
help move safety debates beyond reliance on static watch-list identifiers such
as ethnic background or immigration status, and/or post-fact interviews with
already-convicted individuals. Given the broad commonality of social media
platforms, our results likely apply quite generally: for example, even on
Telegram where (like Twitter) there is no built-in group feature as in our
study, individuals tend to collectively build and pass through so-called
super-group accounts.
| 1 | 1 | 0 | 0 | 0 | 0 |
Convergence rate bounds for a proximal ADMM with over-relaxation stepsize parameter for solving nonconvex linearly constrained problems | This paper establishes convergence rate bounds for a variant of the proximal
alternating direction method of multipliers (ADMM) for solving nonconvex
linearly constrained optimization problems. The variant of the proximal ADMM
allows the inclusion of an over-relaxation stepsize parameter belonging to the
interval $(0,2)$. To the best of our knowledge, all related papers in the
literature only consider the case where the over-relaxation parameter lies in
the interval $(0,(1+\sqrt{5})/2)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Detection, Recognition and Tracking of Moving Objects from Real-time Video via Visual Vocabulary Model and Species Inspired PSO | In this paper, we address the basic problem of recognizing moving objects in
video images using Visual Vocabulary model and Bag of Words and track our
object of interest in the subsequent video frames using species inspired PSO.
Initially, the shadow free images are obtained by background modelling followed
by foreground modeling to extract the blobs of our object of interest.
Subsequently, we train a cubic SVM with human body datasets in accordance with
our domain of interest for recognition and tracking. During training, using the
principle of Bag of Words we extract necessary features of certain domains and
objects for classification. Subsequently, matching these feature sets with
those of the extracted object blobs that are obtained by subtracting the shadow
free background from the foreground, we detect successfully our object of
interest from the test domain. The performance of the classification by cubic
SVM is satisfactorily represented by confusion matrix and ROC curve reflecting
the accuracy of each module. After classification, our object of interest is
tracked in the test domain using species inspired PSO. By combining the
adaptive learning tools with the efficient classification of description, we
achieve optimum accuracy in recognition of the moving objects. We evaluate our
algorithm on benchmark datasets: iLIDS, VIVID, Walking2, Woman. Comparative
analysis of our algorithm against the existing state-of-the-art trackers shows
very satisfactory and competitive results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Microfluidics for Chemical Synthesis: Flow Chemistry | Klavs F. Jensen is Warren K. Lewis Professor in Chemical Engineering and
Materials Science and Engineering at the Massachusetts Institute of Technology.
Here he describes the use of microfluidics for chemical synthesis, from the
early demonstration examples to the current efforts with automated droplet
microfluidic screening and optimization techniques.
| 0 | 0 | 0 | 0 | 1 | 0 |
Global Sensitivity Analysis of High Dimensional Neuroscience Models: An Example of Neurovascular Coupling | The complexity and size of state-of-the-art cell models have significantly
increased in part due to the requirement that these models possess complex
cellular functions which are thought--but not necessarily proven--to be
important. Modern cell models often involve hundreds of parameters; the values
of these parameters come, more often than not, from animal experiments whose
relationship to the human physiology is weak with very little information on
the errors in these measurements. The concomitant uncertainties in parameter
values result in uncertainties in the model outputs or Quantities of Interest
(QoIs). Global Sensitivity Analysis (GSA) aims at apportioning to individual
parameters (or sets of parameters) their relative contribution to output
uncertainty thereby introducing a measure of influence or importance of said
parameters. New GSA approaches are required to deal with increased model size
and complexity; a three stage methodology consisting of screening (dimension
reduction), surrogate modeling, and computing Sobol' indices, is presented. The
methodology is used to analyze a physiologically validated numerical model of
neurovascular coupling which possesses 160 uncertain parameters. The sensitivity
analysis investigates three quantities of interest (QoIs), the average value of
$K^+$ in the extracellular space, the average volumetric flow rate through the
perfusing vessel, and the minimum value of the actin/myosin complex in the
smooth muscle cell. GSA provides a measure of the influence of each parameter,
for each of the three QoIs, giving insight into areas of possible physiological
dysfunction and areas of further investigation.
| 0 | 0 | 0 | 0 | 1 | 0 |
Few new reals | We introduce a new method for building models of CH, together with $\Pi_2$
statements over $H(\omega_2)$, by forcing over a model of CH. Unlike similar
constructions in the literature, our construction adds new reals, but only
$\aleph_1$-many of them. Using this approach, we prove that a very strong form
of the negation of Club Guessing at $\omega_1$ known as Measuring is consistent
together with CH, thereby answering a well-known question of Moore. The
construction works over any model of ZFC + CH and can be described as a finite
support forcing construction with finite systems of countable models with
markers as side conditions and with strong symmetry constraints on both side
conditions and working parts.
| 0 | 0 | 1 | 0 | 0 | 0 |
Boundary Hamiltonian theory for gapped topological phases on an open surface | In this paper we propose a Hamiltonian approach to gapped topological phases
on an open surface with boundary. Our setting is an extension of the Levin-Wen
model to a 2d graph on the open surface, whose boundary is part of the graph.
We systematically construct a series of boundary Hamiltonians such that each of
them, when combined with the usual Levin-Wen bulk Hamiltonian, gives rise to a
gapped energy spectrum which is topologically protected; and the corresponding
wave functions are robust under changes of the underlying graph that maintain
the spatial topology of the system. We derive explicit ground-state
wavefunctions of the system and show that the boundary types are classified by
Morita-equivalent Frobenius algebras. We also construct boundary quasiparticle
creation, measuring and hopping operators. These operators allow us to
characterize the boundary quasiparticles by bimodules of Frobenius algebras.
Our approach also offers a concrete set of tools for computations. We
illustrate our approach by a few examples.
| 0 | 1 | 1 | 0 | 0 | 0 |
Fast Inverse Nonlinear Fourier Transformation using Exponential One-Step Methods, Part I: Darboux Transformation | This paper considers the non-Hermitian Zakharov-Shabat (ZS) scattering
problem which forms the basis for defining the SU$(2)$-nonlinear Fourier
transformation (NFT). The theoretical underpinnings of this generalization of
the conventional Fourier transformation are quite well established in the
Ablowitz-Kaup-Newell-Segur (AKNS) formalism; however, efficient numerical
algorithms that could be employed in practical applications are still
unavailable.
In this paper, we present a unified framework for the forward and inverse NFT
using exponential one-step methods which are amenable to FFT-based fast
polynomial arithmetic. Within this discrete framework, we propose a fast
Darboux transformation (FDT) algorithm having an operational complexity of
$\mathscr{O}\left(KN+N\log^2N\right)$ such that the error in the computed
$N$-samples of the $K$-soliton vanishes as $\mathscr{O}\left(N^{-p}\right)$
where $p$ is the order of convergence of the underlying one-step method. For
fixed $N$, this algorithm outperforms the classical DT (CDT) algorithm
which has a complexity of $\mathscr{O}\left(K^2N\right)$. We further present
an extension of these algorithms to the general version of DT which allows one to
add solitons to arbitrary profiles that are admissible as scattering potentials
in the ZS-problem. The general CDT/FDT algorithms have the same operational
complexity as that of the $K$-soliton case and the order of convergence matches
that of the underlying one-step method. A comparative study of these algorithms
is presented through exhaustive numerical tests.
| 0 | 1 | 0 | 0 | 0 | 0 |
Propagation in media as a probe for topological properties | The central goal of this thesis is to develop methods to experimentally study
topological phases. We do so by applying the powerful toolbox of quantum
simulation techniques with cold atoms in optical lattices. To this day, a
complete classification of topological phases remains elusive. In this context,
experimental studies are key, both for studying the interplay between topology
and complex effects and for identifying new forms of topological order. It is
therefore crucial to find complementary means to measure topological properties
in order to reach a fundamental understanding of topological phases. In one
dimensional chiral systems, we suggest a new way to construct and identify
topologically protected bound states, which are the smoking gun of these
materials. In two-dimensional Hofstadter strips (i.e., systems which are very
short along one dimension), we suggest a new way to measure the topological
invariant directly from the atomic dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Leverage Score Sampling for Faster Accelerated Regression and ERM | Given a matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ and a vector $b
\in\mathbb{R}^{n}$, we show how to compute an $\epsilon$-approximate solution
to the regression problem $ \min_{x\in\mathbb{R}^{d}}\frac{1}{2} \|\mathbf{A} x
- b\|_{2}^{2} $ in time $ \tilde{O} ((n+\sqrt{d\cdot\kappa_{\text{sum}}})\cdot
s\cdot\log\epsilon^{-1}) $ where
$\kappa_{\text{sum}}=\mathrm{tr}\left(\mathbf{A}^{\top}\mathbf{A}\right)/\lambda_{\min}(\mathbf{A}^{\top}\mathbf{A})$
and $s$ is the maximum number of non-zero entries in a row of $\mathbf{A}$. Our
algorithm improves upon the previous best running time of $ \tilde{O}
((n+\sqrt{n \cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$.
We achieve our result through a careful combination of leverage score
sampling techniques, proximal point methods, and accelerated coordinate
descent. Our method not only matches the performance of previous methods, but
further improves whenever leverage scores of rows are small (up to
polylogarithmic factors). We also provide a non-linear generalization of these
results that improves the running time for solving a broader class of ERM
problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
SOTER: Programming Safe Robotics System using Runtime Assurance | Autonomous robots increasingly depend on third-party off-the-shelf components
and complex machine-learning techniques. This trend makes it challenging to
provide strong design-time certification of correct operation. To address this
challenge, we present SOTER, a programming framework that integrates the core
principles of runtime assurance to enable the use of uncertified controllers,
while still providing safety guarantees.
Runtime Assurance (RTA) is an approach used for safety-critical systems where
design-time analysis is coupled with run-time techniques to switch between
unverified advanced controllers and verified simple controllers. In this paper,
we present a runtime assurance programming framework for modular design of
provably-safe robotics software. SOTER provides language primitives to
declaratively construct an RTA module consisting of an advanced controller
(untrusted), a safe controller (trusted), and the desired safety specification
(S). If the RTA module is well formed then the framework provides a formal
guarantee that it satisfies property S. The compiler generates code for
monitoring system state and switching control between the advanced and safe
controller in order to guarantee S. RTA allows complex systems to be
constructed through the composition of RTA modules.
To demonstrate the efficacy of our framework, we consider a real-world
case-study of building a safe drone surveillance system. Our experiments both
in simulation and on actual drones show that SOTER-enabled RTA ensures safety of
the system, including when untrusted third-party components have bugs or
deviate from the desired behavior.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Evaluation of Silicon Photomultipliers for Use as Photosensors in Liquid Xenon Detectors | Silicon photomultipliers (SiPMs) are potential solid-state alternatives to
traditional photomultiplier tubes (PMTs) for single-photon detection. In this
paper, we report on evaluating SensL MicroFC-10035-SMT SiPMs for their
suitability as PMT replacements. The devices were successfully operated in a
liquid-xenon detector, which demonstrates that SiPMs can be used in noble
element time projection chambers as photosensors. The devices were also cooled
down to 170 K to observe dark count dependence on temperature. No dependencies
on the direction of an applied 3.2 kV/cm electric field were observed with
respect to dark-count rate, gain, or photon detection efficiency.
| 0 | 1 | 0 | 0 | 0 | 0 |
Reconsidering Experiments | Experiments may not reveal their full import at the time that they are
performed. The scientists who perform them usually are testing a specific
hypothesis and quite often have specific expectations limiting the possible
inferences that can be drawn from the experiment. Nonetheless, as Hacking has
said, experiments have lives of their own. Those lives do not end with the
initial report of the results and consequences of the experiment. Going back
and rethinking the consequences of the experiment in a new context, theoretical
or empirical, has great merit as a strategy for investigation and for
scientific problem analysis. I apply this analysis to the interplay between
Fizeau's classic optical experiments and the building of special relativity.
Einstein's understanding of the problems facing classical electrodynamics and
optics, in part, was informed by Fizeau's 1851 experiments. However, between
1851 and 1905, Fizeau's experiments were duplicated and reinterpreted by a
succession of scientists, including Hertz, Lorentz, and Michelson. Einstein's
analysis of the consequences of the experiments is tied closely to this
theoretical and experimental tradition. However, Einstein's own inferences from
the experiments differ greatly from the inferences drawn by others in that
tradition.
| 0 | 1 | 0 | 0 | 0 | 0 |
Streaming Kernel PCA with $\tilde{O}(\sqrt{n})$ Random Features | We study the statistical and computational aspects of kernel principal
component analysis using random Fourier features and show that under mild
assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve
$O(1/\epsilon^2)$ sample complexity. Furthermore, we give a memory efficient
streaming algorithm based on classical Oja's algorithm that achieves this rate.
| 0 | 0 | 0 | 1 | 0 | 0 |
Universal Protocols for Information Dissemination Using Emergent Signals | We consider a population of $n$ agents which communicate with each other in a
decentralized manner, through random pairwise interactions. One or more agents
in the population may act as authoritative sources of information, and the
objective of the remaining agents is to obtain information from or about these
source agents. We study two basic tasks: broadcasting, in which the agents are
to learn the bit-state of an authoritative source which is present in the
population, and source detection, in which the agents are required to decide if
at least one source agent is present in the population or not. We focus on
designing protocols which meet two natural conditions: (1) universality, i.e.,
independence of population size, and (2) rapid convergence to a correct global
state after a reconfiguration, such as a change in the state of a source agent.
Our main positive result is to show that both of these constraints can be met.
For both the broadcasting problem and the source detection problem, we obtain
solutions with a convergence time of $O(\log^2 n)$ rounds, w.h.p., from any
starting configuration. The solution to broadcasting is exact, which means that
all agents reach the state broadcast by the source, while the solution to
source detection admits one-sided error on an $\varepsilon$-fraction of the
population (which is unavoidable for this problem). Both protocols are easy to
implement in practice and have a compact formulation. Our protocols exploit the
properties of self-organizing oscillatory dynamics. On the hardness side, our
main structural insight is to prove that any protocol which meets the
constraints of universality and of rapid convergence after reconfiguration must
display a form of non-stationary behavior (of which oscillatory dynamics are an
example). We also observe that the periodicity of the oscillatory behavior of
the protocol, when present, must necessarily depend on the number $\# X$ of
source agents present in the population. For instance, our protocols inherently
rely on the emergence of a signal passing through the population, whose period
is $\Theta(\log \frac{n}{\# X})$ rounds for most starting configurations. The
design of clocks with tunable frequency may be of independent interest, notably
in modeling biological networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
A note on species realizations and nondegeneracy of potentials | In this note we show that a mutation theory of species with potential can be
defined so that a certain class of skew-symmetrizable integer matrices have a
species realization admitting a non-degenerate potential. This gives a partial
affirmative answer to a question raised by Jan Geuenich and Daniel
Labardini-Fragoso. We also provide an example of a class of skew-symmetrizable
$4 \times 4$ integer matrices, which are not globally unfoldable nor strongly
primitive, and that have a species realization admitting a non-degenerate
potential.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Unified Stochastic Formulation of Dissipative Quantum Dynamics. II. Beyond Linear Response of Spin Baths | We use the "generalized hierarchical equation of motion" proposed in Paper I
to study decoherence in a system coupled to a spin bath. The present
methodology allows a systematic incorporation of higher order anharmonic
effects of the bath in dynamical calculations. We investigate the leading order
corrections to the linear response approximations for spin bath models. Two
types of spin-based environments are considered: (1) a bath of spins
discretized from a continuous spectral density and (2) a bath of physical spins
such as nuclear or electron spins. The main difference resides with how the
bath frequency and the system-bath coupling parameters are chosen to represent
an environment. When discretized from a continuous spectral density, the
system-bath coupling typically scales as $\sim 1/\sqrt{N_B}$ where $N_B$ is the
number of bath spins. This scaling suppresses the non-Gaussian characteristics
of the spin bath and justifies the linear response approximations in the
thermodynamic limit. For the physical spin bath models, system-bath couplings
are directly deduced from spin-spin interactions with no reason to obey the
$1/\sqrt{N_B}$ scaling. It is not always possible to justify the linear
response approximations. Furthermore, if the spin-spin Hamiltonian and/or the
bath parameters are highly symmetrical, these additional constraints generate
non-Markovian and persistent dynamics that is beyond the linear response
treatments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Vortex states and spin textures of rotating spin-orbit-coupled Bose-Einstein condensates in a toroidal trap | We consider the ground-state properties of Rashba spin-orbit-coupled
pseudo-spin-1/2 Bose-Einstein condensates (BECs) in a rotating two-dimensional
(2D) toroidal trap. In the absence of spin-orbit coupling (SOC), the increasing
rotation frequency enhances the creation of giant vortices for the initially
miscible BECs, while it can lead to the formation of semiring density patterns
with irregular hidden vortex structures for the initially immiscible BECs.
Without rotation, strong 2D isotropic SOC yields a heliciform-stripe phase for
the initially immiscible BECs. Combined effects of rotation, SOC, and
interatomic interactions on the vortex structures and typical spin textures of
the ground state of the system are discussed systematically. In particular, for
fixed rotation frequency above the critical value, the increasing isotropic SOC
favors a visible vortex ring in each component, which is accompanied by a hidden
giant vortex plus one (or several) hidden vortex ring(s) in the central region. In
the case of 1D anisotropic SOC, large SOC strength results in the generation of
hidden linear vortex string and the transition from initial phase separation
(phase mixing) to phase mixing (phase separation). Furthermore, the peculiar
spin textures including skyrmion lattice, skyrmion pair and skyrmion string are
revealed in this system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Semi-Global Weighted Least Squares in Image Filtering | Solving the global method of Weighted Least Squares (WLS) model in image
filtering is both time- and memory-consuming. In this paper, we present an
alternative approximation in a time- and memory-efficient manner, which is
denoted as Semi-Global Weighted Least Squares (SG-WLS). Instead of solving a
large linear system, we propose to iteratively solve a sequence of subsystems
which are one-dimensional WLS models. Although each subsystem is
one-dimensional, it can take two-dimensional neighborhood information into
account due to the proposed special neighborhood construction. We show such a
desirable property makes our SG-WLS achieve close performance to the original
two-dimensional WLS model but with much less time and memory cost. While
previous related methods mainly focus on the 4-connected/8-connected
neighborhood system, our SG-WLS can handle a more general and larger
neighborhood system thanks to the proposed fast solution. We show such a
generalization can achieve better performance than the 4-connected/8-connected
neighborhood system in some applications. Our SG-WLS is $\sim20$ times faster
than the WLS model. For an image of $M\times N$, the memory cost of SG-WLS is
at most on the order of $\max\{\frac{1}{M}, \frac{1}{N}\}$ of that of the
WLS model. We show the effectiveness and efficiency of our SG-WLS in a range of
applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Universal elliptic Gauß sums for Atkin primes in Schoof's algorithm | This work builds on earlier results. We define universal elliptic Gau{\ss}
sums for Atkin primes in Schoof's algorithm for counting points on elliptic
curves. Subsequently, we show these quantities admit an efficiently computable
representation in terms of the $j$-invariant and two other modular functions.
We analyse the necessary computations in detail and derive an alternative
approach for determining the trace of the Frobenius homomorphism for Atkin
primes using these pre-computations. A rough run-time analysis shows, however,
that this new method is not competitive with existing ones.
| 0 | 0 | 1 | 0 | 0 | 0 |
Deep Residual Networks and Weight Initialization | Residual Network (ResNet) is the state-of-the-art architecture that realizes
successful training of really deep neural networks. It is also known that good
weight initialization of a neural network avoids the problem of vanishing/exploding
gradients. In this paper, simplified models of ResNets are analyzed. We argue
that the goodness of ResNet is correlated with the fact that ResNets are relatively
insensitive to choice of initial weights. We also demonstrate how batch
normalization improves backpropagation of deep ResNets without tuning initial
values of weights.
| 1 | 0 | 0 | 1 | 0 | 0 |
Wavelet graphs for the direct detection of gravitational waves | A second generation of gravitational wave detectors will soon come online
with the objective of measuring for the first time the tiny gravitational
signal from the coalescence of black hole and/or neutron star binaries. In this
communication, we propose a new time-frequency search method alternative to
matched filtering techniques that are usually employed to detect this signal.
This method relies on a graph that encodes the time evolution of the signal and
its variability by establishing links between coefficients in the multi-scale
time-frequency decomposition of the data. We provide a proof of concept for
this approach.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Survey on Hypergraph Products (Erratum) | A surprising diversity of different products of hypergraphs have been
discussed in the literature. Most of the hypergraph products can be viewed as
generalizations of one of the four standard graph products. The most widely
studied variant, the so-called square product, does not have this property,
however. Here we survey the literature on hypergraph products with an emphasis
on comparing the alternative generalizations of graph products and the
relationships among them. In this context the so-called 2-sections and
L2-sections are considered. These constructions are closely linked to related
colored graph structures that seem to be a useful tool for the prime factor
decompositions w.r.t. specific hypergraph products. We summarize the current
knowledge on the propagation of hypergraph invariants under the different
hypergraph multiplications. While the overwhelming majority of the material
concerns finite (undirected) hypergraphs, the survey also covers a summary of
the few results on products of infinite and directed hypergraphs.
| 1 | 0 | 0 | 0 | 0 | 0 |
One- and two-channel Kondo model with logarithmic Van Hove singularity: a numerical renormalization group solution | Simple scaling consideration and NRG solution of the one- and two-channel
Kondo model in the presence of a logarithmic Van Hove singularity at the Fermi
level is given. The temperature dependences of local and impurity magnetic
susceptibility and impurity entropy are calculated. The low-temperature
behavior of the impurity susceptibility and impurity entropy turns out to be
non-universal in the Kondo sense and independent of the $s-d$ coupling $J$. The
resonant level model solution in the strong coupling regime confirms the NRG
results. In the two-channel case the local susceptibility demonstrates a
non-Fermi-liquid power-law behavior.
| 0 | 1 | 0 | 0 | 0 | 0 |
A deep Convolutional Neural Network for topology optimization with strong generalization ability | This paper proposes a deep Convolutional Neural Network(CNN) with strong
generalization ability for structural topology optimization. The architecture
of the neural network is made up of encoding and decoding parts, which provide
down- and up-sampling operations. In addition, a popular technique, namely
U-Net, was adopted to improve the performance of the proposed neural network.
The input of the neural network is a well-designed tensor in which each channel
encodes different information about the problem, and the output is the layout of
the optimal structure. To train the neural network, a large dataset is
generated by a conventional topology optimization approach, i.e. SIMP. The
performance of the proposed method was evaluated by comparing its efficiency
and accuracy with SIMP on a series of typical optimization problems. Results
show that a significant reduction in computation cost was achieved with little
sacrifice on the optimality of design solutions. Furthermore, the proposed
method can intelligently solve problems under boundary conditions not
included in the training dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
Focused Hierarchical RNNs for Conditional Sequence Processing | Recurrent Neural Networks (RNNs) with attention mechanisms have obtained
state-of-the-art results for many sequence processing tasks. Most of these
models use a simple form of encoder with attention that looks over the entire
sequence and assigns a weight to each token independently. We present a
mechanism for focusing RNN encoders for sequence modelling tasks which allows
them to attend to key parts of the input as needed. We formulate this using a
multi-layer conditional sequence encoder that reads in one token at a time and
makes a discrete decision on whether the token is relevant to the context or
question being asked. The discrete gating mechanism takes in the context
embedding and the current hidden state as inputs and controls information flow
into the layer above. We train it using policy gradient methods. We evaluate
this method on several types of tasks with different attributes. First, we
evaluate the method on synthetic tasks which allow us to evaluate the model for
its generalization ability and probe the behavior of the gates in more
controlled settings. We then evaluate this approach on large scale Question
Answering tasks including the challenging MS MARCO and SearchQA tasks. Our
models show consistent improvements for both tasks over prior work and our
baselines. They have also been shown to generalize significantly better on synthetic
tasks as compared to the baselines.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Faraday room of the CUORE Experiment | The paper describes the Faraday room that shields the CUORE experiment
against electromagnetic fields, from 50 Hz up to high frequency. Practical
constraints led to the choice of panels made of light shielding materials. The seams
between panels were optimized with simulations to minimize leakage.
Measurements of shielding performance show attenuation by a factor of 15 at 50 Hz,
and by a factor of 1000 from 1 kHz up to about 100 MHz.
| 0 | 1 | 0 | 0 | 0 | 0 |
Simulations and measurements of the impact of collective effects on dynamic aperture | We describe a benchmark study of collective and nonlinear dynamics in an APS
storage ring. A 1-mm-long bunch was assumed in the wakefield calculation, and
element-by-element particle tracking, with the wakefield components distributed
along the ring, was performed in an Elegant simulation. The result of the Elegant
simulation differed by less than 5% from the experimental measurement.
| 0 | 1 | 0 | 0 | 0 | 0 |
Estimation of the asymptotic variance of univariate and multivariate random fields and statistical inference | Correlated random fields are a common way to model dependence struc- tures in
high-dimensional data, especially for data collected in imaging. One important
parameter characterizing the degree of dependence is the asymptotic variance
which adds up all autocovariances in the temporal and spatial domain.
Especially, it arises in the standardization of test statistics based on
partial sums of random fields and thus the construction of tests requires its
estimation. In this paper we propose consistent estimators for this parameter
for strictly stationary $\phi$-mixing random fields with arbitrary dimension of
the domain and taking values in a Euclidean space of arbitrary dimension, thus
allowing for multivariate random fields. We establish consistency, provide
central limit theorems and show that distributional approximations of related test
statistics based on sample autocovariances of random fields can be obtained by
the subsampling approach. As in applications the spatial-temporal correlations
are often quite local, such that a large number of autocovariances vanish or
are negligible, we also investigate a thresholding approach where sample
autocovariances of small magnitude are omitted. Extensive simulation studies
show that the proposed estimators work well in practice and, when used to
standardize image test statistics, can provide highly accurate image testing
procedures.
| 0 | 0 | 1 | 1 | 0 | 0 |
Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation | Deep learning approaches such as convolutional neural nets have consistently
outperformed previous methods on challenging tasks such as dense, semantic
segmentation. However, the various proposed networks perform differently, with
behaviour largely influenced by architectural choices and training settings.
This paper explores Ensembles of Multiple Models and Architectures (EMMA) for
robust performance through aggregation of predictions from a wide range of
methods. The approach reduces the influence of the meta-parameters of
individual models and the risk of overfitting the configuration to a particular
database. EMMA can be seen as an unbiased, generic deep learning model which is
shown to yield excellent performance, winning the first position in the BRATS
2017 competition among 50+ participating teams.
| 1 | 0 | 0 | 0 | 0 | 0 |
Estimating Phase Duration for SPaT Messages | A SPaT (Signal Phase and Timing) message describes for each lane the current
phase at a signalized intersection together with an estimate of the residual
time of that phase. Accurate SPaT messages can be used to construct a speed
profile for a vehicle that reduces its fuel consumption as it approaches or
leaves an intersection. This paper presents SPaT estimation algorithms at an
intersection with a semi-actuated signal, using real-time signal phase
measurements. The algorithms are evaluated using high-resolution data from two
intersections in Montgomery County, MD. The algorithms can be readily
implemented at signal controllers. The study supports three findings. First,
real-time information dramatically improves the accuracy of the prediction of
the residual time compared with prediction based on historical data alone.
Second, as time increases the prediction of the residual time may increase or
decrease. Third, as drivers differently weight errors in predicting `end of
green' and `end of red', drivers on two different approaches may prefer
different estimates of the residual time of the same phase.
| 0 | 0 | 0 | 1 | 0 | 0 |
Efficient Spatial Variation Characterization via Matrix Completion | In this paper, we propose a novel method to estimate and characterize spatial
variations on dies or wafers. This new technique exploits recent developments
in matrix completion, enabling estimation of spatial variation across wafers or
dies with a small number of randomly picked sampling points while still
achieving fairly high accuracy. This new approach can be easily generalized,
including for estimation of mixed spatial and structure or device type
information.
| 1 | 0 | 0 | 0 | 0 | 0 |
TS-MPC for Autonomous Vehicles including a dynamic TS-MHE-UIO | In this work, a novel approach is presented to solve the problem of tracking
trajectories in autonomous vehicles. This approach is based on the use of a
cascade control where the external loop solves the position control using a
novel Takagi Sugeno - Model Predictive Control (TS-MPC) approach and the
internal loop is in charge of the dynamic control of the vehicle using a Takagi
Sugeno - Linear Quadratic Regulator technique designed via Linear Matrix
Inequalities (TS-LMI-LQR). Both techniques use a TS representation of the
kinematic and dynamic models of the vehicle. In addition, a novel Takagi Sugeno
estimator - Moving Horizon Estimator - Unknown Input Observer (TS-MHE-UIO) is
presented. This method estimates the dynamic states of the vehicle optimally as
well as the force of friction acting on the vehicle that is used to reduce the
control efforts. The innovative contribution of the TS-MPC and TS-MHE-UIO
techniques is that using the TS model formulation of the vehicle allows us to
solve the nonlinear problem as if it were linear, reducing computation times by
40-50 times. To demonstrate the potential of the TS-MPC we propose a comparison
between three methods of solving the kinematic control problem: using the
non-linear MPC formulation (NL-MPC), using TS-MPC without updating the
prediction model and using updated TS-MPC with the references of the planner.
| 1 | 0 | 0 | 0 | 0 | 0 |
Flexibility Analysis for Smart Grid Demand Response | Flexibility is a key enabler for the smart grid, required to facilitate
Demand Side Management (DSM) programs, managing electrical consumption to
reduce peaks, balance renewable generation and provide ancillary services to
the grid. Flexibility analysis is required to identify and quantify the
available electrical load of a site or building which can be shed or increased
in response to a DSM signal. A methodology for assessing flexibility is
developed, based on flexibility formulations and optimization requirements. The
methodology characterizes the loads, storage and on-site generation,
incorporates site assessment using the ISO 50002:2014 energy audit standard and
benchmarks performance against documented studies. An example application of
the methodology is detailed using a pilot site demonstrator.
| 1 | 0 | 1 | 0 | 0 | 0 |
Duluth at SemEval-2017 Task 6: Language Models in Humor Detection | This paper describes the Duluth system that participated in SemEval-2017 Task
6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks
A and B using N-gram language models, ranking highly in the task evaluation.
This paper discusses the results of our system in the development and
evaluation stages and from two post-evaluation runs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Classification of grasping tasks based on EEG-EMG coherence | This work presents an innovative application of the well-known concept of
cortico-muscular coherence for the classification of various motor tasks, i.e.,
grasps of different kinds of objects. Our approach can classify objects with
different weights (motor-related features) and different surface frictions
(haptics-related features) with high accuracy (over 0.8). The outcomes
presented here provide information about the synchronization existing between
the brain and the muscles during specific activities; thus, this may represent
a new effective way to perform activity recognition.
| 0 | 0 | 0 | 0 | 1 | 0 |
Kepler sheds new and unprecedented light on the variability of a blue supergiant: gravity waves in the O9.5Iab star HD 188209 | Stellar evolution models are most uncertain for evolved massive stars.
Asteroseismology based on high-precision uninterrupted space photometry has
become a new way to test the outcome of stellar evolution theory and was
recently applied to a multitude of stars, but not yet to massive evolved
supergiants. Our aim is to detect, analyse and interpret the photospheric and
wind variability of the O9.5Iab star HD 188209 from Kepler space photometry and
long-term high-resolution spectroscopy. We used Kepler scattered-light
photometry obtained by the nominal mission during 1460d to deduce the
photometric variability of this O-type supergiant. In addition, we assembled
and analysed high-resolution high signal-to-noise spectroscopy taken with four
spectrographs during some 1800d to interpret the temporal spectroscopic
variability of the star. The variability of this blue supergiant derived from
the scattered-light space photometry is in full agreement with the one found
in the ground-based spectroscopy. We find significant low-frequency variability
that is consistently detected in all spectral lines of HD 188209. The
photospheric variability propagates into the wind, where it has similar
frequencies but slightly higher amplitudes. The morphology of the frequency
spectra derived from the long-term photometry and spectroscopy points towards a
spectrum of travelling waves with frequency values in the range expected for an
evolved O-type star. Convectively-driven internal gravity waves excited in the
stellar interior offer the most plausible explanation of the detected
variability.
| 0 | 1 | 0 | 0 | 0 | 0 |
Complexity of short Presburger arithmetic | We study complexity of short sentences in Presburger arithmetic (Short-PA).
Here by "short" we mean sentences with a bounded number of variables,
quantifiers, inequalities and Boolean operations; the input consists only of
the integers involved in the inequalities. We prove that assuming Kannan's
partition can be found in polynomial time, the satisfiability of Short-PA
sentences can be decided in polynomial time. Furthermore, under the same
assumption, we show that the numbers of satisfying assignments of short
Presburger sentences can also be computed in polynomial time.
| 1 | 0 | 1 | 0 | 0 | 0 |
Deep neural network based speech separation optimizing an objective estimator of intelligibility for low latency applications | Mean square error (MSE) has been the preferred choice as loss function in the
current deep neural network (DNN) based speech separation techniques. In this
paper, we propose a new cost function with the aim of optimizing the extended
short time objective intelligibility (ESTOI) measure. We focus on applications
where low algorithmic latency ($\leq 10$ ms) is important. We use long
short-term memory networks (LSTM) and evaluate our proposed approach on four
sets of two-speaker mixtures from extended Danish hearing in noise (HINT)
dataset. We show that the proposed loss function can offer improved or at par
objective intelligibility (in terms of ESTOI) compared to an MSE optimized
baseline while resulting in lower objective separation performance (in terms of
the source to distortion ratio (SDR)). We then proceed to propose an approach
where the network is first initialized with weights optimized for MSE criterion
and then trained with the proposed ESTOI loss criterion. This approach
mitigates some of the losses in objective separation performance while
preserving the gains in objective intelligibility.
| 1 | 0 | 0 | 0 | 0 | 0 |
Characterization and control of linear coupling using turn-by-turn beam position monitor data in storage rings | We introduce a new application of measuring symplectic generators to
characterize and control the linear betatron coupling in storage rings. From
synchronized and consecutive BPM (Beam Position Monitor) turn-by-turn (TbT)
readings, symplectic Lie generators describing the coupled linear dynamics are
extracted. Four plane-crossing terms in the generators directly characterize
the coupling between the horizontal and the vertical planes. Coupling control
can be accomplished by utilizing the dependency of these plane-crossing terms
on skew quadrupoles. The method has been successfully demonstrated to reduce
the vertical effective emittance down to the diffraction limit in the newly
constructed National Synchrotron Light Source II (NSLS-II) storage ring. This
method can be automatized to realize linear coupling feedback control with
negligible disturbance on machine operation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive Inferential Method for Monotone Graph Invariants | We consider the problem of undirected graphical model inference. In many
applications, instead of perfectly recovering the unknown graph structure, a
more realistic goal is to infer some graph invariants (e.g., the maximum
degree, the number of connected subgraphs, the number of isolated nodes). In
this paper, we propose a new inferential framework for testing nested multiple
hypotheses and constructing confidence intervals of the unknown graph
invariants under undirected graphical models. Compared to perfect graph
recovery, our methods require significantly weaker conditions. This paper makes
two major contributions: (i) Methodologically, for testing nested multiple
hypotheses, we propose a skip-down algorithm on the whole family of monotone
graph invariants (the invariants which are non-decreasing under addition of
edges). We further show that the same skip-down algorithm also provides valid
confidence intervals for the targeted graph invariants. (ii) Theoretically, we
prove that the lengths of the obtained confidence intervals are optimal and
adaptive to the unknown signal strength. We also prove generic lower bounds for
the confidence interval length for various invariants. Numerical results on
both synthetic simulations and a brain imaging dataset are provided to
illustrate the usefulness of the proposed method.
| 0 | 0 | 1 | 1 | 0 | 0 |
High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing | High-dose-rate brachytherapy is a tumor treatment method where a highly
radioactive source is brought in close proximity to the tumor. In this paper we
develop a simulated annealing algorithm to optimize the dwell times at
preselected dwell positions to maximize tumor coverage under dose-volume
constraints on the organs at risk. Compared to existing algorithms, our
algorithm has advantages in terms of speed and objective value and does not
require an expensive general purpose solver. Its success mainly depends on
exploiting the efficiency of matrix multiplication and a careful selection of
the neighboring states. In this paper we outline its details and make an
in-depth comparison with existing methods using real patient data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scaling the Scattering Transform: Deep Hybrid Networks | We use the scattering network as a generic and fixed ini-tialization of the
first layers of a supervised hybrid deep network. We show that early layers do
not necessarily need to be learned, providing the best results to-date with
pre-defined representations while being competitive with Deep CNNs. Using a
shallow cascade of 1 x 1 convolutions, which encodes scattering coefficients
that correspond to spatial windows of very small sizes, permits obtaining
AlexNet accuracy on ImageNet ILSVRC2012. We demonstrate that this local
encoding explicitly learns invariance w.r.t. rotations. Combining scattering
networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on
ImageNet ILSVRC2012, comparable to the ResNet-18 architecture, while utilizing
only 10 layers. We also find that hybrid architectures can yield excellent
performance in the small sample regime, exceeding their end-to-end
counterparts, through their ability to incorporate geometrical priors. We
demonstrate this on subsets of the CIFAR-10 dataset and on the STL-10 dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Methodological variations in lagged regression for detecting physiologic drug effects in EHR data | We studied how lagged linear regression can be used to detect the physiologic
effects of drugs from data in the electronic health record (EHR). We
systematically examined the effect of methodological variations ((i) time
series construction, (ii) temporal parameterization, (iii) intra-subject
normalization, (iv) differencing (lagged rates of change achieved by taking
differences between consecutive measurements), (v) explanatory variables, and
(vi) regression models) on performance of lagged linear methods in this
context. We generated two gold standards (one knowledge-base derived, one
expert-curated) for expected pairwise relationships between 7 drugs and 4 labs,
and evaluated how the 64 unique combinations of methodological perturbations
reproduce gold standards. Our 28 cohorts included patients in Columbia
University Medical Center/NewYork-Presbyterian Hospital clinical database. The
most accurate methods achieved AUROC of 0.794 for knowledge-base derived gold
standard (95%CI [0.741, 0.847]) and 0.705 for expert-curated gold standard (95%
CI [0.629, 0.781]). We observed a 0.633 mean AUROC (95%CI [0.610, 0.657],
expert-curated gold standard) across all methods that re-parameterize time
according to sequence and use either a joint autoregressive model with
differencing or an independent lag model without differencing. The complement
of this set of methods achieved a mean AUROC close to 0.5, indicating the
importance of these choices. We conclude that time-series analysis of EHR data
will likely rely on some of the beneficial pre-processing and modeling
methodologies identified, and will certainly benefit from continued careful
analysis of methodological perturbations. This study found that methodological
variations, such as pre-processing and representations, significantly affect
results, exposing the importance of evaluating these components when comparing
machine-learning methods.
| 0 | 0 | 0 | 1 | 1 | 0 |
Leontief Meets Shannon - Measuring the Complexity of the Economic System | We develop a complexity measure for large-scale economic systems based on
Shannon's concept of entropy. By adopting Leontief's perspective of the
production process as a circular flow, we formulate the process as a Markov
chain. Then we derive a measure of economic complexity as the average number of
bits required to encode the flow of goods and services in the production
process. We illustrate this measure using data from seven national economies,
spanning several decades.
| 0 | 1 | 0 | 1 | 0 | 0 |
Exploring nucleon spin structure through neutrino neutral-current interactions in MicroBooNE | The net contribution of the strange quark spins to the proton spin, $\Delta
s$, can be determined from neutral current elastic neutrino-proton interactions
at low momentum transfer combined with data from electron-proton scattering.
The probability of neutrino-proton interactions depends in part on the axial
form factor, which represents the spin structure of the proton and can be
separated into its quark flavor contributions. Low momentum transfer neutrino
neutral current interactions can be measured in MicroBooNE, a high-resolution
liquid argon time projection chamber (LArTPC) in its first year of running in
the Booster Neutrino Beamline at Fermilab. The signal for these interactions in
MicroBooNE is a single short proton track. We present our work on the automated
reconstruction and classification of proton tracks in LArTPCs, an important
step in the determination of neutrino- nucleon cross sections and the
measurement of $\Delta s$.
| 0 | 1 | 0 | 0 | 0 | 0 |
A unimodular Liouville hyperbolic souvlaki --- an appendix to [arXiv:1603.06712] | Carmesin, Federici, and Georgakopoulos [arXiv:1603.06712] constructed a
transient hyperbolic graph that has no transient subtrees and that has the
Liouville property for harmonic functions. We modify their construction to get
a unimodular random graph with the same properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Comparison Based Nearest Neighbor Search | We consider machine learning in a comparison-based setting where we are given
a set of points in a metric space, but we have no access to the actual
distances between the points. Instead, we can only ask an oracle whether the
distance between two points $i$ and $j$ is smaller than the distance between
the points $i$ and $k$. We are concerned with data structures and algorithms to
find nearest neighbors based on such comparisons. We focus on a simple yet
effective algorithm that recursively splits the space by first selecting two
random pivot points and then assigning all other points to the closer of the
two (comparison tree). We prove that if the metric space satisfies certain
expansion conditions, then with high probability the height of the comparison
tree is logarithmic in the number of points, leading to efficient search
performance. We also provide an upper bound for the failure probability to
return the true nearest neighbor. Experiments show that the comparison tree is
competitive with algorithms that have access to the actual distance values, and
needs fewer triplet comparisons than other competitors.
| 1 | 0 | 0 | 1 | 0 | 0 |
LSTM Networks for Data-Aware Remaining Time Prediction of Business Process Instances | Predicting the completion time of business process instances would be a very
helpful aid when managing processes under service level agreement constraints.
The ability to know in advance the trend of running process instances would
allow business managers to react in time, in order to prevent delays or
undesirable situations. However, making such accurate forecasts is not easy:
many factors may influence the required time to complete a process instance. In
this paper, we propose an approach based on deep Recurrent Neural Networks
(specifically LSTMs) that is able to exploit arbitrary information associated
to single events, in order to produce an as-accurate-as-possible prediction of
the completion time of running instances. Experiments on real-world datasets
confirm the quality of our proposal.
| 1 | 0 | 0 | 1 | 0 | 0 |
Message-passing algorithm of quantum annealing with nonstoquastic Hamiltonian | Quantum annealing (QA) is a generic method for solving optimization problems
using fictitious quantum fluctuation. The current device performing QA controls
the transverse field; it is classically simulatable using the standard
technique of mapping quantum spin systems to classical ones. In this sense, the
current system for QA is not powerful despite utilizing quantum fluctuation.
Hence, we developed a system with a time-dependent Hamiltonian consisting of a
combination of the Ising model formulated from the optimization problem and a
"driver" Hamiltonian with only quantum fluctuation. In a previous study, for
a fully connected spin model, quantum fluctuation could be addressed in a
relatively simple way. We proved that the fully connected antiferromagnetic
interaction can be transformed into a fluctuating transverse field and is thus
classically simulatable at sufficiently low temperatures. Using the fluctuating
transverse field, we established several ways to simulate part of the
nonstoquastic Hamiltonian on classical computers. We formulated a
message-passing algorithm in the present study. This algorithm is capable of
assessing the performance of QA with part of the nonstoquastic Hamiltonian
having a large number of spins. In other words, we developed a different
approach for simulating the nonstoquastic Hamiltonian without using the quantum
Monte Carlo technique. Our results were validated by comparison to the results
obtained by the replica method.
| 1 | 0 | 0 | 1 | 0 | 0 |
RLE Plots: Visualising Unwanted Variation in High Dimensional Data | Unwanted variation can be highly problematic and so its detection is often
crucial. Relative log expression (RLE) plots are a powerful tool for
visualising such variation in high dimensional data. We provide a detailed
examination of these plots, with the aid of examples and simulation, explaining
what they are and what they can reveal. RLE plots are particularly useful for
assessing whether a procedure aimed at removing unwanted variation, i.e. a
normalisation procedure, has been successful. These plots, while originally
devised for gene expression data from microarrays, can also be used to reveal
unwanted variation in many other kinds of high dimensional data, where such
variation can be problematic.
| 0 | 0 | 0 | 1 | 0 | 0 |
ALMA Observations of Gas-Rich Galaxies in z~1.6 Galaxy Clusters: Evidence for Higher Gas Fractions in High-Density Environments | We present ALMA CO (2-1) detections in 11 gas-rich cluster galaxies at z~1.6,
constituting the largest sample of molecular gas measurements in z>1.5 clusters
to date. The observations span three galaxy clusters, derived from the Spitzer
Adaptation of the Red-sequence Cluster Survey. We augment the >5sigma
detections of the CO (2-1) fluxes with multi-band photometry, yielding stellar
masses and infrared-derived star formation rates, to place some of the first
constraints on molecular gas properties in z~1.6 cluster environments. We
measure sizable gas reservoirs of 0.5-2x10^11 solar masses in these objects,
with high gas fractions and long depletion timescales, averaging 62% and 1.4
Gyr, respectively. We compare our cluster galaxies to the scaling relations of
the coeval field, in the context of how gas fractions and depletion timescales
vary with respect to the star-forming main sequence. We find that our cluster
galaxies lie systematically off the field scaling relations at z=1.6 toward
enhanced gas fractions, at a level of ~4sigma, but have consistent depletion
timescales. Exploiting CO detections in lower-redshift clusters from the
literature, we investigate the evolution of the gas fraction in cluster
galaxies, finding it to mimic the strong rise with redshift in the field. We
emphasize the utility of detecting abundant gas-rich galaxies in high-redshift
clusters, deeming them crucial laboratories for future statistical studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
CardiacNET: Segmentation of Left Atrium and Proximal Pulmonary Veins from MRI Using Multi-View CNN | Anatomical and biophysical modeling of left atrium (LA) and proximal
pulmonary veins (PPVs) is important for clinical management of several cardiac
diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of LA
and PPVs through visualization. However, there is a strong need for an advanced
image segmentation method to be applied to cardiac MRI for quantitative
analysis of LA and PPVs. In this study, we address this unmet clinical need by
exploring a new deep learning-based segmentation strategy for quantification of
LA and PPVs with high accuracy and heightened efficiency. Our approach is based
on a multi-view convolutional neural network (CNN) with an adaptive fusion
strategy and a new loss function that allows fast and more accurate convergence
of the backpropagation-based optimization. After training our network from
scratch using more than 60K 2D MRI images (slices), we evaluated our
segmentation strategy against the STACOM 2013 cardiac segmentation challenge
benchmark. Qualitative and quantitative evaluations, obtained from the
segmentation challenge, indicate that the proposed method achieved the
state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and
efficiency levels (10 seconds in GPU, and 7.5 minutes in CPU).
| 1 | 0 | 0 | 1 | 0 | 0 |
Comparison of Polynomial Chaos and Gaussian Process surrogates for uncertainty quantification and correlation estimation of spatially distributed open-channel steady flows | Data assimilation is widely used to improve flood forecasting capability,
especially through parameter inference requiring statistical information on the
uncertain input parameters (upstream discharge, friction coefficient) as well
as on the variability of the water level and its sensitivity with respect to
the inputs. For particle filter or ensemble Kalman filter, stochastically
estimating probability density function and covariance matrices from a Monte
Carlo random sampling requires a large ensemble of model evaluations, limiting
their use in real-time application. To tackle this issue, fast surrogate models
based on Polynomial Chaos and Gaussian Process can be used to represent the
spatially distributed water level in place of solving the shallow water
equations. This study investigates the use of these surrogates to estimate
probability density functions and covariance matrices at a reduced
computational cost and without loss of accuracy, in the context of
ensemble-based data assimilation. This study focuses on 1-D steady-state flow
simulated with MASCARET over the Garonne River (South-West France). Results
show that both surrogates feature similar performance to the Monte-Carlo random
sampling, but for a much smaller computational budget; a few MASCARET
simulations (on the order of 10-100) are sufficient to accurately retrieve
covariance matrices and probability density functions all along the river, even
where the flow dynamic is more complex due to heterogeneous bathymetry. This
paves the way for the design of surrogate strategies suitable for representing
unsteady open-channel flows in data assimilation.
| 0 | 1 | 0 | 1 | 0 | 0 |
Thermal Sunyaev-Zel'dovich effect in the intergalactic medium with primordial magnetic fields | The presence of ubiquitous magnetic fields in the universe is suggested by
observations of radiation and cosmic rays from galaxies or the intergalactic
medium (IGM). One possible origin of cosmic magnetic fields is the
magnetogenesis in the primordial universe. Such magnetic fields are called
primordial magnetic fields (PMFs), and are considered to affect the evolution
of matter density fluctuations and the thermal history of the IGM gas. Hence
the information of PMFs is expected to be imprinted on the anisotropies of the
cosmic microwave background (CMB) through the thermal Sunyaev-Zel'dovich (tSZ)
effect in the IGM. In this study, given an initial power spectrum of PMFs as
$P(k)\propto B_{\rm 1Mpc}^2 k^{n_{B}}$, we calculate dynamical and thermal
evolutions of the IGM under the influence of PMFs, and compute the resultant
angular power spectrum of the Compton $y$-parameter on the sky. As a result, we
find that two physical processes driven by PMFs dominantly determine the power
spectrum of the Compton $y$-parameter; (i) the heating due to the ambipolar
diffusion effectively works to increase the temperature and the ionization
fraction, and (ii) the Lorentz force drastically enhances the density contrast
just after the recombination epoch. These facts result in making the tSZ
angular power spectrum induced by the PMFs more remarkable at $\ell >10^4$ than
that by galaxy clusters even with $B_{\rm 1Mpc}=0.1$ nG and $n_{B}=-1.0$
because the contribution from galaxy clusters decreases with increasing $\ell$.
The measurement of the tSZ angular power spectrum on high $\ell$ modes can
provide the stringent constraint on PMFs.
| 0 | 1 | 0 | 0 | 0 | 0 |
One-dimensional model of chiral fermions with Feshbach resonant interactions | We study a model of two species of one-dimensional linearly dispersing
fermions interacting via an s-wave Feshbach resonance at zero temperature.
While this model is known to be integrable, it possesses novel features that
have not previously been investigated. Here, we present an exact solution based
on the coordinate Bethe Ansatz. In the limit of infinite resonance strength,
which we term the strongly interacting limit, the two species of fermions
behave as free Fermi gases. In the limit of infinitely weak resonance, or the
weakly interacting limit, the gases can be in different phases depending on the
detuning, the relative velocities of the particles, and the particle densities.
When the molecule moves faster or slower than both species of atoms, the atomic
velocities get renormalized and the atoms may even become non-chiral. On the
other hand, when the molecular velocity is between that of the atoms, the
system may behave like a weakly interacting Lieb-Liniger gas.
| 0 | 1 | 0 | 0 | 0 | 0 |
A lower bound on the positive semidefinite rank of convex bodies | The positive semidefinite rank of a convex body $C$ is the size of its
smallest positive semidefinite formulation. We show that the positive
semidefinite rank of any convex body $C$ is at least $\sqrt{\log d}$ where $d$
is the smallest degree of a polynomial that vanishes on the boundary of the
polar of $C$. This improves on the existing bound which relies on results from
quantifier elimination. The proof relies on the Bézout bound applied to the
Karush-Kuhn-Tucker conditions of optimality. We discuss the connection with the
algebraic degree of semidefinite programming and show that the bound is tight
(up to constant factor) for random spectrahedra of suitable dimension.
| 1 | 0 | 1 | 0 | 0 | 0 |
Klt varieties with trivial canonical class - Holonomy, differential forms, and fundamental groups | We investigate the holonomy group of singular Kähler-Einstein metrics on
klt varieties with numerically trivial canonical divisor. Finiteness of the
number of connected components, a Bochner principle for holomorphic tensors,
and a connection between irreducibility of holonomy representations and
stability of the tangent sheaf are established. As a consequence, known
decompositions for tangent sheaves of varieties with trivial canonical divisor
are refined. In particular, we show that up to finite quasi-étale covers,
varieties with strongly stable tangent sheaf are either Calabi-Yau or
irreducible holomorphic symplectic. These results form one building block for
Höring-Peternell's recent proof of a singular version of the
Beauville-Bogomolov Decomposition Theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Selective Classification for Deep Neural Networks | Selective classification techniques (also known as reject option) have not
yet been considered in the context of deep neural networks (DNNs). These
techniques can potentially significantly improve DNNs' prediction performance by
trading off coverage. In this paper we propose a method to construct a
selective classifier given a trained neural network. Our method allows a user
to set a desired risk level. At test time, the classifier rejects instances as
needed, to guarantee the desired risk (with high probability). Empirical results
over CIFAR and ImageNet convincingly demonstrate the viability of our method,
which opens up possibilities to operate DNNs in mission-critical applications.
For example, using our method an unprecedented 2% error in top-5 ImageNet
classification can be guaranteed with probability 99.9%, and almost 60% test
coverage.
| 1 | 0 | 0 | 0 | 0 | 0 |
Crowdsourcing Ground Truth for Medical Relation Extraction | Cognitive computing systems require human labeled data for evaluation, and
often for training. The standard practice used in gathering this data minimizes
disagreement between annotators, and we have found this results in data that
fails to account for the ambiguity inherent in language. We have proposed the
CrowdTruth method for collecting ground truth through crowdsourcing, which
reconsiders the role of people in machine learning based on the observation
that disagreement between annotators provides a useful signal for phenomena
such as ambiguity in the text. We report on using this method to build an
annotated data set for medical relation extraction for the $cause$ and $treat$
relations, and how this data performed in a supervised training experiment. We
demonstrate that by modeling ambiguity, labeled data gathered from crowd
workers can (1) reach the level of quality of domain experts for this task
while reducing the cost, and (2) provide better training data at scale than
distant supervision. We further propose and validate new weighted measures for
precision, recall, and F-measure, that account for ambiguity in both human and
machine performance on this task.
| 1 | 0 | 0 | 0 | 0 | 0 |
Chaotic laser based physical random bit streaming system with a computer application interface | We demonstrate a random bit streaming system that uses a chaotic laser as its
physical entropy source. By performing real-time bit manipulation for bias
reduction, we were able to provide the memory of a personal computer with a
constant supply of ready-to-use physical random bits at a throughput of up to 4
Gbps. We pay special attention to the end-to-end entropy source model
describing how the entropy from physical sources is converted into bit entropy.
We confirmed the statistical quality of the generated random bits by showing
the pass rate of the NIST SP800-22 test suite to be 65% to 75%, which is
commonly considered acceptable for a reliable random bit generator. We also
confirmed the stable operation of our random bit streaming system with long-term
bias monitoring.
| 1 | 1 | 0 | 0 | 0 | 0 |
Observation of Intrinsic Half-metallic Behavior of CrO$_2$ (100) Epitaxial Films by Bulk-sensitive Spin-resolved PES | We have investigated the electronic states and spin polarization of
half-metallic ferromagnet CrO$_2$ (100) epitaxial films by bulk-sensitive
spin-resolved photoemission spectroscopy with a focus on non-quasiparticle
(NQP) states derived from electron-magnon interactions. We found that the
averaged values of the spin polarization are approximately 100% and 40% at 40 K
and 300 K, respectively. This is consistent with the previously reported result
[H. Fujiwara et al., Appl. Phys. Lett. 106, 202404 (2015)]. At 100 K, peculiar
spin depolarization was observed at the Fermi level ($E_{F}$), which is
supported by theoretical calculations predicting NQP states. This suggests the
possible appearance of NQP states in CrO$_2$. We also compare the temperature
dependence of our spin polarizations with that of the magnetization.
| 0 | 1 | 0 | 0 | 0 | 0 |
Development and Characterisation of a Gas System and its Associated Slow-Control System for an ATLAS Small-Strip Thin Gap Chamber Testing Facility | A quality assurance and performance qualification laboratory was built at
McGill University for the Canadian-made small-strip Thin Gap Chamber (sTGC)
muon detectors produced for the 2019-2020 ATLAS experiment muon spectrometer
upgrade. The facility uses cosmic rays as a muon source to ionise the quenching
gas mixture of pentane and carbon dioxide flowing through the sTGC detector. A
gas system was developed and characterised for this purpose, with a simple and
efficient gas condenser design utilizing a Peltier thermoelectric cooler (TEC).
The gas system was tested to provide the desired 45 vol% pentane concentration.
For continuous operations, a state-machine system was implemented with alerting
and remote monitoring features to run all slow-control systems associated with
cosmic-ray data acquisition, such as high/low voltage, gas system and
environmental monitoring, in a safe and continuous mode, even in the absence of
an operator.
| 0 | 1 | 0 | 0 | 0 | 0 |
Convergence of extreme value statistics in a two-layer quasi-geostrophic atmospheric model | We search for the signature of universal properties of extreme events,
theoretically predicted for Axiom A flows, in a chaotic and high dimensional
dynamical system by studying the convergence of GEV (Generalized Extreme Value)
and GP (Generalized Pareto) shape parameter estimates to a theoretical value,
expressed in terms of partial dimensions of the attractor, which are global
properties. We consider a two layer quasi-geostrophic (QG) atmospheric model
using two forcing levels, and analyse extremes of different types of physical
observables (local, zonally-averaged energy, and the average value of energy
over the mid-latitudes). Regarding the predicted universality, we find closer
agreement in the shape parameter estimates only in the case of strong forcing,
producing a highly chaotic behaviour, for some observables (the local energy at
every latitude). Due to the limited (though very large) data size and the
presence of serial correlations, it is difficult to obtain robust statistics of
extremes in the case of the other observables. In the case of weak forcing,
inducing a less pronounced chaotic flow with regime behaviour, we find worse
agreement with the theory developed for Axiom A flows, which is unsurprising
considering the properties of the system.
| 0 | 1 | 0 | 1 | 0 | 0 |
A deep search for metals near redshift 7: the line-of-sight towards ULAS J1120+0641 | We present a search for metal absorption line systems at the highest
redshifts to date using a deep (30h) VLT/X-Shooter spectrum of the z = 7.084
quasi-stellar object (QSO) ULAS J1120+0641. We detect seven intervening systems
at z > 5.5, with the highest-redshift system being a C IV absorber at z = 6.51.
We find tentative evidence that the mass density of C IV remains flat or
declines with redshift at z < 6, while the number density of C II systems
remains relatively flat over 5 < z < 7. These trends are broadly consistent
with models of chemical enrichment by star formation-driven winds that include
a softening of the ultraviolet background towards higher redshifts. We find a
larger number of weak (W_rest < 0.3 A) Mg II systems over 5.9 < z < 7.0 than
predicted by a power-law fit to the number density of stronger systems. This is
consistent with trends in the number density of weak Mg II systems at z = 2.5,
and suggests that the mechanisms that create these absorbers are already in
place at z = 7. Finally, we investigate the associated narrow Si IV, C IV, and
N V absorbers located near the QSO redshift, and find that at least one
component shows evidence of partial covering of the continuum source.
| 0 | 1 | 0 | 0 | 0 | 0 |
Energy-Performance Trade-offs in Mobile Data Transfers | By year 2020, the number of smartphone users globally will reach 3 Billion
and the mobile data traffic (cellular + WiFi) will exceed PC internet traffic
for the first time. As the number of smartphone users and the amount of data
transferred per smartphone grow exponentially, limited battery power is
becoming an increasingly critical problem for mobile devices which increasingly
depend on network I/O. Despite the growing body of research in power management
techniques for the mobile devices at the hardware layer as well as the lower
layers of the networking stack, there has been little work focusing on saving
energy at the application layer for the mobile systems during network I/O. In
this paper, to the best of our knowledge, we are first to provide an in depth
analysis of the effects of application layer data transfer protocol parameters
on the energy consumption of mobile phones. We show that significant energy
savings can be achieved with application layer solutions at the mobile systems
during data transfer with no or minimal performance penalty. In many cases,
performance increase and energy savings can be achieved simultaneously.
| 1 | 0 | 0 | 0 | 0 | 0 |
A stability result on optimal Skorokhod embedding | Motivated by the model-independent pricing of derivatives calibrated to the
real market, we consider an optimization problem similar to the optimal
Skorokhod embedding problem, where the embedded Brownian motion needs only to
reproduce a finite number of prices of Vanilla options. We derive in this paper
the corresponding dualities and the geometric characterization of optimizers.
Then we show a stability result, i.e. when more and more Vanilla options are
given, the optimization problem converges to an optimal Skorokhod embedding
problem, which constitutes the basis of the numerical computation in practice.
In addition, by means of different metrics on the space of probability
measures, a convergence rate analysis is provided under suitable conditions.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Symmetric Losses for Learning from Corrupted Labels | This paper aims to provide a better understanding of a symmetric loss. First,
we show that using a symmetric loss is advantageous in the balanced error rate
(BER) minimization and area under the receiver operating characteristic curve
(AUC) maximization from corrupted labels. Second, we prove general theoretical
properties of symmetric losses, including a classification-calibration
condition, excess risk bound, conditional risk minimizer, and AUC-consistency
condition. Third, since all nonnegative symmetric losses are non-convex, we
propose a convex barrier hinge loss that benefits significantly from the
symmetric condition, although it is not symmetric everywhere. Finally, we
conduct experiments on BER and AUC optimization from corrupted labels to
validate the relevance of the symmetric condition.
| 1 | 0 | 0 | 1 | 0 | 0 |
Phase matched nonlinear optics via patterning layered materials | The ease of integration coupled with large second-order nonlinear coefficient
of atomically thin layered 2D materials presents a unique opportunity to
realize second-order nonlinearity in silicon compatible integrated photonic
system. However, the phase matching requirement for second-order nonlinear
optical processes makes the nanophotonic design difficult. We show that by
nano-patterning the 2D material, quasi-phase matching can be achieved. Such
patterning based phase-matching could potentially compensate for inevitable
fabrication errors and significantly simplify the design process of the
nonlinear nano-photonic devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Any cyclic quadrilateral can be inscribed in any closed convex smooth curve | We prove that any cyclic quadrilateral can be inscribed in any closed convex
$C^1$-curve. The smoothness condition is not required if the quadrilateral is a
rectangle.
| 0 | 0 | 1 | 0 | 0 | 0 |
A finite Q-bad space | We prove that for a free noncyclic group $F$, $H_2(\hat F_\mathbb Q, \mathbb
Q)$ is an uncountable $\mathbb Q$-vector space. Here $\hat F_\mathbb Q$ is the
$\mathbb Q$-completion of $F$. This answers a problem of A.K. Bousfield for the
case of rational coefficients. As a direct consequence of this result it
follows that a wedge of circles is $\mathbb Q$-bad in the sense of
Bousfield-Kan. The same methods used in the proof of the above results allow us
to show that the homology $H_2(\hat F_\mathbb Z,\mathbb Z)$ is not a divisible
group, where $\hat F_\mathbb Z$ is the integral pronilpotent completion of $F$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Blocking Collisions between People, Objects and other Robots | Intentional or unintentional contacts are bound to occur increasingly more
often due to the deployment of autonomous systems in human environments. In
this paper, we devise methods to computationally predict imminent collisions
between objects, robots and people, and use an upper-body humanoid robot to
block them if they are likely to happen. We employ statistical methods for
effective collision prediction followed by sensor-based trajectory generation
and real-time control to attempt to stop the likely collisions using the most
favorable part of the blocking robot. We thoroughly investigate collisions in
various types of experimental setups involving objects, robots, and people.
Overall, the main contribution of this paper is to devise sensor-based
prediction, trajectory generation and control processes for highly articulated
robots to prevent collisions against people, and conduct numerous experiments
to validate this approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Increasing Papers' Discoverability with Precise Semantic Labeling: the sci.AI Platform | The number of published findings in biomedicine increases continually. At the
same time, the specifics of the domain's terminology complicate the task of
retrieving relevant publications. In the current research, we investigate the
influence of terms' variability and ambiguity on a paper's likelihood of being
retrieved. We obtained statistics that demonstrate the significance of the issue
and its challenges, and then present the sci.AI platform, which allows
precise term labeling as a resolution.
| 1 | 0 | 0 | 0 | 0 | 0 |
On topological obstructions to global stabilization of an inverted pendulum | We consider a classical problem of control of an inverted pendulum by means
of a horizontal motion of its pivot point. We suppose that the control law can
be non-autonomous and non-periodic w.r.t. the position of the pendulum. It is
shown that global stabilization of the vertical upward position of the pendulum
cannot be obtained for any Lipschitz control law, provided some natural
assumptions. Moreover, we show that there always exists a solution separated
from the vertical position and along which the pendulum never becomes
horizontal. Hence, we also prove that global stabilization cannot be obtained
in the system where the pendulum can impact the horizontal plane (for any
mechanical model of impact). Similar results are presented for several
analogous systems: a pendulum on a cart, a spherical pendulum, and a pendulum
with an additional torque control.
| 0 | 1 | 1 | 0 | 0 | 0 |
Automated Refactoring: Can They Pass The Turing Test? | Refactoring is a maintenance activity that aims to improve design quality
while preserving the behavior of a system. Several (semi)automated approaches
have been proposed to support developers in this maintenance activity, based on
the correction of anti-patterns, which are "poor solutions" to recurring design
problems. However, little quantitative evidence exists about the impact of
automatically refactored code on program comprehension, and in which context
automated refactoring can be as effective as manual refactoring. We performed
an empirical study to investigate whether the use of automated refactoring
approaches affects the understandability of systems during comprehension tasks.
(1) We surveyed 80 developers, asking them to identify from a set of 20
refactoring changes whether they were generated by developers or by a machine, and to
rate the refactorings according to their design quality; (2) we asked 30
developers to complete code comprehension tasks on 10 systems that were
refactored by either a freelancer or an automated refactoring tool. We measured
developers' performance using the NASA task load index for their effort; the
time that they spent performing the tasks; and their percentages of correct
answers. Results show that for 3 out of the 5 types of studied anti-patterns,
developers cannot recognize the origin of the refactoring (i.e., whether it was
performed by a human or an automatic tool). We also observe that developers do
not prefer human refactorings over automated refactorings, except when
refactoring Blob classes; and that there is no statistically significant
difference between the impact on code understandability of human refactorings
and automated refactorings. We conclude that automated refactorings can be as
effective as manual refactorings. However, for complex anti-patterns types like
the Blob, the perceived quality of human refactorings is slightly higher.
| 1 | 0 | 0 | 0 | 0 | 0 |
Free fermions on a piecewise linear four-manifold. II: Pachner moves | This is the second in a series of papers where we construct an invariant of a
four-dimensional piecewise linear manifold $M$ with a given middle cohomology
class $h\in H^2(M,\mathbb C)$. This invariant is the square root of the torsion
of the unusual chain complex introduced in Part I (arXiv:1605.06498) of our work,
multiplied by a correcting factor. Here we find this factor by studying the
behavior of our construction under all four-dimensional Pachner moves, and show
that it can be represented in a multiplicative form: a product of same-type
multipliers over all 2-faces, multiplied by a product of same-type multipliers
over all pentachora.
| 0 | 0 | 1 | 0 | 0 | 0 |
Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets | Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods such
as Metropolis-Hastings is too computationally intensive to handle large
datasets, since the cost per step usually scales like $O(n)$ in the number of
data points $n$. We propose the Scalable Metropolis-Hastings (SMH) kernel that
exploits Gaussian concentration of the posterior to require processing on
average only $O(1)$ or even $O(1/\sqrt{n})$ data points per step. This scheme
is based on a combination of factorized acceptance probabilities, procedures
for fast simulation of Bernoulli processes, and control variate ideas. Contrary
to many MCMC subsampling schemes such as fixed step-size Stochastic Gradient
Langevin Dynamics, our approach is exact insofar as the invariant distribution
is the true posterior and not an approximation to it. We characterise the
performance of our algorithm theoretically, and give realistic and verifiable
conditions under which it is geometrically ergodic. This theory is borne out by
empirical results that demonstrate overall performance benefits over standard
Metropolis-Hastings and various subsampling algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Regularization of the Kernel Matrix via Covariance Matrix Shrinkage Estimation | The kernel trick concept, formulated as an inner product in a feature space,
facilitates powerful extensions to many well-known algorithms. While the kernel
matrix involves inner products in the feature space, the sample covariance
matrix of the data requires outer products. Therefore, their spectral
properties are tightly connected. This allows us to examine the kernel matrix
through the sample covariance matrix in the feature space and vice versa. The
use of kernels often involves a large number of features, compared to the
number of observations. In this scenario, the sample covariance matrix is not
well-conditioned nor is it necessarily invertible, mandating a solution to the
problem of estimating high-dimensional covariance matrices under small sample
size conditions. We tackle this problem through the use of a shrinkage
estimator that offers a compromise between the sample covariance matrix and a
well-conditioned matrix (also known as the "target") with the aim of minimizing
the mean-squared error (MSE). We propose a distribution-free kernel matrix
regularization approach that is tuned directly from the kernel matrix, avoiding
the need to address the feature space explicitly. Numerical simulations
demonstrate that the proposed regularization is effective in classification
tasks.
| 0 | 0 | 0 | 1 | 0 | 0 |
Fixing an error in Caponnetto and de Vito (2007) | The seminal paper of Caponnetto and de Vito (2007) provides minimax-optimal
rates for kernel ridge regression in a very general setting. Its proof,
however, contains an error in its bound on the effective dimensionality. In
this note, we explain the mistake, provide a correct bound, and show that the
main theorem remains true.
| 0 | 0 | 1 | 1 | 0 | 0 |
Supervised Typing of Big Graphs using Semantic Embeddings | We propose a supervised algorithm for generating type embeddings in the same
semantic vector space as a given set of entity embeddings. The algorithm is
agnostic to the derivation of the underlying entity embeddings. It does not
require any manual feature engineering, generalizes well to hundreds of types
and achieves near-linear scaling on Big Graphs containing many millions of
triples and instances by virtue of an incremental execution. We demonstrate the
utility of the embeddings on a type recommendation task, outperforming a
non-parametric feature-agnostic baseline while achieving 15x speedup and
near-constant memory usage on a full partition of DBpedia. Using
state-of-the-art visualization, we illustrate the agreement of our
extensionally derived DBpedia type embeddings with the manually curated domain
ontology. Finally, we use the embeddings to probabilistically cluster about 4
million DBpedia instances into 415 types in the DBpedia ontology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Flexible Attributed Network Embedding | Network embedding aims to find a way to encode a network by learning an
embedding vector for each node in the network. The network often has property
information which is highly informative with respect to the node's position and
role in the network. Most network embedding methods fail to utilize this
information during network representation learning. In this paper, we propose a
novel framework, FANE, to integrate structure and property information in the
network embedding process. In FANE, we design a network to unify heterogeneity
of the two information sources, and define a new random walk strategy to
leverage property information and make the two information sources complement
each other. FANE is conceptually simple and empirically powerful. It improves
over the state-of-the-art methods on the Cora classification task by over 5%,
and on the WebKB classification task by more than 10%. Experiments also show
that the improvement over the state-of-the-art methods grows as the training
size increases. Moreover, qualitative visualization shows that our framework is helpful in
network property information exploration. In all, we present a new way for
efficiently learning state-of-the-art task-independent representations in
complex attributed networks. The source code and datasets of this paper can be
obtained from this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Attention Please: Consider Mockito when Evaluating Newly Proposed Automated Program Repair Techniques | Automated program repair (APR) has attracted widespread attention in recent
years, with a substantial number of techniques being proposed. Meanwhile, a number
of benchmarks have been established for evaluating the performance of APR
techniques, among which Defects4J is one of the most widely used benchmarks.
However, bugs in Mockito, a project added in a later version of Defects4J,
have not received much attention in recent research. In this paper, we aim at
investigating the necessity of considering Mockito bugs when evaluating APR
techniques. Our findings show that: 1) Mockito bugs are not more complex for
repairing compared with bugs from other projects; 2) the bugs repaired by the
state-of-the-art tools share the same repair patterns compared with those
patterns required to repair Mockito bugs; however, 3) the state-of-the-art
tools perform poorly on Mockito bugs (Nopol can only correctly fix one bug
while SimFix and CapGen cannot fix any bug in Mockito even if all the buggy
locations have been exposed). We conclude from these results that existing APR
techniques may be overfitting to their evaluated subjects and we should
consider Mockito, or even more bugs from other projects, when evaluating newly
proposed APR techniques. We further identify a unique repair action required to
repair Mockito bugs, named external package addition. Importing the external
packages from the test code associated with the source code is feasible for
enlarging the search space and this action can be augmented with existing
repair actions to advance existing techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
Communication Modalities for Supervised Teleoperation in Highly Dexterous Tasks - Does one size fit all? | This study tries to explain the connection between communication modalities
and levels of supervision in teleoperation during a dexterous task, like
surgery. This concept is applied to two surgical related tasks: incision and
peg transfer. It was found that as the complexity of the task escalates, the
combination linking human supervision with a more expressive modality shows
better performance than other combinations of modalities and control. More
specifically, in the peg transfer task, the combination of speech modality and
action level supervision achieves shorter task completion time (77.1 +- 3.4 s)
with fewer mistakes (0.20 +- 0.17 pegs dropped).
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning in anonymous nonatomic games with applications to first-order mean field games | We introduce a model of anonymous games with player-dependent action
sets. We propose several learning procedures based on the well-known Fictitious
Play and Online Mirror Descent and prove their convergence to equilibrium under
the classical monotonicity condition. Typical examples are first-order mean
field games.
| 0 | 0 | 1 | 0 | 0 | 0 |
Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums | In this work, we study the problem of minimizing the sum of strongly convex
functions split over a network of $n$ nodes. We propose the decentralized and
asynchronous algorithm ADFS to tackle the case when local functions are
themselves finite sums with $m$ components. ADFS converges linearly when local
functions are smooth, and matches the rates of the best known finite sum
algorithms when executed on a single machine. On several machines, ADFS enjoys
an $O(\sqrt{n})$ or $O(n)$ speed-up depending on the leading complexity term as
long as the diameter of the network is not too big with respect to $m$. This
also leads to a $\sqrt{m}$ speed-up over state-of-the-art distributed batch
methods, which is the expected speed-up for finite sum algorithms. In terms of
communication times and network parameters, ADFS scales as well as optimal
distributed batch algorithms. As a side contribution, we give a generalized
version of the accelerated proximal coordinate gradient algorithm using
arbitrary sampling that we apply to a well-chosen dual problem to derive ADFS.
Yet, ADFS uses primal proximal updates that only require solving
one-dimensional problems for many standard machine learning applications.
Finally, ADFS can be formulated for non-smooth objectives with equally good
scaling properties. We illustrate the improvement of ADFS over state-of-the-art
approaches with simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Intermediate curvatures and highly connected manifolds | We show that after forming a connected sum with a homotopy sphere, all
(2j-1)-connected 2j-parallelisable manifolds in dimension 4j+1, j > 0, can be
equipped with Riemannian metrics of 2-positive Ricci curvature. When j=1 we
extend the above to certain classes of simply-connected non-spin 5-manifolds.
The condition of 2-positive Ricci curvature is defined to mean that the sum of
the two smallest eigenvalues of the Ricci tensor is positive at every point.
This result is a counterpart to a previous result of the authors concerning the
existence of positive Ricci curvature on highly connected manifolds in
dimensions 4j-1 for j > 1, and in dimensions 4j+1 for j > 0 with torsion-free
cohomology.
| 0 | 0 | 1 | 0 | 0 | 0 |
Structured Differential Learning for Automatic Threshold Setting | We introduce a technique that can automatically tune the parameters of a
rule-based computer vision system comprised of thresholds, combinational logic,
and time constants. This lets us retain the flexibility and perspicacity of a
conventionally structured system while allowing us to perform approximate
gradient descent using labeled data. While this is only a heuristic procedure,
as far as we are aware there is no other efficient technique for tuning such
systems. We describe the components of the system and the associated supervised
learning mechanism. We also demonstrate the utility of the algorithm by
comparing its performance versus hand tuning for an automotive headlight
controller. Despite having over 100 parameters, the method is able to
profitably adjust the system values given just the desired output for a number
of videos.
| 0 | 0 | 0 | 1 | 0 | 0 |
A new fractional derivative of variable order with non-singular kernel and fractional differential equations | In this paper, we introduce two new non-singular kernel fractional
derivatives and present a class of other fractional derivatives derived from
the new formulations. We present some important results on uniformly convergent
sequences of continuous functions, in particular the comparison principle,
and others that allow the study of the limitation of fractional nonlinear
differential equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Safe Semi-Supervised Learning of Sum-Product Networks | In several domains obtaining class annotations is expensive while at the same
time unlabelled data are abundant. While most semi-supervised approaches
enforce restrictive assumptions on the data distribution, recent work has
managed to learn semi-supervised models in a non-restrictive regime. However,
so far such approaches have only been proposed for linear models. In this work,
we introduce semi-supervised parameter learning for Sum-Product Networks
(SPNs). SPNs are deep probabilistic models admitting inference in linear time
in the number of network edges. Our approach has several advantages, as it (1)
allows generative and discriminative semi-supervised learning, (2) guarantees
that adding unlabelled data can increase, but not degrade, the performance
(safe), and (3) is computationally efficient and does not enforce restrictive
assumptions on the data distribution. We show on a variety of data sets that
safe semi-supervised learning with SPNs is competitive compared to
state-of-the-art and can lead to a better generative and discriminative
objective value than a purely supervised approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Structured Approach to the Analysis of Remote Sensing Images | The number of studies for the analysis of remote sensing images has been
growing exponentially in the last decades. Many studies, however, only report
results---in the form of certain performance metrics---by a few selected
algorithms on a training and testing sample. While this often yields valuable
insights, it tells little about some important aspects. For example, one might
be interested in understanding the nature of a study by the interaction of
algorithm, features, and the sample as these collectively contribute to the
outcome; among these three, which would be a more productive direction in
improving a study; how to assess the sample quality or the value of a set of
features, etc. With a focus on land-use classification, we advocate the use of a
structured analysis. The output of a study is viewed as the result of the
interplay among three input dimensions: feature, sample, and algorithm.
Similarly, another dimension, the error, can be decomposed into error along
each input dimension. Such a structural decomposition of the inputs or error
could help better understand the nature of the problem and potentially suggest
directions for improvement. We use the analysis of a remote sensing image at a
study site in Guangzhou, China, to demonstrate how such a structured analysis
could be carried out and what insights it generates. The structured analysis
could be applied to a new study, or as a diagnosis to an existing one. We
expect this will inform practice in the analysis of remote sensing images, and
help advance the state-of-the-art of land-use classification.
| 0 | 0 | 0 | 1 | 0 | 0 |
Approximations of the Restless Bandit Problem | The multi-armed restless bandit problem is studied in the case where the
pay-off distributions are stationary $\varphi$-mixing. This version of the
problem provides a more realistic model for most real-world applications, but
cannot be optimally solved in practice, since it is known to be PSPACE-hard.
The objective of this paper is to characterize a sub-class of the problem where
{\em good} approximate solutions can be found using tractable approaches.
Specifically, it is shown that under some conditions on the $\varphi$-mixing
coefficients, a modified version of UCB can prove effective. The main challenge
is that, unlike in the i.i.d. setting, the distributions of the sampled
pay-offs may not have the same characteristics as those of the original bandit
arms. In particular, the $\varphi$-mixing property does not necessarily carry
over. This is overcome by carefully controlling the effect of a sampling policy
on the pay-off distributions. Some of the proof techniques developed in this
paper can be more generally used in the context of online sampling under
dependence. The proposed algorithms are accompanied by a corresponding regret
analysis.
| 0 | 0 | 1 | 1 | 0 | 0 |
Iteratively reweighted $\ell_1$ algorithms with extrapolation | Iteratively reweighted $\ell_1$ algorithm is a popular algorithm for solving
a large class of optimization problems whose objective is the sum of a
Lipschitz differentiable loss function and a possibly nonconvex sparsity
inducing regularizer. In this paper, motivated by the success of extrapolation
techniques in accelerating first-order methods, we study how widely used
extrapolation techniques such as those in [4,5,22,28] can be incorporated to
possibly accelerate the iteratively reweighted $\ell_1$ algorithm. We consider
three versions of such algorithms. For each version, we exhibit an explicitly
checkable condition on the extrapolation parameters so that the sequence
generated provably clusters at a stationary point of the optimization problem.
We also investigate global convergence under additional Kurdyka-Łojasiewicz
assumptions on certain potential functions. Our numerical experiments show that
our algorithms usually outperform the general iterative shrinkage and
thresholding algorithm in [21] and an adaptation of the iteratively reweighted
$\ell_1$ algorithm in [23, Algorithm 7] with nonmonotone line-search for
solving random instances of log penalty regularized least squares problems in
terms of both CPU time and solution quality.
| 0 | 0 | 0 | 1 | 0 | 0 |
Automatic generation of analysis class diagrams from use case specifications | In object-oriented software development, the analysis modeling is concerned
with the task of identifying problem-level objects along with the relationships
between them from software requirements. The software requirements are usually
written in some natural language, and the analysis modeling is normally
performed by experienced human analysts. The huge gap between the software
requirements, which are unstructured texts, and analysis models, which are usually
structured UML diagrams, along with human slip-ups, inevitably makes the
transformation process error-prone. The automation of this process can help in
reducing the errors in the transformation. In this paper we propose a tool
supported approach for automated transformation of use case specifications
documented in English language into analysis class diagrams. The approach works
in four steps. It first takes the textual specification of a use case as input,
and then using a natural language parser generates type dependencies and
part-of-speech tags for each sentence in the specification. Then, it identifies the
sentence structure of each sentence using a set of comprehensive sentence
structure rules. Next, it applies a set of transformation rules on the type
dependencies and part-of-speech tags of the sentences to discover the
problem-level objects and the relationships between them. Finally, it generates and
visualizes the analysis class diagram. We conducted a controlled experiment to
compare the correctness, completeness and redundancy of the analysis class
diagrams generated by our approach with those generated by the existing
automated approaches. The results showed that the analysis class diagrams
generated by our approach were more correct, more complete, and less redundant
than those generated by the other approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Learning in Pharmacogenomics: From Gene Regulation to Patient Stratification | This Perspective provides examples of current and future applications of deep
learning in pharmacogenomics, including: (1) identification of novel regulatory
variants located in noncoding domains and their function as applied to
pharmacoepigenomics; (2) patient stratification from medical records; and (3)
prediction of drugs, targets, and their interactions. Deep learning
encapsulates a family of machine learning algorithms that over the last decade
has transformed many important subfields of artificial intelligence (AI) and
has demonstrated breakthrough performance improvements on a wide range of tasks
in biomedicine. We anticipate that in the future deep learning will be widely
used to predict personalized drug response and optimize medication selection
and dosing, using knowledge extracted from large and complex molecular,
epidemiological, clinical, and demographic datasets.
| 0 | 0 | 0 | 1 | 1 | 0 |
Virtual Crystals and Nakajima Monomials | An explicit description of the virtualization map for the (modified) Nakajima
monomial model for crystals is given. We give an explicit description of the
Lusztig data for modified Nakajima monomials in type $A_n$.
| 0 | 0 | 1 | 0 | 0 | 0 |