Erbium-doped lithium niobate on insulator (LNOI) lasers play an important
role in complete photonic integrated circuits (PICs). Here, we demonstrate
an integrated tunable whispering gallery single-mode laser (WGSML) by making
use of a pair of coupled microdisk and microring on LNOI. A 974 nm single-mode
pump light has an excellent resonance in the designed microdisk, which is
beneficial to whispering gallery mode (WGM) laser generation. A WGSML at
1560.40 nm with a maximum 31.4 dB side-mode suppression ratio (SMSR) has been
achieved. By regulating the temperature, the WGSML output power increased and
the central wavelength can be tuned from 1560.30 nm to 1560.40 nm. Moreover,
1560.60 nm and 1565.00 nm WGSMLs have been achieved by changing the coupling
gap width between the microdisk and the microring. The electro-optic effect of
LNOI can also be used to obtain more accurately adjustable WGSMLs in future
research.
|
Column Generation (CG) is an iterative algorithm for solving linear programs
(LPs) with an extremely large number of variables (columns). CG is the
workhorse for tackling large-scale \textit{integer} linear programs, which rely
on CG to solve LP relaxations within a branch-and-price algorithm. Two
canonical applications are the Cutting Stock Problem (CSP) and Vehicle Routing
Problem with Time Windows (VRPTW). In VRPTW, for example, each binary variable
represents the decision to include or exclude a \textit{route}, of which there
are exponentially many; CG incrementally grows the subset of columns being
used, ultimately converging to an optimal solution. We propose RLCG, the first
Reinforcement Learning (RL) approach for CG. Unlike typical column selection
rules which myopically select a column based on local information at each
iteration, we treat CG as a sequential decision-making problem: the column
selected in a given iteration affects subsequent column selections. This
perspective lends itself to a Deep Reinforcement Learning approach that uses
Graph Neural Networks (GNNs) to represent the variable-constraint structure in
the LP of interest. We perform an extensive set of experiments using the
publicly available BPPLIB benchmark for CSP and Solomon benchmark for VRPTW.
RLCG converges faster and reduces the number of CG iterations by 22.4\% for CSP
and 40.9\% for VRPTW on average compared to a commonly used greedy policy. Our
code is available at
https://github.com/chichengmessi/reinforcement-learning-for-column-generation.git.
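To make the sequential-decision view concrete, the sketch below shows a generic
CG loop with a pluggable column-selection policy. The object interfaces
(`master`, `pricing`) are hypothetical and not taken from the RLCG code base;
an RL agent would replace the greedy `select` shown here.

```python
# Illustrative column generation skeleton; `master` and `pricing`
# are hypothetical interfaces, not the paper's implementation.
def column_generation(master, pricing, select, max_iters=1000):
    for it in range(max_iters):
        duals = master.solve()              # solve the restricted master LP
        candidates = pricing(duals)         # columns with negative reduced cost
        if not candidates:                  # no improving column: LP is optimal
            return master.objective_value(), it
        master.add_column(select(master, candidates))
    return master.objective_value(), max_iters

# Myopic baseline: pick the most negative reduced cost at each iteration.
greedy = lambda master, cands: min(cands, key=lambda c: c.reduced_cost)
```

RLCG replaces `greedy` with a GNN-based policy over the variable-constraint
graph, trained so that the choice at one iteration accounts for its effect on
later iterations.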
|
All datasets contain some biases, often unintentional, due to how they were
acquired and annotated. These biases distort machine-learning models'
performance, creating spurious correlations that the models can unfairly
exploit or, contrarily, destroying clear correlations that the models could
learn. With the popularity of deep learning models, automated skin lesion
analysis is starting to play an essential role in the early detection of
Melanoma. The ISIC Archive is one of the most used skin lesion sources to
benchmark deep learning-based tools. Bissoto et al. experimented with different
bounding-box based masks and showed that deep learning models could classify
skin lesion images without clinically meaningful information in the input data.
Their findings seem confounding, since the ablated regions (random rectangular
boxes) are not clinically significant. The shape of the lesion is a crucial factor in the
clinical characterization of a skin lesion. In that context, we performed a set
of experiments that generate shape-preserving masks instead of rectangular
bounding-box based masks. A deep learning model trained on these
shape-preserving masked images does not outperform models trained on images
without clinically meaningful information. This strongly suggests that
spurious correlations are guiding the models. We propose the use of a
generative adversarial network (GAN) to mitigate the underlying bias.
|
We obtain a critical imbedding and then, concentration-compactness principles
for fractional Sobolev spaces with variable exponents. As an application of
these results, we obtain the existence of many solutions for a class of
critical nonlocal problems with variable exponents; these results are new even
in the constant exponent case.
|
The NaI and BGO detectors on the Gamma-ray Burst Monitor (GBM) on Fermi are
now being used for long term monitoring of the hard X-ray/low energy gamma ray
sky. Using the Earth occultation technique demonstrated previously by the BATSE
instrument on the Compton Gamma Ray Observatory, GBM produces multiband light
curves and spectra for known sources and transient outbursts in the 8 keV - 1
MeV band with its NaI detectors and up to 40 MeV with its BGO. Coverage of the
entire sky is obtained every two orbits, with sensitivity exceeding that of
BATSE at energies below ~25 keV and above ~1.5 MeV. We describe the technique
and present preliminary results after the first ~17 months of observations at
energies above 100 keV. Seven sources are detected: the Crab, Cyg X-1, Swift
J1753.5-0127, 1E 1740-29, Cen A, GRS 1915+105, and the transient source XTE
J1752-223.
|
The three-parameter Indian buffet process is generalized. The possibly
different role played by customers is taken into account by suitable (random)
weights. Various limit theorems are also proved for such generalized Indian
buffet processes. Let $L_n$ be the number of dishes tried by the first $n$
customers, and let $\overline{K}_n=(1/n)\sum_{i=1}^nK_i$ where $K_i$ is the
number of dishes tried by customer $i$. The asymptotic distributions of $L_n$
and $\overline{K}_n$, suitably centered and scaled, are obtained. The
convergence turns out to be stable (and not only in distribution). As a
particular case, the results apply to the standard (i.e., nongeneralized)
Indian buffet process.
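As a quick illustration of the quantities $L_n$ and $\overline{K}_n$, the
sketch below simulates the standard one-parameter Indian buffet process (not
the generalized, weighted version studied here):

```python
import numpy as np

def simulate_ibp(n, alpha=2.0, rng=np.random.default_rng(0)):
    """Standard IBP: customer i tries existing dish j with probability
    counts[j]/i and Poisson(alpha/i) new dishes; returns (L_n, Kbar_n)."""
    counts, K = [], np.zeros(n)
    for i in range(1, n + 1):
        k_i = 0
        for j in range(len(counts)):
            if rng.random() < counts[j] / i:
                counts[j] += 1
                k_i += 1
        new = rng.poisson(alpha / i)
        counts.extend([1] * new)
        K[i - 1] = k_i + new
    return len(counts), K.mean()

L_n, Kbar_n = simulate_ibp(10_000)   # L_n ~ alpha*log(n); Kbar_n ~ alpha
```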
|
The spinless Falicov-Kimball model is solved exactly in the limit of
infinite dimensions on both the hypercubic and Bethe lattices. The competition
between segregation, which is present for large U, and charge-density-wave
order, which is prevalent at moderate U, is examined in detail. We find a rich
phase diagram which displays both of these phases. The model also shows
nonanalytic behavior in the charge-density-wave transition temperature when U
is large enough to generate a correlation-induced gap in the single-particle
density of states.
|
Restricting a linear system for the KP hierarchy to those independent
variables $t_n$ with odd $n$, its compatibility (Zakharov-Shabat conditions)
leads to the "odd KP hierarchy". The latter consists of pairs of equations for two
dependent variables, taking values in a (typically noncommutative) associative
algebra. If the algebra is commutative, the odd KP hierarchy is known to admit
reductions to the BKP and the CKP hierarchy. We approach the odd KP hierarchy
and its relation to BKP and CKP in different ways, and address the question
whether noncommutative versions of the BKP and the CKP equation (and some of
their reductions) exist. In particular, we derive a functional representation
of a linear system for the odd KP hierarchy, which in the commutative case
produces functional representations of the BKP and CKP hierarchies in terms of
a tau function. Furthermore, we consider a functional representation of the KP
hierarchy that involves a second (auxiliary) dependent variable and features
the odd KP hierarchy directly as a subhierarchy. A method to generate large
classes of exact solutions to the KP hierarchy from solutions to a linear
matrix ODE system, via a hierarchy of matrix Riccati equations, then also
applies to the odd KP hierarchy, and this in turn can be exploited, in
particular, to obtain solutions to the BKP and CKP hierarchies.
|
Three-dimensional tracking of animal systems is the key to the comprehension
of collective behavior. Experimental data collected via a stereo camera system
allow the reconstruction of the 3d trajectories of each individual in the
group. Trajectories can then be used to compute some quantities of interest to
better understand collective motion, such as velocities, distances between
individuals and correlation functions. The reliability of the retrieved
trajectories is strictly related to the accuracy of the 3d reconstruction. In
this paper, we perform a careful analysis of the most significant errors
affecting 3d reconstruction, showing how the accuracy depends on the camera
system set-up and on the precision of the calibration parameters.
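For concreteness, a standard linear (DLT) triangulation step is sketched
below; perturbing the camera matrices `P1`, `P2` (the calibration parameters)
shows directly how calibration errors propagate into the reconstructed 3d
point. This is a generic textbook method, not necessarily the paper's exact
pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one 3d point from pixel observations x1, x2
    (each (u, v)) and 3x4 camera projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenize
```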
|
Whether or not the problem of finding maximal independent sets (MIS) in
hypergraphs is in (R)NC is one of the fundamental problems in the theory of
parallel computing. Unlike the well-understood case of MIS in graphs, for the
hypergraph problem, our knowledge is quite limited despite considerable work.
It is known that the problem is in \emph{RNC} when the edges of the hypergraph
have constant size. For general hypergraphs with $n$ vertices and $m$ edges,
the fastest previously known algorithm works in time $O(\sqrt{n})$ with
$\text{poly}(m,n)$ processors. In this paper we give an EREW PRAM algorithm
that works in time $n^{o(1)}$ with $\text{poly}(m,n)$ processors on general
hypergraphs satisfying $m \leq n^{\frac{\log^{(2)}n}{8(\log^{(3)}n)^2}}$, where
$\log^{(2)}n = \log\log n$ and $\log^{(3)}n = \log\log\log n$. Our algorithm is
based on a sampling idea that reduces the dimension of the hypergraph and
employs the algorithm for constant dimension hypergraphs as a subroutine.
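For background, a set $S$ is independent in a hypergraph if no edge is fully
contained in $S$; the simple sequential greedy baseline below (not the paper's
parallel algorithm) makes the object being computed concrete:

```python
def greedy_mis(n, edges):
    """Sequential greedy maximal independent set in a hypergraph.
    edges: list of sets of vertices from range(n)."""
    S = set()
    for v in range(n):
        S.add(v)
        if any(e <= S for e in edges):   # adding v completed some edge
            S.remove(v)
    return S

print(greedy_mis(5, [{0, 1, 2}, {2, 3}, {1, 3, 4}]))   # {0, 1, 3}
```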
|
We study the effect of a finite proximity superconducting (SC) coherence
length in SN and SNS junctions consisting of a semiconducting topological
insulating wire whose ends are connected to either one or two s-wave
superconductors. We find that such systems behave exactly as SN and SNS
junctions made from a single wire for which some regions are sitting on top of
superconductors, the size of the topological SC region being determined by the
SC coherence length. We also analyze the effect of a non-perfect transmission
at the NS interface on the spatial extension of the Majorana fermions.
Moreover, we study the effects of continuous phase gradients in both an open
and closed (ring) SNS junction. We find that such phase gradients play an
important role in the spatial localization of the Majorana fermions.
|
We consider an inflationary universe model in which the phase of accelerated
expansion was preceded by a non-singular bounce and a period of contraction
which involves a phase of deceleration. We follow fluctuations which exit the
Hubble radius in the radiation-dominated contracting phase as quantum vacuum
fluctuations, re-enter the Hubble radius in the deflationary period and
re-cross during the phase of inflationary expansion. Evolving the fluctuations
using the general relativistic linear perturbation equations, we find that they
exit the Hubble radius during inflation not with a scale-invariant spectrum,
but with a highly red spectrum with index $n_s = -3$. We also show that the
back-reaction of fluctuations limits the time interval of deflation. Our toy
model demonstrates the importance for inflationary cosmology both of the
trans-Planckian problem for cosmological perturbations and of back-reaction
effects. Firstly, without understanding both Planck-scale physics and the
phase which preceded inflation, it is a non-trivial assumption to take the
perturbations to be in their local vacuum state when they exit the Hubble
radius at late times. Secondly, the back-reaction effects of fluctuations can
influence the background in an important way.
|
We present a direct proof of asymptotic consensus in the nonlinear
Hegselmann-Krause model with transmission-type delay, where the communication
weights depend on the particle distance in phase space. Our approach is based
on an explicit estimate of the shrinkage of the group diameter on finite time
intervals and avoids the usage of Lyapunov-type functionals or results from
nonnegative matrix theory. It works both for the original formulation of the
model with communication weights scaled by the number of agents, and for the
modification with weights normalized à la Motsch-Tadmor. We pose only minimal
assumptions on the model parameters. In particular, we only assume global
positivity of the influence function, without imposing any conditions on its
decay rate or monotonicity. Moreover, our result holds for any length of the
delay.
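A minimal explicit-Euler simulation of the delayed model (our own
discretization, with a globally positive influence function and constant
prehistory) illustrates the shrinkage of the group diameter:

```python
import numpy as np

def hk_delay(x0, psi, tau_steps=20, dt=0.01, n_steps=5000):
    """Hegselmann-Krause dynamics with transmission-type delay:
    agent i reacts to the delayed states x_j(t - tau) of the others;
    weights are scaled by the number of agents N."""
    N = len(x0)
    hist = [np.array(x0, float)] * (tau_steps + 1)   # constant prehistory
    for _ in range(n_steps):
        x, xd = hist[-1], hist[0]                    # current / delayed states
        dx = np.zeros(N)
        for i in range(N):
            for j in range(N):
                if j != i:
                    dx[i] += psi(abs(xd[j] - x[i])) * (xd[j] - x[i]) / N
        hist.append(x + dt * dx)
        hist.pop(0)
    return hist[-1]

x = hk_delay(np.random.default_rng(0).uniform(0, 1, 10),
             psi=lambda r: 1.0 / (1.0 + r * r))   # positive, non-monotone ok
print(x.max() - x.min())                          # group diameter near zero
```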
|
We compare cluster scaling relations published for three different samples
selected via X-ray and Sunyaev-Zel'dovich (SZ) signatures. We find tensions
driven mainly by two factors: i) systematic differences in the X-ray cluster
observables used to derive the scaling relations, and ii) uncertainty in the
modeling of how the gas mass of galaxy clusters scales with total mass. All
scaling relations are in agreement after accounting for these two effects. We
describe a multivariate scaling model that enables a fully self-consistent
treatment of multiple observational catalogs in the presence of property
covariance, and apply this formalism when interpreting published results. The
corrections due to scatter and observable covariance can be significant. For
instance, our predicted Ysz-Lx scaling relation differs from that derived using
the naive "plug-in" method by \approx 25%. Finally, we test the mass
normalization for each of the X-ray data sets we consider by applying a space
density consistency test: we compare the observed REFLEX luminosity function to
expectations from published Lx-M relations convolved with the mass function for
a WMAP7 flat \Lambda CDM model.
|
We extend the classical heterotic instanton solutions to all orders in
$\alpha'$ using the equations of anomaly-free supergravity, and discuss the
relation between these equations and the string theory $\beta$-functions.
|
MESS (Mass-loss of Evolved StarS) is a Guaranteed Time Key Program that uses
the PACS and SPIRE instruments on board the Herschel Space Observatory to
observe a representative sample of evolved stars, including asymptotic giant
branch (AGB) and post-AGB stars, planetary nebulae and red supergiants, as well
as luminous blue variables, Wolf-Rayet stars and supernova remnants. In total,
of order 150 objects are observed in imaging and about 50 objects in
spectroscopy.
This paper describes the target selection and target list, and the observing
strategy. Key science projects are described, and illustrated using results
obtained during Herschel's science demonstration phase.
Aperture photometry is given for the 70 AGB and post-AGB stars observed up to
October 17, 2010, which constitutes the largest single uniform database of
far-IR and sub-mm fluxes for late-type stars.
|
Let O_d be the Cuntz algebra on generators S_1,...,S_d, 2 \leq d < \infty,
and let D_d \subset O_d be the abelian subalgebra generated by monomials
S_\alpha S_\alpha^*
=S_{\alpha_{1}}...S_{\alpha_{k}}S_{\alpha_{k}}^*...S_{\alpha_{1}}^* where
\alpha=(\alpha_1...\alpha_k) ranges over all multi-indices formed from
{1,...,d}. In any representation of O_d, D_d may be simultaneously
diagonalized. Using S_i(S_\alpha S_\alpha^*) =(S_{i\alpha}S_{i\alpha}^*)S_i, we
show that the operators S_i from a general representation of O_d may be
expressed directly in terms of the spectral representation of D_d. We use this
in describing a class of type III representations of O_d and corresponding
endomorphisms, and the heart of the paper is a description of an associated
family of AF-algebras arising as the fixed-point algebras of the associated
modular automorphism groups. Chapters 5--18 are devoted to finding effective
methods to decide isomorphism and non-isomorphism in this class of AF-algebras.
|
Kashaev and Reshetikhin proposed a generalization of the Reshetikhin-Turaev
link invariant construction to tangles with a flat connection in a principal
G-bundle over the complement of the tangle. The purpose of this paper is to
adapt and renormalize their construction to define invariants of G-links using
the semi-cyclic representations of the non-restricted quantum group associated
to sl2, defined by De Concini and Kac. Our construction uses a modified Markov
trace. In our main example, the semi-cyclic invariants are a natural extension
of the generalized Alexander polynomial invariants defined by Akutsu, Deguchi,
and Ohtsuki. Surprisingly, direct computations suggest that these invariants
are actually equal.
|
The inclusive decay rate into pions of the charmed $D_s$ meson is
surprisingly large compared with estimates based on $W$ annihilation, adopting
commonly used values of current-algebra up and down quark masses. We then go
beyond this tree diagram and consider possible QCD effects that might cause
such a large rate. There are two: the first is related to the spectator
decay $ c{\bar s} \rightarrow s{\bar s} + u {\bar d}$ followed by $ s {\bar s}
\rightarrow d{\bar d}~,~ u{\bar u}$ via two-gluon exchange box diagram. The
second one is a gluon emission in weak annihilation for which the usual
helicity suppression is vitiated: $D_s\rightarrow W+g$ followed by $
W\rightarrow u {\bar d},~ g\rightarrow d{\bar d},~u{\bar u}$. These two
contributions, however, turn out to be insufficient to explain data, implying
that the puzzle could be understood if the up, down quarks have higher mass
values. Furthermore, on the basis of experimental information on the spectral
function $\rho_{3\pi}(Q^2)$ deduced from the exclusive $ D_s \rightarrow 3\pi$
mode, the QCD sum rules also point to a higher mass for light quarks.
|
In this article, we verify the low Mach number limit of strong solutions to
the non-isentropic compressible magnetohydrodynamic equations with zero
magnetic diffusivity and ill-prepared initial data in three-dimensional bounded
domains, when the density and the temperature vary around constant states.
Invoking a new weighted energy functional, we establish the uniform estimates
with respect to the Mach number, especially for the spatial derivatives of high
order. Due to the vorticity-slip boundary condition of the velocity, we
decompose the uniform estimates into the part for the fast variables and the
other one for the slow variables. In particular, the weighted estimates of
highest-order spatial derivatives of the fast variables are crucial for the
uniform bounds. Finally, the low Mach number limit is justified by the strong
convergence of the density and the temperature, the divergence-free component
of the velocity, and the weak convergence of other variables. The methods in
this paper can be applied to singular limits of general hydrodynamic equations
of hyperbolic-parabolic type, including the full Navier-Stokes equations.
|
We determine the full automorphism group of two recently constructed families
$\tilde{\mathcal{S}}_q$ and $\tilde{\mathcal{R}}_q$ of maximal curves over
finite fields. These curves are covers of the Suzuki and Ree curves, and are
analogous to the Giulietti-Korchm\'aros cover of the Hermitian curve. We also
show that $\tilde{\mathcal{S}}_q$ is not Galois covered by the Hermitian curve
maximal over $\mathbb{F}_{q^4}$, and $\tilde{\mathcal{R}}_q$ is not Galois
covered by the Hermitian curve maximal over $\mathbb{F}_{q^6}$. Finally, we
compute the genera of many Galois subcovers of $\tilde{\mathcal{S}}_q$ and
$\tilde{\mathcal{R}}_q$; this provides new genera for maximal curves.
|
A surface in Teichm\"uller space where the systole function attains its
maximum is called a maximal surface. For genus two there exists a unique
maximal surface, which is called the Bolza surface. In this article, we study
the complexity of the set of systolic geodesics on the Bolza surface. We show
that any non-systolic geodesic intersects the systolic geodesics in $2n$
points, where $n\geq 5$. Furthermore, we show that there are $12$ second
systolic geodesics on the Bolza surface and they form a triangulation of the
surface.
|
In this paper we study an inverse boundary value problem for the biharmonic
operator with first order perturbation. Our geometric setting is that of a
bounded simply connected domain in the Euclidean space of dimension three or
higher. Assuming that the inaccessible portion of the boundary is flat, and we
have knowledge of the Dirichlet-to-Neumann map on the complement, we prove
logarithmic type stability estimates for both the first and the zeroth order
perturbation of the biharmonic operator.
|
We prove that the curvature flow of an embedded planar network of three
curves connected through a triple junction, with fixed endpoints on the
boundary of a given strictly convex domain, remains smooth as long as the
lengths of the three curves stay bounded away from zero. If this is the case
for all times, then
the evolution exists for all times and the network converges to the Steiner
minimal connection between the three endpoints.
|
The Minimum Description Length principle for online sequence
estimation/prediction in a proper learning setup is studied. If the underlying
model class is discrete, then the total expected square loss is a particularly
interesting performance measure: (a) this quantity is finitely bounded,
implying convergence with probability one, and (b) it additionally specifies
the convergence speed. For MDL, in general one can only have loss bounds which
are finite but exponentially larger than those for Bayes mixtures. We show that
this is even the case if the model class contains only Bernoulli distributions.
We derive a new upper bound on the prediction error for countable Bernoulli
classes. This implies a small bound (comparable to the one for Bayes mixtures)
for certain important model classes. We discuss the application to Machine
Learning tasks such as classification and hypothesis testing, and
generalization to countable classes of i.i.d. models.
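A minimal two-part MDL predictor over a (truncated) countable Bernoulli class
looks as follows; the class and the $-\log w_k$ code lengths are ours, chosen
only to make the selection rule concrete:

```python
import numpy as np

thetas = np.arange(1, 200) / 200.0          # truncated countable Bernoulli class
code_len = 2 * np.log(np.arange(1, 200))    # model code lengths -log w_k

def mdl_predict(bits):
    """MDL estimate of P(next bit = 1): pick the parameter minimizing
    code length plus negative log-likelihood, then predict with it."""
    n1 = sum(bits)
    n0 = len(bits) - n1
    nll = -(n1 * np.log(thetas) + n0 * np.log(1 - thetas))
    k = np.argmin(code_len + nll)           # two-part MDL model selection
    return thetas[k]

print(mdl_predict([1, 0, 1, 1, 1, 0, 1, 1]))
```

The square loss of such predictions, accumulated over time, is the performance
measure bounded in the paper.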
|
We give the global homotopy classification of nematic textures for a general
domain with weak anchoring boundary conditions and arbitrary defect set in
terms of twisted cohomology, and give an explicit computation for the case of
knotted and linked defects in $\mathbb{R}^3$, showing that the distinct
homotopy classes have a 1-1 correspondence with the first homology group of the
branched double cover, branched over the disclination loops. We show further
that the subset of those classes corresponding to elements of order 2 in this
group have representatives that are planar and characterise the obstruction for
other classes in terms of merons. The planar textures are a feature of the
global defect topology that is not reflected in any local characterisation.
Finally, we describe how the global classification relates to recent
experiments on nematic droplets and how elements of order 4 relate to the
presence of $\tau$ lines in cholesterics.
|
NiO layers were grown on MgO(100), MgO(110), and MgO(111) substrates by
plasma-assisted molecular beam epitaxy under Ni-flux limited growth conditions.
Single crystalline growth with a cube-on-cube epitaxial relationship was
confirmed by X-ray diffraction measurements for all used growth conditions and
substrates except MgO(111). A detailed growth series on MgO(100) was prepared
using substrate temperatures ranging from 20 {\deg}C to 900 {\deg}C to
investigate the influence on the layer characteristics. Energy-dispersive X-ray
spectroscopy indicated close-to-stoichiometric layers with an oxygen content of
~47 at. % and ~50 at. % grown under low and high O-flux, respectively. All NiO
layers had a root-mean-square surface roughness below 1 nm, measured by atomic
force microscopy, except for rougher layers grown at 900 {\deg}C or using
molecular oxygen. Growth at 900 {\deg}C led to a significant diffusion of Mg
from the substrate into the film. The relative intensity of the quasi-forbidden
one-phonon Raman peak is introduced as a gauge of the crystal quality,
indicating the highest layer quality for growth at low oxygen flux and high
growth temperature, likely due to the resulting high adatom diffusion length
during growth. The optical and electrical properties were investigated by
spectroscopic ellipsometry and resistance measurements, respectively. All NiO
layers were transparent with an optical bandgap around 3.6 eV and
semi-insulating at room temperature. However, changes upon exposure to reducing
or oxidizing gases of the resistance of a representative layer at elevated
temperature were able to confirm p-type conductivity, highlighting their
suitability as a model system for research on oxide-based gas sensing.
|
Using estimates on Hooley's $\Delta$-function and a short interval version of
the celebrated Dirichlet hyperbola principle, we derive an asymptotic formula
for a class of arithmetic functions over short segments. Numerous examples are
also given.
|
The regularized empirical risk minimization problem with a linear predictor
appears frequently in machine learning. In this paper, we propose a new
stochastic primal-dual method to solve this class of problems. Unlike existing
methods, our proposed method requires only O(1) operations in each iteration.
We also develop a variance-reduction variant of the algorithm that converges
linearly. Numerical experiments suggest that our methods are faster than
existing ones such as proximal SGD, SVRG and SAGA on high-dimensional problems.
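As a point of reference, the sketch below implements a stochastic dual
coordinate update (SDCA-style) for ridge regression, one sampled example per
iteration; it is a simplified stand-in, since the proposed method further
reduces the per-iteration cost to O(1):

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=50, seed=0):
    """Stochastic dual coordinate ascent for
    min_w (1/2n) sum_i (x_i @ w - y_i)^2 + (lam/2)||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)     # w = X.T @ alpha / (lam * n)
    sqnorm = (X ** 2).sum(axis=1)
    for _ in range(epochs * n):
        i = rng.integers(n)
        delta = (y[i] - X[i] @ w - alpha[i]) / (1 + sqnorm[i] / (lam * n))
        alpha[i] += delta                   # closed-form coordinate step
        w += delta * X[i] / (lam * n)       # keep primal iterate in sync
    return w
```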
|
Inverted indexes are vital in providing fast keyword-based search. For every
term in the document collection, a list of identifiers of documents in which
the term appears is stored, along with auxiliary information such as term
frequency, and position offsets. While very effective, inverted indexes have
large memory requirements for web-sized collections. Recently, the concept of
learned index structures was introduced, where machine learned models replace
common index structures such as B-tree-indexes, hash-indexes, and
bloom-filters. These learned index structures require less memory, and can be
computationally much faster than their traditional counterparts. In this paper,
we consider whether such models may be applied to conjunctive Boolean querying.
First, we investigate how a learned model can replace document postings of an
inverted index, and then evaluate the compromises such an approach might have.
Second, we evaluate the potential gains that can be achieved in terms of memory
requirements. Our work shows that learned models have great potential in
inverted indexing, and this direction seems to be a promising area for future
research.
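A toy version of the first idea, with a single linear model standing in for
the learned component, is sketched below: the model predicts a document id's
position in the sorted posting list, and membership is resolved within the
model's maximum error bound (illustrative, not the paper's exact model).

```python
import bisect
import numpy as np

class LearnedPostings:
    """Illustrative learned posting list over unique, sorted doc ids:
    a linear fit predicts positions; lookups search only within the
    fit's worst-case error window."""
    def __init__(self, doc_ids):
        self.ids = np.sort(np.asarray(doc_ids))
        pos = np.arange(len(self.ids))
        self.slope, self.intercept = np.polyfit(self.ids, pos, 1)
        pred = self.slope * self.ids + self.intercept
        self.err = int(np.ceil(np.abs(pred - pos).max()))

    def __contains__(self, doc_id):
        p = int(self.slope * doc_id + self.intercept)
        lo = max(0, p - self.err)
        hi = min(len(self.ids), p + self.err + 1)
        j = bisect.bisect_left(self.ids, doc_id, lo, hi)
        return j < hi and self.ids[j] == doc_id

plist = LearnedPostings([3, 17, 25, 90, 121, 333, 900])
print(25 in plist, 26 in plist)   # True False
```

A conjunctive Boolean query then intersects lists by probing each id of the
shortest list against the others.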
|
Dimension reduction techniques usually lose information in the sense that
reconstructed data are not identical to the original data. However, we argue
that it is possible to have reconstructed data identically distributed as the
original data, irrespective of the retained dimension or the specific mapping.
This can be achieved by learning a distributional model that matches the
conditional distribution of data given its low-dimensional latent variables.
Motivated by this, we propose Distributional Principal Autoencoder (DPA) that
consists of an encoder that maps high-dimensional data to low-dimensional
latent variables and a decoder that maps the latent variables back to the data
space. For reducing the dimension, the DPA encoder aims to minimise the
unexplained variability of the data with an adaptive choice of the latent
dimension. For reconstructing data, the DPA decoder aims to match the
conditional distribution of all data that are mapped to a certain latent value,
thus ensuring that the reconstructed data retains the original data
distribution. Our numerical results on climate data, single-cell data, and
image benchmarks demonstrate the practical feasibility and success of the
approach in reconstructing the original distribution of the data. DPA
embeddings are shown to preserve meaningful structures of data such as the
seasonal cycle for precipitations and cell types for gene expression.
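A minimal sketch of the decoder-side training step, using a sample-based
energy score as our assumed stand-in for the distribution-matching objective
(the exact loss is specified in the paper); all dimensions and architectures
are placeholders:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
dec = nn.Sequential(nn.Linear(2 + 4, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), 1e-3)

def energy_score_loss(x, z):
    """E||X - g(z,eps)|| - 0.5 E||g(z,eps) - g(z,eps')||: minimized when
    the stochastic decoder g matches the conditional law of X given z."""
    e1 = torch.randn(x.shape[0], 4)
    e2 = torch.randn(x.shape[0], 4)
    g1 = dec(torch.cat([z, e1], dim=1))
    g2 = dec(torch.cat([z, e2], dim=1))
    return ((x - g1).norm(dim=1) - 0.5 * (g1 - g2).norm(dim=1)).mean()

x = torch.randn(256, 10)                 # placeholder data batch
loss = energy_score_loss(x, enc(x))      # encoder output = latent z
opt.zero_grad(); loss.backward(); opt.step()
```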
|
We show that the pure mapping class group is uniformly perfect for a certain
class of infinite type surfaces with noncompact boundary components. We then
combine this result with recent work in the remaining cases to give a complete
classification of the perfect and uniformly perfect pure mapping class groups
for infinite type surfaces. We also develop a method to cut a general surface
into simpler surfaces and extend some mapping class group results to the
general case.
|
Periodicity in population dynamics is a fundamental issue. In addition to
current species-specific analyses, allometry facilitates understanding of limit
cycles amongst different species. So far, body-size regressions have been
derived for the oscillation period of the population densities of warm-blooded
species, in particular herbivores. Here, we extend the allometric analysis to
other clades, allowing for a comparison between the obtained slopes and
intercepts. The oscillation periods were derived from databases and original
studies to cover a broad range of conditions and species. Then, values were
related to specific body size by regression analysis. For different groups of
herbivorous species, the oscillation period increased as a function of
individual mass as a power law with exponents of 0.11-0.27. The intercepts of
the resulting linear regressions indicated that cycle times for equally-sized
species increased from homeotherms up to invertebrates. Overall, cycle times
for predators did not scale with body size. Implications of these differences
were addressed in the light of intra- and interspecific delays.
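The regression step is ordinary least squares on logarithmic axes; a sketch
with synthetic data (not the study's databases) recovers the allometric
exponent as the fitted slope:

```python
import numpy as np

rng = np.random.default_rng(1)
mass = 10 ** rng.uniform(-3, 3, 100)                         # body mass (a.u.)
period = 2.0 * mass ** 0.2 * 10 ** rng.normal(0, 0.05, 100)  # true exponent 0.2
slope, intercept = np.polyfit(np.log10(mass), np.log10(period), 1)
print(f"exponent ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

Differences in the intercept at a comparable exponent are what separate
homeotherms from invertebrates in the data.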
|
We survey a number of recent generalizations and sharpenings of Nehari's
extension of Schwarz' lemma for holomorphic self-maps of the unit disk. In
particular, we discuss the case of infinitely many critical points and its
relation to the zero sets and invariant subspaces for Bergman spaces, as well
as the case of equality at the boundary.
|
Based on the complementarity relation between entanglement of a composite
system and the purity of a subsystem, we propose a simple method to measure the
amount of entanglement. The method can be applied to a bipartite system in a
pure state of any arbitrary dimension. It requires only single qudit rotations
and straightforward probability measurements performed on one of the
subsystems, and can thus be easily implemented experimentally using linear
optical devices.
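The complementarity relation at work can be checked numerically: for a
bipartite pure state, the entanglement is fixed by the purity of either
reduced state. A minimal sketch (the proposal measures this purity optically;
here it is simply computed):

```python
import numpy as np

def linear_entropy(psi, dA, dB):
    """1 - Tr(rho_A^2) for a bipartite pure state psi of dimensions dA x dB."""
    psi = psi.reshape(dA, dB)
    rhoA = psi @ psi.conj().T            # reduced state of subsystem A
    return 1 - np.trace(rhoA @ rhoA).real

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled two qubits
print(linear_entropy(bell, 2, 2))            # 0.5, the qubit maximum
```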
|
We prove that a closed 3-orbifold that fibers over a hyperbolic polygonal
2-orbifold admits a family of hyperbolic cone structures that are viewed as
regeneration of the polygon, provided that the perimeter is minimal.
|
A recent paradigm shift in bioinformatics from a single reference genome to a
pangenome brought with it several graph structures. These graph structures must
implement operations, such as efficient construction from multiple genomes and
read mapping. Read mapping is a well-studied problem in sequential data, and,
together with data structures such as suffix array and Burrows-Wheeler
transform, allows for efficient computation. Attempts to achieve comparatively
high performance on graphs bring many complications since the common data
structures on strings are not easily obtainable for graphs. In this work, we
introduce prefix-free graphs, a novel pangenomic data structure; we show how to
construct them and how to use them to obtain well-known data structures from
stringology in sublinear space, allowing for many efficient operations on
pangenomes.
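For orientation, the two string structures named above can be built in a few
lines (naive construction, for illustration only; the paper's point is
obtaining their analogues for graphs in sublinear space):

```python
def suffix_array(s):
    """Naive O(n^2 log n) suffix array: sort suffix start positions."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def bwt(s):
    """Burrows-Wheeler transform of s via the suffix array of s + '$'."""
    s += "$"
    sa = suffix_array(s)
    return "".join(s[i - 1] for i in sa)

print(bwt("banana"))   # annb$aa
```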
|
We explain the physics of compressional heating of the deep interior of an
accreting white dwarf (WD) at accretion rates low enough so that the
accumulated hydrogen burns unstably and initiates a classical nova (CN). In
this limit, the WD core temperature (T_c) reaches an equilibrium value (T_c,eq)
after accreting an amount of mass much less than the WD's mass. Once this
equilibrium is reached, the compressional heating from within the envelope
exits the surface. This equilibrium yields useful relations between the WD
surface temperature, accretion rate and mass that can be employed to measure
accretion rates from observed WD effective temperatures, thus testing binary
evolution models for cataclysmic variables.
|
Seven-year long seeing-free observations of solar magnetic fields with the
Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory
(SDO) were used to study the sources of the solar mean magnetic field, SMMF,
defined as the net line-of-sight magnetic flux divided by the solar disk
area. To evaluate the contribution of different regions to the SMMF, we
separated all the pixels of each SDO/HMI magnetogram into three subsets: weak
(B_W), intermediate (B_I), and strong (B_S) fields. The B_W component
represents areas with magnetic flux densities below the chosen threshold; the
B_I component is mainly represented by network fields, remains of decayed
active regions (ARs), and ephemeral regions. The B_S component consists of
magnetic elements in ARs. To derive the contribution of a subset to the total
SMMF, the linear regression coefficients between the corresponding component
and the SMMF were calculated. We found that: i) when the threshold level of 30
Mx cm^-2 is applied, the B_I and B_S components together contribute from 65% to
95% of the SMMF, while the fraction of the occupied area varies in a range of
2-6% of the disk area; ii) as the threshold magnitude is lowered to 6 Mx cm^-2,
the contribution from B_I+B_S grows to 98%, and the fraction of the occupied
area reaches the value of about 40% of the solar disk. In summary, we found
that regardless of the threshold level, only a small part of the solar disk
area contributes to the SMMF. This means that the photospheric magnetic
structure is an intermittent, inherently porous medium, resembling a
percolation cluster. These findings suggest that the long-standing concept that
continuous vast unipolar areas on the solar surface are the source of the SMMF
may need to be reconsidered.
|
Applying elastic deformation can tune a material's physical properties locally
and reversibly. Spatially modulated lattice deformation can create a bandgap
gradient, favouring photo-generated charge separation and collection in
optoelectronic devices. These advantages are hindered by the maximum elastic
strain that a material can withstand before breaking. Nanomaterials derived by
exfoliating transition metal dichalcogenides (TMDs) are an ideal playground for
elastic deformation, as they can sustain large elastic strains, up to a few
percent. However, exfoliable TMDs with highly strain-tunable properties have
proven challenging for researchers to identify. We investigated 1T-ZrS2 and
1T-ZrSe2, exfoliable semiconductors with large bandgaps. Under compressive
deformation, both TMDs dramatically change their physical properties. 1T-ZrSe2
undergoes a reversible transformation into an exotic three-dimensional lattice,
with a semiconductor-to-metal transition. In ZrS2, the irreversible
transformation between two different layered structures is accompanied by a
sudden 14 % bandgap reduction. These results establish that Zr-based TMDs are
an optimal strain-tunable platform for spatially textured bandgaps, with a
strong potential for novel optoelectronic devices and light harvesting.
|
This paper studies the recovery problem for discrete-time signals with a
finite number of missing values. We establish recoverability of these missing
values for signals whose Z-transform vanishes at a certain rate at a single
point.
The transfer functions for the corresponding recovering kernels are presented
explicitly.
Some robustness of the recovery with respect to data truncation or noise
contamination is established.
|
We introduce a simulator of charge transport in fully-depleted, thick CCDs
that includes Coulomb repulsion between carriers. The calculation of this
long-range interaction is computationally highly intensive, and only a few
thousand carriers can be simulated in reasonable times using regular CPUs.
G-CoReCCD takes advantage of the high number of multiprocessors available in a
graphical processing unit (GPU) to parallelize the operations and thus achieve
a massive speedup. We can simulate the path inside the CCD bulk for up to
hundreds of thousands of carriers in only a few hours using modern GPUs.
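The computational hot spot is the all-pairs force evaluation; a vectorized CPU
analogue of one drift step (illustrative, not the simulator's actual kernel)
shows the O(N^2) structure that the GPU threads parallelize:

```python
import numpy as np

def coulomb_step(pos, dt=1e-3, k=1.0, eps=1e-6):
    """One explicit step of mutual Coulomb repulsion for N carriers
    at positions pos (N, 3), with softened O(N^2) pairwise forces."""
    diff = pos[:, None, :] - pos[None, :, :]   # (N, N, 3) displacements
    r2 = (diff ** 2).sum(-1) + eps             # softened squared distances
    np.fill_diagonal(r2, np.inf)               # exclude self-interaction
    forces = (k * diff / r2[..., None] ** 1.5).sum(axis=1)
    return pos + dt * forces                   # overdamped drift update
```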
|
In this work we characterize all the static and spherically symmetric vacuum
solutions in $f(R)$ gravity when the principal null directions of the Weyl
tensor are non-expanding. In contrast to General Relativity, we show that the
Nariai spacetime is not the only solution of this type when general $f(R)$
theories are considered. In particular, we find four different solutions for
the non-constant Ricci scalar case, all of them corresponding to the same
theory, given by $f(R) = r_0^{-1}\left\lvert R-3/r_0^2\right\rvert^{1/2}$,
where $r_0$ is a nonzero constant. Finally, we briefly present some geometric
properties of these solutions.
|
We consider problems of finding a maximum size/weight $t$-matching without
forbidden subgraphs in an undirected graph $G$ with the maximum degree bounded
by $t+1$, where $t$ is an integer greater than $2$. Depending on the variant,
forbidden subgraphs denote certain subsets of $t$-regular complete partite
subgraphs of $G$. A graph is complete partite if there exists a partition of
its vertex set such that every pair of vertices from different sets is
connected by an edge and vertices from the same set form an independent set. A
clique $K_t$ and a bipartite clique $K_{t,t}$ are examples of complete partite
graphs. These problems are natural generalizations of the triangle-free and
square-free $2$-matching problems in subcubic graphs. In the weighted setting
we assume that the weights of edges of $G$ are vertex-induced on every
forbidden subgraph. We present simple and fast combinatorial algorithms for
these problems. The presented algorithms are the first ones for the weighted
versions, and for the unweighted ones, are faster than those known previously.
Our approach relies on the use of gadgets with so-called half-edges. A
half-edge of edge $e$ is, informally speaking, a half of $e$ containing exactly
one of its endpoints.
|
We study the probability distribution $\mathcal{P}(H,t,L)$ of the surface
height $h(x=0,t)=H$ in the Kardar-Parisi-Zhang (KPZ) equation in $1+1$
dimension when starting from a parabolic interface, $h(x,t=0)=x^2/L$. The
limits of $L\to\infty$ and $L\to 0$ have been recently solved exactly for any
$t>0$. Here we address the early-time behavior of $\mathcal{P}(H,t,L)$ for
general $L$. We employ the weak-noise theory, a variant of the WKB
approximation, which yields the optimal history of the interface, conditioned on reaching
the given height $H$ at the origin at time $t$. We find that at small $H$
$\mathcal{P}(H,t,L)$ is Gaussian, but its tails are non-Gaussian and highly
asymmetric. In the leading order and in a proper moving frame, the tails behave
as $-\ln \mathcal{P}= f_{+}|H|^{5/2}/t^{1/2}$ and $f_{-}|H|^{3/2}/t^{1/2}$. The
factor $f_{+}(L,t)$ monotonically increases as a function of $L$, interpolating
between time-independent values at $L=0$ and $L=\infty$ that were previously
known. The factor $f_{-}$ is independent of $L$ and $t$, signalling
universality of this tail for a whole class of deterministic initial
conditions.
|
A universal quantum network which can implement general quantum computation
is proposed. In this sense, it can be called a quantum central processing
unit (QCPU). For a given quantum computation, the QCPU realization is just its
quantum network. The QCPU is standard and easy to assemble because it has only
two kinds of basic elements and two auxiliary elements. The QCPU and its
realizations are scalable, that is, they can be connected together to
construct the whole quantum network implementing a general quantum algorithm
or quantum simulation procedure.
|
We investigate the superconducting ternary lithium borohydride phase diagram
at pressures of 0 and 200$\,$GPa using methods for evolutionary crystal
structure prediction and linear-response calculations for the electron-phonon
coupling. Our calculations show that the ground state phase at ambient
pressure, LiBH$_4$, stays in the $Pnma$ space group and remains a wide band-gap
insulator at all pressures investigated. Other phases along the 1:1:$x$ Li:B:H
line are also insulating. However, a full search of the ternary phase diagram
at 200$\,$GPa revealed a metallic Li$_2$BH$_6$ phase, which is
thermodynamically stable down to 100$\,$GPa. This {\em superhydride} phase,
crystallizing in a $Fm\bar{3}m$ space group, is characterized by six-fold
hydrogen-coordinated boron atoms occupying the $fcc$ sites of the unit cell.
Due to strong hydrogen-boron bonding this phase displays a critical temperature
of $\sim$ 100$\,$K between 100 and 200$\,$GPa. Our investigations confirm that
ternary compounds used in hydrogen-storage applications are a suitable choice
for observing high-$T_\text{c}$ conventional superconductivity in diamond anvil
cell experiments, and suggest a viable route to optimize the critical
temperature of high-pressure hydrides.
|
One of the models explaining the high luminosity of pulsing ultra-luminous
X-ray sources (pULXs) was suggested by Mushtukov et al. (2015). They showed
that the accretion columns on the surfaces of highly magnetized neutron stars
can be very luminous due to opacity reduction in the high magnetic field.
However, a strong magnetic field also amplifies electron-positron pair
creation. Therefore, the increase of the electron and positron number
densities compensates for the cross-section reduction, and the electron
scattering opacity does not decrease as the magnetic field grows. As a result,
the maximum possible luminosity of the accretion column does not increase with
the magnetic field. It ranges between $10^{40}$ and $10^{41}$ erg s$^{-1}$,
depending only slightly on the magnetic field strength.
|
We study the electronic structure of the doped paramagnetic insulator by
finite temperature Quantum Monte-Carlo simulations for the 2D Hubbard model.
Throughout we use the moderately high temperature T=0.33t, where the spin
correlation length has dropped to < 1.5 lattice spacings, and study the
evolution of the band structure with hole doping. The effect of doping can be
best described as a rigid shift of the chemical potential into the lower
Hubbard band, accompanied by some transfer of spectral weight. For hole dopings
<20% the Luttinger theorem is violated, and the Fermi surface volume, inferred
from the Fermi level crossings of the `quasiparticle band', shows a similar
topology and doping dependence as predicted by the Hubbard I and related
approximations.
|
We analyze the role of the symmetry energy slope parameter $L$ on the {\it
r}-mode instability of neutron stars. Our study is performed using both
microscopic and phenomenological approaches of the nuclear equation of state.
The microscopic ones include the Brueckner--Hartree--Fock approximation, the
well known variational equation of state of Akmal, Pandharipande and Ravenhall,
and a parametrization of recent Auxiliary Field Diffusion Monte Carlo
calculations. For the phenomenological approaches, we use several Skyrme forces
and relativistic mean field models. Our results show that the {\it r}-mode
instability region is smaller for those models which give larger values of $L$.
The reason is that both bulk ($\xi$) and shear ($\eta$) viscosities increase
with $L$ and, therefore, the damping of the mode is more efficient for the
models with larger $L$. We show also that the dependence of both viscosities on
$L$ can be described at each density by simple power-laws of the type
$\xi=A_{\xi}L^{B_\xi}$ and $\eta=A_{\eta}L^{B_\eta}$. Using the measured spin
frequency and the estimated core temperature of the pulsar in the low-mass
X-ray binary 4U 1608-52, we conclude that observational data seem to favor
values of $L$ larger than $\sim 50$ MeV if this object is assumed to be outside
the instability region, its radius is in the range $11.5-12$($11.5-13$) km, and
its mass $1.4M_\odot$($2M_\odot$). Outside this range it is not possible to
draw any conclusion on $L$ from this pulsar.
|
In this paper we explore the parameter efficiency of BERT arXiv:1810.04805 on
version 2.0 of the Stanford Question Answering dataset (SQuAD2.0). We evaluate
the parameter efficiency of BERT while freezing a varying number of final
transformer layers as well as including the adapter layers proposed in
arXiv:1902.00751. Additionally, we experiment with the use of context-aware
convolutional (CACNN) filters, as described in arXiv:1709.08294v3, as a final
augmentation layer for the SQuAD2.0 tasks.
This exploration is motivated in part by arXiv:1907.10597, which made a
compelling case for broadening the evaluation criteria of artificial
intelligence models to include various measures of resource efficiency. While
we do not evaluate these models based on their floating point operation
efficiency as proposed in arXiv:1907.10597, we examine efficiency with respect
to training time, inference time, and total number of model parameters. Our
results largely corroborate those of arXiv:1902.00751 for adapter modules,
while also demonstrating that gains in F1 score from adding context-aware
convolutional filters are not practical due to the increase in training and
inference time.
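The layer-freezing experiment is straightforward to reproduce with the Hugging
Face transformers API (assumed here); frozen parameters are excluded from
gradient updates and hence from the trainable-parameter count:

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
k = 4                                     # number of final layers to freeze
for layer in model.encoder.layer[-k:]:
    for p in layer.parameters():
        p.requires_grad = False           # exclude from gradient updates

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```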
|
The physics of light-matter interactions is strongly constrained by both the
small value of the fine-structure constant and the small size of the atom.
Overcoming these limitations is a long-standing challenge. Recent theoretical
and experimental breakthroughs have shown that two dimensional systems, such as
graphene, can support strongly confined light in the form of plasmons. These 2D
systems have a unique ability to squeeze the wavelength of light by over two
orders of magnitude. Such high confinement requires a revisitation of the main
assumptions of light-matter interactions. In this letter, we provide a general
theory of light-matter interactions in 2D systems which support plasmons. This
theory reveals that conventionally forbidden light-matter interactions, such
as: high-order multipolar transitions, two-plasmon spontaneous emission, and
spin-flip transitions can occur on very short time-scales - comparable to those
of conventionally fast transitions. Our findings enable new platforms for
spectroscopy, sensing, broadband light generation, and a potential test-ground
for non-perturbative quantum electrodynamics.
|
Optimization with constraints is a typical problem in quantum physics and
quantum information science that becomes especially challenging for
high-dimensional systems and complex architectures like tensor networks. Here
we use ideas of Riemannian geometry to perform optimization on manifolds of
unitary and isometric matrices as well as the cone of positive-definite
matrices. Combining this approach with the up-to-date computational methods of
automatic differentiation, we demonstrate the efficacy of the Riemannian
optimization in the study of the low-energy spectrum and eigenstates of
multipartite Hamiltonians, variational search of a tensor network in the form
of the multiscale entanglement-renormalization ansatz, preparation of arbitrary
states (including highly entangled ones) in the circuit implementation of
quantum computation, decomposition of quantum gates, and tomography of quantum
states. Universality of the developed approach together with the provided open
source software enable one to apply the Riemannian optimization to complex
quantum architectures well beyond the listed problems, for instance, to the
optimal control of noisy quantum systems.
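The core primitive is a projection-retraction step; a minimal NumPy sketch for
the unitary/Stiefel manifold (the released software wraps such steps with
automatic differentiation) is:

```python
import numpy as np

def riemannian_step(U, egrad, lr=0.1):
    """One Riemannian gradient step on the Stiefel/unitary manifold:
    project the Euclidean gradient onto the tangent space at U, step,
    then retract back to the manifold via a QR decomposition."""
    UhG = U.conj().T @ egrad
    rgrad = egrad - U @ (UhG + UhG.conj().T) / 2    # tangent projection
    Q, R = np.linalg.qr(U - lr * rgrad)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))                      # fix column phases

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(8, 4)))        # a point on the manifold
U = riemannian_step(U, rng.normal(size=(8, 4)))
print(np.allclose(U.conj().T @ U, np.eye(4)))       # stays isometric: True
```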
|
With the recent release of large (i.e., more than a hundred million objects),
well-calibrated photometric surveys, such as DPOSS, 2MASS, and SDSS,
spectroscopic identification of important targets is no longer a simple issue.
In order to enhance the returns from a spectroscopic survey, candidate sources
are often preferentially selected to be of interest, such as brown dwarfs or
high redshift quasars. This approach, while useful for targeted projects, risks
missing new or unusual species. We have, as a result, taken the alternative
path of spectroscopically identifying interesting sources with the sole
criterion being that they are in low density areas of the g - r and r - i
color-space defined by the DPOSS survey. In this paper, we present three
peculiar broad absorption line quasars that were discovered during this
spectroscopic survey, demonstrating the efficacy of this approach. PSS
J0052+2405 is an Iron LoBAL quasar at a redshift z = 2.4512 with very broad
absorption from many species. PSS J0141+3334 is a reddened LoBAL quasar at z =
3.005 with no obvious emission lines. PSS J1537+1227 is an Iron LoBAL at a
redshift of z = 1.212 with strong narrow MgII and FeII emission. Follow-up high
resolution spectroscopy of these three quasars promises to improve our
understanding of BAL quasars. The sensitivity of particular parameter spaces,
in this case a two-color space, to the redshift of these three sources is
dramatic, raising questions about traditional techniques of defining quasar
populations for statistical analysis.
|
We report on the source of greater than 300 MeV protons during the
SOL2014-09-01 sustained gamma-ray emission (SGRE) event based on
multi-wavelength data from a wide array of space- and ground-based instruments.
Based on the eruption geometry, we provide a concrete explanation for the
spatially and temporally extended {\gamma}-ray emission from the eruption. We
show that the associated flux rope is of low inclination (roughly oriented in
the east-west direction), which enables the associated shock to extend to the
frontside. We compare the centroid of the SGRE source with the location of the
flux rope leg to infer that the high-energy protons must be precipitating
between the flux rope leg and the shock front. The durations of the
SOL2014-09-01 SGRE event and the type II radio burst agree with the linear
relationship between these parameters obtained for other SGRE events with
duration exceeding 3 hrs. The fluence spectrum of the SEP event is very hard,
indicating the presence of high-energy (GeV) particles in this event. This is
further confirmed by the presence of an energetic coronal mass ejection (CME)
with a speed more than 2000 km/s, similar to those in ground level enhancement
(GLE) events. The type II radio burst had emission components from metric to
kilometric wavelengths as in events associated with GLE events. All these
factors indicate that the high-energy particles from the shock were in
sufficient numbers needed for the production of {\gamma}-rays via neutral pion
decay.
|
In this paper we examine the N-photon absorption properties of "N00N" states,
a subclass of path entangled number states. We consider two cases. The first
involves the N-photon absorption properties of the ideal N00N state, one that
does not include spectral information. We study how the N-photon absorption
probability of this state scales with N. We compare this to the absorption
probability of various other states. The second case is that of two-photon
absorption for an N = 2 N00N state generated from a type-II spontaneous
parametric down-conversion event. In this situation we find that the
absorption probability is
both better than analogous coherent light (due to frequency entanglement) and
highly dependent on the optical setup. We show that the poor production rates
of quantum states of light may be partially mitigated by adjusting the spectral
parameters to improve their two-photon absorption rates. This work has
application to quantum imaging, particularly quantum lithography, where the
N-photon absorbing process in the lithographic resist must be optimized for
practical applications.
|
The effects of the IR aspects of gravity on quantum mechanics are
investigated. At large distances, where space-time is curved due to gravity,
there appears nonzero minimal uncertainty $\Delta p_{0}$ in the momentum of a
quantum mechanical particle. We apply the minimal uncertainty momentum to some
quantum mechanical interferometry examples and show that the phase shift
depends on the area enclosed by the path of the test particle. We also put
some limits on the related parameters. This prediction may be tested through
future experiments. The assumption of minimal uncertainty in momentum can also
explain the anomalous excess of the mass of the Cooper pair in a rotating thin
superconductor ring.
|
We address the problem of visual storytelling, i.e., generating a story for a
given sequence of images. While each sentence of the story should describe a
corresponding image, a coherent story also needs to be consistent and relate to
both future and past images. To achieve this we develop ordered image attention
(OIA). OIA models interactions between the sentence-corresponding image and
important regions in other images of the sequence. To highlight the important
objects, a message-passing-like algorithm collects representations of those
objects in an order-aware manner. To generate the story's sentences, we then
highlight important image attention vectors with an Image-Sentence Attention
(ISA). Further, to alleviate common linguistic mistakes like repetitiveness, we
introduce an adaptive prior. The obtained results improve the METEOR score on
the VIST dataset by 1%. In addition, an extensive human study verifies
coherency improvements and shows that OIA and ISA generated stories are more
focused, shareable, and image-grounded.
|
We prove that if any error channel has a Kraus decomposition that is
simultaneously correctable and Hilbert-Schmidt (HS) complete, then the
existence of Kraus sets with these properties guarantees the correctability of
all quantum channels. As a proof of the existence of such Kraus sets, the
$n$-level depolarization channel is shown to have a random-unitary (RU)
decomposition that is both HS complete and correctable due its RU nature,
thereby proving that all quantum channels are correctable. As an application,
conditions for universal error-correction operations are presented.
|
We establish a type of positive energy theorem for asymptotically anti-de
Sitter Einstein-Maxwell initial data sets by using Witten's spinorial
techniques.
|
In the case of electromagnetic waves it is necessary to distinguish between
inward and outward on-shell integral equations. Both kinds of equation are
derived. A correct implementation of the photonic KKR method then requires the
inward equations, and the method follows directly from them. A derivation of the KKR
method from a variational principle is also outlined. Rather surprisingly, the
variational KKR method cannot be entirely written in terms of surface integrals
unless permeabilities are piecewise constant. Both kinds of photonic KKR method
use the standard structure constants of the electronic KKR method and hence
allow for a direct numerical application. As a by-product, matching rules are
obtained for derivatives of fields on different sides of the discontinuity of
permeabilities.
Key words: The Maxwell equations, photonic band gap calculations
|
A low-cost scheme to determine the frequency sweep nonlinearity using atomic
saturated absorption spectroscopy is demonstrated. The frequency modulation
rate is determined by directly measuring the interference fringe number and
frequency gap between two atomic transition peaks of rubidium atom.
Experimental results show that the frequency sweep nonlinearity is ~7.68%, with
the average frequency modulation rate of ~28.95 GHz/s, which is in good
agreement with theoretical expectation. With this method, the absolute optical
frequency and optical path difference between two laser beams are
simultaneously measured. This novel technique can be used for applications such
as optical frequency sweep nonlinearity correction and real-time frequency
monitoring.
|
We study nonparametric Bayesian statistical inference for the parameters
governing a pure jump process of the form $$Y_t = \sum_{k=1}^{N(t)} Z_k,~~~ t
\ge 0,$$ where $N(t)$ is a standard Poisson process of intensity $\lambda$, and
$Z_k$ are drawn i.i.d.~from jump measure $\mu$. A high-dimensional wavelet
series prior for the L\'evy measure $\nu = \lambda \mu$ is devised and the
posterior distribution arises from observing discrete samples $Y_\Delta,
Y_{2\Delta}, \dots, Y_{n\Delta}$ at fixed observation distance $\Delta$, giving
rise to a nonlinear inverse inference problem. We derive contraction rates in
uniform norm for the posterior distribution around the true L\'evy density that
are optimal up to logarithmic factors over H\"older classes, as sample size $n$
increases. We prove a functional Bernstein-von Mises theorem for the
distribution functions of both $\mu$ and $\nu$, as well as for the intensity
$\lambda$, establishing the fact that the posterior distribution is
approximated by an infinite-dimensional Gaussian measure whose covariance
structure is shown to attain the information lower bound for this inverse
problem. As a consequence, posterior-based inferences, such as nonparametric
credible sets, are asymptotically valid and optimal from a frequentist point of
view.
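For intuition about the observation scheme, here is a minimal simulation sketch of the discretely observed process; the intensity, jump law (standard normal) and observation distance are illustrative placeholders, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, delta, n = 1.5, 0.5, 1000   # illustrative intensity, spacing, sample size

# Each increment Y_{k*Delta} - Y_{(k-1)*Delta} is compound Poisson:
# a Poisson(lam * Delta) number of i.i.d. jumps drawn from mu
# (taken standard normal here purely for illustration).
counts = rng.poisson(lam * delta, size=n)
increments = np.array([rng.standard_normal(c).sum() for c in counts])

# The discrete observations Y_Delta, Y_{2*Delta}, ..., Y_{n*Delta}
Y = np.cumsum(increments)
```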
|
We study here schedulers for a class of rules that naturally arise in the
context of rule-based constraint programming. We systematically derive a
scheduler for them from a generic iteration algorithm of Apt [2000]. We apply
this study to so-called membership rules of Apt and Monfroy [2001]. This leads
to an implementation that yields considerably better performance for these
rules than their execution as standard CHR rules.
|
We present the resummation of one-jettiness for the colour-singlet plus jet
production process $p p \to ( \gamma^*/Z \to \ell^+ \ell^-) + {\text{jet}}$ at
hadron colliders up to the fourth logarithmic order (N$^3$LL). This is the
first resummation at this order for processes involving three coloured partons
at the Born level. We match our resummation formula to the corresponding
fixed-order predictions, extending the validity of our results to regions of
the phase space where further hard emissions are present. This result paves the
way for the construction of next-to-next-to-leading order simulations for
colour-singlet plus jet production matched to parton showers in the GENEVA
framework.
|
Using first principles methods, we investigate topological phase transitions
as a function of exchange field in a Bi(111) bilayer. Evaluation of the spin
Chern number for different magnitudes of the exchange field reveals that when
the time reversal symmetry is broken by a small exchange field, the system
enters the time-reversal broken topological insulator phase, introduced by Yang
{\it et al.} in Phys. Rev. Lett. 107, 066602 (2011). After a metallic phase in
the intermediate region, the quantum anomalous Hall phase with non-zero Chern
number emerges at a sufficiently large exchange field. We analyze the phase
diagram from the viewpoint of the evolution of the electronic structure, edge
states and transport properties, and demonstrate that different topological
phases can be distinguished by the spin-polarization of the edge states as well
as spin or charge transverse conductivity.
|
We have performed high-precision astrometry of H2O maser sources in the
Galactic star-forming region Sharpless 269 (S269) with VERA. We have successfully
detected a trigonometric parallax of 189+/-8 micro-arcsec, corresponding to the
source distance of 5.28 +0.24/-0.22 kpc. This is the smallest parallax ever
measured, and the first one detected beyond 5 kpc. The source distance as well
as proper motions are used to constrain the outer rotation curve of the Galaxy,
demonstrating that the difference of rotation velocities at the Sun and at S269
(which is 13.1 kpc away from the Galaxy's center) is less than 3%. This gives
the strongest constraint on the flatness of the outer rotation curve and
provides a direct confirmation of the existence of a large amount of dark
matter in the Galaxy's outer disk.
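The quoted distance follows from the elementary parallax relation d [pc] = 1 / p [arcsec]; a quick check by inverting the 1-sigma parallax bounds reproduces the quoted values to within rounding of the parallax:

```python
p, dp = 189e-6, 8e-6             # parallax and its uncertainty in arcsec

d = 1.0 / p                      # ~5291 pc ~ 5.29 kpc
d_plus = 1.0 / (p - dp) - d      # ~ +0.23 kpc
d_minus = d - 1.0 / (p + dp)     # ~ -0.21 kpc
print(f"d = {d/1e3:.2f} +{d_plus/1e3:.2f}/-{d_minus/1e3:.2f} kpc")
```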
|
In this article we construct link invariants and 3-manifold invariants from
the quantum group associated with Lie superalgebra $\mathfrak{sl}(2|1)$. This
construction is based on nilpotent irreducible finite-dimensional representations
of quantum group $\mathcal{U}_{\xi}\mathfrak{sl}(2|1)$ where $\xi$ is a root of
unity of odd order. These constructions use the notions of modified trace and
relative $\mathit{G}$-modular category.
|
As a contribution to the hypothesis of mixing of three active neutrinos with,
at least, one sterile neutrino, we report on a simple $4\times4$ texture whose
$3\times3$ part arises from the popular bimaximal texture for three active
neutrinos $\nu_e,\nu_\mu,\nu_\tau$, where $c_{12}=1/\sqrt{2} = s_{12}$, $c_{23}
= 1/\sqrt{2} = s_{23}$ and $s_{13} = 0$. Such a $3\times 3$ bimaximal texture
is perturbed through a rotation in the 14 plane, where $\nu_4$ is the extra
neutrino mass state induced by the sterile neutrino $\nu_s$ which becomes
responsible for the LSND effect. Then, with $m^2_1 \simeq m^2_2$ we predict
that $\sin^2 2\theta_{\rm atm} = {1/2}(1+ c^2_{14}) \sim 0.95$ and $\sin^2
2\theta_{\rm LSND} = {1/2}s^4_{14} \sim 5\times10^{-3}$, and in addition
$\Delta m^2_{\rm atm} = \Delta m^2_{32}$ and $\Delta m^2_{\rm LSND} = |\Delta
m^2_{41}|$, where $c^2_{14} = \sin^2 2\theta_{\rm sol} \sim 0.9$ and $\Delta
m^2_{21} = \Delta m^2_{\rm sol} \sim 10^{-7} {\rm eV}^2$ if e.g. the LOW solar
solution is applied.
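The quoted numbers follow directly from the formulas above; a short arithmetic check with $c^2_{14} = 0.9$:

```python
c2_14 = 0.9                      # c^2_14 = sin^2 2theta_sol ~ 0.9 (from the text)
s2_14 = 1.0 - c2_14              # s^2_14 = 0.1

print(0.5 * (1.0 + c2_14))       # sin^2 2theta_atm  = 0.95
print(0.5 * s2_14 ** 2)          # sin^2 2theta_LSND = 5.0e-3
```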
|
This paper focuses on a kind of linear quadratic non-zero sum differential
game driven by backward stochastic differential equation with asymmetric
information, which is a natural continuation of Wang and Yu [IEEE TAC (2010)
55: 1742-1747, Automatica (2012) 48: 342-352]. Different from Wang and Yu [IEEE
TAC (2010) 55: 1742-1747, Automatica (2012) 48: 342-352], novel motivations for
studying this kind of game are provided. Some feedback Nash equilibrium points
are uniquely obtained by forward-backward stochastic differential equations,
their filters and the corresponding Riccati equations with Markovian setting.
|
Digital magnetic recording is based on the storage of a bit of information in
the orientation of a magnetic system with two stable ground states. Here we
address two fundamental problems that arise when this is done on a quantized
spin: quantum spin tunneling and back-action of the readout process. We show
that fundamental differences exist between integer and semi-integer spins when
it comes to both, read and record classical information in a quantized spin.
Our findings imply fundamental limits to the miniaturization of magnetic bits
and are relevant to recent experiments where spin polarized scanning tunneling
microscope reads and records a classical bit in the spin orientation of a
single magnetic atom.
|
Music-driven choreography is a challenging problem with a wide variety of
industrial applications. Recently, many methods have been proposed to
synthesize dance motions from music for a single dancer. However, generating
dance motion for a group remains an open problem. In this paper, we present
$\rm AIOZ-GDANCE$, a new large-scale dataset for music-driven group dance
generation. Unlike existing datasets that only support single-dancer motions, our new
dataset contains group dance videos, hence supporting the study of group
choreography. We propose a semi-autonomous labeling method with humans in the
loop to obtain the 3D ground truth for our dataset. The proposed dataset
consists of 16.7 hours of paired music and 3D motion from in-the-wild videos,
covering 7 dance styles and 16 music genres. We show that naively applying
a single-dancer generation technique to create group dance motion may lead to
unsatisfactory results, such as inconsistent movements and collisions between
dancers. Based on our new dataset, we propose a new method that takes an input
music sequence and a set of 3D positions of dancers to efficiently produce
multiple group-coherent choreographies. We propose new evaluation metrics for
measuring group dance quality and perform intensive experiments to demonstrate
the effectiveness of our method. Our project facilitates future research on
group dance generation and is available at:
https://aioz-ai.github.io/AIOZ-GDANCE/
|
Although ultra-luminous X-ray sources (ULXs) are important for astrophysics
due to their extreme apparent super-Eddington luminosities, their nature is
still poorly known. Theoretical and observational studies suggest that ULXs
could be a diversified group of objects composed of low-mass X-ray binaries,
high-mass X-ray binaries and marginally also systems containing
intermediate-mass black holes, which is supported by their presence in a
variety of environments. Observational data on the ULX donors could
significantly boost our understanding of these systems, but only a few were
detected. There are several candidates, mostly red supergiants (RSGs), but
surveys are typically biased toward luminous near-infrared objects.
Nevertheless, it is worth exploring if RSGs can be members of ULX binaries. In
such systems matter accreted onto the compact body would have to be provided by
the stellar wind of the companion, since a Roche-lobe overflow could be
unstable for relevant mass-ratios. Here we present a comprehensive study of the
evolution and population of wind-fed ULXs and provide a theoretical support for
the link between RSGs and ULXs. Our estimated upper limit on the contribution
of wind-fed ULXs to the overall ULX population is $\sim75$--$96\%$ for young
($<100$ Myr) star-forming environments, $\sim 49$--$87\%$ for prolonged
constant star formation (e.g., the disk of the Milky Way), and $\lesssim1\%$
for environments in which star formation ceased a long time ($>2$ Gyr) ago. We
also show that some wind-fed ULXs (up to $6\%$) may evolve into merging double
compact objects (DCOs), but typical systems are not viable progenitors of such
binaries because of their large separations. We demonstrate that the exclusion
of wind-fed ULXs from population studies of ULXs might have led to systematic
errors in their conclusions.
|
The evolution of the metallicity of damped Lyman alpha systems (DLAs) is
investigated in order to understand the nature of these systems. The
observational data on chemical abundances of DLAs are analysed with robust
statistical methods, and the abundances are corrected for dust depletion. The
results of this analysis are compared to predictions of several classes of
chemical evolution models: one-zone dwarf galaxy models, multizone disk models,
and chemodynamical models representing dwarf galaxies. We compare the
observational data on the [alpha/Fe] and [N/alpha] ratios to the predictions
from the models. In DLAs, these ratios are only partially reproduced by the
dwarf galaxy one-zone model and by the disk model. On the other hand, the
chemodynamical model for dwarf galaxies reproduces the properties of nearly all
DLAs. We derive the formation epoch of dwarf galaxies, and we find that dwarf
galaxies make a significant contribution to the total neutral gas density in
DLAs, and that this contribution is more important at high redshifts (z > 2-3).
We propose a scenario in which the DLA population is dominated by dwarf
galaxies at high redshifts and by disks at lower redshifts. We also find that
DLAs and Lyman Break Galaxies (LBGs) may constitute a sequence rather than two
sharply distinct populations. We also raise the possibility that we could be
missing a whole population of objects with high HI column densities, with
metallicities intermediate between those of DLAs and LBGs. Finally, we discuss
the possibility that relying only on the observations of DLAs could lead to an
underestimate of the metal content of the high redshift Universe.
|
Using the idea of a transformation medium, a cloak can be designed to make a
domain invisible at one target frequency. In this article, we examine the
possibility of extending the bandwidth of such a cloak. We obtain a constraint
on the bandwidth, summarized as a simple inequality that limits the bandwidth
of operation. The constraint originates from causality requirements. We
suggest a simple strategy that can get around the constraint.
|
Binary systems anchor many of the fundamental relations relied upon in
asteroseismology. Masses and radii are rarely constrained better than when
measured via orbital dynamics and eclipse depths. Pulsating binaries have much
to offer. They are clocks, moving in space, that encode orbital motion in the
Doppler-shifted pulsation frequencies. They offer twice the opportunity to
obtain an asteroseismic age, which is then applicable to both stars. They
enable comparative asteroseismology -- the study of two stars by their
pulsation properties, whose only fundamental differences are the mass and
rotation rates with which they were born. In eccentric binaries, oscillations
can be excited tidally, informing our knowledge of tidal dissipation and
resonant frequency locking. Eclipsing binaries offer benchmarks against which
the asteroseismic scaling relations can be tested. We review these themes in
light of both observational and theoretical developments recently made possible
by space-based photometry.
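As a back-of-the-envelope illustration of how orbital motion is encoded in the pulsations (the numbers below are illustrative, not from any particular system), a first-order Doppler shift f_obs ~ f_emit (1 - v_r/c) modulates the observed frequency as the star moves along its orbit:

```python
c = 299_792.458          # speed of light in km/s
f_emit = 25.0            # intrinsic pulsation frequency, cycles/day (illustrative)
v_r = 30.0               # orbital radial velocity, km/s (illustrative)

f_obs = f_emit * (1.0 - v_r / c)
print(f_obs - f_emit)    # ~ -2.5e-3 cycles/day of orbital frequency modulation
```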
|
We introduce tangent cones of subsets of cartesian powers of a real closed
field, generalising the notion of the classical tangent cones of subsets of
Euclidean space. We then study the impact of non-archimedean stratifications
(t-stratifications) on these tangent cones. Our main result is that a
t-stratification induces stratifications of the same nature on the tangent
cones of a definable set. As a consequence, we show that the archimedean
counterpart of a t-stratification induces Whitney stratifications on the
tangent cones of a semi-algebraic set. The latter statement is achieved by
working with the natural valuative structure of non-standard models of the real
field.
|
Microgels are cross-linked, colloidal polymer networks with great potential
for stimuli-responsive release in drug-delivery applications, as their size in
the nanometer range allows them to pass human cell boundaries. For applications
with specified requirements regarding size, producing tailored microgels in a
continuous flow reactor is advantageous because the microgel properties can be
controlled tightly. However, no fully-specified mechanistic models are
available for continuous microgel synthesis, as the physical properties of the
included components have only been partly studied. To address this gap and accelerate
tailor-made microgel development, we propose a data-driven optimization in a
hardware-in-the-loop approach to efficiently synthesize microgels with defined
sizes. We optimize the synthesis regarding conflicting objectives (maximum
production efficiency, minimum energy consumption, and the desired microgel
radius) by applying Bayesian optimization via the solver ``Thompson sampling
efficient multi-objective optimization'' (TS-EMO). We validate the optimization
using the deterministic global solver ``McCormick-based Algorithm for
mixed-integer Nonlinear Global Optimization'' (MAiNGO) and verify three
computed Pareto optimal solutions via experiments. The proposed framework can
be applied to other desired microgel properties and reactor setups and has the
potential to make development more efficient by minimizing the number of
experiments and the modelling effort needed.
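The TS-EMO solver itself is not reproduced here; the following is a generic Thompson-sampling multi-objective loop in its spirit, with a toy two-objective stand-in for the reactor model (all functions, bounds and dimensions are illustrative assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def synthesis(x):
    """Toy stand-in for the reactor experiment: two objectives to minimise
    (e.g. negative production efficiency and energy consumption)."""
    return np.array([np.sin(3 * x[0]) + x[1] ** 2, (x[0] - 0.5) ** 2 + x[1]])

X = rng.uniform(0, 1, size=(6, 2))                  # initial design
Y = np.array([synthesis(x) for x in X])

for it in range(20):
    # one GP surrogate per objective
    gps = [GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, Y[:, j])
           for j in range(2)]
    cand = rng.uniform(0, 1, size=(500, 2))         # random candidate pool
    # Thompson sampling: a single posterior draw per objective
    samp = np.column_stack([gp.sample_y(cand, random_state=it).ravel() for gp in gps])
    # keep a non-dominated candidate of the sampled front (minimisation)
    dom = (samp[:, None] >= samp[None, :]).all(-1) & (samp[:, None] > samp[None, :]).any(-1)
    front = np.flatnonzero(~dom.any(axis=1))
    x_next = cand[rng.choice(front)]
    X, Y = np.vstack([X, x_next]), np.vstack([Y, synthesis(x_next)])
```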
|
The problem of multiple hypothesis testing with observation control is
considered in both fixed sample size and sequential settings. In the fixed
sample size setting, for binary hypothesis testing, the optimal exponent for
the maximal error probability corresponds to the maximum Chernoff information
over the choice of controls, and a pure stationary open-loop control policy is
asymptotically optimal within the larger class of all causal control policies.
For multihypothesis testing in the fixed sample size setting, lower and upper
bounds on the optimal error exponent are derived. It is also shown through an
example with three hypotheses that the optimal causal control policy can be
strictly better than the optimal open-loop control policy. In the sequential
setting, a test based on earlier work by Chernoff for binary hypothesis
testing, is shown to be first-order asymptotically optimal for multihypothesis
testing in a strong sense, using the notion of decision making risk in place of
the overall probability of error. Another test is also designed to meet hard
risk constraints while retaining asymptotic optimality. The role of past
information and randomization in designing optimal control policies is
discussed.
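For the binary case, the Chernoff information that governs the error exponent under a fixed control can be computed directly from its definition; a small sketch for discrete observation distributions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_information(p, q):
    """C(P, Q) = max_{0<s<1} -log sum_x P(x)^s Q(x)^(1-s)
    for two discrete distributions on a common support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    f = lambda s: np.log(np.sum(p ** s * q ** (1 - s)))
    res = minimize_scalar(f, bounds=(1e-9, 1 - 1e-9), method="bounded")
    return -res.fun

# observation distributions under the two hypotheses for one control choice;
# the optimal exponent maximises this quantity over the available controls
print(chernoff_information([0.9, 0.1], [0.4, 0.6]))
```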
|
Cardiovascular diseases (CVDs) encompass a group of disorders affecting the
heart and blood vessels, including conditions such as coronary artery disease,
heart failure, stroke, and hypertension. Among cardiovascular diseases, heart
failure is one of the main causes of death and of long-term suffering in
patients worldwide. Predicting heart failure from risk factors is highly
valuable for treatment and intervention. In this work, an attention
learning-based heart failure prediction approach is proposed on EHR
(electronic health record) cardiovascular data such as ejection fraction
and serum creatinine. Moreover, different optimizers with various learning rate
approaches are applied to fine-tune the proposed approach. Serum creatinine and
ejection fraction are the two most important features to predict the patient's
heart failure. The computational result shows that the RMSProp optimizer with
0.001 learning rate has a better prediction based on serum creatinine. On the
other hand, the combination of SGD optimizer with 0.01 learning rate exhibits
optimum performance based on ejection fraction features. Overall, the proposed
attention learning-based approach performs very efficiently in predicting heart
failure compared to existing state-of-the-art methods such as the LSTM approach.
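The paper's exact architecture is not specified here; the sketch below is one minimal reading of feature-level attention over tabular EHR inputs, using the RMSProp/0.001 setting mentioned above (feature count and layer sizes are our assumptions):

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Minimal feature-attention classifier over tabular EHR inputs
    (e.g. ejection fraction, serum creatinine); an illustrative reading
    of 'attention learning-based', not the authors' exact model."""
    def __init__(self, n_features: int):
        super().__init__()
        self.score = nn.Linear(n_features, n_features)  # attention logits
        self.clf = nn.Linear(n_features, 1)             # heart-failure logit

    def forward(self, x):                               # x: (batch, n_features)
        attn = torch.softmax(self.score(x), dim=-1)     # per-feature weights
        return self.clf(attn * x).squeeze(-1)

model = FeatureAttention(n_features=12)                 # feature count assumed
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)  # setting from the text
loss_fn = nn.BCEWithLogitsLoss()
```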
|
Submodular function maximization is a central problem in combinatorial
optimization, generalizing many important problems including Max Cut in
directed/undirected graphs and in hypergraphs, certain constraint satisfaction
problems, maximum entropy sampling, and maximum facility location problems.
Unlike submodular minimization, submodular maximization is NP-hard. For the
problem of maximizing a non-monotone submodular function, Feige, Mirrokni, and
Vondr\'ak recently developed a $2\over 5$-approximation algorithm \cite{FMV07};
however, their algorithms do not handle side constraints. In this paper, we
give the first constant-factor approximation algorithm for maximizing any
non-negative submodular function subject to multiple matroid or knapsack
constraints. We emphasize that our results are for {\em non-monotone}
submodular functions. In particular, for any constant $k$, we present a
$({1\over k+2+{1\over k}+\epsilon})$-approximation for the submodular
maximization problem under $k$ matroid constraints, and a $({1\over
5}-\epsilon)$-approximation algorithm for this problem subject to $k$ knapsack
constraints ($\epsilon>0$ is any constant). We improve the approximation
guarantee of our algorithm to ${1\over k+1+{1\over k-1}+\epsilon}$ for $k\ge 2$
partition matroid constraints. This idea also gives a $({1\over
k+\epsilon})$-approximation for maximizing a {\em monotone} submodular function
subject to $k\ge 2$ partition matroids, which improves over the previously best
known guarantee of $\frac{1}{k+1}$.
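For background, the classic greedy of Nemhauser, Wolsey, and Fisher for the *monotone* case under a cardinality constraint (not the constant-factor algorithm for non-monotone functions described above) can be sketched as:

```python
def greedy_submodular(f, ground, k):
    """Classic (1 - 1/e) greedy for MONOTONE submodular f with |S| <= k;
    background only -- not the non-monotone, matroid/knapsack algorithm
    described in the abstract."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground - S}
        e, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:
            break
        S.add(e)
    return S

# toy coverage function: f(S) = number of ground elements covered
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular(f, set(sets), 2))   # {1, 3}: covers all 5 elements
```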
|
We use adversarial network architectures together with the Wasserstein
distance to generate or refine simulated detector data. The data reflect
two-dimensional projections of spatially distributed signal patterns with a
broad spectrum of applications. As an example, we use an observatory to detect
cosmic ray-induced air showers with a ground-based array of particle detectors.
First we investigate a method of generating detector patterns with variable
signal strengths while constraining the primary particle energy. We then
present a technique to refine simulated time traces of detectors to match
corresponding data distributions. With this method we demonstrate that training
a deep network with refined data-like signal traces leads to a more precise
energy reconstruction of data events compared to training with the originally
simulated traces.
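A minimal sketch of the underlying critic update (plain WGAN with weight clipping, Arjovsky et al. 2017); the networks and trace dimensionality below are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
gen    = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 64))
opt_c  = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(32, 64)     # stand-in for measured signal traces
fake = gen(torch.randn(32, 16)).detach()

# the critic maximises E[critic(real)] - E[critic(fake)], an estimate of
# the Wasserstein distance between data and simulation
loss_c = critic(fake).mean() - critic(real).mean()
opt_c.zero_grad(); loss_c.backward(); opt_c.step()
for p in critic.parameters():  # weight clipping enforces the Lipschitz bound
    p.data.clamp_(-0.01, 0.01)
```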
|
Wavelets have been shown to be effective bases for many classes of natural
signals and images. Standard wavelet bases have the entire vector space
$\mathbb R^n$ as their natural domain. It is fairly straightforward to adapt
these to rectangular subdomains, and there also exist constructions for domains
with more complex boundaries. However those methods are ineffective when we
deal with domains that are very arbitrary and convoluted. A particular example
of interest is the human cortex, which is the part of the human brain where all
the cognitive activity takes place. In this thesis, we use the lifting scheme
to design wavelets on arbitrary volumes, and in particular on volumes having
the structure of the human cortex. These wavelets have an element of randomness
in their construction, which allows us to repeat the analysis with many
different realizations of the wavelet bases and average the results, a method
that improves the power of the analysis. Next, we apply this type of wavelet
transform to the statistical analysis of fMRI data, and we show that it
enables us to achieve greater spatial localization than other, more standard
techniques.
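The lifting scheme is easiest to see in one dimension; a minimal Haar example (split, predict, update) with perfect reconstruction, which the thesis generalizes to arbitrary volumes:

```python
import numpy as np

def haar_lift(x):
    """One lifting step of the Haar wavelet: split into even/odd samples,
    predict the odd samples from the even ones, then update the even
    samples so the approximation preserves the local mean."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict
    approx = even + detail / 2     # update
    return approx, detail

def haar_unlift(approx, detail):
    even = approx - detail / 2     # undo update
    odd = detail + even            # undo predict
    x = np.empty(2 * len(even))
    x[0::2], x[1::2] = even, odd   # merge back
    return x

x = np.array([4, 6, 10, 12, 8, 6, 5, 5])
a, d = haar_lift(x)
assert np.allclose(haar_unlift(a, d), x)   # perfect reconstruction
```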
|
We study the linear conductance through a double-quantum-dot system
consisting of an interacting dot in its Kondo regime and an effectively
noninteracting dot, connected in parallel to metallic leads. Signatures in the
zero-bias conductance at temperatures $T>0$ mark a pair of quantum (T=0) phase
transitions between a Kondo-screened many-body ground state and non-Kondo
ground states. Notably, the conductance features become more prominent with
increasing $T$, which enhances the experimental prospects for accessing the
quantum-critical region through tuning of gate voltages in a single device.
|
The recent discovery of materials hosting persistent spin texture (PST) opens
an avenue for the realization of energy-saving spintronics since they support
an extraordinarily long spin lifetime. However, the stability of the PST is
sensitively affected by symmetry breaking of the crystal induced by external
perturbations such as an electric field. In this paper, through
first-principles calculations supplemented by symmetry analysis, we report the
emergence of the robust and stable PST with large spin splitting in the
two-dimensional ferroelectric bilayer WTe$_{2}$. Due to the low symmetry of the
crystal ($C_{s}$ point group), we observe a canted PST in the spin-split bands
around the Fermi level displaying a unidirectional spin configuration tilted
along the $yz$ plane in the first Brillouin zone. Such a typical PST can be
effectively reversed by out-of-plane ferroelectric switching induced by
interlayer sliding along the in-plane direction. We further demonstrate that
the reversible PST is realized by the application of an out-of-plane external
electric field. Thus, our findings uncover the possibility of an electrically
tunable PST in the 2D materials, offering a promising platform for highly
efficient spintronics devices.
|
The rarity of large landslides reduces the number of observations and hinders
the understanding of these phenomena. Runout distance was used here to
determine whether the large landslide deposit formed several thousand years ago
in northern Tahiti was caused by a single event or by multiple events. Using modelling
to quantify the dynamics of this event suggested that a single event or a small
number of events (n<10) were responsible, and that the maximum slide velocity
was high (>125 m/s) under partially submarine conditions. Such submarine
propagation favoured slower dynamics but a longer runout. The effective basal
friction under submarine conditions ranged over $0.2 < \mu < 0.3$.
|
In this review we first discuss extension of Bohr's 1913 molecular model and
show that it corresponds to the large-D limit of a dimensional scaling
(D-scaling) analysis, as developed by Herschbach and coworkers.
In a separate but synergetic approach to the two-electron problem, we
summarize recent advances in constructing analytical models for describing the
two-electron bond. The emphasis here is not on maximally attainable numerical
accuracy, but on going beyond textbook accuracy as informed by physical
insights. We demonstrate how the interplay of the cusp condition, the
asymptotic condition, electron correlation, configuration interaction, and the
exact one-electron two-center orbitals can produce energy results approaching
chemical accuracy.
Reviews of more traditional calculational approaches, such as Hartree-Fock, are
also given.
The inclusion of electron correlation via Hylleraas type functions is well
known to be important, but difficult to implement for more than two electrons.
The use of the D-scaled Bohr model offers the tantalizing possibility of
obtaining electron correlation energy in a non-traditional way.
|
In agile software development, test code can considerably contribute to the
overall source code size. Being a valuable asset both in terms of verification
and documentation, the composition of a test suite needs to be well understood
in order to identify opportunities as well as weaknesses for further evolution.
In this paper, we argue that the visualization of structural characteristics is
a viable means to support the exploration of test suites. Thanks to general
agreement on a limited set of key test design principles, such visualizations
are relatively easy to interpret. In particular, we present visualizations that
support testers in (i) locating test cases; (ii) examining the relation between
test code and production code; and (iii) studying the composition of and
dependencies within test cases. By means of two case studies, we demonstrate
how visual patterns help to identify key test suite characteristics. This
approach forms the first step in assisting a developer to build up
understanding about test suites beyond code reading.
|
We consider a (2+1)-dimensional field theory, assumed to be holographically
dual to the extremal Reissner-Nordstrom AdS(4) black hole background, and
calculate the retarded correlators of charge (vector) current and
energy-momentum (tensor) operators at finite momentum and frequency. We show
that, similar to what was observed previously for the correlators of scalar and
spinor operators, these correlators exhibit emergent scaling behavior at low
frequency. We numerically compute the electromagnetic and gravitational
quasinormal frequencies (in the shear channel) of the extremal
Reissner-Nordstrom AdS(4) black hole corresponding to the spectrum of poles in
the retarded correlators. The picture that emerges is quite simple: there is a
branch cut along the negative imaginary frequency axis, and a series of
isolated poles corresponding to damped excitations. All of these poles are
always in the lower half complex frequency plane, indicating stability. We show
that this analytic structure can be understood as the proper limit of finite
temperature results as T is taken to zero holding the chemical potential fixed.
|
At zero temperature and density, the nature of the chiral phase transition in
QED$_3$ with $\textit{N}_{f}$ massless fermion flavors is investigated. To this
end, in Landau gauge, we numerically solve the coupled Dyson-Schwinger
equations for the fermion and boson propagator within the bare and simplified
Ball-Chiu vertices separately. It is found that, in the bare vertex
approximation, the system undergoes a high-order continuous phase transition
from the Nambu-Goldstone phase into the Wigner phase when the number of fermion
flavors $\textit{N}_{f}$ reaches the critical number $\textit{N}_{f,c}$, while
the system exhibits a typical characteristic of second-order phase transition
for the simplified Ball-Chiu vertex.
|
I discuss issues of inverting feasibly computable functions, optimal
discovery algorithms, and the constant overheads in their performance.
|
Motivated by the problem of Casimir energy, we investigate the idea of using
inhomogeneity of surfaces instead of their corrugation, which leads to Casimir
interaction between two inhomogeneous semi-transparent concentric cylinders.
Using the multiple scattering method, we study the Casimir energy and torque
between the cylinders with different potentials subjected to Dirichlet boundary
conditions, both in weak and strong coupling regimes. We also extend our
formalism to the case of two inhomogeneous dielectrics in a weak coupling
regime.
|
We demonstrate that VPMS J170850.95+433223.7 is a weak line quasar (WLQ)
which is remarkable in several respects. It was already classified as a
probable quasar two decades ago, but with considerable uncertainty. The
non-significant proper motion and parallax from the Gaia early data release 3
have solidified this assumption. Based on previously unpublished spectra, we
show that VPMS J170850.95+433223.7 is a WLQ at z = 2.345 with immeasurably
faint broad emission lines in the rest-frame ultraviolet. A preliminary
estimate suggests that it hosts a supermassive black hole of ~10^9 M_sun
accreting close to the Eddington limit, perhaps at the super-Eddington level.
We identify two absorber systems with blueward velocity offsets of 0.05c and
0.1c, which could represent high-velocity outflows, perhaps related to the
high accretion state of the quasar.
|
We wholeheartedly congratulate Drs. Rohe and Zeng for their insightful paper
\cite{rohe2020vintage} on vintage factor analysis with Varimax rotation. This
note discusses the conditions that guarantee Varimax consistently recovers the
subspace rotation.
|
Unsupervised Domain Adaptation (UDA) for object detection aims to adapt a
model trained on a source domain to detect instances from a new target domain
for which annotations are not available. Different from traditional approaches,
we propose ConfMix, the first method that introduces a sample mixing strategy
based on region-level detection confidence for adaptive object detector
learning. We mix the local region of the target sample that corresponds to the
most confident pseudo detections with a source image, and apply an additional
consistency loss term to gradually adapt towards the target data distribution.
In order to robustly define a confidence score for a region, we exploit the
confidence score per pseudo detection that accounts for both the
detector-dependent confidence and the bounding box uncertainty. Moreover, we
propose a novel pseudo labelling scheme that progressively filters the pseudo
target detections using the confidence metric, varying it from loose to
strict over the course of training. We perform extensive experiments with three
datasets, achieving state-of-the-art performance in two of them and approaching
the supervised target model performance in the other. Code is available at:
https://github.com/giuliomattolin/ConfMix.
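A schematic of the region-level mixing idea (our simplified reading, not the authors' implementation; the 2x2 region grid and the confidence aggregation are assumptions):

```python
import numpy as np

def confidence_mix(source, target, boxes, scores):
    """Paste the most confident 2x2 region of the target image onto the
    source image; a simplified sketch of confidence-based sample mixing."""
    H, W = target.shape[:2]
    best, keep = -1.0, None
    for i in range(2):
        for j in range(2):
            ys = slice(i * H // 2, (i + 1) * H // 2)
            xs = slice(j * W // 2, (j + 1) * W // 2)
            # mean confidence of pseudo detections centred in this region
            cs = [s for (x0, y0, x1, y1), s in zip(boxes, scores)
                  if ys.start <= (y0 + y1) / 2 < ys.stop
                  and xs.start <= (x0 + x1) / 2 < xs.stop]
            conf = float(np.mean(cs)) if cs else 0.0
            if conf > best:
                best, keep = conf, (ys, xs)
    mixed = source.copy()
    mixed[keep] = target[keep]   # mixed image used with a consistency loss
    return mixed
```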
|
For given integers $k$ and $r$, the Folkman number $f(k;r)$ is the smallest
number of vertices in a graph $G$ which contains no clique on $k+1$ vertices,
yet for every partition of its edges into $r$ parts, some part contains a
clique of order $k$. The existence (finiteness) of Folkman numbers was
established by Folkman (1970) for $r=2$ and by Ne\v{s}et\v{r}il and R\"odl
(1976) for arbitrary $r$, but these proofs led to very weak upper bounds on
$f(k;r)$.
Recently, Conlon and Gowers and independently the authors obtained a doubly
exponential bound on $f(k;2)$. Here, we establish a further improvement by
showing an upper bound on $f(k;r)$ which is exponential in a polynomial
function of $k$ and $r$. This is comparable to the known lower bound
$2^{\Omega(rk)}$.
Our proof relies on a recent result of Saxton and Thomason (2015) (or,
alternatively, on a recent result of Balogh, Morris, and Samotij (2015)) from
which we deduce a quantitative version of Ramsey's theorem in random graphs.
|
The rate of escape of polymers from a two-dimensionally confining potential
well has been evaluated using self-avoiding as well as ideal chain
representations of varying length, up to 80 beads. Long timescale Langevin
trajectories were calculated using the path integral hyperdynamics method to
evaluate the escape rate. A minimum is found in the rate for self-avoiding
polymers of intermediate length while the escape rate decreases monotonically
with polymer length for ideal polymers. The increase in the rate for long,
self-avoiding polymers is ascribed to crowding in the potential well which
reduces the free energy escape barrier. An effective potential curve obtained
using the centroid as an independent variable was evaluated by thermodynamic
averaging and Kramers rate theory then applied to estimate the escape rate.
While the qualitative features are well reproduced by this approach, it
significantly overestimates the rate, especially for the longer polymers. The
reason for this is illustrated by constructing a two-dimensional effective
energy surface using the radius of gyration as well as the centroid as
controlled variables. This shows that the description of a transition state
dividing surface using only the centroid fails to confine the system to the
region corresponding to the free energy barrier and this problem becomes more
pronounced the longer the polymer is. A proper definition of a transition state
for polymer escape needs to take into account the shape as well as the location
of the polymer.
|
Introducing factors, that is to say, word features such as linguistic
information referring to the source tokens, is known to improve the results of
neural machine translation systems in certain settings, typically in recurrent
architectures. This study proposes enhancing the current state-of-the-art
neural machine translation architecture, the Transformer, so that it allows
external knowledge to be introduced. In particular, our proposed modification, the
Factored Transformer, uses linguistic factors that insert additional knowledge
into the machine translation system. Apart from using different kinds of
features, we study the effect of different architectural configurations.
Specifically, we analyze the performance of combining words and features at the
embedding level or at the encoder level, and we experiment with two different
combination strategies. With the best-found configuration, we show improvements
of 0.8 BLEU over the baseline Transformer in the IWSLT German-to-English task.
Moreover, we experiment with the more challenging FLoRes English-to-Nepali
benchmark, which includes both extremely low-resourced and very distant
languages, and obtain an improvement of 1.2 BLEU.
|
Wave energy technologies have the potential to play a significant role in the
supply of renewable energy on a world scale. One of the most promising designs
for wave energy converters (WECs) are fully submerged buoys. In this work, we
explore the optimisation of WEC arrays consisting of a three-tether buoy model
called CETO. Such arrays can be optimised for total energy output by adjusting
both the relative positions of buoys in farms and also the power-take-off (PTO)
parameters for each buoy. The search space for these parameters is complex and
multi-modal. Moreover, the evaluation of each parameter setting is
computationally expensive -- limiting the number of full model evaluations that
can be made. To handle this problem, we propose a new hybrid cooperative
co-evolution algorithm (HCCA). HCCA consists of a symmetric local search plus
Nelder-Mead and a cooperative co-evolution algorithm (CC) with a backtracking
strategy for optimising the positions and PTO settings of WECs, respectively.
Moreover, a new adaptive scheme is proposed for tuning the grey wolf optimiser
(AGWO) hyper-parameter. AGWO contributes notably alongside the other optimisers
applied in HCCA. To assess the effectiveness of the proposed approach, five popular
Evolutionary Algorithms (EAs), four alternating optimisation methods and two
modern hybrid ideas (LS-NM and SLS-NM-B) are carefully compared in four real
wave situations (Adelaide, Tasmania, Sydney and Perth) with two wave farm sizes
(4 and 16). According to the experimental outcomes, the hybrid cooperative
framework exhibits better performance in terms of both runtime and quality of
obtained solutions.
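The alternating, cooperative structure can be sketched as follows; the objective is a toy stand-in for the expensive wave-farm model, and the full HCCA machinery (symmetric local search, AGWO, backtracking) is not reproduced:

```python
import numpy as np
from scipy.optimize import minimize

def neg_power(positions, pto):
    """Toy stand-in for the wave-farm model: negative total power,
    so that minimisation maximises energy output."""
    return -(np.sin(positions).sum() + np.cos(pto).sum())

pos = np.zeros(8)   # e.g. (x, y) for a 4-buoy farm (illustrative)
pto = np.ones(8)    # PTO spring/damping parameters (illustrative)

for _ in range(5):  # cooperative loop: optimise each block in turn
    pos = minimize(lambda p: neg_power(p, pto), pos, method="Nelder-Mead").x
    pto = minimize(lambda q: neg_power(pos, q), pto, method="Nelder-Mead").x
```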
|
Frontier exploration and reinforcement learning have historically been used
to solve the problem of enabling many mobile robots to autonomously and
cooperatively explore complex surroundings. These methods need to keep an
internal global map for navigation, but they do not take into consideration the
high costs of communication and information sharing between robots. This study
offers CQLite, a novel distributed Q-learning technique designed to minimize
data communication overhead between robots while achieving rapid convergence
and thorough coverage in multi-robot exploration. The proposed CQLite method
uses ad hoc map merging, and selectively shares updated Q-values at recently
identified frontiers to significantly reduce communication costs. The
theoretical analysis of CQLite's convergence and efficiency, together with
extensive numerical verification on simulated indoor maps utilizing several
robots, demonstrates the method's effectiveness. With over 2x reductions in
computation and communication alongside improved mapping performance, CQLite
outperformed cutting-edge multi-robot exploration techniques like Rapidly
Exploring Random Trees and Deep Reinforcement Learning. Related codes are
open-sourced at \url{https://github.com/herolab-uga/cqlite}.
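The selective-sharing idea can be caricatured in a few lines (a toy tabular sketch; the actual CQLite operates on frontiers of an occupancy map and uses ad hoc map merging, and the max-merge rule below is our assumption):

```python
import numpy as np

class CQLiteSketch:
    """Toy sketch of selective sharing: each robot keeps its own Q-table
    and broadcasts only entries updated at newly identified frontiers,
    instead of its whole table or map."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma
        self.outbox = {}                      # only recently updated entries

    def update(self, s, a, r, s_next, is_frontier):
        target = r + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
        if is_frontier:                       # share only frontier updates
            self.outbox[(s, a)] = self.Q[s, a]

    def receive(self, payload):
        # merge a peer's shared entries; max-merge is one simple choice
        for (s, a), q in payload.items():
            self.Q[s, a] = max(self.Q[s, a], q)
```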
|